Hydra Plugin
AI-powered vulnerability detection engine for 0xGen. Hydra analyzes HTTP traffic using multiple specialized analyzers and AI consensus evaluation to identify security vulnerabilities with high confidence.
Overview
Hydra is the core AI detection plugin for 0xGen, providing:
- 5 Vulnerability Analyzers: XSS, SQLi, SSRF, Command Injection, Open Redirect
- AI Consensus Evaluation: Multi-stage validation reducing false positives
- Passive Analysis: Zero-impact detection from HTTP traffic observation
- Context-Aware Detection: Understands application behavior patterns
- Production-Grade Accuracy: Optimized for real-world pentesting workflows
Features
Vulnerability Detection
| Vulnerability Type | Detection Method | Confidence Level |
|---|---|---|
| Cross-Site Scripting (XSS) | Reflected payloads, context analysis | High (AI-validated) |
| SQL Injection (SQLi) | Error patterns, boolean-based, time-based | High (AI-validated) |
| Server-Side Request Forgery (SSRF) | Cloud metadata, internal IPs, DNS rebinding | Medium-High |
| Command Injection | Shell metacharacters, output patterns | High (AI-validated) |
| Open Redirect | URL parameter manipulation, header injection | Medium |
AI Evaluation Pipeline
HTTP Response → Analyzer Detection → AI Evaluator → Confidence Scoring → Finding Emission
- Analyzer Stage: Specialized detectors scan for vulnerability patterns
- AI Stage: Machine learning model evaluates findings in context
- Decision Stage: Consensus algorithm determines final confidence
- Emission Stage: High-confidence findings sent to 0xGen core
Capabilities
- CAP_EMIT_FINDINGS: Permission to emit vulnerability findings
- CAP_HTTP_PASSIVE: Observe HTTP traffic without modification
- CAP_FLOW_INSPECT: Access complete request/response pairs for context
- CAP_AI_ANALYSIS: Use AI evaluation services for decision-making
Architecture
Component Overview
plugins/hydra/
├── main.go # Plugin entry point and hook registration
├── engine.go # Core analysis engine and coordinator
├── analyzers.go # Vulnerability-specific detection logic
├── evaluator.go # AI consensus evaluation
├── manifest.json # Plugin metadata and capabilities
└── README.md # This file
Analysis Engine
The hydraEngine coordinates all analyzers and manages the AI evaluation pipeline:
type hydraEngine struct {
    analyzers []analyzer       // List of vulnerability detectors
    evaluator aiEvaluator      // AI consensus evaluator
    now       func() time.Time // Timestamp generator (testable)
}
Key Methods:
process(): Main entry point for HTTP event analysis
- Iterates through all analyzers
- Collects candidate findings
- Submits to AI evaluator for validation
- Emits high-confidence findings
Analyzers
Each analyzer implements the analyzer interface:
type analyzer interface {
    Analyse(ctx responseContext) *candidateFinding
}
Available Analyzers:
- xssAnalyzer: Detects reflected XSS by searching for injected payloads in responses
  - HTML context detection
  - JavaScript context detection
  - Attribute context detection
  - Event handler injection
- sqliAnalyzer: Identifies SQL injection vulnerabilities
  - Database error message patterns
  - Boolean-based blind SQLi
  - Time-based blind SQLi
  - Union-based injection
- ssrfAnalyzer: Finds SSRF vulnerabilities
  - Cloud metadata endpoints (AWS, GCP, Azure)
  - Internal IP ranges (RFC 1918)
  - Localhost variations
  - DNS rebinding indicators
- commandInjectionAnalyzer: Detects OS command injection
  - Shell metacharacter injection
  - Command output patterns
  - Error message analysis
  - Path traversal indicators
- openRedirectAnalyzer: Identifies open redirect vulnerabilities
  - URL parameter manipulation
  - HTTP 3xx redirect analysis
  - Location header injection
  - Meta refresh detection
AI Evaluator
The aiEvaluator provides context-aware validation:
type aiEvaluator interface {
    Decide(candidate *candidateFinding) (decision, bool)
}
Decision Types:
- decisionEmit: High confidence, emit finding immediately
- decisionDrop: Low confidence, discard candidate
- decisionDefer: Uncertain, collect more evidence
Evaluation Factors:
- Pattern match strength
- Response context analysis
- Historical false positive rate
- Application behavior baseline
- Request/response correlation
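One way to picture the decision step is to blend such factors into a single score and map score bands onto the three decision types. The weights and band edges below are illustration values, not Hydra's actual tuning:

```go
package main

import "fmt"

type decision int

const (
	decisionDrop decision = iota
	decisionDefer
	decisionEmit
)

// factors is an illustrative stand-in for the evaluation inputs
// listed above.
type factors struct {
	PatternStrength  float64 // 0..1, strength of the raw pattern match
	ContextMatch     float64 // 0..1, does the response context support it
	HistoricalFPRate float64 // 0..1, prior false-positive rate for this rule
}

// decide blends the factors into a weighted score, then maps the score
// onto the three decision bands. Weights and thresholds are arbitrary
// illustration values.
func decide(f factors, emitAt, dropBelow float64) decision {
	score := 0.5*f.PatternStrength + 0.3*f.ContextMatch + 0.2*(1-f.HistoricalFPRate)
	switch {
	case score >= emitAt:
		return decisionEmit
	case score < dropBelow:
		return decisionDrop
	default:
		return decisionDefer // uncertain: collect more evidence
	}
}

func main() {
	strong := factors{PatternStrength: 0.95, ContextMatch: 0.9, HistoricalFPRate: 0.02}
	fmt.Println(decide(strong, 0.75, 0.4) == decisionEmit) // prints "true"
}
```

The defer band is what distinguishes this from a plain threshold: middling scores are held back until more evidence accumulates rather than being forced into emit or drop.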
Usage
Basic Usage
Hydra is enabled by default in 0xGen:
# Start 0xGen with Hydra active
0xgend start
# Or explicitly enable
0xgend start --enable-plugin hydra
Configuration
Configure Hydra via 0xGen config file (~/.0xgen/config.yaml):
plugins:
  hydra:
    enabled: true
    config:
      # AI evaluation threshold (0.0 - 1.0)
      confidence_threshold: 0.75
      # Maximum findings per target
      max_findings_per_target: 100
      # Enable/disable specific analyzers
      analyzers:
        xss: true
        sqli: true
        ssrf: true
        command_injection: true
        open_redirect: true
      # AI model configuration
      ai:
        model: "gpt-4"
        temperature: 0.3
        max_tokens: 500
Command-Line Options
# Disable Hydra temporarily
0xgend start --disable-plugin hydra
# Adjust AI confidence threshold
0xgend start --plugin-config hydra.confidence_threshold=0.85
# Enable only specific analyzers
0xgend start --plugin-config hydra.analyzers.xss=true --plugin-config hydra.analyzers.sqli=false
Programmatic API
Use Hydra from Go code:
import "github.com/RowanDark/0xgen/plugins/hydra"
// Create Hydra engine
engine := hydra.NewEngine(hydra.Config{
    ConfidenceThreshold: 0.75,
    Analyzers:           []string{"xss", "sqli", "ssrf"},
})

// Process HTTP response
finding, err := engine.Analyze(httpResponse)
if err != nil {
    log.Fatal(err)
}
if finding != nil {
    fmt.Printf("Vulnerability detected: %s\n", finding.Type)
}
Detection Examples
Example 1: Reflected XSS
Request:
GET /search?q=<script>alert(1)</script> HTTP/1.1
Host: vulnerable.example.com
Response:
HTTP/1.1 200 OK
Content-Type: text/html
<html>
<body>
<h1>Search results for: <script>alert(1)</script></h1>
</body>
</html>
Hydra Detection:
- xssAnalyzer detects injected payload in response
- AI evaluator confirms HTML context injection
- Finding emitted:
{
  "type": "xss.reflected",
  "severity": "high",
  "confidence": 0.92,
  "message": "Reflected XSS via 'q' parameter",
  "target": "https://vulnerable.example.com/search?q=...",
  "evidence": {
    "injected_payload": "<script>alert(1)</script>",
    "reflection_context": "html_body",
    "parameter": "q"
  }
}
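The heart of this check is a verbatim search for each query parameter's value in the response body. A minimal sketch follows; the reflectedParams helper is hypothetical, not part of Hydra's API, and the real analyzer also classifies the reflection context:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// reflectedParams returns the names of query parameters whose raw values
// appear verbatim (unencoded) in the response body, the simplest
// reflected-XSS signal.
func reflectedParams(rawURL, body string) []string {
	u, err := url.Parse(rawURL)
	if err != nil {
		return nil
	}
	var hits []string
	for name, values := range u.Query() {
		for _, v := range values {
			if v != "" && strings.Contains(body, v) {
				hits = append(hits, name)
				break
			}
		}
	}
	return hits
}

func main() {
	body := "<h1>Search results for: <script>alert(1)</script></h1>"
	req := "https://vulnerable.example.com/search?q=<script>alert(1)</script>"
	fmt.Println(reflectedParams(req, body)) // prints "[q]"
}
```

A value that comes back HTML-encoded would not match here, which is exactly the behavior wanted: encoded reflection is usually safe, and it is the verbatim case that the AI evaluator then inspects for context.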
Example 2: SQL Injection
Request:
GET /user?id=1' OR '1'='1 HTTP/1.1
Host: vulnerable.example.com
Response:
HTTP/1.1 200 OK
You have an error in your SQL syntax near ''1'='1' at line 1
Hydra Detection:
- sqliAnalyzer detects SQL error message
- AI evaluator confirms database-specific error pattern
- Finding emitted with high confidence (0.95)
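Error-based detection of this kind reduces to matching known database error signatures against the response. A sketch with a deliberately tiny pattern set (Hydra's real signature list is much larger, and the matchSQLError helper is illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

// sqlErrorPatterns holds a small, illustrative subset of database
// error signatures keyed by database family.
var sqlErrorPatterns = map[string]*regexp.Regexp{
	"mysql":    regexp.MustCompile(`You have an error in your SQL syntax`),
	"postgres": regexp.MustCompile(`unterminated quoted string`),
	"mssql":    regexp.MustCompile(`Unclosed quotation mark after the character string`),
}

// matchSQLError returns the database family whose error signature
// appears in the response body, or "" if none match.
func matchSQLError(body string) string {
	for db, re := range sqlErrorPatterns {
		if re.MatchString(body) {
			return db
		}
	}
	return ""
}

func main() {
	body := "You have an error in your SQL syntax near ''1'='1' at line 1"
	fmt.Println(matchSQLError(body)) // prints "mysql"
}
```

Identifying the database family is useful beyond confidence scoring: it lets later stages tailor boolean-based and time-based probes to the right SQL dialect.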
Example 3: SSRF to Cloud Metadata
Request:
GET /proxy?url=http://169.254.169.254/latest/meta-data/ HTTP/1.1
Host: vulnerable.example.com
Response:
HTTP/1.1 200 OK
ami-id
hostname
instance-id
Hydra Detection:
- ssrfAnalyzer detects AWS metadata endpoint access
- AI evaluator confirms cloud metadata pattern
- Finding emitted as critical severity
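The destination check behind this detection can be sketched as follows. The isSSRFTarget helper is hypothetical, and hostnames that need DNS resolution (the rebinding case) are out of scope here:

```go
package main

import (
	"fmt"
	"net"
	"net/url"
)

// isSSRFTarget reports whether a user-supplied URL points at cloud
// metadata, loopback, or an RFC 1918 private range: the classic SSRF
// destinations. 169.254.169.254 falls in the link-local range.
func isSSRFTarget(raw string) bool {
	u, err := url.Parse(raw)
	if err != nil {
		return false
	}
	host := u.Hostname()
	if host == "metadata.google.internal" { // GCP metadata hostname
		return true
	}
	ip := net.ParseIP(host)
	if ip == nil {
		return false // hostname; would need DNS resolution to judge
	}
	return ip.IsLoopback() || ip.IsPrivate() || ip.IsLinkLocalUnicast()
}

func main() {
	fmt.Println(isSSRFTarget("http://169.254.169.254/latest/meta-data/")) // true
	fmt.Println(isSSRFTarget("https://example.com/"))                     // false
}
```

Because Hydra works passively, a check like this fires on URLs observed in traffic (such as the url= parameter above) combined with metadata-shaped response bodies, rather than by probing the target itself.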
Performance
Benchmark Results
From pre-alpha performance testing (see internal/atlas/BENCHMARKS.md):
| Metric | Value | Notes |
|---|---|---|
| Throughput | ~340 targets/sec | Single XSS analyzer |
| Latency | ~3ms/target | All analyzers active |
| Memory | ~132KB/target | Includes AI evaluation |
| False Positive Rate | <5% | With AI validation |
| False Negative Rate | ~8% | Complex obfuscation cases |
Optimization Tips
- Disable Unused Analyzers: Only enable vulnerability types you're testing

  analyzers:
    xss: true
    sqli: false  # Disable if not testing SQLi
    ssrf: false

- Adjust Confidence Threshold: Higher threshold = fewer false positives

  confidence_threshold: 0.85  # Default: 0.75

- Limit Findings: Prevent finding explosion on large targets

  max_findings_per_target: 50  # Default: 100
Security Considerations
Sandbox Restrictions
Hydra runs with the following sandbox restrictions:
- cgroups: CPU (50%), Memory (512MB), PIDs (256)
- chroot: Isolated filesystem (read-only root)
- Network: Restricted to localhost and allowed IPs
- seccomp-bpf: Syscall filtering (only safe syscalls allowed)
- Capabilities: Dropped all Linux capabilities except analysis APIs
AI Model Security
The AI evaluator communicates with external AI services:
- TLS Required: All AI API calls use HTTPS
- API Key Protection: Keys stored in secure keyring
- Rate Limiting: Built-in rate limits prevent abuse
- Data Sanitization: PII is stripped before sending to AI
- Audit Logging: All AI decisions logged for review
Privacy Considerations
Hydra processes potentially sensitive HTTP traffic:
- Local Processing First: Pattern matching done locally
- Minimal AI Submission: Only candidates sent to AI (not all traffic)
- PII Stripping: Sensitive data removed before AI evaluation
- Configurable AI: Can disable AI and use pattern matching only
Disable AI Mode:
plugins:
  hydra:
    config:
      ai:
        enabled: false  # Use pattern matching only
Troubleshooting
No Findings Detected
Symptom: Hydra loads but doesn't emit findings
Solutions:
- Check confidence threshold (too high filters all findings):

  0xgend start --plugin-config hydra.confidence_threshold=0.5

- Verify analyzers are enabled:

  # Check which analyzers are active
  0xgenctl config get plugins.hydra.analyzers

- Enable debug logging:

  0xgend start --log-level debug | grep hydra

- Test with known vulnerable target:

  # DVWA (Damn Vulnerable Web Application)
  docker run -p 8080:80 vulnerables/web-dvwa
  0xgend start --target http://localhost:8080
False Positives
Symptom: Hydra reports vulnerabilities that don't exist
Solutions:
- Increase confidence threshold:

  confidence_threshold: 0.85  # Stricter validation

- Review AI decisions:

  # Check AI evaluation logs
  tail -f ~/.0xgen/logs/hydra-ai.log

- Disable problematic analyzer:

  analyzers:
    open_redirect: false  # If causing false positives
High Memory Usage
Symptom: Hydra consumes excessive memory
Solutions:
- Limit findings per target:

  max_findings_per_target: 25  # Reduce from default 100

- Reduce analyzer count:

  analyzers:
    xss: true
    sqli: true
    ssrf: false
    command_injection: false
    open_redirect: false

- Check for memory leaks:

  # Monitor memory usage
  watch -n 1 "ps aux | grep hydra"
AI Evaluator Errors
Symptom: AI evaluation fails with errors
Solutions:
- Check API key:

  # Verify API key is set
  0xgenctl config get plugins.hydra.ai.api_key

- Test AI connectivity:

  curl -H "Authorization: Bearer YOUR_API_KEY" \
    https://api.openai.com/v1/models

- Disable AI temporarily:

  ai:
    enabled: false  # Fall back to pattern matching
Development
Building from Source
# Navigate to plugin directory
cd plugins/hydra
# Install dependencies
go mod download
# Build plugin binary
go build -o hydra main.go
# Run tests
go test ./...
# Run with race detector
go test -race ./...
Adding a New Analyzer
1. Implement the analyzer interface:

   type myAnalyzer struct{}

   func (a *myAnalyzer) Analyse(ctx responseContext) *candidateFinding {
       // Your detection logic
       if vulnerabilityDetected {
           return &candidateFinding{
               Type:     "my_vulnerability",
               Severity: SeverityHigh,
               Message:  "Description",
               Evidence: evidence,
           }
       }
       return nil
   }

2. Register the analyzer in engine.go:

   analyzers := []analyzer{
       &xssAnalyzer{},
       &sqliAnalyzer{},
       &myAnalyzer{}, // Add your analyzer
   }

3. Add tests (my_analyzer_test.go):

   func TestMyAnalyzer(t *testing.T) {
       analyzer := &myAnalyzer{}
       ctx := responseContext{
           Body: "vulnerable response",
       }
       finding := analyzer.Analyse(ctx)
       assert.NotNil(t, finding)
   }

4. Update the configuration schema:

   analyzers:
     my_vulnerability: true
Testing Strategies
Unit Tests (fast, isolated):
go test -run TestXSSAnalyzer ./...
Integration Tests (slower, realistic):
go test -run TestHydraEngine ./...
Benchmark Tests:
go test -bench=BenchmarkXSSAnalyzer -benchmem ./...
Live Testing (manual verification):
# Against DVWA
docker run -p 8080:80 vulnerables/web-dvwa
0xgend start --target http://localhost:8080 --enable-plugin hydra --log-level debug
Roadmap
Current Status (v2.0.0-alpha)
- ✅ 5 vulnerability analyzers
- ✅ AI consensus evaluation
- ✅ Passive HTTP analysis
- ✅ Context-aware detection
- ✅ <5% false positive rate
Planned Features (v2.1.0)
- 🔄 DOM-based XSS detection
- 🔄 XML External Entity (XXE) analyzer
- 🔄 Deserialization vulnerability detection
- 🔄 CSRF token analysis
- 🔄 Custom analyzer plugin system
Future Enhancements (v3.0.0)
- 📋 Active exploitation verification
- 📋 Automatic payload generation
- 📋 Vulnerability chaining detection
- 📋 Machine learning model training interface
- 📋 Real-time threat intelligence integration
Contributing
We welcome contributions to Hydra! Focus areas:
- New Analyzers: Add detection for additional vulnerability types
- AI Models: Improve evaluation accuracy with better models
- Performance: Optimize analyzer speed and memory usage
- Test Coverage: Add tests for edge cases
- Documentation: Improve detection examples and troubleshooting
See CONTRIBUTING.md for guidelines.
References
- Plugin SDK: docs/en/plugins/sdk-reference.md
- Atlas Core: internal/atlas/README.md
- Security Guide: PLUGIN_GUIDE.md
- Benchmarks: internal/atlas/BENCHMARKS.md
License
MIT License - see LICENSE for details.
Version History
- v2.0.0-alpha (2025-11-20): Initial release with 5 analyzers and AI evaluation
- v0.1.0 (2024-Q4): Internal pre-alpha testing