Package security

v0.9.0
Published: Feb 22, 2026 License: MIT Imports: 10 Imported by: 0

Documentation

Overview

Package security provides WAF-like protection for the HotPlex engine.

This package implements a Web Application Firewall (WAF) that inspects LLM-generated commands before they are dispatched to the host shell. It enforces strict security boundaries regardless of the model's own safety alignment.

The Detector uses regex pattern matching to identify and block potentially dangerous operations such as:

  • Command injection attempts ($(), backticks, eval)
  • Privilege escalation (sudo, su, pkexec)
  • Network penetration (reverse shells)
  • Persistence mechanisms (crontab, systemd)
  • Information gathering (reading /etc/passwd, SSH keys)
  • Container escape (privileged docker, chroot)
  • Kernel manipulation (insmod, modprobe)
  • Destructive file operations (rm -rf /)

Usage:

detector := security.NewDetector(logger)
detector.SetAdminToken("secret-token")

if event := detector.CheckInput(userInput); event != nil {
    // Block the dangerous operation. ErrDangerBlocked is defined by the
    // caller; this package exports no error variables.
    return ErrDangerBlocked
}

Index

Constants

const (
	// Maximum input length to log (prevents log flooding)
	MaxInputLogLength = 50
	// Maximum pattern match length to log
	MaxPatternLogLength = 100
	// Maximum command display length for UI
	MaxDisplayLength = 100
)

Constants for danger detector logging and display limits.
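
Because these limits are exported, callers can apply the same truncation when displaying blocked commands themselves. A minimal sketch; the truncate helper is hypothetical and not part of this package:

// truncate shortens s to at most max bytes for display.
func truncate(s string, max int) string {
    if len(s) <= max {
        return s
    }
    return s[:max] + "..."
}

fmt.Println(truncate(event.Operation, security.MaxDisplayLength))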

Variables

This section is empty.

Functions

This section is empty.

Types

type DangerBlockEvent

type DangerBlockEvent struct {
	Operation      string      `json:"operation"`             // The specific command line that triggered the block
	Reason         string      `json:"reason"`                // Description of the threat
	PatternMatched string      `json:"pattern_matched"`       // The specific signature that matched the input
	Level          DangerLevel `json:"level"`                 // Severity level
	Category       string      `json:"category"`              // Category classification
	BypassAllowed  bool        `json:"bypass_allowed"`        // Whether the user has administrative privileges to bypass this block
	Suggestions    []string    `json:"suggestions,omitempty"` // Safe alternatives to the blocked command
}

DangerBlockEvent contains detailed forensics after a dangerous operation is successfully intercepted.
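
A sketch of how a caller might act on the event returned by Detector.CheckInput; cmd and ErrDangerBlocked are placeholders supplied by the caller:

if event := detector.CheckInput(cmd); event != nil {
    fmt.Printf("blocked [%s/%s]: %s\n", event.Level, event.Category, event.Reason)
    for _, alt := range event.Suggestions {
        fmt.Println("try instead:", alt)
    }
    return ErrDangerBlocked
}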

type DangerLevel

type DangerLevel int

DangerLevel classifies the severity of a detected potentially harmful operation.

const (
	// DangerLevelCritical represents irreparable damage (e.g., recursive root deletion or disk wiping).
	DangerLevelCritical DangerLevel = iota
	// DangerLevelHigh represents significant damage potential (e.g., deleting user home or system config).
	DangerLevelHigh
	// DangerLevelModerate represents unintended side effects (e.g., resetting Git history).
	DangerLevelModerate
)

func (DangerLevel) String

func (d DangerLevel) String() string

String returns a string representation of the danger level.

type Detector

type Detector struct {
	// contains filtered or unexported fields
}

Detector acts as a Web Application Firewall (WAF) for the local system. It inspects LLM-generated commands before they are dispatched to the host shell, enforcing strict security boundaries regardless of the model's own safety alignment.

func NewDetector

func NewDetector(logger *slog.Logger) *Detector

NewDetector creates a Detector that reports through the provided logger.

func (*Detector) CheckFileAccess

func (dd *Detector) CheckFileAccess(filePath string) bool

CheckFileAccess checks if file access is within allowed paths. Returns true if the access is safe (within allowed paths), false otherwise.

func (*Detector) CheckInput

func (dd *Detector) CheckInput(input string) *DangerBlockEvent

CheckInput checks if the input contains any dangerous operations. Returns a DangerBlockEvent if a dangerous operation is detected, nil otherwise.

func (*Detector) IsPathAllowed

func (dd *Detector) IsPathAllowed(path string) bool

IsPathAllowed reports whether a path is in the allowlist. Both the input path and the allowed paths should be cleaned (for example, with filepath.Clean) first.

func (*Detector) LoadCustomPatterns

func (dd *Detector) LoadCustomPatterns(filename string) error

LoadCustomPatterns loads custom danger patterns from a file. The file contains one pattern per line, in the form "regex|description|level|category".
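
For illustration, a custom patterns file might contain lines like these. The regexes use Go's regexp syntax; whether the level column is spelled by name (as shown) or numerically is an assumption here:

dd\s+if=.+\s+of=/dev/sd|Raw write to a block device|critical|filesystem
chmod\s+-R\s+777\s+/|World-writable permissions on root|high|filesystem

Loading the file is then a single call; custom-patterns.txt is a placeholder name:

if err := detector.LoadCustomPatterns("custom-patterns.txt"); err != nil {
    logger.Error("failed to load custom patterns", "error", err)
}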

func (*Detector) RegisterRule added in v0.8.1

func (dd *Detector) RegisterRule(rule SecurityRule)

RegisterRule allows injecting custom security rules to extend the WAF.
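
A minimal sketch of a custom rule; the maxLenRule type is hypothetical and exists only to illustrate the SecurityRule contract (defined below):

// maxLenRule blocks any input longer than its limit.
type maxLenRule struct{ limit int }

func (r *maxLenRule) Evaluate(input string) *security.DangerBlockEvent {
    if len(input) <= r.limit {
        return nil // within bounds: not considered dangerous
    }
    return &security.DangerBlockEvent{
        Operation: input,
        Reason:    "input exceeds maximum allowed length",
        Level:     security.DangerLevelModerate,
        Category:  "resource",
    }
}

detector.RegisterRule(&maxLenRule{limit: 4096})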

func (*Detector) SetAdminToken

func (dd *Detector) SetAdminToken(token string)

SetAdminToken sets the token required to toggle bypass mode.

func (*Detector) SetAllowPaths

func (dd *Detector) SetAllowPaths(paths []string)

SetAllowPaths sets the list of allowed safe paths. Paths are cleaned to remove trailing slashes and relative segments.
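
A sketch combining the allowlist with CheckFileAccess; the paths and ErrDangerBlocked are placeholders:

detector.SetAllowPaths([]string{"/home/user/project", "/tmp/hotplex"})

if !detector.CheckFileAccess("/etc/shadow") {
    return ErrDangerBlocked // outside the allowlist
}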

func (*Detector) SetBypassEnabled

func (dd *Detector) SetBypassEnabled(token string, enabled bool) error

SetBypassEnabled enables or disables bypass mode. Requires a valid admin token to succeed. When enabled, dangerous operations are NOT blocked (admin/Evolution mode only).
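
Toggling bypass requires the token set earlier via SetAdminToken; a sketch:

detector.SetAdminToken("secret-token")

// A wrong token is rejected and bypass stays off.
if err := detector.SetBypassEnabled("wrong-token", true); err != nil {
    logger.Warn("bypass refused", "error", err)
}

// The correct token enables bypass; restore protection when done.
if err := detector.SetBypassEnabled("secret-token", true); err == nil {
    defer detector.SetBypassEnabled("secret-token", false)
}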

type RegexRule added in v0.8.1

type RegexRule struct {
	Pattern     *regexp.Regexp // The compiled regex identifying the dangerous sequence
	Description string         // Human-readable explanation of why this pattern is blocked
	Level       DangerLevel    // Severity used for logging and alerting
	Category    string         // Functional category
}

RegexRule implements SecurityRule using regular expressions.

func (*RegexRule) Evaluate added in v0.8.1

func (r *RegexRule) Evaluate(input string) *DangerBlockEvent

Evaluate checks if the regex matches the input.
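
A RegexRule can be constructed directly and registered like any other SecurityRule; the pattern below is illustrative:

rule := &security.RegexRule{
    Pattern:     regexp.MustCompile(`\bmkfs\.\w+\s+/dev/`),
    Description: "Formatting a block device destroys its data",
    Level:       security.DangerLevelCritical,
    Category:    "filesystem",
}
detector.RegisterRule(rule)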

type SecurityRule added in v0.8.1

type SecurityRule interface {
	// Evaluate analyzes the input command. Return non-nil DangerBlockEvent if blocked.
	Evaluate(input string) *DangerBlockEvent
}

SecurityRule defines an interface for evaluating whether input is dangerous.
