ml

package module
v1.0.28
Published: Mar 17, 2026 License: BSD-3-Clause Imports: 18 Imported by: 9

Documentation

Overview

Package ml provides machine learning fingerprint classification functionality. It implements the three-layer hierarchical classifier architecture, ported from the Rust version's ML design.

Package ml — evolution.go provides a profile evolution system that tracks real-world traffic patterns, detects outdated profiles, and automatically triggers model retraining.

The ProfileEvolutionEngine monitors:

  • Browser version drift (new Chrome versions appearing faster than profiles update)
  • Traffic distribution shift (e.g., Chrome share increasing, IE decreasing)
  • Forgery pattern changes (new anti-detect tools appearing)
  • Profile staleness (profiles not seen in real traffic for a long time)

When significant drift is detected, it orchestrates profile registry updates and model evolution.

Package ml provides feature extraction functionality.

Package ml provides the three-layer hierarchical classifier implementation.

Package ml — learner.go provides an online learning system that continuously improves ML models from real-world feedback.

The OnlineLearner collects feedback samples, detects accuracy drift, and triggers model evolution when quality degrades.

Architecture:

Feedback samples → ring buffer → drift detector → evolve trigger
                                                ↓
                                         ProfileRegistry update
                                                ↓
                                         NeuralTrainer.Evolve()
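The ring-buffer and drift-detector stages above can be sketched in a few self-contained lines. This is illustrative only: the real OnlineLearner buffers full feedback samples rather than bare reward scores, and its drift statistic is internal. The window, threshold, and minimum-sample parameters mirror DriftWindowSize, DriftThreshold, and MinSamplesForDrift from OnlineLearnerConfig:

```go
package main

import "fmt"

// ringBuffer is an illustrative fixed-capacity feedback buffer: once full,
// new samples overwrite the oldest ones.
type ringBuffer struct {
	data []float64
	next int
	full bool
}

func newRingBuffer(capacity int) *ringBuffer {
	return &ringBuffer{data: make([]float64, capacity)}
}

func (r *ringBuffer) Push(score float64) {
	r.data[r.next] = score
	r.next = (r.next + 1) % len(r.data)
	if r.next == 0 {
		r.full = true
	}
}

func (r *ringBuffer) Len() int {
	if r.full {
		return len(r.data)
	}
	return r.next
}

// driftDetected compares the mean reward of the newest window against the
// buffer-wide mean; a drop larger than threshold signals accuracy drift.
func driftDetected(r *ringBuffer, window int, threshold float64, minSamples int) bool {
	n := r.Len()
	if n < minSamples || n < window {
		return false
	}
	total := 0.0
	for i := 0; i < n; i++ {
		total += r.data[i]
	}
	recent := 0.0
	for i := 0; i < window; i++ {
		// walk backwards from the most recently written slot
		idx := (r.next - 1 - i + len(r.data)) % len(r.data)
		recent += r.data[idx]
	}
	return total/float64(n)-recent/float64(window) > threshold
}

func main() {
	buf := newRingBuffer(1024)
	for i := 0; i < 200; i++ {
		buf.Push(0.9) // healthy feedback
	}
	for i := 0; i < 100; i++ {
		buf.Push(0.5) // quality degrades
	}
	fmt.Println(driftDetected(buf, 100, 0.05, 50)) // true
}
```

With DefaultOnlineLearnerConfig-style values, a sustained drop in mean reward beyond the threshold would flag drift and trigger the evolve path above.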

Package ml — models.go provides domain-specific neural network models for multi-layer browser fingerprint analysis.

Fingerprint Analysis Model Library — Domain-Driven Design

Core mission: identify real browsers vs automation tools / forged clients through multi-layer fingerprint analysis (TLS + HTTP/2 + TCP/IP + JS + Behavioral).

Four specialized models, each with a clear purpose:

    FingerprintEncoder (encoder)
      Learn:  intrinsic structure & browser uniqueness
      Infer:  map raw features to 32-dim embeddings
      Output: embeddings — same browser close, different browsers far apart
        ↓ 32-dim embedding
    BrowserClassifier (classifier)
      Learn:  embedding-to-identity mapping
      Infer:  identify browser family from embedding
      Output: family probability + confidence
        ↓ classification result
    ForgeryDetector (detector)
      Learn:  real vs forged fingerprint patterns
      Infer:  cross-layer consistency analysis
      Output: forgery prob + type (real/headless/antidetect/proxy)
        ↓ detection result
    ThreatAssessor (assessor)
      Learn:  optimal security response from all signals combined
      Infer:  threat class + recommended action
      Output: threat probabilities + action probs

Training strategies:

    FingerprintEncoder: Triplet Margin Loss over 207 browser profiles
    BrowserClassifier:  cross-entropy from profile labels
    ForgeryDetector:    binary CE from real + synthetic forged data
    ThreatAssessor:     CE from rule labels, then feedback fine-tuning

GPU acceleration: all forward/backward operations use Tensor ops, switchable to a GPU backend via SetDevice(gpu).

Package ml provides lightweight neural network primitives built on Tensor.

Contents:

  • Layer interface and Dense (fully-connected) layer implementation
  • Activation functions: ReLU, Sigmoid, Softmax, Tanh
  • Batch normalization layer for training stability
  • Sequential model: chains multiple layers into a forward/backward graph
  • Adam optimizer with weight decay and gradient clipping
  • Learning rate schedulers: cosine annealing, step decay
  • Loss functions: CrossEntropy, MSE, TripletMargin
  • Parameter initialization: He, Xavier, zero-init

These components build directly on top of Tensor and support mini-batch training.

Package ml — pipeline.go provides end-to-end model inference and training pipelines.

ModelPipeline — Inference Orchestration

    Raw request
      ↓ EncodeFingerprint()
    30-dim feature vector
      ↓ FingerprintEncoder.EncodeSingle()
    32-dim embedding
      ├→ BrowserClassifier.ClassifySingle() → browser identification
      │
      │  30-dim features + 10-dim cross-layer features
      ├→ ForgeryDetector.DetectSingle() → forgery detection
      │
      │  32-dim embedding + forgery result + 8-dim behavior features
      └→ ThreatAssessor.AssessSingle() → threat + action

All model results are aggregated into a PipelineResult.

Trainer — Training Pipeline

    207 browser profiles → data augmentation → training samples

    Phase 1: Encoder pre-training (triplet loss)
      - Positive pairs: same browser + Gaussian noise
      - Negative samples: different browser family

    Phase 2: Classifier training (cross-entropy)
      - Freeze encoder, train classification head

    Phase 3: Forgery detector training (binary cross-entropy)
      - Real samples: from profiles
      - Forged samples: cross-browser layer mixing, headless features

    Phase 4: Threat assessor training (cross-entropy)
      - Initial labels: generated by rule engine
      - Ongoing: online feedback fine-tuning

Package ml provides pretrained models and initialization functionality.

Package ml provides data compatibility with the Rust fingerprint library, supporting import of training data and models from Rust versions.

Package ml — service.go provides a central ML service that unifies inference, validation, feedback, and generation across the entire project.

MLService is the single entry point for all AI capabilities:

  • Infer: Run the 4-stage neural pipeline on any fingerprint
  • Validate: Check if a generated fingerprint looks realistic
  • Feedback: Feed real-world observations back for online learning
  • Generate: Produce ML-guided fingerprint feature vectors
  • Evolve: Trigger incremental model evolution from new profiles

Typical usage:

svc, _ := ml.NewMLService(&ml.ServiceConfig{ModelStorePath: "./models"})
result := svc.Infer(profile, nil)    // inference
ok     := svc.Validate(profile)      // validation
svc.Feedback(profile, 0.9)           // reward feedback

Package ml — store.go provides persistent, versioned model storage.

ModelStore — Persistent Model Library

Versioned model snapshots on disk:

    <baseDir>/
      manifest.json        ← index of all versions
      v1/weights.json      ← snapshot 1
      v2/weights.json      ← snapshot 2 (evolved from v1)
      ...
      latest/weights.json  ← symlink to newest version

    Auto-load on startup:   pipeline.LoadFromStore(store)
    Auto-save after train:  store.Save(pipeline, metadata)
    Incremental evolution:  trainer.Evolve(pipeline, newProfiles)

Keeps at most MaxVersions snapshots; prunes the oldest automatically.

Package ml — tensor.go provides the tensor computation engine with CPU parallel execution and GPU device abstraction.

This is the computational foundation for the entire fingerprint analysis model library. All neural network forward/backward propagation and parameter updates are built on top of Tensor operations.

GPU support strategy:

  • ComputeDevice interface abstracts compute backends (CPU / GPU)
  • CPU implementation uses goroutine-parallel batch operations
  • GPU implementation can be plugged in via build tag "gpu_cuda" + CGo
  • Tensor data layout is row-major, compatible with CUDA/cuBLAS
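Since the layout is row-major, element A(i,j) of an m×k tensor lives at flat index i*k + j. A minimal, self-contained sketch of MatMul over such flat slices (the package's actual implementation additionally parallelizes over rows via the ComputeDevice):

```go
package main

import "fmt"

// matMul multiplies A[m×k] by B[k×n] into C[m×n], all stored row-major in
// flat slices, the same layout the package documents for Tensor. Row-major
// storage is what keeps the buffers directly compatible with CUDA/cuBLAS.
func matMul(a []float64, m, k int, b []float64, n int) []float64 {
	c := make([]float64, m*n)
	for i := 0; i < m; i++ {
		for p := 0; p < k; p++ {
			av := a[i*k+p] // A(i,p) at its row-major flat index
			for j := 0; j < n; j++ {
				c[i*n+j] += av * b[p*n+j] // C(i,j) += A(i,p) * B(p,j)
			}
		}
	}
	return c
}

func main() {
	// A = [[1 2],[3 4]], B = [[5 6],[7 8]]
	c := matMul([]float64{1, 2, 3, 4}, 2, 2, []float64{5, 6, 7, 8}, 2)
	fmt.Println(c) // [19 22 43 50]
}
```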

Package ml — tls_validator.go provides ML-guided TLS fingerprint validation.

The TLSValidator uses the trained forgery detector and cross-layer consistency checks to evaluate whether a TLS ClientHello configuration looks realistic for its claimed browser.

This is used by the generator to reject unrealistic TLS configurations and can be called independently to validate any ClientHello before use.

Package ml provides training data loading and model training functionality.

Index

Constants

const (
	// FingerprintFeatureDim is the raw fingerprint feature dimension (30-dim).
	//
	// TLS layer (8-dim):
	//   [0] tls_version:       TLS version normalized (TLS1.0→0.25, TLS1.1→0.5, TLS1.2→0.75, TLS1.3→1.0)
	//   [1] cipher_count:      cipher suite count / 20.0
	//   [2] tls13_ratio:       TLS 1.3 cipher suite ratio
	//   [3] extension_count:   extension count / 20.0
	//   [4] has_sni:           SNI present (0/1)
	//   [5] has_alpn:          ALPN present (0/1)
	//   [6] curve_count:       elliptic curve count / 6.0
	//   [7] grease_ratio:      GREASE value ratio
	//
	// HTTP/2 layer (6-dim):
	//   [8]  h2_window:        initial window size / 10M
	//   [9]  h2_streams:       max concurrent streams / 1000
	//   [10] h2_header_table:  header table size / 100K
	//   [11] h2_pseudo_order:  pseudo-header order encoding / 24 (24 permutations)
	//   [12] h2_priority:      priority frame (0/1)
	//   [13] h2_settings_cnt:  settings count / 10
	//
	// TCP/IP layer (4-dim):
	//   [14] tcp_ttl:          TTL normalized (32→0.25, 64→0.5, 128→1.0)
	//   [15] tcp_window:       TCP window size / 128K
	//   [16] tcp_mss:          MSS / 2000
	//   [17] tcp_timestamps:   TCP timestamps (0/1)
	//
	// JS frontend layer (8-dim):
	//   [18] canvas_entropy:   Canvas fingerprint entropy
	//   [19] webgl_score:      WebGL score
	//   [20] audio_entropy:    Audio fingerprint entropy
	//   [21] font_count:       font count / 200
	//   [22] storage_score:    storage score
	//   [23] webrtc_active:    WebRTC active (0/1)
	//   [24] hardware_cores:   CPU core count / 16
	//   [25] headless_score:   headless browser score
	//
	// Meta-feature layer (4-dim):
	//   [26] ua_entropy:       User-Agent entropy
	//   [27] config_entropy:   overall config entropy
	//   [28] tool_marker:      automation tool marker
	//   [29] behavior_pattern: behavioral pattern feature
	FingerprintFeatureDim = 30

	// EmbeddingDim is the embedding vector dimension.
	EmbeddingDim = 32

	// CrossLayerFeatureDim is the cross-layer consistency feature dimension (10-dim).
	//   [0] tls_h2_window_match:   TLS↔HTTP/2 window match score
	//   [1] tls_h2_pseudo_match:   TLS↔HTTP/2 pseudo-header order match
	//   [2] tls_tcp_ttl_match:     TLS↔TCP/IP TTL match score
	//   [3] ua_tls_version_match:  UA↔TLS version match
	//   [4] ua_h2_settings_match:  UA↔HTTP/2 settings match
	//   [5] js_headless_indicator: JS headless browser indicator
	//   [6] canvas_webgl_consist:  Canvas↔WebGL consistency
	//   [7] cipher_order_anomaly:  cipher suite order anomaly score
	//   [8] ext_pattern_anomaly:   extension pattern anomaly score
	//   [9] contradiction_count:   cross-layer contradiction count (normalized)
	CrossLayerFeatureDim = 10

	// BehaviorFeatureDim is the behavioral feature dimension (8-dim).
	//   [0] fp_switch_rate:      fingerprint switch rate / 10
	//   [1] request_rate:        request rate / 20
	//   [2] consistency_score:   consistency score [0,1]
	//   [3] risk_trend:          risk trend [-1,1] → [0,1]
	//   [4] observations_norm:   observation count normalized
	//   [5] unique_fp_ratio:     unique fingerprint ratio
	//   [6] session_duration:    session duration normalized
	//   [7] burst_indicator:     burst request indicator
	BehaviorFeatureDim = 8

	// NumBrowserFamilies is the number of browser families.
	NumBrowserFamilies = 7 // chrome, firefox, safari, edge, opera, brave, samsung

	// NumForgeryTypes is the number of forgery types.
	NumForgeryTypes = 4 // real, headless, antidetect, proxy

	// NumThreatClasses is the number of threat classes.
	NumThreatClasses = 6 // none, bot, fingerprint_spoof, session_anomaly, behavioral_anomaly, evasion

	// NumActions is the number of security actions.
	NumActions = 5 // allow, monitor, challenge, throttle, block
)
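As an illustration of the normalization scheme, here is a hypothetical helper that fills a few TLS-layer slots of the 30-dim vector exactly as the comments above prescribe. The real encoding lives in EncodeFingerprint; the helper name and the clamping to [0,1] are assumptions for this sketch:

```go
package main

import "fmt"

// encodeTLSLayer fills several of the first 8 slots of the 30-dim feature
// vector: counts divided by the documented fixed caps, booleans as 0/1.
// Clamping to [0,1] is an assumption of this sketch.
func encodeTLSLayer(fp []float64, cipherCount, extCount, curveCount int, hasSNI, hasALPN bool) {
	fp[1] = clamp01(float64(cipherCount) / 20.0) // cipher_count / 20.0
	fp[3] = clamp01(float64(extCount) / 20.0)    // extension_count / 20.0
	fp[4] = b2f(hasSNI)                          // has_sni (0/1)
	fp[5] = b2f(hasALPN)                         // has_alpn (0/1)
	fp[6] = clamp01(float64(curveCount) / 6.0)   // curve_count / 6.0
}

func clamp01(v float64) float64 {
	if v < 0 {
		return 0
	}
	if v > 1 {
		return 1
	}
	return v
}

func b2f(b bool) float64 {
	if b {
		return 1
	}
	return 0
}

func main() {
	fp := make([]float64, 30) // FingerprintFeatureDim
	encodeTLSLayer(fp, 16, 14, 5, true, true)
	fmt.Println(fp[1], fp[3], fp[4], fp[5]) // 0.8 0.7 1 1
}
```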

Variables

var DefaultClassifier = InitPretrainedClassifier()

DefaultClassifier is the default pretrained classifier (singleton).

var DefaultEvolutionConfig = &EvolutionConfig{
	StalenessDays:              30,
	MinObservations:            100,
	DistributionDriftThreshold: 0.1,
	VersionDriftWindow:         7 * 24 * time.Hour,
	MaxProfileAge:              180 * 24 * time.Hour,
}

DefaultEvolutionConfig provides sensible defaults.

var DefaultGenerateConfig = &GenerateConfig{
	TargetBrowser:  "",
	TargetOS:       "",
	MaxAttempts:    10,
	NoiseIntensity: 0.05,
}

DefaultGenerateConfig provides sensible generation defaults.

var DefaultNeuralTrainerConfig = &NeuralTrainerConfig{
	Epochs:          100,
	BatchSize:       32,
	LearningRate:    0.001,
	AugmentNoise:    0.05,
	TripletMargin:   1.0,
	ForgeryRatio:    1.5,
	ValidationSplit: 0.2,
}

DefaultNeuralTrainerConfig is the default training configuration.

var DefaultOnlineLearnerConfig = &OnlineLearnerConfig{
	BufferSize:         1024,
	DriftThreshold:     0.05,
	DriftWindowSize:    100,
	MinSamplesForDrift: 50,
}

DefaultOnlineLearnerConfig provides sensible defaults.

var DefaultServiceConfig = &ServiceConfig{
	ModelStorePath:             "./models",
	MaxStoreVersions:           10,
	AutoLoadLatest:             true,
	FeedbackBufferSize:         1024,
	EvolveInterval:             24 * time.Hour,
	DriftThreshold:             0.05,
	ValidationForgeryThreshold: 0.3,
	ValidationConsistencyMin:   0.7,
	InferenceBackend:           "native",
	ONNXModelDir:               "./models/onnx",
	ONNXPythonBin:              "python3",
	ONNXPythonScript:           "training/onnx_infer.py",
	ONNXTimeout:                10 * time.Second,
	ShadowCompareEnabled:       false,
	ShadowSampleRate:           0.1,
	ShadowMetricsPath:          "./models/shadow_compare.jsonl",
	CanaryEnabled:              false,
	CanaryRate:                 0.05,
	CanaryBackend:              "onnx",
}

DefaultServiceConfig provides sensible defaults.

Functions

func ClipGradNorm added in v1.0.16

func ClipGradNorm(params []*Param, maxNorm float64)

ClipGradNorm clips parameter gradients by global L2 norm. maxNorm: maximum allowed gradient norm (typical: 1.0-5.0).

func ComputeCrossLayerFeatures added in v1.0.14

func ComputeCrossLayerFeatures(fp []float64) []float64

ComputeCrossLayerFeatures computes cross-layer consistency features (10-dim). These features capture contradiction levels between layers — key signals for the forgery detector.

func EncodeFingerprint added in v1.0.14

func EncodeFingerprint(profile *profiles.ClientProfile) []float64

EncodeFingerprint extracts a 30-dim fingerprint feature vector from a ClientProfile. This bridges raw profile data and neural network models.

func EncodeFingerprintFromFeatureVector added in v1.0.14

func EncodeFingerprintFromFeatureVector(fv *core.FeatureVector) []float64

EncodeFingerprintFromFeatureVector builds a 30-dim vector from an already-extracted FeatureVector. This allows integration with the existing FeatureExtractor pipeline.

func InitWithSyntheticData

func InitWithSyntheticData(classifier *HierarchicalClassifier, sampleCount int) error

InitWithSyntheticData initializes the classifier with synthetic data.

func SetDevice added in v1.0.14

func SetDevice(dev ComputeDevice)

SetDevice sets the global default compute device.

func SortVersions added in v1.0.15

func SortVersions(versions []ModelVersion)

SortVersions sorts model versions by version number ascending.

func TripletMarginLoss added in v1.0.14

func TripletMarginLoss(anchor, positive, negative *Tensor, margin float64) (float64, *Tensor, *Tensor, *Tensor)

TripletMarginLoss computes triplet margin loss. anchor, positive, negative are each [batch × dim] embedding tensors. margin: minimum margin between positive and negative distances.
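The hinge structure of the loss for one triplet can be sketched self-contained; whether the package uses squared or plain Euclidean distance internally is an assumption here:

```go
package main

import "fmt"

// tripletLoss computes max(0, d(a,p) - d(a,n) + margin) for a single
// triplet, using squared Euclidean distance. The hinge pushes the negative
// at least margin farther from the anchor than the positive.
func tripletLoss(anchor, positive, negative []float64, margin float64) float64 {
	dp, dn := 0.0, 0.0
	for i := range anchor {
		ep := anchor[i] - positive[i]
		en := anchor[i] - negative[i]
		dp += ep * ep // squared distance to the positive
		dn += en * en // squared distance to the negative
	}
	if loss := dp - dn + margin; loss > 0 {
		return loss
	}
	return 0
}

func main() {
	a := []float64{0, 0}
	p := []float64{1, 0} // d² = 1
	n := []float64{3, 0} // d² = 9
	fmt.Println(tripletLoss(a, p, n, 1.0)) // 0: negative already far enough
	fmt.Println(tripletLoss(a, p, a, 1.0)) // 2: negative collapsed onto anchor
}
```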

Types

type ActionClass added in v1.0.14

type ActionClass int

ActionClass represents a security action.

const (
	ActAllow     ActionClass = iota // allow
	ActMonitor                      // monitor
	ActChallenge                    // challenge verification
	ActThrottle                     // throttle
	ActBlock                        // block
)

func (ActionClass) String added in v1.0.14

func (a ActionClass) String() string

String returns the string name of the action.

type AdamOptimizer added in v1.0.14

type AdamOptimizer struct {
	LR      float64 // learning rate
	Beta1   float64 // first moment decay rate (default 0.9)
	Beta2   float64 // second moment decay rate (default 0.999)
	Epsilon float64 // numerical stability (default 1e-8)
	// contains filtered or unexported fields
}

AdamOptimizer implements the Adam (Adaptive Moment Estimation) optimizer.

func NewAdamOptimizer added in v1.0.14

func NewAdamOptimizer(params []*Param, lr float64) *AdamOptimizer

NewAdamOptimizer creates an Adam optimizer.

func (*AdamOptimizer) Step added in v1.0.14

func (opt *AdamOptimizer) Step()

Step performs a single parameter update.

type AudioData

type AudioData struct {
	Entropy    float64
	SampleRate float64
}

AudioData holds audio fingerprint data.

type BatchNormLayer added in v1.0.16

type BatchNormLayer struct {
	// contains filtered or unexported fields
}

BatchNormLayer implements batch normalization: normalizes activations across the batch, then applies learned scale (gamma) and shift (beta). During inference, uses running mean/variance estimated during training.
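The inference-mode behavior can be shown per feature: subtract the running mean, divide by the square root of the running variance plus epsilon, then apply the learned gamma and beta. An illustrative single-sample version:

```go
package main

import (
	"fmt"
	"math"
)

// batchNormInfer applies inference-mode batch normalization to one sample:
// normalize each feature with the running mean/variance accumulated during
// training, then scale by gamma and shift by beta.
func batchNormInfer(x, runMean, runVar, gamma, beta []float64, eps float64) []float64 {
	out := make([]float64, len(x))
	for i := range x {
		norm := (x[i] - runMean[i]) / math.Sqrt(runVar[i]+eps)
		out[i] = gamma[i]*norm + beta[i]
	}
	return out
}

func main() {
	x := []float64{5, -3}
	out := batchNormInfer(x,
		[]float64{4, -4},  // running mean
		[]float64{1, 1},   // running variance
		[]float64{1, 2},   // gamma (scale)
		[]float64{0, 0.5}, // beta (shift)
		0)
	fmt.Println(out) // [1 2.5]
}
```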

func NewBatchNormLayer added in v1.0.16

func NewBatchNormLayer(dim int) *BatchNormLayer

NewBatchNormLayer creates a batch normalization layer.

func (*BatchNormLayer) Backward added in v1.0.16

func (l *BatchNormLayer) Backward(gradOutput *Tensor) *Tensor

func (*BatchNormLayer) Forward added in v1.0.16

func (l *BatchNormLayer) Forward(input *Tensor) *Tensor

func (*BatchNormLayer) Params added in v1.0.16

func (l *BatchNormLayer) Params() []*Param

func (*BatchNormLayer) SetTraining added in v1.0.16

func (l *BatchNormLayer) SetTraining(training bool)

type BrowserClassifier added in v1.0.14

type BrowserClassifier struct {
	Net *Sequential // internal neural network
}

BrowserClassifier is the browser family classifier model.

func NewBrowserClassifier added in v1.0.14

func NewBrowserClassifier() *BrowserClassifier

NewBrowserClassifier creates a browser classifier. Architecture:

    Embedding(32) → Dense(128) → BN → ReLU → Dropout(0.2)
                  → Dense(64)  → BN → ReLU → Dropout(0.1)
                  → Dense(7)   → Softmax

func (*BrowserClassifier) Classify added in v1.0.14

func (bc *BrowserClassifier) Classify(embedding *Tensor) []BrowserPrediction

Classify predicts the browser family. embedding is [batch × 32]; one prediction is returned per row.

func (*BrowserClassifier) ClassifySingle added in v1.0.14

func (bc *BrowserClassifier) ClassifySingle(embedding []float64) BrowserPrediction

ClassifySingle classifies a single embedding vector (convenience method).

type BrowserDistribution added in v1.0.17

type BrowserDistribution struct {
	// contains filtered or unexported fields
}

BrowserDistribution tracks the observed real-world browser distribution.

func NewBrowserDistribution added in v1.0.17

func NewBrowserDistribution() *BrowserDistribution

NewBrowserDistribution creates a new distribution tracker.

func (*BrowserDistribution) Distribution added in v1.0.17

func (d *BrowserDistribution) Distribution() map[string]float64

Distribution returns the current probability distribution.

func (*BrowserDistribution) KLDivergence added in v1.0.17

func (d *BrowserDistribution) KLDivergence(reference map[string]float64) float64

KLDivergence computes the KL divergence between the observed distribution and a reference distribution. Higher values indicate more drift.
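For two discrete family distributions p (observed) and q (reference), D_KL(p‖q) = Σ p·ln(p/q). An illustrative map-based version follows; the tiny floor used here to keep zero-mass families finite is an assumption of the sketch, not necessarily the package's smoothing:

```go
package main

import (
	"fmt"
	"math"
)

// klDivergence computes D_KL(observed || reference) over browser families.
// Families with zero reference mass are floored so the sum stays finite.
func klDivergence(observed, reference map[string]float64) float64 {
	const floor = 1e-9
	kl := 0.0
	for family, p := range observed {
		if p <= 0 {
			continue // zero observed mass contributes nothing
		}
		q := reference[family]
		if q < floor {
			q = floor
		}
		kl += p * math.Log(p/q)
	}
	return kl
}

func main() {
	observed := map[string]float64{"chrome": 0.8, "firefox": 0.2}
	fmt.Println(klDivergence(observed, observed)) // identical distributions: 0
	shifted := map[string]float64{"chrome": 0.6, "firefox": 0.4}
	fmt.Printf("%.4f\n", klDivergence(observed, shifted)) // positive: drift detected
}
```

A growing value against the profile registry's reference distribution is exactly what DistributionDriftThreshold in EvolutionConfig gates on.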

func (*BrowserDistribution) Record added in v1.0.17

func (d *BrowserDistribution) Record(family string)

Record records an observation of a browser family.

func (*BrowserDistribution) Total added in v1.0.17

func (d *BrowserDistribution) Total() int64

Total returns the total number of observations.

type BrowserPrediction added in v1.0.14

type BrowserPrediction struct {
	Family      core.BrowserType   // predicted browser family
	Confidence  float64            // confidence [0,1]
	Probs       []float64          // family probability distribution
	FamilyNames []core.BrowserType // family names, aligned with Probs
}

BrowserPrediction holds browser classification results.

type CanaryStats added in v1.0.27

type CanaryStats struct {
	Enabled       bool    `json:"enabled"`
	CanaryBackend string  `json:"canaryBackend,omitempty"`
	CanaryRate    float64 `json:"canaryRate"`
	TotalRequests int64   `json:"totalRequests"`
	CanaryRouted  int64   `json:"canaryRouted"`
	FallbackCount int64   `json:"fallbackCount"`
}

CanaryStats tracks gradual rollout routing and fallback behavior.

type CanvasData

type CanvasData struct {
	Entropy float64
	Hash    string
}

CanvasData holds Canvas fingerprint data.

type ClassificationResult

type ClassificationResult struct {
	Protocol   core.ProtocolType
	Family     core.BrowserType
	Version    string
	Confidence float64
	Labels     map[string]string
	LayerScores
}

ClassificationResult holds a classification result.

func QuickClassify

func QuickClassify(features *core.FeatureVector) *ClassificationResult

QuickClassify classifies quickly using the default classifier.

func (*ClassificationResult) IsHighConfidence

func (r *ClassificationResult) IsHighConfidence() bool

IsHighConfidence reports whether the classification is high confidence.

type Classifier

type Classifier interface {
	// Train trains the classifier.
	Train(features [][]float64, labels []string) error
	// Predict predicts a label with a confidence score.
	Predict(features []float64) (string, float64)
	// PredictTopK returns the top-K prediction results.
	PredictTopK(features []float64, k int) []Prediction
}

Classifier is the classifier interface.

type CompatibilityChecker

type CompatibilityChecker struct {
	// contains filtered or unexported fields
}

CompatibilityChecker is a compatibility checker.

func NewCompatibilityChecker

func NewCompatibilityChecker() *CompatibilityChecker

NewCompatibilityChecker creates a new compatibility checker.

func (*CompatibilityChecker) CheckFeatureCompatibility

func (cc *CompatibilityChecker) CheckFeatureCompatibility() map[string]string

CheckFeatureCompatibility checks feature compatibility.

type ComputeDevice added in v1.0.14

type ComputeDevice interface {
	Name() string
	// MatMul performs matrix multiplication: C = A × B, A[m×k], B[k×n] → C[m×n]
	MatMul(a, b *Tensor) *Tensor
	// MatMulAdd performs fused multiply-add: C = A × B + bias (broadcast)
	MatMulAdd(a, b, bias *Tensor) *Tensor
	// Apply applies an element-wise function: out[i] = fn(in[i])
	Apply(t *Tensor, fn func(float64) float64) *Tensor
	// BatchParallel executes fn in parallel over [0, batchSize)
	BatchParallel(batchSize int, fn func(i int))
}

ComputeDevice defines the compute device interface. CPU uses goroutine parallelism; GPU can be plugged in via CGo + CUDA.

var CPU ComputeDevice = &cpuDevice{workers: runtime.NumCPU()}

CPU is the default CPU compute device using goroutine parallelism.

var DefaultDevice ComputeDevice = CPU

DefaultDevice is the currently active compute device, defaults to CPU. Use SetDevice(gpu) to switch to a GPU backend.
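One plausible shape for the CPU device's BatchParallel is a chunked fan-out over goroutines. The real chunking strategy is internal to the package, so this is a self-contained sketch:

```go
package main

import (
	"fmt"
	"sync"
)

// batchParallel splits [0, batchSize) into contiguous chunks, runs each
// chunk on its own goroutine, and waits for all of them to finish.
func batchParallel(batchSize, workers int, fn func(i int)) {
	if workers < 1 {
		workers = 1
	}
	chunk := (batchSize + workers - 1) / workers // ceil division
	var wg sync.WaitGroup
	for start := 0; start < batchSize; start += chunk {
		end := start + chunk
		if end > batchSize {
			end = batchSize
		}
		wg.Add(1)
		go func(lo, hi int) {
			defer wg.Done()
			for i := lo; i < hi; i++ {
				fn(i)
			}
		}(start, end)
	}
	wg.Wait()
}

func main() {
	out := make([]float64, 8)
	batchParallel(len(out), 4, func(i int) {
		out[i] = float64(i) * float64(i) // each index written by exactly one goroutine
	})
	fmt.Println(out) // [0 1 4 9 16 25 36 49]
}
```

Because every index is owned by exactly one goroutine, no locking is needed inside fn, which is what makes this pattern cheap for per-row tensor operations.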

type CosineAnnealingLR added in v1.0.16

type CosineAnnealingLR struct {
	InitialLR float64
	MinLR     float64
}

CosineAnnealingLR implements cosine annealing: LR decays from initial to minLR following a cosine curve.

func NewCosineAnnealingLR added in v1.0.16

func NewCosineAnnealingLR(initialLR, minLR float64) *CosineAnnealingLR

func (*CosineAnnealingLR) StepLR added in v1.0.16

func (s *CosineAnnealingLR) StepLR(opt *AdamOptimizer, epoch, totalEpochs int)

type CrawlerFeedback added in v1.0.24

type CrawlerFeedback struct {
	ProfileID     string
	Profile       *profiles.ClientProfile
	URL           string
	Success       bool
	Blocked       bool
	BlockReason   string
	DetectionInfo map[string]interface{}
	Duration      time.Duration
	Timestamp     time.Time
}

CrawlerFeedback represents a feedback event from the crawler subsystem.

func (*CrawlerFeedback) ToFeedbackSample added in v1.0.24

func (cf *CrawlerFeedback) ToFeedbackSample() *FeedbackSample

ToFeedbackSample converts a CrawlerFeedback to a ML FeedbackSample.

type CrossLayerValidationResult added in v1.0.17

type CrossLayerValidationResult struct {
	TLS   *TLSValidationResult
	HTTP  *HTTPValidationResult
	Valid bool

	// OverallConsistency is the average consistency across layers.
	OverallConsistency float64

	// BrowserAgreement is true if TLS and HTTP agree on the browser.
	BrowserAgreement bool

	// Issues lists all cross-layer problems.
	Issues []string
}

CrossLayerValidationResult aggregates TLS and HTTP validation.

type CrossLayerValidator added in v1.0.17

type CrossLayerValidator struct {
	// contains filtered or unexported fields
}

CrossLayerValidator validates TLS and HTTP together for consistency.

func NewCrossLayerValidator added in v1.0.17

func NewCrossLayerValidator(pipeline *ModelPipeline) *CrossLayerValidator

NewCrossLayerValidator creates a validator that checks TLS+HTTP consistency.

func (*CrossLayerValidator) ValidateProfile added in v1.0.17

ValidateProfile validates both TLS and HTTP layers of a profile.

type DataLoader

type DataLoader struct {
	// contains filtered or unexported fields
}

DataLoader is a training data loader.

func NewDataLoader

func NewDataLoader(dataPath string) *DataLoader

NewDataLoader creates a new data loader.

func (*DataLoader) LoadDataset

func (dl *DataLoader) LoadDataset(filename string) (*Dataset, error)

LoadDataset loads a dataset.

func (*DataLoader) LoadMultipleDatasets

func (dl *DataLoader) LoadMultipleDatasets(filenames []string) (*Dataset, error)

LoadMultipleDatasets loads multiple datasets and merges them.

type Dataset

type Dataset struct {
	Name        string           `json:"name"`
	Version     string           `json:"version"`
	Description string           `json:"description"`
	Samples     []TrainingSample `json:"samples"`
	Statistics  DatasetStats     `json:"statistics"`
}

Dataset represents a training dataset.

func GenerateSyntheticDataset

func GenerateSyntheticDataset(name string, sampleCount int) *Dataset

GenerateSyntheticDataset generates synthetic training data.

func (*Dataset) ExportToRust

func (d *Dataset) ExportToRust() *RustDataset

ExportToRust exports to the Rust format (bidirectional compatibility).

func (*Dataset) SaveDataset

func (d *Dataset) SaveDataset(path string) error

SaveDataset saves the dataset to a file.

func (*Dataset) SaveToRustFormat

func (d *Dataset) SaveToRustFormat(path string) error

SaveToRustFormat saves to the Rust format.

func (*Dataset) ToTrainingData

func (d *Dataset) ToTrainingData() *TrainingData

ToTrainingData converts to the training data format.

type DatasetStats

type DatasetStats struct {
	TotalSamples   int            `json:"total_samples"`
	ProtocolCounts map[string]int `json:"protocol_counts"`
	FamilyCounts   map[string]int `json:"family_counts"`
	VersionCounts  map[string]int `json:"version_counts"`
}

DatasetStats holds dataset statistics.

type DenseLayer added in v1.0.14

type DenseLayer struct {
	Weight  *Tensor // [outDim, inDim]
	Bias    *Tensor // [outDim]
	WeightG *Tensor // weight gradient
	BiasG   *Tensor // bias gradient
	// contains filtered or unexported fields
}

DenseLayer implements a fully-connected affine layer. Output is y = x*W^T + b.

func NewDenseLayer added in v1.0.14

func NewDenseLayer(inDim, outDim int) *DenseLayer

NewDenseLayer creates a fully-connected layer with He initialization.

func (*DenseLayer) Backward added in v1.0.14

func (l *DenseLayer) Backward(gradOutput *Tensor) *Tensor

func (*DenseLayer) Forward added in v1.0.14

func (l *DenseLayer) Forward(input *Tensor) *Tensor

func (*DenseLayer) Params added in v1.0.14

func (l *DenseLayer) Params() []*Param

func (*DenseLayer) SetTraining added in v1.0.14

func (l *DenseLayer) SetTraining(training bool)

type DropoutLayer added in v1.0.14

type DropoutLayer struct {
	// contains filtered or unexported fields
}

DropoutLayer implements dropout regularization.

func NewDropoutLayer added in v1.0.14

func NewDropoutLayer(rate float64) *DropoutLayer

func (*DropoutLayer) Backward added in v1.0.14

func (l *DropoutLayer) Backward(gradOutput *Tensor) *Tensor

func (*DropoutLayer) Forward added in v1.0.14

func (l *DropoutLayer) Forward(input *Tensor) *Tensor

func (*DropoutLayer) Params added in v1.0.14

func (l *DropoutLayer) Params() []*Param

func (*DropoutLayer) SetTraining added in v1.0.14

func (l *DropoutLayer) SetTraining(training bool)

type EvolutionConfig added in v1.0.17

type EvolutionConfig struct {
	// StalenessDays: profiles not seen in traffic for this many days
	// are flagged as stale.
	StalenessDays int

	// MinObservations: minimum traffic observations before making
	// evolution decisions.
	MinObservations int

	// DistributionDriftThreshold: KL divergence above this triggers
	// distribution-based evolution.
	DistributionDriftThreshold float64

	// VersionDriftWindow: how far back to look for version gaps.
	VersionDriftWindow time.Duration

	// MaxProfileAge: profiles older than this are considered for retirement.
	MaxProfileAge time.Duration
}

EvolutionConfig controls the profile evolution system.

type EvolutionEvent added in v1.0.17

type EvolutionEvent struct {
	Timestamp       time.Time
	Trigger         string // "drift", "staleness", "version_gap", "manual"
	Description     string
	ProfilesAdded   int
	ProfilesRetired int
	MetricsBefore   *TrainingMetrics
	MetricsAfter    *TrainingMetrics
}

EvolutionEvent records a single evolution trigger.

type EvolutionHealthReport added in v1.0.17

type EvolutionHealthReport struct {
	Timestamp             time.Time
	TotalProfiles         int
	ActiveProfiles        int
	StaleProfiles         int
	StaleProfileIDs       []string
	HighForgeryProfiles   int
	HighForgeryProfileIDs []string
	LowConfidenceProfiles int
	DistributionDrift     float64
	DriftDetected         bool
	NeedsEvolution        bool
	Reasons               []string
}

EvolutionHealthReport summarises the profile ecosystem health.

func (*EvolutionHealthReport) ShouldEvolve added in v1.0.17

func (r *EvolutionHealthReport) ShouldEvolve() bool

ShouldEvolve reports whether automatic evolution should be triggered.

type EvolveConfig added in v1.0.15

type EvolveConfig struct {
	Epochs       int     // fine-tuning epochs (typically 5-20, much less than full training)
	LearningRate float64 // lower LR for fine-tuning (e.g. 0.0001)
	AugmentNoise float64 // data augmentation noise
}

EvolveConfig configures incremental model evolution.

func DefaultEvolveConfig added in v1.0.15

func DefaultEvolveConfig() *EvolveConfig

DefaultEvolveConfig returns sensible defaults for incremental evolution.

type FamilyClassifier

type FamilyClassifier struct {
	// contains filtered or unexported fields
}

FamilyClassifier is the browser family classifier (layer 2).

func NewFamilyClassifier

func NewFamilyClassifier(protocol core.ProtocolType) *FamilyClassifier

NewFamilyClassifier creates a new browser family classifier.

func (*FamilyClassifier) Predict

func (fc *FamilyClassifier) Predict(features []float64) (core.BrowserType, float64)

Predict predicts the browser family.

func (*FamilyClassifier) Train

func (fc *FamilyClassifier) Train(features [][]float64, labels []core.BrowserType) error

Train trains the browser family classifier.

type FeatureExtractor

type FeatureExtractor struct {
	// contains filtered or unexported fields
}

FeatureExtractor extracts feature vectors from fingerprint data.

func NewFeatureExtractor

func NewFeatureExtractor() *FeatureExtractor

NewFeatureExtractor creates a new feature extractor.

func (*FeatureExtractor) AnalyzeFeatureImportance

func (fe *FeatureExtractor) AnalyzeFeatureImportance(trainingData []*core.FeatureVector, labels []string) []FeatureImportance

AnalyzeFeatureImportance analyzes feature importance over the training data.

func (*FeatureExtractor) ExtractCombined

func (fe *FeatureExtractor) ExtractCombined(serverData ServerFingerprintData, frontendData FrontendFingerprintData) *core.FeatureVector

ExtractCombined extracts a combined feature vector from server-side and frontend fingerprint data.

func (*FeatureExtractor) ExtractFromClientHello

func (fe *FeatureExtractor) ExtractFromClientHello(spec core.ClientHelloSpec) *core.FeatureVector

ExtractFromClientHello extracts features from a TLS ClientHello spec.

func (*FeatureExtractor) ExtractFromFrontend

func (fe *FeatureExtractor) ExtractFromFrontend(data FrontendFingerprintData) *core.FeatureVector

ExtractFromFrontend extracts features from frontend fingerprint data.

func (*FeatureExtractor) ExtractFromHTTPHeaders

func (fe *FeatureExtractor) ExtractFromHTTPHeaders(headers *core.HTTPHeaders) *core.FeatureVector

ExtractFromHTTPHeaders extracts features from HTTP headers.

func (*FeatureExtractor) ExtractFromProfile

func (fe *FeatureExtractor) ExtractFromProfile(profile *profiles.ClientProfile) *core.FeatureVector

ExtractFromProfile extracts a feature vector from a client profile.

func (*FeatureExtractor) Normalize

Normalize normalizes a feature vector.

func (*FeatureExtractor) SelectTopFeatures

func (fe *FeatureExtractor) SelectTopFeatures(importance []FeatureImportance, topK int) []core.FeatureType

SelectTopFeatures selects the topK most important features.

type FeatureImportance

type FeatureImportance struct {
	Feature    core.FeatureType
	Importance float64
}

FeatureImportance pairs a feature with its importance score.

type FeedbackSample added in v1.0.17

type FeedbackSample struct {
	Profile   *profiles.ClientProfile
	Features  []float64
	Label     string  // ground-truth browser family (optional)
	Reward    float64 // 0–1 reward signal
	Timestamp time.Time
}

FeedbackSample is a single observation fed back into the learning system.

type FeedbackSource added in v1.0.24

type FeedbackSource string

FeedbackSource identifies where a feedback sample originated.

const (
	FeedbackSourceCrawler FeedbackSource = "crawler"
	FeedbackSourceWAF     FeedbackSource = "waf"
	FeedbackSourceGateway FeedbackSource = "gateway"
)

type FingerprintEncoder added in v1.0.14

type FingerprintEncoder struct {
	Net *Sequential // internal neural network
}

FingerprintEncoder is the fingerprint embedding model.

func NewFingerprintEncoder added in v1.0.14

func NewFingerprintEncoder() *FingerprintEncoder

NewFingerprintEncoder creates a fingerprint encoder. Architecture: Input(30) → Dense(256) → BN → ReLU → Dropout(0.2)

→ Dense(128) → BN → ReLU → Dropout(0.1)
→ Dense(64) → BN → ReLU → Dense(32)

func (*FingerprintEncoder) Encode added in v1.0.14

func (enc *FingerprintEncoder) Encode(features *Tensor) *Tensor

Encode encodes raw fingerprint features into embedding vectors. features: [batch × 30] → embedding: [batch × 32] (L2-normalized)

func (*FingerprintEncoder) EncodeSingle added in v1.0.14

func (enc *FingerprintEncoder) EncodeSingle(features []float64) []float64

EncodeSingle encodes a single fingerprint vector (convenience method).
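Because Encode L2-normalizes its embeddings, the dot product of two embeddings is their cosine similarity, which makes the "same browser close, different browsers far" property cheap to check. A self-contained sketch of that normalization (illustrative, not the library's code):

```go
package main

import (
	"fmt"
	"math"
)

// l2Normalize scales v to unit Euclidean length, as the encoder does for
// its 32-dim embeddings.
func l2Normalize(v []float64) []float64 {
	var sum float64
	for _, x := range v {
		sum += x * x
	}
	norm := math.Sqrt(sum)
	if norm == 0 {
		return v
	}
	out := make([]float64, len(v))
	for i, x := range v {
		out[i] = x / norm
	}
	return out
}

// dot of two unit vectors is their cosine similarity, so comparing two
// embeddings needs no further normalization.
func dot(a, b []float64) float64 {
	var s float64
	for i := range a {
		s += a[i] * b[i]
	}
	return s
}

func main() {
	e := l2Normalize([]float64{3, 4})
	fmt.Println(e, dot(e, e)) // unit vector; self-similarity is 1
}
```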

type FingerprintMatcher

type FingerprintMatcher struct {
	// contains filtered or unexported fields
}

FingerprintMatcher matches feature vectors against known fingerprint profiles.

func NewFingerprintMatcher

func NewFingerprintMatcher() *FingerprintMatcher

NewFingerprintMatcher creates a new fingerprint matcher.

func (*FingerprintMatcher) Initialize

func (fm *FingerprintMatcher) Initialize()

Initialize initializes the matcher.

func (*FingerprintMatcher) Match

Match matches a fingerprint.

func (*FingerprintMatcher) MatchWithProfile

func (fm *FingerprintMatcher) MatchWithProfile(features *core.FeatureVector, profiles []core.FingerprintSpec) *ClassificationResult

MatchWithProfile matches the feature vector against the given known profiles.

func (*FingerprintMatcher) Train

func (fm *FingerprintMatcher) Train(data *TrainingData) error

Train trains the matcher.

type FontsData

type FontsData struct {
	List []string
}

FontsData holds font fingerprint data.

type ForgeryDetector added in v1.0.14

type ForgeryDetector struct {
	DetectorNet *Sequential // forgery probability network
	TypeNet     *Sequential // forgery type classification network
}

ForgeryDetector is the forgery detection model.

func NewForgeryDetector added in v1.0.14

func NewForgeryDetector() *ForgeryDetector

NewForgeryDetector creates a forgery detector. Detector: Input(40) → Dense(128) → BN → ReLU → Dropout(0.2)

→ Dense(64) → BN → ReLU → Dense(32) → ReLU → Dense(1) → Sigmoid

TypeNet: Input(40) → Dense(128) → BN → ReLU → Dropout(0.2)

→ Dense(64) → BN → ReLU → Dense(32) → ReLU → Dense(4) → Softmax

func (*ForgeryDetector) Detect added in v1.0.14

func (fd *ForgeryDetector) Detect(input *Tensor) []ForgeryResult

Detect performs forgery detection. input: [batch × 40] (30-dim fingerprint + 10-dim cross-layer features)

func (*ForgeryDetector) DetectSingle added in v1.0.14

func (fd *ForgeryDetector) DetectSingle(fpFeatures, crossLayerFeatures []float64) ForgeryResult

DetectSingle detects a single sample (convenience method).

type ForgeryResult added in v1.0.14

type ForgeryResult struct {
	IsForgery   bool        // whether forged (ForgeryProb > 0.5)
	ForgeryProb float64     // forgery probability [0,1]
	ForgeryType ForgeryType // predicted forgery type
	TypeProbs   []float64   // per-type probability distribution
	TypeNames   []string    // type names, aligned with TypeProbs
}

ForgeryResult holds forgery detection results.

type ForgeryType added in v1.0.14

type ForgeryType int

ForgeryType represents the type of fingerprint forgery.

const (
	ForgeryReal       ForgeryType = iota // real browser
	ForgeryHeadless                      // headless browser (Puppeteer/Selenium/PhantomJS)
	ForgeryAntiDetect                    // anti-detection tool (tls-client/curl-impersonate/GoLogin)
	ForgeryProxy                         // proxy/MITM (misconfigured features)
)

func (ForgeryType) String added in v1.0.14

func (ft ForgeryType) String() string

String returns the string name of the forgery type.
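A ForgeryResult's fields relate mechanically: IsForgery is a 0.5 threshold on ForgeryProb, and ForgeryType is the argmax over TypeProbs in the constant order above. A self-contained sketch of that decision rule:

```go
package main

import "fmt"

// classifyForgery derives the two decision fields of a ForgeryResult from
// raw model outputs: a 0.5 threshold on the forgery probability, and the
// argmax over the per-type distribution. The returned index follows the
// constant order above (0=Real, 1=Headless, 2=AntiDetect, 3=Proxy).
func classifyForgery(forgeryProb float64, typeProbs []float64) (bool, int) {
	best := 0
	for i, p := range typeProbs {
		if p > typeProbs[best] {
			best = i
		}
	}
	return forgeryProb > 0.5, best
}

func main() {
	isForged, ft := classifyForgery(0.92, []float64{0.05, 0.7, 0.2, 0.05})
	fmt.Println(isForged, ft) // true 1 → ForgeryHeadless
}
```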

type FrontendFingerprintData

type FrontendFingerprintData struct {
	Canvas   CanvasData
	WebGL    WebGLData
	Audio    AudioData
	Fonts    FontsData
	Storage  StorageData
	WebRTC   WebRTCData
	Hardware HardwareData
	Timing   TimingData
}

FrontendFingerprintData holds frontend fingerprint data.

type GenerateConfig added in v1.0.17

type GenerateConfig struct {
	// TargetBrowser is the desired browser family.
	TargetBrowser string

	// TargetOS is the desired operating system.
	TargetOS string

	// MaxAttempts is how many candidates to try before giving up.
	MaxAttempts int

	// NoiseIntensity controls how much random variation to inject [0,1].
	NoiseIntensity float64
}

GenerateConfig configures ML-guided fingerprint generation.

type GenerateResult added in v1.0.17

type GenerateResult struct {
	// Profile is the generated fingerprint.
	Profile *profiles.ClientProfile

	// Validation is how the generated fingerprint scored.
	Validation *ValidationResult

	// Attempts is how many candidates were evaluated.
	Attempts int

	// SourceProfileID is the base profile that was mutated.
	SourceProfileID string
}

GenerateResult is the output of ML-guided fingerprint generation.

type HTTPValidationResult added in v1.0.17

type HTTPValidationResult struct {
	// Valid indicates whether the HTTP configuration looks realistic.
	Valid bool

	// DetectedBrowser is the browser the ML model thinks owns these headers.
	DetectedBrowser string

	// Confidence is the classification confidence [0,1].
	Confidence float64

	// ForgeryProb is the probability this is a forged HTTP fingerprint [0,1].
	ForgeryProb float64

	// ConsistencyScore is how consistent HTTP headers are with TLS [0,1].
	ConsistencyScore float64

	// HeaderOrderScore rates the header ordering quality [0,1].
	HeaderOrderScore float64

	// Issues lists specific problems found.
	Issues []string
}

HTTPValidationResult contains the ML validation of HTTP configuration.

type HTTPValidator added in v1.0.17

type HTTPValidator struct {
	// contains filtered or unexported fields
}

HTTPValidator validates HTTP configurations using ML models.

func NewHTTPValidator added in v1.0.17

func NewHTTPValidator(pipeline *ModelPipeline) *HTTPValidator

NewHTTPValidator creates a new HTTP validator backed by the given pipeline.

func (*HTTPValidator) ValidateProfile added in v1.0.17

func (v *HTTPValidator) ValidateProfile(profile *profiles.ClientProfile) *HTTPValidationResult

ValidateProfile validates the HTTP aspects of a full client profile.

type HardwareData

type HardwareData struct {
	Cores       int
	Memory      int
	TouchPoints int
}

HardwareData holds hardware fingerprint data.

type HierarchicalClassifier

type HierarchicalClassifier struct {
	// contains filtered or unexported fields
}

HierarchicalClassifier is the three-layer hierarchical classifier, ported from the Rust version's ML architecture. Layer 1 classifies the protocol type (TLS/HTTP/QUIC), layer 2 the browser family (Chrome/Firefox/Safari), and layer 3 the version (Chrome 120/121/122).
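Each layer can be read as a nearest-centroid lookup, consistent with the protocol/family/version centers stored in PretrainedModel. A self-contained sketch under that assumption (the library's actual layer implementation may differ):

```go
package main

import (
	"fmt"
	"math"
)

// nearestCenter picks the label whose center is closest to the feature
// vector in squared Euclidean distance. Each hierarchy layer then descends
// into the centers keyed by the previous layer's answer.
func nearestCenter(features []float64, centers map[string][]float64) string {
	best, bestDist := "", math.Inf(1)
	for label, c := range centers {
		var d float64
		for i := range features {
			diff := features[i] - c[i]
			d += diff * diff
		}
		if d < bestDist {
			best, bestDist = label, d
		}
	}
	return best
}

func main() {
	// Layer 1 (protocol); layer 2 would look up family centers under "tls".
	protocol := nearestCenter([]float64{1, 0}, map[string][]float64{
		"tls": {1, 0}, "quic": {0, 1},
	})
	fmt.Println(protocol) // "tls"
}
```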

func InitPretrainedClassifier

func InitPretrainedClassifier() *HierarchicalClassifier

InitPretrainedClassifier creates and initializes a pretrained classifier.

func NewHierarchicalClassifier

func NewHierarchicalClassifier() *HierarchicalClassifier

NewHierarchicalClassifier creates a new three-layer hierarchical classifier.

func (*HierarchicalClassifier) Classify

Classify performs hierarchical classification.

func (*HierarchicalClassifier) ClassifyBatch

func (hc *HierarchicalClassifier) ClassifyBatch(features []*core.FeatureVector) []*ClassificationResult

ClassifyBatch classifies a batch of feature vectors.

func (*HierarchicalClassifier) GetConfidenceThresholds

func (hc *HierarchicalClassifier) GetConfidenceThresholds() (protocol, family, version float64)

GetConfidenceThresholds returns the confidence thresholds for each layer.

func (*HierarchicalClassifier) Initialize

func (hc *HierarchicalClassifier) Initialize()

Initialize initializes the classifier hierarchy.

func (*HierarchicalClassifier) Train

func (hc *HierarchicalClassifier) Train(data *TrainingData) error

Train trains the three-layer hierarchical classifier.

type LRScheduler added in v1.0.16

type LRScheduler interface {
	// StepLR updates the learning rate. Call once per epoch.
	StepLR(optimizer *AdamOptimizer, epoch, totalEpochs int)
}

LRScheduler adjusts the optimizer learning rate over training.

type Layer added in v1.0.14

type Layer interface {
	// Forward computes layer output for the input tensor.
	Forward(input *Tensor) *Tensor
	// Backward propagates output gradients and accumulates parameter gradients.
	Backward(gradOutput *Tensor) *Tensor
	// Params returns trainable parameters and their gradients (for optimizer updates).
	Params() []*Param
	// SetTraining switches between training and inference modes.
	SetTraining(training bool)
}

Layer defines the interface for a single neural network layer.

type LayerScores

type LayerScores struct {
	ProtocolConfidence float64
	FamilyConfidence   float64
	VersionConfidence  float64
}

LayerScores holds the confidence scores for each layer.

type MLService added in v1.0.17

type MLService struct {
	// contains filtered or unexported fields
}

MLService is the project-wide ML service singleton.

func NewMLService added in v1.0.17

func NewMLService(config *ServiceConfig) (*MLService, error)

NewMLService creates and initialises the ML service.

func (*MLService) Evolve added in v1.0.17

func (s *MLService) Evolve(registry *profiles.ProfileRegistry) (*TrainingMetrics, error)

Evolve triggers incremental training with the current profile registry. Returns the training metrics and an error if the evolution fails.

func (*MLService) Feedback added in v1.0.17

func (s *MLService) Feedback(sample *FeedbackSample)

Feedback records an observation for online learning. After recording, it checks for accuracy drift and triggers auto-evolution if the learner has detected a significant accuracy drop.

func (*MLService) Generate added in v1.0.17

func (s *MLService) Generate(config *GenerateConfig) (*GenerateResult, error)

Generate produces a fingerprint that passes forgery/consistency validation. It works by selecting a base profile, applying controlled mutations, and validating with the neural pipeline until a candidate passes.

func (*MLService) Infer added in v1.0.17

func (s *MLService) Infer(profile *profiles.ClientProfile, behavior []float64) *PipelineResult

Infer runs the full 4-stage neural pipeline on a client profile.

func (*MLService) InferBatch added in v1.0.17

func (s *MLService) InferBatch(profs []*profiles.ClientProfile, behaviors [][]float64) []*PipelineResult

InferBatch runs batch inference for multiple profiles.

func (*MLService) InferFromFeatures added in v1.0.17

func (s *MLService) InferFromFeatures(fv *core.FeatureVector, behavior []float64) *PipelineResult

InferFromFeatures runs inference from a pre-extracted feature vector.

func (*MLService) IsReady added in v1.0.17

func (s *MLService) IsReady() bool

IsReady returns true if the pipeline has trained/loaded weights.

func (*MLService) Learner added in v1.0.19

func (s *MLService) Learner() *OnlineLearner

Learner returns the online learner instance (nil if not configured).

func (*MLService) Pipeline added in v1.0.17

func (s *MLService) Pipeline() *ModelPipeline

Pipeline returns the underlying model pipeline (for advanced use).

func (*MLService) Stats added in v1.0.17

func (s *MLService) Stats() *ServiceStats

Stats returns current service statistics.

func (*MLService) Store added in v1.0.17

func (s *MLService) Store() *ModelStore

Store returns the model store.

func (*MLService) Train added in v1.0.17

func (s *MLService) Train(registry *profiles.ProfileRegistry) error

Train runs full training from scratch with the given profile registry.

func (*MLService) Validate added in v1.0.17

func (s *MLService) Validate(profile *profiles.ClientProfile) *ValidationResult

Validate checks whether a fingerprint profile looks realistic according to the trained models. This is used by the generator to reject bad candidates and by TLS/HTTP modules to verify output quality.

func (*MLService) ValidateFeatures added in v1.0.17

func (s *MLService) ValidateFeatures(features []float64) *ValidationResult

ValidateFeatures validates from a raw feature vector.

type ModelInfo

type ModelInfo struct {
	Name          string
	Version       string
	Description   string
	ProtocolCount int
	FamilyCount   int
	VersionCount  int
	TotalCenters  int
}

ModelInfo describes a pretrained model.

func GetModelInfo

func GetModelInfo(model *PretrainedModel) *ModelInfo

GetModelInfo returns information about the given pretrained model.

type ModelLoader

type ModelLoader struct {
	// contains filtered or unexported fields
}

ModelLoader loads and saves pretrained models.

func NewModelLoader

func NewModelLoader(modelPath string) *ModelLoader

NewModelLoader creates a new model loader.

func (*ModelLoader) LoadModel

func (ml *ModelLoader) LoadModel(filename string) (*PretrainedModel, error)

LoadModel loads a pretrained model from the given file.

func (*ModelLoader) SaveModel

func (ml *ModelLoader) SaveModel(model *PretrainedModel, filename string) error

SaveModel saves a pretrained model to the given file.

type ModelManifest added in v1.0.15

type ModelManifest struct {
	Versions []ModelVersion `json:"versions"`
}

ModelManifest is the index of all stored model versions.

type ModelPipeline added in v1.0.14

type ModelPipeline struct {
	// contains filtered or unexported fields
}

ModelPipeline is the end-to-end inference pipeline chaining four models.

func NewModelPipeline added in v1.0.14

func NewModelPipeline() *ModelPipeline

NewModelPipeline creates a new inference pipeline.

func (*ModelPipeline) Infer added in v1.0.14

func (p *ModelPipeline) Infer(profile *profiles.ClientProfile, behavior []float64) *PipelineResult

Infer executes the full inference chain from a ClientProfile.

func (*ModelPipeline) InferBatch added in v1.0.14

func (p *ModelPipeline) InferBatch(profs []*profiles.ClientProfile, behaviors [][]float64) []*PipelineResult

InferBatch performs batch inference, leveraging matrix operations for acceleration.

func (*ModelPipeline) InferFromFeatures added in v1.0.14

func (p *ModelPipeline) InferFromFeatures(fv *core.FeatureVector, behavior []float64) *PipelineResult

InferFromFeatures executes the inference chain from an already-extracted feature vector.

func (*ModelPipeline) LoadFromStore added in v1.0.15

func (p *ModelPipeline) LoadFromStore(store *ModelStore) (bool, error)

LoadFromStore loads the latest model from the store. Returns true if a model was loaded, false if the store is empty (no error).

func (*ModelPipeline) LoadWeights added in v1.0.14

func (p *ModelPipeline) LoadWeights(path string) error

LoadWeights loads model weights from a file.

func (*ModelPipeline) SaveToStore added in v1.0.15

func (p *ModelPipeline) SaveToStore(store *ModelStore, description string, metrics *TrainingMetrics) error

SaveToStore saves the current weights to the store as a new version.

func (*ModelPipeline) SaveWeights added in v1.0.14

func (p *ModelPipeline) SaveWeights(path string) error

SaveWeights saves model weights to a file.

func (*ModelPipeline) SetTraining added in v1.0.14

func (p *ModelPipeline) SetTraining(training bool)

SetTraining switches all models to training mode or inference mode.

func (*ModelPipeline) Trained added in v1.0.14

func (p *ModelPipeline) Trained() bool

Trained returns whether the pipeline has been trained or has loaded weights.

type ModelStore added in v1.0.15

type ModelStore struct {
	// contains filtered or unexported fields
}

ModelStore manages versioned model snapshots on disk.

func NewModelStore added in v1.0.15

func NewModelStore(config *StoreConfig) (*ModelStore, error)

NewModelStore opens or creates a model store at the configured directory.

func (*ModelStore) Latest added in v1.0.15

func (s *ModelStore) Latest() *ModelVersion

Latest returns the latest model version, or nil if the store is empty.

func (*ModelStore) ListVersions added in v1.0.15

func (s *ModelStore) ListVersions() []ModelVersion

ListVersions returns all stored versions, oldest first.

func (*ModelStore) Load added in v1.0.15

func (s *ModelStore) Load(pipeline *ModelPipeline, version int) error

Load restores pipeline weights from a specific version.

func (*ModelStore) LoadLatest added in v1.0.15

func (s *ModelStore) LoadLatest(pipeline *ModelPipeline) (bool, error)

LoadLatest restores the most recent version. Returns false if store is empty.

func (*ModelStore) Save added in v1.0.15

func (s *ModelStore) Save(pipeline *ModelPipeline, description string, metrics *TrainingMetrics) error

Save persists the current pipeline weights as a new version.

func (*ModelStore) VersionCount added in v1.0.15

func (s *ModelStore) VersionCount() int

VersionCount returns the number of stored versions.

type ModelVersion added in v1.0.15

type ModelVersion struct {
	Version     int       `json:"version"`
	CreatedAt   time.Time `json:"created_at"`
	ParentVer   int       `json:"parent_version,omitempty"` // 0 = trained from scratch
	Epochs      int       `json:"epochs"`
	TrainLoss   float64   `json:"train_loss,omitempty"`
	ValAccuracy float64   `json:"val_accuracy,omitempty"`
	Description string    `json:"description,omitempty"`
}

ModelVersion describes a single stored model snapshot.
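The JSON tags above determine how a manifest entry serializes; in particular, a ParentVer of 0 (trained from scratch) is dropped by omitempty. A self-contained sketch reproducing the struct to show the wire format:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// ModelVersion reproduces the exported struct above so its JSON tags can
// be exercised directly.
type ModelVersion struct {
	Version     int       `json:"version"`
	CreatedAt   time.Time `json:"created_at"`
	ParentVer   int       `json:"parent_version,omitempty"` // 0 = trained from scratch
	Epochs      int       `json:"epochs"`
	TrainLoss   float64   `json:"train_loss,omitempty"`
	ValAccuracy float64   `json:"val_accuracy,omitempty"`
	Description string    `json:"description,omitempty"`
}

// marshalVersion serializes one manifest entry; zero-valued omitempty
// fields (parent_version, train_loss, description) disappear from the JSON.
func marshalVersion(v ModelVersion) string {
	b, err := json.Marshal(v)
	if err != nil {
		panic(err)
	}
	return string(b)
}

func main() {
	fmt.Println(marshalVersion(ModelVersion{Version: 1, Epochs: 50, ValAccuracy: 0.97}))
}
```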

type ModelWeights added in v1.0.14

type ModelWeights struct {
	Version     string            `json:"version"`
	Encoder     []SerializedParam `json:"encoder"`
	Classifier  []SerializedParam `json:"classifier"`
	DetectorNet []SerializedParam `json:"detector_net"`
	TypeNet     []SerializedParam `json:"type_net"`
	ThreatNet   []SerializedParam `json:"threat_net"`
	ActionNet   []SerializedParam `json:"action_net"`
	Metrics     []TrainingMetrics `json:"metrics,omitempty"`
}

ModelWeights holds serialized model weights.

type NeuralTrainer added in v1.0.14

type NeuralTrainer struct {
	Pipeline *ModelPipeline
	Config   *NeuralTrainerConfig
	Metrics  []TrainingMetrics
}

NeuralTrainer is the neural network training pipeline.

func NewNeuralTrainer added in v1.0.14

func NewNeuralTrainer(pipeline *ModelPipeline, config *NeuralTrainerConfig) *NeuralTrainer

NewNeuralTrainer creates a new neural network training pipeline.

func (*NeuralTrainer) Evolve added in v1.0.15

func (t *NeuralTrainer) Evolve(registry *profiles.ProfileRegistry, config *EvolveConfig) (*TrainingMetrics, error)

Evolve performs incremental fine-tuning on the pipeline's existing weights. Unlike TrainFromProfiles (full training from scratch), Evolve:

  • Starts from current weights (not random initialization)
  • Uses a lower learning rate for stability
  • Runs fewer epochs
  • Preserves previously learned patterns while adapting to new data

This implements a continuously evolving model: the model library is saved and only fine-tuned, never retrained from scratch unless explicitly requested.

func (*NeuralTrainer) EvolveAndSave added in v1.0.15

func (t *NeuralTrainer) EvolveAndSave(
	registry *profiles.ProfileRegistry,
	store *ModelStore,
	evolveConfig *EvolveConfig,
) (int, *TrainingMetrics, error)

EvolveAndSave is a convenience method that performs the full evolution cycle:

 1. Fine-tune existing weights with new/updated profiles
 2. Save the evolved model as a new version in the store

Returns the new model version number and training metrics.

func (*NeuralTrainer) TrainAndSave added in v1.0.15

func (t *NeuralTrainer) TrainAndSave(
	registry *profiles.ProfileRegistry,
	store *ModelStore,
) (int, error)

TrainAndSave performs full training from scratch and saves to the store. Use this only for initial training; prefer EvolveAndSave for subsequent updates.

func (*NeuralTrainer) TrainFromProfiles added in v1.0.14

func (t *NeuralTrainer) TrainFromProfiles(registry *profiles.ProfileRegistry) error

TrainFromProfiles trains all models from browser profiles. Main training entry: load 207 profiles → data augmentation → multi-phase training.

type NeuralTrainerConfig added in v1.0.14

type NeuralTrainerConfig struct {
	Epochs          int     // number of training epochs
	BatchSize       int     // mini-batch size
	LearningRate    float64 // Adam learning rate
	AugmentNoise    float64 // data augmentation noise stddev
	TripletMargin   float64 // triplet loss margin
	ForgeryRatio    float64 // ratio of forged to real samples
	ValidationSplit float64 // validation set ratio
}

NeuralTrainerConfig holds neural network training configuration.

type OnlineClassifier added in v1.0.12

type OnlineClassifier struct {
	// contains filtered or unexported fields
}

OnlineClassifier wraps a SimpleClassifier and supports incremental (online) learning.

Instead of retraining from scratch, it updates class centroids incrementally as new samples arrive. This is essential for adapting to concept drift in real-time fingerprint classification where browser versions and fingerprint patterns evolve continuously.

func NewOnlineClassifier added in v1.0.12

func NewOnlineClassifier(featureCount int) *OnlineClassifier

NewOnlineClassifier creates a new online learning classifier. featureCount is the dimensionality of the feature vector.

func NewOnlineClassifierFrom added in v1.0.12

func NewOnlineClassifierFrom(base *SimpleClassifier, initialCounts map[string]int) *OnlineClassifier

NewOnlineClassifierFrom wraps an existing trained SimpleClassifier for online updates. initialCounts maps each class label to the number of training samples used to build its centroid; this is required so incremental updates are weighted correctly.

func (*OnlineClassifier) Accuracy added in v1.0.12

func (oc *OnlineClassifier) Accuracy() float64

Accuracy returns the recent prediction accuracy in [0,1].

func (*OnlineClassifier) ClassCount added in v1.0.12

func (oc *OnlineClassifier) ClassCount() int

ClassCount returns the number of distinct classes known to the classifier.

func (*OnlineClassifier) DriftDetected added in v1.0.12

func (oc *OnlineClassifier) DriftDetected() bool

DriftDetected returns true if the recent accuracy has dropped below the drift threshold.

func (*OnlineClassifier) PartialFit added in v1.0.12

func (oc *OnlineClassifier) PartialFit(features []float64, label string)

PartialFit incrementally updates the classifier with a single new sample.

For an existing class, the centroid is updated using the online mean formula:

new_centroid = old_centroid + (x - old_centroid) / (n + 1)

For a new (unseen) class, a new centroid is created from the sample.
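The online mean update above can be exercised directly. A minimal sketch, applying the documented formula to a centroid built from n prior samples:

```go
package main

import "fmt"

// onlineMean applies the centroid update from the doc comment:
//   new_centroid = old_centroid + (x - old_centroid) / (n + 1)
// where n is the number of samples already folded into the centroid.
func onlineMean(centroid, x []float64, n int) {
	for i := range centroid {
		centroid[i] += (x[i] - centroid[i]) / float64(n+1)
	}
}

func main() {
	c := []float64{2, 2} // centroid built from n = 1 sample
	onlineMean(c, []float64{4, 6}, 1)
	fmt.Println(c) // [3 4], the exact mean of (2,2) and (4,6)
}
```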

func (*OnlineClassifier) PartialFitBatch added in v1.0.12

func (oc *OnlineClassifier) PartialFitBatch(features [][]float64, labels []string)

PartialFitBatch updates the classifier with a batch of samples.

func (*OnlineClassifier) Predict added in v1.0.12

func (oc *OnlineClassifier) Predict(features []float64) (string, float64)

Predict returns the predicted label and confidence.

func (*OnlineClassifier) PredictTopK added in v1.0.12

func (oc *OnlineClassifier) PredictTopK(features []float64, k int) []Prediction

PredictTopK returns top-K predictions.

func (*OnlineClassifier) RecordOutcome added in v1.0.12

func (oc *OnlineClassifier) RecordOutcome(correct bool)

RecordOutcome records whether a prediction was correct for drift detection.

func (*OnlineClassifier) SampleCount added in v1.0.12

func (oc *OnlineClassifier) SampleCount() int

SampleCount returns the total number of samples that have been used for training.

func (*OnlineClassifier) WeightedPartialFit added in v1.0.12

func (oc *OnlineClassifier) WeightedPartialFit(features []float64, label string, weight float64)

WeightedPartialFit performs an importance-weighted partial fit. Samples with higher weight shift the centroid more aggressively. weight should be in (0, 1]; values > 1 are clamped.

type OnlineHierarchicalClassifier added in v1.0.12

type OnlineHierarchicalClassifier struct {
	Protocol *OnlineClassifier
	Family   map[string]*OnlineClassifier // keyed by protocol type
	Version  map[string]*OnlineClassifier // keyed by browser family
	// contains filtered or unexported fields
}

OnlineHierarchicalClassifier wraps three OnlineClassifiers for the three-layer hierarchical classification pipeline: Protocol → Family → Version.

func NewOnlineHierarchicalClassifier added in v1.0.12

func NewOnlineHierarchicalClassifier() *OnlineHierarchicalClassifier

NewOnlineHierarchicalClassifier creates a three-layer online classifier.

func (*OnlineHierarchicalClassifier) Classify added in v1.0.12

func (ohc *OnlineHierarchicalClassifier) Classify(features []float64) *ClassificationResult

Classify performs three-layer classification with the online-updated models.

func (*OnlineHierarchicalClassifier) Stats added in v1.0.12

Stats returns per-layer statistics.

func (*OnlineHierarchicalClassifier) UpdateFamily added in v1.0.12

func (ohc *OnlineHierarchicalClassifier) UpdateFamily(protocol string, features []float64, label string)

UpdateFamily incrementally trains the family layer for a given protocol.

func (*OnlineHierarchicalClassifier) UpdateProtocol added in v1.0.12

func (ohc *OnlineHierarchicalClassifier) UpdateProtocol(features []float64, label string)

UpdateProtocol incrementally trains the protocol layer.

func (*OnlineHierarchicalClassifier) UpdateVersion added in v1.0.12

func (ohc *OnlineHierarchicalClassifier) UpdateVersion(family string, features []float64, label string)

UpdateVersion incrementally trains the version layer for a given browser family.

type OnlineHierarchicalStats added in v1.0.12

type OnlineHierarchicalStats struct {
	Protocol OnlineLayerStats            `json:"protocol"`
	Family   map[string]OnlineLayerStats `json:"family"`
	Version  map[string]OnlineLayerStats `json:"version"`
}

OnlineHierarchicalStats contains per-layer statistics.

type OnlineLayerStats added in v1.0.12

type OnlineLayerStats struct {
	Classes int `json:"classes"`
	Samples int `json:"samples"`
}

OnlineLayerStats contains statistics for a single classifier layer.

type OnlineLearner added in v1.0.17

type OnlineLearner struct {
	// contains filtered or unexported fields
}

OnlineLearner manages online learning from real-world feedback.

func NewOnlineLearner added in v1.0.17

func NewOnlineLearner(config *OnlineLearnerConfig) *OnlineLearner

NewOnlineLearner creates a new online learning system.

func (*OnlineLearner) AddSample added in v1.0.17

func (l *OnlineLearner) AddSample(sample *FeedbackSample)

AddSample adds a feedback sample to the learning buffer.

func (*OnlineLearner) DriftDetected added in v1.0.17

func (l *OnlineLearner) DriftDetected() bool

DriftDetected returns whether accuracy drift has been detected.

func (*OnlineLearner) RecentSamples added in v1.0.17

func (l *OnlineLearner) RecentSamples(n int) []FeedbackSample

RecentSamples returns the most recent n samples from the buffer.

func (*OnlineLearner) Stats added in v1.0.17

func (l *OnlineLearner) Stats() *OnlineLearnerStats

Stats returns current learner statistics.

type OnlineLearnerConfig added in v1.0.17

type OnlineLearnerConfig struct {
	// BufferSize is the max number of feedback samples retained.
	BufferSize int

	// DriftThreshold is the accuracy drop that triggers evolution.
	// E.g., 0.05 means 5% drop from peak accuracy triggers retraining.
	DriftThreshold float64

	// DriftWindowSize is how many recent samples to use for drift detection.
	DriftWindowSize int

	// MinSamplesForDrift is the minimum samples before drift can be detected.
	MinSamplesForDrift int
}

OnlineLearnerConfig configures the online learning system.
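The drift rule these fields describe: once MinSamplesForDrift samples have accumulated, drift fires when recent accuracy falls more than DriftThreshold below the peak. A sketch of that check, inferred from the field comments rather than taken from the library:

```go
package main

import "fmt"

// driftDetected reports accuracy drift: recent accuracy has dropped more
// than threshold below the best accuracy seen so far, and enough samples
// have accumulated for the comparison to be meaningful.
// (Illustrative logic inferred from the config field comments.)
func driftDetected(peak, recent float64, samples, minSamples int, threshold float64) bool {
	if samples < minSamples {
		return false
	}
	return peak-recent > threshold
}

func main() {
	// A 7% drop from peak exceeds the 5% threshold once 500+ samples exist.
	fmt.Println(driftDetected(0.95, 0.88, 600, 500, 0.05))
}
```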

type OnlineLearnerStats added in v1.0.17

type OnlineLearnerStats struct {
	TotalSamples    int
	BufferFilled    int
	PeakAccuracy    float64
	RecentAccuracy  float64
	DriftDetected   bool
	DriftEventCount int
	LastDriftTime   time.Time
}

OnlineLearnerStats holds online learner statistics.

type Param added in v1.0.14

type Param struct {
	Value *Tensor // parameter values
	Grad  *Tensor // parameter gradients
}

Param represents a trainable parameter and its gradient.

type PipelineResult added in v1.0.14

type PipelineResult struct {
	// Fingerprint embedding (32-dim)
	Embedding []float64

	// Browser identification result
	Browser BrowserPrediction

	// Forgery detection result
	Forgery ForgeryResult

	// Threat assessment result
	Threat ThreatPrediction

	// Raw features (for debugging/explainability)
	RawFeatures   []float64
	CrossFeatures []float64
}

PipelineResult aggregates inference results from all models.

type Prediction

type Prediction struct {
	Label      string
	Confidence float64
}

Prediction is a single prediction result.

type PretrainedModel

type PretrainedModel struct {
	Name            string                          `json:"name"`
	Version         string                          `json:"version"`
	Description     string                          `json:"description"`
	ProtocolCenters map[string][]float64            `json:"protocol_centers"`
	FamilyCenters   map[string]map[string][]float64 `json:"family_centers"`  // protocol -> family -> center
	VersionCenters  map[string]map[string][]float64 `json:"version_centers"` // family -> version -> center
	FeatureWeights  []float64                       `json:"feature_weights"`
}

PretrainedModel holds a pretrained model's class centers and feature weights.

func ExportBuiltinModel

func ExportBuiltinModel() *PretrainedModel

ExportBuiltinModel exports the builtin pretrained model.

func (*PretrainedModel) ToClassifier

func (pm *PretrainedModel) ToClassifier() *HierarchicalClassifier

ToClassifier converts the pretrained model into a HierarchicalClassifier.

type ProfileEvolutionEngine added in v1.0.17

type ProfileEvolutionEngine struct {
	// contains filtered or unexported fields
}

ProfileEvolutionEngine monitors profile health and triggers evolution.

func NewProfileEvolutionEngine added in v1.0.17

func NewProfileEvolutionEngine(config *EvolutionConfig) *ProfileEvolutionEngine

NewProfileEvolutionEngine creates a new evolution engine.

func (*ProfileEvolutionEngine) CheckHealth added in v1.0.17

CheckHealth evaluates the overall profile ecosystem health.

func (*ProfileEvolutionEngine) Events added in v1.0.17

func (e *ProfileEvolutionEngine) Events() []EvolutionEvent

Events returns the evolution event history.

func (*ProfileEvolutionEngine) InitFromRegistry added in v1.0.17

func (e *ProfileEvolutionEngine) InitFromRegistry(registry *profiles.ProfileRegistry)

InitFromRegistry initializes profile stats from the current registry.

func (*ProfileEvolutionEngine) ProfileCount added in v1.0.17

func (e *ProfileEvolutionEngine) ProfileCount() int

ProfileCount returns the number of tracked profiles.

func (*ProfileEvolutionEngine) RecordEvent added in v1.0.17

func (e *ProfileEvolutionEngine) RecordEvent(event EvolutionEvent)

RecordEvent appends an evolution event to the history log.

func (*ProfileEvolutionEngine) RecordObservation added in v1.0.17

func (e *ProfileEvolutionEngine) RecordObservation(profileID string, browserType string, result *PipelineResult)

RecordObservation records a fingerprint observation from real traffic.

func (*ProfileEvolutionEngine) SnapshotDistribution added in v1.0.17

func (e *ProfileEvolutionEngine) SnapshotDistribution()

SnapshotDistribution saves the current distribution as reference for future drift detection.

func (*ProfileEvolutionEngine) TopForgeryProfiles added in v1.0.17

func (e *ProfileEvolutionEngine) TopForgeryProfiles(n int) []ProfileStat

TopForgeryProfiles returns profiles with the highest forgery rates.

func (*ProfileEvolutionEngine) TopStaleProfiles added in v1.0.17

func (e *ProfileEvolutionEngine) TopStaleProfiles(n int) []ProfileStat

TopStaleProfiles returns up to n stale profiles sorted by staleness.

type ProfileStat added in v1.0.17

type ProfileStat struct {
	ProfileID     string
	BrowserType   string
	LastSeen      time.Time
	FirstSeen     time.Time
	HitCount      int64
	ForgeryRate   float64 // rolling average of forgery probability
	AvgConfidence float64 // rolling average of classification confidence
	IsStale       bool
}

ProfileStat tracks the health of a single profile.

type ProtocolClassifier

type ProtocolClassifier struct {
	// contains filtered or unexported fields
}

ProtocolClassifier is the protocol-type classifier (layer 1).

func NewProtocolClassifier

func NewProtocolClassifier() *ProtocolClassifier

NewProtocolClassifier creates a new protocol classifier.

func (*ProtocolClassifier) Predict

func (pc *ProtocolClassifier) Predict(features []float64) (core.ProtocolType, float64)

Predict predicts the protocol type and returns it with a confidence score.

func (*ProtocolClassifier) Train

func (pc *ProtocolClassifier) Train(features [][]float64, labels []core.ProtocolType) error

Train trains the protocol classifier.

type ReLULayer added in v1.0.14

type ReLULayer struct {
	// contains filtered or unexported fields
}

ReLULayer implements ReLU activation: max(0, x)

func NewReLULayer added in v1.0.14

func NewReLULayer() *ReLULayer

func (*ReLULayer) Backward added in v1.0.14

func (l *ReLULayer) Backward(gradOutput *Tensor) *Tensor

func (*ReLULayer) Forward added in v1.0.14

func (l *ReLULayer) Forward(input *Tensor) *Tensor

func (*ReLULayer) Params added in v1.0.14

func (l *ReLULayer) Params() []*Param

func (*ReLULayer) SetTraining added in v1.0.14

func (l *ReLULayer) SetTraining(training bool)

type RustClassifierLayer

type RustClassifierLayer struct {
	Centroids    map[string][]float64 `json:"centroids"` // label -> center
	Weights      []float64            `json:"weights"`
	FeatureNames []string             `json:"feature_names"`
}

RustClassifierLayer is a classifier layer in the Rust model format.

type RustDataset

type RustDataset struct {
	Name         string            `json:"name"`
	Version      string            `json:"version"`
	Description  string            `json:"description"`
	Fingerprints []RustFingerprint `json:"fingerprints"`
}

RustDataset is a dataset in the Rust export format.

type RustFingerprint

type RustFingerprint struct {
	ID       string             `json:"id"`
	Browser  string             `json:"browser"`
	Version  string             `json:"version"`
	OS       string             `json:"os"`
	TLS      RustTLSData        `json:"tls"`
	HTTP2    RustHTTP2Data      `json:"http2"`
	Headers  map[string]string  `json:"headers"`
	Features map[string]float64 `json:"features"`
}

RustFingerprint is fingerprint data in the Rust export format.

type RustHTTP2Data

type RustHTTP2Data struct {
	Settings          map[string]uint32 `json:"settings"`
	Priorities        []RustPriority    `json:"priorities"`
	PseudoHeaderOrder []string          `json:"pseudo_header_order"`
}

RustHTTP2Data holds HTTP/2 data in the Rust export format.

type RustImporter

type RustImporter struct {
	// contains filtered or unexported fields
}

RustImporter imports datasets and models exported by the Rust implementation.

func NewRustImporter

func NewRustImporter(dataPath string) *RustImporter

NewRustImporter creates a new Rust data importer.

func (*RustImporter) ImportDataset

func (ri *RustImporter) ImportDataset(filename string) (*Dataset, error)

ImportDataset imports a Rust-format dataset.

func (*RustImporter) ImportModel

func (ri *RustImporter) ImportModel(filename string) (*PretrainedModel, error)

ImportModel imports a Rust-format model.

func (*RustImporter) ImportWithValidation

func (ri *RustImporter) ImportWithValidation(filename string) (*Dataset, error)

ImportWithValidation imports a dataset and validates it.

type RustModel

type RustModel struct {
	Name          string                         `json:"name"`
	Version       string                         `json:"version"`
	ProtocolLayer RustClassifierLayer            `json:"protocol_layer"`
	FamilyLayers  map[string]RustClassifierLayer `json:"family_layers"`  // protocol -> layer
	VersionLayers map[string]RustClassifierLayer `json:"version_layers"` // family -> layer
}

RustModel is a model in the Rust export format.

type RustPriority

type RustPriority struct {
	StreamID  uint32 `json:"stream_id"`
	Weight    uint8  `json:"weight"`
	DependsOn uint32 `json:"depends_on"`
	Exclusive bool   `json:"exclusive"`
}

RustPriority is an HTTP/2 priority entry in the Rust export format.

type RustTLSData

type RustTLSData struct {
	Version         uint16   `json:"version"`
	CipherSuites    []uint16 `json:"cipher_suites"`
	Extensions      []uint16 `json:"extensions"`
	SupportedCurves []uint16 `json:"supported_curves"`
	PointFormats    []uint8  `json:"point_formats"`
}

RustTLSData holds TLS data in the Rust export format.

type Sequential added in v1.0.14

type Sequential struct {
	Layers []Layer
}

Sequential chains multiple layers into an end-to-end model.

func NewSequential added in v1.0.14

func NewSequential(layers ...Layer) *Sequential

NewSequential creates a sequential model.

func (*Sequential) Backward added in v1.0.14

func (m *Sequential) Backward(gradOutput *Tensor) *Tensor

Backward executes all layers in reverse order.

func (*Sequential) Forward added in v1.0.14

func (m *Sequential) Forward(input *Tensor) *Tensor

Forward executes all layers in order.

func (*Sequential) Params added in v1.0.14

func (m *Sequential) Params() []*Param

Params collects trainable parameters from all layers.

func (*Sequential) SetTraining added in v1.0.14

func (m *Sequential) SetTraining(training bool)

SetTraining sets training/inference mode for all layers.

func (*Sequential) ZeroGrad added in v1.0.14

func (m *Sequential) ZeroGrad()

ZeroGrad zeroes all parameter gradients.

type SerializedParam added in v1.0.14

type SerializedParam struct {
	Shape []int     `json:"shape"`
	Data  []float64 `json:"data"`
}

SerializedParam holds a serialized parameter.

type ServerFingerprintData

type ServerFingerprintData struct {
	ClientHello core.ClientHelloSpec
	Headers     *core.HTTPHeaders
}

ServerFingerprintData holds server-side fingerprint data.

type ServiceConfig added in v1.0.17

type ServiceConfig struct {
	// ModelStorePath is the directory for persisted model snapshots.
	ModelStorePath string

	// MaxStoreVersions is the maximum number of model versions retained.
	MaxStoreVersions int

	// AutoLoadLatest loads the latest model snapshot on startup.
	AutoLoadLatest bool

	// FeedbackBufferSize is the capacity of the asynchronous feedback buffer.
	FeedbackBufferSize int

	// EvolveInterval is how often the service checks for evolution triggers.
	EvolveInterval time.Duration

	// DriftThreshold is the accuracy drop that triggers automatic evolution.
	DriftThreshold float64

	// ValidationForgeryThreshold: generated fingerprints above this forgery
	// probability are rejected.
	ValidationForgeryThreshold float64

	// ValidationConsistencyMin: generated fingerprints below this cross-layer
	// consistency score are rejected.
	ValidationConsistencyMin float64

	// InferenceBackend selects inference implementation: "native" or "onnx".
	InferenceBackend string

	// ONNXModelDir points to the directory containing ONNX artifacts.
	ONNXModelDir string

	// ONNXPythonBin is the Python executable used by ONNX backend.
	ONNXPythonBin string

	// ONNXPythonScript is the script path for ONNX inference.
	ONNXPythonScript string

	// ONNXTimeout bounds a single ONNX inference subprocess call.
	ONNXTimeout time.Duration

	// ShadowCompareEnabled runs secondary backend inference for sampled requests.
	ShadowCompareEnabled bool

	// ShadowSampleRate controls the percentage of inference calls sampled for comparison [0,1].
	ShadowSampleRate float64

	// ShadowMetricsPath stores JSONL parity metrics for offline analysis.
	ShadowMetricsPath string

	// CanaryEnabled enables gradual traffic switch to a canary inference backend.
	CanaryEnabled bool

	// CanaryRate controls canary routing ratio [0,1], e.g. 0.05 for 5%.
	CanaryRate float64

	// CanaryBackend sets canary backend name: "onnx" or "native".
	CanaryBackend string
}

ServiceConfig configures the central ML service.

type ServiceStats added in v1.0.17

type ServiceStats struct {
	InferCount    int64
	FeedbackCount int64
	EvolveCount   int64
	ModelReady    bool
	ModelVersions int
	InferenceMode string
	Canary        *CanaryStats
	Shadow        *ShadowCompareStats
	LearnerStats  *OnlineLearnerStats
}

ServiceStats holds ML service statistics.

type ShadowCompareStats added in v1.0.27

type ShadowCompareStats struct {
	SampledCount         int64   `json:"sampledCount"`
	ErrorCount           int64   `json:"errorCount"`
	BrowserTop1AgreeRate float64 `json:"browserTop1AgreeRate"`
	ActionTop1AgreeRate  float64 `json:"actionTop1AgreeRate"`
	AvgForgeryProbDelta  float64 `json:"avgForgeryProbDelta"`
	AvgThreatProbDelta   float64 `json:"avgThreatProbDelta"`
	LastError            string  `json:"lastError,omitempty"`
}

ShadowCompareStats tracks prediction parity between primary and shadow backends.

type SigmoidLayer added in v1.0.14

type SigmoidLayer struct {
	// contains filtered or unexported fields
}

SigmoidLayer implements element-wise sigmoid activation.

func NewSigmoidLayer added in v1.0.14

func NewSigmoidLayer() *SigmoidLayer

func (*SigmoidLayer) Backward added in v1.0.14

func (l *SigmoidLayer) Backward(gradOutput *Tensor) *Tensor

func (*SigmoidLayer) Forward added in v1.0.14

func (l *SigmoidLayer) Forward(input *Tensor) *Tensor

func (*SigmoidLayer) Params added in v1.0.14

func (l *SigmoidLayer) Params() []*Param

func (*SigmoidLayer) SetTraining added in v1.0.14

func (l *SigmoidLayer) SetTraining(training bool)

type SimpleClassifier

type SimpleClassifier struct {
	// contains filtered or unexported fields
}

SimpleClassifier is a simple distance-based classifier.

func NewSimpleClassifier

func NewSimpleClassifier(featureCount int) *SimpleClassifier

NewSimpleClassifier creates a new simple classifier.

func (*SimpleClassifier) Predict

func (c *SimpleClassifier) Predict(features []float64) (string, float64)

Predict predicts the label and returns it with a confidence score.

func (*SimpleClassifier) PredictTopK

func (c *SimpleClassifier) PredictTopK(features []float64, k int) []Prediction

PredictTopK returns the top-K prediction results.

func (*SimpleClassifier) Train

func (c *SimpleClassifier) Train(features [][]float64, labels []string) error

Train trains the classifier.

type SoftmaxLayer added in v1.0.14

type SoftmaxLayer struct {
	// contains filtered or unexported fields
}

SoftmaxLayer implements row-wise softmax activation.

func NewSoftmaxLayer added in v1.0.14

func NewSoftmaxLayer() *SoftmaxLayer

func (*SoftmaxLayer) Backward added in v1.0.14

func (l *SoftmaxLayer) Backward(gradOutput *Tensor) *Tensor

func (*SoftmaxLayer) Forward added in v1.0.14

func (l *SoftmaxLayer) Forward(input *Tensor) *Tensor

func (*SoftmaxLayer) Params added in v1.0.14

func (l *SoftmaxLayer) Params() []*Param

func (*SoftmaxLayer) SetTraining added in v1.0.14

func (l *SoftmaxLayer) SetTraining(training bool)

type StorageData

type StorageData struct {
	LocalStorageSize   int
	SessionStorageSize int
}

StorageData holds storage fingerprint data.

type StoreConfig added in v1.0.15

type StoreConfig struct {
	BaseDir     string // root directory for model storage
	MaxVersions int    // maximum number of versions to keep (0 = unlimited)
}

StoreConfig holds model store configuration.

func DefaultStoreConfig added in v1.0.15

func DefaultStoreConfig(baseDir string) *StoreConfig

DefaultStoreConfig returns sensible defaults.

type TLSValidationResult added in v1.0.17

type TLSValidationResult struct {
	// Valid indicates whether the TLS configuration looks realistic.
	Valid bool

	// DetectedBrowser is the browser the ML model thinks this TLS config belongs to.
	DetectedBrowser string

	// Confidence is the classification confidence [0,1].
	Confidence float64

	// ForgeryProb is the probability this is a forged TLS fingerprint [0,1].
	ForgeryProb float64

	// ConsistencyScore is how consistent the TLS parameters are [0,1].
	ConsistencyScore float64

	// CipherSuiteScore rates the cipher suite selection quality [0,1].
	CipherSuiteScore float64

	// ExtensionScore rates the extension configuration quality [0,1].
	ExtensionScore float64

	// Issues lists specific problems found.
	Issues []string
}

TLSValidationResult contains the ML validation of a TLS configuration.

type TLSValidator added in v1.0.17

type TLSValidator struct {
	// contains filtered or unexported fields
}

TLSValidator validates TLS configurations using ML models.

func NewTLSValidator added in v1.0.17

func NewTLSValidator(pipeline *ModelPipeline) *TLSValidator

NewTLSValidator creates a new TLS validator backed by the given pipeline.

func (*TLSValidator) ValidateCipherSuites added in v1.0.17

func (v *TLSValidator) ValidateCipherSuites(cipherSuites []uint16, claimedBrowser string) *TLSValidationResult

ValidateCipherSuites evaluates whether a set of cipher suites is consistent with a claimed browser.

func (*TLSValidator) ValidateProfile added in v1.0.17

func (v *TLSValidator) ValidateProfile(profile *profiles.ClientProfile) *TLSValidationResult

ValidateProfile validates the TLS aspects of a full client profile.

type Tensor added in v1.0.14

type Tensor struct {
	Data  []float64
	Shape []int
}

Tensor represents a multi-dimensional float64 tensor in row-major layout. Shape holds the size of each dimension; Data is the flattened 1D array.

func BinaryCrossEntropyLoss added in v1.0.14

func BinaryCrossEntropyLoss(preds *Tensor, targets []float64) (float64, *Tensor)

BinaryCrossEntropyLoss computes binary cross-entropy loss and gradient. preds: sigmoid output [batch × 1]; targets: target values [batch] (0.0 or 1.0).

func CrossEntropyLoss added in v1.0.14

func CrossEntropyLoss(probs *Tensor, targets []int) (float64, *Tensor)

CrossEntropyLoss computes cross-entropy loss and gradient. probs: softmax output [batch × classes]; targets: target class indices [batch] (int array). Returns: loss value, gradient w.r.t. logits [batch × classes].

func FromSlice added in v1.0.14

func FromSlice(data []float64) *Tensor

FromSlice creates a [1×n] row vector tensor from a float64 slice.

func MSELoss added in v1.0.14

func MSELoss(preds, targets *Tensor) (float64, *Tensor)

MSELoss computes mean squared error loss and gradient.

func NewTensor added in v1.0.14

func NewTensor(shape []int, data []float64) *Tensor

NewTensor creates a tensor from the given shape and data.

func Ones added in v1.0.14

func Ones(dims ...int) *Tensor

Ones creates a tensor filled with ones.

func RandN added in v1.0.14

func RandN(dims ...int) *Tensor

RandN creates a tensor with standard normal random values.

func RandNScaled added in v1.0.14

func RandNScaled(scale float64, dims ...int) *Tensor

RandNScaled creates a scaled normal random tensor (for He/Xavier initialization).

func Zeros added in v1.0.14

func Zeros(dims ...int) *Tensor

Zeros creates a zero-filled tensor.

func (*Tensor) Add added in v1.0.14

func (t *Tensor) Add(other *Tensor) *Tensor

Add performs element-wise addition: C = A + B (same shape or B broadcastable to A).

func (*Tensor) Argmax added in v1.0.14

func (t *Tensor) Argmax() int

Argmax returns the index of the maximum element.

func (*Tensor) At added in v1.0.14

func (t *Tensor) At(i, j int) float64

At returns the value at [i,j] in a 2D tensor.

func (*Tensor) Clamp added in v1.0.14

func (t *Tensor) Clamp(lo, hi float64) *Tensor

Clamp restricts all elements to the range [lo, hi].

func (*Tensor) Clone added in v1.0.14

func (t *Tensor) Clone() *Tensor

Clone returns a deep copy of the tensor.

func (*Tensor) Cols added in v1.0.14

func (t *Tensor) Cols() int

Cols returns the size of dimension 1 (column count) for 2D tensors.

func (*Tensor) L2Norm added in v1.0.14

func (t *Tensor) L2Norm() float64

L2Norm returns the L2 norm.

func (*Tensor) MatMul added in v1.0.14

func (t *Tensor) MatMul(other *Tensor) *Tensor

MatMul performs matrix multiplication, delegated to DefaultDevice.

func (*Tensor) Max added in v1.0.14

func (t *Tensor) Max() float64

Max returns the maximum element value.

func (*Tensor) Mean added in v1.0.14

func (t *Tensor) Mean() float64

Mean returns the mean of all elements.

func (*Tensor) MulElem added in v1.0.14

func (t *Tensor) MulElem(other *Tensor) *Tensor

MulElem performs element-wise (Hadamard) multiplication.

func (*Tensor) MulScalar added in v1.0.14

func (t *Tensor) MulScalar(s float64) *Tensor

MulScalar performs scalar multiplication.

func (*Tensor) Normalize added in v1.0.14

func (t *Tensor) Normalize() *Tensor

Normalize returns an L2-normalized copy (unit vector).

func (*Tensor) ReluApply added in v1.0.14

func (t *Tensor) ReluApply() *Tensor

ReluApply applies element-wise ReLU.

func (*Tensor) Row added in v1.0.14

func (t *Tensor) Row(i int) []float64

Row returns a slice view of row i (shares underlying data).

func (*Tensor) Rows added in v1.0.14

func (t *Tensor) Rows() int

Rows returns the size of dimension 0 (row count).

func (*Tensor) Set added in v1.0.14

func (t *Tensor) Set(i, j int, v float64)

Set sets the value at [i,j] in a 2D tensor.

func (*Tensor) SigmoidApply added in v1.0.14

func (t *Tensor) SigmoidApply() *Tensor

SigmoidApply applies element-wise sigmoid.

func (*Tensor) Size added in v1.0.14

func (t *Tensor) Size() int

Size returns the total number of elements.

func (*Tensor) SoftmaxRow added in v1.0.14

func (t *Tensor) SoftmaxRow() *Tensor

SoftmaxRow applies softmax to each row of a 2D tensor.

func (*Tensor) Sub added in v1.0.14

func (t *Tensor) Sub(other *Tensor) *Tensor

Sub performs element-wise subtraction.

func (*Tensor) Sum added in v1.0.14

func (t *Tensor) Sum() float64

Sum returns the sum of all elements.

func (*Tensor) ToSlice added in v1.0.14

func (t *Tensor) ToSlice() []float64

ToSlice returns a flattened copy of the tensor data.

func (*Tensor) Transpose added in v1.0.14

func (t *Tensor) Transpose() *Tensor

Transpose returns the transpose of a 2D tensor.

type ThreatAssessor added in v1.0.14

type ThreatAssessor struct {
	ThreatNet *Sequential // threat classification network
	ActionNet *Sequential // action recommendation network
}

ThreatAssessor is the threat assessment model.

func NewThreatAssessor added in v1.0.14

func NewThreatAssessor() *ThreatAssessor

NewThreatAssessor creates a threat assessor. Input: embedding(32) + forgery output(5: prob+4 type probs) + behavior(8) = 45 Threat: Input(45) → Dense(128) → BN → ReLU → Dropout(0.2)

→ Dense(64) → BN → ReLU → Dense(32) → ReLU → Dense(6) → Softmax

Action: Input(45) → Dense(128) → BN → ReLU → Dropout(0.2)

→ Dense(64) → BN → ReLU → Dense(32) → ReLU → Dense(5) → Softmax

func (*ThreatAssessor) Assess added in v1.0.14

func (ta *ThreatAssessor) Assess(input *Tensor) []ThreatPrediction

Assess evaluates threat level and recommends actions. input: [batch × 45] (embedding + forgery output + behavior features)

func (*ThreatAssessor) AssessSingle added in v1.0.14

func (ta *ThreatAssessor) AssessSingle(embedding []float64, forgeryResult *ForgeryResult, behavior []float64) ThreatPrediction

AssessSingle assesses a single sample (convenience method).

type ThreatClass added in v1.0.14

type ThreatClass int

ThreatClass represents the threat classification.

const (
	ThreatNone              ThreatClass = iota // no threat
	ThreatBot                                  // bot / crawler
	ThreatFingerprintSpoof                     // fingerprint forgery
	ThreatSessionAnomaly                       // session anomaly
	ThreatBehavioralAnomaly                    // behavioral anomaly
	ThreatEvasion                              // evasion behavior
)

func (ThreatClass) String added in v1.0.14

func (tc ThreatClass) String() string

String returns the string name of the threat class.

type ThreatPrediction added in v1.0.14

type ThreatPrediction struct {
	ThreatClass      ThreatClass // predicted threat class
	ThreatProb       float64     // probability of threat (1 - P(none))
	Action           ActionClass // recommended security action
	ActionConfidence float64     // action confidence
	ClassProbs       []float64   // per-threat-class probability distribution
	ActionProbs      []float64   // per-action probability distribution
}

ThreatPrediction holds threat assessment results.

type TimingData

type TimingData struct {
	Precision float64
}

TimingData holds timing fingerprint data.

type Trainer

type Trainer struct {
	// contains filtered or unexported fields
}

Trainer trains the hierarchical classifier.

func NewTrainer

func NewTrainer(classifier *HierarchicalClassifier) *Trainer

NewTrainer creates a new trainer.

func (*Trainer) ExportModel

func (t *Trainer) ExportModel(name, version string) *PretrainedModel

ExportModel exports the trained model.

func (*Trainer) LoadDataset

func (t *Trainer) LoadDataset(dataset *Dataset)

LoadDataset loads a dataset into the trainer.

func (*Trainer) Train

func (t *Trainer) Train() error

Train trains the model.

func (*Trainer) TrainWithProgress

func (t *Trainer) TrainWithProgress(progress func(epoch, total int, loss float64)) error

TrainWithProgress trains the model, invoking a progress callback after each epoch.

type TrainingData

type TrainingData struct {
	ProtocolFeatures [][]float64
	ProtocolLabels   []core.ProtocolType

	FamilyFeatures map[core.ProtocolType][][]float64
	FamilyLabels   map[core.ProtocolType][]core.BrowserType

	VersionFeatures map[core.BrowserType][][]float64
	VersionLabels   map[core.BrowserType][]string
}

TrainingData holds training data grouped by classifier layer.

type TrainingLabel

type TrainingLabel struct {
	Protocol core.ProtocolType `json:"protocol"`
	Family   core.BrowserType  `json:"family"`
	Version  string            `json:"version"`
	OS       string            `json:"os"`
}

TrainingLabel is a training label.

type TrainingMetrics added in v1.0.14

type TrainingMetrics struct {
	Epoch       int
	EncoderLoss float64
	ClassLoss   float64
	ForgeryLoss float64
	ThreatLoss  float64
	ValAccuracy float64
	ForgeryAUC  float64
}

TrainingMetrics holds training metrics.

type TrainingSample

type TrainingSample struct {
	ID       string                 `json:"id"`
	Features *core.FeatureVector    `json:"features"`
	Label    TrainingLabel          `json:"label"`
	Metadata map[string]interface{} `json:"metadata"`
}

TrainingSample is a single training sample.

type ValidationResult added in v1.0.17

type ValidationResult struct {
	// Valid is true if the fingerprint passes all checks.
	Valid bool

	// ForgeryProb is the forgery probability [0,1].
	ForgeryProb float64

	// ForgeryType identifies the detected forgery category.
	ForgeryType ForgeryType

	// ConsistencyScore is the cross-layer consistency [0,1].
	ConsistencyScore float64

	// BrowserFamily is the identified browser.
	BrowserFamily string

	// Confidence is the classification confidence.
	Confidence float64

	// Suggestions lists improvements for invalid fingerprints.
	Suggestions []string
}

ValidationResult describes how realistic a fingerprint looks.

type VersionClassifier

type VersionClassifier struct {
	// contains filtered or unexported fields
}

VersionClassifier is the version-recognition classifier (layer 3).

func NewVersionClassifier

func NewVersionClassifier(family core.BrowserType) *VersionClassifier

NewVersionClassifier creates a new version classifier for the given browser family.

func (*VersionClassifier) Predict

func (vc *VersionClassifier) Predict(features []float64) (string, float64)

Predict predicts the browser version and returns it with a confidence score.

func (*VersionClassifier) Train

func (vc *VersionClassifier) Train(features [][]float64, labels []string) error

Train trains the version classifier.

type WAFDetectionFeedback added in v1.0.24

type WAFDetectionFeedback struct {
	ClientIP        string
	RiskScore       float64
	DetectionLayers []string
	Blocked         bool
	FingerprintID   string
	AntiBot         string // detected anti-bot system identifier
	Timestamp       time.Time
}

WAFDetectionFeedback represents a feedback event from the WAF subsystem.

func (*WAFDetectionFeedback) ToFeedbackSample added in v1.0.24

func (wf *WAFDetectionFeedback) ToFeedbackSample() *FeedbackSample

ToFeedbackSample converts a WAFDetectionFeedback to an ML FeedbackSample. High risk detected = the fingerprint was detectable (low reward). Low risk = the fingerprint looked legitimate (high reward).

type WarmupCosineAnnealingLR added in v1.0.16

type WarmupCosineAnnealingLR struct {
	InitialLR    float64
	MinLR        float64
	WarmupEpochs int
}

WarmupCosineAnnealingLR adds a linear warmup phase before cosine decay.

func NewWarmupCosineAnnealingLR added in v1.0.16

func NewWarmupCosineAnnealingLR(initialLR, minLR float64, warmupEpochs int) *WarmupCosineAnnealingLR

func (*WarmupCosineAnnealingLR) StepLR added in v1.0.16

func (s *WarmupCosineAnnealingLR) StepLR(opt *AdamOptimizer, epoch, totalEpochs int)

type WebGLData

type WebGLData struct {
	Entropy    float64
	Vendor     string
	Renderer   string
	Extensions []string
}

WebGLData holds WebGL fingerprint data.

type WebRTCData

type WebRTCData struct {
	IPLeaked bool
	LocalIPs []string
}

WebRTCData holds WebRTC fingerprint data.
