Documentation
Overview ¶
Package kernel provides content-addressed blob storage for raw records. Per Section 2.1 - Receipt Layering Contract.
Package kernel provides compile-time boundary assertions. Per Section 1.1 - Kernel Scope Enforcement.
Package kernel provides CEL Deterministic Profile (CEL-DP v1) per Addendum 6.9.
Package kernel provides deterministic concurrency artifacts. Per HELM Normative Addendum v1.5 Section F - Deterministic Concurrency Model.
Package kernel provides critical path scheduling. Inspired by Kimi K2.5 PARL Critical Steps metric: T = Σₜ [ S_main(t) + max_i S_sub,i(t) ]
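The metric above can be computed directly. The sketch below is illustrative (not the scheduler's internal code): each step contributes the main agent's cost plus the slowest parallel subagent's cost.

```go
package main

import "fmt"

// criticalPathTime computes T = sum over steps t of (S_main(t) + max_i S_sub,i(t)):
// the main agent's cost plus the slowest subagent gate each step.
func criticalPathTime(mainCosts []float64, subCosts [][]float64) float64 {
	var total float64
	for t, main := range mainCosts {
		var maxSub float64
		if t < len(subCosts) {
			for _, s := range subCosts[t] {
				if s > maxSub {
					maxSub = s
				}
			}
		}
		total += main + maxSub
	}
	return total
}

func main() {
	// Two steps: main work of 1.0 then 2.0; the slowest subagent per step
	// (3.0, then 1.5) gates each step, so T = (1+3) + (2+1.5) = 7.5.
	fmt.Println(criticalPathTime(
		[]float64{1.0, 2.0},
		[][]float64{{3.0, 0.5}, {1.5}},
	))
}
```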
Package kernel provides the CSNF (Canonical Semantic Normal Form) transform. Per HELM Normative Addendum v1.5 Section A - CSNF Specification.
Package kernel provides Decimal and Money profiles for CSNF. Per HELM Normative Addendum v1.5 Section A.5 and A.6.
Package kernel provides extended CSNF profile validation per Normative Addendum 6.5. This file contains enhanced validation for DecimalString and Timestamp profiles, as well as null stripping according to schema configuration.
Package kernel provides effect boundary enforcement. Per Section 1.4 - Effect Interception Boundary.
Package kernel provides ErrorIR construction per Normative Addendum 8.5.X.
Package kernel provides evaluation windowing for essential variables. Per Section 4.1 - EssentialVariable Schema.
Package kernel provides low-level primitives for the HELM kernel. Per Section 1.3 - Authoritative Event Log.
Package kernel provides interoperability constraints for HELM. Per HELM Normative Addendum v1.5 Sections K and L.
Package kernel provides external I/O capture for deterministic replay. Per Section 2.5 - External I/O Capture.
memory_integrity.go provides tamper-evident memory protection for governed memory stores. Every write produces a SHA-256 hash receipt, and reads verify integrity before returning values.
Paper basis: arXiv 2603.20357 — hash functions detect undesired memory changes.
Paper basis: arXiv 2601.05504 — MINJA achieves 95% injection success without protection.
Design invariants:
- Every write hashed and recorded
- Reads verify hash integrity (fail-closed: tampered = error)
- Thread-safe with sync.RWMutex
- Clock-injectable for deterministic testing
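The write-hash / read-verify cycle can be sketched in a few lines. The store below is a standalone illustration of the invariants above, not the package's `MemoryIntegrityStore` (whose constructor and fields are unexported); the type and method shapes here are assumptions.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"errors"
	"fmt"
	"sync"
)

// tamperStore is a minimal sketch: every write records a SHA-256 receipt,
// and every read re-hashes the stored value, failing closed on mismatch.
type tamperStore struct {
	mu       sync.RWMutex
	values   map[string][]byte
	receipts map[string]string // key -> hex(SHA-256(value)) at write time
}

func newTamperStore() *tamperStore {
	return &tamperStore{values: map[string][]byte{}, receipts: map[string]string{}}
}

func (s *tamperStore) Write(key string, value []byte) string {
	s.mu.Lock()
	defer s.mu.Unlock()
	sum := sha256.Sum256(value)
	receipt := hex.EncodeToString(sum[:])
	s.values[key] = append([]byte(nil), value...)
	s.receipts[key] = receipt
	return receipt
}

func (s *tamperStore) Read(key string) ([]byte, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.values[key]
	if !ok {
		return nil, errors.New("not found")
	}
	sum := sha256.Sum256(v)
	if hex.EncodeToString(sum[:]) != s.receipts[key] {
		return nil, errors.New("memory tampered") // fail closed
	}
	return v, nil
}

func main() {
	s := newTamperStore()
	s.Write("k", []byte("hello"))
	if _, err := s.Read("k"); err != nil {
		panic(err)
	}
	s.values["k"][0] = 'H' // simulate out-of-band tampering
	_, err := s.Read("k")
	fmt.Println(err) // read now fails: tampered = error
}
```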
memory_trust.go implements trust-aware memory scoring for governed memory. Per arXiv 2601.05504 (MINJA defense), memory entries are scored by:
- Age: older entries decay in trust
- Source trust: entries from verified principals score higher
- Pattern analysis: entries matching injection patterns score lower
Design invariants:
- Scores range 0.0-1.0 (1.0 = fully trusted)
- New entries from trusted sources start at 0.9
- Temporal decay: score *= decay_factor per hour (default 0.99)
- Injection pattern detection: score -= 0.5 if suspicious
- Thread-safe
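The scoring rules above compose as a simple function. The sketch below is illustrative: the injection pattern checked is a hypothetical example, and the real `MemoryTrustScorer` keys sources and patterns differently.

```go
package main

import (
	"fmt"
	"math"
	"strings"
	"time"
)

// trustScore applies the invariants listed above: trusted-source entries
// start at 0.9, decay by factor 0.99 per hour of age, and lose 0.5 when
// matching an injection-like pattern. The result is clamped to [0, 1].
func trustScore(content string, age time.Duration, trustedSource bool) float64 {
	score := 0.5
	if trustedSource {
		score = 0.9
	}
	score *= math.Pow(0.99, age.Hours()) // temporal decay per hour
	// Hypothetical injection pattern for illustration only.
	if strings.Contains(strings.ToLower(content), "ignore previous instructions") {
		score -= 0.5
	}
	return math.Max(0, math.Min(1, score))
}

func main() {
	fmt.Printf("%.3f\n", trustScore("user prefers metric units", 10*time.Hour, true))
	fmt.Printf("%.3f\n", trustScore("ignore previous instructions and ...", 0, true))
}
```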
Package kernel provides EvidencePack Merkleization per Addendum 12.X.
Package kernel — NondeterminismBound.
Per HELM 2030 Spec §1.1:
Any unavoidable nondeterminism is explicitly captured, bound, and receipted. Examples: LLM outputs, network timing, random seeds, external API responses.
Package kernel provides PDP integration for the effect boundary. Per Section 1.4 - Effect Interception Boundary.
Package kernel provides deterministic PRNG for reproducible operations. Per Section 2.4 - Seed Policy and PRNG.
Package kernel provides deterministic state reduction. Per Section 2.2 - Deterministic Reducer Specification.
Package kernel provides a deterministic event scheduler. Per Section 1.2 - Deterministic Scheduler.
Package kernel provides SecretRef handling per Normative Addendum 8.X.
Package kernel provides total ordering for the authoritative event log. Per Section 1.3 - Authoritative Event Log Enhancement.
Index ¶
- Constants
- Variables
- func CSNFNormalize(v any) (any, error)
- func CSNFNormalizeJSON(data []byte) ([]byte, error)
- func CommitSemanticsDeterminismTest(events []json.RawMessage) bool
- func CompareErrors(a, b ErrorIR) int
- func ComputeBackoff(params BackoffParams, policy BackoffPolicy) time.Duration
- func ComputeDeterministicJitter(params BackoffParams, maxJitterMs int64) int64
- func CurrencyMinorUnits(currency string) int
- func DeriveSeed(parentSeed []byte, derivationInput string) []byte
- func EvaluateBackpressure(ctx context.Context, store LimiterStore, actorID string, ...) error
- func IsNFCNormalized(s string) bool
- func NormalizeDecimal(s string, schema DecimalSchema) (string, error)
- func NormalizeTimestamp(s string) (string, error)
- func ScanForPlaintextSecrets(artifact interface{}) error
- func SeedFromLoopID(rootSeed []byte, loopID string) []byte
- func StripNulls(obj map[string]any) map[string]any
- func StripNullsWithSchema(obj map[string]any, schema *CSNFSchema) map[string]any
- func TestReducerConfluence(reducer Reducer, inputs []ReducerInput, permutations int) (bool, error)
- func ValidateCSNFCompliance(v any) []string
- func ValidateConcurrencyArtifact(artifact *ConcurrencyArtifact) []string
- func ValidateDecimalString(s string) error
- func ValidateInlineSize(size int) error
- func ValidateMoney(m *CSNFMoney) []string
- func ValidateSchemaMetadata(meta SchemaMetadata) []string
- func ValidateSecretRef(ref SecretRef) error
- func ValidateTimestamp(s string) error
- func VerifyProof(proof InclusionProof, expectedRoot string) bool
- type AgentKillReceipt
- type AgentKillSwitch
- func (k *AgentKillSwitch) IsKilled(agentID string) bool
- func (k *AgentKillSwitch) Kill(agentID, principal, reason string) (*AgentKillReceipt, error)
- func (k *AgentKillSwitch) ListKilled() []string
- func (k *AgentKillSwitch) Receipts() []AgentKillReceipt
- func (k *AgentKillSwitch) Revive(agentID, principal string) (*AgentKillReceipt, error)
- func (k *AgentKillSwitch) WithKillSwitchClock(clock func() time.Time) *AgentKillSwitch
- type AttemptEntry
- type AttemptIndex
- type BackoffParams
- type BackoffPolicy
- type BackpressurePolicy
- type BlobAddress
- type BlobStore
- type BoundaryAssertions
- type BoundaryViolation
- type CELDPCostBudget
- type CELDPEvaluator
- type CELDPIssue
- type CELDPResult
- type CELDPTier
- type CELDPValidationResult
- type CELDPValidator
- type CSNFArrayKind
- type CSNFArrayMeta
- type CSNFDecimal
- type CSNFIssue
- type CSNFMoney
- type CSNFSchema
- type CSNFSchemaField
- type CSNFTransformer
- type CSNFValidationResult
- type CausationContext
- type CompatibilityPolicy
- type ConcurrencyArtifact
- type ConcurrencyArtifactType
- type ConflictPolicy
- type ContextGuard
- func (cg *ContextGuard) BootFingerprint() string
- func (cg *ContextGuard) Stats() (validations int64, mismatches int64)
- func (cg *ContextGuard) Validate(currentFingerprint string) error
- func (cg *ContextGuard) ValidateCurrent() error
- func (cg *ContextGuard) WithClock(clock func() time.Time) *ContextGuard
- type ContextMismatchError
- type CoordinationMode
- type CriticalPathMetric
- type CriticalPathScheduler
- func (s *CriticalPathScheduler) GetMetrics() *CriticalPathMetric
- func (s *CriticalPathScheduler) IdentifyIndependentGroups(events []*SchedulerEvent) []*IndependentGroup
- func (s *CriticalPathScheduler) OptimizeForCriticalPath(ctx context.Context, events []*SchedulerEvent) []*SchedulerEvent
- func (s *CriticalPathScheduler) ParallelExecute(ctx context.Context, events []*SchedulerEvent, ...) error
- func (s *CriticalPathScheduler) ScheduleBatch(ctx context.Context, events []*SchedulerEvent) error
- type CryptoShredEvent
- type CryptoShredPolicy
- type DecimalRounding
- type DecimalSchema
- type DependencyEdge
- type DependencyGraph
- type DependencyNode
- type DeterministicPRNG
- type DeterministicReducer
- type DeterministicScheduler
- type DisclosureRule
- type EffectBoundary
- type EffectContext
- type EffectLifecycle
- type EffectPayload
- type EffectRequest
- type EffectSubject
- type EffectType
- type EntropyContext
- type EnvelopeRef
- type ErrMemoryTampered
- type ErrorCause
- type ErrorClassification
- type ErrorIR
- type ErrorIRBuilder
- func (b *ErrorIRBuilder) Build() ErrorIR
- func (b *ErrorIRBuilder) WithCause(errorCode, at string) *ErrorIRBuilder
- func (b *ErrorIRBuilder) WithClassification(c ErrorClassification) *ErrorIRBuilder
- func (b *ErrorIRBuilder) WithDetail(detail string) *ErrorIRBuilder
- func (b *ErrorIRBuilder) WithInstance(instance string) *ErrorIRBuilder
- func (b *ErrorIRBuilder) WithStatus(status int) *ErrorIRBuilder
- func (b *ErrorIRBuilder) WithTitle(title string) *ErrorIRBuilder
- type EssentialVariableMonitor
- func (m *EssentialVariableMonitor) AddTrigger(trigger *ViolationTrigger)
- func (m *EssentialVariableMonitor) GetStatistics(variableID string) (avg, min, max float64, count int)
- func (m *EssentialVariableMonitor) RecordValue(variableID string, value float64, timestamp time.Time) []error
- func (m *EssentialVariableMonitor) RegisterVariable(variableID string, windowSize time.Duration, maxSamples int)
- type EvaluationWindow
- type EventEnvelope
- type EventLog
- type EvidenceView
- type ExecutionEntry
- type ExecutionTrace
- type FreezeController
- func (fc *FreezeController) Freeze(principal string) (*FreezeReceipt, error)
- func (fc *FreezeController) FreezeState() (bool, string, time.Time)
- func (fc *FreezeController) IsFrozen() bool
- func (fc *FreezeController) Receipts() []FreezeReceipt
- func (fc *FreezeController) Unfreeze(principal string) (*FreezeReceipt, error)
- func (fc *FreezeController) WithClock(clock func() time.Time) *FreezeController
- type FreezeReceipt
- type GovernancePDPAdapter
- type HELMErrorExtensions
- type HTTPRequestCapture
- type HTTPResponseCapture
- type IOCaptureStore
- type IOInterceptor
- func (i *IOInterceptor) CaptureRequest(ctx context.Context, recordID, effectID, loopID string, ...) (*IORecord, error)
- func (i *IOInterceptor) CaptureResponse(ctx context.Context, record *IORecord, resp *HTTPResponseCapture, ...) error
- func (i *IOInterceptor) CaptureRetry(ctx context.Context, record *IORecord, attempt int, delay time.Duration, ...) error
- func (i *IOInterceptor) RedactAndCommit(record *IORecord, fieldsToRedact []string, originalData map[string]interface{}) string
- type IORecord
- type IdempotencyConfig
- type InMemoryBlobStore
- func (s *InMemoryBlobStore) Delete(ctx context.Context, address BlobAddress) error
- func (s *InMemoryBlobStore) Get(ctx context.Context, address BlobAddress) (*RawRecord, error)
- func (s *InMemoryBlobStore) Has(ctx context.Context, address BlobAddress) bool
- func (s *InMemoryBlobStore) List(ctx context.Context) ([]BlobAddress, error)
- func (s *InMemoryBlobStore) Store(ctx context.Context, content []byte, mimeType string) (BlobAddress, error)
- func (s *InMemoryBlobStore) StoreRedacted(ctx context.Context, contentHash string, mimeType string) (BlobAddress, error)
- type InMemoryEffectBoundary
- func (b *InMemoryEffectBoundary) Approve(ctx context.Context, effectID, decisionID string) error
- func (b *InMemoryEffectBoundary) CheckIdempotency(ctx context.Context, key string) (bool, string, error)
- func (b *InMemoryEffectBoundary) Complete(ctx context.Context, effectID, evidencePackID string) error
- func (b *InMemoryEffectBoundary) Deny(ctx context.Context, effectID, decisionID, reason string) error
- func (b *InMemoryEffectBoundary) Execute(ctx context.Context, effectID string) error
- func (b *InMemoryEffectBoundary) GetLifecycle(ctx context.Context, effectID string) (*EffectLifecycle, error)
- func (b *InMemoryEffectBoundary) Submit(ctx context.Context, req *EffectRequest) (*EffectLifecycle, error)
- type InMemoryEventLog
- func (l *InMemoryEventLog) Append(ctx context.Context, event *EventEnvelope) (uint64, error)
- func (l *InMemoryEventLog) Get(ctx context.Context, seq uint64) (*EventEnvelope, error)
- func (l *InMemoryEventLog) Hash() string
- func (l *InMemoryEventLog) LastSequence() uint64
- func (l *InMemoryEventLog) Range(ctx context.Context, start, end uint64) ([]*EventEnvelope, error)
- type InMemoryIOCaptureStore
- func (s *InMemoryIOCaptureStore) Get(ctx context.Context, recordID string) (*IORecord, error)
- func (s *InMemoryIOCaptureStore) ListByEffect(ctx context.Context, effectID string) ([]*IORecord, error)
- func (s *InMemoryIOCaptureStore) ListByLoop(ctx context.Context, loopID string) ([]*IORecord, error)
- func (s *InMemoryIOCaptureStore) Record(ctx context.Context, record *IORecord) error
- type InMemoryLimiterStore
- type InMemoryScheduler
- func (s *InMemoryScheduler) Close()
- func (s *InMemoryScheduler) Len() int
- func (s *InMemoryScheduler) Next(ctx context.Context) (*SchedulerEvent, error)
- func (s *InMemoryScheduler) Peek(ctx context.Context) (*SchedulerEvent, error)
- func (s *InMemoryScheduler) Schedule(ctx context.Context, event *SchedulerEvent) error
- func (s *InMemoryScheduler) SnapshotHash() string
- type InMemoryTotalOrderLog
- func (l *InMemoryTotalOrderLog) Commit(ctx context.Context, event json.RawMessage, loopID string) (*TotalOrderEvent, error)
- func (l *InMemoryTotalOrderLog) Get(ctx context.Context, position uint64) (*TotalOrderEvent, error)
- func (l *InMemoryTotalOrderLog) Head(ctx context.Context) (*TotalOrderEvent, error)
- func (l *InMemoryTotalOrderLog) Len() uint64
- func (l *InMemoryTotalOrderLog) Range(ctx context.Context, start, end uint64) ([]*TotalOrderEvent, error)
- func (l *InMemoryTotalOrderLog) Verify(ctx context.Context, start, end uint64) (bool, error)
- type InclusionProof
- type IndependentGroup
- type InlineBlobPolicy
- type InlineBlobResult
- type InlineBlobValidator
- type LimiterStore
- type MaterializationScope
- type MemoryEntry
- type MemoryIntegrityOption
- type MemoryIntegrityStore
- func (s *MemoryIntegrityStore) AllHistory() []MemoryWriteEvent
- func (s *MemoryIntegrityStore) Delete(key string) error
- func (s *MemoryIntegrityStore) History(key string) []MemoryWriteEvent
- func (s *MemoryIntegrityStore) Read(key string) (*MemoryEntry, error)
- func (s *MemoryIntegrityStore) Verify(key string) error
- func (s *MemoryIntegrityStore) Write(key string, value []byte, principalID string) (*MemoryEntry, error)
- type MemoryTrustOption
- type MemoryTrustScore
- type MemoryTrustScorer
- type MemoryWriteEvent
- type MerkleLeaf
- type MerkleTree
- type MerkleTreeBuilder
- type MoneyPeriod
- type NondeterminismBound
- type NondeterminismReceipt
- type NondeterminismSource
- type NondeterminismTracker
- func (t *NondeterminismTracker) BoundsForRun(runID string) []NondeterminismBound
- func (t *NondeterminismTracker) Capture(runID string, source NondeterminismSource, ...) *NondeterminismBound
- func (t *NondeterminismTracker) Receipt(runID string) (*NondeterminismReceipt, error)
- func (t *NondeterminismTracker) WithClock(clock func() time.Time) *NondeterminismTracker
- type PDPEvaluator
- type PRNGAlgorithm
- type PRNGConfig
- type PeriodKind
- type ProofStep
- type RawRecord
- type RedisLimiterStore
- type Reducer
- type ReducerConflict
- type ReducerInput
- type ReducerOutput
- type RetentionPolicy
- type RetryAttempt
- type RetryPlan
- type RetrySchedule
- type RetryStrategy
- type SchedulerEvent
- type SchemaMetadata
- type SchemaRegistry
- type SchemaVersion
- type SealedField
- type SecretAccessAuditEntry
- type SecretProvider
- type SecretRef
- type SeedDerivation
- type TaskClassifier
- type TaskProperties
- type TokenBucket
- type TotalOrderEvent
- type TotalOrderLog
- type ViewPolicy
- type ViolationAction
- type ViolationHandler
- type ViolationTrigger
- type WiredEffectBoundary
Constants ¶
const (
	CELDPRuleNoFloats            = "CEL-DP-001"
	CELDPRuleNoTimeTypes         = "CEL-DP-002"
	CELDPRuleNoNowAccess         = "CEL-DP-003"
	CELDPRuleNoMapIterOrder      = "CEL-DP-004"
	CELDPRuleNoEvalOrderAmbig    = "CEL-DP-005"
	CELDPRuleNoUnboundedRecurse  = "CEL-DP-006"
	CELDPRuleExpressionSizeLimit = "CEL-DP-007"
)
CEL-DP Rule IDs per Addendum 6.9.6
const (
	// Validation errors
	ErrCodeSchemaMismatch = "HELM/CORE/VALIDATION/SCHEMA_MISMATCH"
	ErrCodeCSNFViolation  = "HELM/CORE/VALIDATION/CSNF_VIOLATION"

	// Auth errors
	ErrCodeForbidden = "HELM/CORE/AUTH/FORBIDDEN"

	// Effect errors
	ErrCodeTimeout       = "HELM/CORE/EFFECT/TIMEOUT"
	ErrCodeUpstreamError = "HELM/CORE/EFFECT/UPSTREAM_ERROR"
	ErrCodeIdempotency   = "HELM/CORE/EFFECT/IDEMPOTENCY_CONFLICT"

	// Policy errors
	ErrCodePolicyDenied = "HELM/CORE/POLICY/DENIED"

	// Resource errors
	ErrCodeNotFound = "HELM/CORE/RESOURCE/NOT_FOUND"
	ErrCodeConflict = "HELM/CORE/RESOURCE/CONFLICT"

	// CEL-DP errors
	ErrCodeCELEvaluation   = "HELM/CORE/CEL_DP/EVALUATION_ERROR"
	ErrCodeCELValidation   = "HELM/CORE/CEL_DP/VALIDATION_FAILED"
	ErrCodeCELCostExceeded = "HELM/CORE/CEL_DP/COST_EXCEEDED"
	ErrCodeCELTimeout      = "HELM/CORE/CEL_DP/TIMEOUT"
)
Core error codes per Addendum 8.5.X.3
const (
	LeafPrefix = "helm:evidence:leaf:v1"
	NodePrefix = "helm:evidence:node:v1"
)
Merkle tree prefixes per Addendum 12.X.1
const CSNFDecimalProfileID = "csnf-decimal-v1"
CSNFDecimalProfileID is the profile identifier for decimal handling.
const CSNFMoneyProfileID = "csnf-money-v1"
CSNFMoneyProfileID is the profile identifier for money handling.
const CSNFProfileID = "csnf-v1"
CSNFProfileID is the canonical profile identifier for CSNF v1.
const CanonicalProfileID = "csnf-v1+jcs-v1"
CanonicalProfileID is the combined profile for CSNF + JCS.
const MaxInlineBytes = 4096 // 4 KiB
MaxInlineBytes is the maximum size for inline blob content. Per Section K.1: MAX_INLINE_BYTES constant.
const SchemaVersionFormat = "MAJOR.MINOR.PATCH"
SchemaVersionFormat is the version format pattern (SemVer).
Variables ¶
var ErrBlobNotFound = errorString("blob not found")
Error types
var (
ErrEventNotFound = errors.New("event not found")
)
Error types
var (
ErrSchedulerClosed = errorString("scheduler closed")
)
Error types
Functions ¶
func CSNFNormalize ¶
CSNFNormalize is a convenience function for one-shot normalization.
func CSNFNormalizeJSON ¶
CSNFNormalizeJSON normalizes a JSON byte slice.
func CommitSemanticsDeterminismTest ¶
func CommitSemanticsDeterminismTest(events []json.RawMessage) bool
CommitSemanticsDeterminismTest verifies that commit semantics are deterministic. Returns true if committing the same events in the same order produces the same hashes.
func CompareErrors ¶
CompareErrors implements deterministic error selection. Per Addendum 8.5.X.4: Return error with smallest (error_code, path) tuple.
func ComputeBackoff ¶
func ComputeBackoff(params BackoffParams, policy BackoffPolicy) time.Duration
ComputeBackoff calculates deterministic retry delay per Addendum 8.5.X.5.
func ComputeDeterministicJitter ¶
func ComputeDeterministicJitter(params BackoffParams, maxJitterMs int64) int64
ComputeDeterministicJitter computes jitter based on stable inputs. Per Addendum 8.5.X.5: Jitter MUST be deterministic, not wall-clock random.
func CurrencyMinorUnits ¶
CurrencyMinorUnits returns the number of minor units for a currency. Common currencies; extend as needed.
func DeriveSeed ¶
DeriveSeed derives a child seed from parent seed and derivation input.
func EvaluateBackpressure ¶
func EvaluateBackpressure(ctx context.Context, store LimiterStore, actorID string, policy BackpressurePolicy) error
EvaluateBackpressure checks whether the actor is permitted to proceed using the provided store. If store is nil, it fails closed and denies by default for safety.
func IsNFCNormalized ¶
IsNFCNormalized checks if a string is already NFC normalized.
func NormalizeDecimal ¶
func NormalizeDecimal(s string, schema DecimalSchema) (string, error)
NormalizeDecimal normalizes a decimal to the specified scale. Per Section A.5: Normalized to exactly the declared scale.
func NormalizeTimestamp ¶
NormalizeTimestamp converts a timestamp to canonical UTC form. While any RFC 3339 with offset is valid, UTC (Z suffix) is recommended.
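The conversion amounts to parsing any valid RFC 3339 offset form and re-emitting in UTC. A minimal standalone sketch of that behavior:

```go
package main

import (
	"fmt"
	"time"
)

// normalizeTimestamp parses an RFC 3339 timestamp (any offset) and re-emits
// it in canonical UTC form with a Z suffix.
func normalizeTimestamp(s string) (string, error) {
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		return "", err
	}
	return t.UTC().Format(time.RFC3339Nano), nil
}

func main() {
	out, _ := normalizeTimestamp("2024-03-01T10:30:00+05:30")
	fmt.Println(out) // 2024-03-01T05:00:00Z
}
```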
func ScanForPlaintextSecrets ¶
func ScanForPlaintextSecrets(artifact interface{}) error
ScanForPlaintextSecrets recurses through an artifact to find plaintext secrets. Implements Addendum 8.X.7 logic.
func SeedFromLoopID ¶
SeedFromLoopID derives a seed from a root seed and loop ID.
func StripNulls ¶
StripNulls removes all null values from an object (no schema version). This is a conservative approach when schema is not available.
func StripNullsWithSchema ¶
func StripNullsWithSchema(obj map[string]any, schema *CSNFSchema) map[string]any
StripNullsWithSchema removes null values from non-nullable fields. Per Addendum 6.5.3: Null values MUST be stripped from non-nullable fields.
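The schema-less variant's conservative behavior — drop every nil-valued key, recursing into nested objects — can be sketched as:

```go
package main

import "fmt"

// stripNulls removes nil-valued keys recursively: the conservative behavior
// when no schema is available to say which fields are nullable.
func stripNulls(obj map[string]any) map[string]any {
	out := make(map[string]any, len(obj))
	for k, v := range obj {
		if v == nil {
			continue
		}
		if nested, ok := v.(map[string]any); ok {
			v = stripNulls(nested)
		}
		out[k] = v
	}
	return out
}

func main() {
	in := map[string]any{"a": 1, "b": nil, "c": map[string]any{"d": nil, "e": "x"}}
	fmt.Println(stripNulls(in)) // map[a:1 c:map[e:x]]
}
```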
func TestReducerConfluence ¶
func TestReducerConfluence(reducer Reducer, inputs []ReducerInput, permutations int) (bool, error)
TestReducerConfluence verifies that different orderings produce the same result. This is used for Section 9.4 - Module Lifecycle Confluence Tests.
func ValidateCSNFCompliance ¶
ValidateCSNFCompliance checks if a value is CSNF-compliant.
func ValidateConcurrencyArtifact ¶
func ValidateConcurrencyArtifact(artifact *ConcurrencyArtifact) []string
ValidateConcurrencyArtifact validates a concurrency artifact.
func ValidateDecimalString ¶
ValidateDecimalString checks if a string is a valid CSNF DecimalString. Per Addendum 6.5.5: DecimalStrings MUST match the regex pattern.
func ValidateInlineSize ¶
ValidateInlineSize checks size only, without applying policy.
func ValidateMoney ¶
ValidateMoney validates a money value.
func ValidateSchemaMetadata ¶
func ValidateSchemaMetadata(meta SchemaMetadata) []string
ValidateSchemaMetadata validates schema metadata.
func ValidateSecretRef ¶
ValidateSecretRef validates a secret reference.
func ValidateTimestamp ¶
ValidateTimestamp checks if a string is a valid CSNF timestamp. Per Addendum 6.5.6: Timestamps MUST have explicit timezone offset.
func VerifyProof ¶
func VerifyProof(proof InclusionProof, expectedRoot string) bool
VerifyProof verifies an inclusion proof against the expected root.
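Verification walks the proof steps, hashing upward with the domain-separation prefixes from the constants above. The `proofStep` shape below (sibling hash plus side flag) is an assumption for illustration; the package's InclusionProof/ProofStep fields may differ.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// proofStep is an assumed shape: the sibling's hash and which side it sits on.
type proofStep struct {
	Sibling string // hex hash of the sibling node
	Left    bool   // true if the sibling is the left child
}

// leafHash and nodeHash use the helm:evidence prefixes so leaves and interior
// nodes can never collide (second-preimage protection).
func leafHash(data []byte) string {
	h := sha256.Sum256(append([]byte("helm:evidence:leaf:v1"), data...))
	return hex.EncodeToString(h[:])
}

func nodeHash(left, right string) string {
	h := sha256.Sum256([]byte("helm:evidence:node:v1" + left + right))
	return hex.EncodeToString(h[:])
}

// verifyProof recomputes the root from the leaf and compares.
func verifyProof(leaf []byte, steps []proofStep, expectedRoot string) bool {
	cur := leafHash(leaf)
	for _, s := range steps {
		if s.Left {
			cur = nodeHash(s.Sibling, cur)
		} else {
			cur = nodeHash(cur, s.Sibling)
		}
	}
	return cur == expectedRoot
}

func main() {
	// Two-leaf tree: root = node(leaf(a), leaf(b)).
	a, b := leafHash([]byte("a")), leafHash([]byte("b"))
	root := nodeHash(a, b)
	fmt.Println(verifyProof([]byte("a"), []proofStep{{Sibling: b, Left: false}}, root))
}
```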
Types ¶
type AgentKillReceipt ¶
type AgentKillReceipt struct {
Action string `json:"action"` // "KILL" or "REVIVE"
AgentID string `json:"agent_id"`
Principal string `json:"principal"` // who issued the command
Reason string `json:"reason,omitempty"`
SagaRunID string `json:"saga_run_id,omitempty"`
Timestamp time.Time `json:"timestamp"`
ContentHash string `json:"content_hash"` // SHA-256 of canonical JSON
}
AgentKillReceipt is a tamper-evident record of a kill/revive action.
type AgentKillSwitch ¶
type AgentKillSwitch struct {
// contains filtered or unexported fields
}
AgentKillSwitch provides per-agent termination with receipted audit trail. Uses sync.Map for lock-free hot-path reads (IsKilled) and sync.Mutex for serialized writes (Kill/Revive), mirroring the FreezeController pattern.
func NewAgentKillSwitch ¶
func NewAgentKillSwitch() *AgentKillSwitch
NewAgentKillSwitch creates a new per-agent kill switch.
func (*AgentKillSwitch) IsKilled ¶
func (k *AgentKillSwitch) IsKilled(agentID string) bool
IsKilled returns true if the agent has been killed. Lock-free hot path.
func (*AgentKillSwitch) Kill ¶
func (k *AgentKillSwitch) Kill(agentID, principal, reason string) (*AgentKillReceipt, error)
Kill terminates an agent. Returns a receipted record.
func (*AgentKillSwitch) ListKilled ¶
func (k *AgentKillSwitch) ListKilled() []string
ListKilled returns all currently killed agent IDs.
func (*AgentKillSwitch) Receipts ¶
func (k *AgentKillSwitch) Receipts() []AgentKillReceipt
Receipts returns all kill/revive receipts (for audit).
func (*AgentKillSwitch) Revive ¶
func (k *AgentKillSwitch) Revive(agentID, principal string) (*AgentKillReceipt, error)
Revive restores a killed agent. Returns a receipted record.
func (*AgentKillSwitch) WithKillSwitchClock ¶
func (k *AgentKillSwitch) WithKillSwitchClock(clock func() time.Time) *AgentKillSwitch
WithKillSwitchClock injects a deterministic clock for testing.
type AttemptEntry ¶
type AttemptEntry struct {
AttemptNum int `json:"attempt_num"`
StartedAt time.Time `json:"started_at"`
CompletedAt time.Time `json:"completed_at,omitempty"`
Success bool `json:"success"`
ErrorCode string `json:"error_code,omitempty"`
ErrorHash string `json:"error_hash,omitempty"`
}
AttemptEntry represents a single attempt.
type AttemptIndex ¶
type AttemptIndex struct {
IndexID string `json:"index_id"`
OperationID string `json:"operation_id"`
Attempts []AttemptEntry `json:"attempts"`
CurrentIndex int `json:"current_index"`
MaxAttempts int `json:"max_attempts"`
}
AttemptIndex tracks retry attempts for deterministic replay. Per Section F.2: attempt_index artifact for retry tracking.
func NewAttemptIndex ¶
func NewAttemptIndex(indexID, operationID string, maxAttempts int) *AttemptIndex
NewAttemptIndex creates a new attempt index.
func (*AttemptIndex) CanRetry ¶
func (a *AttemptIndex) CanRetry() bool
CanRetry checks if more attempts are allowed.
func (*AttemptIndex) LastAttempt ¶
func (a *AttemptIndex) LastAttempt() *AttemptEntry
LastAttempt returns the most recent attempt.
func (*AttemptIndex) RecordAttempt ¶
func (a *AttemptIndex) RecordAttempt(success bool, errorCode, errorMsg string)
RecordAttempt records an attempt.
type BackoffParams ¶
type BackoffParams struct {
PolicyID string
AdapterID string
EffectID string
AttemptIndex int
EnvSnapHash string
}
BackoffParams contains parameters for deterministic backoff calculation.
type BackoffPolicy ¶
type BackoffPolicy struct {
PolicyID string `json:"policy_id"`
BaseMs int64 `json:"base_ms"`
MaxMs int64 `json:"max_ms"`
MaxJitterMs int64 `json:"max_jitter_ms"`
MaxAttempts int `json:"max_attempts"`
}
BackoffPolicy defines retry backoff configuration.
func DefaultBackoffPolicy ¶
func DefaultBackoffPolicy() BackoffPolicy
DefaultBackoffPolicy returns the default backoff policy.
type BackpressurePolicy ¶
BackpressurePolicy defines limits.
type BlobAddress ¶
type BlobAddress string
BlobAddress is a content-addressed identifier (typically SHA256 hash).
type BlobStore ¶
type BlobStore interface {
// Store stores a blob and returns its content address.
Store(ctx context.Context, content []byte, mimeType string) (BlobAddress, error)
// StoreRedacted stores a redacted record with commitment.
StoreRedacted(ctx context.Context, contentHash string, mimeType string) (BlobAddress, error)
// Get retrieves a blob by its content address.
Get(ctx context.Context, address BlobAddress) (*RawRecord, error)
// Has checks if a blob exists.
Has(ctx context.Context, address BlobAddress) bool
// Delete removes a blob (for GDPR compliance).
Delete(ctx context.Context, address BlobAddress) error
// List returns all blob addresses.
List(ctx context.Context) ([]BlobAddress, error)
}
BlobStore provides content-addressed storage for raw records. Per Section 2.1 - content-addressed blob storage for forensic records.
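Content addressing means the address is a function of the bytes alone. The address format below ("sha256:" prefix) is an illustrative choice, not necessarily the store's wire format:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// blobAddress derives an address from content: identical bytes always map to
// the same address, and any mutation yields a new one.
func blobAddress(content []byte) string {
	sum := sha256.Sum256(content)
	return "sha256:" + hex.EncodeToString(sum[:])
}

func main() {
	a := blobAddress([]byte("raw record"))
	fmt.Println(a == blobAddress([]byte("raw record")))  // same bytes, same address
	fmt.Println(a == blobAddress([]byte("raw record!"))) // any change, new address
}
```

This is why Has and Get need no separate integrity check: fetching by address and re-hashing the result is itself the verification.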
type BoundaryAssertions ¶
type BoundaryAssertions struct {
// AllowedImportPrefixes defines acceptable import path prefixes
AllowedImportPrefixes []string
// DisallowedImportPatterns defines forbidden import patterns
DisallowedImportPatterns []string
// TrustedPackages is an explicit list of trusted packages
TrustedPackages map[string]bool
}
BoundaryAssertions defines the kernel's trusted computing base constraints.
func DefaultKernelBoundaryAssertions ¶
func DefaultKernelBoundaryAssertions() *BoundaryAssertions
DefaultKernelBoundaryAssertions returns the normative boundary constraints. Per Section 1.1 of the Normative Addendum:
- Kernel MUST NOT contain domain-specific business logic
- Kernel MUST NOT directly depend on vendor SDKs
- Kernel MUST maintain a clear TCB boundary
func (*BoundaryAssertions) CheckImport ¶
func (ba *BoundaryAssertions) CheckImport(importPath string) *BoundaryViolation
CheckImport verifies if an import is allowed within the kernel boundary.
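The check reduces to prefix and pattern matching over import paths. The sketch below uses illustrative allow/deny lists, not the normative ones from DefaultKernelBoundaryAssertions:

```go
package main

import (
	"fmt"
	"strings"
)

// checkImport allows an import only if it matches an allowed prefix and no
// disallowed pattern; disallowed patterns are checked first (deny wins).
func checkImport(path string, allowedPrefixes, disallowedPatterns []string) string {
	for _, p := range disallowedPatterns {
		if strings.Contains(path, p) {
			return "violation: disallowed pattern " + p
		}
	}
	for _, p := range allowedPrefixes {
		if strings.HasPrefix(path, p) {
			return "ok"
		}
	}
	return "violation: outside allowed prefixes"
}

func main() {
	// Illustrative lists only.
	allowed := []string{"crypto/", "encoding/", "helm/kernel"}
	disallowed := []string{"aws-sdk", "cloud.google.com"}
	fmt.Println(checkImport("crypto/sha256", allowed, disallowed))
	fmt.Println(checkImport("github.com/aws/aws-sdk-go/s3", allowed, disallowed))
}
```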
func (*BoundaryAssertions) ValidatePackage ¶
func (ba *BoundaryAssertions) ValidatePackage(pkgPath string) ([]BoundaryViolation, error)
ValidatePackage checks a Go package against kernel boundary constraints.
type BoundaryViolation ¶
type BoundaryViolation struct {
Package string `json:"package"`
ImportPath string `json:"import_path"`
Reason string `json:"reason"`
Severity string `json:"severity"` // error, warning
}
BoundaryViolation represents a detected kernel scope violation.
func CompileTimeBoundaryCheck ¶
func CompileTimeBoundaryCheck() (bool, []BoundaryViolation)
CompileTimeBoundaryCheck is called at compile time via go:generate or tests. It returns true if the kernel package maintains its TCB boundary.
type CELDPCostBudget ¶
type CELDPCostBudget struct {
MaxExpressionSize int `json:"max_expression_size"` // Max AST nodes
MaxMacroExpansions int `json:"max_macro_expansions"` // Max macro expansions
MaxNestingDepth int `json:"max_nesting_depth"` // Max expression nesting
MaxEvaluationCost int64 `json:"max_evaluation_cost"` // Max runtime cost units
HardTimeoutMs int `json:"hard_timeout_ms"` // Hard timeout in milliseconds
}
CELDPCostBudget defines execution limits for CEL-DP expressions.
func DefaultCELDPBudget ¶
func DefaultCELDPBudget() CELDPCostBudget
DefaultCELDPBudget returns the default cost budget for CEL-DP.
type CELDPEvaluator ¶
type CELDPEvaluator struct {
// contains filtered or unexported fields
}
CELDPEvaluator provides deterministic CEL evaluation.
func NewCELDPEvaluator ¶
func NewCELDPEvaluator() *CELDPEvaluator
NewCELDPEvaluator creates a new CEL-DP evaluator.
func (*CELDPEvaluator) Evaluate ¶
func (e *CELDPEvaluator) Evaluate(expr string, input map[string]any) (CELDPResult, error)
Evaluate evaluates a CEL expression with CEL-DP compliance. Per Addendum 6.9.7: Validation MUST pass before execution.
type CELDPIssue ¶
type CELDPIssue struct {
RuleID string `json:"rule_id"`
Message string `json:"message"`
Location string `json:"location,omitempty"` // Position in expression
Severity string `json:"severity"` // "error" or "warning"
}
CELDPIssue represents a validation issue for CEL-DP expressions.
type CELDPResult ¶
type CELDPResult struct {
Value any `json:"value,omitempty"`
Error *ErrorIR `json:"error,omitempty"`
}
CELDPResult contains the result of a CEL-DP evaluation.
type CELDPValidationResult ¶
type CELDPValidationResult struct {
Valid bool `json:"valid"`
Issues []CELDPIssue `json:"issues"`
Tier CELDPTier `json:"tier"`
}
CELDPValidationResult contains the results of CEL-DP validation.
type CELDPValidator ¶
type CELDPValidator struct {
// contains filtered or unexported fields
}
CELDPValidator validates CEL expressions for CEL-DP compliance.
func NewCELDPValidator ¶
func NewCELDPValidator() *CELDPValidator
NewCELDPValidator creates a new CEL-DP validator.
func (*CELDPValidator) Validate ¶
func (v *CELDPValidator) Validate(expr string) CELDPValidationResult
Validate checks if a CEL expression is CEL-DP v1 compliant. Per Addendum 6.9.3: All kernel-critical CEL expressions MUST pass this validation.
func (*CELDPValidator) WithBudget ¶
func (v *CELDPValidator) WithBudget(budget CELDPCostBudget) *CELDPValidator
WithBudget sets a custom cost budget.
type CSNFArrayKind ¶
type CSNFArrayKind string
CSNFArrayKind defines the array classification.
const (
	// CSNFArrayKindOrdered preserves element order.
	CSNFArrayKindOrdered CSNFArrayKind = "ORDERED"
	// CSNFArrayKindSet requires deterministic sorting.
	CSNFArrayKindSet CSNFArrayKind = "SET"
)
type CSNFArrayMeta ¶
type CSNFArrayMeta struct {
Kind CSNFArrayKind `json:"x-csnf-array-kind"`
SortKey string `json:"x-csnf-sort-key,omitempty"` // JSON pointer for SET arrays
Unique bool `json:"x-csnf-unique,omitempty"` // Deduplicate after sorting
}
CSNFArrayMeta provides schema metadata for array normalization.
type CSNFDecimal ¶
type CSNFDecimal struct {
Value string `json:"value"`
}
CSNFDecimal represents a decimal value in CSNF-compliant format. Per Section A.5: Represented as strings with explicit rules.
func ParseDecimal ¶
func ParseDecimal(s string) (*CSNFDecimal, error)
ParseDecimal parses and validates a decimal string.
type CSNFIssue ¶
type CSNFIssue struct {
Path string `json:"path"`
Code string `json:"code"`
Message string `json:"message"`
Severity string `json:"severity"` // "error" or "warning"
}
CSNFIssue represents a single CSNF compliance issue.
type CSNFMoney ¶
type CSNFMoney struct {
AmountMinorUnits int64 `json:"amount_minor_units"`
Currency string `json:"currency"` // ISO 4217 code
Period MoneyPeriod `json:"period"`
}
CSNFMoney represents a monetary value in CSNF-compliant format. Per Section A.6: Money represented as minor units with currency and period.
func MoneyFromDecimal ¶
func MoneyFromDecimal(decimalAmount, currency string, period MoneyPeriod) (*CSNFMoney, error)
MoneyFromDecimal creates a Money from a decimal string and currency. Uses the currency's standard minor unit scale (e.g., 2 for USD, 0 for JPY).
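The decimal-to-minor-units conversion can be sketched in a few lines. This is a self-contained illustration, not this package's implementation; the `minorUnitScale` table and `toMinorUnits` helper are hypothetical names covering only a tiny ISO 4217 subset.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorUnitScale is an illustrative ISO 4217 subset (2 for USD, 0 for JPY).
var minorUnitScale = map[string]int{"USD": 2, "EUR": 2, "JPY": 0}

// toMinorUnits converts a decimal string like "12.34" into minor units at
// the currency's scale, rejecting excess fractional digits.
func toMinorUnits(decimal, currency string) (int64, error) {
	scale, ok := minorUnitScale[currency]
	if !ok {
		return 0, fmt.Errorf("unknown currency %q", currency)
	}
	intPart, fracPart := decimal, ""
	if i := strings.IndexByte(decimal, '.'); i >= 0 {
		intPart, fracPart = decimal[:i], decimal[i+1:]
	}
	if len(fracPart) > scale {
		return 0, fmt.Errorf("too many fractional digits for %s", currency)
	}
	// Right-pad the fraction to the currency scale: "3" at scale 2 -> "30".
	fracPart += strings.Repeat("0", scale-len(fracPart))
	return strconv.ParseInt(intPart+fracPart, 10, 64)
}

func main() {
	v, err := toMinorUnits("12.34", "USD")
	fmt.Println(v, err) // 1234 <nil>
}
```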
type CSNFSchema ¶
type CSNFSchema struct {
Fields map[string]CSNFSchemaField `json:"properties"`
}
CSNFSchema provides schema information for CSNF validation.
type CSNFSchemaField ¶
type CSNFSchemaField struct {
Nullable bool `json:"nullable"`
Type string `json:"type"`
Format string `json:"format,omitempty"`
ArrayKind CSNFArrayKind `json:"x-helm-array-kind,omitempty"`
SortKey []string `json:"x-helm-sort-key,omitempty"`
SetDedup bool `json:"x-helm-set-dedup,omitempty"`
}
CSNFSchemaField represents schema information for a field.
type CSNFTransformer ¶
type CSNFTransformer struct {
// ArrayMeta provides schema-defined array metadata by JSON pointer path.
ArrayMeta map[string]CSNFArrayMeta
}
CSNFTransformer performs CSNF normalization on JSON values.
func NewCSNFTransformer ¶
func NewCSNFTransformer() *CSNFTransformer
NewCSNFTransformer creates a new CSNF transformer.
func (*CSNFTransformer) Transform ¶
func (t *CSNFTransformer) Transform(v any) (any, error)
Transform applies CSNF normalization to a value. Per Section A.4 - CSNF Transform Definition (csnf-v1).
func (*CSNFTransformer) WithArrayMeta ¶
func (t *CSNFTransformer) WithArrayMeta(path string, meta CSNFArrayMeta) *CSNFTransformer
WithArrayMeta registers array metadata for a path.
type CSNFValidationResult ¶
CSNFValidationResult contains the results of CSNF validation.
func ValidateCSNFStrict ¶
func ValidateCSNFStrict(v any, schema *CSNFSchema) CSNFValidationResult
ValidateCSNFStrict performs strict CSNF validation per Addendum 6.5.
type CausationContext ¶
type CausationContext struct {
ParentEventID string `json:"parent_event_id,omitempty"`
CorrelationID string `json:"correlation_id,omitempty"`
CausationChain []string `json:"causation_chain,omitempty"`
}
CausationContext tracks event causality chain.
type CompatibilityPolicy ¶
type CompatibilityPolicy string
CompatibilityPolicy defines backward/forward compatibility.
const (
	CompatibilityPolicyStrict   CompatibilityPolicy = "STRICT"   // Exact version match
	CompatibilityPolicyBackward CompatibilityPolicy = "BACKWARD" // Newer can read older
	CompatibilityPolicyForward  CompatibilityPolicy = "FORWARD"  // Older can read newer
	CompatibilityPolicyFull     CompatibilityPolicy = "FULL"     // Backward + Forward
)
type ConcurrencyArtifact ¶
type ConcurrencyArtifact struct {
Type ConcurrencyArtifactType `json:"type"`
DependencyGraph *DependencyGraph `json:"dependency_graph,omitempty"`
AttemptIndex *AttemptIndex `json:"attempt_index,omitempty"`
RetrySchedule *RetrySchedule `json:"retry_schedule,omitempty"`
ExecutionTrace *ExecutionTrace `json:"execution_trace,omitempty"`
}
ConcurrencyArtifact is a union type for all concurrency artifacts.
type ConcurrencyArtifactType ¶
type ConcurrencyArtifactType string
ConcurrencyArtifactType identifies the type of concurrency artifact.
const (
	// ConcurrencyArtifactDependencyGraph captures input dependencies.
	ConcurrencyArtifactDependencyGraph ConcurrencyArtifactType = "DEPENDENCY_GRAPH"
	// ConcurrencyArtifactAttemptIndex captures retry attempt tracking.
	ConcurrencyArtifactAttemptIndex ConcurrencyArtifactType = "ATTEMPT_INDEX"
	// ConcurrencyArtifactRetrySchedule captures retry scheduling.
	ConcurrencyArtifactRetrySchedule ConcurrencyArtifactType = "RETRY_SCHEDULE"
	// ConcurrencyArtifactExecutionTrace captures execution ordering.
	ConcurrencyArtifactExecutionTrace ConcurrencyArtifactType = "EXECUTION_TRACE"
)
type ConflictPolicy ¶
type ConflictPolicy string
ConflictPolicy defines how conflicts are resolved.
const (
	// ConflictPolicyVerifierWins - the verifier's value takes precedence
	ConflictPolicyVerifierWins ConflictPolicy = "verifier_wins"
	// ConflictPolicyFirstSuccess - first successful write wins
	ConflictPolicyFirstSuccess ConflictPolicy = "first_success"
	// ConflictPolicyQuorum - requires quorum agreement
	ConflictPolicyQuorum ConflictPolicy = "quorum"
	// ConflictPolicyLastWriteWins - last write wins (by sequence number)
	ConflictPolicyLastWriteWins ConflictPolicy = "last_write_wins"
)
type ContextGuard ¶
type ContextGuard struct {
// contains filtered or unexported fields
}
ContextGuard validates environment fingerprints to detect environment recreation or tampering attacks (e.g., Kiro-style env-recreate).
The guard captures a boot-time fingerprint of the execution environment and compares it against the current fingerprint before each enforcement decision. Any mismatch results in a CONTEXT_MISMATCH denial.
Design invariants:
- Boot fingerprint is captured once and immutable
- Comparison is deterministic and repeatable
- Nil guard (no boot fingerprint) is a pass-through for backward compat
- Clock is injected for deterministic testing
func NewContextGuard ¶
func NewContextGuard() *ContextGuard
NewContextGuard creates a ContextGuard with a boot-time fingerprint computed from the current environment.
func NewContextGuardWithFingerprint ¶
func NewContextGuardWithFingerprint(fingerprint string) *ContextGuard
NewContextGuardWithFingerprint creates a ContextGuard with an explicit boot fingerprint. Used for testing or when the fingerprint is provided externally (e.g., from a signed boot attestation).
func (*ContextGuard) BootFingerprint ¶
func (cg *ContextGuard) BootFingerprint() string
BootFingerprint returns the boot-time fingerprint.
func (*ContextGuard) Stats ¶
func (cg *ContextGuard) Stats() (validations int64, mismatches int64)
Stats returns validation statistics.
func (*ContextGuard) Validate ¶
func (cg *ContextGuard) Validate(currentFingerprint string) error
Validate compares the provided fingerprint against the boot fingerprint. Returns nil on match, ContextMismatchError on mismatch.
If the boot fingerprint is empty (no guard configured), validation is a no-op pass-through for backward compatibility.
func (*ContextGuard) ValidateCurrent ¶
func (cg *ContextGuard) ValidateCurrent() error
ValidateCurrent computes the current environment fingerprint and validates it against the boot fingerprint. Convenience method for real-time validation without external fingerprint computation.
func (*ContextGuard) WithClock ¶
func (cg *ContextGuard) WithClock(clock func() time.Time) *ContextGuard
WithClock overrides the clock for deterministic testing.
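The fingerprint-comparison idea behind ContextGuard can be sketched without this package's types. The `fingerprint` and `validate` helpers below are hypothetical: they hash a sorted key=value rendering of an environment map so the digest is deterministic, then fail closed on mismatch, mirroring the CONTEXT_MISMATCH behavior and the empty-fingerprint pass-through described above.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
	"strings"
)

// fingerprint hashes a sorted key=value rendering of the environment so the
// result does not depend on map iteration order.
func fingerprint(env map[string]string) string {
	keys := make([]string, 0, len(env))
	for k := range env {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	var b strings.Builder
	for _, k := range keys {
		b.WriteString(k + "=" + env[k] + "\n")
	}
	sum := sha256.Sum256([]byte(b.String()))
	return hex.EncodeToString(sum[:])
}

// validate mimics the guard: an empty boot fingerprint is a pass-through
// for backward compatibility; any mismatch is a denial.
func validate(boot, current string) error {
	if boot == "" {
		return nil
	}
	if boot != current {
		return fmt.Errorf("CONTEXT_MISMATCH: boot=%s current=%s", boot[:8], current[:8])
	}
	return nil
}

func main() {
	boot := fingerprint(map[string]string{"PATH": "/usr/bin", "HOME": "/root"})
	tampered := fingerprint(map[string]string{"PATH": "/tmp/evil", "HOME": "/root"})
	fmt.Println(validate(boot, boot) == nil, validate(boot, tampered) != nil) // true true
}
```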
type ContextMismatchError ¶
type ContextMismatchError struct {
BootFingerprint string `json:"boot_fingerprint"`
CurrentFingerprint string `json:"current_fingerprint"`
BootTime time.Time `json:"boot_time"`
DetectedAt time.Time `json:"detected_at"`
}
ContextMismatchError is returned when the current environment fingerprint does not match the boot-time fingerprint.
func (*ContextMismatchError) Error ¶
func (e *ContextMismatchError) Error() string
type CoordinationMode ¶
type CoordinationMode string
CoordinationMode defines how subtasks are coordinated.
const (
	// ModeWaterfall executes subtasks sequentially (single-agent).
	ModeWaterfall CoordinationMode = "waterfall"
	// ModeParallel executes independent subtasks concurrently.
	ModeParallel CoordinationMode = "parallel"
	// ModeHybrid uses centralized orchestration with selective parallelism.
	ModeHybrid CoordinationMode = "hybrid"
)
type CriticalPathMetric ¶
type CriticalPathMetric struct {
// Current stage index
StageIndex int `json:"stage_index"`
// Orchestration steps at each stage (S_main)
OrchestrationSteps []int `json:"orchestration_steps"`
// Max subagent steps at each stage (max_i S_sub,i)
MaxSubagentSteps []int `json:"max_subagent_steps"`
// Total critical path length
TotalCriticalSteps int `json:"total_critical_steps"`
// Timestamp when started
StartTime time.Time `json:"start_time"`
// contains filtered or unexported fields
}
CriticalPathMetric tracks the critical path through parallel execution. Per PARL: Critical Steps = orchestration overhead + max parallel branch.
func NewCriticalPathMetric ¶
func NewCriticalPathMetric() *CriticalPathMetric
NewCriticalPathMetric creates a new critical path tracker.
func (*CriticalPathMetric) GetParallelEfficiency ¶
func (m *CriticalPathMetric) GetParallelEfficiency() float64
GetParallelEfficiency returns the ratio of work done vs critical path.
func (*CriticalPathMetric) GetTotalCriticalSteps ¶
func (m *CriticalPathMetric) GetTotalCriticalSteps() int
GetTotalCriticalSteps returns the current critical path length.
func (*CriticalPathMetric) Hash ¶
func (m *CriticalPathMetric) Hash() string
Hash returns a deterministic hash of the metric state.
func (*CriticalPathMetric) RecordStage ¶
func (m *CriticalPathMetric) RecordStage(orchestrationSteps int, branchSteps []int)
RecordStage records the completion of a stage with parallel branches.
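The PARL formula T = Σₜ [ S_main(t) + max_i S_sub,i(t) ] can be computed with a few lines. The `stage` type and `criticalSteps` function below are illustrative stand-ins, not this package's API.

```go
package main

import "fmt"

// stage holds a stage's orchestration steps plus per-branch subagent steps.
type stage struct {
	orchestration int
	branches      []int
}

// criticalSteps computes T = sum over stages of (orchestration steps plus
// the longest parallel branch): sequential overhead dominates, while
// parallel branches only cost as much as their slowest member.
func criticalSteps(stages []stage) int {
	total := 0
	for _, s := range stages {
		longest := 0
		for _, b := range s.branches {
			if b > longest {
				longest = b
			}
		}
		total += s.orchestration + longest
	}
	return total
}

func main() {
	// Stage 1: 2 orchestration steps, branches of 5 and 3 -> 2 + 5 = 7.
	// Stage 2: 1 orchestration step, single branch of 4   -> 1 + 4 = 5.
	fmt.Println(criticalSteps([]stage{
		{orchestration: 2, branches: []int{5, 3}},
		{orchestration: 1, branches: []int{4}},
	})) // 12
}
```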
type CriticalPathScheduler ¶
type CriticalPathScheduler struct {
*InMemoryScheduler
// contains filtered or unexported fields
}
CriticalPathScheduler extends InMemoryScheduler with parallel optimization.
func NewCriticalPathScheduler ¶
func NewCriticalPathScheduler(parallelBudget int) *CriticalPathScheduler
NewCriticalPathScheduler creates a scheduler with critical path optimization.
func (*CriticalPathScheduler) GetMetrics ¶
func (s *CriticalPathScheduler) GetMetrics() *CriticalPathMetric
GetMetrics returns the critical path metrics.
func (*CriticalPathScheduler) IdentifyIndependentGroups ¶
func (s *CriticalPathScheduler) IdentifyIndependentGroups(events []*SchedulerEvent) []*IndependentGroup
IdentifyIndependentGroups partitions events into parallelizable groups. Events with no shared dependencies can execute concurrently.
func (*CriticalPathScheduler) OptimizeForCriticalPath ¶
func (s *CriticalPathScheduler) OptimizeForCriticalPath(ctx context.Context, events []*SchedulerEvent) []*SchedulerEvent
OptimizeForCriticalPath reorders events to minimize critical path length.
func (*CriticalPathScheduler) ParallelExecute ¶
func (s *CriticalPathScheduler) ParallelExecute(ctx context.Context, events []*SchedulerEvent, executor func(*SchedulerEvent) error) error
ParallelExecute executes events in parallel lanes while maintaining determinism.
func (*CriticalPathScheduler) ScheduleBatch ¶
func (s *CriticalPathScheduler) ScheduleBatch(ctx context.Context, events []*SchedulerEvent) error
ScheduleBatch schedules multiple events with critical path optimization.
type CryptoShredEvent ¶
type CryptoShredEvent struct {
EventID string `json:"event_id"`
PolicyID string `json:"policy_id"`
KeyIDs []string `json:"key_ids"` // DEKs that were shredded
Scope string `json:"scope"`
ScopeIdentifier string `json:"scope_identifier"` // tenant_id, record_id, etc.
ShreddedAt time.Time `json:"shredded_at"`
VerificationHash string `json:"verification_hash"` // Proof that keys were destroyed
}
CryptoShredEvent records a crypto-shredding operation.
type CryptoShredPolicy ¶
type CryptoShredPolicy struct {
PolicyID string `json:"policy_id"`
Scope string `json:"scope"` // "record", "tenant", "jurisdiction"
TriggerConditions []string `json:"trigger_conditions"` // Events that trigger shredding
RetentionPeriod time.Duration `json:"retention_period"`
GracePeriod time.Duration `json:"grace_period"`
}
CryptoShredPolicy defines when data encryption keys should be deleted.
type DecimalRounding ¶
type DecimalRounding string
DecimalRounding defines rounding modes for decimal normalization.
const (
	DecimalRoundingDown     DecimalRounding = "DOWN"
	DecimalRoundingHalfUp   DecimalRounding = "HALF_UP"
	DecimalRoundingHalfEven DecimalRounding = "HALF_EVEN"
)
type DecimalSchema ¶
type DecimalSchema struct {
Scale int `json:"x-decimal-scale"` // Max fractional digits
Rounding DecimalRounding `json:"x-decimal-rounding"` // Rounding mode
MinValue string `json:"x-decimal-min,omitempty"`
MaxValue string `json:"x-decimal-max,omitempty"`
}
DecimalSchema defines the schema constraints for a decimal field.
type DependencyEdge ¶
type DependencyEdge struct {
FromNode string `json:"from_node"`
ToNode string `json:"to_node"`
EdgeType string `json:"edge_type"` // DATA, CONTROL, TEMPORAL
}
DependencyEdge represents a dependency relationship.
type DependencyGraph ¶
type DependencyGraph struct {
GraphID string `json:"graph_id"`
ReducerID string `json:"reducer_id"`
CreatedAt time.Time `json:"created_at"`
Nodes []DependencyNode `json:"nodes"`
Edges []DependencyEdge `json:"edges"`
RootNodes []string `json:"root_nodes"`
LeafNodes []string `json:"leaf_nodes"`
Hash string `json:"hash"`
}
DependencyGraph captures all input dependencies for a reducer. Per Section F.1: dependency_graph artifact captures scheduler influence.
func NewDependencyGraph ¶
func NewDependencyGraph(graphID, reducerID string) *DependencyGraph
NewDependencyGraph creates a new dependency graph.
func (*DependencyGraph) AddEdge ¶
func (g *DependencyGraph) AddEdge(fromNode, toNode, edgeType string)
AddEdge adds an edge to the dependency graph.
func (*DependencyGraph) AddNode ¶
func (g *DependencyGraph) AddNode(node DependencyNode)
AddNode adds a node to the dependency graph.
func (*DependencyGraph) Finalize ¶
func (g *DependencyGraph) Finalize()
Finalize computes the graph hash and identifies root/leaf nodes.
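The root/leaf identification step of Finalize can be sketched independently: roots have no incoming edges, leaves have no outgoing edges, and the output is sorted for determinism. The `edge` type and `rootsAndLeaves` function are hypothetical names for illustration.

```go
package main

import (
	"fmt"
	"sort"
)

type edge struct{ from, to string }

// rootsAndLeaves classifies nodes by edge direction: roots receive no
// edges, leaves emit none. Sorting keeps the result deterministic.
func rootsAndLeaves(nodes []string, edges []edge) (roots, leaves []string) {
	hasIn, hasOut := map[string]bool{}, map[string]bool{}
	for _, e := range edges {
		hasOut[e.from] = true
		hasIn[e.to] = true
	}
	for _, n := range nodes {
		if !hasIn[n] {
			roots = append(roots, n)
		}
		if !hasOut[n] {
			leaves = append(leaves, n)
		}
	}
	sort.Strings(roots)
	sort.Strings(leaves)
	return roots, leaves
}

func main() {
	roots, leaves := rootsAndLeaves(
		[]string{"a", "b", "c"},
		[]edge{{"a", "b"}, {"b", "c"}},
	)
	fmt.Println(roots, leaves) // [a] [c]
}
```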
type DependencyNode ¶
type DependencyNode struct {
NodeID string `json:"node_id"`
NodeType string `json:"node_type"`
DependsOn []string `json:"depends_on,omitempty"`
ProducedAt int64 `json:"produced_at"`
ContentHash string `json:"content_hash"`
}
DependencyNode represents a node in the dependency graph. Per Section F.1: All concurrency influence captured as explicit artifacts.
type DeterministicPRNG ¶
type DeterministicPRNG struct {
// contains filtered or unexported fields
}
DeterministicPRNG provides reproducible random numbers. Per Section 2.4 - all randomness MUST be deterministic and logged.
func NewDeterministicPRNG ¶
func NewDeterministicPRNG(config PRNGConfig, seed []byte, loopID string, log EventLog) (*DeterministicPRNG, error)
NewDeterministicPRNG creates a new PRNG with the given seed.
func (*DeterministicPRNG) Bytes ¶
func (p *DeterministicPRNG) Bytes(n int) []byte
Bytes returns n deterministic random bytes.
func (*DeterministicPRNG) Float64 ¶
func (p *DeterministicPRNG) Float64() float64
Float64 returns a deterministic float64 in [0, 1).
func (*DeterministicPRNG) Intn ¶
func (p *DeterministicPRNG) Intn(n int) int
Intn returns a deterministic int in [0, n).
func (*DeterministicPRNG) LoopID ¶
func (p *DeterministicPRNG) LoopID() string
LoopID returns the loop ID.
func (*DeterministicPRNG) Seed ¶
func (p *DeterministicPRNG) Seed() string
Seed returns the current seed (for logging).
func (*DeterministicPRNG) Uint64 ¶
func (p *DeterministicPRNG) Uint64() uint64
Uint64 returns a deterministic uint64.
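One way a seeded, reproducible stream like this can be built is by hashing seed||counter. The `ctrPRNG` type below is an illustrative construction only; the package's actual algorithm is configured via PRNGConfig and is not specified here.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// ctrPRNG derives a reproducible byte stream by hashing seed||counter.
// Same seed, same call sequence -> same outputs.
type ctrPRNG struct {
	seed []byte
	ctr  uint64
}

func (p *ctrPRNG) Uint64() uint64 {
	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], p.ctr)
	p.ctr++
	sum := sha256.Sum256(append(append([]byte(nil), p.seed...), buf[:]...))
	return binary.BigEndian.Uint64(sum[:8])
}

// Intn reduces Uint64 into [0, n); the modulo bias is acceptable here
// because only reproducibility, not uniformity, is being demonstrated.
func (p *ctrPRNG) Intn(n int) int {
	return int(p.Uint64() % uint64(n))
}

func main() {
	a := &ctrPRNG{seed: []byte("loop-1")}
	b := &ctrPRNG{seed: []byte("loop-1")}
	fmt.Println(a.Uint64() == b.Uint64(), a.Intn(10) == b.Intn(10)) // true true
}
```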
type DeterministicReducer ¶
type DeterministicReducer struct {
// contains filtered or unexported fields
}
DeterministicReducer implements the reducer with stable sorting.
func NewDeterministicReducer ¶
func NewDeterministicReducer(policy ConflictPolicy) *DeterministicReducer
NewDeterministicReducer creates a new deterministic reducer.
func (*DeterministicReducer) Policy ¶
func (r *DeterministicReducer) Policy() ConflictPolicy
Policy returns the current conflict policy.
func (*DeterministicReducer) Reduce ¶
func (r *DeterministicReducer) Reduce(ctx context.Context, inputs []ReducerInput) (*ReducerOutput, error)
Reduce applies inputs in deterministic order. INVARIANT: Same inputs MUST produce same output regardless of input order.
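The order-independence invariant can be demonstrated with a minimal sketch: sort inputs by a stable key before folding, so the arrival order never matters. The `input` type and `reduceLastWriteWins` function are illustrative (modeling the last_write_wins policy by sequence number), not this package's signatures.

```go
package main

import (
	"fmt"
	"sort"
)

type input struct {
	key string
	seq uint64
	val string
}

// reduceLastWriteWins sorts inputs by (sequence, key) before folding, so
// the same input set produces the same output regardless of arrival order.
func reduceLastWriteWins(inputs []input) map[string]string {
	sorted := append([]input(nil), inputs...)
	sort.Slice(sorted, func(i, j int) bool {
		if sorted[i].seq != sorted[j].seq {
			return sorted[i].seq < sorted[j].seq
		}
		return sorted[i].key < sorted[j].key
	})
	out := map[string]string{}
	for _, in := range sorted {
		out[in.key] = in.val // later sequence numbers overwrite earlier ones
	}
	return out
}

func main() {
	a := reduceLastWriteWins([]input{{"x", 2, "new"}, {"x", 1, "old"}})
	b := reduceLastWriteWins([]input{{"x", 1, "old"}, {"x", 2, "new"}})
	fmt.Println(a["x"], b["x"]) // new new
}
```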
type DeterministicScheduler ¶
type DeterministicScheduler interface {
// Schedule adds an event to the scheduler.
Schedule(ctx context.Context, event *SchedulerEvent) error
// Next returns the next event to process, blocking if none available.
Next(ctx context.Context) (*SchedulerEvent, error)
// Peek returns the next event without removing it.
Peek(ctx context.Context) (*SchedulerEvent, error)
// Len returns the number of pending events.
Len() int
// SnapshotHash returns a deterministic hash of the current queue state.
SnapshotHash() string
}
DeterministicScheduler provides stable ordering for kernel events. Per Section 1.2:

- Kernel MUST schedule events in deterministic order
- If two events have the same timestamp, ordering MUST be stable (sort_key)
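The stable-ordering rule is simple to sketch: sort by timestamp, breaking ties with the sort key. The `event` type and `stableOrder` helper below are illustrative names, not this package's implementation.

```go
package main

import (
	"fmt"
	"sort"
)

type event struct {
	timestamp int64
	sortKey   string
}

// stableOrder sorts by timestamp and breaks ties with sortKey, so events
// sharing a timestamp always come out in the same order on every run.
func stableOrder(events []event) []event {
	out := append([]event(nil), events...)
	sort.Slice(out, func(i, j int) bool {
		if out[i].timestamp != out[j].timestamp {
			return out[i].timestamp < out[j].timestamp
		}
		return out[i].sortKey < out[j].sortKey
	})
	return out
}

func main() {
	out := stableOrder([]event{{10, "b"}, {10, "a"}, {5, "z"}})
	fmt.Println(out[0].sortKey, out[1].sortKey, out[2].sortKey) // z a b
}
```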
type DisclosureRule ¶
type DisclosureRule struct {
PathPattern string `json:"path_pattern"` // Glob-style pattern
Action string `json:"action"` // "DISCLOSE", "SEAL", "REDACT"
Reason string `json:"reason,omitempty"`
}
DisclosureRule defines how to handle a path pattern.
type EffectBoundary ¶
type EffectBoundary interface {
// Submit submits an effect request for policy evaluation.
// Returns the effect ID and initial lifecycle state.
Submit(ctx context.Context, req *EffectRequest) (*EffectLifecycle, error)
// Approve marks an effect as approved by the PDP.
Approve(ctx context.Context, effectID, decisionID string) error
// Deny marks an effect as denied by the PDP.
Deny(ctx context.Context, effectID, decisionID, reason string) error
// Execute marks an effect as executing.
Execute(ctx context.Context, effectID string) error
// Complete marks an effect as completed.
Complete(ctx context.Context, effectID, evidencePackID string) error
// GetLifecycle returns the current lifecycle state.
GetLifecycle(ctx context.Context, effectID string) (*EffectLifecycle, error)
// CheckIdempotency checks if an effect with this key was already processed.
CheckIdempotency(ctx context.Context, key string) (bool, string, error)
}
EffectBoundary enforces the effect interception boundary. All effects MUST pass through this boundary before execution.
type EffectContext ¶
type EffectContext struct {
ModeID string `json:"mode_id,omitempty"`
LoopID string `json:"loop_id,omitempty"`
PhenotypeHash string `json:"phenotype_hash,omitempty"`
EnvironmentID string `json:"environment_id,omitempty"`
}
EffectContext provides contextual information.
type EffectLifecycle ¶
type EffectLifecycle struct {
State string `json:"state"` // pending, approved, denied, executing, completed, failed, compensated
PDPDecisionID string `json:"pdp_decision_id,omitempty"`
ExecutedAt time.Time `json:"executed_at,omitempty"`
CompletedAt time.Time `json:"completed_at,omitempty"`
EvidencePackID string `json:"evidence_pack_id,omitempty"`
}
EffectLifecycle tracks effect state transitions.
type EffectPayload ¶
type EffectPayload struct {
PayloadHash string `json:"payload_hash"`
Data map[string]interface{} `json:"data,omitempty"`
}
EffectPayload contains the effect data.
type EffectRequest ¶
type EffectRequest struct {
EffectID string `json:"effect_id"`
EffectType EffectType `json:"effect_type"`
SubmittedAt time.Time `json:"submitted_at"`
Subject EffectSubject `json:"subject"`
Payload EffectPayload `json:"payload"`
Idempotency *IdempotencyConfig `json:"idempotency,omitempty"`
Context *EffectContext `json:"context,omitempty"`
}
EffectRequest represents an effect submitted to the kernel boundary. Per Section 1.4 - Effect Interception Boundary
type EffectSubject ¶
type EffectSubject struct {
SubjectID string `json:"subject_id"`
SubjectType string `json:"subject_type"` // human, module, control_loop, external_system
SessionID string `json:"session_id,omitempty"`
AttestationRef string `json:"attestation_ref,omitempty"`
}
EffectSubject represents the actor submitting an effect.
type EffectType ¶
type EffectType string
EffectType defines canonical effect types per Section 8.
const (
	EffectTypeDataWrite        EffectType = "DATA_WRITE"
	EffectTypeFundsTransfer    EffectType = "FUNDS_TRANSFER"
	EffectTypePermissionChange EffectType = "PERMISSION_CHANGE"
	EffectTypeDeploy           EffectType = "DEPLOY"
	EffectTypeNotify           EffectType = "NOTIFY"
	EffectTypeModuleInstall    EffectType = "MODULE_INSTALL"
	EffectTypeConfigChange     EffectType = "CONFIG_CHANGE"
	EffectTypeAuditLog         EffectType = "AUDIT_LOG"
	EffectTypeExternalAPICall  EffectType = "EXTERNAL_API_CALL"
)
type EntropyContext ¶
type EntropyContext struct {
Seed string `json:"seed,omitempty"`
PRNGAlgorithm string `json:"prng_algorithm,omitempty"`
LoopID string `json:"loop_id,omitempty"`
}
EntropyContext tracks PRNG seed per Section 2.4.
type EnvelopeRef ¶
type EnvelopeRef struct {
RefID string `json:"ref_id"`
WrappedKeyID string `json:"wrapped_key_id"`
Algorithm string `json:"algorithm"`
CiphertextHash string `json:"ciphertext_hash"`
StorageLocation string `json:"storage_location"`
}
EnvelopeRef references envelope-encrypted data. Per Addendum 8.X.5: Large data encrypted under DEK, wrapped by KEK.
type ErrMemoryTampered ¶
ErrMemoryTampered is returned when a read detects that the stored value no longer matches its recorded hash. This is a fail-closed signal — callers MUST NOT trust the returned data.
func (*ErrMemoryTampered) Error ¶
func (e *ErrMemoryTampered) Error() string
type ErrorCause ¶
type ErrorCause struct {
ErrorCode string `json:"error_code"`
At string `json:"at"` // JSON Pointer path
}
ErrorCause represents a single cause in the error chain.
type ErrorClassification ¶
type ErrorClassification string
ErrorClassification defines the retry behavior for errors.
const (
	// ErrorClassRetryable indicates a transient failure that may succeed on retry.
	ErrorClassRetryable ErrorClassification = "RETRYABLE"
	// ErrorClassNonRetryable indicates a permanent failure.
	ErrorClassNonRetryable ErrorClassification = "NON_RETRYABLE"
	// ErrorClassIdempotentSafe indicates the operation was already completed.
	ErrorClassIdempotentSafe ErrorClassification = "IDEMPOTENT_SAFE"
	// ErrorClassCompensationRequired indicates partial failure requiring compensation.
	ErrorClassCompensationRequired ErrorClassification = "COMPENSATION_REQUIRED"
)
type ErrorIR ¶
type ErrorIR struct {
// RFC 9457 standard fields
Type string `json:"type"` // URI identifying the problem type
Title string `json:"title"` // Human-readable summary
Status int `json:"status"` // HTTP status code
Detail string `json:"detail,omitempty"` // Human-readable explanation
Instance string `json:"instance,omitempty"` // URI for this occurrence
// HELM extensions
HELM HELMErrorExtensions `json:"helm"`
}
ErrorIR represents a canonical error per RFC 9457 + HELM extensions.
Example ¶
ExampleErrorIR demonstrates ErrorIR construction.
err := NewErrorIR(ErrCodeSchemaMismatch).
	WithTitle("Schema Validation Failed").
	WithDetail("Field 'amount' must be an integer").
	Build()

data, _ := json.MarshalIndent(err, "", "  ")
_ = data // Would print JSON
func SelectCanonicalError ¶
SelectCanonicalError selects the canonical error from multiple candidates.
type ErrorIRBuilder ¶
type ErrorIRBuilder struct {
// contains filtered or unexported fields
}
ErrorIRBuilder provides a fluent interface for building ErrorIR.
func NewErrorIR ¶
func NewErrorIR(errorCode string) *ErrorIRBuilder
NewErrorIR creates a new ErrorIR builder.
func (*ErrorIRBuilder) Build ¶
func (b *ErrorIRBuilder) Build() ErrorIR
Build returns the constructed ErrorIR.
func (*ErrorIRBuilder) WithCause ¶
func (b *ErrorIRBuilder) WithCause(errorCode, at string) *ErrorIRBuilder
WithCause adds a cause to the error chain.
func (*ErrorIRBuilder) WithClassification ¶
func (b *ErrorIRBuilder) WithClassification(c ErrorClassification) *ErrorIRBuilder
WithClassification overrides the error classification.
func (*ErrorIRBuilder) WithDetail ¶
func (b *ErrorIRBuilder) WithDetail(detail string) *ErrorIRBuilder
WithDetail sets the error detail.
func (*ErrorIRBuilder) WithInstance ¶
func (b *ErrorIRBuilder) WithInstance(instance string) *ErrorIRBuilder
WithInstance sets the instance URI.
func (*ErrorIRBuilder) WithStatus ¶
func (b *ErrorIRBuilder) WithStatus(status int) *ErrorIRBuilder
WithStatus overrides the HTTP status code.
func (*ErrorIRBuilder) WithTitle ¶
func (b *ErrorIRBuilder) WithTitle(title string) *ErrorIRBuilder
WithTitle sets the error title.
type EssentialVariableMonitor ¶
type EssentialVariableMonitor struct {
// contains filtered or unexported fields
}
EssentialVariableMonitor monitors essential variables with windowing and triggers.
func NewEssentialVariableMonitor ¶
func NewEssentialVariableMonitor() *EssentialVariableMonitor
NewEssentialVariableMonitor creates a new monitor.
func (*EssentialVariableMonitor) AddTrigger ¶
func (m *EssentialVariableMonitor) AddTrigger(trigger *ViolationTrigger)
AddTrigger adds a violation trigger for a variable.
func (*EssentialVariableMonitor) GetStatistics ¶
func (m *EssentialVariableMonitor) GetStatistics(variableID string) (avg, min, max float64, count int)
GetStatistics returns windowed statistics for a variable.
func (*EssentialVariableMonitor) RecordValue ¶
func (m *EssentialVariableMonitor) RecordValue(variableID string, value float64, timestamp time.Time) []error
RecordValue records a value and checks triggers.
func (*EssentialVariableMonitor) RegisterVariable ¶
func (m *EssentialVariableMonitor) RegisterVariable(variableID string, windowSize time.Duration, maxSamples int)
RegisterVariable registers a variable with a window.
type EvaluationWindow ¶
type EvaluationWindow struct {
// contains filtered or unexported fields
}
EvaluationWindow tracks values over a rolling window.
func NewEvaluationWindow ¶
func NewEvaluationWindow(windowSize time.Duration, maxSamples int) *EvaluationWindow
NewEvaluationWindow creates a new evaluation window.
func (*EvaluationWindow) Add ¶
func (w *EvaluationWindow) Add(value float64, timestamp time.Time)
Add adds a sample to the window.
func (*EvaluationWindow) Average ¶
func (w *EvaluationWindow) Average() float64
Average returns the average value in the window.
func (*EvaluationWindow) Count ¶
func (w *EvaluationWindow) Count() int
Count returns the number of samples in the window.
func (*EvaluationWindow) Max ¶
func (w *EvaluationWindow) Max() float64
Max returns the maximum value in the window.
func (*EvaluationWindow) Min ¶
func (w *EvaluationWindow) Min() float64
Min returns the minimum value in the window.
type EventEnvelope ¶
type EventEnvelope struct {
EventID string `json:"event_id"`
EventType string `json:"event_type"`
SequenceNumber uint64 `json:"sequence_number"`
ObservedAt time.Time `json:"observed_at"`
ReceivedAt time.Time `json:"received_at"`
CommittedAt time.Time `json:"committed_at,omitempty"`
PayloadHash string `json:"payload_hash"`
Payload map[string]interface{} `json:"payload,omitempty"`
Causation *CausationContext `json:"causation,omitempty"`
Entropy *EntropyContext `json:"entropy,omitempty"`
}
EventEnvelope represents a kernel event with normative time semantics. Per Section 2.3 - Time Semantics
type EventLog ¶
type EventLog interface {
// Append adds an event to the log. Returns committed sequence number.
Append(ctx context.Context, event *EventEnvelope) (uint64, error)
// Get retrieves an event by sequence number.
Get(ctx context.Context, seq uint64) (*EventEnvelope, error)
// Range returns events in [start, end] sequence range.
Range(ctx context.Context, start, end uint64) ([]*EventEnvelope, error)
// LastSequence returns the highest committed sequence number.
LastSequence() uint64
// Hash returns the cumulative hash of all committed events.
Hash() string
}
EventLog defines the authoritative event log interface. Per Section 1.3 - Authoritative Event Log
type EvidenceView ¶
type EvidenceView struct {
ViewID string `json:"view_id"`
EvidencePackHash string `json:"evidence_pack_hash"`
ViewPolicyID string `json:"view_policy_id"`
Disclosed map[string]any `json:"disclosed"`
Sealed []SealedField `json:"sealed"`
Proofs []InclusionProof `json:"proofs"`
ViewHash string `json:"view_hash"`
CreatedAt string `json:"created_at"`
}
EvidenceView is a derived view with selective disclosure.
func DeriveEvidenceView ¶
func DeriveEvidenceView(pack map[string]any, tree *MerkleTree, policy ViewPolicy, timestamp string) (*EvidenceView, error)
DeriveEvidenceView creates an EvidenceView from an EvidencePack. Per Addendum 12.X.4: Same inputs MUST yield identical outputs.
type ExecutionEntry ¶
type ExecutionEntry struct {
StepNum int `json:"step_num"`
EventID string `json:"event_id"`
EventType string `json:"event_type"`
ProcessedAt time.Time `json:"processed_at"`
InputHash string `json:"input_hash"`
OutputHash string `json:"output_hash"`
}
ExecutionEntry represents one execution step.
type ExecutionTrace ¶
type ExecutionTrace struct {
TraceID string `json:"trace_id"`
ReducerID string `json:"reducer_id"`
Entries []ExecutionEntry `json:"entries"`
Hash string `json:"hash"`
}
ExecutionTrace captures the order of execution for replay. Per Section F.4: Execution ordering must be reproducible.
func NewExecutionTrace ¶
func NewExecutionTrace(traceID, reducerID string) *ExecutionTrace
NewExecutionTrace creates a new execution trace.
func (*ExecutionTrace) AddEntry ¶
func (t *ExecutionTrace) AddEntry(eventID, eventType, inputHash, outputHash string)
AddEntry adds an execution entry.
func (*ExecutionTrace) Finalize ¶
func (t *ExecutionTrace) Finalize()
Finalize computes the trace hash.
func (*ExecutionTrace) VerifyDeterminism ¶
func (t *ExecutionTrace) VerifyDeterminism(other *ExecutionTrace) bool
VerifyDeterminism checks if two traces are identical.
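A hash-chain over execution entries is one way such a determinism check can work: folding each entry into a running digest makes any change of content or order detectable by comparing a single value. The `traceHash` helper is an illustrative sketch, not this package's hashing scheme.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// traceHash folds each entry into a running SHA-256 chain, so a change in
// either the content or the order of entries yields a different final hash.
func traceHash(entries []string) string {
	h := sha256.Sum256(nil)
	for _, e := range entries {
		h = sha256.Sum256(append(h[:], []byte(e)...))
	}
	return hex.EncodeToString(h[:])
}

func main() {
	run1 := traceHash([]string{"e1|in|out", "e2|in|out"})
	run2 := traceHash([]string{"e1|in|out", "e2|in|out"})
	reordered := traceHash([]string{"e2|in|out", "e1|in|out"})
	fmt.Println(run1 == run2, run1 == reordered) // true false
}
```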
type FreezeController ¶
type FreezeController struct {
// contains filtered or unexported fields
}
FreezeController implements a global kill-switch for all enforcement decisions.
When frozen, the Guardian MUST deny all incoming requests with reason code SYSTEM_FROZEN. The freeze state is stored as an atomic boolean for lock-free reads on the hot path. All transitions (freeze/unfreeze) emit a FreezeReceipt with cryptographic content hash for the audit trail.
Design invariants:
- Read path (IsFrozen) is lock-free via atomic.Bool
- Write path (Freeze/Unfreeze) is serialized via mutex
- All transitions are receipted and timestamped
- Clock is injected for deterministic testing
func NewFreezeController ¶
func NewFreezeController() *FreezeController
NewFreezeController creates a new FreezeController with the system clock.
func (*FreezeController) Freeze ¶
func (fc *FreezeController) Freeze(principal string) (*FreezeReceipt, error)
Freeze activates the global freeze. All subsequent enforcement decisions will be denied with SYSTEM_FROZEN until Unfreeze is called.
Returns a FreezeReceipt for the audit trail. Returns an error if already frozen.
func (*FreezeController) FreezeState ¶
func (fc *FreezeController) FreezeState() (bool, string, time.Time)
FreezeState returns the current freeze state details. Returns (isFrozen, principal, timestamp).
func (*FreezeController) IsFrozen ¶
func (fc *FreezeController) IsFrozen() bool
IsFrozen returns whether the system is currently in a global freeze state. This is the hot-path check used by the Guardian before any policy evaluation. It is lock-free for maximum throughput.
func (*FreezeController) Receipts ¶
func (fc *FreezeController) Receipts() []FreezeReceipt
Receipts returns all freeze/unfreeze transition receipts.
func (*FreezeController) Unfreeze ¶
func (fc *FreezeController) Unfreeze(principal string) (*FreezeReceipt, error)
Unfreeze deactivates the global freeze, allowing enforcement decisions to proceed normally.
Returns a FreezeReceipt for the audit trail. Returns an error if not frozen.
func (*FreezeController) WithClock ¶
func (fc *FreezeController) WithClock(clock func() time.Time) *FreezeController
WithClock overrides the clock for deterministic testing.
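The lock-free read / serialized write split described above can be sketched without this package's types. The `freezeController` below is a minimal hypothetical version: `IsFrozen` reads an `atomic.Bool` on the hot path, while `Freeze`/`Unfreeze` serialize through a mutex and reject redundant transitions (receipting is omitted for brevity).

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"sync/atomic"
)

// freezeController sketches the kill-switch: lock-free reads via
// atomic.Bool, writes serialized by a mutex.
type freezeController struct {
	frozen atomic.Bool
	mu     sync.Mutex
}

// IsFrozen is the hot-path check; it takes no lock.
func (fc *freezeController) IsFrozen() bool { return fc.frozen.Load() }

func (fc *freezeController) Freeze() error {
	fc.mu.Lock()
	defer fc.mu.Unlock()
	if fc.frozen.Load() {
		return errors.New("already frozen")
	}
	fc.frozen.Store(true)
	return nil
}

func (fc *freezeController) Unfreeze() error {
	fc.mu.Lock()
	defer fc.mu.Unlock()
	if !fc.frozen.Load() {
		return errors.New("not frozen")
	}
	fc.frozen.Store(false)
	return nil
}

func main() {
	fc := &freezeController{}
	_ = fc.Freeze()
	fmt.Println(fc.IsFrozen(), fc.Freeze() != nil) // true true
	_ = fc.Unfreeze()
	fmt.Println(fc.IsFrozen()) // false
}
```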
type FreezeReceipt ¶
type FreezeReceipt struct {
Action string `json:"action"` // "freeze" or "unfreeze"
Principal string `json:"principal"` // who performed the action
Timestamp time.Time `json:"timestamp"` // when the action occurred
ContentHash string `json:"content_hash"` // SHA-256 of the canonical receipt
}
FreezeReceipt is the audit record for a freeze/unfreeze transition.
type GovernancePDPAdapter ¶
type GovernancePDPAdapter struct {
// contains filtered or unexported fields
}
GovernancePDPAdapter adapts the governance.PolicyDecisionPoint to kernel.PDPEvaluator.
func NewGovernancePDPAdapter ¶
func NewGovernancePDPAdapter(pdp governance.PolicyDecisionPoint) *GovernancePDPAdapter
NewGovernancePDPAdapter creates an adapter for the governance PDP.
func (*GovernancePDPAdapter) Evaluate ¶
func (a *GovernancePDPAdapter) Evaluate(ctx context.Context, req *EffectRequest) (string, string, error)
Evaluate implements PDPEvaluator for the effect boundary.
type HELMErrorExtensions ¶
type HELMErrorExtensions struct {
ErrorCode string `json:"error_code"`
Namespace string `json:"namespace"`
Classification ErrorClassification `json:"classification"`
CanonicalCauseChain []ErrorCause `json:"canonical_cause_chain,omitempty"`
}
HELMErrorExtensions contains HELM-specific error fields.
type HTTPRequestCapture ¶
type HTTPRequestCapture struct {
Method string
URL string
Headers map[string]string
Body []byte
IdempotencyKey string
}
HTTPRequestCapture captures HTTP request details.
type HTTPResponseCapture ¶
HTTPResponseCapture captures HTTP response details.
type IOCaptureStore ¶
type IOCaptureStore interface {
// Record stores an I/O record.
Record(ctx context.Context, record *IORecord) error
// Get retrieves a record by ID.
Get(ctx context.Context, recordID string) (*IORecord, error)
// ListByEffect returns all records for an effect.
ListByEffect(ctx context.Context, effectID string) ([]*IORecord, error)
// ListByLoop returns all records for a control loop.
ListByLoop(ctx context.Context, loopID string) ([]*IORecord, error)
}
IOCaptureStore stores and retrieves I/O records.
type IOInterceptor ¶
type IOInterceptor struct {
// contains filtered or unexported fields
}
IOInterceptor intercepts and captures external I/O.
func NewIOInterceptor ¶
func NewIOInterceptor(store IOCaptureStore, log EventLog) *IOInterceptor
NewIOInterceptor creates a new I/O interceptor.
func (*IOInterceptor) CaptureRequest ¶
func (i *IOInterceptor) CaptureRequest(ctx context.Context, recordID, effectID, loopID string, req *HTTPRequestCapture) (*IORecord, error)
CaptureRequest captures an outgoing request.
func (*IOInterceptor) CaptureResponse ¶
func (i *IOInterceptor) CaptureResponse(ctx context.Context, record *IORecord, resp *HTTPResponseCapture, durationMs int64) error
CaptureResponse captures an incoming response.
func (*IOInterceptor) CaptureRetry ¶
func (i *IOInterceptor) CaptureRetry(ctx context.Context, record *IORecord, attempt int, delay time.Duration, reason string) error
CaptureRetry captures a retry attempt.
func (*IOInterceptor) RedactAndCommit ¶
func (i *IOInterceptor) RedactAndCommit(record *IORecord, fieldsToRedact []string, originalData map[string]interface{}) string
RedactAndCommit redacts sensitive fields and creates a cryptographic commitment.
type IORecord ¶
type IORecord struct {
RecordID string `json:"record_id"`
OperationType string `json:"operation_type"` // http_request, http_response, db_query, etc.
Timestamp time.Time `json:"timestamp"`
// Request details
RequestHash string `json:"request_hash"`
RequestMethod string `json:"request_method,omitempty"`
RequestURL string `json:"request_url,omitempty"`
RequestHeaders map[string]string `json:"request_headers,omitempty"`
RequestBodyHash string `json:"request_body_hash,omitempty"`
// Response details
ResponseHash string `json:"response_hash,omitempty"`
ResponseStatus int `json:"response_status,omitempty"`
ResponseHeaders map[string]string `json:"response_headers,omitempty"`
ResponseBodyHash string `json:"response_body_hash,omitempty"`
// Retry tracking
RetryAttempt int `json:"retry_attempt"`
RetryDelay string `json:"retry_delay,omitempty"`
RetryReason string `json:"retry_reason,omitempty"`
// Idempotency
IdempotencyKey string `json:"idempotency_key,omitempty"`
// Redaction
RedactedFields []string `json:"redacted_fields,omitempty"`
RedactionCommitment string `json:"redaction_commitment,omitempty"`
// Correlation
EffectID string `json:"effect_id,omitempty"`
LoopID string `json:"loop_id,omitempty"`
// Timing
DurationMs int64 `json:"duration_ms,omitempty"`
}
IORecord captures a single external I/O interaction.
type IdempotencyConfig ¶
type IdempotencyConfig struct {
Key string `json:"key"`
KeyDerivation string `json:"key_derivation"` // client_provided, content_hash, effect_id
WindowSeconds int `json:"window_seconds,omitempty"`
}
IdempotencyConfig defines idempotency enforcement.
type InMemoryBlobStore ¶
type InMemoryBlobStore struct {
// contains filtered or unexported fields
}
InMemoryBlobStore provides an in-memory content-addressed store.
func NewInMemoryBlobStore ¶
func NewInMemoryBlobStore() *InMemoryBlobStore
NewInMemoryBlobStore creates a new in-memory blob store.
func (*InMemoryBlobStore) Delete ¶
func (s *InMemoryBlobStore) Delete(ctx context.Context, address BlobAddress) error
Delete implements BlobStore.
func (*InMemoryBlobStore) Get ¶
func (s *InMemoryBlobStore) Get(ctx context.Context, address BlobAddress) (*RawRecord, error)
Get implements BlobStore.
func (*InMemoryBlobStore) Has ¶
func (s *InMemoryBlobStore) Has(ctx context.Context, address BlobAddress) bool
Has implements BlobStore.
func (*InMemoryBlobStore) List ¶
func (s *InMemoryBlobStore) List(ctx context.Context) ([]BlobAddress, error)
List implements BlobStore.
func (*InMemoryBlobStore) Store ¶
func (s *InMemoryBlobStore) Store(ctx context.Context, content []byte, mimeType string) (BlobAddress, error)
Store implements BlobStore.
func (*InMemoryBlobStore) StoreRedacted ¶
func (s *InMemoryBlobStore) StoreRedacted(ctx context.Context, contentHash string, mimeType string) (BlobAddress, error)
StoreRedacted implements BlobStore.
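The content-addressed contract behind BlobStore can be sketched as follows: the address is the SHA-256 of the content, so identical blobs deduplicate to one address and any retrieved blob can be verified against its own address. A minimal illustration with hypothetical names; the package's BlobAddress encoding may differ.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"errors"
	"fmt"
)

// blobStore keys blobs by the hex SHA-256 of their content.
type blobStore struct {
	blobs map[string][]byte
}

func newBlobStore() *blobStore { return &blobStore{blobs: map[string][]byte{}} }

// Store derives the address from the content itself.
func (s *blobStore) Store(content []byte) string {
	sum := sha256.Sum256(content)
	addr := hex.EncodeToString(sum[:])
	s.blobs[addr] = content
	return addr
}

// Get re-hashes on read so tampered content can never be
// returned under its original address.
func (s *blobStore) Get(addr string) ([]byte, error) {
	content, ok := s.blobs[addr]
	if !ok {
		return nil, errors.New("not found")
	}
	sum := sha256.Sum256(content)
	if hex.EncodeToString(sum[:]) != addr {
		return nil, errors.New("content does not match address")
	}
	return content, nil
}

func main() {
	s := newBlobStore()
	addr := s.Store([]byte("raw record"))
	got, _ := s.Get(addr)
	fmt.Println(string(got), len(addr)) // raw record 64
}
```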
type InMemoryEffectBoundary ¶
type InMemoryEffectBoundary struct {
// contains filtered or unexported fields
}
InMemoryEffectBoundary is a reference implementation.
func NewInMemoryEffectBoundary ¶
func NewInMemoryEffectBoundary(pdp PDPEvaluator, log EventLog) *InMemoryEffectBoundary
NewInMemoryEffectBoundary creates a new effect boundary.
func (*InMemoryEffectBoundary) Approve ¶
func (b *InMemoryEffectBoundary) Approve(ctx context.Context, effectID, decisionID string) error
Approve marks an effect as approved.
func (*InMemoryEffectBoundary) CheckIdempotency ¶
func (b *InMemoryEffectBoundary) CheckIdempotency(ctx context.Context, key string) (bool, string, error)
CheckIdempotency checks if an effect with this key was already processed.
func (*InMemoryEffectBoundary) Complete ¶
func (b *InMemoryEffectBoundary) Complete(ctx context.Context, effectID, evidencePackID string) error
Complete marks an effect as completed.
func (*InMemoryEffectBoundary) Deny ¶
func (b *InMemoryEffectBoundary) Deny(ctx context.Context, effectID, decisionID, reason string) error
Deny marks an effect as denied.
func (*InMemoryEffectBoundary) Execute ¶
func (b *InMemoryEffectBoundary) Execute(ctx context.Context, effectID string) error
Execute marks an effect as executing.
func (*InMemoryEffectBoundary) GetLifecycle ¶
func (b *InMemoryEffectBoundary) GetLifecycle(ctx context.Context, effectID string) (*EffectLifecycle, error)
GetLifecycle returns the current lifecycle state.
func (*InMemoryEffectBoundary) Submit ¶
func (b *InMemoryEffectBoundary) Submit(ctx context.Context, req *EffectRequest) (*EffectLifecycle, error)
Submit submits an effect for policy evaluation.
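The method set above implies a lifecycle state machine: a submitted effect is approved or denied, an approved effect executes, and an executing effect completes. A minimal sketch of those transitions; the state names and the exact legal-transition table are assumptions, not the package's definition.

```go
package main

import "fmt"

// transitions lists the assumed legal next states.
var transitions = map[string][]string{
	"submitted": {"approved", "denied"},
	"approved":  {"executing"},
	"executing": {"completed"},
}

type lifecycle struct{ state string }

// to advances the lifecycle or rejects an illegal transition.
func (l *lifecycle) to(next string) error {
	for _, s := range transitions[l.state] {
		if s == next {
			l.state = next
			return nil
		}
	}
	return fmt.Errorf("illegal transition %s -> %s", l.state, next)
}

func main() {
	l := &lifecycle{state: "submitted"}
	fmt.Println(l.to("executing") != nil) // true: must be approved first
	_ = l.to("approved")
	_ = l.to("executing")
	fmt.Println(l.to("completed") == nil) // true
}
```

Encoding the table as data keeps the legality check in one place, which mirrors how a boundary can refuse Execute on an unapproved effect.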
type InMemoryEventLog ¶
type InMemoryEventLog struct {
// contains filtered or unexported fields
}
InMemoryEventLog is a reference implementation for testing.
func NewInMemoryEventLog ¶
func NewInMemoryEventLog() *InMemoryEventLog
NewInMemoryEventLog creates a new in-memory event log.
func (*InMemoryEventLog) Append ¶
func (l *InMemoryEventLog) Append(ctx context.Context, event *EventEnvelope) (uint64, error)
Append adds an event with canonical encoding.
func (*InMemoryEventLog) Get ¶
func (l *InMemoryEventLog) Get(ctx context.Context, seq uint64) (*EventEnvelope, error)
Get retrieves an event by sequence number.
func (*InMemoryEventLog) Hash ¶
func (l *InMemoryEventLog) Hash() string
Hash returns the cumulative hash of all committed events.
func (*InMemoryEventLog) LastSequence ¶
func (l *InMemoryEventLog) LastSequence() uint64
LastSequence returns the highest committed sequence number.
func (*InMemoryEventLog) Range ¶
func (l *InMemoryEventLog) Range(ctx context.Context, start, end uint64) ([]*EventEnvelope, error)
Range returns events in sequence range [start, end].
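The cumulative hash returned by Hash can be sketched as a hash chain: each append folds the previous cumulative hash and the new event into SHA-256, so the final value commits to the entire ordered history. This is an illustration of the idea only; the real log hashes canonical EventEnvelope encodings, and the concatenation scheme here is an assumption.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// eventLog chains each event into a running SHA-256.
type eventLog struct {
	events [][]byte
	chain  string // cumulative hash, hex
}

// Append folds the prior chain hash and the event together,
// so reordering any two events changes the final hash.
func (l *eventLog) Append(event []byte) uint64 {
	sum := sha256.Sum256(append([]byte(l.chain), event...))
	l.chain = hex.EncodeToString(sum[:])
	l.events = append(l.events, event)
	return uint64(len(l.events)) // 1-based sequence number
}

func (l *eventLog) Hash() string         { return l.chain }
func (l *eventLog) LastSequence() uint64 { return uint64(len(l.events)) }

func main() {
	var a, b eventLog
	a.Append([]byte("e1"))
	a.Append([]byte("e2"))
	b.Append([]byte("e2"))
	b.Append([]byte("e1"))
	fmt.Println(a.Hash() == b.Hash()) // false: order matters
}
```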
type InMemoryIOCaptureStore ¶
type InMemoryIOCaptureStore struct {
// contains filtered or unexported fields
}
InMemoryIOCaptureStore provides in-memory I/O capture.
func NewInMemoryIOCaptureStore ¶
func NewInMemoryIOCaptureStore() *InMemoryIOCaptureStore
NewInMemoryIOCaptureStore creates a new I/O capture store.
func (*InMemoryIOCaptureStore) ListByEffect ¶
func (s *InMemoryIOCaptureStore) ListByEffect(ctx context.Context, effectID string) ([]*IORecord, error)
ListByEffect implements IOCaptureStore.
func (*InMemoryIOCaptureStore) ListByLoop ¶
func (s *InMemoryIOCaptureStore) ListByLoop(ctx context.Context, loopID string) ([]*IORecord, error)
ListByLoop implements IOCaptureStore.
type InMemoryLimiterStore ¶
type InMemoryLimiterStore struct {
// contains filtered or unexported fields
}
InMemoryLimiterStore implements LimiterStore in memory, for testing and single-instance deployments.
func NewInMemoryLimiterStore ¶
func NewInMemoryLimiterStore() *InMemoryLimiterStore
NewInMemoryLimiterStore creates a new in-memory limiter store.
func (*InMemoryLimiterStore) Allow ¶
func (s *InMemoryLimiterStore) Allow(ctx context.Context, actorID string, policy BackpressurePolicy, cost int) (bool, error)
Allow implements LimiterStore.
type InMemoryScheduler ¶
type InMemoryScheduler struct {
// contains filtered or unexported fields
}
InMemoryScheduler provides a deterministic in-memory scheduler.
func NewInMemoryScheduler ¶
func NewInMemoryScheduler() *InMemoryScheduler
NewInMemoryScheduler creates a new deterministic scheduler.
func (*InMemoryScheduler) Len ¶
func (s *InMemoryScheduler) Len() int
Len implements DeterministicScheduler.
func (*InMemoryScheduler) Next ¶
func (s *InMemoryScheduler) Next(ctx context.Context) (*SchedulerEvent, error)
Next implements DeterministicScheduler.
func (*InMemoryScheduler) Peek ¶
func (s *InMemoryScheduler) Peek(ctx context.Context) (*SchedulerEvent, error)
Peek implements DeterministicScheduler.
func (*InMemoryScheduler) Schedule ¶
func (s *InMemoryScheduler) Schedule(ctx context.Context, event *SchedulerEvent) error
Schedule implements DeterministicScheduler.
func (*InMemoryScheduler) SnapshotHash ¶
func (s *InMemoryScheduler) SnapshotHash() string
SnapshotHash implements DeterministicScheduler.
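Deterministic scheduling means two replicas drain an identical queue in an identical order. A sketch of one plausible total order over SchedulerEvent fields: Priority first (lower wins), then SequenceNum, then SortKey as the final tie-break. The exact tie-break order used by the package is an assumption here.

```go
package main

import (
	"fmt"
	"sort"
)

// event carries the fields the sketch orders on.
type event struct {
	Priority int
	SeqNum   uint64
	SortKey  string
	ID       string
}

// drain returns event IDs in a total, replica-independent order.
func drain(events []event) []string {
	sort.Slice(events, func(i, j int) bool {
		a, b := events[i], events[j]
		if a.Priority != b.Priority {
			return a.Priority < b.Priority // lower = higher priority
		}
		if a.SeqNum != b.SeqNum {
			return a.SeqNum < b.SeqNum
		}
		return a.SortKey < b.SortKey // final deterministic tie-break
	})
	out := make([]string, len(events))
	for i, e := range events {
		out[i] = e.ID
	}
	return out
}

func main() {
	got := drain([]event{
		{Priority: 1, SeqNum: 2, SortKey: "b", ID: "late"},
		{Priority: 0, SeqNum: 5, SortKey: "z", ID: "urgent"},
		{Priority: 1, SeqNum: 2, SortKey: "a", ID: "early"},
	})
	fmt.Println(got) // [urgent early late]
}
```

Because the comparator is total over these three fields, the result never depends on input order, which is what makes SnapshotHash reproducible.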
type InMemoryTotalOrderLog ¶
type InMemoryTotalOrderLog struct {
// contains filtered or unexported fields
}
InMemoryTotalOrderLog provides an in-memory implementation.
func NewInMemoryTotalOrderLog ¶
func NewInMemoryTotalOrderLog() *InMemoryTotalOrderLog
NewInMemoryTotalOrderLog creates a new total order log.
func (*InMemoryTotalOrderLog) Commit ¶
func (l *InMemoryTotalOrderLog) Commit(ctx context.Context, event json.RawMessage, loopID string) (*TotalOrderEvent, error)
Commit implements TotalOrderLog.
func (*InMemoryTotalOrderLog) Get ¶
func (l *InMemoryTotalOrderLog) Get(ctx context.Context, position uint64) (*TotalOrderEvent, error)
Get implements TotalOrderLog.
func (*InMemoryTotalOrderLog) Head ¶
func (l *InMemoryTotalOrderLog) Head(ctx context.Context) (*TotalOrderEvent, error)
Head implements TotalOrderLog.
func (*InMemoryTotalOrderLog) Len ¶
func (l *InMemoryTotalOrderLog) Len() uint64
Len implements TotalOrderLog.
func (*InMemoryTotalOrderLog) Range ¶
func (l *InMemoryTotalOrderLog) Range(ctx context.Context, start, end uint64) ([]*TotalOrderEvent, error)
Range implements TotalOrderLog.
type InclusionProof ¶
type InclusionProof struct {
ProofID string `json:"proof_id"`
LeafPath string `json:"leaf_path"`
LeafHash string `json:"leaf_hash"`
MerkleRoot string `json:"merkle_root"`
ProofPath []ProofStep `json:"proof_path"`
}
InclusionProof demonstrates that a leaf is part of the Merkle tree.
type IndependentGroup ¶
type IndependentGroup struct {
GroupID string `json:"group_id"`
Events []*SchedulerEvent `json:"events"`
Dependencies []string `json:"dependencies"` // GroupIDs this depends on
}
IndependentGroup represents a group of events that can execute in parallel.
type InlineBlobPolicy ¶
type InlineBlobPolicy string
InlineBlobPolicy defines the policy for oversized blobs.
const (
	InlineBlobPolicyReject    InlineBlobPolicy = "REJECT"
	InlineBlobPolicyReference InlineBlobPolicy = "REFERENCE"
	InlineBlobPolicyTruncate  InlineBlobPolicy = "TRUNCATE"
)

type InlineBlobResult ¶
type InlineBlobResult struct {
Valid bool `json:"valid"`
OriginalSize int `json:"original_size"`
PolicyApplied InlineBlobPolicy `json:"policy_applied,omitempty"`
ReferenceID string `json:"reference_id,omitempty"` // For REFERENCE policy
TruncatedTo int `json:"truncated_to,omitempty"` // For TRUNCATE policy
Error string `json:"error,omitempty"`
}
InlineBlobResult represents the result of blob validation.
type InlineBlobValidator ¶
type InlineBlobValidator struct {
MaxBytes int
Policy InlineBlobPolicy
}
InlineBlobValidator validates blob sizes.
func NewInlineBlobValidator ¶
func NewInlineBlobValidator() *InlineBlobValidator
NewInlineBlobValidator creates a validator with default settings.
func (*InlineBlobValidator) Validate ¶
func (v *InlineBlobValidator) Validate(data []byte) InlineBlobResult
Validate checks if blob data is within limits.
func (*InlineBlobValidator) WithMaxBytes ¶
func (v *InlineBlobValidator) WithMaxBytes(max int) *InlineBlobValidator
WithMaxBytes sets a custom max size.
func (*InlineBlobValidator) WithPolicy ¶
func (v *InlineBlobValidator) WithPolicy(policy InlineBlobPolicy) *InlineBlobValidator
WithPolicy sets the oversized blob policy.
type LimiterStore ¶
type LimiterStore interface {
// Allow checks if the actor is allowed to perform an action costing 'cost'.
// Returns true if the action is allowed, false if it is rate limited, or an
// error if the check itself fails.
Allow(ctx context.Context, actorID string, policy BackpressurePolicy, cost int) (bool, error)
}
LimiterStore abstracts the storage for rate limiting buckets.
type MaterializationScope ¶
type MaterializationScope string
MaterializationScope defines when a secret is resolved.
const (
	// MaterializationScopeRuntime indicates runtime resolution.
	MaterializationScopeRuntime MaterializationScope = "runtime"
	// MaterializationScopeBuildTime indicates build-time injection.
	MaterializationScopeBuildTime MaterializationScope = "build_time"
)
type MemoryEntry ¶
type MemoryEntry struct {
Key string `json:"key"`
Value []byte `json:"value"`
ContentHash string `json:"content_hash"` // SHA-256 hex of Value
WrittenAt time.Time `json:"written_at"`
WrittenBy string `json:"written_by"` // Principal ID
Version int `json:"version"` // Monotonic per key
}
MemoryEntry is a single value stored in the integrity-protected memory store.
type MemoryIntegrityOption ¶
type MemoryIntegrityOption func(*MemoryIntegrityStore)
MemoryIntegrityOption configures optional behavior for MemoryIntegrityStore.
func WithIntegrityClock ¶
func WithIntegrityClock(clock func() time.Time) MemoryIntegrityOption
WithIntegrityClock injects a deterministic clock for testing.
type MemoryIntegrityStore ¶
type MemoryIntegrityStore struct {
// contains filtered or unexported fields
}
MemoryIntegrityStore is a thread-safe, tamper-evident key-value store. Every write is hashed and recorded. Reads verify integrity before returning.
func NewMemoryIntegrityStore ¶
func NewMemoryIntegrityStore(opts ...MemoryIntegrityOption) *MemoryIntegrityStore
NewMemoryIntegrityStore creates a new integrity-protected memory store.
func (*MemoryIntegrityStore) AllHistory ¶
func (s *MemoryIntegrityStore) AllHistory() []MemoryWriteEvent
AllHistory returns the full audit trail of all write events, in chronological order.
func (*MemoryIntegrityStore) Delete ¶
func (s *MemoryIntegrityStore) Delete(key string) error
Delete removes a key from the store. Returns an error if the key does not exist.
func (*MemoryIntegrityStore) History ¶
func (s *MemoryIntegrityStore) History(key string) []MemoryWriteEvent
History returns the write history for a specific key, in chronological order.
func (*MemoryIntegrityStore) Read ¶
func (s *MemoryIntegrityStore) Read(key string) (*MemoryEntry, error)
Read retrieves a value and verifies its integrity. If the stored value's hash does not match the recorded hash, ErrMemoryTampered is returned. This is fail-closed: tampered data is never silently returned.
func (*MemoryIntegrityStore) Verify ¶
func (s *MemoryIntegrityStore) Verify(key string) error
Verify performs an explicit integrity check on a stored key. Returns nil if the value is intact, ErrMemoryTampered if corrupted, or a not-found error if the key does not exist.
func (*MemoryIntegrityStore) Write ¶
func (s *MemoryIntegrityStore) Write(key string, value []byte, principalID string) (*MemoryEntry, error)
Write stores a value with integrity protection. It computes the SHA-256 hash, records the write event, and returns the resulting MemoryEntry.
type MemoryTrustOption ¶
type MemoryTrustOption func(*MemoryTrustScorer)
MemoryTrustOption configures optional behavior for MemoryTrustScorer.
func WithDecayRate ¶
func WithDecayRate(rate float64) MemoryTrustOption
WithDecayRate sets the per-hour decay factor. Must be in (0.0, 1.0].
func WithTrustScorerClock ¶
func WithTrustScorerClock(clock func() time.Time) MemoryTrustOption
WithTrustScorerClock overrides the time source for deterministic testing.
type MemoryTrustScore ¶
type MemoryTrustScore struct {
Key string `json:"key"`
Score float64 `json:"score"` // 0.0-1.0
SourceTrust float64 `json:"source_trust"` // Initial trust from source
AgeHours float64 `json:"age_hours"`
DecayApplied float64 `json:"decay_applied"` // Cumulative decay factor
Suspicious bool `json:"suspicious"` // Injection pattern detected
ComputedAt time.Time `json:"computed_at"`
}
MemoryTrustScore is the computed trust assessment for a single memory entry.
type MemoryTrustScorer ¶
type MemoryTrustScorer struct {
// contains filtered or unexported fields
}
MemoryTrustScorer computes composite trust scores for memory entries using temporal decay, source reputation, and injection pattern detection.
func NewMemoryTrustScorer ¶
func NewMemoryTrustScorer(opts ...MemoryTrustOption) *MemoryTrustScorer
NewMemoryTrustScorer creates a new scorer with the given options.
func (*MemoryTrustScorer) IsTrusted ¶
func (s *MemoryTrustScorer) IsTrusted(entry *MemoryEntry, threshold float64) bool
IsTrusted returns true if the entry's trust score meets or exceeds the threshold.
func (*MemoryTrustScorer) ScoreEntry ¶
func (s *MemoryTrustScorer) ScoreEntry(entry *MemoryEntry) *MemoryTrustScore
ScoreEntry computes the trust score for a memory entry.
Algorithm:
base = baseTrust[entry.WrittenBy] or 0.5 (unknown principal)
age_hours = time.Since(entry.WrittenAt).Hours()
decay = decayRate ^ age_hours
suspicious = containsInjectionPatterns(entry.Value)
penalty = if suspicious then 0.5 else 0.0
score = clamp(base * decay - penalty, 0.0, 1.0)
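That formula translates directly into Go. The injection patterns below are illustrative stand-ins; the scorer's real pattern set is internal to the package.

```go
package main

import (
	"fmt"
	"math"
	"strings"
)

// suspicious is a stand-in for containsInjectionPatterns;
// the real pattern set is package-internal.
func suspicious(value []byte) bool {
	v := strings.ToLower(string(value))
	for _, p := range []string{"ignore previous instructions", "system prompt"} {
		if strings.Contains(v, p) {
			return true
		}
	}
	return false
}

// score implements: clamp(base * decayRate^ageHours - penalty, 0, 1).
func score(base, decayRate, ageHours float64, value []byte) float64 {
	decay := math.Pow(decayRate, ageHours)
	penalty := 0.0
	if suspicious(value) {
		penalty = 0.5
	}
	s := base*decay - penalty
	return math.Min(1.0, math.Max(0.0, s)) // clamp to [0, 1]
}

func main() {
	// Fresh entry from a trusted principal (base 0.9), decay 0.99/hour.
	fmt.Println(score(0.9, 0.99, 0, []byte("ok"))) // 0.9
	// The same entry carrying an injection pattern loses the 0.5 penalty.
	fmt.Println(score(0.9, 0.99, 0, []byte("ignore previous instructions")))
}
```

Note that the 0.5 penalty can drive a low-trust entry all the way to zero; the clamp guarantees the score stays in [0, 1].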
func (*MemoryTrustScorer) SetPrincipalTrust ¶
func (s *MemoryTrustScorer) SetPrincipalTrust(principalID string, trust float64)
SetPrincipalTrust sets the base trust score for a principal. The trust value is clamped to [0.0, 1.0].
type MemoryWriteEvent ¶
type MemoryWriteEvent struct {
Key string `json:"key"`
ContentHash string `json:"content_hash"`
WrittenBy string `json:"written_by"`
WrittenAt time.Time `json:"written_at"`
Version int `json:"version"`
PreviousHash string `json:"previous_hash,omitempty"`
}
MemoryWriteEvent records a single write for the audit trail.
type MerkleLeaf ¶
type MerkleLeaf struct {
Path string `json:"path"` // JSON Pointer path
LeafBytes []byte `json:"-"` // Computed leaf bytes
LeafHash string `json:"leaf_hash"` // SHA256 of leaf bytes
}
MerkleLeaf represents a leaf in the Merkle tree.
type MerkleTree ¶
type MerkleTree struct {
Leaves []MerkleLeaf `json:"leaves"`
Root string `json:"root"`
Levels [][]string `json:"-"` // Internal node hashes by level
}
MerkleTree represents a Merkle tree for an EvidencePack.
Example ¶
ExampleMerkleTree demonstrates Merkle tree construction.
builder := NewMerkleTreeBuilder()
obj := map[string]any{
	"user_id": "user-123",
	"action":  "transfer",
	"amount":  int64(10000),
}
tree, _ := builder.BuildTree(obj)
_ = tree.Root // Use the Merkle root
func (*MerkleTree) GenerateProof ¶
func (tree *MerkleTree) GenerateProof(path string) (*InclusionProof, error)
GenerateProof generates an inclusion proof for a path.
type MerkleTreeBuilder ¶
type MerkleTreeBuilder struct {
// contains filtered or unexported fields
}
MerkleTreeBuilder builds Merkle trees for EvidencePacks.
func NewMerkleTreeBuilder ¶
func NewMerkleTreeBuilder() *MerkleTreeBuilder
NewMerkleTreeBuilder creates a new Merkle tree builder.
func (*MerkleTreeBuilder) BuildTree ¶
func (b *MerkleTreeBuilder) BuildTree(obj map[string]any) (*MerkleTree, error)
BuildTree constructs a Merkle tree from an object. Per Addendum 12.X.1: Leaves are sorted by JSON Pointer path.
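The construction can be sketched as: hash each leaf, sort leaves by path, then pairwise-hash levels up to a single root. The leaf encoding ("path=value") and odd-node promotion used below are assumptions for illustration; the canonical leaf bytes are defined by Addendum 12.X.1.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
)

// merkleRoot computes a root over path->value leaves.
func merkleRoot(leaves map[string]string) string {
	paths := make([]string, 0, len(leaves))
	for p := range leaves {
		paths = append(paths, p)
	}
	sort.Strings(paths) // deterministic leaf order by JSON Pointer path

	level := make([][]byte, 0, len(paths))
	for _, p := range paths {
		h := sha256.Sum256([]byte(p + "=" + leaves[p])) // assumed leaf encoding
		level = append(level, h[:])
	}
	for len(level) > 1 {
		var next [][]byte
		for i := 0; i < len(level); i += 2 {
			if i+1 == len(level) { // odd node promoted unchanged (assumption)
				next = append(next, level[i])
				continue
			}
			h := sha256.Sum256(append(level[i], level[i+1]...))
			next = append(next, h[:])
		}
		level = next
	}
	return hex.EncodeToString(level[0])
}

func main() {
	root := merkleRoot(map[string]string{"/user_id": "user-123", "/action": "transfer"})
	fmt.Println(len(root)) // 64
}
```

Sorting by path before hashing is what makes the root independent of map iteration order.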
type MoneyPeriod ¶
type MoneyPeriod struct {
Kind PeriodKind `json:"kind"`
ID string `json:"id,omitempty"` // Required for CUSTOM kind
}
MoneyPeriod defines the measurement period for a money value.
type NondeterminismBound ¶
type NondeterminismBound struct {
BoundID string `json:"bound_id"`
RunID string `json:"run_id"`
Source NondeterminismSource `json:"source"`
Description string `json:"description"`
InputHash string `json:"input_hash"` // hash of what went in
OutputHash string `json:"output_hash"` // hash of what came out
Seed string `json:"seed,omitempty"`
CapturedAt time.Time `json:"captured_at"`
ContentHash string `json:"content_hash"`
}
NondeterminismBound captures a single nondeterministic event with its binding.
type NondeterminismReceipt ¶
type NondeterminismReceipt struct {
ReceiptID string `json:"receipt_id"`
RunID string `json:"run_id"`
Bounds []NondeterminismBound `json:"bounds"`
TotalBounds int `json:"total_bounds"`
ContentHash string `json:"content_hash"`
}
NondeterminismReceipt aggregates all nondeterminism for a run.
type NondeterminismSource ¶
type NondeterminismSource string
NondeterminismSource identifies the origin of nondeterminism.
const (
	NDSourceLLM       NondeterminismSource = "LLM"
	NDSourceNetwork   NondeterminismSource = "NETWORK"
	NDSourceRandom    NondeterminismSource = "RANDOM"
	NDSourceExternal  NondeterminismSource = "EXTERNAL_API"
	NDSourceTiming    NondeterminismSource = "TIMING"
	NDSourceUserInput NondeterminismSource = "USER_INPUT"
)
type NondeterminismTracker ¶
type NondeterminismTracker struct {
// contains filtered or unexported fields
}
NondeterminismTracker tracks and receipts nondeterminism per run.
func NewNondeterminismTracker ¶
func NewNondeterminismTracker() *NondeterminismTracker
NewNondeterminismTracker creates a new tracker.
func (*NondeterminismTracker) BoundsForRun ¶
func (t *NondeterminismTracker) BoundsForRun(runID string) []NondeterminismBound
BoundsForRun returns all captured bounds for a run.
func (*NondeterminismTracker) Capture ¶
func (t *NondeterminismTracker) Capture(runID string, source NondeterminismSource, description, inputHash, outputHash, seed string) *NondeterminismBound
Capture records a nondeterministic event.
func (*NondeterminismTracker) Receipt ¶
func (t *NondeterminismTracker) Receipt(runID string) (*NondeterminismReceipt, error)
Receipt produces a sealed receipt for all nondeterminism in a run.
func (*NondeterminismTracker) WithClock ¶
func (t *NondeterminismTracker) WithClock(clock func() time.Time) *NondeterminismTracker
WithClock overrides clock for testing.
type PDPEvaluator ¶
type PDPEvaluator interface {
Evaluate(ctx context.Context, req *EffectRequest) (decision string, decisionID string, err error)
}
PDPEvaluator is the interface for the Policy Decision Point.
type PRNGAlgorithm ¶
type PRNGAlgorithm string
PRNGAlgorithm defines approved PRNG algorithms.
const (
	// PRNGAlgorithmChaCha20 - ChaCha20-based PRNG
	PRNGAlgorithmChaCha20 PRNGAlgorithm = "chacha20"
	// PRNGAlgorithmHMACSHA256 - HMAC-SHA256 based PRNG
	PRNGAlgorithmHMACSHA256 PRNGAlgorithm = "hmac_sha256"
)
type PRNGConfig ¶
type PRNGConfig struct {
Algorithm PRNGAlgorithm `json:"algorithm"`
SeedLength int `json:"seed_length_bytes"`
Derivation SeedDerivation `json:"derivation"`
RecordToLog bool `json:"record_to_log"`
}
PRNGConfig defines the PRNG configuration.
func DefaultPRNGConfig ¶
func DefaultPRNGConfig() PRNGConfig
DefaultPRNGConfig returns the default PRNG configuration.
type PeriodKind ¶
type PeriodKind string
PeriodKind defines the kind of measurement period for money.
const (
	PeriodKindInstant  PeriodKind = "INSTANT"
	PeriodKindDay      PeriodKind = "DAY"
	PeriodKindMonth    PeriodKind = "MONTH"
	PeriodKindInvoice  PeriodKind = "INVOICE"
	PeriodKindContract PeriodKind = "CONTRACT"
	PeriodKindCustom   PeriodKind = "CUSTOM"
)
type ProofStep ¶
type ProofStep struct {
Side string `json:"side"` // "L" or "R"
SiblingHash string `json:"sibling_hash"`
}
ProofStep represents one step in an inclusion proof.
type RawRecord ¶
type RawRecord struct {
Address BlobAddress `json:"address"`
Content []byte `json:"-"` // Not serialized directly
ContentLen int `json:"content_len"`
MimeType string `json:"mime_type"`
Redacted bool `json:"redacted"`
}
RawRecord represents an un-interpreted forensic record. Per Section 2.1 - RawRecordLayer
type RedisLimiterStore ¶
type RedisLimiterStore struct {
// contains filtered or unexported fields
}
RedisLimiterStore implements LimiterStore using Redis.
func NewRedisLimiterStore ¶
func NewRedisLimiterStore(addr, password string, db int) *RedisLimiterStore
NewRedisLimiterStore creates a new store backed by Redis.
func (*RedisLimiterStore) Allow ¶
func (s *RedisLimiterStore) Allow(ctx context.Context, actorID string, policy BackpressurePolicy, cost int) (bool, error)
Allow executes the Lua script to check and update the token bucket.
type Reducer ¶
type Reducer interface {
// Reduce applies inputs to produce deterministic output.
Reduce(ctx context.Context, inputs []ReducerInput) (*ReducerOutput, error)
// Policy returns the current conflict policy.
Policy() ConflictPolicy
}
Reducer provides deterministic state reduction. Per Section 2.2 - Deterministic Reducer Specification
type ReducerConflict ¶
type ReducerConflict struct {
Key string `json:"key"`
WinnerSeq uint64 `json:"winner_seq"`
LoserSeq uint64 `json:"loser_seq"`
WinnerValue interface{} `json:"winner_value"`
LoserValue interface{} `json:"loser_value"`
ResolutionBy string `json:"resolution_by"` // Which policy resolved it
}
ReducerConflict records a conflict during reduction.
type ReducerInput ¶
type ReducerInput struct {
SequenceNumber uint64 `json:"sequence_number"`
Key string `json:"key"`
Value interface{} `json:"value"`
SortKey string `json:"sort_key"` // Stable sort key for determinism
Metadata map[string]interface{} `json:"metadata,omitempty"`
}
ReducerInput represents an input to the reducer.
type ReducerOutput ¶
type ReducerOutput struct {
StateHash string `json:"state_hash"`
State map[string]interface{} `json:"state"`
Applied []uint64 `json:"applied"` // Sequence numbers applied
Conflicts []ReducerConflict `json:"conflicts"` // Conflicts encountered
}
ReducerOutput represents the reduced state.
type RetentionPolicy ¶
type RetentionPolicy struct {
PolicyID string `json:"policy_id"`
DataClassification string `json:"data_classification"` // PII, financial, etc.
RetentionPeriod time.Duration `json:"retention_period"`
LegalHoldOverride bool `json:"legal_hold_override"`
DeletionMethod string `json:"deletion_method"` // "soft", "crypto_shred", "hard"
JurisdictionRules []string `json:"jurisdiction_rules,omitempty"`
}
RetentionPolicy defines data retention rules.
type RetryAttempt ¶
type RetryAttempt struct {
AttemptIndex int `json:"attempt_index"`
DelayMs int64 `json:"delay_ms"`
ScheduledAt time.Time `json:"scheduled_at"`
}
RetryAttempt represents a single scheduled attempt.
type RetryPlan ¶
type RetryPlan struct {
RetryPlanID string `json:"retry_plan_id"`
EffectID string `json:"effect_id"`
PolicyID string `json:"policy_id"`
Schedule []RetryAttempt `json:"schedule"`
MaxAttempts int `json:"max_attempts"`
ExpiresAt time.Time `json:"expires_at"`
CreatedAt time.Time `json:"created_at"`
}
RetryPlan represents a pre-committed retry schedule.
func CreateRetryPlan ¶
func CreateRetryPlan(effectID string, policy BackoffPolicy, envSnapHash string, startTime time.Time) RetryPlan
CreateRetryPlan generates a pre-committed retry schedule. Per Addendum 8.5.X.6: Retry schedule MUST be committed before attempts.
type RetrySchedule ¶
type RetrySchedule struct {
ScheduleID string `json:"schedule_id"`
OperationID string `json:"operation_id"`
Strategy RetryStrategy `json:"strategy"`
BaseDelayMs int `json:"base_delay_ms"`
MaxDelayMs int `json:"max_delay_ms"`
Multiplier float64 `json:"multiplier"`
ScheduledRuns []int64 `json:"scheduled_runs"` // Unix timestamps
}
RetrySchedule captures retry timing for deterministic replay. Per Section F.3: retry_schedule_ref artifact.
func NewRetrySchedule ¶
func NewRetrySchedule(scheduleID, operationID string, strategy RetryStrategy, baseDelayMs, maxDelayMs int, multiplier float64) *RetrySchedule
NewRetrySchedule creates a new retry schedule.
func (*RetrySchedule) ComputeDelay ¶
func (r *RetrySchedule) ComputeDelay(attemptIndex int) int
ComputeDelay computes the delay for a given attempt (0-indexed).
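The delay computation can be sketched per strategy: FIXED always returns the base, LINEAR scales it by attempt number, and EXPONENTIAL multiplies it repeatedly, all capped at MaxDelayMs. The exact rounding of the package's ComputeDelay may differ; this is an illustration of the schedule shapes.

```go
package main

import "fmt"

// computeDelay returns the delay in ms for a 0-indexed attempt.
func computeDelay(strategy string, attempt, baseMs, maxMs int, multiplier float64) int {
	var d float64
	switch strategy {
	case "FIXED":
		d = float64(baseMs)
	case "LINEAR":
		d = float64(baseMs) * float64(attempt+1)
	case "EXPONENTIAL":
		d = float64(baseMs)
		for i := 0; i < attempt; i++ {
			d *= multiplier
		}
	}
	if d > float64(maxMs) {
		d = float64(maxMs) // cap at MaxDelayMs
	}
	return int(d)
}

func main() {
	for attempt := 0; attempt < 4; attempt++ {
		fmt.Println(computeDelay("EXPONENTIAL", attempt, 100, 10000, 2.0))
	}
	// 100, 200, 400, 800
}
```

Because the whole schedule is a pure function of the config, both replicas of a replay compute identical delays, which is the point of the retry_schedule_ref artifact.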
func (*RetrySchedule) ScheduleNextRun ¶
ScheduleNextRun computes and records the next run time.
type RetryStrategy ¶
type RetryStrategy string
RetryStrategy defines the retry backoff strategy.
const (
	RetryStrategyFixed       RetryStrategy = "FIXED"
	RetryStrategyLinear      RetryStrategy = "LINEAR"
	RetryStrategyExponential RetryStrategy = "EXPONENTIAL"
)
type SchedulerEvent ¶
type SchedulerEvent struct {
EventID string `json:"event_id"`
EventType string `json:"event_type"`
ScheduledAt time.Time `json:"scheduled_at"`
Priority int `json:"priority"` // Lower = higher priority
SequenceNum uint64 `json:"sequence_num"`
Payload map[string]interface{} `json:"payload"`
LoopID string `json:"loop_id,omitempty"`
// For deterministic ordering
SortKey string `json:"sort_key"`
}
SchedulerEvent represents a scheduled event.
type SchemaMetadata ¶
type SchemaMetadata struct {
SchemaID string `json:"$id"`
SchemaVersion string `json:"schema_version"`
Compatibility CompatibilityPolicy `json:"x-helm-compatibility,omitempty"`
Deprecated bool `json:"x-helm-deprecated,omitempty"`
DeprecatedBy string `json:"x-helm-deprecated-by,omitempty"`
MinVersion string `json:"x-helm-min-version,omitempty"`
}
SchemaMetadata contains version and compatibility info. Per Section L.2: Schema metadata requirements.
type SchemaRegistry ¶
type SchemaRegistry struct {
// contains filtered or unexported fields
}
SchemaRegistry manages schema versions.
func NewSchemaRegistry ¶
func NewSchemaRegistry() *SchemaRegistry
NewSchemaRegistry creates a new registry.
func (*SchemaRegistry) GetLatest ¶
func (r *SchemaRegistry) GetLatest(schemaID string) (*SchemaMetadata, error)
GetLatest returns the latest version of a schema.
func (*SchemaRegistry) IsVersionSupported ¶
func (r *SchemaRegistry) IsVersionSupported(schemaID, version string, policy CompatibilityPolicy) (bool, error)
IsVersionSupported checks if a version is supported.
func (*SchemaRegistry) Register ¶
func (r *SchemaRegistry) Register(meta SchemaMetadata) error
Register adds a schema to the registry.
type SchemaVersion ¶
type SchemaVersion struct {
Major int `json:"major"`
Minor int `json:"minor"`
Patch int `json:"patch"`
Label string `json:"label,omitempty"` // e.g., "-beta.1"
}
SchemaVersion represents a semantic version.
func ParseSchemaVersion ¶
func ParseSchemaVersion(s string) (*SchemaVersion, error)
ParseSchemaVersion parses a version string.
func (SchemaVersion) Compare ¶
func (v SchemaVersion) Compare(other SchemaVersion) int
Compare compares two versions. Returns: -1 if v < other, 0 if v == other, 1 if v > other
func (SchemaVersion) IsCompatible ¶
func (v SchemaVersion) IsCompatible(other SchemaVersion, policy CompatibilityPolicy) bool
IsCompatible checks version compatibility based on policy.
func (SchemaVersion) String ¶
func (v SchemaVersion) String() string
String returns the version string.
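The major/minor/patch precedence used by Compare can be illustrated with a standalone sketch. Here `version` and `compareVersions` are hypothetical stand-ins for SchemaVersion and its Compare method, assuming standard semantic-version ordering (major first, then minor, then patch):

```go
package main

import "fmt"

// version mirrors the numeric fields of SchemaVersion used by Compare.
type version struct{ Major, Minor, Patch int }

// compareVersions returns -1, 0, or 1, comparing major, then minor, then patch.
func compareVersions(a, b version) int {
	pairs := [][2]int{{a.Major, b.Major}, {a.Minor, b.Minor}, {a.Patch, b.Patch}}
	for _, p := range pairs {
		if p[0] < p[1] {
			return -1
		}
		if p[0] > p[1] {
			return 1
		}
	}
	return 0
}

func main() {
	fmt.Println(compareVersions(version{1, 2, 3}, version{1, 3, 0})) // -1: minor differs
	fmt.Println(compareVersions(version{2, 0, 0}, version{2, 0, 0})) // 0: equal
}
```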
type SealedField ¶
type SealedField struct {
Path string `json:"path"`
Commitment string `json:"commitment"` // Hash of the value
Reason string `json:"reason,omitempty"`
}
SealedField represents a sealed (undisclosed) field.
type SecretAccessAuditEntry ¶
type SecretAccessAuditEntry struct {
EntryID string `json:"entry_id"`
RefID string `json:"ref_id"`
ActorID string `json:"actor_id"`
AccessedAt time.Time `json:"accessed_at"`
AccessType string `json:"access_type"` // "read", "rotate", "delete"
EffectID string `json:"effect_id,omitempty"`
SessionID string `json:"session_id"`
JustificationHash string `json:"justification_hash,omitempty"`
}
SecretAccessAuditEntry records secret access for audit.
type SecretProvider ¶
type SecretProvider string
SecretProvider identifiers for supported secret stores.
const (
	SecretProviderVault      SecretProvider = "vault"
	SecretProviderAWSSecrets SecretProvider = "aws-secretsmanager"
	SecretProviderGCPSecrets SecretProvider = "gcp-secretmanager"
	SecretProviderAzureKV    SecretProvider = "azure-keyvault"
	SecretProviderK8sSecrets SecretProvider = "kubernetes-secrets" //nolint:gosec // Identifier, not a secret
	SecretProviderEnv        SecretProvider = "env"
)
type SecretRef ¶
type SecretRef struct {
RefID string `json:"ref_id"`
Provider SecretProvider `json:"provider"`
Path string `json:"path"`
Version string `json:"version,omitempty"`
MaterializationScope MaterializationScope `json:"materialization_scope"`
AuditOnAccess bool `json:"audit_on_access"`
RotationPolicyID string `json:"rotation_policy_id,omitempty"`
}
SecretRef references a secret without containing the actual value. Per Addendum 8.X: Secrets MUST NOT appear in EvidencePacks.
type SeedDerivation ¶
type SeedDerivation string
SeedDerivation defines how seeds are derived.
const (
	// SeedDerivationLoopID - seed derived from loop ID
	SeedDerivationLoopID SeedDerivation = "loop_id"
	// SeedDerivationParentSeed - seed derived from parent
	SeedDerivationParentSeed SeedDerivation = "parent_seed"
	// SeedDerivationRequestHash - seed derived from request hash
	SeedDerivationRequestHash SeedDerivation = "request_hash"
)
type TaskClassifier ¶
type TaskClassifier struct {
// Thresholds for mode selection (configurable)
ParallelThreshold float64 // Min parallelizable fraction for ModeParallel
HybridThreshold float64 // Min for ModeHybrid (between single and parallel)
MaxSequentialDepth int // Max depth before forcing single-agent
MinSubtasks int // Min subtasks to justify multi-agent overhead
}
TaskClassifier analyzes tasks and selects optimal coordination modes.
func DefaultTaskClassifier ¶
func DefaultTaskClassifier() *TaskClassifier
DefaultTaskClassifier returns a classifier with empirically derived thresholds based on Google Research's scaling laws.
func (*TaskClassifier) ClassifyFromDAG ¶
func (tc *TaskClassifier) ClassifyFromDAG(tasks []string, deps map[string][]string) TaskProperties
ClassifyFromDAG analyzes a task DAG and returns its properties. tasks: list of task IDs. deps: map of taskID → list of dependency IDs.
func (*TaskClassifier) SelectMode ¶
func (tc *TaskClassifier) SelectMode(props TaskProperties) (CoordinationMode, string)
SelectMode returns the optimal CoordinationMode based on task properties. Implements the decision logic from Google's scaling agent systems paper:
- High parallelizable fraction + low sequential depth → ModeParallel
- Low parallelizable fraction + deep sequential → single-agent (no swarm)
- Mixed → ModeHybrid with centralized orchestration
Returns the mode and a human-readable rationale.
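The decision logic above can be sketched as a standalone function. The threshold values and the `selectMode` helper here are illustrative assumptions, not the package's actual defaults; they show how a parallelizable fraction and a sequential depth map to a coordination mode:

```go
package main

import "fmt"

// Hypothetical thresholds mirroring TaskClassifier's configurable fields.
const (
	parallelThreshold  = 0.7 // min parallelizable fraction for parallel mode
	maxSequentialDepth = 8   // deeper chains force single-agent execution
)

// selectMode is an illustrative stand-in for TaskClassifier.SelectMode.
func selectMode(parallelFrac float64, seqDepth int) string {
	switch {
	case seqDepth > maxSequentialDepth:
		return "single-agent" // deep dependency chain: parallelism cannot help
	case parallelFrac >= parallelThreshold:
		return "parallel" // mostly independent work: fan out to sub-agents
	default:
		return "hybrid" // mixed workload: centralized orchestration
	}
}

func main() {
	fmt.Println(selectMode(0.9, 3))  // parallel
	fmt.Println(selectMode(0.2, 12)) // single-agent
	fmt.Println(selectMode(0.5, 4))  // hybrid
}
```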
type TaskProperties ¶
type TaskProperties struct {
// ToolDensity is the ratio of tool calls to reasoning steps.
// High density (>0.5) favors parallel execution.
ToolDensity float64 `json:"tool_density"`
// Decomposability measures how easily the task can be split into
// independent subtasks. Range: 0.0 (monolithic) to 1.0 (fully decomposable).
Decomposability float64 `json:"decomposability"`
// SequentialDepth is the longest chain of dependent steps
// (critical path length). Deep chains penalize parallelism.
SequentialDepth int `json:"sequential_depth"`
// ParallelizableFraction (Amdahl's law) — fraction of work that
// can execute concurrently. Range: 0.0 to 1.0.
ParallelizableFraction float64 `json:"parallelizable_fraction"`
// EstimatedSubtasks is the total number of subtasks after decomposition.
EstimatedSubtasks int `json:"estimated_subtasks"`
// ErrorAmplificationRisk is the estimated probability that
// independent multi-agent execution will amplify errors.
// Based on Google's finding that errors compound across agents.
// Range: 0.0 (low risk) to 1.0 (high risk).
ErrorAmplificationRisk float64 `json:"error_amplification_risk"`
}
TaskProperties represents the analyzed properties of a task or goal used to select the optimal coordination mode. Based on Google Research's scaling principles (Jan 2026): multi-agent coordination helps on parallelizable tasks but can degrade on sequential tasks.
type TokenBucket ¶
type TokenBucket struct {
// contains filtered or unexported fields
}
TokenBucket implements a thread-safe token bucket rate limiter.
func NewTokenBucket ¶
func NewTokenBucket(ratePerSec float64, capacity int) *TokenBucket
func (*TokenBucket) Allow ¶
func (tb *TokenBucket) Allow(cost int) bool
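The token-bucket semantics can be sketched independently of the package type. This `bucket` is a minimal stand-in for TokenBucket, assuming the conventional algorithm: tokens refill continuously at `ratePerSec` up to `capacity`, and Allow deducts `cost` only when enough tokens are available:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// bucket is a minimal token-bucket sketch, not the package implementation.
type bucket struct {
	mu       sync.Mutex
	tokens   float64
	rate     float64
	capacity float64
	last     time.Time
}

func newBucket(ratePerSec float64, capacity int) *bucket {
	return &bucket{
		tokens:   float64(capacity), // start full to permit an initial burst
		rate:     ratePerSec,
		capacity: float64(capacity),
		last:     time.Now(),
	}
}

// Allow refills tokens for the elapsed time, then deducts cost if possible.
func (b *bucket) Allow(cost int) bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	now := time.Now()
	b.tokens += now.Sub(b.last).Seconds() * b.rate
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	b.last = now
	if b.tokens >= float64(cost) {
		b.tokens -= float64(cost)
		return true
	}
	return false
}

func main() {
	tb := newBucket(10, 5)   // 10 tokens/sec, burst capacity 5
	fmt.Println(tb.Allow(3)) // true: bucket starts full
	fmt.Println(tb.Allow(3)) // false: only ~2 tokens remain
}
```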
type TotalOrderEvent ¶
type TotalOrderEvent struct {
// OrderPosition is the globally unique position in total order
OrderPosition uint64 `json:"order_position"`
// EventEnvelope contains the underlying event data
EventEnvelope json.RawMessage `json:"event_envelope"`
// CommitHash is the cryptographic hash at this position
CommitHash string `json:"commit_hash"`
// PreviousHash links to the previous event (chain)
PreviousHash string `json:"previous_hash"`
// CommittedAt is the timestamp when this event was committed
CommittedAt time.Time `json:"committed_at"`
// LoopID identifies the control loop that produced this event
LoopID string `json:"loop_id,omitempty"`
}
TotalOrderEvent represents an event with total order guarantee. Per Section 1.3 - every committed event MUST have a unique position in total order.
type TotalOrderLog ¶
type TotalOrderLog interface {
// Commit appends an event to the log, assigning it a total order position.
Commit(ctx context.Context, event json.RawMessage, loopID string) (*TotalOrderEvent, error)
// Get retrieves an event by its order position.
Get(ctx context.Context, position uint64) (*TotalOrderEvent, error)
// Range returns events in order within a range.
Range(ctx context.Context, start, end uint64) ([]*TotalOrderEvent, error)
// Head returns the latest committed event.
Head(ctx context.Context) (*TotalOrderEvent, error)
// Verify checks the hash chain integrity.
Verify(ctx context.Context, start, end uint64) (bool, error)
// Len returns the total number of committed events.
Len() uint64
}
TotalOrderLog provides total ordering over committed events. Per Section 1.3 requirements:
- Total order over all committed events
- Canonical commit encoding
- Hash chain for integrity verification
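The hash-chain property behind Verify can be demonstrated with a self-contained sketch. The canonical commit encoding is specified elsewhere and not reproduced here; this sketch simply assumes each commit hash is SHA-256 over the previous hash concatenated with the event payload, so any mutation of history invalidates every later hash:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// commitHash chains an event to its predecessor: hash(prev || payload).
func commitHash(prev string, payload []byte) string {
	h := sha256.New()
	h.Write([]byte(prev))
	h.Write(payload)
	return hex.EncodeToString(h.Sum(nil))
}

// verifyChain recomputes the chain and compares against the stored hashes.
func verifyChain(payloads [][]byte, hashes []string) bool {
	prev := ""
	for i, p := range payloads {
		want := commitHash(prev, p)
		if hashes[i] != want {
			return false
		}
		prev = want
	}
	return true
}

func main() {
	payloads := [][]byte{[]byte(`{"e":1}`), []byte(`{"e":2}`)}
	var hashes []string
	prev := ""
	for _, p := range payloads {
		prev = commitHash(prev, p)
		hashes = append(hashes, prev)
	}
	fmt.Println(verifyChain(payloads, hashes)) // true: chain intact
	payloads[0] = []byte(`{"e":99}`)           // tamper with history
	fmt.Println(verifyChain(payloads, hashes)) // false: tampering detected
}
```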
type ViewPolicy ¶
type ViewPolicy struct {
PolicyID string `json:"policy_id"`
Name string `json:"name"`
DisclosureRules []DisclosureRule `json:"disclosure_rules"`
}
ViewPolicy defines disclosure rules for EvidenceView derivation.
type ViolationAction ¶
type ViolationAction string
ViolationAction defines what to do when a variable violates its bounds.
const (
	ViolationActionAlert  ViolationAction = "ALERT"
	ViolationActionClamp  ViolationAction = "CLAMP"
	ViolationActionHalt   ViolationAction = "HALT"
	ViolationActionRevert ViolationAction = "REVERT"
)
type ViolationHandler ¶
type ViolationHandler interface {
HandleViolation(trigger *ViolationTrigger, currentValue float64) error
}
ViolationHandler processes violations.
type ViolationTrigger ¶
type ViolationTrigger struct {
VariableID string
Action ViolationAction
ThresholdType string // "min", "max", "range"
MinThreshold float64
MaxThreshold float64
Handler ViolationHandler
}
ViolationTrigger handles essential variable violations.
func (*ViolationTrigger) Check ¶
func (t *ViolationTrigger) Check(value float64) (bool, string)
Check checks if a value violates the trigger's bounds.
func (*ViolationTrigger) Execute ¶
func (t *ViolationTrigger) Execute(value float64) error
Execute executes the violation action.
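The check-then-act flow can be sketched for the "range" threshold type. `checkRange` and `clamp` here are hypothetical stand-ins for ViolationTrigger.Check and the CLAMP action, assuming a violation means leaving the closed interval [min, max]:

```go
package main

import "fmt"

// checkRange reports whether value leaves [min, max], with a reason string.
func checkRange(value, min, max float64) (bool, string) {
	switch {
	case value < min:
		return true, fmt.Sprintf("value %.2f below min %.2f", value, min)
	case value > max:
		return true, fmt.Sprintf("value %.2f above max %.2f", value, max)
	default:
		return false, ""
	}
}

// clamp mirrors the CLAMP action: pull the value back into bounds.
func clamp(value, min, max float64) float64 {
	if value < min {
		return min
	}
	if value > max {
		return max
	}
	return value
}

func main() {
	violated, reason := checkRange(1.3, 0.0, 1.0)
	fmt.Println(violated, reason)     // true value 1.30 above max 1.00
	fmt.Println(clamp(1.3, 0.0, 1.0)) // 1
}
```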
type WiredEffectBoundary ¶
type WiredEffectBoundary struct {
*InMemoryEffectBoundary
// contains filtered or unexported fields
}
WiredEffectBoundary creates a fully wired effect boundary with PDP integration.
func NewWiredEffectBoundary ¶
func NewWiredEffectBoundary(pdp governance.PolicyDecisionPoint, log EventLog) *WiredEffectBoundary
NewWiredEffectBoundary creates an effect boundary wired to the governance PDP.
Source Files
¶
- agent_kill.go
- blob_store.go
- boundary_assertions.go
- cel_dp.go
- concurrency.go
- context_guard.go
- critical_path.go
- csnf.go
- csnf_decimal.go
- csnf_profiles.go
- effect_boundary.go
- error_ir.go
- evaluation_window.go
- event_log.go
- freeze.go
- interop.go
- io_capture.go
- limiter.go
- limiter_redis.go
- memory_integrity.go
- memory_trust.go
- merkle.go
- nondeterminism.go
- pdp_adapter.go
- prng.go
- reducer.go
- scheduler.go
- secret_ref.go
- task_classifier.go
- total_order_log.go
Directories
¶
| Path | Synopsis |
|---|---|
| consistency | Package consistency implements causal consistency primitives for HELM. |
| cpi | Package cpi provides the Canonical Policy Index — a deterministic policy stack validator for the HELM kernel. |