limits

package v0.8.0
Published: Feb 25, 2026 License: Apache-2.0 Imports: 6 Imported by: 0

Documentation

Overview

Package limits provides helper functions and typed constants for enforcing resource bounds that prevent denial-of-service attacks.

Size Limits

Size limits use the SizeLimit type for compile-time safety. Use the predefined limits for parsing operations:

safejson.Unmarshal(data, limits.ConfigFile, &cfg)
safeyaml.Unmarshal(data, limits.LockFile, &lock)
safejson.DecodeReader(resp.Body, "api", limits.JSONResponse, &result)

All limits are derived from the evidence pack spec Section 7.2 and are designed to bound memory usage, prevent zip bombs, and limit subprocess output. Adjust with caution: relaxing limits may create resource exhaustion vectors.

Index

Constants

View Source
const (
	// PrivateDirMode for directories only the owner should access (0700).
	PrivateDirMode os.FileMode = 0700

	// StandardDirMode for directories with standard access (0755).
	StandardDirMode os.FileMode = 0755

	// PrivateFileMode for files only the owner should read/write (0600).
	PrivateFileMode os.FileMode = 0600

	// StandardFileMode for files with standard read access (0644).
	StandardFileMode os.FileMode = 0644
)

File permission constants for secure file operations. Use these instead of raw octal values.

View Source
const (
	// MaxPackSizeBytes is the maximum total pack size (2 GB).
	MaxPackSizeBytes int64 = 2 * 1024 * 1024 * 1024

	// MaxArtifactCount is the maximum number of artifacts in a pack.
	MaxArtifactCount int = 10000

	// MaxCompressionRatio is the maximum allowed compression ratio (100:1).
	// This helps detect zip bombs.
	MaxCompressionRatio int = 100

	// MaxZipEntries is the maximum number of entries in a zip archive.
	// This prevents DoS via central directory bloat.
	MaxZipEntries int = 15000

	// MaxAttestationJSONDepth is the maximum nesting depth for attestation JSON.
	// Prevents stack overflow during parsing of maliciously nested structures.
	MaxAttestationJSONDepth int = 32
)

Pack size limits (not typed - used for streaming/comparison).

View Source
const (
	// MaxCollectorCount is the maximum number of collectors in a config/lockfile.
	MaxCollectorCount int = 1000

	// MaxToolCount is the maximum number of tools in a config/lockfile.
	MaxToolCount int = 1000

	// MaxRemoteCount is the maximum number of remotes in a config.
	MaxRemoteCount int = 100

	// MaxUtilityCount is the maximum number of utilities in a user lockfile.
	MaxUtilityCount int = 100

	// MaxPlatformCount is the maximum number of platforms per collector.
	MaxPlatformCount int = 100

	// MaxCatalogComponentCount is the maximum number of components in a catalog.
	MaxCatalogComponentCount int = 10000

	// MaxYAMLAliasExpansion is the maximum ratio of expanded size to input size.
	// YAML alias bombs can expand small inputs into huge outputs.
	MaxYAMLAliasExpansion int = 10
)

Count limits for DoS prevention.


View Source
const (
	// MaxAggregateOutputBytes is the maximum total bytes retained across all collector outputs (256 MB).
	MaxAggregateOutputBytes int64 = 256 * 1024 * 1024

	// DefaultSigningMemoryLimit is the default maximum memory for signing operations (256 MB).
	DefaultSigningMemoryLimit int64 = 256 * 1024 * 1024
)

Execution limits for DoS prevention.

View Source
const (
	// DefaultHTTPTimeout is the default timeout for HTTP requests (30 seconds).
	DefaultHTTPTimeout = 30 * time.Second

	// DefaultCollectorTimeout is the default timeout for collector execution (60 seconds).
	DefaultCollectorTimeout = 60 * time.Second

	// DefaultToolTimeout is the default timeout for tool execution (5 minutes).
	DefaultToolTimeout = 5 * time.Minute

	// MaxHTTPRedirects is the maximum number of HTTP redirects to follow.
	MaxHTTPRedirects = 10
)

Timeout limits for network and execution operations. These use time.Duration for natural Go ergonomics.

View Source
const (
	// MaxRecursionDepth is the maximum depth for recursive operations.
	MaxRecursionDepth = 100

	// MaxMergeNestingDepth is the maximum nesting depth for merged packs.
	MaxMergeNestingDepth = 10
)

Recursion and depth limits for DoS prevention.

Variables

This section is empty.

Functions

func LimitedCopy

func LimitedCopy(dst io.Writer, src io.Reader, limit int64) (int64, error)

LimitedCopy copies from src to dst with a byte limit. Returns ErrSizeLimitExceeded if the source exceeds the limit. Unlike io.Copy with LimitReader alone, this explicitly errors rather than silently truncating.

func LimitedCopyOp

func LimitedCopyOp(dst io.Writer, src io.Reader, limit int64, op string) (int64, error)

LimitedCopyOp is like LimitedCopy but includes an operation name in errors.

func ReadAllWithLimit

func ReadAllWithLimit(r io.Reader, limit int64) ([]byte, error)

ReadAllWithLimit reads at most limit bytes and returns ErrSizeLimitExceeded when input is larger.

func ReadAllWithLimitOp

func ReadAllWithLimitOp(r io.Reader, limit int64, op string) ([]byte, error)

ReadAllWithLimitOp reads all data from r up to limit bytes. The op parameter provides context for error messages.

Types

type Budget

type Budget struct {
	// contains filtered or unexported fields
}

Budget tracks cumulative resource usage across operations. Budget is safe for concurrent use.

Example usage:

budget := limits.NewBudget(limits.MaxPackSizeBytes, limits.MaxArtifactCount)
for _, file := range files {
    if !budget.ReserveBytes(file.Size) {
        return errors.New("pack size limit exceeded")
    }
    if !budget.ReserveCount() {
        return errors.New("artifact count limit exceeded")
    }
    // process file
}

func NewBudget

func NewBudget(maxBytes int64, maxCount int) *Budget

NewBudget creates a budget with the specified byte and count limits. Pass 0 for a limit to disable that constraint.

func NewBytesBudget

func NewBytesBudget(maxBytes int64) *Budget

NewBytesBudget creates a budget with only a byte limit.

func (*Budget) BytesRemaining

func (b *Budget) BytesRemaining() int64

BytesRemaining returns the remaining bytes in the budget.

func (*Budget) BytesUsed

func (b *Budget) BytesUsed() int64

BytesUsed returns the number of bytes used from the budget.

func (*Budget) CountRemaining

func (b *Budget) CountRemaining() int

CountRemaining returns the remaining count in the budget. Returns math.MaxInt if the budget is unlimited or remaining exceeds int range.

func (*Budget) ReleaseBytes

func (b *Budget) ReleaseBytes(n int64)

ReleaseBytes releases previously reserved bytes back to the budget. This is useful when a reservation was made but the operation was cancelled.

func (*Budget) ReserveBytes

func (b *Budget) ReserveBytes(n int64) bool

ReserveBytes attempts to atomically reserve n bytes from the budget. Returns true if reservation succeeded, false if budget would be exceeded.

SECURITY: Uses compare-and-swap to prevent concurrent readers from overshooting.

func (*Budget) ReserveBytesUpTo

func (b *Budget) ReserveBytesUpTo(n int64) int64

ReserveBytesUpTo attempts to reserve up to n bytes, reserving as much as available. Returns the number of bytes actually reserved (may be less than n if near limit). Returns 0 if no budget is available.

func (*Budget) ReserveCount

func (b *Budget) ReserveCount() bool

ReserveCount attempts to reserve one count from the budget. Returns true if reservation succeeded, false if count limit reached.

type ErrBudgetExhausted

type ErrBudgetExhausted struct {
	Limit int64
	Used  int64
}

ErrBudgetExhausted is returned when an operation's budget is exhausted.

func (*ErrBudgetExhausted) Error

func (e *ErrBudgetExhausted) Error() string

type ErrRecursionLimitExceeded

type ErrRecursionLimitExceeded struct {
	Depth    int
	MaxDepth int
}

ErrRecursionLimitExceeded is returned when recursion depth exceeds the limit.

func (*ErrRecursionLimitExceeded) Error

func (e *ErrRecursionLimitExceeded) Error() string

type ErrSizeLimitExceeded

type ErrSizeLimitExceeded struct {
	Limit int64
	Op    string // optional: operation name for context
}

ErrSizeLimitExceeded is returned when a read operation exceeds the size limit.

func (*ErrSizeLimitExceeded) Error

func (e *ErrSizeLimitExceeded) Error() string

type LimitedWriter

type LimitedWriter struct {
	// contains filtered or unexported fields
}

LimitedWriter forwards writes up to limit bytes and discards the remainder. It reports len(p) to avoid breaking subprocess pipes.

LimitedWriter is safe for concurrent use.

func NewLimitedWriter

func NewLimitedWriter(w io.Writer, limit int64) *LimitedWriter

NewLimitedWriter creates a writer that accepts at most limit bytes. Excess bytes are silently discarded (returns original length to caller).

func (*LimitedWriter) Truncated

func (l *LimitedWriter) Truncated() bool

Truncated reports whether writes hit the configured byte limit.

func (*LimitedWriter) Write

func (l *LimitedWriter) Write(p []byte) (n int, err error)

Write implements io.Writer. Bytes beyond the limit are silently discarded. Write is safe for concurrent use.

func (*LimitedWriter) Written

func (l *LimitedWriter) Written() int64

Written returns the number of bytes written (not discarded).

type RecursionGuard

type RecursionGuard struct {
	// contains filtered or unexported fields
}

RecursionGuard prevents stack overflow from deep recursion. NOT safe for concurrent use - use one per goroutine.

Example usage:

func traverse(node *Node, guard *limits.RecursionGuard) error {
    if err := guard.Enter(); err != nil {
        return err
    }
    defer guard.Leave()
    // process node and recurse...
}

func NewRecursionGuard

func NewRecursionGuard(maxDepth int) *RecursionGuard

NewRecursionGuard creates a guard with the specified max depth.

func (*RecursionGuard) Depth

func (g *RecursionGuard) Depth() int

Depth returns the current recursion depth.

func (*RecursionGuard) Enter

func (g *RecursionGuard) Enter() error

Enter increments depth and returns error if max exceeded. Must call Leave() after Enter() returns nil. If Enter() returns an error, depth is NOT incremented and Leave() should NOT be called.

func (*RecursionGuard) Leave

func (g *RecursionGuard) Leave()

Leave decrements depth. Must be called after Enter returns nil.

type SizeLimit

type SizeLimit int64

SizeLimit is a typed size limit for parsing operations. Using a distinct type prevents accidentally passing arbitrary int64 values.

var (
	// ConfigFile is the limit for epack.yaml and similar config files (1 MB).
	ConfigFile SizeLimit = 1 * 1024 * 1024

	// LockFile is the limit for epack.lock.yaml (10 MB).
	// Lockfiles can be larger due to multi-platform digests.
	LockFile SizeLimit = 10 * 1024 * 1024

	// JSONResponse is the limit for JSON API responses (10 MB).
	// Use for HTTP responses, adapter outputs, etc.
	JSONResponse SizeLimit = 10 * 1024 * 1024

	// Manifest is the limit for manifest.json in packs (10 MB).
	Manifest SizeLimit = 10 * 1024 * 1024

	// Artifact is the limit for a single artifact (100 MB).
	Artifact SizeLimit = 100 * 1024 * 1024

	// Attestation is the limit for Sigstore attestation bundles (1 MB).
	Attestation SizeLimit = 1 * 1024 * 1024

	// Catalog is the limit for component catalog files (5 MB).
	Catalog SizeLimit = 5 * 1024 * 1024

	// CatalogMeta is the limit for catalog.json.meta (64 KB).
	CatalogMeta SizeLimit = 64 * 1024

	// ToolResult is the limit for tool result.json (10 MB).
	// SECURITY: Tool output is untrusted.
	ToolResult SizeLimit = 10 * 1024 * 1024

	// CollectorOutput is the limit for a single collector's stdout (64 MB).
	CollectorOutput SizeLimit = 64 * 1024 * 1024

	// AssetDownload is the limit for downloaded assets from GitHub (500 MB).
	AssetDownload SizeLimit = 500 * 1024 * 1024
)

Size limit constants for parsing operations. These are the ONLY valid values for safejson/safeyaml/safefile functions.

func (SizeLimit) Bytes

func (s SizeLimit) Bytes() int64

Bytes returns the limit as an int64 for use with APIs that require it.
