Documentation
Index ¶
Constants ¶
const HandlerFirecracker = "firecracker"
HandlerFirecracker is the config.toml runtime.handler value that selects the raw Firecracker driver. Linux-only. See docs/plan.md §4.1 for why hpcc drives Firecracker directly instead of going through firecracker-containerd.
const HandlerReallyReallyDangerous = "really_really_dangerous"
HandlerReallyReallyDangerous is the config.toml runtime.handler value that selects DangerouslyExecOnHost. The string is intentionally awful so nobody sets it without knowing what they're doing — every compile runs as a child process of the worker, on the worker host, with the worker's full file system, network, and credentials. There is no kernel boundary, no NIC removal, no audit envelope. Development only.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Container ¶
type Container interface {
	ID() string
	TenantID() string
	ImageDigest() string
	State() gen.VMState

	// Exec runs one compiler invocation inside the container and blocks
	// until the process exits, ctx is cancelled, or the container dies.
	// On ctx cancellation the in-container process is killed via the
	// shim's process API and ctx.Err() is returned.
	Exec(ctx context.Context, req ExecRequest) (ExecResult, error)

	// Stop tears the container down (SIGTERM → grace → SIGKILL) and
	// reaps the VM. Idempotent; safe to call after the container has
	// already exited on its own.
	Stop(ctx context.Context) error
}
Container is one running per-tenant sandbox. One container == one VM (Firecracker microVM on Linux, Hyper-V utility VM on Windows), so this is also what the worker reports in WorkerHeartbeat.active_vms.
Methods are safe for concurrent use; Exec calls fan out as separate Task.Execs against the same underlying task.
type ContainerSpec ¶
type ContainerSpec struct {
	ID          string // worker-unique container id, also the VM id reported in heartbeats
	TenantID    string
	ImageDigest string // prepared-image digest (output of image.Store)
	VCPUs       int32
	MemoryBytes int64
}
ContainerSpec describes one per-tenant container. The runtime translates this into a backend-native shape: a Firecracker VMM configuration on Linux, an OCI runtime spec for the containerd + hcsshim path on Windows.
Per-RPC source/output mount paths are deliberately NOT here — they live on ExecRequest instead. Containers are reusable across compiles (see PooledRuntime), so the spec captures only stable, tenant-level shape (identity, sizing); the mounts that change per compile bind at Exec time.
type DangerouslyExecOnHost ¶
type DangerouslyExecOnHost struct{}
DangerouslyExecOnHost is a Runtime implementation that does NOT isolate compiles at all — every Exec is forked from the worker process via os/exec, sharing the worker's PID namespace, file system, network namespace, and uid. It exists solely to exercise the worker→runtime call path during development; selecting it in production puts the entire worker host inside the trust boundary.
The /src and /out "mounts" are simulated by string-substituting the in-container paths in argv with the host-side SrcHostPath/OutHostPath from ExecRequest at exec time. From the rest of the system's perspective (worker handler, runtimeExecutor, compiler package), argv looks the same as it would for a real Firecracker run.
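That substitution is a deliberately naive prefix rewrite, not a mount. A stand-alone sketch of the technique — rewriteArgv is a hypothetical helper, and a real backend would bind-mount instead of rewriting strings:

```go
package main

import (
	"fmt"
	"strings"
)

// rewriteArgv simulates the /src and /out "mounts" by replacing the
// in-container path prefixes in argv with host-side paths. Naive on
// purpose: it rewrites every occurrence, which is fine for compiler
// argv but would not survive adversarial input.
func rewriteArgv(argv []string, srcHost, outHost string) []string {
	out := make([]string, len(argv))
	for i, a := range argv {
		if srcHost != "" {
			a = strings.ReplaceAll(a, "/src", srcHost)
		}
		if outHost != "" {
			a = strings.ReplaceAll(a, "/out", outHost)
		}
		out[i] = a
	}
	return out
}

func main() {
	argv := []string{"cc", "-c", "/src/main.c", "-o", "/out/main.o"}
	fmt.Println(rewriteArgv(argv, "/tmp/rpc1/src", "/tmp/rpc1/out"))
}
```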
func (DangerouslyExecOnHost) Close ¶
func (DangerouslyExecOnHost) Close() error
func (DangerouslyExecOnHost) Start ¶
func (DangerouslyExecOnHost) Start(ctx context.Context, spec ContainerSpec) (Container, error)
type ExecRequest ¶
type ExecRequest struct {
	ExecID         string // unique within the container; surfaces in backend events (vsock RPC id on Linux, containerd task id on Windows)
	Argv           []string
	Env            []string
	Cwd            string
	SrcHostPath    string
	OutHostPath    string
	Stdin          io.Reader // optional
	Stdout, Stderr io.Writer // optional; nil discards
}
ExecRequest is one Task.Exec — a fully-resolved toolchain invocation. No shell; argv goes straight to execve in the guest.
SrcHostPath / OutHostPath are the per-Exec bind-mount targets. The runtime is responsible for making them visible at /src and /out inside the sandbox for the lifetime of this Exec; an empty string means "no such mount." Real backends (raw Firecracker on Linux, hcsshim on Windows) attach these at Exec time so a pooled container can serve compiles for different RPC tmpdirs without restart.
type ExecResult ¶
type ExecResult struct {
	ExitCode int
}
ExecResult is what Exec returns when the process exited cleanly under the runtime's control. A non-zero ExitCode is *not* an error — the caller decides whether a compiler failure is fatal.
type Firecracker ¶
type Firecracker struct {
	// contains filtered or unexported fields
}
Firecracker is the raw Firecracker Runtime. It owns the per-tenant VMM lifecycle (jailer + firecracker), wiring the prepared ext4 rootfs and hpcc-supplied vmlinux into a freshly-jailed chroot and driving the Firecracker API to the InstanceStart action.
The host-side vsock agent and Exec dispatch are not wired here yet — Exec returns errFirecrackerExecNotImplemented. Boot is independently testable now so the rest of the worker plumbing (image pull, pool, runtime.Select) can be exercised end-to-end against a real microVM.
func NewFirecracker ¶
func NewFirecracker(opts FirecrackerOptions) (*Firecracker, error)
NewFirecracker validates required paths/credentials up front so a misconfigured worker fails at startup rather than on the first Compile RPC. All path fields and UID/GID must be set; BootArgs may be empty (defaultBootArgs is used).
func (*Firecracker) Close ¶
func (f *Firecracker) Close() error
func (*Firecracker) Start ¶
func (f *Firecracker) Start(ctx context.Context, spec ContainerSpec) (Container, error)
type FirecrackerOptions ¶
type FirecrackerOptions struct {
	FirecrackerBin string
	JailerBin      string
	KernelImage    string
	RootfsDir      string
	RunDir         string
	UID, GID       int
	BootArgs       string
}
FirecrackerOptions is the runtime's host-side configuration. All path fields are required; UID/GID are the non-root credentials jailer drops to before exec'ing firecracker. BootArgs is optional and falls back to defaultBootArgs.
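NewFirecracker's fail-fast contract ("all path fields and UID/GID must be set") reduces to a plain field check. The validate helper below is a hypothetical sketch of that contract, restating FirecrackerOptions from this page; whether the real code rejects UID/GID 0 specifically is an assumption, inferred from jailer dropping to non-root credentials:

```go
package main

import "fmt"

type FirecrackerOptions struct {
	FirecrackerBin string
	JailerBin      string
	KernelImage    string
	RootfsDir      string
	RunDir         string
	UID, GID       int
	BootArgs       string // optional; empty falls back to a default
}

// validate rejects a misconfigured worker at startup rather than on
// the first Compile RPC. Hypothetical sketch of the documented contract.
func (o FirecrackerOptions) validate() error {
	paths := map[string]string{
		"FirecrackerBin": o.FirecrackerBin,
		"JailerBin":      o.JailerBin,
		"KernelImage":    o.KernelImage,
		"RootfsDir":      o.RootfsDir,
		"RunDir":         o.RunDir,
	}
	for name, v := range paths {
		if v == "" {
			return fmt.Errorf("firecracker: %s is required", name)
		}
	}
	if o.UID <= 0 || o.GID <= 0 {
		return fmt.Errorf("firecracker: non-root UID/GID required")
	}
	return nil
}

func main() {
	// Missing everything but one binary path: fails at startup.
	err := FirecrackerOptions{FirecrackerBin: "/usr/bin/firecracker"}.validate()
	fmt.Println(err != nil)
}
```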
type Options ¶
type Options struct {
	Firecracker FirecrackerOptions
}
Options bundles backend-specific runtime knobs. Select reads only the field for the chosen handler and ignores the rest, so callers can populate every backend up front and let Select pick.
type PooledRuntime ¶
type PooledRuntime struct {
	// contains filtered or unexported fields
}
PooledRuntime wraps a Runtime with a per-tenant container pool. The first Start for a (TenantID, ImageDigest) pair forwards to the inner runtime; on subsequent Starts an idle pooled container — if any — is handed back instead, amortizing VM-boot cost across compiles.
Containers returned to the caller are wrapped: their Stop releases to the pool rather than tearing the inner container down. The reaper goroutine evicts entries on two clocks:
- idleTTL — parked entries older than this since their last park are stale and get torn down.
- maxLifetime — entries whose original (cold-start) age exceeds this are torn down regardless of how recently they were used. This is §4.2's "hard session timeout (e.g. shift change, N hours)": long-lived per-tenant state accumulates and someone will eventually ask what's in it.
A maxParked cap evicts the oldest parked container when a new one would push the pool over the limit. Close drains everything.
Key choice — (TenantID, ImageDigest) — is what defines a reusable session in §4.4 of the design. VCPUs/MemoryBytes are worker-global (driven by cfg.VM) so they don't enter the key. Per-RPC source and output mount paths live on ExecRequest, not ContainerSpec, so they don't enter the key either — that's what makes pooling tractable.
func NewPooledRuntime ¶
func NewPooledRuntime(inner Runtime, idleTTL, maxLifetime time.Duration, maxParked int) *PooledRuntime
NewPooledRuntime wraps inner and starts the reaper if either timeout is enabled.
idleTTL is the maximum time a parked container may sit in the pool before it gets evicted; pass 0 to disable idle reaping.
maxLifetime is the hard ceiling on a container's wall-clock age, measured from its original cold start. Past this, the container is torn down at the next pop or park regardless of how recently it was used. Pass 0 to disable.
maxParked is the upper bound on parked containers across all keys; when a park would exceed it, the oldest parked container is evicted first. Pass 0 for unlimited (the dev/test default).
func (*PooledRuntime) Close ¶
func (p *PooledRuntime) Close() error
Close drains the pool — every parked container gets a real Stop — and forwards to the inner runtime's Close. After Close the pool rejects further Starts; outstanding pooledContainers will fall through to the inner Stop on their next Stop call.
func (*PooledRuntime) Start ¶
func (p *PooledRuntime) Start(ctx context.Context, spec ContainerSpec) (Container, error)
Start returns a pooled container if one is available for spec's (TenantID, ImageDigest); otherwise it forwards to the inner runtime. The caller gets a wrapper whose Stop releases back to the pool.
On a warm hit, createdAt rides through from the popped entry into the new wrapper so the maxLifetime clock is anchored on the original cold start, not the latest reuse — that's what makes "hard session timeout" actually hard.
type Runtime ¶
type Runtime interface {
	// Start launches a new per-tenant container from a prepared image
	// (pause binary already injected as PID 1) and waits until it's
	// ready to accept Execs. The returned Container is owned by the
	// caller and must be Stopped to release the underlying VM.
	Start(ctx context.Context, spec ContainerSpec) (Container, error)

	// Close releases client-level resources (containerd connection).
	// Outstanding Containers must be Stopped first.
	Close() error
}
Runtime owns per-tenant compile sandboxes. Implementations are backend-specific — a raw Firecracker driver on Linux (hpcc owns image pull, ext4 rootfs build, VMM lifecycle, and the host-side vsock channel to the in-VM agent), containerd + hcsshim (Hyper-V isolation) on Windows — but the surface is the same: start a container from a prepared image, dispatch Execs into it, stop it. Image preparation (pulling, pause/agent injection) is the image package's job and is out of scope here.
func Select ¶
Select returns the Runtime implementation that matches the configured runtime.handler value. Unimplemented backends return a clear error at worker startup rather than panicking later on the first Compile.
Recognized values:
"really_really_dangerous" — DangerouslyExecOnHost; dev only.
"firecracker" — raw Firecracker driver (Linux).
"runhcs-wcow-hypervisor" — containerd + hcsshim Hyper-V isolation (Windows); not implemented.
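Select's signature is not shown on this page, but the dispatch it describes is a plain switch over the handler string. The sketch below is hypothetical — selectHandler returns a name in place of a real Runtime, and the error wording is illustrative:

```go
package main

import "fmt"

// selectHandler is a hypothetical reduction of Select's dispatch: map
// each recognized runtime.handler value to a backend, and return a
// clear error (rather than panicking later) for everything else.
func selectHandler(handler string) (string, error) {
	switch handler {
	case "really_really_dangerous":
		return "DangerouslyExecOnHost", nil // dev only
	case "firecracker":
		return "Firecracker", nil // Linux
	case "runhcs-wcow-hypervisor":
		return "", fmt.Errorf("runtime: handler %q not implemented", handler)
	default:
		return "", fmt.Errorf("runtime: unknown runtime.handler %q", handler)
	}
}

func main() {
	name, err := selectHandler("firecracker")
	fmt.Println(name, err)
}
```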