package docker

v1.0.0-alpha.31
Published: May 14, 2026 License: MIT Imports: 40 Imported by: 0

Documentation

Overview

Package docker implements executor.Provider on top of the Docker Engine via the moby/moby Go SDK. Each agent runs as an ephemeral container against a small base image (default: distroless/static), with the agent's materialised content directory bind-mounted at /workspace and the runtime + each BIN image-mounted read-only at /opt/runtime / /opt/bins/<name>.

The CLI (`docker`) is intentionally not used. All Docker interactions go through the SDK so tests can swap a mock implementing the small Client interface defined here.

Exec (the async-job path) for the docker backend spawns a per-job container from the agent's base image with the SAME mount + env layout the runtime container uses (workspace bind-mounted at /workspace, BIN images mounted at /opt/bins/<name>/, locked-down env). The job container is labelled so the daemon's boot path can sweep ghost containers left by a previous incarnation that was killed -9 mid-flight.

stdin: when the payload is non-empty, the exec path attaches to the container's stdio BEFORE start, writes the payload to the hijacked write side, half-closes it (so the BIN reads EOF and continues), then starts the container. Output streaming still uses ContainerLogs.
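The half-close handshake can be sketched with a plain TCP pair standing in for the hijacked attach connection (an assumption: the real hijacked conn behaves like a *net.TCPConn whose CloseWrite delivers EOF to the peer while leaving the read side open):

```go
package main

import (
	"fmt"
	"io"
	"net"
)

// sendStdin writes payload to the attach connection and half-closes the
// write side, so the reader sees EOF but the connection stays readable.
func sendStdin(conn *net.TCPConn, payload []byte) error {
	if _, err := conn.Write(payload); err != nil {
		return err
	}
	return conn.CloseWrite() // half-close: peer's reads return EOF
}

// roundTrip spins up a loopback listener standing in for the container's
// stdio, sends payload through sendStdin, and returns what the "BIN"
// side read before hitting EOF.
func roundTrip(payload string) string {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()

	done := make(chan string, 1)
	go func() {
		conn, err := ln.Accept()
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		b, _ := io.ReadAll(conn) // returns once the peer half-closes
		done <- string(b)
	}()

	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	if err := sendStdin(conn.(*net.TCPConn), []byte(payload)); err != nil {
		panic(err)
	}
	return <-done
}

func main() {
	fmt.Printf("%q\n", roundTrip("payload\n"))
}
```

Without the half-close, a BIN that drains stdin to EOF would block forever; without attaching before start, a fast-exiting BIN could race past the write.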

Index

Constants

View Source
const DefaultBaseImage = "gcr.io/distroless/static-debian12:nonroot"

DefaultBaseImage is the container image agents launch from when no override is supplied. distroless/static is libc-free, has a non-root user (UID 65532), and weighs ~2 MB. Static Go binaries (the runtime + every BIN) don't need libc, so this is enough.

View Source
const MinEngineVersion = 28

MinEngineVersion is the lowest Docker Engine major release that supports `--mount type=image` for non-experimental container creation. Docker 28.0 (February 2025) is when the type validator stopped rejecting it on the daemon side; engines 25–27 advertise the API field but ContainerCreate still errors with "mount type unknown". We fail fast at Provider construction with a pointed error message rather than letting the failure surface midway through Run.
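The fail-fast gate amounts to parsing the major component of the daemon's version string and comparing it against MinEngineVersion. A minimal standalone sketch (the helper name engineTooOld is illustrative, not the package's own):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

const minEngineMajor = 28

// engineTooOld reports whether a daemon version string like "27.3.1" is
// below the minimum supported major release.
func engineTooOld(version string) bool {
	major, err := strconv.Atoi(strings.SplitN(version, ".", 2)[0])
	if err != nil {
		return true // unparseable version: treat as unsupported
	}
	return major < minEngineMajor
}

func main() {
	fmt.Println(engineTooOld("27.3.1"), engineTooOld("28.0.4"))
}
```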

Variables

View Source
var ErrDaemonUnreachable = errors.New("docker daemon unreachable")

ErrDaemonUnreachable is returned by Verify when Info() / Version() can't reach the daemon at all (socket missing, permission denied, etc.). Wraps the underlying SDK error so callers can use errors.Is.

View Source
var ErrEngineTooOld = errors.New("docker engine too old")

ErrEngineTooOld is returned by Verify when the daemon's Engine version is below MinEngineVersion.

View Source
var ErrExecNotImplemented = errors.New("docker executor: Exec is now implemented; this error is unreachable")

ErrExecNotImplemented stays for backward compatibility with any caller that still imports it from the old stub. New code paths should use the real Exec implementation above.

Deprecated: docker.Agent.Exec is now implemented; this var will be removed in a follow-up.

View Source
var ErrNoSnapshotter = errors.New("containerd snapshotter not enabled")

ErrNoSnapshotter is returned by Verify when the daemon doesn't expose the containerd snapshotter — required for openotters' custom OCI mediatypes.

Functions

func NewClient

func NewClient(opts ...mobyclient.Opt) (*mobyclient.Client, error)

NewClient constructs a real *mobyclient.Client. Endpoint resolution order:

  1. DOCKER_HOST env var (and DOCKER_TLS_VERIFY / DOCKER_CERT_PATH / DOCKER_API_VERSION friends, via mobyclient.FromEnv).
  2. Auto-detected sockets in user-home or runtime dirs that non-privileged Docker installs put their endpoint at:
     - ~/.colima/default/docker.sock (Colima default profile)
     - ~/.docker/run/docker.sock (Docker Desktop, recent)
     - $XDG_RUNTIME_DIR/docker.sock (rootless Linux Docker)
  3. SDK default (/var/run/docker.sock on Linux/macOS).

The auto-detection covers the common case where the daemon is running on a non-default socket and the `docker` CLI works only because the user's shell has a docker context configured — the SDK's FromEnv doesn't read those contexts, so without this help it would fail with "/var/run/docker.sock: no such file" even though `docker ps` works.

API-version negotiation is enabled by default in the SDK, so no explicit option is needed.

Compile-time assertion below ensures *mobyclient.Client satisfies our trimmed-down Client interface.

func Verify

func Verify(ctx context.Context, cli Client) error

Verify probes the daemon at Provider construction time:

  • daemon is reachable (Info() / ServerVersion() succeed),
  • Engine version is ≥ MinEngineVersion,
  • containerd snapshotter is enabled (the daemon's Info.Driver reports a containerd snapshotter rather than the classic graphdriver).

Returns a multi-line, copy-pasteable error message when any check fails. Each error wraps a sentinel (ErrDaemonUnreachable, ErrEngineTooOld, ErrNoSnapshotter) so callers can switch on failure mode.

Types

type Agent

type Agent struct {
	// contains filtered or unexported fields
}

Agent is the Docker-backed implementation of executor.Agent.

func (*Agent) Addr

func (a *Agent) Addr() string

Addr returns the host loopback address the runtime's gRPC server is published on. Empty until create() has reserved a port (which happens at Run time) but otherwise stable for the agent's life.

func (*Agent) DeleteSession

func (a *Agent) DeleteSession(ctx context.Context, sessionID string) error

DeleteSession drops sessionID from the runtime's session store. Satisfies executor.SessionDeleter.

func (*Agent) Exec

func (a *Agent) Exec(ctx context.Context, bin string, args []string, stdin string) executor.ExecResult

Exec runs `bin args...` in a new container spawned from the agent's base image, with the agent's workspace bind-mounted and every declared BIN image-mounted at /opt/bins/<name>. Cancellation kills the container; its log stream is captured and demuxed into stdout/stderr.

The agent MUST be initialized (Prepare/Run has resolved its runtime + BINs) before Exec is called — otherwise we don't know what to mount. Calling Exec on an un-initialized agent surfaces in ExecResult.Err.

func (*Agent) FailureReason

func (a *Agent) FailureReason() executor.FailureReason

FailureReason returns the cause when Status() == StatusFailed.

func (*Agent) ListSessionMessages

func (a *Agent) ListSessionMessages(
	ctx context.Context, sessionID string, limit int,
) ([]executor.SessionMessage, error)

ListSessionMessages fetches the persisted message log for sessionID from the runtime's gRPC server. Satisfies executor.SessionReader.

func (*Agent) ListSessions

func (a *Agent) ListSessions(ctx context.Context) ([]executor.SessionInfo, error)

ListSessions enumerates every session in the runtime's memory store. Satisfies executor.SessionLister.

func (*Agent) Prepare

func (a *Agent) Prepare(ctx context.Context) error

Prepare materialises the agent's content workspace. Idempotent.

func (*Agent) Probe

func (a *Agent) Probe(ctx context.Context) error

Probe issues a single Ready() RPC against the runtime inside the container, dialing the host loopback port. Used by the daemon supervisor to gate the Starting → Ready transition.

func (*Agent) Prompt

func (a *Agent) Prompt(ctx context.Context, req executor.PromptRequest, w io.Writer) error

Prompt opens a ChatStream and writes the final assistant response into w, discarding intermediate tool/step/delta events. Mirrors the system executor's Prompter implementation; the only difference is that addr points at the host loopback port mapping back into the container.

func (*Agent) PromptObject

func (a *Agent) PromptObject(ctx context.Context, req executor.ObjectPromptRequest) ([]byte, error)

PromptObject runs a stateless structured-output query against the runtime's gRPC server. The runtime handles JSON-schema parsing and object marshalling; we just relay.

func (*Agent) PromptStream

func (a *Agent) PromptStream(ctx context.Context, req executor.PromptRequest, cb func(executor.PromptEvent)) error

PromptStream opens a ChatStream against the runtime's gRPC server and invokes cb synchronously for every event received.

func (*Agent) Remove

func (a *Agent) Remove(ctx context.Context) error

Remove tears down the container and the agent's on-disk content.

func (*Agent) Run

func (a *Agent) Run(ctx context.Context) error

Run materialises (if needed), pulls images, creates+starts the runtime container, then blocks on wait.

Lifecycle: Pulling (entry — covers MaterializeContent + the docker image pulls for runtime / base / BINs) → Starting (create + start) → (daemon supervisor flips to Ready once readiness probe answers) → Stopped on container exit.

func (*Agent) Runtime

func (a *Agent) Runtime() *executor.Runtime

Runtime returns the resolved runtime descriptor populated at Prepare/Run time.

func (*Agent) Start

func (a *Agent) Start(ctx context.Context) error

Start re-runs a stopped (or failed) agent. Rejects when the agent is already in flight (pulling / starting / ready / working) or removed.

func (*Agent) Status

func (a *Agent) Status() executor.Status

Status returns the current lifecycle state.

func (*Agent) StatusTracker

func (a *Agent) StatusTracker() *executor.StatusTracker

StatusTracker exposes the underlying tracker — see the comment on the system Agent for how the daemon supervisor uses it.

func (*Agent) Stop

func (a *Agent) Stop(ctx context.Context) error

Stop signals the running container to exit and waits for the Run goroutine to return.

func (*Agent) SubscribeStatus

func (a *Agent) SubscribeStatus() (<-chan executor.Status, func())

SubscribeStatus returns a channel of status transitions and a cancel function.

func (*Agent) UUID

func (a *Agent) UUID() uuid.UUID

UUID is the agent's stable identifier across Stop/Start cycles.

type AgentOption

type AgentOption func(*agentDeps)

AgentOption configures one Agent at Create time. Mirrors the system executor's per-agent option mechanism (see agentfile/executor/system/options.go); both packages expose "provider creates agents" + "callers may layer per-agent config" so the openotters daemon's pool.createAgent can thread daemon URL / agent token / log writers / etc. without polluting the abstract executor.Provider.Create signature.

func WithAgentToken

func WithAgentToken(token string) AgentOption

WithAgentToken sets the JWT minted by the daemon for this agent; injected into the container env as OTTERS_AGENT_TOKEN. Empty disables the env var (runtime spawns fine — outbound daemon calls just fail Unauthenticated).

func WithDaemonURL

func WithDaemonURL(url string) AgentOption

WithDaemonURL sets the openotters daemon's TCP endpoint (e.g. http://host.docker.internal:5500) the runtime should dial back to. Bind-mounting the daemon's unix socket would be cleaner, but Docker Desktop / Colima on macOS refuse to bind-mount unix socket files from the host (`stat: operation not supported`), so docker-executor agents always use TCP. Linux gets a host.docker.internal → host-gateway ExtraHosts entry so the hostname resolves the same way as on macOS.

Empty disables OTTERSD_URL injection — the runtime spawns fine, outbound daemon RPCs just fail "no endpoint configured".

type Client

type Client interface {
	Info(
		ctx context.Context, opts mobyclient.InfoOptions,
	) (mobyclient.SystemInfoResult, error)
	ServerVersion(
		ctx context.Context, opts mobyclient.ServerVersionOptions,
	) (mobyclient.ServerVersionResult, error)

	ContainerCreate(
		ctx context.Context, opts mobyclient.ContainerCreateOptions,
	) (mobyclient.ContainerCreateResult, error)
	ContainerStart(
		ctx context.Context, id string, opts mobyclient.ContainerStartOptions,
	) (mobyclient.ContainerStartResult, error)
	ContainerStop(
		ctx context.Context, id string, opts mobyclient.ContainerStopOptions,
	) (mobyclient.ContainerStopResult, error)
	ContainerRemove(
		ctx context.Context, id string, opts mobyclient.ContainerRemoveOptions,
	) (mobyclient.ContainerRemoveResult, error)
	ContainerInspect(
		ctx context.Context, id string, opts mobyclient.ContainerInspectOptions,
	) (mobyclient.ContainerInspectResult, error)
	ContainerLogs(
		ctx context.Context, id string, opts mobyclient.ContainerLogsOptions,
	) (mobyclient.ContainerLogsResult, error)
	ContainerList(
		ctx context.Context, opts mobyclient.ContainerListOptions,
	) (mobyclient.ContainerListResult, error)
	// ContainerAttach returns a hijacked connection wired up to the
	// container's stdio. The async-exec path uses it to write a
	// stdin payload before ContainerStart, then half-closes the
	// write side so the BIN reads EOF and continues.
	ContainerAttach(
		ctx context.Context, id string, opts mobyclient.ContainerAttachOptions,
	) (mobyclient.ContainerAttachResult, error)

	ImagePull(
		ctx context.Context, ref string, opts mobyclient.ImagePullOptions,
	) (mobyclient.ImagePullResponse, error)
	ImagePush(
		ctx context.Context, ref string, opts mobyclient.ImagePushOptions,
	) (mobyclient.ImagePushResponse, error)
	ImageList(
		ctx context.Context, opts mobyclient.ImageListOptions,
	) (mobyclient.ImageListResult, error)
	ImageInspect(
		ctx context.Context, imageID string, opts ...mobyclient.ImageInspectOption,
	) (mobyclient.ImageInspectResult, error)
	ImageRemove(
		ctx context.Context, imageID string, opts mobyclient.ImageRemoveOptions,
	) (mobyclient.ImageRemoveResult, error)
	ImageTag(
		ctx context.Context, opts mobyclient.ImageTagOptions,
	) (mobyclient.ImageTagResult, error)
	ImageLoad(
		ctx context.Context, input io.Reader, opts ...mobyclient.ImageLoadOption,
	) (mobyclient.ImageLoadResult, error)
	ImageSave(
		ctx context.Context, imageIDs []string, opts ...mobyclient.ImageSaveOption,
	) (mobyclient.ImageSaveResult, error)

	Close() error
}

Client is the strict subset of the moby/moby SDK that the executor relies on. Defining it here (rather than depending on the full `*mobyclient.Client` directly) keeps the test surface tiny: mockery generates a mock that only has to implement the methods we genuinely call.

The real implementation is *mobyclient.Client; in tests, swap a mock via WithClient(MockClient).

The moby SDK's v0.4.x convention is `(ctx, [id,] Options) (Result, error)` — this interface mirrors that signature exactly so the real client satisfies it without an adapter (the compile-time assertion below enforces that).

type Provider

type Provider struct {
	// contains filtered or unexported fields
}

Provider implements executor.Provider against a Docker Engine via the moby/moby Go SDK.

func NewProvider

func NewProvider(root billy.Filesystem, storeFor StoreFor, opts ...ProviderOption) (*Provider, error)

NewProvider constructs a Docker provider. Pass WithClient to inject a mock in tests; production callers pass nothing and the Provider builds a default Client via NewClient (which honours DOCKER_HOST and negotiates the API version with the daemon).

root is the host directory that holds per-agent materialised content (one subdir per agent UUID). storeFor is consulted at Create time to load the agent OCI artifact. Both have the same shape as the system executor's equivalents.

func (*Provider) Close

func (p *Provider) Close() error

Close releases the underlying SDK client.

func (*Provider) Create

func (p *Provider) Create(
	ctx context.Context, id uuid.UUID, ref spec.Reference, overrides ...spec.Override,
) (executor.Agent, error)

Create returns a new Agent bound to the provided ID + image reference. The agent is not yet started; call Run / Start.

func (*Provider) CreateWithOptions

func (p *Provider) CreateWithOptions(
	_ context.Context, id uuid.UUID, ref spec.Reference,
	agentOpts []AgentOption, overrides ...spec.Override,
) (executor.Agent, error)

CreateWithOptions is the per-agent-options variant of Create. The openotters daemon's pool.createAgent uses it to thread daemon URL / agent token / future per-agent injections without changing the abstract executor.Provider.Create signature. Purely additive — Create is a thin wrapper that calls this with no AgentOptions.

func (*Provider) Destroy

func (p *Provider) Destroy(_ context.Context) error

Destroy removes all per-agent directories and any container labelled io.openotters.agent.

func (*Provider) Load

func (p *Provider) Load(_ context.Context) ([]executor.Agent, error)

Load lists existing agent directories on disk. Currently a stub — re-binding running containers to in-process Agent values is a follow-up; on daemon restart with the docker executor, agents are dropped from the pool and the user re-runs them.

func (*Provider) Registry

func (p *Provider) Registry() executor.Registry

Registry returns the Docker-backed executor.Registry façade.

type ProviderOption

type ProviderOption func(*Provider)

ProviderOption configures the Docker Provider.

func WithBaseImage

func WithBaseImage(ref string) ProviderOption

WithBaseImage overrides the base image used to launch every agent container. Default is "gcr.io/distroless/static-debian12:nonroot". Pass "scratch" if you need a base with no /etc/passwd at all.

func WithClient

func WithClient(c Client) ProviderOption

WithClient overrides the moby/moby client. Production callers omit this so the Provider builds one via NewClient (DOCKER_HOST + API negotiation from env). Tests pass a mock implementing the Client interface to drive lifecycle calls without a real daemon.

func WithLogDir

func WithLogDir(dir string) ProviderOption

WithLogDir captures each agent's runtime stdout/stderr to <dir>/<agent-id>.log. Same shape as the system executor's option; not yet wired through the docker Agent's container-logs flow but reserved for the follow-up that pipes ContainerLogs into a file.

func WithModelResolver

func WithModelResolver(r model.Resolver) ProviderOption

WithModelResolver wires a model.Resolver onto the docker Provider. The Provider passes it to each Agent so credentials are looked up at materialise time.

func WithMounts

func WithMounts(m []executor.Mount) ProviderOption

WithMounts attaches user mounts (`-v`) to every agent the Provider creates. Same semantics as the system executor.

func WithSkipVerify

func WithSkipVerify() ProviderOption

WithSkipVerify disables the daemon Verify() probe at NewProvider time. Tests use it to construct a Provider against a mock without the real Info/ServerVersion calls; production callers should not set it.

func WithUsageFetcher

func WithUsageFetcher(f agentoci.UsageFetcher) ProviderOption

WithUsageFetcher overrides the OCI fetcher used to read each BIN's USAGE.md body at materialisation time. Default is agentoci.RemoteUsageFetcher() (talks directly to the upstream registry); the openotters daemon swaps in a caching variant that reads through the embedded registry. Nil disables doc extraction — the runtime still sees the stamped Docs.Usage path but reads no body.

type Store

type Store struct {
	// contains filtered or unexported fields
}

Store is an oras.Target backed by the Docker daemon's image store. It exists so the docker executor can use Docker as the One True Registry — `otters image build`, `bin build`, `image ls`, `image rm`, `image push/pull` all flow through `cli.Image*` instead of an embedded oras-go HTTP server.

Wire shape: Push accumulates blobs/manifests in memory; Tag finalises the staged content into an OCI image layout tar and streams it through cli.ImageLoad. Resolve / Fetch / Exists serve from the cache when present, falling back to cli.ImageSave for content the daemon already has but we haven't staged.

Why staging instead of a per-blob commit: cli.ImageLoad expects a complete OCI layout (oci-layout file + index.json + blobs/); blob-by-blob writes to the daemon don't exist in the SDK. Build pipelines (build.Build / bin.Build / bin.BuildIndex) push every blob, then tag the manifest once at the end — the same shape as the OCI distribution spec, just batched into a single ImageLoad at Tag time.
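The staged-blob bookkeeping reduces to a map keyed by content digest: Push hashes the blob and parks it in memory, and nothing reaches the daemon until Tag. A simplified sketch (types here are illustrative stand-ins for the oras descriptors, not the Store's real fields):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

type staging struct {
	blobs map[string][]byte // digest → content
}

// push stages a blob under its computed sha256 digest and returns the
// digest, mirroring the "accumulate in memory, commit at Tag" wire shape.
func (s *staging) push(b []byte) string {
	d := fmt.Sprintf("sha256:%x", sha256.Sum256(b))
	s.blobs[d] = append([]byte(nil), b...)
	return d
}

// exists only claims staged content — the conservative answer the
// package's Exists documents for blobs the daemon may or may not have.
func (s *staging) exists(digest string) bool {
	_, ok := s.blobs[digest]
	return ok
}

func main() {
	st := &staging{blobs: map[string][]byte{}}
	d := st.push([]byte(`{"mediaType":"application/vnd.oci.image.config.v1+json"}`))
	fmt.Println(st.exists(d), st.exists("sha256:deadbeef"))
}
```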

One Store is created per (daemon, agent root) — the openotters daemon's storeFor closure spawns one per ref under the docker executor. The Store keeps blobs in memory; agent / bin OCI artifacts are small (even a Linux-arm64 ping BIN is only ~2 MiB gzipped), so memory pressure isn't a concern.

func NewStore

func NewStore(cli Client) *Store

NewStore returns an oras.Target backed by cli. The Store is stateful (accumulates blobs across Push calls) so callers should scope one to each build / read flow rather than sharing across concurrent operations.

func (*Store) Exists

func (s *Store) Exists(_ context.Context, desc ocispec.Descriptor) (bool, error)

Exists reports whether desc is staged or already known to the daemon. We don't probe the daemon for arbitrary blobs — that would require a content-store API we don't have — so we only claim existence when the blob is in our staging map. The build pipeline tolerates Exists=false followed by a successful Push (oras Copy semantics), so the conservative answer is correct.

func (*Store) Fetch

func (s *Store) Fetch(_ context.Context, desc ocispec.Descriptor) (io.ReadCloser, error)

Fetch returns the bytes for desc. Staged blobs win; otherwise we hydrate the entire image whose ref points at one of our loaded manifests, populating the cache so subsequent Fetches succeed from memory.

func (*Store) Push

func (s *Store) Push(_ context.Context, desc ocispec.Descriptor, r io.Reader) error

Push stages the blob at desc in memory. The bytes don't reach the docker daemon until Tag is called for some descriptor that references this blob.

func (*Store) Resolve

func (s *Store) Resolve(ctx context.Context, ref string) (ocispec.Descriptor, error)

Resolve looks up ref → descriptor. Staged tags win (so Tag + Resolve in the same Store sees just-built content). Otherwise we ImageSave the ref, parse the OCI image layout tar, and populate our cache with every blob we saw. Returns ErrNotFound when the daemon has no image at that ref.

func (*Store) Tag

func (s *Store) Tag(ctx context.Context, desc ocispec.Descriptor, ref string) error

Tag commits the staged content as a Docker image-store entry pointed at by ref. Builds an OCI image layout tar containing every staged blob + an index.json with the ref → manifest pointer, then streams it to the daemon via cli.ImageLoad.

Subsequent Resolve(ref) calls return without re-saving thanks to the staged blobs cache. Tagging the same ref twice in a row (e.g. `bin build` writes both `name:latest` and `name:tag`) is supported — each call rebuilds the layout from the same blob set.

type StoreFor

type StoreFor func(ref spec.Reference) oras.ReadOnlyTarget

StoreFor returns an OCI target backing a specific agent's image ref. Same shape as system.StoreFor — the daemon constructs both.
