herd

package module
v0.3.1 Latest
Published: Mar 13, 2026 License: MIT Imports: 18 Imported by: 0

README

Herd - Go Library

Herd is a session-affine process pool for Go. It manages a fleet of OS subprocess "workers" and routes incoming requests to the correct worker based on an arbitrary session ID.

The Core Invariant

1 Session ID → 1 Worker, for the lifetime of the session.

This invariant transforms stateful binaries (browsers, LLM runtimes, REPLs) into multi-tenant services. Because a session always hits the same process, you can maintain in-memory state, KV caches, or local file systems without a complex coordination layer.


🚀 Key Features

  • Session Affinity: Guaranteed routing of a session ID to its unique pinned worker.
  • Auto-Scaling: Dynamically scale workers between min and max bounds based on demand.
  • Idle Eviction (TTL): Automatically reclaim workers that haven't been accessed within a configurable TTL.
  • Health Monitoring: Continuous liveness checks on every worker process; dead workers are automatically replaced.
  • Singleflight Acquisition: Protects against "thundering herd" issues where multiple concurrent requests for a new session ID try to spawn workers simultaneously.
  • Generic Clients: Fully generic Pool[C] supports any client type (HTTP, gRPC, custom structs).
  • Reverse Proxy Helper: Built-in HTTP reverse proxy that handles the full session lifecycle (Acquire → Proxy → Release).

📦 Installation

go get github.com/hackstrix/herd

🌐 Quick Start: Playwright Browser Isolation

Herd is perfect for creating multi-tenant browser automation gateways. In this example, each session ID gets its own dedicated Chrome instance. Because browsers maintain complex state (cookies, local storage, open pages), we configure Herd to never reuse a worker once its TTL expires, avoiding cross-tenant state leaks.

You can find the full, runnable code for this example in examples/playwright/main.go.

1. The Code
package main

import (
	"log"
	"net/http"
	"time"

	"github.com/hackstrix/herd"
	"github.com/hackstrix/herd/proxy"
)

func main() {
	// 1. Spawns an isolated npx playwright run-server per user
	factory := herd.NewProcessFactory("npx", "playwright", "run-server", "--port", "{{.Port}}", "--host", "127.0.0.1").
		WithHealthPath("/").
		WithStartTimeout(1 * time.Minute).
		WithStartHealthCheckDelay(500 * time.Millisecond)

	// 2. Worker reuse is disabled to prevent state leaks between sessions
	pool, _ := herd.New(factory,
		herd.WithAutoScale(1, 5), // auto-scale between 1 and 5 concurrent tenants
		herd.WithTTL(15 * time.Minute),
		herd.WithWorkerReuse(false), // CRITICAL: Never share browsers between users
	)

	// 3. Set up a proxy that routes WebSocket connections by session
	mux := http.NewServeMux()
	mux.Handle("/", proxy.NewReverseProxy(pool, func(r *http.Request) string {
		return r.Header.Get("X-Session-ID") // Pin by X-Session-ID
	}))

	log.Fatal(http.ListenAndServe(":8080", mux))
}
2. Running It

Start the gateway (assuming you are in the examples/playwright directory):

sudo snap install node
npx playwright install --with-deps
# Running without sudo will disable cgroup isolation.
sudo go run .
3. Usage

Connect to the gateway using Python and Playwright. Herd guarantees that all requests with the same X-Session-ID connect to the exact same browser instance, preserving your state (like logins, cookies, and tabs) across reconnections as long as your session TTL hasn't expired!

import asyncio
from playwright.async_api import async_playwright

async def main():
    async with async_playwright() as p:
        # Herd routes based on X-Session-ID header
        browser = await p.chromium.connect(
            "ws://127.0.0.1:8080/", 
            headers={"X-Session-ID": "my-secure-session"}
        )
        
        ctx = await browser.new_context()
        page = await ctx.new_page()
        await page.goto("https://github.com")
        print(await page.title())
        await browser.close()

asyncio.run(main())

🛠️ Quick Start: Ollama Multi-Agent Gateway

Here is an example of turning ollama serve into a multi-tenant LLM gateway where each agent (or user) gets its own dedicated Ollama process. This is especially useful for isolating context windows or KV caches per agent without downloading models multiple times.

You can find the full, runnable code for this example in examples/ollama/main.go.

1. The Code
package main

import (
	"context"
	"log"
	"net/http"
	"time"

	"github.com/hackstrix/herd"
	"github.com/hackstrix/herd/proxy"
)

func main() {
	// 1. Define how to spawn an Ollama worker on a dynamic port
	factory := herd.NewProcessFactory("ollama", "serve").
		WithEnv("OLLAMA_HOST=127.0.0.1:{{.Port}}").
		WithHealthPath("/").
		WithStartTimeout(2 * time.Minute).
		WithStartHealthCheckDelay(1 * time.Second)

	// 2. Create the pool with auto-scaling and TTL eviction
	pool, _ := herd.New(factory,
		herd.WithAutoScale(1, 10),
		herd.WithTTL(10 * time.Minute),
		herd.WithWorkerReuse(true),
	)

	// 3. Set up a session-aware reverse proxy
	mux := http.NewServeMux()
	mux.Handle("/api/", proxy.NewReverseProxy(pool, func(r *http.Request) string {
		return r.Header.Get("X-Agent-ID") // Pin worker by X-Agent-ID header
	}))

	log.Fatal(http.ListenAndServe(":8080", mux))
}
2. Running It

Start the gateway (assuming you are in the examples/ollama directory):

sudo snap install ollama
# Running without sudo will disable cgroup isolation.
sudo go run .
3. Usage

Send requests with an X-Agent-ID header. Herd guarantees that all requests with the same ID will hit the exact same underlying ollama serve instance!

curl -X POST http://localhost:8080/api/chat \
  -H "X-Agent-ID: agent-42" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "messages": [{"role": "user", "content": "Hello! I am agent 42."}]
  }'

🏗️ Architecture

Read the full Architecture & Request Lifecycle Design Document

Herd is built around three core interfaces:

  • Worker[C]: Represents a single running subprocess. It provides the typed client C used to communicate with the process.
  • WorkerFactory[C]: Responsible for spawning new Worker instances. The default ProcessFactory handles local OS binaries.
  • Pool[C]: The central router. It maps session IDs to workers, manages horizontal scaling, and handles the session lifecycle.
Session Lifecycle
  1. Acquire(ctx, sessionID): Retrieves the worker pinned to the ID. If none exists, a free worker is popped from the pool (or a new one is spawned).
  2. Session.Worker.Client(): Use the returned worker to perform your logic.
  3. Session.Release(): Returns the worker to the pool. The bond to the session ID is preserved until the TTL expires or the worker crashes.

⚙️ Configuration Options

  • WithAutoScale(min, max): Sets the floor and ceiling for the process fleet. Default: min 1, max 10.
  • WithTTL(time.Duration): Maximum idle time for a session before it is evicted. Default: 5m.
  • WithHealthInterval(d): How often to poll workers for liveness. Default: 5s.
  • WithStartHealthCheckDelay(d): Delay before starting health checks on newly spawned workers. Default: 1s.
  • WithCrashHandler(func): Callback triggered when a worker exits unexpectedly. Default: nil.
  • WithWorkerReuse(bool): Whether to recycle workers or kill them when a session's TTL expires. Default: true.

📊 Monitoring

Pool.Stats() returns a point-in-time snapshot of both pool state and host resource usage, powered by the herd/observer subpackage.

import (
    "fmt"
    "github.com/hackstrix/herd"
)

stats := pool.Stats()

fmt.Printf("Workers : %d total, %d available\n",
    stats.TotalWorkers, stats.AvailableWorkers)
fmt.Printf("Sessions: %d active, %d acquiring\n",
    stats.ActiveSessions, stats.InflightAcquires)

// Node-level resource snapshot (Linux only; zero on macOS/Windows)
fmt.Printf("Host RAM: %d MB total, %d MB available\n",
    stats.Node.TotalMemoryBytes/1024/1024,
    stats.Node.AvailableMemoryBytes/1024/1024)
fmt.Printf("CPU Idle: %.1f%%\n", stats.Node.CPUIdle*100)

Note: On Linux, Stats() blocks for ~100 ms to measure CPU idle via two /proc/stat samples. Cache the result if you expose it on a high-traffic metrics endpoint.

The Node field is zero-valued on non-Linux platforms — treat a zero TotalMemoryBytes as "metrics unavailable" rather than "machine has no RAM."


📄 License

MIT License. See LICENSE for details.

Documentation

Overview

options.go — functional options for Pool construction.

Change log

  • v0.1: Initial set: WithAutoScale, WithTTL, WithCrashHandler, WithHealthInterval.

Pattern

All public options follow the functional-options pattern (Dave Cheney, 2014). Each option is a function that mutates an internal `config` struct. They are applied in order inside New[C], before any goroutines are started, so there are no synchronization requirements here.

Adding a new option

  1. Add the field to `config`.
  2. Set a sensible default in `defaultConfig()`.
  3. Write a `WithXxx` function below.
  4. Document the zero-value behaviour in the comment.

pool.go — Session-affine process pool with singleflight Acquire.

Core invariant

1 sessionID → 1 Worker, for the lifetime of the session.

Concurrency model (read this before touching Acquire)

There are three maps protected by a single mutex (p.mu):

p.sessions     map[string]Worker[C]       — live sessionID → worker bindings
p.inflight     map[string]chan struct{}   — in-progress Acquire for a session
p.lastAccessed map[string]time.Time       — idle timestamps for the TTL sweeper (see pool_ttl.go)

And one lock-free channel:

p.available chan Worker[C]                — free workers ready to be assigned

The singleflight guarantee for Acquire(ctx, sessionID):

  1. Lock → check sessions → if found: unlock, return (FAST PATH).
  2. Lock → check inflight → if pending: grab chan, unlock, wait on it, then restart from step 1 when chan closes.
  3. Lock → create inflight[sessionID] = make(chan struct{}) → unlock.
  4. Block on <-p.available (or ctx cancel).
  5. Call w.Healthy(ctx). If unhealthy: discard, close inflight chan, return error.
  6. Lock → sessions[sessionID]=w, delete inflight[sid], close(ch) → unlock. Closing ch broadcasts to all goroutines waiting in step 2.
  7. Return &Session[C]{…}

Why a chan struct{} instead of sync.Mutex per session?

  • A mutex would only let one waiter in. We need ALL waiters to unblock when the acquiring goroutine completes (step 6 closes ch → zero-copy broadcast).
  • Closing a channel is safe to call exactly once and is always non-blocking.

Session.Release

Session.Release removes the entry from p.sessions and pushes the worker back onto p.available. It does NOT close or kill the worker — the binary keeps running and will be assigned to the next caller.

Crash path

processWorker.monitor() calls p.onCrash(sessionID) when it detects that the subprocess exited while holding a session. onCrash:

  1. Removes the session from p.sessions.
  2. If there is a pending inflight chan for the same sessionID, closes it so any goroutine waiting in step 2 of Acquire unblocks (they will then get an error because Acquire finds neither a session nor a valid inflight).
  3. Calls the user-supplied crashHandler if set.

factory.go — WorkerFactory implementations.

What lives here

  • ProcessFactory: the default factory. Spawns any OS binary, assigns it an OS-allocated TCP port, and polls GET <address>/health until 200 OK. Users pass this to New[C] so they never have to implement WorkerFactory themselves for the common HTTP case.

Port allocation

Ports are assigned by the OS (net.Listen("tcp", "127.0.0.1:0")). The binary receives its port via the PORT environment variable AND via any arg that contains the literal string "{{.Port}}" — that token is replaced with the actual port number at spawn time.

Health polling

After the process starts, ProcessFactory polls GET <address>/health every 200ms until the start timeout elapses (30 seconds by default; see WithStartTimeout). If the worker never responds with 200 OK, Spawn returns an error and kills the process. The concrete port + binary are logged at startup.

Crash monitoring

A background goroutine calls cmd.Wait(). On exit, if the worker still holds a sessionID the pool's onCrash callback is invoked so the session affinity map is cleaned up.

pool_ttl.go — Idle session TTL sweeper for Pool[C].

Why a separate file?

pool.go owns the concurrency model (Acquire, Release, onCrash). This file owns time-based session lifecycle — kept separate so each file has one job and can be read/reviewed in isolation.

How it works

Every session that enters p.sessions also gets an entry in p.lastAccessed (a map[string]time.Time guarded by the same p.mu lock). Every call to Acquire that hits the fast path "touches" the session by updating its timestamp. The sweeper goroutine wakes up every ttl/2 and evicts sessions whose timestamp is older than cfg.ttl, calling release() on each so the worker is returned to the available channel.

Concurrency

p.lastAccessed is always read/written under p.mu — the same lock that guards p.sessions and p.inflight. No extra synchronization is needed.

Disabling TTL

Pass WithTTL(0) to New[C]. The ttlSweepLoop in pool.go returns immediately when cfg.ttl == 0, so this file's sweeper is never started.

Index

Constants

This section is empty.

Variables

View Source
var ErrWorkerDead = errors.New("worker process has died")

Functions

This section is empty.

Types

type Option

type Option func(*config)

Option is a functional option for New[C].

func WithAutoScale

func WithAutoScale(min, max int) Option

WithAutoScale sets the minimum and maximum number of live workers.

  • min: the pool always keeps at least this many workers healthy, even with zero active sessions.
  • max: hard cap on concurrent workers; Acquire blocks once this limit is reached until a worker becomes available.

Panics if min < 1 or max < min.

func WithCrashHandler

func WithCrashHandler(fn func(sessionID string)) Option

WithCrashHandler registers a callback invoked when a worker's subprocess exits unexpectedly while it holds an active session.

fn receives the sessionID that was lost. Use it to:

  • Delete session-specific state in your database
  • Return an error to the end user ("your session was interrupted")
  • Trigger a re-run of the failed job

fn is called from a background monitor goroutine. It must not block for extended periods; spawn a goroutine if you need to do heavy work.

If WithCrashHandler is not set, crashes are only logged.

func WithHealthInterval

func WithHealthInterval(d time.Duration) Option

WithHealthInterval sets how often the pool's background health-check loop calls Worker.Healthy() on every live worker.

Shorter intervals detect unhealthy workers faster but add more HTTP/RPC overhead. The default (5s) is a good balance for most workloads.

Set d = 0 to disable background health checks entirely. Workers are still checked once during Acquire (step 5 of the singleflight protocol).

func WithStartHealthCheckDelay

func WithStartHealthCheckDelay(d time.Duration) Option

WithStartHealthCheckDelay delays the first health check after spawn, giving the process time to start and settle before herd begins polling it.

func WithTTL

func WithTTL(d time.Duration) Option

WithTTL sets the idle-session timeout.

A session is considered idle when no Acquire call has touched it within d. When the TTL fires, the session is removed from the affinity map and its worker is returned to the available pool.

Set d = 0 to disable TTL (sessions live until explicitly Released or the pool shuts down). This is useful for REPL-style processes where the caller owns the session lifetime.

func WithWorkerReuse

func WithWorkerReuse(reuse bool) Option

WithWorkerReuse controls whether a worker is recycled when its session's TTL expires. If true (the default), the worker is returned to the available pool to serve new sessions. If false, the worker process is killed when the session expires, and a fresh worker is spawned to maintain the minimum pool capacity.

type Pool

type Pool[C any] struct {
	// contains filtered or unexported fields
}

Pool manages a set of workers and routes requests by sessionID. Create one with New[C].

func New

func New[C any](factory WorkerFactory[C], opts ...Option) (*Pool[C], error)

New creates a pool backed by factory, applies opts, and starts min workers. Returns an error if any of the initial workers fail to start.

func (*Pool[C]) Acquire

func (p *Pool[C]) Acquire(ctx context.Context, sessionID string) (*Session[C], error)

Acquire returns the Worker pinned to sessionID.

If sessionID already has a worker, it is returned immediately (fast path). If sessionID is new, a free worker is popped from the available channel, health-checked, and pinned to the session.

If another goroutine is currently acquiring the same sessionID, this call blocks until that acquisition completes and then returns the same worker (singleflight guarantee — no two goroutines can pin different workers to the same sessionID simultaneously).

Blocks until a worker is available or ctx is cancelled.

func (*Pool[C]) Shutdown

func (p *Pool[C]) Shutdown(ctx context.Context) error

Shutdown gracefully stops the pool. It closes all background goroutines and then kills every worker. In-flight Acquire calls will receive a context cancellation error if the caller's ctx is tied to the application lifetime.

Two signals are sent deliberately:

  • p.cancel() cancels p.ctx, which unblocks any in-flight addWorker goroutines that are blocking on factory.Spawn (they use a context.WithTimeout derived from p.ctx).
  • close(p.done) signals the healthCheckLoop and runTTLSweep goroutines to exit their ticker loops cleanly.

func (*Pool[C]) Stats

func (p *Pool[C]) Stats() PoolStats

Stats returns a point-in-time snapshot of pool state. Safe to call concurrently.

On Linux this blocks for ~100 ms to measure CPU idle via /proc/stat. Cache the result if you call Stats() in a hot path.

type PoolStats

type PoolStats struct {
	// TotalWorkers is the number of workers currently registered in the pool
	// (starting + healthy + busy). Does not count workers being scaled down.
	TotalWorkers int

	// AvailableWorkers is the number of idle workers ready to accept a new session.
	AvailableWorkers int

	// ActiveSessions is the number of sessionID → worker bindings currently live.
	ActiveSessions int

	// InflightAcquires is the number of Acquire calls currently in the "slow path"
	// (waiting for a worker to become available). Useful for queue-depth alerting.
	InflightAcquires int

	// Node is a snapshot of host-level resource availability (memory, CPU idle).
	// Populated by observer.PollNodeStats(); zero-valued on non-Linux platforms
	// or if the poll fails.
	//
	// Note: on Linux, Stats() blocks for ~100 ms to measure CPU idle.
	Node observer.NodeStats
}

PoolStats is a point-in-time snapshot of pool state for dashboards / alerts.

type ProcessFactory

type ProcessFactory struct {
	// contains filtered or unexported fields
}

ProcessFactory is the default WorkerFactory[*http.Client]. It spawns `binary` as a subprocess, allocates a free OS port, and polls GET <address>/health until the worker reports healthy.

Use NewProcessFactory to create one; pass it directly to New[C]:

pool, err := herd.New(herd.NewProcessFactory("./my-binary", "--port", "{{.Port}}"))

func NewProcessFactory

func NewProcessFactory(binary string, args ...string) *ProcessFactory

NewProcessFactory returns a ProcessFactory that spawns the given binary.

Any arg containing the literal string "{{.Port}}" is replaced with the OS-assigned port number at spawn time. The port is also injected via the PORT environment variable for binaries that prefer env-based config.

factory := herd.NewProcessFactory("./ollama", "serve", "--port", "{{.Port}}")

func (*ProcessFactory) Spawn

func (f *ProcessFactory) Spawn(ctx context.Context) (Worker[*http.Client], error)

Spawn implements WorkerFactory[*http.Client]. It allocates a free port, starts the binary, and blocks until the worker passes a /health check or ctx is cancelled.

func (*ProcessFactory) WithCPULimit added in v0.3.0

func (f *ProcessFactory) WithCPULimit(cores float64) *ProcessFactory

WithCPULimit sets the cgroup CPU quota in cores for each spawned worker. For example, 0.5 means half a CPU and 2 means two CPUs. A value of 0 disables the limit.

func (*ProcessFactory) WithEnv

func (f *ProcessFactory) WithEnv(kv string) *ProcessFactory

WithEnv appends an extra KEY=VALUE environment variable that is injected into every worker spawned by this factory. The literal string "{{.Port}}" is replaced with the worker's allocated port number, which is useful for binaries that accept the listen address via an env var rather than a flag.

factory := herd.NewProcessFactory("ollama", "serve").
	WithEnv("OLLAMA_HOST=127.0.0.1:{{.Port}}").
	WithEnv("OLLAMA_MODELS=/tmp/shared-ollama-models")

func (*ProcessFactory) WithHealthPath

func (f *ProcessFactory) WithHealthPath(path string) *ProcessFactory

WithHealthPath sets the HTTP path that herd polls to decide whether a worker is ready. The path must return HTTP 200 when the process is healthy.

Default: "/health"

Use this for binaries that expose liveness on a non-standard path:

factory := herd.NewProcessFactory("ollama", "serve").
	WithHealthPath("/")   // ollama serves GET / → 200 "Ollama is running"

func (*ProcessFactory) WithInsecureSandbox added in v0.3.0

func (f *ProcessFactory) WithInsecureSandbox() *ProcessFactory

WithInsecureSandbox disables the namespace/cgroup sandbox. Use only for local debugging on non-Linux systems or when you explicitly trust the spawned processes.

func (*ProcessFactory) WithMemoryLimit added in v0.3.0

func (f *ProcessFactory) WithMemoryLimit(bytes int64) *ProcessFactory

WithMemoryLimit sets the cgroup memory limit, in bytes, for each spawned worker. A value of 0 disables the memory limit.

func (*ProcessFactory) WithPIDsLimit added in v0.3.0

func (f *ProcessFactory) WithPIDsLimit(n int64) *ProcessFactory

WithPIDsLimit sets the cgroup PID limit for each spawned worker. Pass -1 for unlimited. Values of 0 or less than -1 are invalid.

func (*ProcessFactory) WithStartHealthCheckDelay

func (f *ProcessFactory) WithStartHealthCheckDelay(d time.Duration) *ProcessFactory

WithStartHealthCheckDelay delays the first health check after spawn, giving the process time to start and settle before herd begins polling it.

func (*ProcessFactory) WithStartTimeout

func (f *ProcessFactory) WithStartTimeout(d time.Duration) *ProcessFactory

WithStartTimeout sets the maximum duration herd will poll the worker's health endpoint after spawning the process before giving up and killing it.

Default: 30 seconds

type Session

type Session[C any] struct {
	// ID is the sessionID that was passed to Pool.Acquire.
	ID string

	// Worker is the underlying worker pinned to this session.
	// Use Worker.Client() to talk to the subprocess.
	Worker Worker[C]
	// contains filtered or unexported fields
}

Session is a scoped handle returned by Pool.Acquire.

It binds one sessionID to one worker for the duration of the session. Call Release when the session is done — this frees the worker so it can be assigned to the next sessionID. Failing to call Release leaks a worker.

A Session is NOT safe for concurrent use by multiple goroutines. Multiple HTTP requests for the same sessionID should each call Acquire independently; the pool guarantees they always receive the same underlying worker.

func (*Session[C]) ConnRelease

func (s *Session[C]) ConnRelease()

func (*Session[C]) Release

func (s *Session[C]) Release()

Release removes the session from the affinity map and returns the worker to the available pool. After Release, the worker may be assigned to a different sessionID. Calling Release more than once is a no-op.
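
One standard way to get that exactly-once behaviour is sync.Once; a sketch of the idea, not necessarily herd's implementation:

```go
package main

import (
	"fmt"
	"sync"
)

type session struct {
	id        string
	release   sync.Once
	available chan string // shared with the pool
	worker    string
}

// Release returns the worker to the pool exactly once;
// later calls do nothing.
func (s *session) Release() {
	s.release.Do(func() {
		// In herd this would also delete the sessions[s.id] binding.
		s.available <- s.worker
	})
}

func main() {
	avail := make(chan string, 8)
	s := &session{id: "s1", available: avail, worker: "worker-1"}
	s.Release()
	s.Release() // no-op: the worker is not pushed twice
	fmt.Println("workers returned:", len(avail)) // → workers returned: 1
}
```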

type SessionRegistry added in v0.3.1

type SessionRegistry[C any] interface {
	// Get returns the worker pinned to sessionID.
	// Returns (nil, nil) if no session exists for this ID.
	Get(ctx context.Context, sessionID string) (Worker[C], error)

	// Put pins a worker to a sessionID.
	Put(ctx context.Context, sessionID string, w Worker[C]) error

	// Delete removes the pinning for sessionID.
	Delete(ctx context.Context, sessionID string) error

	// List returns a snapshot of all currently active sessions.
	// Primarily used for background health checks and cleanup.
	List(ctx context.Context) (map[string]Worker[C], error)

	// Len returns the number of active sessions.
	Len() int
}

SessionRegistry tracks which workers are pinned to which session IDs. In a distributed setup (Enterprise), this registry is shared across multiple nodes.

type Worker

type Worker[C any] interface {
	// ID returns a stable, unique identifier for this worker (e.g. "worker-3").
	// Never reused — not even after a crash and restart.
	ID() string

	// Address returns the internal network URI the worker
	// is listening on (e.g., '127.0.0.1:54321').
	Address() string

	// Client returns the typed connection to the worker process.
	// For most users this is *http.Client; gRPC users return their stub here.
	Client() C

	// Healthy performs a liveness check against the subprocess.
	// Returns nil if the worker is accepting requests; non-nil otherwise.
	// Pool.Acquire calls this before handing a worker to a new session,
	// so a stale or crashed worker is never returned to a caller.
	Healthy(ctx context.Context) error

	// Close performs graceful shutdown of the worker process.
	// Called by the pool during scale-down or Pool.Shutdown.
	io.Closer
}

Worker represents one running subprocess managed by the pool.

C is the typed client the caller uses to talk to the subprocess — for example *http.Client, a gRPC connection, or a custom struct. The type parameter is constrained to "any" so the pool is fully generic.

type WorkerFactory

type WorkerFactory[C any] interface {
	// Spawn starts one new worker and blocks until it is healthy.
	// If ctx is cancelled before the worker becomes healthy, Spawn must
	// kill the process and return a non-nil error.
	Spawn(ctx context.Context) (Worker[C], error)
}

WorkerFactory knows how to spawn one worker process and return a typed Worker[C] that is ready to accept requests (i.e. Healthy returns nil).

Most users never implement this interface — they use NewProcessFactory instead. Implement WorkerFactory only if you need custom spawn logic (e.g. Firecracker microVM, Docker container, remote SSH process).

Directories

Path Synopsis
Package observer provides lightweight, OS-level resource sampling.
Package proxy provides NewReverseProxy — the one-liner that turns a session-affine process pool into an HTTP gateway.
