Package cli

Documentation

Overview

Package cli holds embeddable Kong commands for the bones binary.

Each command type is a Kong-tagged struct with a Run method. The command tree is assembled in cmd/bones/cli.go alongside libfossil/cli and EdgeSync/cli.

Index

Constants

View Source
const HotSlotThreshold = 4

HotSlotThreshold is the open-task count above which a slot is considered "hot" — packed past the point of plausibly-deliberate authoring. The substrate runs exactly one `coord.Leaf` per slot at a time (ADR 0023 architectural invariant 5; ADR 0028 architectural invariant 1: "Per-slot leaf process — each active slot owns exactly one `coord.Leaf`"), so N tasks in one slot run serially regardless of file-disjointness or task-graph independence. Issue #214's heuristic: more than 4 open tasks in a single slot is almost certainly a packing mistake, because parallelism in bones comes from N distinct slots, not from depth within one slot. Kept as a single named constant (not a flag, not workspace-configurable) per the issue's "avoid per-workspace tuning becoming a foot-gun" guidance.
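
The heuristic surfaces through `bones tasks list --by-slot` (see TasksListCmd), e.g.:

	bones tasks list --by-slot   # flags slots with more than HotSlotThreshold open tasks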

View Source
const SessionStartSentinelFile = ".bones/last-session-prime"

SessionStartSentinelFile is the workspace-relative path of the SessionStart hook sentinel. `bones tasks prime` (wired as a SessionStart hook by `bones up`) refreshes the file on entry, and `bones doctor` reads it to detect "hooks configured but never fire" failure modes (#172).
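
A quick manual check of the sentinel's freshness (what `bones doctor` automates):

	ls -l .bones/last-session-prime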

Variables

View Source
var ErrHubRepoNotBootstrapped = errors.New("hub repo not bootstrapped")

ErrHubRepoNotBootstrapped is returned by hubRepoPath when the workspace has no hub.fossil yet. Callers can switch on this to emit role-appropriate guidance: orchestrator commands tell the user to run `bones up`; leaf commands (`swarm join`) refuse to bootstrap and tell the agent the orchestrator must do it.

Functions

func FixForFossilDrift added in v0.6.0

func FixForFossilDrift() string

FixForFossilDrift returns the fix command for fossil/git HEAD divergence. Per ADR 0037, `bones apply` materializes the fossil trunk tip into the git working tree; this is the correct path when fossil tip != git HEAD.

func FixForHubDown added in v0.6.0

func FixForHubDown() string

FixForHubDown returns an advisory hint for a workspace whose hub is not responding.

func FixForMissingFossil added in v0.6.0

func FixForMissingFossil() string

FixForMissingFossil returns the fix command when fossil is absent from PATH.

func FixForMissingHook added in v0.6.0

func FixForMissingHook() string

FixForMissingHook returns the fix command for a missing pre-commit hook.

func FixForRemoteSlot added in v0.6.0

func FixForRemoteSlot(host string) string

FixForRemoteSlot returns an advisory hint for a remote-owned slot.

func FixForScaffoldDrift added in v0.6.0

func FixForScaffoldDrift() string

FixForScaffoldDrift returns the fix command for scaffold version drift.

func FixForStaleSlot added in v0.6.0

func FixForStaleSlot(slot string) string

FixForStaleSlot returns the fix command for releasing a stale claim.

Types

type ApplyCmd added in v0.4.0

type ApplyCmd struct {
	DryRun bool   `name:"dry-run" help:"show planned changes without writing or staging"`
	Slot   string `name:"slot" help:"materialize this slot's branch (synthetic) or recovery dir (with --to)"` //nolint:lll
	To     string `name:"to" help:"target directory for recovery-mode --slot; created if missing"`
	Staged bool   `name:"staged" help:"in synthetic slot mode, run git add on materialized files"`
}

ApplyCmd materializes fossil-resident work into a target directory. Three modes share the same verb (issue #234, ADR 0037, ADR 0050):

  1. Default (trunk fan-in): with no flags, reads the hub fossil's trunk tip and stages the changes into the project-root git working tree for the user to review and commit. See docs/superpowers/specs/2026-04-30-bones-apply-design.md.
  2. Synthetic slot mode (--slot=agent-<id>): per ADR 0050, reads the agent's fossil branch (`agent/<full-id>`) tip and materializes its tree into the project-root git working tree. Same dirty-tree refusal, same telemetry path. Default writes files unstaged; --staged adds `git add` for each materialized path so the operator can `git diff --staged` the result just as in trunk mode.
  3. Recovery slot mode (--slot=<name> --to=<dir>): copies the slot's most recent committed-artifacts tree from `.bones/recovery/<slot>-*` into <dir>. Predates ADR 0050 and remains for plan-driven slots that write recovery artifacts.

bones apply never runs `git commit`. In trunk mode it stages with `git add -A` within fossil's tracked-paths set; the user owns the commit message and the commit author identity. In synthetic slot mode the default leaves files unstaged so `git status` shows the branch's worth of work for the operator to review before staging.
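
The three modes as invocations (slot names and the target directory are placeholders):

	bones apply --dry-run                      # trunk fan-in, preview only
	bones apply --slot=agent-abc123 --staged   # synthetic slot, staged for review
	bones apply --slot=rendering --to=out/     # recovery mode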

func (*ApplyCmd) Run added in v0.4.0

func (c *ApplyCmd) Run(g *repocli.Globals) (err error)

type CleanupCmd added in v0.11.0

type CleanupCmd struct {
	Slot         string `name:"slot" help:"reap a single slot (drops session record + removes wt)"`    //nolint:lll
	Worktree     string `name:"worktree" help:"remove a single legacy .claude/worktrees/agent-*/ dir"` //nolint:lll
	AllWorktrees bool   `name:"all-worktrees" help:"remove the entire .claude/worktrees/ tree"`        //nolint:lll
}

CleanupCmd is the prompt-cleanup verb that operators (or harness `SubagentStop` recipes) invoke to reap a single slot or a legacy `.claude/worktrees/agent-*/` dir without waiting for the hub's lease-TTL watcher to converge.

Three mutually-exclusive modes:

  • --slot=<name> drop the slot's session record and remove `.bones/swarm/<slot>/wt/`. Idempotent — a missing slot is a no-op (exit 0) so re-running after a SubagentStop hook already fired converges silently.
  • --worktree=<path> remove a single legacy `.claude/worktrees/agent-*/` dir. Force-unlocks the git worktree first so a crashed-agent lock file does not block removal.
  • --all-worktrees remove the entire `.claude/worktrees/` tree. Useful for the migration from pre-ADR-0050 isolation; the loud `bones up` refusal points operators here.

Bones ships only the verb. The recipe for wiring this into Claude Code's `SubagentStop` hook is documented at `docs/recipes/claude-code-harness.md`; bones does NOT scaffold the hook because it does not own `.claude/settings.json` wholesale (#256, #260, ADR 0050).
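
Typical invocations (slot and path values are placeholders):

	bones cleanup --slot=rendering
	bones cleanup --worktree=.claude/worktrees/agent-abc123
	bones cleanup --all-worktrees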

func (*CleanupCmd) Run added in v0.11.0

func (c *CleanupCmd) Run(g *repocli.Globals) error

Run dispatches on the chosen mode. Mutual exclusion is enforced up front so a typo'd combination fails before any side effect.

type DoctorCmd added in v0.1.1

type DoctorCmd struct {
	edgecli.DoctorCmd
	All    bool `name:"all" help:"check all registered workspaces on this user/host"`
	Quiet  bool `name:"quiet" short:"q" help:"only show workspaces with issues (with --all)"`
	ShowOK bool `name:"show-ok" help:"include OK workspaces in --all output (verbose mode)"`
	JSON   bool `name:"json" help:"emit machine-readable JSON"`
	// NoFix flips bones doctor into report-only mode for the ADR
	// 0051 hook-protocol auto-rewrite. Default behavior auto-heals
	// stale .claude/settings.json hook entries; --no-fix surfaces
	// drift as WARN lines without rewriting the file.
	NoFix bool `name:"no-fix" help:"report-only: do not auto-rewrite stale hook entries"`
	// Reset opts in to rewriting drifted bones-owned hook entries
	// back to their canonical form (issue #318). Default is
	// report-only because drift here means "operator hand-edited
	// the entry"; rewriting without consent destroys their change.
	// Different posture from --no-fix's auto-rewrite of stale v0.12
	// command forms (where drift = stale shape bones itself created).
	Reset bool `name:"reset" help:"rewrite drifted hook entries to canonical (overwrites edits)"`
}

DoctorCmd extends EdgeSync's doctor with bones-specific checks. The embedded EdgeSync DoctorCmd runs the base health gate (Go runtime, fossil, NATS reachability, hooks); then this wrapper adds the swarm-session inventory described in ADR 0028 §"Process lifecycle and crash recovery" so stuck or cross-host slots surface here.

Embedded — not aliased — so EdgeSync's flags (--nats-url) still participate in Kong parsing.

func (*DoctorCmd) Run added in v0.1.1

func (c *DoctorCmd) Run(g *repocli.Globals) (err error)

Run invokes the EdgeSync doctor first; on completion (regardless of pass/warn/fail) it appends a "swarm sessions" section that iterates bones-swarm-sessions and reports each entry's state.
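
Common invocations:

	bones doctor --all --quiet   # every registered workspace, problems only
	bones doctor --no-fix        # report hook drift without rewriting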

type DownCmd added in v0.3.0

type DownCmd struct {
	Yes        bool `name:"yes" short:"y" help:"skip the confirmation prompt"`
	KeepSkills bool `name:"keep-skills" help:"do not remove .claude/skills"`
	KeepHooks  bool `name:"keep-hooks" help:"do not edit .claude/settings.json"`
	KeepHub    bool `name:"keep-hub" help:"do not stop hub or remove .orchestrator/"`
	DryRun     bool `name:"dry-run" help:"print plan without executing"`
	All        bool `name:"all" help:"tear down all registered workspaces"`
}

DownCmd reverses bones up: stops the hub via hub.Stop, removes the workspace marker (.bones/), any leftover legacy state (.orchestrator/ from pre-ADR-0041 workspaces), the scaffolded skills (.claude/skills/{orchestrator,subagent,uninstall-bones}), and the bones-installed SessionStart/Stop hooks from .claude/settings.json. Other hooks in settings.json are left untouched.

Per issue #252, AGENTS.md and CLAUDE.md at the workspace root are no longer scaffolded by `bones up`, so `bones down` does not touch them. Workspaces installed by older bones may still carry bones-managed content there; the operator removes it manually if needed.

Destructive — requires --yes or an interactive y/N confirmation. Idempotent: re-running on a clean tree is a no-op.
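
For example:

	bones down --dry-run          # print the plan only
	bones down --yes --keep-hub   # non-interactive teardown, leave the hub running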

func (*DownCmd) Run added in v0.3.0

func (c *DownCmd) Run(g *repocli.Globals) error

Run is the Kong entry point. Resolves the workspace root (walking up from cwd if a marker exists, falling back to cwd otherwise), builds an execution plan, prompts unless --yes, and executes.

type EnvCmd added in v0.6.0

type EnvCmd struct {
	Shell string `name:"shell" help:"shell: bash|zsh|fish (default: auto-detect from $SHELL)"`
}

func (*EnvCmd) Run added in v0.6.0

func (c *EnvCmd) Run() error

type HubCmd added in v0.1.1

type HubCmd struct {
	Start HubStartCmd `cmd:"" help:"Start the embedded Fossil hub + NATS server"`
	Stop  HubStopCmd  `cmd:"" help:"Stop the embedded Fossil hub + NATS server"`
	User  HubUserCmd  `cmd:"" help:"Manage fossil users in the hub repo"`
	Reap  HubReapCmd  `cmd:"" help:"Terminate orphan hub processes (ADR 0043)"`
}

HubCmd is the umbrella command for the embedded Fossil + NATS hub.

Subcommands:

hub start [--detach]    bring the hub up
hub stop                tear it down
hub reap                terminate orphan hub processes (ADR 0043)
hub user add <login>    pre-create a fossil user in the hub repo
hub user list           list fossil users in the hub repo

Per ADR 0041 these subcommands are the only entry points to hub lifecycle: the legacy bash bootstrap shims under .orchestrator/scripts/ are no longer scaffolded.

type HubReapCmd added in v0.7.0

type HubReapCmd struct {
	Yes    bool `name:"yes" short:"y" help:"skip the per-orphan confirmation prompt"`
	DryRun bool `name:"dry-run" help:"print orphans without acting"`
}

HubReapCmd lists orphan hub processes and offers to terminate them. Two orphan kinds are surfaced:

  1. Registry-source: a registered entry whose PID is alive but whose workspace is gone (cwd missing, marker missing, or trashed).
  2. Process-source: a live `bones hub start` process with no matching registry entry — typically a leak from a test runner that spawned a workspace and exited without `bones down`. Pre-#NNN these were visible in `bones status --all` but invisible to `bones hub reap`, forcing operators to `kill -9` directly.

Per orphan: SIGTERM then SIGKILL after a short grace. Registry-source orphans also have their entry removed on success. See ADR 0043.

func (*HubReapCmd) Run added in v0.7.0

func (c *HubReapCmd) Run(g *repocli.Globals) error

type HubStartCmd added in v0.1.1

type HubStartCmd struct {
	Detach bool `name:"detach" help:"return immediately after the hub is reachable"`
	// 0 = let the hub allocate per-workspace (default). The hub records
	// the resolved URL at .bones/hub-{fossil,nats}-url so a
	// second workspace can run concurrently on its own free ports.
	// Pass an explicit non-zero port to pin.
	RepoPort  int `name:"repo-port" default:"0" help:"repo HTTP port (0 = per-ws)"`
	CoordPort int `name:"coord-port" default:"0" help:"coord client port (0 = per-ws)"`
	// DrainTimeout bounds NATS/Fossil drain on shutdown before
	// runForeground returns errDrainTimeout (non-zero exit). See #158.
	DrainTimeout time.Duration `name:"drain-timeout" default:"30s" help:"max drain wait"` //nolint:lll
	// LogLevel controls hub.log entry verbosity (#322). Standard four
	// severities: debug, info, warn, error. Default INFO (read-only
	// RPCs demoted to DEBUG; mutating + errors at INFO). Honored as
	// "flag wins, then BONES_HUB_LOG_LEVEL env var, then INFO".
	LogLevel string `name:"log-level" help:"hub.log min level: debug|info|warn|error (env BONES_HUB_LOG_LEVEL)"` //nolint:lll
}

HubStartCmd wires `bones hub start` flags to hub.Start.

--detach (default false) is what shell hooks want: spawn a background hub and return immediately once both servers are reachable. Without --detach, the command runs the hub in the foreground and shuts both servers down on SIGINT/SIGTERM. Foreground mode is the easiest way to see hub logs interactively.
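
Illustrative invocations (the pinned ports are example values, not defaults):

	bones hub start --log-level=debug                             # foreground, verbose
	bones hub start --detach --repo-port=8765 --coord-port=4222   # background, pinned ports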

func (*HubStartCmd) Run added in v0.1.1

func (c *HubStartCmd) Run(g *repocli.Globals) error

type HubStopCmd added in v0.1.1

type HubStopCmd struct {
	Force bool `name:"force" help:"stop even when swarm slots are active (#157)"`
}

HubStopCmd wires `bones hub stop` to hub.Stop. Idempotent.

--force overrides the active-slot safety check (#157). Without it, stop refuses when any .bones/swarm/<slot>/leaf.pid points at a live process, since restarting the hub on a different port would silently break those leaves' cached NATS URLs.

func (*HubStopCmd) Run added in v0.1.1

func (c *HubStopCmd) Run(g *repocli.Globals) error

type HubUserAddCmd added in v0.1.1

type HubUserAddCmd struct {
	Login string `arg:"" help:"login (e.g. slot-rendering)"`
	Caps  string `name:"caps" default:"oih" help:"fossil caps (default: clone+checkin+history)"`
}

HubUserAddCmd creates a user in the hub repo. Idempotent: if the user already exists, exits 0 without changing anything.

func (*HubUserAddCmd) Run added in v0.1.1

func (c *HubUserAddCmd) Run(g *repocli.Globals) error

type HubUserCmd added in v0.1.1

type HubUserCmd struct {
	Add  HubUserAddCmd  `cmd:"" help:"Add (or noop-if-exists) a user to the hub repo"`
	List HubUserListCmd `cmd:"" help:"List users in the hub repo"`
}

HubUserCmd groups subcommands that manage the fossil user table on the hub repo (.bones/hub.fossil). The table is consulted by every `fossil commit --user X` against that repo, so swarm slots that commit under their own identity (slot-rendering, slot-physics, etc.) must exist here first or fossil rejects the commit with "no such user."

This primitive is the manual escape hatch; future swarm-join tooling will call into the same internal helpers automatically.

type HubUserListCmd added in v0.1.1

type HubUserListCmd struct{}

HubUserListCmd prints the hub repo's user table.

func (*HubUserListCmd) Run added in v0.1.1

func (c *HubUserListCmd) Run(g *repocli.Globals) error

type InitCmd

type InitCmd struct{}

InitCmd creates a new bones workspace in the current directory.

func (*InitCmd) Run

func (c *InitCmd) Run(g *repocli.Globals) error

type JoinCmd

type JoinCmd struct{}

JoinCmd locates and verifies an existing workspace from cwd.

func (*JoinCmd) Run

func (c *JoinCmd) Run(g *repocli.Globals) error

type LogsCmd added in v0.6.0

type LogsCmd struct {
	Slot      string `name:"slot" help:"slot name to read log from"`
	Workspace bool   `name:"workspace" help:"read workspace-level log instead of a slot log"`
	Hub       bool   `name:"hub" help:"read .bones/hub.log (operator RPC log per #322)"`
	Tail      bool   `name:"tail" short:"f" help:"follow: poll for new events after EOF"`
	Since     string `name:"since" help:"filter events after duration (e.g. 5m) or RFC3339 time"`
	Last      int    `name:"last" help:"keep only the last N events"`
	JSON      bool   `name:"json" help:"emit raw NDJSON line unchanged"`
	FullTime  bool   `name:"full-time" help:"render full RFC3339 timestamp instead of HH:MM:SS"`
}

LogsCmd implements `bones logs`. Reads per-slot, workspace-level, or hub NDJSON event logs with formatting, follow mode, and filters.

Exactly one of --slot, --workspace, or --hub must be specified.
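
For example (slot name is a placeholder):

	bones logs --slot=rendering --tail    # follow one slot's event log
	bones logs --hub --since=5m --json    # recent hub entries as raw NDJSON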

func (*LogsCmd) Run added in v0.6.0

func (c *LogsCmd) Run(g *repocli.Globals) error

Run is the Kong entry point for `bones logs`.

type PlanCmd added in v0.7.0

type PlanCmd struct {
	Finalize PlanFinalizeCmd `cmd:"" help:"Materialize hub artifacts to the host tree (ADR 0044)"`
}

PlanCmd is the umbrella command for plan-workflow operations beyond validation. Today's surface: `bones plan finalize`. Validate stays at the top level (`bones validate-plan`) for now — it predates this group and changing the verb shape is a separate concern.

type PlanFinalizeCmd added in v0.7.0

type PlanFinalizeCmd struct {
	Plan  string `name:"plan" help:"plan path (default: read .bones/swarm/dispatch.json)"`
	Force bool   `name:"force" help:"overwrite host-tree files that differ from hub trunk"`
	Stage bool   `name:"stage" help:"git add the materialized files after writing"`
}

PlanFinalizeCmd materializes files committed by each slot to hub trunk back into the host tree, closing the loop on the swarm workflow. The plan's `[slot: name]` annotations feed the dispatch manifest, and the files listed in the manifest's per-slot `Files` are the materialization set. See ADR 0044.

func (*PlanFinalizeCmd) Run added in v0.7.0

func (c *PlanFinalizeCmd) Run(g *repocli.Globals) error

type SessionMarkerCmd added in v0.6.0

type SessionMarkerCmd struct {
	Register   SessionMarkerRegisterCmd   `cmd:"" name:"register"`
	Unregister SessionMarkerUnregisterCmd `cmd:"" name:"unregister"`
}

SessionMarkerCmd is hidden from --help; it exists only for bones-managed SessionStart/End hooks to call. The marker schema lives in the sessions package; this verb is the only call site.

type SessionMarkerRegisterCmd added in v0.6.0

type SessionMarkerRegisterCmd struct {
	SessionID string `name:"session-id" required:""`
	Cwd       string `name:"cwd" required:"" help:"absolute workspace cwd"`
	PID       int    `name:"pid" required:"" help:"claude (or harness) process PID"`
}

func (*SessionMarkerRegisterCmd) Run added in v0.6.0

type SessionMarkerUnregisterCmd added in v0.6.0

type SessionMarkerUnregisterCmd struct {
	SessionID string `name:"session-id" required:""`
}

func (*SessionMarkerUnregisterCmd) Run added in v0.6.0

type StatusCmd added in v0.6.0

type StatusCmd struct {
	All  bool `name:"all" help:"show status across all workspaces on this user/host"`
	JSON bool `name:"json" help:"emit machine-readable JSON"`
}

StatusCmd renders a one-shot snapshot of the workspace combining NATS task/session state with the hub fossil timeline.

func (*StatusCmd) Run added in v0.6.0

func (c *StatusCmd) Run(g *repocli.Globals) error

type SwarmCloseCmd added in v0.1.1

type SwarmCloseCmd struct {
	Slot    string `name:"slot" help:"slot (defaults to single active slot on host)"`
	Result  string `name:"result" default:"success" help:"success|fail|fork"`
	Summary string `name:"summary" default:"swarm close" help:"final summary"`
	Branch  string `name:"branch" help:"only with --result=fork: branch name"`
	Rev     string `name:"rev" help:"only with --result=fork: rev"`
	HubURL  string `name:"hub-url" help:"override hub fossil HTTP URL"`
	// SubstrateError / SubstrateFault expose the dispatch.ResultMessage
	// substrate fields added in #159 so a wrapper or orchestrator can
	// signal explicitly that bones (not the agent) hit a failure on
	// the close path. The flags are separate from --summary because
	// --summary is the agent's intent and must reach darken verbatim;
	// these markers are bones-side observations.
	SubstrateError string `name:"substrate-error" help:"free-text substrate failure (#159)"`
	SubstrateFault string `name:"substrate-fault" help:"substrate fault category (#159)"`
	NoArtifact     bool   `name:"no-artifact" help:"acknowledge an intentional empty close"`
	KeepWT         bool   `name:"keep-wt" help:"retain wt dir on success (default: remove)"`
}

SwarmCloseCmd ends a swarm session: posts a dispatch.ResultMessage to the task thread, then has the lease close the task in KV (on result=success), release the claim hold, stop the leaf, remove the host-local pid file, and CAS-delete the session record.

Idempotent against partial-cleanup states: a missing session record (already closed) is not an error so re-running close after a crash converges.

func (*SwarmCloseCmd) Run added in v0.1.1

func (c *SwarmCloseCmd) Run(g *repocli.Globals) error

type SwarmCmd added in v0.1.1

type SwarmCmd struct {
	Join     SwarmJoinCmd     `cmd:"" help:"Open a leaf, claim a task, prepare a worktree"`
	Commit   SwarmCommitCmd   `cmd:"" help:"Commit changes (heartbeats the session)"`
	Close    SwarmCloseCmd    `cmd:"" help:"Release claim, post result, stop the leaf"`
	Status   SwarmStatusCmd   `cmd:"" help:"List active swarm sessions"`
	Cwd      SwarmCwdCmd      `cmd:"" help:"Print the slot's worktree path"`
	Tasks    SwarmTasksCmd    `cmd:"" help:"List ready tasks matching slot"`
	FanIn    SwarmFanInCmd    `cmd:"" name:"fan-in" help:"Merge open hub leaves into trunk"`
	Reap     SwarmReapCmd     `cmd:"" help:"Force-close stale same-host sessions as result=fail"`
	Dispatch SwarmDispatchCmd `cmd:"" help:"Dispatch a plan: create tasks + write manifest"`
}

SwarmCmd groups the agent-shaped swarm verbs introduced in ADR 0028: per-slot leaf lifecycle wrapped around coord.Leaf so subagent prompts shrink from ~12 substrate calls to ~5 swarm verbs.

All verbs are workspace-local (joinWorkspace from cwd). Agent prompts run them as plain shell commands; the orchestrator skill (R6 follow-up) renders them inline.
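
A typical per-slot lifecycle as an agent prompt would run it (slot and task ids are placeholders):

	bones swarm join --slot=rendering --task-id=tsk_123
	cd "$(bones swarm cwd --slot=rendering)"
	# ...edit files in the worktree...
	bones swarm commit -m "implement renderer"
	bones swarm close --slot=rendering --result=success --summary="renderer done"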

type SwarmCommitCmd added in v0.1.1

type SwarmCommitCmd struct {
	Slot       string   `name:"slot" help:"slot name (defaults to single active slot on this host)"`
	Message    string   `name:"message" short:"m" required:"" help:"commit message"`
	Files      []string `arg:"" optional:"" help:"files to commit (default: all modified)"`
	HubURL     string   `name:"hub-url" help:"override hub fossil HTTP URL"`
	NoAutosync bool     `name:"no-autosync" help:"branch-per-slot mode (skip pre-commit hub pull)"`
}

SwarmCommitCmd commits in-flight changes from the slot's worktree to the slot's leaf, triggers a sync round to the hub, and bumps the swarm session record's TTL via CAS.

Files default to "all modified files in wt/" if no positional arguments are passed.

All the assembled scaffolding (resume the lease, claim, announce holds, commit via leaf, push to hub, renew the session) lives in internal/swarm.ResumedLease. This verb is just flag parsing + file gathering + ResumedLease.Commit + result printing.

func (*SwarmCommitCmd) Run added in v0.1.1

func (c *SwarmCommitCmd) Run(g *repocli.Globals) error

type SwarmCwdCmd added in v0.1.1

type SwarmCwdCmd struct {
	Slot string `name:"slot" required:"" help:"slot name"`
}

SwarmCwdCmd prints the slot's worktree path on stdout. Designed for shell substitution:

cd "$(bones swarm cwd --slot=rendering)"

Pure path derivation — no NATS round-trip, no KV lookup. Only requires a workspace context to compute the absolute path. Callers who want the wt path AND a liveness check should use `swarm status`.

func (*SwarmCwdCmd) Run added in v0.1.1

func (c *SwarmCwdCmd) Run(g *repocli.Globals) error

type SwarmDispatchCmd added in v0.6.0

type SwarmDispatchCmd struct {
	PlanPath string `arg:"" optional:"" name:"plan" help:"path to plan markdown"`
	Advance  bool   `name:"advance" help:"check current wave; promote if all tasks Closed"`
	Cancel   bool   `name:"cancel" help:"abandon in-flight dispatch; closes tasks as canceled"`
	Wave     int    `name:"wave" help:"explicit wave number (rare; for testing)"`
	JSON     bool   `name:"json" help:"emit manifest path + summary as JSON"`
	DryRun   bool   `name:"dry-run" help:"validate; don't touch NATS or filesystem"`
	Quiet    bool   `name:"quiet" help:"suppress success output"`
}

SwarmDispatchCmd implements `bones swarm dispatch`.

Without a flag, it reads a plan file, creates one task per slot via the bones-tasks KV, writes a dispatch manifest, and prints the manifest path with a next-step instruction for the orchestrator skill.

--advance checks whether all tasks in the current wave are Closed; if so it promotes CurrentWave to the next wave number and rewrites the manifest.

--cancel removes the manifest and closes any referenced tasks with ClosedReason="dispatch-canceled".

--dry-run validates the plan (or checks advance/cancel preconditions) without touching NATS or the filesystem.
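
A sketch of the orchestrator loop (plan path is a placeholder):

	bones swarm dispatch plans/phase1.md --dry-run   # validate first
	bones swarm dispatch plans/phase1.md             # create tasks + manifest
	bones swarm dispatch --advance                   # later: promote the next wave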

func (*SwarmDispatchCmd) Run added in v0.6.0

Run dispatches to the appropriate subcommand based on which flag is set.

type SwarmFanInCmd added in v0.1.1

type SwarmFanInCmd struct {
	User    string `name:"user" default:"orchestrator" help:"fossil user attributed to merge"`
	Message string `name:"message" short:"m" default:"" help:"merge commit message"`
	DryRun  bool   `name:"dry-run" help:"show what would be merged without committing"`
}

SwarmFanInCmd merges every open leaf on the hub repo's trunk into a single integration tip. After Phase 1 swarm slots close, the hub typically has N leaves (one per slot lineage) — fan-in collapses them so a downstream `fossil update` materializes a single merged working tree.

Implementation shells out to the system `fossil` binary because libfossil's Repo.Merge requires distinct src/dst branch names, but concurrent slot work produces sibling leaves on the SAME branch (typically trunk). Same-branch leaf merging needs the multi-step orchestration fossil's CLI does (open temp checkout → fossil merge <UUID> → fossil commit). Acceptable: fan-in is an admin-scale operation invoked once per swarm, not a hot path; the same `fossil` dependency is already used by `bones peek`.

On systems without `fossil` on PATH, `bones swarm fan-in` prints an install hint and exits non-zero so a wrapping orchestrator can fall back to manual instructions.

func (*SwarmFanInCmd) Run added in v0.1.1

func (c *SwarmFanInCmd) Run(g *repocli.Globals) error

type SwarmJoinCmd added in v0.1.1

type SwarmJoinCmd struct {
	Auto          bool   `name:"auto" help:"synthetic agent slot derived from .bones/agent.id (ADR 0050)"` //nolint:lll
	Slot          string `name:"slot" help:"slot name (matches plan [slot: X]); omit with --auto"`
	TaskID        string `name:"task-id" help:"open task id to claim; omit with --auto"`
	Caps          string `name:"caps" default:"oih" help:"fossil caps for the slot user"`
	ForceTakeover bool   `name:"force" help:"clobber an existing slot session (recovery only)"`
	HubURL        string `name:"hub-url" help:"override hub fossil HTTP URL"`
	NoAutosync    bool   `name:"no-autosync" help:"branch-per-slot mode (skip pre-commit hub pull)"`
}

SwarmJoinCmd opens a per-slot leaf, ensures the slot's fossil user exists in the hub, claims the named task, writes the swarm session record to KV, and prints the slot's worktree path on stdout (for `cd $(bones swarm cwd ...)`-style sourcing).

On a successful return, the leaf process has been stopped (per the per-CLI-invocation lifetime contract — ADR 0028 §"Process lifecycle"). The session record persists so subsequent `swarm commit` and `swarm close` invocations can Resume.

All the assembly (workspace check, fossil-user creation, KV session record CAS, leaf open/claim) lives in internal/swarm (ADR 0028). This verb is a thin adapter from CLI flags to swarm.Acquire + FreshLease.Release.

`--auto` (ADR 0050, #282) is the synthetic-slot path: the slot name is derived from `.bones/agent.id` and a synthetic task is auto-created. Mutually exclusive with `--slot` / `--task-id`. The caller (typically Claude Code's `Agent` tool) consumes the printed `BONES_SLOT_WT=` line to `cd` into the slot's worktree.

func (*SwarmJoinCmd) Run added in v0.1.1

func (c *SwarmJoinCmd) Run(g *repocli.Globals) error

Run drives the join flow per ADR 0028 §"swarm join", via swarm.Acquire: open workspace, Acquire (which does the role-guard check, ensures the slot user, CAS-writes the session record, opens the leaf, claims the task, writes the pid file), emit the report, FreshLease.Release.

`--auto` reroutes to swarm.JoinAuto (ADR 0050) which derives the slot from agent.id, auto-creates a synthetic task, and treats a re-join as idempotent (no error, prints the same slot dir).

type SwarmReapCmd added in v0.5.0

type SwarmReapCmd struct {
	Threshold time.Duration `name:"threshold" default:"90s" help:"staleness threshold"`
	DryRun    bool          `name:"dry-run" help:"report stale slots without closing"`
	HubURL    string        `name:"hub-url" help:"override hub fossil HTTP URL"`
}

SwarmReapCmd force-closes stale swarm sessions on this host. A session is "stale" when LastRenewed is older than the staleness threshold (DefaultThresholdSec, mirroring `bones swarm status`'s classification). Reap closes each stale same-host session as result=fail with summary "auto-reaped: stale" and releases the claim so the task is re-dispatchable.

Cross-host stale sessions are reported but never reaped — the owning host's PID may still be making local progress, and a remote reap would clobber that. Operators with cross-host stale sessions must run `bones swarm reap` from the owning host.

The verb exists because the harness layer can drop a leaf without running close (orchestrator rate-limit, kill -9, machine sleep) and the substrate's TTL eviction is too coarse a signal for operators who want to recover within seconds. Before this verb, the manual recovery path was a per-slot `bones swarm close --slot=X --result=fail` loop.
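
For example:

	bones swarm reap --dry-run         # report stale slots without closing
	bones swarm reap --threshold=5m    # widen the staleness window before closing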

func (*SwarmReapCmd) Run added in v0.5.0

func (c *SwarmReapCmd) Run(g *repocli.Globals) error

Run executes the reap. Workspace + sessions are opened once; per-stale-session closes run sequentially (small N expected; concurrent closes would race the same hub fossil).

type SwarmStatusCmd added in v0.1.1

type SwarmStatusCmd struct {
	JSON bool `name:"json" help:"emit JSON"`
}

SwarmStatusCmd lists every active swarm session in the workspace. Output is derived purely from KV state — there is no per-process liveness probe because every swarm verb is a fresh CLI invocation whose pid dies at end of verb. LastRenewed is the canonical signal: active = renewed within activeThresholdSec, stale = older.

func (*SwarmStatusCmd) Run added in v0.1.1

func (c *SwarmStatusCmd) Run(g *repocli.Globals) error

type SwarmTasksCmd added in v0.1.1

type SwarmTasksCmd struct {
	Slot string `name:"slot" required:"" help:"slot name"`
	JSON bool   `name:"json" help:"emit JSON"`
}

SwarmTasksCmd lists ready tasks scoped to a slot. Wraps the same readiness model as `bones tasks list --ready` (coord.Ready handles blocks/supersedes/duplicates/parent edges) and filters the result to tasks whose slot annotation matches --slot.

Slot membership is resolved in three places, first-match-wins:

  1. Task.Context["slot"] equals --slot
  2. Task title contains "[slot: <name>]" literal
  3. Any file path starts with "<slot>/" or contains "/<slot>/"

(3) mirrors the validate_plan slot-disjointness rule so plans that don't yet stamp Context["slot"] still surface their tasks here.

func (*SwarmTasksCmd) Run added in v0.1.1

func (c *SwarmTasksCmd) Run(g *repocli.Globals) error

type TasksAggregateCmd

type TasksAggregateCmd struct {
	Since time.Duration `name:"since" default:"1h" help:"window size"`
	JSON  bool          `name:"json" help:"emit JSON"`
}

TasksAggregateCmd produces a per-slot summary of tasks within a window.

func (*TasksAggregateCmd) Run

type TasksAutoclaimCmd

type TasksAutoclaimCmd struct {
	Enabled  *bool         `name:"enabled" help:"enable tick (default: env AGENT_INFRA_AUTOCLAIM)"`
	Idle     bool          `name:"idle" default:"true" help:"treat session as idle for this tick"`
	ClaimTTL time.Duration `name:"claim-ttl" default:"1m" help:"claim TTL for auto-claimed task"`
}

TasksAutoclaimCmd runs a single autoclaim tick: if the session is idle and no task is currently claimed by this agent, atomically pick the oldest Ready task and claim it, then post a "claimed by" notice on the task thread.

One-shot — the caller (e.g. an agent harness's idle hook) decides when to invoke. There is no daemon mode; if a `--watch` mode is ever needed, the helper below moves into a package. ADR 0035 records why this lives here for now.

func (*TasksAutoclaimCmd) Run

type TasksClaimCmd

type TasksClaimCmd struct {
	ID    string `arg:"" help:"task id"`
	JSON  bool   `name:"json" help:"emit JSON"`
	Quiet bool   `name:"quiet" help:"suppress success output"`
}

TasksClaimCmd claims a task as the current agent.

func (*TasksClaimCmd) Run

func (c *TasksClaimCmd) Run(g *repocli.Globals) error

type TasksCloseCmd

type TasksCloseCmd struct {
	ID       string `arg:"" help:"task id"`
	Reason   string `name:"reason" help:"close reason (optional)"`
	KeepSlot bool   `name:"keep-slot" help:"close the task; leave the swarm slot intact"`
	JSON     bool   `name:"json" help:"emit JSON"`
	Quiet    bool   `name:"quiet" help:"suppress success output"`
}

TasksCloseCmd closes a task. When the task is bound to a live swarm slot (a session record in bones-swarm-sessions carries this task's id), the matching slot is auto-released via the existing `bones swarm close` path so the operator does not have to remember a separate cleanup step (issue #209, ADR 0049).

--keep-slot opts out of the auto-release for the rare case where the operator wants the slot to outlive the task it was claimed for. The auto-release path uses the same artifact precondition swarm close uses; if it would refuse (no commit since join) then `tasks close` itself refuses and leaves the task state unchanged so close is atomic across the two layers.

func (*TasksCloseCmd) Run

func (c *TasksCloseCmd) Run(g *repocli.Globals) error

type TasksCmd

type TasksCmd struct {
	Create    TasksCreateCmd    `cmd:"" help:"Create a new task"`
	List      TasksListCmd      `cmd:"" help:"List tasks"`
	Show      TasksShowCmd      `cmd:"" help:"Show a task"`
	Update    TasksUpdateCmd    `cmd:"" help:"Update a task"`
	Claim     TasksClaimCmd     `cmd:"" help:"Claim a task"`
	Close     TasksCloseCmd     `cmd:"" help:"Close a task"`
	Watch     TasksWatchCmd     `cmd:"" help:"Stream task lifecycle events"`
	Status    TasksStatusCmd    `cmd:"" help:"Snapshot of all tasks by status"`
	Link      TasksLinkCmd      `cmd:"" help:"Link two tasks with an edge type"`
	Prime     TasksPrimeCmd     `cmd:"" help:"Print agent-tasks context (prime)"`
	Autoclaim TasksAutoclaimCmd `cmd:"" help:"Run one autoclaim tick"`
	Dispatch  TasksDispatchCmd  `cmd:"" hidden:"" help:"Dispatch parent/worker (hub-only)"`
	Aggregate TasksAggregateCmd `cmd:"" help:"Aggregate per-slot task summary"`
	Compact   TasksCompactCmd   `cmd:"" help:"Compact eligible closed tasks (ADR 0016)"`
	Ready     TasksReadyCmd     `cmd:"" help:"List unblocked, unclaimed open tasks"`
}

TasksCmd groups all `bones tasks <verb>` subcommands. The Hipp audit folded ready/stale/orphans/preflight (plus the literal 'add' alias of 'create') into 'list' filter flags; dispatch is hub-only and hidden.

type TasksCompactCmd

type TasksCompactCmd struct {
	//nolint:lll // single-line struct tag is the conventional Kong pattern
	Age        time.Duration `name:"age" default:"720h" help:"only compact closed tasks older than this (default 30d)"`
	Max        int           `name:"max" default:"100" help:"max tasks compacted per pass"`
	DryRun     bool          `name:"dry-run" help:"list eligible tasks without writing"`
	Summarizer string        `name:"summarizer" default:"noop" help:"summarizer impl (noop|...)"`
}

TasksCompactCmd compacts eligible closed tasks via the substrate primitive coord.Leaf.Compact (ADR 0016). The verb is operator-driven: it opens a transient leaf bound to a fixed CLI slot ID, runs one batch pass, and stops the leaf. No swarm session is acquired; this verb reuses the leaf abstraction only because Compact's commit path lives there.

Defaults:

  • --age=720h (30d): closed-since age threshold
  • --max=100: per-pass limit
  • --summarizer=noop: ships the no-op default; future Haiku binding is a separate follow-up.

--dry-run lists eligible tasks without writing.

func (*TasksCompactCmd) Run

func (c *TasksCompactCmd) Run(g *repocli.Globals) error

Run is the Kong entry point. Resolves the workspace, runs the verb writing to stdout, and maps task-domain errors to the standard CLI exit codes via taskCLIError.

type TasksCreateCmd

type TasksCreateCmd struct {
	Title      string   `arg:"" help:"task title"`
	Files      string   `name:"files" help:"comma-separated file list"`
	Parent     string   `name:"parent" help:"parent task id"`
	DeferUntil string   `name:"defer-until" help:"RFC3339 time"`
	Slot       string   `name:"slot" help:"slot name; stamps Context[slot]"`
	Context    []string `name:"context" help:"key=value (repeatable)" sep:"none"`
	JSON       bool     `name:"json" help:"emit JSON"`
	Quiet      bool     `name:"quiet" help:"suppress success output"`
}

TasksCreateCmd creates a new task.

func (*TasksCreateCmd) Run

func (c *TasksCreateCmd) Run(g *repocli.Globals) error

type TasksDispatchCmd

type TasksDispatchCmd struct {
	Parent TasksDispatchParentCmd `cmd:"" help:"Run dispatch parent"`
	Worker TasksDispatchWorkerCmd `cmd:"" help:"Run dispatch worker"`
}

TasksDispatchCmd groups dispatch parent/worker subcommands.

type TasksDispatchParentCmd

type TasksDispatchParentCmd struct {
	TaskID             string `name:"task-id" required:"" help:"task id"`
	WorkerBin          string `name:"worker-bin" help:"worker binary path (default: this process)"`
	WorkerResult       string `name:"worker-result" default:"success" help:"worker final result"`
	WorkerSummary      string `name:"worker-summary" default:"done" help:"worker final summary"`
	WorkerClaimHandoff bool   `name:"worker-claim-handoff" help:"worker takes claim ownership"`
}

TasksDispatchParentCmd runs the parent side of the dispatch flow: spawn a worker process, subscribe for its result, then close/fork the claimed task accordingly.

func (*TasksDispatchParentCmd) Run

type TasksDispatchWorkerCmd

type TasksDispatchWorkerCmd struct {
	TaskID           string        `name:"task-id" required:"" help:"task id"`
	TaskThread       string        `name:"task-thread" required:"" help:"task chat thread"`
	WorkerAgentID    string        `name:"worker-agent-id" required:"" help:"worker agent id"`
	ClaimFromAgentID string        `name:"claim-from-agent-id" help:"expected previous claimer"`
	HandoffTTL       time.Duration `name:"handoff-ttl" default:"30s" help:"handoff hold ttl"`
	Result           string        `name:"result" default:"success" help:"success|fork|fail"`
	Summary          string        `name:"summary" default:"done" help:"final summary"`
	Branch           string        `name:"branch" help:"fork branch"`
	Rev              string        `name:"rev" help:"fork rev"`
}

TasksDispatchWorkerCmd runs the worker side of the dispatch flow: optionally take over a claim, post progress to the task thread, then close (on success-with-handoff) or just announce the result.

func (*TasksDispatchWorkerCmd) Run

type TasksLinkCmd

type TasksLinkCmd struct {
	From  string `arg:"" help:"from task id"`
	To    string `arg:"" help:"to task id"`
	Type  string `name:"type" help:"edge type: blocks|supersedes|duplicates|discovered-from"`
	JSON  bool   `name:"json" help:"emit JSON"`
	Quiet bool   `name:"quiet" help:"suppress success output"`
}

TasksLinkCmd links two tasks with a typed edge.

func (*TasksLinkCmd) Run

func (c *TasksLinkCmd) Run(g *repocli.Globals) error

type TasksListCmd

type TasksListCmd struct {
	All       bool   `name:"all" help:"include closed tasks"`
	Status    string `name:"status" help:"open|claimed|closed"`
	Closed    bool   `name:"closed" help:"alias for --status=closed"`
	Open      bool   `name:"open" help:"alias for --status=open"`
	Claimed   bool   `name:"claimed" help:"alias for --status=claimed"`
	ClaimedBy string `name:"claimed-by" help:"agent id, or - for unclaimed"`
	Ready     bool   `name:"ready" help:"only tasks ready to claim (open, unblocked, not deferred)"`
	Stale     int    `name:"stale" help:"only tasks not updated in N days; 0 = off"`
	Orphans   bool   `name:"orphans" help:"only claimed tasks whose claimer is offline"`
	BySlot    bool   `name:"by-slot" help:"group by slot; flag hot slots"`
	JSON      bool   `name:"json" help:"emit JSON"`
}

TasksListCmd lists tasks. Filter flags compose: status → ready → stale → orphans. --ready and --orphans require a coord session; --stale and the others run from the in-memory task list only.

--by-slot is the inspection mode added for issue #214: it groups open tasks by Context["slot"] and flags slots whose open-task count exceeds HotSlotThreshold, so plan authors see the cost of slot packing before dispatch (per ADR 0023's slot disjointness contract and ADR 0028's per-slot leaf invariant). When set, --by-slot replaces the per-task rendering with a per-slot summary; the other filter flags still scope which tasks the grouping considers (e.g. `--by-slot --status=open`).
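
For example:

	bones tasks list --ready            # actionable work only
	bones tasks list --by-slot          # per-slot summary, hot slots flagged
	bones tasks list --stale=7 --json   # untouched for a week, machine-readable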

func (*TasksListCmd) Run

func (c *TasksListCmd) Run(g *repocli.Globals) error

type TasksPrimeCmd

type TasksPrimeCmd struct {
	JSON bool `name:"json" help:"emit raw bones JSON shape (operator scripts)"`
	// Hook selects an event whose Claude Code hook envelope should
	// be emitted. Empty (default) means no envelope. See ADR 0051
	// for the supported values and the protocol contract.
	Hook string `name:"hook" enum:"session-start," default:"" help:"emit Claude Code hook envelope"`
}

TasksPrimeCmd prints an agent context summary (open/ready/claimed tasks, recent threads, peers online).

Three output modes:

  • default (no flags): human-readable markdown via formatPrime().
  • --json: raw bones JSON shape ({open_tasks, ready_tasks, ...}). Stable contract for operator scripts (separately governed by issue #321's schema-contract work).
  • --hook=session-start: Claude Code hook protocol envelope wrapping the formatPrime() markdown in `hookSpecificOutput.additionalContext`. This is the form `bones up` writes into .claude/settings.json so Claude Code's SessionStart hook reader actually injects bones context into the agent window. See ADR 0051.

`--json` and `--hook=X` describe two distinct consumers and are mutually exclusive: the operator-script surface (--json) and the Claude Code protocol surface (--hook). Combining them is a CLI error.
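
As invocations:

	bones tasks prime                       # human-readable markdown
	bones tasks prime --json                # stable shape for operator scripts
	bones tasks prime --hook=session-start  # Claude Code hook envelope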

func (*TasksPrimeCmd) Run

func (c *TasksPrimeCmd) Run(g *repocli.Globals) error

type TasksReadyCmd

type TasksReadyCmd struct {
	JSON  bool   `name:"json" help:"emit JSON array"`
	Limit int    `name:"limit" default:"0" help:"max rows; 0 = unlimited"`
	Slot  string `name:"slot" help:"filter to one slot's task scope"`
	Mine  bool   `name:"mine" help:"only tasks the calling agent could claim"`
}

TasksReadyCmd lists open, unblocked, unclaimed tasks — the actionable work an agent could pick up right now. Mirrors `bd ready` from beads.

The verb is read-only; pairs with `bones tasks autoclaim` for the claim step. Filtering and sort logic live in tasks.FilterReady / tasks.SortReady so a future caller (e.g. a richer prime view) can reuse the same predicate without going through the CLI.

Default sort: priority highest-first (Context["priority"]="P<N>", lower N wins), then CreatedAt oldest-first as the FIFO tiebreak.

func (*TasksReadyCmd) Run

func (c *TasksReadyCmd) Run(g *repocli.Globals) error

type TasksShowCmd

type TasksShowCmd struct {
	ID   string `arg:"" help:"task id"`
	JSON bool   `name:"json" help:"emit JSON"`
}

TasksShowCmd prints a single task.

func (*TasksShowCmd) Run

func (c *TasksShowCmd) Run(g *repocli.Globals) error

type TasksStatusCmd

type TasksStatusCmd struct{}

TasksStatusCmd prints a one-shot snapshot of hub and backlog state.

Output:

hub:      http://127.0.0.1:8765 (pid 12345)
nats:     nats://127.0.0.1:4222
backlog:  3 open · 1 claimed · 2 closed (last 24h)

func (*TasksStatusCmd) Run

func (c *TasksStatusCmd) Run(g *repocli.Globals) error

type TasksUpdateCmd

type TasksUpdateCmd struct {
	ID         string   `arg:"" help:"task id"`
	Status     string   `name:"status" help:"open|claimed|closed"`
	Title      *string  `name:"title" help:"new title"`
	Files      *string  `name:"files" help:"comma-separated file list (replaces existing)"`
	Parent     *string  `name:"parent" help:"parent task id"`
	DeferUntil *string  `name:"defer-until" help:"RFC3339 time (empty clears)"`
	Context    []string `name:"context" help:"key=value (repeatable; merges)" sep:"none"`
	ClaimedBy  *string  `name:"claimed-by" help:"agent id to claim as"`
	JSON       bool     `name:"json" help:"emit JSON"`
	Quiet      bool     `name:"quiet" help:"suppress success output"`
}

TasksUpdateCmd updates a task. Flags are pointer-typed so we can detect "flag absent" vs "flag set to empty string" — a distinction the underlying mutator depends on (only set fields get applied).
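
The absent-vs-empty distinction in practice (task id is a placeholder):

	bones tasks update tsk_123 --status=closed    # --defer-until absent: field untouched
	bones tasks update tsk_123 --defer-until=""   # empty string: clears the deferral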

func (*TasksUpdateCmd) Run

func (c *TasksUpdateCmd) Run(g *repocli.Globals) error

type TasksWatchCmd

type TasksWatchCmd struct {
	From uint64 `name:"from" help:"start at stream sequence (excl with --since)"`
	// Since takes a Go time.Duration (e.g. "24h"). Mutually exclusive
	// with --from. Default zero means live-only consumption.
	Since time.Duration `name:"since" help:"start at wall-clock offset (excl with --from)"`
	JSON  bool          `name:"json" help:"emit one EventEnvelope JSON object per line"`
}

TasksWatchCmd subscribes to the task event log and streams lifecycle events to stdout until Ctrl-C or context cancellation. Default is live-only per ADR 0052; --from and --since opt into a one-shot backfill before tailing. --json switches the output to one EventEnvelope JSON object per line for downstream automation.
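
For example:

	bones tasks watch                      # live-only tail
	bones tasks watch --since=24h --json   # backfill the last day, then tail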

func (*TasksWatchCmd) Run

func (c *TasksWatchCmd) Run(g *repocli.Globals) error

type TelemetryCmd added in v0.5.0

type TelemetryCmd struct {
	Status  TelemetryStatusCmd  `cmd:"" default:"withargs" help:"Print current telemetry state"`
	Disable TelemetryDisableCmd `cmd:"" help:"Opt out of telemetry (writes ~/.bones/no-telemetry)"`
	Enable  TelemetryEnableCmd  `cmd:"" help:"Re-enable telemetry (removes the opt-out marker)"`
}

TelemetryCmd groups the operator-facing telemetry verbs introduced in ADR 0040. Default invocation (no subcommand) is treated as `status` per the ADR's "fewer subcommands to remember" decision.

The verbs are file-system-only — no NATS, no fossil, no workspace. They work in any directory and on a fresh machine. An operator running `bones telemetry disable` immediately after install (before `bones up`) is the load-bearing UX.
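
For example, on a fresh machine before any workspace exists:

	bones telemetry disable
	bones telemetry status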

type TelemetryDisableCmd added in v0.5.0

type TelemetryDisableCmd struct{}

TelemetryDisableCmd writes the opt-out marker. Idempotent — safe to run on a fresh install or repeatedly. Prints a confirmation line so the operator sees the action took effect.

func (*TelemetryDisableCmd) Run added in v0.5.0

Run writes the marker file. Surfaces I/O errors directly with the path so the operator can investigate (e.g. permission issues on read-only home directories).

type TelemetryEnableCmd added in v0.5.0

type TelemetryEnableCmd struct{}

TelemetryEnableCmd removes the opt-out marker. Idempotent on a fresh install (no marker → no-op).

func (*TelemetryEnableCmd) Run added in v0.5.0

Run removes the marker. The follow-up resolution outcome depends on the build flavor and env vars; the printed hint points the operator at `bones telemetry status` rather than guessing.

type TelemetryStatusCmd added in v0.5.0

type TelemetryStatusCmd struct{}

TelemetryStatusCmd prints the current resolution outcome. Output shape is one labeled line per fact (state, reason, endpoint, dataset, install_id) so a script can parse it with grep.

func (*TelemetryStatusCmd) Run added in v0.5.0

Run renders the status. Always exits 0 — even "off" is a valid state to report.

type UpCmd

type UpCmd struct {
	// Stealth skips the .claude/settings.json merge. Combine with
	// BONES_DIR=/path for a zero-workspace-write install.
	Stealth bool `name:"stealth" help:"skip .claude/settings.json merge (combine with BONES_DIR)"`
	JSON    bool `name:"json" help:"emit JSON envelope (ADR 0053)"`
	Quiet   bool `name:"quiet" help:"suppress per-action output and success signature"`
}

UpCmd performs workspace bootstrap from a fresh clone: workspace init (idempotent), orchestrator scaffold (scripts, skills, hooks), git hook install, agent guidance, and Fossil drift check. The hub itself is no longer started by `bones up`; per ADR 0041 it auto-starts on the first verb that needs it via workspace.Join.

Per #314 the default output is now per-action structured: every gitignore add, hook install/rewrite, skill sync, and manifest bump gets one grep-friendly line, terminated by a one-line success signature ("up <workspace> actions=<n>"). --quiet suppresses both the per-action lines AND the summary signature; --json emits the same actions wrapped in the ADR 0053 schema envelope.

--stealth (issue #291) suppresses the merge into .claude/settings.json — useful when running bones against a project where the operator does not want bones-managed hook entries written into Claude config. Combine with BONES_DIR=/some/path for a zero-workspace-write install.
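
Illustrative invocations (the BONES_DIR path is a placeholder):

	bones up --json                           # ADR 0053 envelope
	BONES_DIR=/tmp/bones bones up --stealth   # zero-workspace-write install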

func (*UpCmd) Run

func (c *UpCmd) Run(g *repocli.Globals) error

type ValidatePlanCmd

type ValidatePlanCmd struct {
	Path      string `arg:"" type:"existingfile" help:"Markdown plan path"`
	ListSlots bool   `name:"list-slots" help:"deprecated no-op (JSON is always emitted on stdout)"`
}

ValidatePlanCmd parses a Markdown plan, extracts [slot: name] annotations, and verifies:

  1. Every Task heading has a [slot: name].
  2. Slots are directory-disjoint (no two slots share a directory prefix).
  3. Each task's Files: paths begin with the slot's owned directory.

Output contract (deliberately strict so orchestrator scripts can pipe stdout through `jq` / `python -c json.load` without stripping prose first):

stdout : always a single JSON object {errors, slots} — empty
         arrays on a clean validation, non-empty Errors on a
         violation. Even parse failures emit a JSON object so
         consumers don't have to special-case stdout shape.
stderr : human-readable prose, one violation per line, on
         failures. Empty on a clean run.
exit   : 0 = valid, 1 = plan-level violations, 2 = parse / IO error.

--list-slots is retained as a no-op flag for backwards compatibility with orchestrator scripts authored against the older "JSON only on success when --list-slots set" shape. It emits a one-line stderr hint the first time it's used so callers know the flag is deprecated; behavior is identical with or without it.
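
For example (plan path is a placeholder):

	bones validate-plan plans/phase1.md | jq '.errors'   # stdout is always one JSON object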

func (*ValidatePlanCmd) Run

func (c *ValidatePlanCmd) Run(g *repocli.Globals) error

type ValidateResult added in v0.5.0

type ValidateResult struct {
	Errors []string    `json:"errors"`
	Slots  []slotEntry `json:"slots"`
}

ValidateResult is the on-stdout JSON shape emitted by every invocation of ValidatePlanCmd. Fields are always present; Errors is non-empty iff exit is non-zero. Slots is best-effort: even on validation failure we still emit whatever slot annotations the parser saw, so an orchestrator that wants to dispatch the valid slots alongside reporting the failures has the data without re-parsing.

type WorkspacesCmd added in v0.8.0

type WorkspacesCmd struct {
	Ls   WorkspacesLsCmd   `cmd:"" help:"List all known workspaces (live + stopped)"`
	Show WorkspacesShowCmd `cmd:"" help:"Show one workspace by name (or filename id)"`
}

WorkspacesCmd groups the cross-workspace inspection verbs (#174).

On disk, every workspace registry file is keyed by sha256(cwd) — see registry.WorkspaceID. Filenames are intentionally opaque so that agents and humans can both write them without a name-collision protocol; this group surfaces a name-keyed view on top.

type WorkspacesLsCmd added in v0.8.0

type WorkspacesLsCmd struct {
	JSON  bool `name:"json" help:"emit machine-readable JSON"`
	Prune bool `name:"prune" help:"remove registry entries whose hub is stopped"`
	Yes   bool `name:"yes" short:"y" help:"skip the confirmation prompt for --prune"`
}

WorkspacesLsCmd renders the registry as a human-readable table.

We deliberately do NOT prune stopped entries here by default (unlike `bones status --all`, which has live-only semantics): the operator running `bones workspaces ls` is asking "what does bones think it knows about?", and silently discarding stale entries makes it harder to debug "why is bones up showing the wrong project?" scenarios.

--prune is the explicit cleanup verb (#180): list dead entries (HubStatus == stopped), confirm, and delete the registry files. Entries whose status is "unknown" are skipped — those could be workspaces whose hub never recorded a HubURL, and pruning them would be wrong.

func (*WorkspacesLsCmd) Run added in v0.8.0

func (c *WorkspacesLsCmd) Run() error

Run gathers the registry view via registry.ListInfo (which performs the hub-status probe) and formats it. ListInfo already sorts by Name,Cwd, so we don't re-sort here.

type WorkspacesShowCmd added in v0.8.0

type WorkspacesShowCmd struct {
	Name string `arg:"" required:"" help:"Workspace name or 16-hex id"`
	JSON bool   `name:"json" help:"emit raw machine-readable JSON instead of pretty"`
}

WorkspacesShowCmd prints one workspace's full registry entry.

The Name argument is matched case-sensitively against Entry.Name and against the on-disk filename ID (16-hex chars). Matching by ID lets the operator disambiguate when two workspaces share a name — `ls` surfaces the ID in --json output, and Show accepts it.

func (*WorkspacesShowCmd) Run added in v0.8.0

func (c *WorkspacesShowCmd) Run() error

Directories

Path      Synopsis
schemas   Package schemas defines the typed payload structs for every `--json`-emitting bones CLI verb (ADR 0053).
uxprint   Package uxprint is the single source of truth for one-line success signatures emitted by bones state-mutating CLI verbs.
