# TeamCode

A multi-agent coding CLI where specialized agents communicate over a central message bus to collaboratively implement, review, test, and debug code.
## Architecture

TeamCode runs a team of 8 agents that talk to each other through a central message bus:

| Agent | Role |
|---|---|
| Liaison | User-facing interface; forwards tasks and streams results |
| Leader | Plans tasks, delegates to coders, routes to reviewers/testers |
| Coder | Writes code via LLM + FileTool, produces real file changes |
| Reviewer | Reviews code diffs, approves or requests changes |
| Tester | Runs go test ./... (or configured framework), parses real results |
| Debugger | Diagnoses test failures, proposes fixes |
| Architect | Designs system architecture for complex tasks |
| Docs | Generates documentation |
### Pipeline

```text
User → Liaison → Leader → Coder → Leader → Reviewer → Leader → Tester → Leader → Liaison → User
                            ↑                                     |
                            └────── Debugger (on failure) ←───────┘
```
## Installation

### macOS / Linux (install script)

```bash
curl -sSL https://raw.githubusercontent.com/p-yan-6908/teamcode/main/install.sh | bash
```

### Homebrew

```bash
brew install --formula https://raw.githubusercontent.com/p-yan-6908/teamcode/main/homebrew/teamcode.rb
```

### From source

```bash
go install github.com/p-yan-6908/teamcode/cmd/teamcode@latest
```

### Build locally

```bash
git clone https://github.com/p-yan-6908/teamcode.git
cd teamcode
make build
```
## Quick Start

### Initialize

Set up your project configuration:

```bash
teamcode init
```

Verify your setup:

```bash
teamcode doctor
```

### Mock Mode (no API key needed)

```bash
go build -o teamcode ./cmd/teamcode
./teamcode --mock "Build a todo API"
```

### With OpenAI

```bash
export OPENAI_API_KEY=sk-...
./teamcode --provider openai --model gpt-4o "Build a todo API"
```

### With Anthropic

```bash
export ANTHROPIC_API_KEY=sk-ant-...
./teamcode --provider anthropic --model claude-sonnet-4-20250514 "Build a todo API"
```

### With Ollama (local)

```bash
export OLLAMA_HOST=localhost:11434
./teamcode --provider ollama --model qwen2.5-coder:14b "Build a todo API"
```

### With OpenRouter

```bash
export OPENROUTER_API_KEY=sk-or-...
./teamcode --provider openrouter --model openai/gpt-4o "Build a todo API"
```

### Interactive TUI

```bash
./teamcode --mock
```

Type a task in the input field. Press Enter to submit. Press `?` for help.
## Provider Setup

Detailed setup instructions for each supported LLM provider.

### OpenAI

1. Create an API key at platform.openai.com
2. Set the environment variable:
   ```bash
   export OPENAI_API_KEY=sk-...
   ```
3. Run a task:
   ```bash
   teamcode --provider openai --model gpt-4o "Build a todo API"
   ```

Recommended models: `gpt-4o`, `gpt-4o-mini`.

### Anthropic

1. Create an API key at console.anthropic.com
2. Set the environment variable:
   ```bash
   export ANTHROPIC_API_KEY=sk-ant-...
   ```
3. Run a task:
   ```bash
   teamcode --provider anthropic --model claude-sonnet-4-20250514 "Build a todo API"
   ```

Recommended model: `claude-sonnet-4-20250514`.

### Ollama (local)

1. Install Ollama from ollama.com
2. Pull a model:
   ```bash
   ollama pull qwen2.5-coder:14b
   ```
3. Ensure Ollama is running (default: `localhost:11434`)
4. Run a task:
   ```bash
   teamcode --provider ollama --model qwen2.5-coder:14b "Build a todo API"
   ```

No API key is required. Ollama runs entirely on your machine.

### OpenRouter

1. Create an API key at openrouter.ai
2. Set the environment variable:
   ```bash
   export OPENROUTER_API_KEY=sk-or-...
   ```
3. Run a task:
   ```bash
   teamcode --provider openrouter --model openai/gpt-4o "Build a todo API"
   ```

OpenRouter supports many models via a unified API. Prefix the model with the provider namespace, e.g., `openai/gpt-4o` or `anthropic/claude-sonnet-4-20250514`.
## Command Reference

| Command | Description |
|---|---|
| `teamcode [options] <task>` | Run a task in TUI or headless mode |
| `teamcode status` | Show current session status |
| `teamcode inspect task <id>` | Inspect a specific task's details |
| `teamcode rollback <task-id>` | Roll back a task's git transaction |
| `teamcode logs [flags]` | Show and filter event logs |
| `teamcode doctor` | Check environment and configuration |
| `teamcode timeline <task-id>` | Show task execution timeline |
## Headless Mode

Run TeamCode in automated pipelines without user interaction:

```bash
teamcode --mock --yes --json "task"
```

### Options

| Flag | Description | Stability |
|---|---|---|
| `--yes` | Auto-approve all review requests | STABLE |
| `--json` | Output JSON results for parsing | STABLE |
| `--mock` | Use mock LLM (no API key required) | STABLE |
| `--budget` | Set USD budget limit (default: $10) | EXPERIMENTAL |
| `--deadline` | Set runtime deadline (e.g., 5m, 1h) | EXPERIMENTAL |

### Examples

```bash
# Mock mode with auto-approval
teamcode --mock --yes "Add user authentication"

# With real LLM and budget limit
teamcode --budget 5.00 "Build REST API"

# With deadline
teamcode --deadline 10m "Refactor legacy code"

# JSON output for CI/CD
teamcode --mock --json "Write unit tests" | jq '.task_id'
```
## Log Filtering

Filter and inspect event logs from TeamCode sessions:

```bash
# Follow logs in real time
teamcode logs --follow

# Filter by task ID
teamcode logs --task <id>

# Filter by agent
teamcode logs --agent coder-1

# Filter by event type
teamcode logs --type action_complete

# Show last N entries
teamcode logs --tail 50

# Output as JSON
teamcode logs --json
```

### Log Filtering Flags

| Flag | Description |
|---|---|
| `--follow` | Stream logs in real time |
| `--task <id>` | Filter logs by task ID |
| `--agent <id>` | Filter by agent (liaison, leader, coder, etc.) |
| `--type <type>` | Filter by event type |
| `--since <time>` | Filter by timestamp (RFC3339 format) |
| `--tail <n>` | Show last N entries |
| `--json` | Output logs as JSON array |
## Configuration

Create a `teamcode.toml` in your project root (see `teamcode.example.toml` for all options):

```toml
[llm]
provider = "openai"
model = "gpt-4o"
api_key = "$OPENAI_API_KEY"
max_concurrent_requests = 5
rate_limit_rps = 10
```

Supported providers: `openai`, `anthropic`, `ollama`, `openrouter`, `mock`.

Example configurations:

```toml
# Anthropic
[llm]
provider = "anthropic"
model = "claude-sonnet-4-20250514"
api_key = "$ANTHROPIC_API_KEY"
```

```toml
# Ollama (local, no API key needed)
[llm]
provider = "ollama"
model = "qwen2.5-coder:14b"
base_url = "localhost:11434"
```

```toml
# OpenRouter
[llm]
provider = "openrouter"
model = "openai/gpt-4o"
api_key = "$OPENROUTER_API_KEY"
```

### Environment Variables

API keys can reference environment variables with a `$` prefix:

```toml
api_key = "$OPENAI_API_KEY"
```
### Shell Policy

Restrict allowed shell commands for security:

```toml
[tools.shell]
allowed_commands = ["go", "git", "npm", "pytest", "cargo"]
sandbox = true
```
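As a rough illustration of how such an allowlist might be enforced, the sketch below checks only the first token (the program name) of a command line. This is a simplified model, not the real ShellTool implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// allowed reports whether a shell command line may run under the
// [tools.shell] allowlist. Only the program name (first token) is
// checked here -- a deliberately simplified sketch.
func allowed(cmdline string, allowlist []string) bool {
	fields := strings.Fields(cmdline)
	if len(fields) == 0 {
		return false
	}
	for _, c := range allowlist {
		if fields[0] == c {
			return true
		}
	}
	return false
}

func main() {
	policy := []string{"go", "git", "npm", "pytest", "cargo"}
	fmt.Println(allowed("go test ./...", policy))       // true
	fmt.Println(allowed("curl http://evil.example", policy)) // false
}
```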
### Budgets

Set spending limits to prevent runaway costs:

```toml
[llm]
budget_usd = 10.00
max_concurrent_requests = 5
rate_limit_rps = 10
```

### Approvals

Configure automatic approval or timeout behavior:

```toml
[approval]
auto_approve = false
timeout_seconds = 120

[approval.policies]
security_critical = "require"
docs_only = "auto"
```
### Runtime Deadlines

Set task timeouts to prevent runaway tasks:

```toml
[runtime]
default_timeout_seconds = 300
max_retries = 3

[runtime.per_task]
code_generation = 180
code_review = 60
testing = 120
```
## Keyboard Shortcuts

| Key | Action |
|---|---|
| Enter | Submit task |
| Tab | Switch view (agents / messages / tasks) |
| Ctrl+S | Soft cancel (graceful stop) |
| Ctrl+Shift+C | Hard cancel (immediate stop) |
| Ctrl+C | Quit |
| ? | Toggle help |
## Development

```bash
go test ./...        # Run all tests
go test -race ./...  # Run with race detector
go build ./...       # Build
gofmt -w .           # Format
```

### Smoke Tests

Smoke tests verify real provider connectivity and are opt-in via environment variables.

```bash
# Run OpenAI smoke tests (requires API key)
export OPENAI_API_KEY=sk-...
make smoke

# Or run directly
go test -v -run Smoke ./internal/llm/...
```

| Environment Variable | Description |
|---|---|
| `TEAMCODE_SMOKE_OPENAI` | Set to 1 to enable OpenAI smoke tests |
| `OPENAI_API_KEY` | Your OpenAI API key (required for OpenAI smoke tests) |

Smoke tests are skipped by default and do not run during `make check`.
## Version

```bash
./teamcode --version
./teamcode --version --json
```
## Compatibility Policy

TeamCode uses explicit stability levels for CLI flags, JSON output, and configuration options.

| Level | Guarantee |
|---|---|
| STABLE | No breaking changes within the same major version. Safe for CI/CD pipelines and scripting. |
| EXPERIMENTAL | May change or be removed at any time. Published for early feedback. |
| DEPRECATED | Will be removed in a future version. A migration path is provided in the release notes. |
### Breaking Changes

Breaking changes require a major version bump and include:

- Removing a STABLE CLI flag
- Changing STABLE exit code semantics
- Removing a STABLE JSON field
- Changing configuration behavior without a migration path

Security fixes are the sole exception: a security fix may break compatibility without a major version bump when the alternative is leaving users vulnerable.
### CLI Flag Stability

| Flag | Stability |
|---|---|
| `--version` | STABLE |
| `--json` | STABLE |
| `--mock` | STABLE |
| `--yes` | STABLE |
| `--config` | STABLE |
| `--provider` | STABLE |
| `--model` | STABLE |
| `--no-commit` | STABLE |
| `--dry-run` | STABLE |
| `--resume` | STABLE |
| `--budget` | EXPERIMENTAL |
| `--deadline` | EXPERIMENTAL |
## Security Boundaries

### What TeamCode Protects Against

- Path traversal in file operations: agents cannot read or write outside the project directory
- Command injection via ShellTool: only commands in the allowlist can execute
- Git safety: git operations are restricted to the current repository
- Budget overrun: configurable token and USD limits prevent runaway LLM spending
- Rollback integrity: file changes are tracked and can be reverted to the pre-task state
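The path traversal protection can be illustrated with a minimal sketch. `insideRoot` is a hypothetical name, and a production check (like the real FileTool) would also resolve symlinks:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// insideRoot reports whether path resolves to a location inside root,
// rejecting "../" escapes. Simplified sketch: symlinks are not resolved.
func insideRoot(root, path string) bool {
	abs, err := filepath.Abs(filepath.Join(root, path))
	if err != nil {
		return false
	}
	rootAbs, err := filepath.Abs(root)
	if err != nil {
		return false
	}
	// Exact match, or strictly below the root with a separator boundary.
	return abs == rootAbs ||
		strings.HasPrefix(abs, rootAbs+string(filepath.Separator))
}

func main() {
	fmt.Println(insideRoot("/work/project", "src/main.go"))      // true
	fmt.Println(insideRoot("/work/project", "../../etc/passwd")) // false
}
```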
### What TeamCode Does Not Protect Against

- Malicious code in LLM output: TeamCode executes code produced by LLMs. A compromised or jailbroken LLM can generate harmful code that passes all internal checks.
- Network egress: agents can make network requests if the allowed shell commands include `curl` or `wget`.
- Denial of service: large inputs or recursive spawn patterns can exhaust resources.
- Side channels: TeamCode does not isolate agents from each other or from the host system beyond file path and command restrictions.
- Secrets in prompts: API keys and tokens in task descriptions may be logged or sent to the LLM provider.
## Release Integrity

TeamCode releases are cryptographically verifiable. Each release includes SHA-256 checksums for all artifacts.

### Verification

Download the release and its checksum file, then verify:

```bash
curl -LO https://github.com/p-yan-6908/teamcode/releases/download/v1.0.0/teamcode_v1.0.0_linux_amd64.tar.gz
curl -LO https://github.com/p-yan-6908/teamcode/releases/download/v1.0.0/teamcode_v1.0.0_linux_amd64.tar.gz.sha256
sha256sum -c teamcode_v1.0.0_linux_amd64.tar.gz.sha256
```

### Provenance Metadata

Inspect build metadata embedded in the binary:

```bash
teamcode --version --json
```

This outputs JSON containing the exact version, Git commit, build date, Go version, and target platform used to produce the binary.
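For illustration, the output might look something like the following. The field names here are hypothetical (only `_schema_version` and the `dirty` flag are documented elsewhere in this README); consult the actual output for the guaranteed schema:

```json
{
  "_schema_version": 1,
  "version": "v1.0.0",
  "commit": "a1b2c3d",
  "dirty": false,
  "build_date": "2025-01-15T10:30:00Z",
  "go_version": "go1.22.1",
  "platform": "linux/amd64"
}
```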
### No-Retag Policy

Tags are immutable. If a release is found to be defective, a new patch version is issued rather than retagging or reusing an existing version string. Never retag a release.
## RC Limitations

This is a release candidate version. The following limitations apply:

### Current Limitations

- Nested git repositories: TeamCode may not correctly handle nested git repositories or git worktrees within the project directory.
- Dirty repository handling: when running on a dirty repository (uncommitted changes), TeamCode attempts to preserve unrelated edits during rollback but cannot guarantee preservation of all changes in complex scenarios.
- Large patch files: very large patch files (>1 MB) may cause performance issues or timeouts.
- Concurrent sessions: only one active session per `.teamcode` directory is supported.
- Multi-repo coordination: TeamCode operates on a single repository.
- Custom LLM providers: only the built-in providers (OpenAI, Anthropic, Ollama, OpenRouter, and mock) are supported.
- Windows PowerShell: only bash-compatible shells are supported.
- Interactive debugging: test failures are diagnosed but not debugged interactively.
- Partial task rollback: only full-task rollback is supported (changes cannot be selectively undone).

### Known Issues

- Session recovery after unexpected termination may not preserve all agent state.
- Some LLM providers may rate-limit requests, causing retries and slower execution.
- Large codebase indexing can be slow on first run.
- The TUI may lag with many concurrent agent messages.
- JSON output may be truncated for very long task logs.
- Some complex merge conflicts may require manual resolution.
## Contributing

Repository layout:

```text
cmd/teamcode/   Entry point and CLI flags
internal/
  agent/        Agent lifecycle, spawn, cancel
  bus/          Pub/sub message bus with dead letter queue
  config/       TOML configuration
  llm/          LLM client interface, OpenAI, mock, rate limiting
  roles/        8 role handlers (leader, coder, reviewer, etc.)
  runtime/      TeamRuntime orchestrator, action execution
  schema/       Structured LLM action schema with validation
  state/        Project state, checkpoints, agent status tracking
  tools/        FileTool, ShellTool, GitTool with sandboxing
  tui/          Bubble Tea TUI with agent status display
  types/        Shared message and action types
```
## State Persistence

TeamCode saves state to `.teamcode/state.json` automatically. On restart, it loads the previous state, including active tasks, agent statuses, and decisions.
## Rollback Safety

TeamCode tracks all file changes and can restore the repository to its original state if something goes wrong.

### When Rollback Is Safe

Rollback is safe when:

- No unrelated changes were made to the repository during task execution
- The repository has not diverged (no new commits on top of the pre-task state)
- The task did not complete successfully (it was cancelled or failed)

### Dirty Repository Behavior

If the repository has uncommitted changes before starting a task:

- TeamCode captures the list of dirty files at task start
- On rollback, only files touched by TeamCode are restored
- Pre-existing dirty files remain untouched

If the repository has staged changes:

- Staged changes are preserved during rollback
- TeamCode only restores files it modified
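The bookkeeping above amounts to a set difference: restore only the files TeamCode touched that were not already dirty at task start. The sketch below is a simplified illustration, not the actual implementation:

```go
package main

import "fmt"

// restoreTargets returns the files to restore on rollback: files
// TeamCode touched, minus any that were already dirty before the task
// started. Simplified sketch of the dirty-file bookkeeping.
func restoreTargets(touched, preexistingDirty []string) []string {
	dirty := make(map[string]bool, len(preexistingDirty))
	for _, f := range preexistingDirty {
		dirty[f] = true
	}
	var out []string
	for _, f := range touched {
		if !dirty[f] {
			out = append(out, f)
		}
	}
	return out
}

func main() {
	touched := []string{"api/server.go", "notes.md"}
	dirty := []string{"notes.md"} // the user's own uncommitted edit
	fmt.Println(restoreTargets(touched, dirty)) // only api/server.go is restored
}
```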
### Examples

```bash
# Check current session status
teamcode status

# Attempt rollback
teamcode rollback <task-id>
```

Rollback will fail if:

- You have untracked files that conflict with TeamCode-created files
- The repo HEAD has moved (new commits were added)
- Worktrees or submodules are involved (manual cleanup required)
## Troubleshooting

Use `teamcode doctor` to diagnose common environment and configuration issues.

### Quick Diagnostic

Run a health check on your environment:

```bash
teamcode doctor
```

Healthy output looks like this:

```text
teamcode doctor
===============
[PASS] Config Validity        teamcode.toml loaded and validated successfully
[PASS] Session State          .teamcode/session.json is valid
[PASS] Log Directory          .teamcode/logs/ exists and is writable
[PASS] Git Repository         repository is clean
[PASS] Redaction Compliance   no secrets detected in last 50 scanned lines

Overall: 5/5 checks passed
```

If any check fails, the doctor prints the failure and a suggested fix.

### JSON Diagnostics

For programmatic checks or CI/CD pipelines:

```bash
teamcode doctor --json
```

Example output:

```json
{
  "_schema_version": 1,
  "overall": "PASS",
  "passed": 5,
  "total": 5,
  "checks": [
    {
      "name": "Config Validity",
      "status": "PASS",
      "details": "teamcode.toml loaded and validated successfully",
      "severity": "info"
    }
  ]
}
```
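In a CI pipeline, that JSON can be checked with a few lines of Go. This is a sketch that parses only the top-level fields shown above; in practice you would feed it the output of `teamcode doctor --json`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Report mirrors the top-level fields of the doctor --json example above.
type Report struct {
	SchemaVersion int    `json:"_schema_version"`
	Overall       string `json:"overall"`
	Passed        int    `json:"passed"`
	Total         int    `json:"total"`
}

// healthy parses doctor output and reports whether every check passed.
func healthy(data []byte) (bool, error) {
	var r Report
	if err := json.Unmarshal(data, &r); err != nil {
		return false, err
	}
	return r.Overall == "PASS" && r.Passed == r.Total, nil
}

func main() {
	// Inline example; in CI this would come from `teamcode doctor --json`.
	example := []byte(`{"_schema_version":1,"overall":"PASS","passed":5,"total":5}`)
	ok, err := healthy(example)
	fmt.Println(ok, err) // true <nil>
}
```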
### Common Fixes

For detailed solutions to common problems, see docs/TROUBLESHOOTING.md.

#### Fixing a Config Error

If `teamcode doctor` reports a missing or invalid config:

```console
$ teamcode doctor
[FAIL] Config Validity  teamcode.toml not found
  → Run `teamcode init` to create teamcode.toml
$ teamcode init
$ teamcode doctor
```

#### Fixing a Dirty Repo Warning

If `teamcode doctor` warns about uncommitted changes:

```console
$ teamcode doctor
[FAIL] Git Repository  dirty working tree
  → Commit or stash changes, checkout a branch, or review worktree setup
$ git add -A && git commit -m "wip"
$ teamcode doctor
```
## v1.0.0 - What's New

### Stability Guarantees

- Compatibility freeze: all STABLE CLI flags, exit codes, JSON fields, and config options are frozen for the v1.x lifecycle.
- `_schema_version` present in every machine-readable JSON output.
- Migration tests cover v0.9 and v0.10 config/session upgrades.

### Release Integrity

- Dirty build detection: binaries built from dirty worktrees report `dirty: true` even when HEAD matches a tag exactly.
- `release-gate` target: `make release` and `make release-local` refuse dirty trees and missing exact tags.
- Checksum verification: all release artifacts include SHA-256 checksums; `install.sh` verifies on download.

### Runtime Hardening

- Deterministic shutdown: agent goroutines tracked with a `WaitGroup`; headless bus subscribers properly unsubscribed.
- No test flakes: runtime and bus tests pass repeatedly without cleanup races.
- Goroutine leak detection: `make check-full` includes per-package budget checks.

### Documentation

- COMPATIBILITY.md: stability levels (STABLE / EXPERIMENTAL / DEPRECATED), breaking change policy, and security boundaries.
- RELEASE_CHECKLIST.md: step-by-step RC-to-final process with validation matrix.
## v0.7.0 - What's New

### Headless Mode

- CLI-first workflow with `--yes` auto-approval
- JSON output for CI/CD integration
- Configurable timeouts and budgets
- Improved rollback commands

### Configuration Enhancements

- Shell policy allowlist
- Budget tracking with spend alerts
- Approval policies per task type
- Per-task runtime deadlines

### Hardening Improvements

- Approval system: concurrent approval requests with request-ID routing
- Git transactions: edge case handling for worktrees and submodules
- TUI accuracy: real git diff and test output display
- CLI/headless: improved status and rollback commands
- Session resilience: JSON backup and migration support
- Patch pipeline: batch application with rollback-on-error
- LLM reliability: budget tracking, rate limiting, timeouts
- Security: command allowlist and path traversal protection
### Configuration

```toml
[llm]
budget_usd = 10.00
max_concurrent_requests = 5
rate_limit_rps = 10

[approval]
timeout_seconds = 120

[tools.shell]
allowed_commands = ["go", "git", "npm", "pytest"]
```