AgentPipe

AgentPipe is a powerful CLI and TUI application that orchestrates conversations between multiple AI agents. It allows different AI CLI tools (like Claude, Cursor, Gemini, Qwen, Ollama) to communicate with each other in a shared "room", creating dynamic multi-agent conversations with real-time metrics, cost tracking, and interactive user participation.
Screenshots
Enhanced TUI Interface
Enhanced TUI with multi-panel layout: agent list with status indicators, conversation view with metrics, statistics panel showing turns and total cost, configuration panel, and user input area
Console/CLI Interface
CLI output showing color-coded agent messages with agent type indicators (e.g., "Alice (qoder)"), HOST vs SYSTEM distinction, and inline metrics display
Supported AI Agents
All agents now use a standardized interaction pattern with structured three-part prompts, message filtering, and comprehensive logging for reliable multi-agent conversations.
- ✅ Amp (Sourcegraph) - Advanced coding agent with autonomous reasoning ⚡ Thread-optimized
- ✅ Claude (Anthropic) - Advanced reasoning and coding
- ✅ Codex (OpenAI) - Code generation specialist (non-interactive exec mode)
- ✅ Copilot (GitHub) - Terminal-based coding agent with multiple model support
- ✅ Cursor (Cursor AI) - IDE-integrated AI assistance
- ✅ Factory (Factory.ai) - Agent-native software development with Droid (non-interactive exec mode)
- ✅ Gemini (Google) - Multimodal understanding
- ✅ Qoder - Agentic coding platform with enhanced context engineering
- ✅ Qwen (Alibaba) - Multilingual capabilities
- 🚧 Ollama - Local LLM support (planned)
Features
Core Capabilities
- Multi-Agent Conversations: Connect multiple AI agents in a single conversation
- Multiple Conversation Modes:
  - round-robin: Agents take turns in a fixed order
  - reactive: Agents respond based on conversation dynamics
  - free-form: Agents participate freely as they see fit
- Flexible Configuration: Use command-line flags or YAML configuration files
Enhanced TUI Interface
- Multi-panel layout with dedicated sections for agents, chat, stats, and config
- Color-coded agent messages with unique colors per agent
- Agent type indicators showing agent type in parentheses (e.g., "Alice (qoder)")
- Branded sunset logo with gradient colors
- Real-time agent activity indicators (🟢 active/responding, ⚫ idle)
- Inline metrics display (response time in seconds, token count, cost)
- Conversation search (Ctrl+F) with n/N navigation through results
- Agent filtering via slash commands (/filter, /clear)
- Help modal (?) showing all keyboard shortcuts
- Topic panel showing initial conversation prompt
- Statistics panel with turn counters and total conversation cost
- Configuration panel displaying all active settings and config file path
- Interactive user input panel for joining conversations
- Smart message consolidation (headers only on speaker change)
- Proper multi-paragraph message formatting
Production Features
- Prometheus Metrics: Comprehensive observability with 10+ metrics types
- Request rates, durations, errors
- Token usage and cost tracking
- Active conversations, retry attempts, rate limit hits
- HTTP server with /metrics, /health, and web UI endpoints
- Ready for Grafana dashboards and alerting
- Conversation Management:
- Save/resume conversations from state files
- Export to JSON, Markdown, or HTML formats
- Automatic chat logging to ~/.agentpipe/chats/
- Reliability & Performance:
- Rate limiting per agent with token bucket algorithm
- Retry logic with exponential backoff (configurable)
- Structured error handling with error types
- Config hot-reload for development workflows
- Middleware Pipeline: Extensible message processing
- Built-in: logging, metrics, validation, sanitization, filtering
- Custom middleware support for transforms and filters
- Error recovery and panic handling
- Docker Support: Multi-stage builds, docker-compose, production-ready
- Health Checks: Automatic agent health verification before conversations
- Agent Detection: Built-in doctor command to check installed AI CLIs
- Customizable Agents: Configure prompts, models, and behaviors for each agent
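The per-agent rate limiting above is based on a token bucket: each agent gets a bucket that refills at a steady rate and allows short bursts up to its capacity. A minimal sketch of the idea (illustrative only; `bucket`, `newBucket`, and `allow` are hypothetical names, not AgentPipe's actual API):

```go
package main

import (
	"fmt"
	"time"
)

// bucket is a minimal token bucket: at most `capacity` tokens,
// refilled continuously at `refillRate` tokens per second.
type bucket struct {
	capacity   float64
	tokens     float64
	refillRate float64
	lastRefill time.Time
}

func newBucket(capacity, refillRate float64) *bucket {
	return &bucket{capacity: capacity, tokens: capacity, refillRate: refillRate, lastRefill: time.Now()}
}

// allow reports whether one request may proceed, consuming a token if so.
func (b *bucket) allow() bool {
	now := time.Now()
	b.tokens += now.Sub(b.lastRefill).Seconds() * b.refillRate
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	b.lastRefill = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	b := newBucket(2, 1) // burst of 2 requests, then 1 request/second sustained
	fmt.Println(b.allow(), b.allow(), b.allow()) // third call exhausts the burst
}
```

The bucket bounds the sustained request rate while still letting an agent answer a couple of back-to-back turns without waiting.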
What's New
See CHANGELOG.md for detailed version history and release notes.
Latest Release: v0.1.5 - Linting fixes for clean CI/CD builds.
What's New in v0.1.5:
🐛 Bug Fixes:
- Linting Errors: Fixed golangci-lint errors in doctor.go
- Fixed gofmt formatting (struct field alignment)
- Fixed prealloc warning (pre-allocated slices with known capacity)
- CI/CD pipeline now passes all quality checks
Previous Release - v0.1.4: JSON output for programmatic agent detection
- agentpipe doctor --json for structured, machine-readable output
- Perfect for dynamic UI generation (e.g., agentpipe-web)
Installation
Using Homebrew (macOS/Linux)
brew tap kevinelliott/tap
brew install agentpipe
Using the install script
curl -sSL https://raw.githubusercontent.com/kevinelliott/agentpipe/main/install.sh | bash
Using Go
go install github.com/kevinelliott/agentpipe@latest
Build from source
git clone https://github.com/kevinelliott/agentpipe.git
cd agentpipe
# Build only
go build -o agentpipe .
# Or build and install to /usr/local/bin (requires sudo on macOS/Linux)
sudo make install
# Or install to custom location (e.g., ~/.local/bin, no sudo needed)
make install PREFIX=$HOME/.local
Prerequisites
AgentPipe requires at least one AI CLI tool to be installed:
- Amp CLI - amp ⚡ Optimized
  - Install: See installation guide
  - Authenticate: Follow Amp documentation
  - Features: Autonomous coding, IDE integration, complex task execution
  - Thread Management: AgentPipe uses Amp's native threading to maintain server-side conversation state
  - Smart Filtering: Only sends new messages from other agents, reducing API costs by 50-90%
  - Structured Context: Initial prompts are delivered in a clear, three-part structure
- Claude CLI - claude
  - Install: See installation guide
  - Authenticate: Run claude and follow authentication prompts
  - Features: Advanced reasoning, coding, complex task execution
- GitHub Copilot CLI - copilot
  - Install: npm install -g @github/copilot
  - Authenticate: Launch copilot and use the /login command
  - Requires: Node.js v22+, npm v10+, and an active GitHub Copilot subscription
- Cursor CLI - cursor-agent
  - Install: curl https://cursor.com/install -fsS | bash
  - Authenticate: cursor-agent login
- Factory CLI - droid
  - Install: curl -fsSL https://app.factory.ai/cli | sh
  - Authenticate: Sign in via browser when prompted
  - Features: Agent-native development, Code Droid and Knowledge Droid, CI/CD integration
- Gemini CLI - gemini
- Qoder CLI - qodercli
  - Install: See installation guide
  - Authenticate: Run qodercli and use the /login command
  - Features: Enhanced context engineering, intelligent agents, MCP integration, built-in tools
- Qwen CLI - qwen
- Codex CLI - codex
  - Uses the codex exec subcommand for non-interactive mode
  - Automatically bypasses approval prompts for multi-agent conversations
  - ⚠️ For development/testing only - not recommended for production use
- Ollama - ollama (planned)
Check which agents are available on your system:
agentpipe doctor
Quick Start
Simple conversation with command-line flags
# Start a conversation between Claude and Gemini
agentpipe run -a claude:Alice -a gemini:Bob -p "Let's discuss AI ethics"
# Use TUI mode with metrics for a rich experience
agentpipe run -a claude:Poet -a gemini:Scientist --tui --metrics
# Configure conversation parameters
agentpipe run -a claude:Agent1 -a gemini:Agent2 \
--mode reactive \
--max-turns 10 \
--timeout 45 \
--prompt "What is consciousness?"
Using configuration files
# Run with a configuration file
agentpipe run -c examples/simple-conversation.yaml
# Run a debate between three agents
agentpipe run -c examples/debate.yaml --tui
# Brainstorming session with multiple agents
agentpipe run -c examples/brainstorm.yaml
Configuration
YAML Configuration Format
version: "1.0"
agents:
- id: agent-1
type: claude # Agent type (claude, gemini, qwen, etc.)
name: "Friendly Assistant"
prompt: "You are a helpful and friendly assistant."
announcement: "Hello everyone! I'm here to help!"
model: claude-3-sonnet # Optional: specific model
temperature: 0.7 # Optional: response randomness
max_tokens: 1000 # Optional: response length limit
- id: agent-2
type: gemini
name: "Technical Expert"
prompt: "You are a technical expert who loves explaining complex topics."
announcement: "Technical Expert has joined the chat!"
temperature: 0.5
orchestrator:
mode: round-robin # Conversation mode
max_turns: 10 # Maximum conversation turns
turn_timeout: 30s # Timeout per agent response
response_delay: 2s # Delay between responses
initial_prompt: "Let's start our discussion!"
logging:
enabled: true # Enable chat logging
chat_log_dir: ~/.agentpipe/chats # Custom log path (optional)
show_metrics: true # Display response metrics in TUI (time, tokens, cost)
log_format: text # Log format (text or json)
Conversation Modes
- round-robin: Agents speak in a fixed rotation
- reactive: Agents respond based on who spoke last
- free-form: Agents decide when to participate
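The round-robin rule above can be sketched in a few lines of Go (a hypothetical illustration of the scheduling rule, not AgentPipe's internals):

```go
package main

import "fmt"

// nextSpeaker returns the agent whose turn it is in round-robin mode:
// turn N goes to agents[N mod len(agents)], so the rotation is fixed.
func nextSpeaker(agents []string, turn int) string {
	return agents[turn%len(agents)]
}

func main() {
	agents := []string{"Alice", "Bob", "Carol"}
	for turn := 0; turn < 5; turn++ {
		fmt.Printf("turn %d: %s\n", turn, nextSpeaker(agents, turn))
	}
}
```

Reactive and free-form modes replace this fixed rotation with selection based on conversation state.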
Commands
agentpipe run
Start a conversation between agents.
Flags:
- -c, --config: Path to YAML configuration file
- -a, --agents: List of agents (format: type:name)
- -m, --mode: Conversation mode (default: round-robin)
- --max-turns: Maximum conversation turns (default: 10)
- --timeout: Response timeout in seconds (default: 30)
- --delay: Delay between responses in seconds (default: 1)
- -p, --prompt: Initial conversation prompt
- -t, --tui: Use enhanced TUI interface with panels and user input
- --log-dir: Custom path for chat logs (default: ~/.agentpipe/chats)
- --no-log: Disable chat logging
- --metrics: Display response metrics (duration, tokens, cost) in TUI
- --skip-health-check: Skip agent health checks (not recommended)
- --health-check-timeout: Health check timeout in seconds (default: 5)
- --save-state: Save conversation state to file on completion
- --state-file: Custom state file path (default: auto-generated)
- --watch-config: Watch config file for changes and reload (development mode)
agentpipe doctor
Comprehensive system health check to verify AgentPipe is properly configured and ready to use.
agentpipe doctor
# Output in JSON format for programmatic consumption
agentpipe doctor --json
The doctor command performs a complete diagnostic check of your system and provides detailed information about:
System Environment:
- Go runtime version and architecture
- PATH environment validation
- Home directory detection
- AgentPipe directories (~/.agentpipe/chats, ~/.agentpipe/states)
AI Agent CLIs:
- Detection of all 10 supported agent CLIs
- Installation paths
- Version information
- Upgrade instructions for keeping agents up-to-date
- Authentication status for agents that require it
- Installation commands for missing agents
- Documentation links
Configuration:
- Example configuration files detection
- User configuration file validation (~/.agentpipe/config.yaml)
- Helpful suggestions for setup
Output includes:
- Visual status indicators (✅ available, ❌ missing, ⚠️ warning, ℹ️ info)
- Organized sections for easy scanning
- Summary with total available agents
- Ready-to-use upgrade commands for npm-based CLIs
Example Output:
🔍 AgentPipe Doctor - System Health Check
============================================================
📋 SYSTEM ENVIRONMENT
------------------------------------------------------------
✅ Go Runtime: go1.25.3 (darwin/arm64)
✅ PATH: 40 directories in PATH
✅ Home Directory: /Users/username
✅ Chat Logs Directory: /Users/username/.agentpipe/chats
🤖 AI AGENT CLIS
------------------------------------------------------------
✅ Claude
   Command: claude
   Path: /usr/local/bin/claude
   Version: 2.0.19 (Claude Code)
   Upgrade: See https://docs.claude.com/en/docs/claude-code/installation
   Auth: ✅ Authenticated
   Docs: https://github.com/anthropics/claude-code
✅ Factory
   Command: droid
   Path: /usr/local/bin/droid
   Version: 1.3.220
   Upgrade: See https://docs.factory.ai/cli for upgrade instructions
   Auth: ✅ Authenticated
   Docs: https://docs.factory.ai/cli
✅ Gemini
   Command: gemini
   Path: /usr/local/bin/gemini
   Version: 0.9.0
   Upgrade: npm update -g @google/generative-ai-cli
   Auth: ✅ Authenticated
   Docs: https://github.com/google/generative-ai-cli
⚙️ CONFIGURATION
------------------------------------------------------------
✅ Example Configs: 2 example configurations found
ℹ️ User Config: No user config (use 'agentpipe init' to create one)
============================================================
📊 SUMMARY
Available Agents: 10/10
✨ AgentPipe is ready! You can use 10 agent(s).
Run 'agentpipe run --help' to start a conversation.
Screenshot:
Doctor command showing comprehensive system diagnostics, agent detection with versions and upgrade instructions, and authentication status
JSON Output Format:
The --json flag outputs structured data perfect for programmatic consumption (e.g., web interfaces, automation scripts):
{
"system_environment": [...], // System checks (Go runtime, PATH, directories)
"supported_agents": [...], // All agents AgentPipe supports
"available_agents": [...], // Only agents installed and working
"configuration": [...], // Config file status
"summary": {
"total_agents": 10, // Total supported agents
"available_count": 10, // Number of working agents
"missing_agents": [], // Names of missing agents
"ready": true // Whether AgentPipe is ready to run
}
}
Each agent entry includes:
- name, command, available, authenticated
- path, version (when available)
- install_cmd, upgrade_cmd, docs
- error (when not available)
This format enables web interfaces like agentpipe-web to dynamically detect and display available agents.
Use this command to:
- Verify your AgentPipe installation is complete
- Check which AI agents are available
- Find upgrade instructions for installed agents
- Troubleshoot missing dependencies
- Validate authentication status before starting conversations
agentpipe export
Export conversation from a state file to different formats.
# Export to JSON
agentpipe export state.json --format json --output conversation.json
# Export to Markdown
agentpipe export state.json --format markdown --output conversation.md
# Export to HTML (includes styling)
agentpipe export state.json --format html --output conversation.html
Flags:
- --format: Export format (json, markdown, html)
- --output: Output file path
agentpipe resume
Resume a saved conversation from a state file.
# List all saved conversations
agentpipe resume --list
# View a saved conversation
agentpipe resume ~/.agentpipe/states/conversation-20231215-143022.json
# Resume and continue (future feature)
agentpipe resume state.json --continue
Flags:
- --list: List all saved conversation states
- --continue: Continue the conversation (planned feature)
agentpipe init
Interactive wizard to create a new AgentPipe configuration file.
agentpipe init
Creates a YAML config file with guided prompts for:
- Conversation mode selection
- Agent configuration
- Orchestrator settings
- Logging preferences
Examples
Cursor and Claude Collaboration
# Save as cursor-claude-team.yaml
version: "1.0"
agents:
- id: cursor-dev
type: cursor
name: "Cursor Developer"
prompt: "You are a senior developer who writes clean, efficient code."
- id: claude-reviewer
type: claude
name: "Claude Reviewer"
prompt: "You are a code reviewer who ensures best practices and identifies potential issues."
orchestrator:
mode: round-robin
max_turns: 6
initial_prompt: "Let's design a simple REST API for a todo list application."
Poetry vs Science Debate
# Save as poetry-science.yaml
version: "1.0"
agents:
- id: poet
type: claude
name: "The Poet"
prompt: "You speak in beautiful metaphors and see the world through an artistic lens."
temperature: 0.9
- id: scientist
type: gemini
name: "The Scientist"
prompt: "You explain everything through logic, data, and scientific principles."
temperature: 0.3
orchestrator:
mode: round-robin
initial_prompt: "Is love just chemistry or something more?"
Run with: agentpipe run -c poetry-science.yaml --tui
Creative Brainstorming with Metrics
agentpipe run \
-a claude:IdeaGenerator \
-a gemini:CriticalThinker \
-a qwen:Implementer \
--mode free-form \
--max-turns 15 \
--metrics \
--tui \
-p "How can we make education more engaging?"
When metrics are enabled, you'll see:
- Response time for each agent (e.g., "2.3s")
- Token usage per response (e.g., "150 tokens")
- Cost estimate per response (e.g., "$0.0023")
- Total conversation cost in the Statistics panel
Session Summary: All conversations now display a summary when they end, whether by:
- Normal completion (max turns reached)
- User interruption (CTRL-C)
- Error condition
The summary includes:
- Total messages (agent + system)
- Total tokens used
- Total time spent (formatted as ms/s/m:s)
- Total estimated cost
TUI Interface
The enhanced TUI provides a rich, interactive experience for managing multi-agent conversations:
Layout
The TUI is divided into multiple panels:
- Agents Panel (Left): Shows all connected agents with real-time status indicators
- Chat Panel (Center): Displays the conversation with color-coded messages
- Topic Panel (Top Right): Shows the initial conversation prompt
- Statistics Panel (Right): Displays turn count, agent statistics, and total conversation cost
- Configuration Panel (Right): Shows active settings and config file path
- User Input Panel (Bottom): Allows you to participate in the conversation
Visual Features
- Agent Status Indicators: Green dot (🟢) for active/responding, grey dot (⚫) for idle
- Agent Type Badges: Message badges show agent type in parentheses (e.g., "Alice (qoder)") for easy identification
- Color-Coded Messages: Each agent gets a unique color for easy tracking with consistent badge colors
- HOST/SYSTEM Distinction: Clear visual separation between orchestrator prompts (HOST) and system notifications (SYSTEM)
- Consolidated Headers: Message headers only appear when the speaker changes
- Metrics Display: Response time (seconds), token count, and cost shown inline when enabled
- Multi-Paragraph Support: Properly formatted multi-line agent responses
Controls
General:
- Tab: Switch between panels (Agents, Chat, User Input)
- ↑/↓: Navigate in active panel
- PageUp/PageDown: Scroll conversation
- Ctrl+C or q: Quit
- ?: Show help modal with all keybindings
Conversation:
- Enter: Send message when in User Input panel
- i: Show agent info modal (when in Agents panel)
- Active agent indicators: 🟢 (responding) / ⚫ (idle)
Search:
- Ctrl+F: Open search mode
- Enter: Execute search
- n: Next search result
- N: Previous search result
- Esc: Exit search mode
Commands:
- /: Enter command mode
- /filter <agent>: Filter messages by agent name
- /clear: Clear active filter
- Esc: Exit command mode
Development
Building from Source
# Clone the repository
git clone https://github.com/kevinelliott/agentpipe.git
cd agentpipe
# Build the binary
go build -o agentpipe .
# Or build with version information
VERSION=v0.0.7 make build
# Run tests
go test ./...
Project Structure
agentpipe/
├── cmd/                    # CLI commands
│   ├── root.go             # Root command
│   ├── run.go              # Run conversation command
│   ├── doctor.go           # Doctor diagnostic command
│   ├── export.go           # Export conversations
│   ├── resume.go           # Resume conversations
│   └── init.go             # Interactive configuration wizard
├── pkg/
│   ├── agent/              # Agent interface and registry
│   ├── adapters/           # Agent implementations (9 adapters)
│   ├── config/             # Configuration handling
│   │   └── watcher.go      # Config hot-reload support
│   ├── conversation/       # Conversation state management
│   │   └── state.go        # Save/load conversation states
│   ├── errors/             # Structured error types
│   ├── export/             # Export to JSON/Markdown/HTML
│   ├── log/                # Structured logging (zerolog)
│   ├── logger/             # Chat logging and output
│   ├── metrics/            # Prometheus metrics
│   │   ├── metrics.go      # Metrics collection
│   │   └── server.go       # HTTP metrics server
│   ├── middleware/         # Message processing pipeline
│   │   ├── middleware.go   # Core middleware pattern
│   │   └── builtin.go      # Built-in middleware
│   ├── orchestrator/       # Conversation orchestration
│   ├── ratelimit/          # Token bucket rate limiting
│   ├── tui/                # Terminal UI
│   └── utils/              # Utilities (tokens, costs)
├── docs/                   # Documentation
│   ├── architecture.md
│   ├── contributing.md
│   ├── development.md
│   ├── troubleshooting.md
│   └── docker.md
├── examples/               # Example configurations
│   ├── simple-conversation.yaml
│   ├── brainstorm.yaml
│   ├── middleware.yaml
│   └── prometheus-metrics.yaml
├── test/
│   ├── integration/        # End-to-end tests
│   └── benchmark/          # Performance benchmarks
├── Dockerfile              # Multi-stage production build
├── docker-compose.yml      # Docker Compose configuration
└── main.go
Adding New Agent Types
When creating a new agent adapter, follow the standardized pattern for consistency:
- Create the adapter structure in pkg/adapters/:
package adapters
import (
"context"
"fmt"
"os/exec"
"strings"
"time"
"github.com/kevinelliott/agentpipe/pkg/agent"
"github.com/kevinelliott/agentpipe/pkg/log"
)
type MyAgent struct {
agent.BaseAgent
execPath string
}
func NewMyAgent() agent.Agent {
return &MyAgent{}
}
- Implement required methods with structured logging:
func (m *MyAgent) Initialize(config agent.AgentConfig) error {
if err := m.BaseAgent.Initialize(config); err != nil {
log.WithFields(map[string]interface{}{
"agent_id": config.ID,
"agent_name": config.Name,
}).WithError(err).Error("myagent base initialization failed")
return err
}
path, err := exec.LookPath("myagent")
if err != nil {
log.WithFields(map[string]interface{}{
"agent_id": m.ID,
"agent_name": m.Name,
}).WithError(err).Error("myagent CLI not found in PATH")
return fmt.Errorf("myagent CLI not found: %w", err)
}
m.execPath = path
log.WithFields(map[string]interface{}{
"agent_id": m.ID,
"agent_name": m.Name,
"exec_path": path,
"model": m.Config.Model,
}).Info("myagent initialized successfully")
return nil
}
func (m *MyAgent) IsAvailable() bool {
_, err := exec.LookPath("myagent")
return err == nil
}
func (m *MyAgent) HealthCheck(ctx context.Context) error {
	// Check that the CLI is responsive by asking for its version
	cmd := exec.CommandContext(ctx, m.execPath, "--version")
	if output, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("myagent health check failed: %w (output: %s)", err, strings.TrimSpace(string(output)))
	}
	return nil
}
- Implement message filtering:
func (m *MyAgent) filterRelevantMessages(messages []agent.Message) []agent.Message {
relevant := make([]agent.Message, 0, len(messages))
for _, msg := range messages {
// Exclude this agent's own messages
if msg.AgentName == m.Name || msg.AgentID == m.ID {
continue
}
relevant = append(relevant, msg)
}
return relevant
}
- Implement structured prompt building:
func (m *MyAgent) buildPrompt(messages []agent.Message, isInitialSession bool) string {
var prompt strings.Builder
// PART 1: IDENTITY AND ROLE
prompt.WriteString("AGENT SETUP:\n")
prompt.WriteString(strings.Repeat("=", 60))
prompt.WriteString("\n")
prompt.WriteString(fmt.Sprintf("You are '%s' participating in a multi-agent conversation.\n\n", m.Name))
if m.Config.Prompt != "" {
prompt.WriteString("YOUR ROLE AND INSTRUCTIONS:\n")
prompt.WriteString(m.Config.Prompt)
prompt.WriteString("\n\n")
}
// PART 2: CONVERSATION CONTEXT
if len(messages) > 0 {
var initialPrompt string
var otherMessages []agent.Message
// Find orchestrator's initial prompt (HOST) vs agent announcements (SYSTEM)
// HOST = orchestrator's initial task/prompt (AgentID="host", AgentName="HOST")
// SYSTEM = agent join announcements and other system messages
for _, msg := range messages {
if msg.Role == "system" && (msg.AgentID == "system" || msg.AgentID == "host" || msg.AgentName == "System" || msg.AgentName == "HOST") && initialPrompt == "" {
initialPrompt = msg.Content
} else {
otherMessages = append(otherMessages, msg)
}
}
// Show initial task prominently
if initialPrompt != "" {
prompt.WriteString("YOUR TASK - PLEASE RESPOND TO THIS:\n")
prompt.WriteString(strings.Repeat("=", 60))
prompt.WriteString("\n")
prompt.WriteString(initialPrompt)
prompt.WriteString("\n")
prompt.WriteString(strings.Repeat("=", 60))
prompt.WriteString("\n\n")
}
// Show conversation history
if len(otherMessages) > 0 {
prompt.WriteString("CONVERSATION SO FAR:\n")
prompt.WriteString(strings.Repeat("-", 60))
prompt.WriteString("\n")
for _, msg := range otherMessages {
timestamp := time.Unix(msg.Timestamp, 0).Format("15:04:05")
if msg.Role == "system" {
prompt.WriteString(fmt.Sprintf("[%s] SYSTEM: %s\n", timestamp, msg.Content))
} else {
prompt.WriteString(fmt.Sprintf("[%s] %s: %s\n", timestamp, msg.AgentName, msg.Content))
}
}
prompt.WriteString(strings.Repeat("-", 60))
prompt.WriteString("\n\n")
}
if initialPrompt != "" {
prompt.WriteString(fmt.Sprintf("Now respond to the task above as %s. Provide a direct, thoughtful answer.\n", m.Name))
}
}
return prompt.String()
}
- Implement SendMessage with timing and logging:
func (m *MyAgent) SendMessage(ctx context.Context, messages []agent.Message) (string, error) {
if len(messages) == 0 {
return "", nil
}
log.WithFields(map[string]interface{}{
"agent_name": m.Name,
"message_count": len(messages),
}).Debug("sending message to myagent CLI")
// Filter and build prompt
relevantMessages := m.filterRelevantMessages(messages)
prompt := m.buildPrompt(relevantMessages, true)
// Execute CLI command (use stdin when possible)
cmd := exec.CommandContext(ctx, m.execPath)
cmd.Stdin = strings.NewReader(prompt)
startTime := time.Now()
output, err := cmd.CombinedOutput()
duration := time.Since(startTime)
if err != nil {
log.WithFields(map[string]interface{}{
"agent_name": m.Name,
"duration": duration.String(),
}).WithError(err).Error("myagent execution failed")
return "", fmt.Errorf("myagent execution failed: %w", err)
}
log.WithFields(map[string]interface{}{
"agent_name": m.Name,
"duration": duration.String(),
"response_size": len(output),
}).Info("myagent message sent successfully")
return strings.TrimSpace(string(output)), nil
}
- Register the factory:
func init() {
agent.RegisterFactory("myagent", NewMyAgent)
}
See existing adapters in pkg/adapters/ for complete reference implementations:
- claude.go - Simple stdin-based pattern
- codex.go - Non-interactive exec mode with flags
- amp.go - Advanced thread management pattern
- cursor.go - JSON stream parsing pattern
- factory.go - Non-interactive exec mode with autonomy levels
- qoder.go - Non-interactive print mode with yolo flag
Advanced Features
Amp CLI Thread Management ⚡
AgentPipe includes optimized support for the Amp CLI using native thread management:
How it Works:
1. Thread Creation (amp thread new):
   - Creates an empty thread and returns a thread ID
   - AgentPipe immediately follows with amp thread continue to send the initial task
2. Initial Task Delivery (amp thread continue {thread_id}):
   - Sends a structured three-part prompt:
     - Part 1: Agent setup (role and instructions)
     - Part 2: Direct task instruction (orchestrator's initial prompt)
     - Part 3: Conversation history (messages from other agents)
   - Amp processes the task and returns its response
   - Thread state is maintained server-side for future interactions
3. Conversation Continuation (amp thread continue {thread_id}):
   - Only sends NEW messages from OTHER agents
   - Amp's own responses are filtered out (it already knows what it said)
   - Maintains conversation context without redundant data transfer
Benefits:
- ⚡ 50-90% reduction in data sent per turn
- 💰 Lower API costs - no redundant token usage
- 🚀 Faster responses - minimal data transfer
- 🎯 Direct engagement - agents receive clear, actionable instructions
Example:
agents:
- id: amp-architect
name: "Amp Architect"
type: amp
prompt: "You are an experienced software architect..."
model: claude-sonnet-4.5
See examples/amp-coding.yaml for a complete example.
Standardized Agent Interaction Pattern
All AgentPipe adapters now implement a consistent, reliable interaction pattern that ensures agents properly understand and respond to conversation context:
Three-Part Structured Prompts:
Every agent receives prompts in the same clear, structured format:
PART 1: AGENT SETUP
============================================================
You are 'AgentName' participating in a multi-agent conversation.
YOUR ROLE AND INSTRUCTIONS:
<your custom prompt from config>
============================================================
PART 2: YOUR TASK - PLEASE RESPOND TO THIS
============================================================
<orchestrator's initial prompt - the conversation topic>
============================================================
PART 3: CONVERSATION SO FAR
------------------------------------------------------------
[timestamp] AgentName: message content
[timestamp] SYSTEM: system announcement
...
------------------------------------------------------------
Now respond to the task above as AgentName. Provide a direct, thoughtful answer.
Key Features:
- Message Filtering: Each agent automatically filters out its own previous messages to avoid redundancy
- Directive Instructions: Clear "YOUR TASK - PLEASE RESPOND TO THIS" header ensures agents understand what to do
- Context Separation: System messages are clearly labeled and separated from agent messages
- Consistent Structure: All 9 adapters (Amp, Claude, Codex, Copilot, Cursor, Factory, Gemini, Qoder, Qwen) use identical patterns
- Structured Logging: Comprehensive debug logging with timing, message counts, and prompt previews
- HOST vs SYSTEM Distinction: Clear separation between orchestrator messages (HOST) and system notifications (SYSTEM)
- HOST: The orchestrator presenting the initial conversation task/prompt
- SYSTEM: Agent join announcements and other system notifications
Benefits:
- ✅ Immediate Engagement: Agents respond directly to prompts instead of asking "what would you like help with?"
- ✅ Reduced Confusion: Clear separation between setup, task, and conversation history
- ✅ Better Debugging: Detailed logs show exactly what each agent receives
- ✅ Reliable Responses: Standardized approach works consistently across all agent types
- ✅ Cost Efficiency: Message filtering eliminates redundant data transfer
Implementation Details:
Each adapter implements:
- filterRelevantMessages() - Excludes the agent's own messages
- buildPrompt() - Creates structured three-part prompts
- Comprehensive error handling with specific error detection
- Timing and metrics for all operations
This pattern evolved from extensive testing with multi-agent conversations and addresses common issues like:
- Agents not receiving the initial conversation topic
- Agents treating prompts as passive context rather than direct instructions
- Redundant message delivery increasing API costs
- Inconsistent behavior across different agent types
Prometheus Metrics & Monitoring
AgentPipe includes comprehensive Prometheus metrics for production monitoring:
```go
// Enable metrics in your code
import "github.com/kevinelliott/agentpipe/pkg/metrics"

// Start metrics server
server := metrics.NewServer(metrics.ServerConfig{Addr: ":9090"})
go server.Start()

// Set metrics on orchestrator
orch.SetMetrics(metrics.DefaultMetrics)
```
Available Metrics:
- `agentpipe_agent_requests_total` - Request counter by agent and status
- `agentpipe_agent_request_duration_seconds` - Request duration histogram
- `agentpipe_agent_tokens_total` - Token usage by type (input/output)
- `agentpipe_agent_cost_usd_total` - Estimated costs in USD
- `agentpipe_agent_errors_total` - Error counter by type
- `agentpipe_active_conversations` - Current active conversations
- `agentpipe_conversation_turns_total` - Total turns by mode
- `agentpipe_message_size_bytes` - Message size distribution
- `agentpipe_retry_attempts_total` - Retry counter
- `agentpipe_rate_limit_hits_total` - Rate limit hits
Endpoints:
- `http://localhost:9090/metrics` - Prometheus metrics (OpenMetrics format)
- `http://localhost:9090/health` - Health check
- `http://localhost:9090/` - Web UI with documentation
See examples/prometheus-metrics.yaml for complete configuration, Prometheus queries, Grafana dashboard setup, and alerting rules.
Docker Support
Run AgentPipe in Docker for production deployments:
```bash
# Build image
docker build -t agentpipe:latest .

# Run with docker-compose (includes metrics server)
docker-compose up

# Run standalone
docker run -v ~/.agentpipe:/root/.agentpipe agentpipe:latest run -c /config/config.yaml
```
Features:
- Multi-stage build (~50MB final image)
- Health checks included
- Volume mounts for configs and logs
- Prometheus metrics exposed on port 9090
- Production-ready with non-root user
See docs/docker.md for complete Docker documentation.
Middleware Pipeline
Extend AgentPipe with custom message processing:
```go
// Add built-in middleware
orch.AddMiddleware(middleware.LoggingMiddleware())
orch.AddMiddleware(middleware.MetricsMiddleware())
orch.AddMiddleware(middleware.ContentFilterMiddleware(config))

// Or use defaults
orch.SetupDefaultMiddleware()

// Create custom middleware
custom := middleware.NewTransformMiddleware("uppercase",
	func(ctx *MessageContext, msg *Message) (*Message, error) {
		msg.Content = strings.ToUpper(msg.Content)
		return msg, nil
	})
orch.AddMiddleware(custom)
```
Built-in Middleware:
- `LoggingMiddleware` - Structured logging
- `MetricsMiddleware` - Performance tracking
- `ContentFilterMiddleware` - Content validation and filtering
- `SanitizationMiddleware` - Message sanitization
- `EmptyContentValidationMiddleware` - Empty message rejection
- `RoleValidationMiddleware` - Role validation
- `ErrorRecoveryMiddleware` - Panic recovery
See examples/middleware.yaml for complete examples.
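The core idea of the pipeline is a chain of functions that each transform a message or reject it with an error. The self-contained sketch below mirrors that shape with simplified stand-in types (`Message`, `Middleware`, `chain`, `rejectEmpty`, `trim` are illustrative, not the real `pkg/middleware` API):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// Message is a simplified stand-in for AgentPipe's message type.
type Message struct {
	Role    string
	Content string
}

// Middleware transforms a message or rejects it by returning an error.
type Middleware func(*Message) (*Message, error)

// chain applies middleware in order, short-circuiting on the first error --
// the behavior a validating/rejecting middleware relies on.
func chain(msg *Message, mws ...Middleware) (*Message, error) {
	var err error
	for _, mw := range mws {
		if msg, err = mw(msg); err != nil {
			return nil, err
		}
	}
	return msg, nil
}

// rejectEmpty plays the role of an empty-content validation middleware.
func rejectEmpty(m *Message) (*Message, error) {
	if strings.TrimSpace(m.Content) == "" {
		return nil, errors.New("empty content")
	}
	return m, nil
}

// trim plays the role of a simple sanitization middleware.
func trim(m *Message) (*Message, error) {
	m.Content = strings.TrimSpace(m.Content)
	return m, nil
}

func main() {
	out, err := chain(&Message{Role: "agent", Content: "  hello  "}, rejectEmpty, trim)
	fmt.Printf("%q %v\n", out.Content, err)
}
```

Ordering matters: validation runs before sanitization here, so a whitespace-only message is rejected before any transformation is attempted.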
Rate Limiting
Configure rate limits per agent:
```yaml
agents:
  - id: claude
    type: claude
    rate_limit: 10        # 10 requests per second
    rate_limit_burst: 5   # Burst capacity of 5
```
Uses token bucket algorithm with:
- Configurable rate and burst capacity
- Thread-safe implementation
- Automatic rate limit hit tracking in metrics
Conversation State Management
Save and resume conversations:
```bash
# Save conversation state on completion
agentpipe run -c config.yaml --save-state

# List saved conversations
agentpipe resume --list

# View saved conversation
agentpipe resume ~/.agentpipe/states/conversation-20231215-143022.json

# Export to different formats
agentpipe export state.json --format html --output report.html
```
State files include:
- Full conversation history
- Configuration used
- Metadata (turns, duration, timestamps)
- Agent information
Config Hot-Reload (Development Mode)
Enable config file watching for rapid development:
```bash
agentpipe run -c config.yaml --watch-config
```
Changes to the config file are automatically detected and reloaded without restarting the conversation.
Troubleshooting
Agent Health Check Failed
If you encounter health check failures:
- Verify the CLI is properly installed: `which <agent-name>`
- Check if the CLI requires authentication or API keys
- Try running the CLI manually to ensure it works
- Use the `--skip-health-check` flag as a last resort (not recommended)
GitHub Copilot CLI Issues
The GitHub Copilot CLI has specific requirements:
- Authentication: Run `copilot` in interactive mode and use the `/login` command
- Subscription Required: Requires an active GitHub Copilot subscription
- Model Selection: Default is Claude Sonnet 4.5; use the `model` config option to specify others
- Node.js Requirements: Requires Node.js v22+ and npm v10+
- Check Status: Run `copilot --help` to verify installation
Cursor CLI Specific Issues
The Cursor CLI (cursor-agent) has some unique characteristics:
- Authentication Required: Run `cursor-agent login` before first use
- Longer Response Times: Cursor typically takes 10-20 seconds to respond (AgentPipe handles this automatically)
- Process Management: cursor-agent doesn't exit naturally; AgentPipe manages process termination
- Check Status: Run `cursor-agent status` to verify authentication
- Timeout Errors: If you see timeout errors, ensure you're authenticated and have a stable internet connection
Factory CLI Specific Issues
The Factory CLI (droid) requires authentication and uses non-interactive exec mode:
- Authentication Required: Run `droid` and sign in via browser when prompted
- Non-Interactive Mode: AgentPipe uses `droid exec` automatically for non-interactive execution
- Autonomy Levels: Uses `--auto high` to enable edits and commands without permission prompts
- Model Selection: Optional model specification via config (e.g., `model: claude-sonnet-4.5`)
- Check Status: Run `droid --help` to verify installation and available commands
- Full documentation: https://docs.factory.ai/cli/getting-started/quickstart
Codex CLI Specific Issues
The Codex CLI requires non-interactive exec mode for multi-agent conversations:
- Non-Interactive Mode: AgentPipe uses the `codex exec` subcommand automatically
- JSON Output: Responses are parsed from JSON format to extract agent messages
- Approval Bypass: Uses the `--dangerously-bypass-approvals-and-sandbox` flag for automated execution
- Important: This is designed for development/testing environments only
- Security Note: Never use with untrusted prompts or in production without proper sandboxing
- Check Status: Run `codex --help` to verify installation and available commands
Qoder CLI Specific Issues
The Qoder CLI requires authentication and uses non-interactive mode:
- Authentication Required: Run `qodercli` in interactive mode and use the `/login` command
- Non-Interactive Mode: AgentPipe uses `qodercli --print` automatically for non-interactive execution
- Permission Handling: Uses the `--yolo` flag to skip permission prompts for automated execution
- Output Formats: Supports text, json, and stream-json formats
- Check Status: Run `qodercli --help` to verify installation and available commands
- Full documentation: https://docs.qoder.com/cli/using-cli
Qwen Code CLI Issues
The Qwen Code CLI uses a different interface than other agents:
- Use `qwen --prompt "your prompt"` for non-interactive mode
- The CLI may open an interactive session if not properly configured
- Full documentation: https://github.com/QwenLM/qwen-code
Gemini Model Not Found
If you get a 404 error with Gemini:
- Check your model name in the configuration
- Ensure you have access to the specified model
- Try without specifying a model to use the default
Chat Logs Location
Chat logs are saved by default to:
- macOS/Linux: `~/.agentpipe/chats/`
- Windows: `%USERPROFILE%\.agentpipe\chats\`
You can override this with `--log-path` or disable logging with `--no-log`.
License
MIT License
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
Directories

| Path | Synopsis |
|---|---|
|  | Package main demonstrates the export functionality |
| internal |  |
| pkg |  |
| pkg/agent | Package agent provides the core interfaces and types for AI agent communication. |
| pkg/config | Package config provides configuration management for AgentPipe. |
| pkg/conversation | Package conversation provides conversation state management for AgentPipe. |
| pkg/export | Package export provides functionality to export conversations to different formats. |
| pkg/log | Package log provides structured logging using zerolog. |
| pkg/metrics | Package metrics provides Prometheus metrics for AgentPipe. |
| pkg/middleware | Package middleware provides a flexible middleware pattern for message processing. |
| pkg/orchestrator | Package orchestrator manages multi-agent conversations with different orchestration modes. |
| pkg/ratelimit | Package ratelimit provides token bucket rate limiting for agent requests. |