phero module v0.0.1
Published: Mar 14, 2026 License: Apache-2.0
🐜 Phero

The chemical language of AI agents.

Phero is a modern Go framework for building multi-agent AI systems. Like ants in a colony, agents in Phero cooperate, communicate, and coordinate toward shared goals, each taking a specialized role within a clean, composable architecture.


Why Phero?

  • 🎯 Purpose-built for agents: not an LLM wrapper, but a framework for orchestrating cooperative agent systems
  • 🧩 Composable primitives: small, focused packages that solve specific problems
  • 🔧 Tool-first design: built-in support for function tools, skills, RAG, and MCP
  • 🎨 Developer-friendly: clean APIs, opt-in tracing, OpenAI-compatible LLM support
  • 🪶 Lightweight: no heavy dependencies; just Go and your choice of LLM provider

Features

Core Capabilities
  • 🤝 Agent orchestration: multi-agent workflows with role specialization and coordination
  • 🧩 LLM abstraction: work with OpenAI, Ollama, or any OpenAI-compatible endpoint
  • 🛠️ Function tools: expose Go functions as callable tools with automatic JSON Schema generation
  • 📚 RAG (retrieval-augmented generation): built-in vector storage and semantic search
  • 🧠 Skills system: define reusable agent capabilities in SKILL.md files
  • 🔌 MCP support: integrate Model Context Protocol servers as agent tools
  • 🧾 Memory management: conversational context storage for agents
  • ✂️ Text splitting: document chunking for RAG workflows
  • 🧬 Embeddings: semantic search capabilities via OpenAI embeddings
  • 🗄️ Vector stores: vector database integration

Requirements
  • Go 1.25.5 or later
  • An LLM provider (OpenAI, Ollama, or OpenAI-compatible endpoint)

Quick Start

Start with the Simple Agent example to learn the basics in ~100 lines of code.

Then explore the examples/ directory for more advanced patterns:

  • Multi-agent workflows
  • RAG chatbots
  • Skills integration
  • MCP server connections

Some examples require extra services (e.g. Qdrant for vector search).
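
The tool-execution chat loop that an agent framework runs can be sketched in a few dozen lines. The `LLM`, `Reply`, and `RunAgent` names below are assumptions for illustration, with a scripted stub standing in for a real model; Phero's actual agent package differs.

```go
package main

import "fmt"

// Reply models the two things a chat model can return: plain text,
// or a request to call a tool.
type Reply struct {
	Text     string
	ToolName string // non-empty when the model wants a tool
	ToolArg  string
}

// LLM is a minimal stand-in for a chat model interface.
type LLM interface {
	Chat(history []string) Reply
}

// scriptedLLM asks for a tool once, then answers using its result.
type scriptedLLM struct{ called bool }

func (s *scriptedLLM) Chat(history []string) Reply {
	if !s.called {
		s.called = true
		return Reply{ToolName: "greet", ToolArg: "world"}
	}
	return Reply{Text: "final: " + history[len(history)-1]}
}

// RunAgent drives the classic loop: call the model, execute any
// requested tool, feed the result back, repeat until plain text.
func RunAgent(llm LLM, tools map[string]func(string) string, prompt string) string {
	history := []string{prompt}
	for {
		r := llm.Chat(history)
		if r.ToolName == "" {
			return r.Text
		}
		out := tools[r.ToolName](r.ToolArg)
		history = append(history, out)
	}
}

func main() {
	tools := map[string]func(string) string{
		"greet": func(arg string) string { return "hello, " + arg },
	}
	fmt.Println(RunAgent(&scriptedLLM{}, tools, "say hi"))
	// prints "final: hello, world"
}
```
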

Architecture

Phero is organized into focused packages, each solving a specific problem:

🤖 Agent Layer
  • agent: core orchestration for LLM-based agents with tool execution and chat loops
  • memory: conversational context management for multi-turn interactions

💬 LLM Layer
  • llm: clean LLM interface with function tool support and JSON Schema utilities
  • llm/openai: OpenAI-compatible client (works with OpenAI, Ollama, and compatible endpoints)

🧠 Knowledge Layer
  • embedding: embedding interface for semantic operations
  • embedding/openai: OpenAI embeddings implementation
  • vectorstore: vector storage interface for similarity search
  • vectorstore/qdrant: Qdrant vector database integration
  • vectorstore/psql: PostgreSQL + pgvector integration
  • textsplitter: document chunking for RAG workflows
  • rag: complete RAG pipeline combining embeddings and vector stores

🔧 Tools & Integration
  • skill: parse SKILL.md files and expose them as agent capabilities
  • mcp: Model Context Protocol adapter for external tool integration
  • tool/file: file system operations
  • tool/go: safe Go command execution
  • tool/python: Python script execution
  • tool/human: human-in-the-loop input collection
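
The textsplitter package's synopsis describes size-bounded, optionally-overlapping chunking; the core idea can be sketched as follows. `SplitText` is a hypothetical helper written for this README, not the package's real API.

```go
package main

import "fmt"

// SplitText splits s into chunks of at most size runes, with each chunk
// overlapping the previous one by overlap runes. Overlap preserves
// context across chunk boundaries, which helps RAG retrieval.
func SplitText(s string, size, overlap int) []string {
	if size <= 0 || overlap >= size {
		return nil // invalid configuration
	}
	runes := []rune(s)
	var chunks []string
	step := size - overlap
	for start := 0; start < len(runes); start += step {
		end := start + size
		if end > len(runes) {
			end = len(runes)
		}
		chunks = append(chunks, string(runes[start:end]))
		if end == len(runes) {
			break
		}
	}
	return chunks
}

func main() {
	for _, c := range SplitText("abcdefghij", 4, 1) {
		fmt.Println(c)
	}
	// prints "abcd", "defg", "ghij"
}
```

Working on runes rather than bytes keeps multi-byte UTF-8 text from being split mid-character.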

Examples

Comprehensive examples are included in the examples/ directory:

  • Simple Agent: start here! A minimal example showing one agent with one custom tool, perfect for learning the basics
  • Conversational Agent: REPL-style chatbot with short-term conversational memory and a simple built-in tool
  • Long-Term Memory: REPL-style chatbot with semantic long-term memory (RAG) backed by Qdrant
  • Debate Committee: multi-agent architecture where committee members debate independently and a judge synthesizes the final decision
  • Multi-Agent Workflow: classic Plan → Execute → Analyze → Critique pattern with specialized agent roles
  • RAG Chatbot: terminal chatbot with semantic search over local documents using Qdrant
  • Skill: discover SKILL.md files and expose them as callable agent tools
  • MCP Integration: run an MCP server as a subprocess and expose its tools to agents
  • Supervisor Blackboard: supervisor-worker pattern with a shared blackboard for coordination

Design Philosophy

Phero embraces several core principles:

  1. Composability over monoliths: each package does one thing well
  2. Interfaces over implementations: swap LLMs, vector stores, or embeddings easily
  3. Explicit over implicit: no hidden magic; clear control flow
  4. Tools are first-class: function tools are the primary integration point
  5. Developer experience matters: clean APIs, helpful tracing, good error messages
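
"Interfaces over implementations" in practice: code that depends only on a small interface can swap providers freely. The `Embedder` interface and `fakeEmbedder` below are illustrative assumptions modeled on the embedding package's synopsis, not its real signatures.

```go
package main

import "fmt"

// Embedder mirrors the shape of a provider-agnostic embedding interface;
// the method name is an assumption, not Phero's actual embedding API.
type Embedder interface {
	Embed(text string) []float32
}

// fakeEmbedder is a deterministic stand-in useful for tests: it folds
// character codes into a fixed-size vector.
type fakeEmbedder struct{ dim int }

func (f fakeEmbedder) Embed(text string) []float32 {
	v := make([]float32, f.dim)
	for i, r := range text {
		v[i%f.dim] += float32(r)
	}
	return v
}

// Index depends only on the interface, so an OpenAI-backed embedder
// could be swapped in without changing this function.
func Index(e Embedder, docs []string) [][]float32 {
	out := make([][]float32, len(docs))
	for i, d := range docs {
		out[i] = e.Embed(d)
	}
	return out
}

func main() {
	vecs := Index(fakeEmbedder{dim: 4}, []string{"ant", "colony"})
	fmt.Println(len(vecs), len(vecs[0]))
	// prints "2 4"
}
```
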

Contributing

Contributions are welcome! Please feel free to submit issues, feature requests, or pull requests.

License

This project is licensed under the Apache License 2.0. See the LICENSE file for details.

Acknowledgments

Built with ❤️ by Simone Vellei.

Inspired by the collaborative intelligence of ant colonies, where independent agents work together toward shared goals, recognizing one another and coordinating through clear protocols.

The ant is not just a mascot. It is the philosophy. 🐜

Directories

Path: Synopsis
  • agent: Package agent provides a small orchestration layer around an llm.LLM.
  • embedding: Package embedding defines a small, provider-agnostic interface for generating vector embeddings from text.
  • examples: mcp, mcp/server, rag-chatbot, simple-agent, and skills (commands)
  • llm: Package llm provides small, composable building blocks for driving chat-based LLMs.
  • llm/openai: Package openai provides an llm.LLM implementation backed by the OpenAI Chat Completions API.
  • mcp: Package mcp provides adapters for working with the Model Context Protocol (MCP) in this codebase.
  • memory/rag: Package rag provides a memory.Memory implementation backed by the project's retrieval-augmented generation (RAG) store.
  • memory/simple: Package simple provides a small in-memory message store for agents.
  • rag: Package rag provides a small retrieval-augmented generation (RAG) helper.
  • skill: Package skill provides helpers to discover and parse skill definitions.
  • textsplitter: Package textsplitter provides utilities to split text into size-bounded, optionally-overlapping chunks.
  • tool/go
  • vectorstore/psql: Package psql implements the vectorstore.Store interface using PostgreSQL + pgvector.
  • vectorstore/qdrant: Package qdrant implements the vectorstore.Store interface using Qdrant.
