aigent

module
v0.0.2
Published: May 19, 2025 License: AGPL-3.0

Aigent - AI-powered Code Analysis Tool

Aigent is an AI-powered code analysis tool that uses large language models to assist with software development tasks. This repository contains the agent server component that handles LLM interactions and tool executions. The UI clients are separate codebases that communicate with this server via JSONRPC over Unix sockets.

Architecture Overview

The architecture consists of two main components:

  1. Agent Server (this repository): Handles LLM interactions and tool executions
  2. UI Clients (separate repositories): Provide user interfaces that communicate with the agent server

These components communicate via JSONRPC over a Unix socket, allowing for:

  • Normal LLM conversations with tool calls
  • Control messages for reconfiguring the agent (e.g., changing logging level, switching personas)
  • Multiple instances of agents and UIs running on a single machine

Key Features

  • Persona System: Define different roles for the LLM with specific tools and system prompts
  • Tool Calling: Execute tools on the user's machine with configurable approval policies
  • JSONRPC Communication: Structured communication between UI and agent components
  • File-based Logging: Unified logging system that writes to configurable log files
  • MCP Tool Support: Integration with Model Context Protocol (MCP) servers

Usage

Starting the Agent Server
aigent agent [options]

Options:

  • --socket-path: Path to the Unix socket (defaults to ~/.aigent/sockets/agent.sock)
  • --api-key: API key for the provider (or set OPENAI_API_KEY environment variable)
  • --provider: LLM provider to use (default: openai)
  • --model: Model ID to use (default: gpt-4-turbo)
  • --endpoint: API endpoint for the provider (default: https://api.openai.com/v1)
  • --temperature: Temperature for the LLM (default: 0.7)
  • --max-tokens: Maximum number of tokens to generate (default: 1024)
  • --persona: Persona to use (default: software_engineer)
  • --enable-tools: Enable tool calling for LLM (default: true)
  • --tool-approval-policy: Policy for tool call approval (default: smart)
  • --config: Path to configuration file (default: aigent.yaml)
  • --log-file: Path to log file (default: ~/.aigent/logs/aigent.log)
  • --log-level: Log level (default: info)

Using the Default Command

Running aigent without a subcommand defaults to the agent command:

aigent [options]

Multiple Instances

To run multiple instances of the agent, specify different socket paths:

# Start agent 1
aigent agent --socket-path ~/.aigent/sockets/agent1.sock

# Start agent 2
aigent agent --socket-path ~/.aigent/sockets/agent2.sock

Persona System

The persona system provides a way to define different roles or personalities for the LLM agent, each with its own set of tools and system prompts. A persona defines:

  1. A set of tools available to the LLM
  2. A system prompt that guides the LLM's behavior
  3. A unique identifier and descriptive name

Currently available personas:

  • Software Engineer (software_engineer): A persona focused on software engineering tasks, providing tools for code analysis, development, and debugging.

You can select a persona using the --persona flag:

aigent agent --persona software_engineer

JSONRPC Protocol

The agent server exposes the following JSONRPC methods:

Chat Methods
  • chat.sendMessage: Sends a message to the agent
    • Parameters: { "content": "message text" }
    • Result: { "messageId": "msg_id" }

Agent Control Methods
  • agent.configure: Configures the agent

    • Parameters: { "logLevel": "level", "persona": "persona_id", "temperature": 0.7, "maxTokens": 1024 }
    • Result: { "success": true, "message": "Configuration updated successfully" }
  • agent.getStatus: Gets the status of the agent

    • Parameters: {}
    • Result: { "currentPersona": "persona_id", "logLevel": "level", "temperature": 0.7, "maxTokens": 1024, "isStreaming": true }

Tool Approval Methods
  • tool.approvalRequest: Request approval for tool execution
    • Parameters: { "toolCall": { ... } }
    • Result: { "approved": true, "message": "Tool call approved" }

Notifications

The agent server sends the following notifications:

  • chat.streamResponse: Streaming response from the agent
    • Parameters:
      • Chunk: { "messageId": "msg_id", "chunk": { ... } }
      • Tool Call: { "messageId": "msg_id", "toolCall": { "toolCall": { ... }, "result": "result", "error": "error" } }
      • Complete: { "messageId": "msg_id", "complete": { "fullContent": "content", "toolCalls": [ ... ], "error": "error" } }

Logging System

The project uses a unified file-based logging system that writes all logs to configurable files:

  • Default log file: ~/.aigent/logs/aigent.log
  • Log level can be configured via the --log-level flag
  • Logs include timestamps, levels, and attributes for detailed debugging

Development

For development, this project is part of a larger ecosystem:

  • codeberg.org/MadsRC/aigent (this repository): The agent server module
  • git.sr.ht/~madsrc/sui: A separate UI client module that depends on this module's jsonrpc package

A Go workspace (go.work) is used to manage these local modules without requiring replace directives in the go.mod files. Note that the UI client is not part of this codebase and is developed separately.
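Such a workspace file could look roughly like this; the local directory names are assumptions, and the Go version is illustrative:

```
go 1.22

use (
	./aigent
	./sui
)
```

With both modules listed in go.work, the sui module resolves this module's jsonrpc package from the local checkout, so no replace directives are needed in either go.mod.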

Configuration

Configuration can be provided through:

  1. Command-line flags
  2. Environment variables (e.g., OPENAI_API_KEY, AIGENT_MODEL)
  3. Configuration file (aigent.yaml)

Example configuration file:

aigent:
  provider: openai
  model: gpt-4o
  endpoint: https://api.openai.com
  color: true

License

AGPL-3.0-only

Directories

Path Synopsis
cmd
internal
chat
Package chat provides functionality for interacting with Large Language Models (LLMs) through both direct prompts and streaming interfaces.
mcp
Package mcp provides shared types and functionality for MCP (Model Context Protocol) integration.
persona
Package persona provides functionality for defining and managing different LLM personas.
repomap
Package repomap provides functionality for creating a structured map of a code repository.
toolapproval
Package toolapproval provides a system for approving or denying tool calls before they are executed.
version
Package version provides version information for the aigent application.
pkg
api
Package api defines the public API contract for the aigent system.
jsonrpc
Package jsonrpc implements a JSON-RPC 2.0 client and server over Unix sockets.
llm
Package llm provides a unified interface for interacting with various Large Language Model providers.
llm/providers/openai
Package openai provides an OpenAI provider implementation for the LLM client interface.
