agentkit

module
v0.3.0
Published: Jan 5, 2026 License: MIT

README

AgentKit

A Go library for building AI agent applications. Provides server factories, LLM abstractions, workflow orchestration, and multi-runtime deployment support.

Features

  • 🏭 Server Factories - A2A and HTTP servers in 5 lines (saves ~475 lines per project)
  • 🧠 Multi-Provider LLM - Gemini, Claude, OpenAI, xAI, Ollama via OmniLLM
  • 🔀 Workflow Orchestration - Type-safe graph-based execution with Eino
  • ☁️ Multi-Runtime Deployment - Kubernetes (Helm) or AWS AgentCore
  • 🔒 VaultGuard Integration - Security-gated credential access

Architecture

agentkit/
├── # Core (platform-agnostic)
├── a2a/             # A2A protocol server factory
├── agent/           # Base agent framework
├── config/          # Configuration management
├── http/            # HTTP client utilities
├── httpserver/      # HTTP server factory
├── llm/             # Multi-provider LLM abstraction
├── orchestration/   # Eino workflow orchestration
│
├── # Platform-specific
└── platforms/
    ├── agentcore/   # AWS Bedrock AgentCore runtime
    └── kubernetes/  # Kubernetes + Helm deployment

Installation

go get github.com/agentplexus/agentkit

Quick Start

Complete Agent with HTTP + A2A Servers
package main

import (
    "context"

    "github.com/agentplexus/agentkit/a2a"
    "github.com/agentplexus/agentkit/agent"
    "github.com/agentplexus/agentkit/config"
    "github.com/agentplexus/agentkit/httpserver"
)

func main() {
    ctx := context.Background()
    cfg := config.LoadConfig()

    // Create agent (NewResearchAgent is your own agent type built on the BaseAgent)
    ba, _ := agent.NewBaseAgent(cfg, "research-agent", 30)
    researchAgent := NewResearchAgent(ba, cfg)

    // HTTP server - 5 lines
    httpServer, _ := httpserver.NewBuilder("research-agent", 8001).
        WithHandlerFunc("/research", researchAgent.HandleResearch).
        WithDualModeLog().
        Build()

    // A2A server - 5 lines
    a2aServer, _ := a2a.NewServer(a2a.Config{
        Agent:       researchAgent.ADKAgent(),
        Port:        "9001",
        Description: "Research agent for web search",
    })

    // Start servers
    a2aServer.StartAsync(ctx)
    httpServer.Start()
}

This replaces ~100 lines of boilerplate with ~15 lines.

A2A Server Factory
import "github.com/agentplexus/agentkit/a2a"

// Create and start A2A server
server, _ := a2a.NewServer(a2a.Config{
    Agent:       myAgent,           // Google ADK agent
    Port:        "9001",            // Empty = random port
    Description: "My agent",
})

server.Start(ctx)  // Blocking
// or
server.StartAsync(ctx)  // Non-blocking

// Useful methods
server.URL()          // "http://localhost:9001"
server.AgentCardURL() // "http://localhost:9001/.well-known/agent.json"
server.InvokeURL()    // "http://localhost:9001/invoke"
server.Stop(ctx)      // Graceful shutdown
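
The agent card is plain JSON served at the AgentCardURL() shown above, so any HTTP client can discover the agent. A minimal sketch using only the standard library (the card field names follow the A2A agent card schema, not this package):

// Fetch and decode the agent card published by the A2A server above.
package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

func main() {
    resp, err := http.Get("http://localhost:9001/.well-known/agent.json")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    var card map[string]any
    if err := json.NewDecoder(resp.Body).Decode(&card); err != nil {
        log.Fatal(err)
    }
    // Field names per the A2A agent card schema (e.g. name, description).
    fmt.Println(card["name"], card["description"])
}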

HTTP Server Factory
import "github.com/agentplexus/agentkit/httpserver"

// Config-based
server, _ := httpserver.New(httpserver.Config{
    Name: "my-agent",
    Port: 8001,
    HandlerFuncs: map[string]http.HandlerFunc{
        "/process": agent.HandleProcess,
    },
})

// Builder pattern (fluent API)
server, _ := httpserver.NewBuilder("my-agent", 8001).
    WithHandlerFunc("/research", agent.HandleResearch).
    WithHandlerFunc("/synthesize", agent.HandleSynthesize).
    WithHandler("/orchestrate", orchestration.NewHTTPHandler(exec)).
    WithTimeouts(30*time.Second, 120*time.Second, 60*time.Second).
    WithDualModeLog().
    Build()

server.Start()

Basic Agent
import (
    "log"

    "github.com/agentplexus/agentkit/agent"
    "github.com/agentplexus/agentkit/config"
)

cfg := config.LoadConfig()

ba, err := agent.NewBaseAgent(cfg, "my-agent", 30)
if err != nil {
    log.Fatal(err)
}
defer ba.Close()

// Utility methods
content, err := ba.FetchURL(ctx, url, maxSizeMB)
ba.LogInfo("message %s", arg)
ba.LogError("error %s", arg)

Secure Agent with VaultGuard
ba, secCfg, err := agent.NewBaseAgentSecure(ctx, "secure-agent", 30,
    config.WithPolicy(nil), // Use default policy
)
if err != nil {
    log.Fatalf("Security check failed: %v", err)
}
defer ba.Close()
defer secCfg.Close()

log.Printf("Security score: %d", secCfg.SecurityResult().Score)

Workflow Orchestration with Eino
import (
    "github.com/cloudwego/eino/compose"
    "github.com/agentplexus/agentkit/orchestration"
)

// Build workflow graph
builder := orchestration.NewGraphBuilder[*Input, *Output]("my-workflow")
graph := builder.Graph()

// Add nodes using Eino's InvokableLambda
processLambda := compose.InvokableLambda(processFunc)
graph.AddLambdaNode("process", processLambda)

formatLambda := compose.InvokableLambda(formatFunc)
graph.AddLambdaNode("format", formatLambda)

// Connect nodes
builder.AddStartEdge("process")
builder.AddEdge("process", "format")
builder.AddEndEdge("format")

// Execute
finalGraph := builder.Build()
executor := orchestration.NewExecutor(finalGraph, "my-workflow")
result, err := executor.Execute(ctx, input)

// Expose as HTTP handler
handler := orchestration.NewHTTPHandler(executor)
http.Handle("/execute", handler)
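
The processFunc and formatFunc wrapped by compose.InvokableLambda above are ordinary functions of the form func(ctx, in) (out, error). A sketch with hypothetical types (Input, Output, and the node bodies are illustrative placeholders, not part of AgentKit):

import "context"

// Hypothetical workflow types and node functions for the graph above.
type Input struct {
    Topic string
}

type Output struct {
    Report string
}

// First node: turn the input into intermediate notes.
func processFunc(ctx context.Context, in *Input) (string, error) {
    return "raw notes on " + in.Topic, nil
}

// Second node: format the notes into the final output.
func formatFunc(ctx context.Context, notes string) (*Output, error) {
    return &Output{Report: "# Report\n\n" + notes}, nil
}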

Multi-Runtime Deployment

AgentKit supports two deployment runtimes:

Aspect          Kubernetes                      AWS AgentCore
Distributions   EKS, GKE, AKS, Minikube, kind   AWS only
Config tool     Helm                            CDK / Terraform
Scaling         HPA                             Automatic
Isolation       Containers                      Firecracker microVMs
Pricing         Always-on                       Pay-per-use

Kubernetes Deployment
import "github.com/agentplexus/agentkit/platforms/kubernetes"

// Load and validate Helm values
values, errs := kubernetes.LoadAndValidate("values.yaml")

// Merge base and overlay values
values, err := kubernetes.LoadAndMerge("values.yaml", "values-prod.yaml")

Example values.yaml:

global:
  image:
    registry: ghcr.io/myorg
    pullPolicy: IfNotPresent
    tag: "latest"

namespace:
  create: true
  name: my-agents

llm:
  provider: gemini
  geminiModel: "gemini-2.0-flash-exp"

agents:
  research:
    enabled: true
    replicaCount: 1
    image:
      repository: my-research-agent
    service:
      type: ClusterIP
      port: 8001
      a2aPort: 9001
    resources:
      requests:
        cpu: 100m
        memory: 128Mi

vaultguard:
  enabled: true
  minSecurityScore: 50
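
LoadAndValidate returns the parsed values together with a slice of validation errors rather than a single error; a minimal sketch of gating a deployment on that result (assuming errs is a plain []error):

package main

import (
    "log"

    "github.com/agentplexus/agentkit/platforms/kubernetes"
)

func main() {
    values, errs := kubernetes.LoadAndValidate("values.yaml")
    if len(errs) > 0 {
        for _, err := range errs {
            log.Printf("values.yaml: %v", err)
        }
        log.Fatal("invalid Helm values, aborting deployment")
    }

    _ = values // hand the typed values to your templating or deployment step
}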

AWS AgentCore Deployment
import "github.com/agentplexus/agentkit/platforms/agentcore"

// Simple setup
server := agentcore.NewBuilder().
    WithPort(8080).
    WithAgent(researchAgent).
    WithAgent(synthesisAgent).
    WithDefaultAgent("research").
    MustBuild(ctx)

server.Start()

Wrap Eino executors for AgentCore:

// Build Eino workflow
graph := buildOrchestrationGraph()
executor := orchestration.NewExecutor(graph, "stats-workflow")

// Wrap for AgentCore
agent := agentcore.WrapExecutor("stats", executor)

// Or with custom I/O transformation
agent := agentcore.WrapExecutorWithPrompt("stats", executor,
    func(prompt string) StatsReq { return StatsReq{Topic: prompt} },
    func(out StatsResp) string { return out.Summary },
)

Same Code, Different Runtimes
// Agent implementation - runtime agnostic
executor := orchestration.NewExecutor(graph, "stats")

// Runtime 1: Kubernetes
httpServer, _ := httpserver.NewBuilder("stats", 8001).
    WithHandler("/stats", orchestration.NewHTTPHandler(executor)).
    Build()

// Runtime 2: AWS AgentCore
acServer := agentcore.NewBuilder().
    WithAgent(agentcore.WrapExecutor("stats", executor)).
    MustBuild(ctx)

Local Development

AgentCore code runs locally without AWS - same binary, different infrastructure:

go run main.go
curl localhost:8080/ping
curl -X POST localhost:8080/invocations -d '{"prompt":"test"}'

Aspect     Local        AWS AgentCore
Process    Go binary    Firecracker microVM
Sessions   In-memory    Isolated per microVM
Scaling    Manual       Automatic

No code changes needed between local development and production.

Packages

a2a

A2A (Agent-to-Agent) protocol server factory.

server, _ := a2a.NewServer(a2a.Config{
    Agent:             myAgent,
    Port:              "9001",
    Description:       "My agent",
    InvokePath:        "/invoke",        // Default: /invoke
    ReadHeaderTimeout: 10 * time.Second,
    SessionService:    customService,    // Default: in-memory
})

httpserver

HTTP server factory with builder pattern.

server, _ := httpserver.NewBuilder("name", 8001).
    WithHandlerFunc("/path", handlerFunc).
    WithHandler("/path2", handler).
    WithTimeouts(read, write, idle).
    WithDualModeLog().
    Build()

agent

Base agent implementation with LLM integration.

ba, err := agent.NewBaseAgent(cfg, "name", timeoutSec)
ba, secCfg, err := agent.NewBaseAgentSecure(ctx, "name", timeout, opts...)

config

Configuration management with VaultGuard integration.

cfg := config.LoadConfig()
secCfg, err := config.LoadSecureConfig(ctx, config.WithDevPolicy())
apiKey, err := secCfg.GetCredential(ctx, "API_KEY")

llm

LLM model factory and adapters.

factory := llm.NewModelFactory(cfg)
model, err := factory.CreateModel(ctx)
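
How the returned model is invoked depends on your workflow wiring. If the model conforms to Eino's chat-model interface (an assumption, not stated in this README), a single completion looks roughly like this sketch:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/agentplexus/agentkit/config"
    "github.com/agentplexus/agentkit/llm"
    "github.com/cloudwego/eino/schema"
)

func main() {
    ctx := context.Background()
    cfg := config.LoadConfig()

    factory := llm.NewModelFactory(cfg)
    model, err := factory.CreateModel(ctx)
    if err != nil {
        log.Fatal(err)
    }

    // Assumption: the returned model exposes Eino's Generate(ctx, []*schema.Message).
    msg, err := model.Generate(ctx, []*schema.Message{
        schema.UserMessage("Summarize the A2A protocol in one sentence."),
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(msg.Content)
}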

orchestration

Eino-based workflow orchestration.

builder := orchestration.NewGraphBuilder[Input, Output]("name")
executor := orchestration.NewExecutor(graph, "name")
handler := orchestration.NewHTTPHandler(executor)

http

HTTP client utilities for inter-agent communication.

err := http.PostJSON(ctx, client, url, request, &response)
err := http.GetJSON(ctx, client, url, &response)
err := http.HealthCheck(ctx, client, baseURL)
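
A sketch of one agent calling another with these helpers; the /research endpoint and the request/response struct shapes are hypothetical, and the client is assumed to be a standard *net/http.Client:

package main

import (
    "context"
    "fmt"
    "log"
    nethttp "net/http"
    "time"

    agenthttp "github.com/agentplexus/agentkit/http"
)

// Hypothetical payload types for a research agent's /research endpoint.
type researchRequest struct {
    Query string `json:"query"`
}

type researchResponse struct {
    Summary string `json:"summary"`
}

func main() {
    ctx := context.Background()
    client := &nethttp.Client{Timeout: 30 * time.Second} // assumed client type

    // Fail fast if the peer agent is not up.
    if err := agenthttp.HealthCheck(ctx, client, "http://localhost:8001"); err != nil {
        log.Fatalf("research agent unreachable: %v", err)
    }

    var resp researchResponse
    req := researchRequest{Query: "agent-to-agent protocols"}
    if err := agenthttp.PostJSON(ctx, client, "http://localhost:8001/research", req, &resp); err != nil {
        log.Fatal(err)
    }
    fmt.Println(resp.Summary)
}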

platforms/kubernetes

Helm chart value structs and validation for Kubernetes deployments.

values, errs := kubernetes.LoadAndValidate("values.yaml")
values, err := kubernetes.LoadAndMerge("values.yaml", "values-prod.yaml")

platforms/agentcore

AWS Bedrock AgentCore runtime support.

server := agentcore.NewBuilder().
    WithAgent(agent).
    MustBuild(ctx)

// Wrap Eino executors
agent := agentcore.WrapExecutor("name", executor)

Configuration

AgentKit loads configuration from environment variables:

Variable                Description                                         Default
LLM_PROVIDER            LLM provider (gemini, claude, openai, xai, ollama)  gemini
LLM_MODEL               Model name                                          Provider default
GEMINI_API_KEY          Gemini API key                                      -
CLAUDE_API_KEY          Claude/Anthropic API key                            -
OPENAI_API_KEY          OpenAI API key                                      -
XAI_API_KEY             xAI API key                                         -
OLLAMA_URL              Ollama server URL                                   http://localhost:11434
OBSERVABILITY_ENABLED   Enable LLM observability                            false
OBSERVABILITY_PROVIDER  Provider (opik, langfuse, phoenix)                  opik
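
A short sketch of selecting a provider through the environment before loading configuration; os.Setenv stands in for exporting the variables in your shell, and the API key value is a placeholder:

package main

import (
    "fmt"
    "os"

    "github.com/agentplexus/agentkit/config"
)

func main() {
    // Equivalent to exporting these variables before starting the agent.
    os.Setenv("LLM_PROVIDER", "claude")
    os.Setenv("CLAUDE_API_KEY", "your-api-key") // placeholder: supply a real key
    os.Setenv("OBSERVABILITY_ENABLED", "true")
    os.Setenv("OBSERVABILITY_PROVIDER", "langfuse")

    cfg := config.LoadConfig()
    fmt.Printf("loaded config: %+v\n", cfg) // inspect the selected provider and model
}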

Benefits

AgentKit eliminates ~1,500 lines of boilerplate per project:

Component            Lines Saved
A2A server factory   ~350 lines
HTTP server factory  ~125 lines
Shared pkg/ code     ~930 lines

See BENEFITS.md for detailed analysis.

Companion Modules

AgentKit has companion modules for Infrastructure-as-Code (IaC) deployment:

Module               Purpose                            Dependencies
agentkit-aws-cdk     AWS CDK constructs for AgentCore   21
agentkit-aws-pulumi  Pulumi components for AgentCore    340

All modules share the same YAML/JSON configuration schema from platforms/agentcore/iac/.

For pure CloudFormation (no CDK/Pulumi runtime), use the built-in generator:

import "github.com/agentplexus/agentkit/platforms/agentcore/iac"

config, _ := iac.LoadStackConfigFromFile("config.yaml")
iac.GenerateCloudFormationFile(config, "template.yaml")

See ROADMAP.md for planned modules including Terraform support.

Dependencies

  • OmniLLM - Multi-provider LLM abstraction
  • VaultGuard - Security-gated credentials
  • Eino - Graph-based orchestration
  • Google ADK - Agent Development Kit
  • a2a-go - A2A protocol implementation

License

MIT License

Directories

Path                     Synopsis
a2a                      Package a2a provides a factory for creating A2A protocol servers.
agent                    Package agent provides base agent functionality for building AI agents.
config                   Package config provides configuration management for agent applications.
http                     Package http provides HTTP client utilities for inter-agent communication.
httpserver               Package httpserver provides a factory for creating agent HTTP servers.
llm                      Package llm provides LLM model creation and management.
llm/adapters             Package adapters provides LLM adapters for different providers.
orchestration            Package orchestration provides patterns for building Eino-based agent workflows.
platforms/agentcore      Package agentcore provides a runtime adapter for AWS Bedrock AgentCore.
platforms/agentcore/iac  Package iac provides shared infrastructure-as-code configuration for AgentCore deployments.
platforms/kubernetes     Package kubernetes provides Go structs for Kubernetes/Helm deployment configuration.
