dspy

package module
v0.0.0-...-db8c0f5
Published: Mar 23, 2025 License: MIT Imports: 1 Imported by: 0

README

DSPy-Go

What is DSPy-Go?

DSPy-Go is a native Go implementation of the DSPy framework, bringing systematic prompt engineering and automated reasoning capabilities to Go applications. It provides a flexible and idiomatic framework for building reliable and effective large language model (LLM) applications through composable modules and workflows.

Key Features
  • Modular Architecture: Build complex LLM applications by composing simple, reusable components
  • Systematic Prompt Engineering: Optimize prompts automatically based on examples and feedback
  • Flexible Workflows: Chain, branch, and orchestrate LLM operations with powerful workflow abstractions
  • Multiple LLM Providers: Support for Anthropic Claude, Google Gemini, Ollama, and LlamaCPP
  • Advanced Reasoning Patterns: Implement chain-of-thought, ReAct, and other reasoning techniques

Installation

go get github.com/selfreliantob/dspy-go

Quick Start

Here's a simple example to get you started with DSPy-Go:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/selfreliantob/dspy-go/pkg/core"
    "github.com/selfreliantob/dspy-go/pkg/llms"
    "github.com/selfreliantob/dspy-go/pkg/modules"
    "github.com/selfreliantob/dspy-go/pkg/config"
)

func main() {
    // Configure the default LLM
    llms.EnsureFactory()
    err := config.ConfigureDefaultLLM("your-api-key", core.ModelAnthropicSonnet)
    if err != nil {
        log.Fatalf("Failed to configure LLM: %v", err)
    }

    // Create a signature for question answering
    signature := core.NewSignature(
        []core.InputField{{Field: core.Field{Name: "question"}}},
        []core.OutputField{{Field: core.Field{Name: "answer"}}},
    )

    // Create a ChainOfThought module that implements step-by-step reasoning
    cot := modules.NewChainOfThought(signature)

    // Create a program that executes the module
    program := core.NewProgram(
        map[string]core.Module{"cot": cot},
        func(ctx context.Context, inputs map[string]interface{}) (map[string]interface{}, error) {
            return cot.Process(ctx, inputs)
        },
    )

    // Execute the program with a question
    result, err := program.Execute(context.Background(), map[string]interface{}{
        "question": "What is the capital of France?",
    })
    if err != nil {
        log.Fatalf("Error executing program: %v", err)
    }

    fmt.Printf("Answer: %s\n", result["answer"])
}

Core Concepts

DSPy-Go is built around several key concepts that work together to create powerful LLM applications:

Signatures

Signatures define the input and output fields for modules, creating a clear contract for what a module expects and produces.

// Create a signature for a summarization task
signature := core.NewSignature(
    []core.InputField{
        {Field: core.Field{Name: "document", Description: "The document to summarize"}},
    },
    []core.OutputField{
        {Field: core.Field{Name: "summary", Description: "A concise summary of the document"}},
        {Field: core.Field{Name: "key_points", Description: "The main points from the document"}},
    },
)

Signatures can include field descriptions that enhance prompt clarity and improve LLM performance.

Modules

Modules are the building blocks of DSPy-Go programs. They encapsulate specific functionalities and can be composed to create complex pipelines. Some key modules include:

Predict

The simplest module that makes direct predictions using an LLM.

predict := modules.NewPredict(signature)
result, err := predict.Process(ctx, map[string]interface{}{
    "document": "Long document text here...",
})
// result contains "summary" and "key_points"

ChainOfThought

Implements chain-of-thought reasoning, which guides the LLM to break down complex problems into intermediate steps.

cot := modules.NewChainOfThought(signature)
result, err := cot.Process(ctx, map[string]interface{}{
    "question": "Solve 25 × 16 step by step.",
})
// result contains both the reasoning steps and the final answer
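
As a usage sketch, the returned map can be read like any other module output; the key used for the intermediate reasoning is an assumption here (shown as "rationale"), so check the output fields your module actually produces:

// Print the final answer and, if present, the intermediate reasoning.
// The "rationale" key is an assumption for illustration; the actual
// field name depends on how the module extends the signature.
fmt.Println("Answer:", result["answer"])
if rationale, ok := result["rationale"].(string); ok {
    fmt.Println("Reasoning:", rationale)
}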

ReAct

Implements the Reasoning and Acting paradigm, allowing LLMs to use tools to solve problems.

// Create tools
calculator := tools.NewCalculatorTool()
searchTool := tools.NewSearchTool()

// Create ReAct module with tools
react := modules.NewReAct(signature, []core.Tool{calculator, searchTool})
result, err := react.Process(ctx, map[string]interface{}{
    "question": "What is the population of France divided by 1000?",
})
// ReAct will use the search tool to find the population and the calculator to divide it

Programs

Programs combine modules into executable workflows. They define how inputs flow through the system and how outputs are produced.

program := core.NewProgram(
    map[string]core.Module{
        "retriever": retriever,
        "generator": generator,
    },
    func(ctx context.Context, inputs map[string]interface{}) (map[string]interface{}, error) {
        // First retrieve relevant documents
        retrieverResult, err := retriever.Process(ctx, inputs)
        if err != nil {
            return nil, err
        }
        
        // Then generate an answer using the retrieved documents
        generatorInputs := map[string]interface{}{
            "question": inputs["question"],
            "documents": retrieverResult["documents"],
        }
        return generator.Process(ctx, generatorInputs)
    },
)

Optimizers

Optimizers help improve the performance of your DSPy-Go programs by automatically tuning prompts and module parameters.

BootstrapFewShot

Automatically selects high-quality examples for few-shot learning.

// Create a dataset of examples
dataset := datasets.NewInMemoryDataset()
dataset.AddExample(map[string]interface{}{
    "question": "What is the capital of France?",
    "answer": "The capital of France is Paris.",
})
// Add more examples...

// Create and apply the optimizer
optimizer := optimizers.NewBootstrapFewShot(dataset, metrics.NewExactMatchMetric("answer"))
optimizedModule, err := optimizer.Optimize(ctx, originalModule)
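
Once optimization finishes, the returned module can be used like any other. A minimal usage sketch, assuming the question-answering signature from the Quick Start:

// Run the optimized module exactly like the original one.
result, err := optimizedModule.Process(ctx, map[string]interface{}{
    "question": "What is the capital of Germany?",
})
if err != nil {
    log.Fatalf("optimized module failed: %v", err)
}
fmt.Printf("Answer: %s\n", result["answer"])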

MIPRO and Copro

More advanced optimizers for multi-step interactive prompt optimization (MIPRO) and collaborative prompt optimization (Copro).

// Create a MIPRO optimizer
mipro := optimizers.NewMIPRO(dataset, metrics.NewRougeMetric("answer"))
optimizedModule, err := mipro.Optimize(ctx, originalModule)

Agents and Workflows

DSPy-Go provides powerful abstractions for building more complex agent systems.

Memory

Different memory implementations for tracking conversation history.

// Create a buffer memory for conversation history
memory := memory.NewBufferMemory(10) // Keep last 10 exchanges
memory.Add(context.Background(), "user", "Hello, how can you help me?")
memory.Add(context.Background(), "assistant", "I can answer questions and help with tasks. What do you need?")

// Retrieve conversation history
history, err := memory.Get(context.Background())

Workflows

Chain Workflow

Sequential execution of steps:

// Create a chain workflow
workflow := workflows.NewChainWorkflow(store)

// Add steps to the workflow
workflow.AddStep(&workflows.Step{
    ID: "step1",
    Module: modules.NewPredict(signature1),
})

workflow.AddStep(&workflows.Step{
    ID: "step2", 
    Module: modules.NewPredict(signature2),
})

// Execute the workflow
result, err := workflow.Execute(ctx, inputs)

Configurable Retry Logic

Each workflow step can be configured with retry logic:

step := &workflows.Step{
    ID: "retry_example",
    Module: myModule,
    RetryConfig: &workflows.RetryConfig{
        MaxAttempts: 3,
        BackoffMultiplier: 2.0,
        InitialBackoff: time.Second,
    },
    Condition: func(state map[string]interface{}) bool {
        return someCondition(state)
    },
}
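
The someCondition call above is a placeholder. As a hedged sketch, a condition typically inspects the accumulated workflow state and decides whether the step should run; the "draft" key below is an assumption for illustration, not part of the documented API:

// someCondition skips the step unless a previous step stored a
// non-empty "draft" value in the workflow state. The "draft" key
// is a hypothetical example, not a dspy-go convention.
func someCondition(state map[string]interface{}) bool {
    draft, ok := state["draft"].(string)
    return ok && draft != ""
}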

Orchestrator

Flexible task decomposition and execution:

// Create an orchestrator with subtasks
orchestrator := agents.NewOrchestrator()

// Define and add subtasks
researchTask := agents.NewTask("research", researchModule)
summarizeTask := agents.NewTask("summarize", summarizeModule)

orchestrator.AddTask(researchTask)
orchestrator.AddTask(summarizeTask)

// Execute the orchestration
result, err := orchestrator.Execute(ctx, map[string]interface{}{
    "topic": "Climate change impacts",
})

Working with Different LLM Providers

DSPy-Go supports multiple LLM providers out of the box:

// Using Anthropic Claude
llm, err := llms.NewAnthropicLLM("api-key", core.ModelAnthropicSonnet)

// Using Google Gemini
llm, err := llms.NewGeminiLLM("api-key", "gemini-pro")

// Using Ollama (local)
llm, err := llms.NewOllamaLLM("http://localhost:11434", "ollama:llama2")

// Using LlamaCPP (local)
llm, err := llms.NewLlamacppLLM("http://localhost:8080")

// Set as default LLM
llms.SetDefaultLLM(llm)

// Or use with a specific module
myModule.SetLLM(llm)

Advanced Features

Tracing and Logging

DSPy-Go includes detailed tracing and structured logging for debugging and optimization:

// Enable detailed tracing
ctx = core.WithExecutionState(context.Background())

// Configure logging
logger := logging.NewLogger(logging.Config{
    Severity: logging.DEBUG,
    Outputs:  []logging.Output{logging.NewConsoleOutput(true)},
})
logging.SetLogger(logger)

// After execution, inspect trace
executionState := core.GetExecutionState(ctx)
steps := executionState.GetSteps("moduleId")
for _, step := range steps {
    fmt.Printf("Step: %s, Duration: %s\n", step.Name, step.Duration)
    fmt.Printf("Prompt: %s\n", step.Prompt)
    fmt.Printf("Response: %s\n", step.Response)
}

Custom Tools

You can extend ReAct modules with custom tools:

// Define a custom tool
type WeatherTool struct{}

func (t *WeatherTool) GetName() string {
    return "weather"
}

func (t *WeatherTool) GetDescription() string {
    return "Get the current weather for a location"
}

func (t *WeatherTool) CanHandle(action string) bool {
    return strings.HasPrefix(action, "weather(")
}

func (t *WeatherTool) Execute(ctx context.Context, action string) (string, error) {
    // Parse location from action
    location := parseLocation(action)
    
    // Fetch weather data (implementation detail)
    weather, err := fetchWeather(location)
    if err != nil {
        return "", err
    }
    
    return fmt.Sprintf("Weather in %s: %s, %d°C", location, weather.Condition, weather.Temperature), nil
}

// Use the custom tool with ReAct
react := modules.NewReAct(signature, []core.Tool{&WeatherTool{}})
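
The parseLocation and fetchWeather helpers are left as implementation details above. A minimal sketch of what they might look like follows; the weatherData type and the stubbed response are assumptions for illustration and would be replaced by a real weather API call:

// weatherData is a hypothetical result type for the sketch above.
type weatherData struct {
    Condition   string
    Temperature int
}

// parseLocation extracts the argument from an action string such as
// "weather(Paris)". A real implementation should validate the input.
func parseLocation(action string) string {
    inner := strings.TrimPrefix(action, "weather(")
    inner = strings.TrimSuffix(inner, ")")
    return strings.TrimSpace(inner)
}

// fetchWeather would normally call a weather service; this stub returns
// fixed data so the example stays self-contained.
func fetchWeather(location string) (weatherData, error) {
    return weatherData{Condition: "sunny", Temperature: 21}, nil
}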

Streaming Support

Process LLM outputs incrementally as they're generated:

// Create a streaming handler
handler := func(chunk string) {
    fmt.Print(chunk)
}

// Enable streaming on the module
module.SetStreamingHandler(handler)

// Process with streaming enabled
result, err := module.Process(ctx, inputs)

Examples

Check the examples directory in the repository for complete implementations.

Documentation

For more detailed documentation, see the repository at https://github.com/selfreliantob/dspy-go.

License

DSPy-Go is released under the MIT License. See the LICENSE file for details.

Documentation

Overview

Package dspy is a Go implementation of the DSPy framework for using language models to solve complex tasks through composable steps and prompting techniques.

DSPy-Go provides a collection of modules, optimizers, and tools for building reliable LLM-powered applications. It focuses on making it easy to:

  • Break down complex tasks into modular steps
  • Optimize prompts and chain-of-thought reasoning
  • Build flexible agent-based systems
  • Handle common LLM interaction patterns
  • Evaluate and improve system performance

Key Components:

  • Core: Fundamental abstractions like Module, Signature, LLM and Program for defining and executing LLM-based workflows.

  • Modules: Building blocks for composing LLM workflows:
      • Predict: Basic prediction module for simple LLM interactions
      • ChainOfThought: Implements step-by-step reasoning with rationale tracking
      • ReAct: Implements Reasoning and Acting with tool integration

  • Optimizers: Tools for improving prompt effectiveness:
      • BootstrapFewShot: Automatically selects high-quality examples for few-shot learning
      • MIPRO: Multi-step interactive prompt optimization
      • Copro: Collaborative prompt optimization

  • Agents: Advanced patterns for building sophisticated AI systems:
      • Memory: Different memory implementations for tracking conversation history
      • Tools: Integration with external tools and APIs
      • Workflows:
          • Chain: Sequential execution of steps
          • Parallel: Concurrent execution with controlled parallelism
          • Router: Dynamic routing based on classification
          • Orchestrator: Flexible task decomposition and execution

  • Integration with multiple LLM providers:
      • Anthropic Claude
      • Google Gemini
      • Ollama
      • LlamaCPP

Simple Example:

import (
    "context"
    "fmt"
    "log"

    "github.com/selfreliantob/dspy-go/pkg/core"
    "github.com/selfreliantob/dspy-go/pkg/llms"
    "github.com/selfreliantob/dspy-go/pkg/modules"
)

func main() {
    // Configure the default LLM
    llms.EnsureFactory()
    err := config.ConfigureDefaultLLM("your-api-key", core.ModelAnthropicSonnet)
    if err != nil {
        log.Fatalf("Failed to configure LLM: %v", err)
    }

    // Create a signature for question answering
    signature := core.NewSignature(
        []core.InputField{{Field: core.Field{Name: "question"}}},
        []core.OutputField{{Field: core.Field{Name: "answer"}}},
    )

    // Create a ChainOfThought module
    cot := modules.NewChainOfThought(signature)

    // Create a program
    program := core.NewProgram(
        map[string]core.Module{"cot": cot},
        func(ctx context.Context, inputs map[string]interface{}) (map[string]interface{}, error) {
            return cot.Process(ctx, inputs)
        },
    )

    // Execute the program
    result, err := program.Execute(context.Background(), map[string]interface{}{
        "question": "What is the capital of France?",
    })
    if err != nil {
        log.Fatalf("Error executing program: %v", err)
    }

    fmt.Printf("Answer: %s\n", result["answer"])
}

Advanced Features:

  • Tracing and Logging: Detailed tracing and structured logging for debugging and optimization. Execution context is tracked and passed through the pipeline for debugging and analysis.

  • Error Handling: Comprehensive error management with custom error types and centralized handling

  • Metric-Based Optimization: Improve module performance based on custom evaluation metrics

  • Custom Tool Integration: Extend ReAct modules with domain-specific tools

  • Workflow Retry Logic: Resilient execution with configurable retry mechanisms and backoff strategies

  • Streaming Support: Process LLM outputs incrementally as they're generated

  • Data Storage: Integration with various storage backends for persistence of examples and results

  • Arrow Support: Integration with Apache Arrow for efficient data handling and processing

Working with Workflows:

// Chain workflow example
workflow := workflows.NewChainWorkflow(store)
workflow.AddStep(&workflows.Step{
    ID: "step1",
    Module: modules.NewPredict(signature1),
})
workflow.AddStep(&workflows.Step{
    ID: "step2",
    Module: modules.NewPredict(signature2),
    // Configurable retry logic
    RetryConfig: &workflows.RetryConfig{
        MaxAttempts: 3,
        BackoffMultiplier: 2.0,
    },
    // Conditional execution
    Condition: func(state map[string]interface{}) bool {
        return someCondition(state)
    },
})

For more examples and detailed documentation, visit: https://github.com/selfreliantob/dspy-go

DSPy-Go is released under the MIT License.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func DCrvQbEf

func DCrvQbEf() error

Types

This section is empty.

Directories

Path Synopsis
examples
internal
pkg
tools
pkg/tools/func.go
