gochat module v0.1.2 (Published: Mar 13, 2026, License: MIT)
# 🚀 GoChat

GoChat is a **modern, enterprise-ready Go client SDK for Large Language Models (LLMs)**. It provides an exceptionally elegant and type-safe unified interface that completely smooths out the chaotic API differences between OpenAI, Anthropic (Claude), DeepSeek, Qwen, Ollama, and other major cloud providers or local models.


English | Simplified Chinese


## ✨ New & Killer Features

- 🔌 Seamless Model Switching: Change one line of initialization code to switch your application effortlessly between GPT-4o, Claude 3.7, DeepSeek-R1, and Qwen-Max.
- 🧠 Native Deep Thinking Support: Built-in interceptors compatible with the reasoning chains of DeepSeek-R1, Claude 3.7, and OpenAI o1/o3.
- 🧬 Unified Embedding System:
  - Support for both Remote APIs (OpenAI/Azure) and Local Models (ONNX/BGE/Sentence-BERT).
  - High-performance Batch Processing with concurrent execution and atomic progress tracking.
  - Built-in LRU Caching to avoid redundant vector calculations.
- ⛓️ Modular Pipeline Framework:
  - Orchestrate complex LLM workflows (RAG, multi-step reasoning) using a clean Step-based architecture.
  - Thread-safe State Management and execution Hooks for full observability.
- 🌐 Web Search at Will: Native support for models with external web retrieval (like Qwen) via `core.WithEnableSearch(true)`.
- 🏢 Enterprise OAuth2 Persistence: Automates Device Code Flow / OAuth2 authorization with persistent Token storage and auto-refreshing.

## 📦 Installation

```bash
go get github.com/DotNetAge/gochat
```

## 🚀 Quick Start

### 1. High-Performance Embedding (Local or Remote)

```go
// Use BatchProcessor for optimized vector generation
processor := embedding.NewBatchProcessor(provider, embedding.BatchOptions{
    MaxBatchSize:  32,
    MaxConcurrent: 4,
})

// Generate embeddings with progress tracking
embeddings, err := processor.ProcessWithProgress(ctx, texts, func(current, total int, err error) bool {
    fmt.Printf("Progress: %d/%d\n", current, total)
    return true // Return false to cancel
})
```
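
Under the hood, a batch processor of this kind typically splits the input into fixed-size batches, embeds a bounded number of batches concurrently, and reports progress atomically. The self-contained sketch below shows one plausible shape of that logic; it is an illustration of the technique, not gochat's implementation, and the `embed` callback is a stand-in for a real provider.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// processBatches embeds texts in batches of at most maxBatch items,
// runs up to maxConcurrent batches in parallel, and reports atomic
// progress after each batch completes.
func processBatches(texts []string, maxBatch, maxConcurrent int,
	embed func([]string) [][]float32,
	progress func(done, total int)) [][]float32 {

	out := make([][]float32, len(texts))
	sem := make(chan struct{}, maxConcurrent) // bounds in-flight batches
	var done atomic.Int64
	var wg sync.WaitGroup

	for start := 0; start < len(texts); start += maxBatch {
		end := start + maxBatch
		if end > len(texts) {
			end = len(texts)
		}
		wg.Add(1)
		go func(start, end int) {
			defer wg.Done()
			sem <- struct{}{}
			defer func() { <-sem }()
			copy(out[start:end], embed(texts[start:end]))
			progress(int(done.Add(int64(end-start))), len(texts))
		}(start, end)
	}
	wg.Wait()
	return out
}

func main() {
	texts := []string{"a", "bb", "ccc", "dddd", "eeeee"}
	// Toy embedder: one-dimensional vector holding the text length.
	embed := func(batch []string) [][]float32 {
		vecs := make([][]float32, len(batch))
		for i := range batch {
			vecs[i] = []float32{float32(len(batch[i]))}
		}
		return vecs
	}
	vecs := processBatches(texts, 2, 2, embed, func(done, total int) {
		fmt.Printf("Progress: %d/%d\n", done, total)
	})
	fmt.Println(len(vecs))
}
```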

### 2. Streamlined Pipeline Execution

```go
p := pipeline.New().
    AddStep(steps.NewTemplateStep("User question: {{.query}}", "prompt", "query")).
    AddStep(steps.NewGenerateCompletionStep(client, "prompt", "answer", "gpt-4o")).
    AddHook(myLogger) // Observe every step

state := pipeline.NewState()
state.Set("query", "What is GoChat?")

// Check the error before reading results from the state
if err := p.Execute(ctx, state); err != nil {
    log.Fatal(err)
}
fmt.Println(state.GetString("answer"))
```
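
The `myLogger` hook above observes every step. A minimal version of the Step/Hook pattern might look like the following; the method names (`BeforeStep`, `AfterStep`, `Run`) and the map-based state are simplifying assumptions for illustration, not gochat's exact interfaces.

```go
package main

import "fmt"

// Step and Hook are simplified stand-ins for pipeline types.
type Step interface {
	Name() string
	Run(state map[string]string) error
}

type Hook interface {
	BeforeStep(name string)
	AfterStep(name string, err error)
}

// logHook prints a line before and after each step for observability.
type logHook struct{}

func (logHook) BeforeStep(name string)           { fmt.Println("start:", name) }
func (logHook) AfterStep(name string, err error) { fmt.Println("done:", name, err) }

// templateStep renders a prompt from the "query" key, mirroring
// the TemplateStep in the example above.
type templateStep struct{}

func (templateStep) Name() string { return "template" }
func (templateStep) Run(state map[string]string) error {
	state["prompt"] = "User question: " + state["query"]
	return nil
}

// run executes each step in order, wrapping it with the hook and
// stopping at the first error.
func run(steps []Step, hook Hook, state map[string]string) error {
	for _, s := range steps {
		hook.BeforeStep(s.Name())
		err := s.Run(state)
		hook.AfterStep(s.Name(), err)
		if err != nil {
			return err
		}
	}
	return nil
}

func main() {
	state := map[string]string{"query": "What is GoChat?"}
	if err := run([]Step{templateStep{}}, logHook{}, state); err != nil {
		fmt.Println("pipeline failed:", err)
		return
	}
	fmt.Println(state["prompt"])
}
```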

### 3. Capturing the "Chain of Thought" in a Stream

```go
stream, err := client.ChatStream(ctx, messages, core.WithThinking(0))
if err != nil {
    log.Fatal(err)
}
defer stream.Close()

for stream.Next() {
    ev := stream.Event()
    if ev.Type == core.EventThinking {
        fmt.Print(ev.Content) // Reasoning output
    } else if ev.Type == core.EventContent {
        fmt.Print(ev.Content) // Final answer
    }
}
```
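
Instead of printing events inline, applications often want to accumulate the reasoning and the final answer separately (for example, to log the chain of thought but show only the reply). The sketch below demonstrates that routing with stand-in event types modeled on the example above; it is an illustration, not gochat's event API.

```go
package main

import (
	"fmt"
	"strings"
)

// EventType mirrors the core.EventThinking / core.EventContent
// distinction from the streaming example.
type EventType int

const (
	EventThinking EventType = iota
	EventContent
)

type Event struct {
	Type    EventType
	Content string
}

// collect routes a sequence of stream events into separate reasoning
// and answer buffers so they can be handled independently.
func collect(events []Event) (reasoning, answer string) {
	var r, a strings.Builder
	for _, ev := range events {
		switch ev.Type {
		case EventThinking:
			r.WriteString(ev.Content)
		case EventContent:
			a.WriteString(ev.Content)
		}
	}
	return r.String(), a.String()
}

func main() {
	events := []Event{
		{EventThinking, "Checking the README... "},
		{EventContent, "GoChat is a Go SDK for LLMs."},
	}
	r, a := collect(events)
	fmt.Println("reasoning:", r)
	fmt.Println("answer:", a)
}
```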

## 🔌 Fully Supported Providers

| Provider | Models | Auth Methods |
| --- | --- | --- |
| OpenAI | GPT-4o, o1, o3-mini | API Key |
| Anthropic | Claude 3.5/3.7 | API Key |
| DeepSeek | V3, R1 | API Key |
| Alibaba Qwen | Qwen-Max, Qwen-Plus | API Key, OAuth2, Device Code |
| Google | Gemini 1.5 Pro/Flash | API Key, OAuth2 |
| Local / ONNX | BGE, Sentence-BERT | Local Execution |
| Azure OpenAI | All GPT models | API Key (Azure format) |

## 🎯 Design Philosophy

GoChat adheres to Go's philosophy of minimalism: the core interface `core.Client` has only two methods, `Chat` and `ChatStream`. All provider-specific customizations are extended via Functional Options, ensuring the main interface remains clean and stable.
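
The Functional Options pattern mentioned above keeps provider-specific knobs out of the core interface. The sketch below shows the general shape of that pattern; the option names follow the examples in this README, but the `config` fields here are assumptions for illustration, not gochat's internals.

```go
package main

import "fmt"

// config holds request settings; callers never touch it directly.
type config struct {
	thinkingBudget int
	enableSearch   bool
}

// Option mutates the config; Chat/ChatStream-style methods would
// accept a variadic ...Option so new settings never change their
// signatures.
type Option func(*config)

func WithThinking(budget int) Option {
	return func(c *config) { c.thinkingBudget = budget }
}

func WithEnableSearch(on bool) Option {
	return func(c *config) { c.enableSearch = on }
}

// apply folds the options over a zero-valued config.
func apply(opts ...Option) config {
	var c config
	for _, o := range opts {
		o(&c)
	}
	return c
}

func main() {
	c := apply(WithThinking(1024), WithEnableSearch(true))
	fmt.Println(c.thinkingBudget, c.enableSearch)
}
```

Because unknown options are simply ignored by providers that do not support them, the same call site can remain stable across model switches.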

## 📄 License

This project is open-sourced under the MIT License. PRs are welcome!

## Directories

| Path | Synopsis |
| --- | --- |
| examples/01_basic_chat | Basic example demonstrating simple chat completion with OpenAI. |
| examples/02_multi_turn | Multi-turn conversation example. |
| examples/03_streaming | Streaming response example. |
| examples/04_tool_calling | Tool calling example. |
| examples/05_multiple_providers | Multiple providers example. |
| examples/06_image_input | Multimodal input example - sending images to the model. |
| examples/07_document_analysis | Document analysis example - analyzing PDF, text files, or other documents. |
| examples/08_multiple_images | Multiple images example - analyzing multiple images in one request. |
| examples/09_helper_utilities | Helper utilities for common use cases. |
| pkg/embedding | Package downloader provides functionality for downloading embedding models from remote sources. |
| pkg/pipeline | Package pipeline provides a flexible framework for composing and executing sequences of operations (Steps) for LLM workflows like RAG and data processing. |