langchaingo

package module v0.1.13-update.5
Published: Jun 23, 2025 · License: MIT

README

⚠️ This is a fork of the original github.com/tmc/langchaingo repository.

🦜️🔗 LangChain Go (fork)


⚡ Building applications with LLMs through composability, with Go! ⚡

🚀 Important Announcement 🚀

Starting from release v0.1.13-update.1, this fork is now a full-fledged library that does not require any replace directives in your go.mod file. You can simply add it to your project with:

go get github.com/vxcontrol/langchaingo@latest

Why this fork?

This fork was created to incorporate functionality from open Pull Requests that haven't been merged into the original repository yet. You can view the list of accepted PRs in our v0.1.13-update.0 release and in the main-pull-requests branch.

Additionally, this repository contains custom improvements and enhancements to langchaingo, which are published as releases. The original repository's state is mirrored in the main branch, which is regularly updated with upstream changes.

This fork is primarily maintained for use in the PentAGI project, an autonomous AI Agents system for performing complex penetration testing tasks.

Branch Structure and Versioning

This repository follows a specific branching strategy to maintain both upstream compatibility and custom enhancements:

Branch Management
  • main: Fully synchronized with upstream (tmc/langchaingo). Never force-pushed.
  • main-pull-requests: Contains upstream PRs that have been merged into this fork but not yet accepted upstream. Rebased on main after synchronization (commit hashes will change).
  • main-vxcontrol: Default branch containing all current enhancements. Rebased on main-pull-requests (commit hashes will change).
  • release/v*: Created from main-vxcontrol for each release. These branches are stable and never force-pushed.
Versioning

Release tags follow the format v0.1.13-update.1, where:

  • v0.1.13 corresponds to the latest upstream release version
  • -update.1 indicates our increment number (starting at 1 and incrementing with each new release)

Each new release cumulatively includes all changes from previous releases on top of the current upstream state, ensuring that you always get the complete set of enhancements when using a specific tag.

Dependency Management

Important: When using this fork in your projects, always reference release tags rather than commit hashes. This ensures proper dependency resolution since branches like main-vxcontrol undergo rebasing and their commit hashes change over time.

go get github.com/vxcontrol/langchaingo@v0.1.13-update.1
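
For reference, a minimal go.mod pinning the fork to a release tag might look like the following sketch (the module name and Go version are placeholders; no replace directive is needed):

module example.com/myapp

go 1.22

require github.com/vxcontrol/langchaingo v0.1.13-update.1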
Branch Visualization
  main              A---B---C---D---E---F   (synced with upstream)
                     \
  main-pull-requests   \---G---H---I        (rebased on main, PRs from upstream)
                          \
  main-vxcontrol           \---J---K---L    (default branch, rebased on main-pull-requests)
                                \
  release/vM.M.P-update.N        M          (tagged stable release)
For Contributors

If you want to contribute to this fork, please create Pull Requests based on the current state of the main-vxcontrol branch. Even though commit hashes in this branch may change due to rebasing, your contributions will be included in the next release when enough changes have accumulated.

When creating a PR, please ensure your changes are well-tested and include appropriate documentation. Once merged, your contributions will be included in the next stable release with fixed commit hashes.

Acknowledgements

Special thanks to Travis Cline (@tmc) and all contributors who have made this project possible.

Original resources

🤔 What is this?

This is the Go language implementation of LangChain.

📖 Documentation

🎉 Examples

See ./examples for example usage.

package main

import (
  "context"
  "fmt"
  "log"

  "github.com/vxcontrol/langchaingo/llms"
  "github.com/vxcontrol/langchaingo/llms/openai"
)

func main() {
  ctx := context.Background()
  llm, err := openai.New()
  if err != nil {
    log.Fatal(err)
  }
  prompt := "What would be a good company name for a company that makes colorful socks?"
  completion, err := llms.GenerateFromSinglePrompt(ctx, llm, prompt)
  if err != nil {
    log.Fatal(err)
  }
  fmt.Println(completion)
}
$ go run .
Socktastic

Resources

Join the Discord server for support and discussions.

Here are some links to blog posts and articles on using Langchain Go:

Contributors

There is momentum toward making langchaingo development more of a community effort. If you are interested in becoming a maintainer or contributor, please join our Discord and let us know.

Documentation

Overview

Package langchaingo provides a Go implementation of LangChain, a framework for building applications with Large Language Models (LLMs) through composability.

LangchainGo enables developers to create powerful AI-driven applications by providing a unified interface to various LLM providers, vector databases, and other AI services. The framework emphasizes modularity, extensibility, and ease of use.

Core Components

The framework is organized around several key packages:

  • llms: unified interfaces for interacting with language model providers
  • embeddings: helpers for creating vector embeddings from text
  • vectorstores: interfaces and implementations for storing and querying documents as vector embeddings
  • chains: a standard interface for chains, built-in chains, and functions for calling and running them
  • agents: agents that use an LLM to decide which tools to invoke, plus an agent executor
  • memory: conversational memory for storing and retrieving chat history
  • prompts: prompt templates and related utilities
  • tools: a standard interface for tools used by agents
  • schema: shared core data types

Quick Start

Basic text generation with OpenAI:

import (
	"context"
	"log"

	"github.com/vxcontrol/langchaingo/llms"
	"github.com/vxcontrol/langchaingo/llms/openai"
)

ctx := context.Background()
llm, err := openai.New()
if err != nil {
	log.Fatal(err)
}

completion, err := llm.GenerateContent(ctx, []llms.MessageContent{
	llms.TextParts(llms.ChatMessageTypeHuman, "What is the capital of France?"),
})

Creating embeddings and using vector search:

import (
	"github.com/vxcontrol/langchaingo/embeddings"
	"github.com/vxcontrol/langchaingo/schema"
	"github.com/vxcontrol/langchaingo/vectorstores/chroma"
)

// Create an embedder
embedder, err := embeddings.NewEmbedder(llm)
if err != nil {
	log.Fatal(err)
}

// Create a vector store (assumes a Chroma server at http://localhost:8000)
store, err := chroma.New(
	chroma.WithChromaURL("http://localhost:8000"),
	chroma.WithEmbedder(embedder),
)
if err != nil {
	log.Fatal(err)
}

// Add documents
docs := []schema.Document{
	{PageContent: "Paris is the capital of France"},
	{PageContent: "London is the capital of England"},
}
if _, err := store.AddDocuments(ctx, docs); err != nil {
	log.Fatal(err)
}

// Search for similar documents
results, err := store.SimilaritySearch(ctx, "French capital", 1)

Building a chain for question answering:

import (
	"github.com/vxcontrol/langchaingo/chains"
	"github.com/vxcontrol/langchaingo/vectorstores"
)

chain := chains.NewRetrievalQAFromLLM(
	llm,
	vectorstores.ToRetriever(store, 3),
)

answer, err := chains.Run(ctx, chain, "What is the capital of France?")

Provider Support

LangchainGo supports numerous providers:

LLM Providers:

  • OpenAI (GPT-3.5, GPT-4, GPT-4 Turbo)
  • Anthropic (Claude family)
  • Google AI (Gemini, PaLM)
  • AWS Bedrock (Claude, Llama, Titan)
  • Cohere
  • Mistral AI
  • Ollama (local models)
  • Hugging Face Inference
  • And many more...

Embedding Providers:

  • OpenAI
  • Hugging Face
  • Jina AI
  • Voyage AI
  • Google Vertex AI
  • AWS Bedrock

Vector Stores:

  • Chroma
  • Pinecone
  • Weaviate
  • Qdrant
  • PostgreSQL with pgvector
  • Redis
  • Milvus
  • MongoDB Atlas Vector Search
  • OpenSearch
  • Azure AI Search
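
Because every LLM provider implements the common llms.Model interface, switching providers is usually a one-line change. A minimal sketch, assuming ANTHROPIC_API_KEY is set and a local Ollama server is running with a llama3 model pulled (error handling elided for brevity):

import (
	"context"

	"github.com/vxcontrol/langchaingo/llms"
	"github.com/vxcontrol/langchaingo/llms/anthropic"
	"github.com/vxcontrol/langchaingo/llms/ollama"
)

ctx := context.Background()

// Anthropic (reads ANTHROPIC_API_KEY from the environment)
claude, err := anthropic.New()

// A local model served by Ollama
local, err := ollama.New(ollama.WithModel("llama3"))

// Both values satisfy llms.Model, so the same helper works with either
completion, err := llms.GenerateFromSinglePrompt(ctx, claude, "Say hello")
completion, err = llms.GenerateFromSinglePrompt(ctx, local, "Say hello")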

Agents and Tools

Create agents that can use tools to accomplish complex tasks:

import (
	"github.com/vxcontrol/langchaingo/agents"
	"github.com/vxcontrol/langchaingo/chains"
	"github.com/vxcontrol/langchaingo/tools"
	"github.com/vxcontrol/langchaingo/tools/serpapi"
)

// Create tools (serpapi reads SERPAPI_API_KEY from the environment)
searchTool, err := serpapi.New()
if err != nil {
	log.Fatal(err)
}
agentTools := []tools.Tool{searchTool, tools.Calculator{}}

// Create a zero-shot ReAct (MRKL-style) agent wrapped in an executor
executor, err := agents.Initialize(llm, agentTools, agents.ZeroShotReactDescription)
if err != nil {
	log.Fatal(err)
}

// Run the agent
result, err := chains.Run(ctx, executor, "What's the current population of Tokyo multiplied by 2?")

Memory and Conversation

Maintain conversation context across multiple interactions:

import (
	"github.com/vxcontrol/langchaingo/memory"
	"github.com/vxcontrol/langchaingo/chains"
)

// Create memory (named so it does not shadow the memory package)
conversationMemory := memory.NewConversationBuffer()

// Create a conversation chain
chain := chains.NewConversation(llm, conversationMemory)

// Have a conversation
chains.Run(ctx, chain, "Hello, my name is Alice")
chains.Run(ctx, chain, "What's my name?") // Will remember "Alice"

Advanced Features

Streaming responses:

// Stream tokens through a callback as they are generated
_, err := llm.GenerateContent(ctx, messages,
	llms.WithStreamingFunc(func(ctx context.Context, chunk []byte) error {
		fmt.Print(string(chunk))
		return nil
	}),
)

Function calling:

tools := []llms.Tool{
	{
		Type: "function",
		Function: &llms.FunctionDefinition{
			Name: "get_weather",
			Parameters: map[string]any{
				"type": "object",
				"properties": map[string]any{
					"location": map[string]any{"type": "string"},
				},
			},
		},
	},
}

content, err := llm.GenerateContent(ctx, messages, llms.WithTools(tools))
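
Continuing the snippet above, the tool invocations requested by the model come back on the response choices; a short sketch of reading them (your dispatch logic will differ):

// Inspect the tool calls the model asked for
for _, choice := range content.Choices {
	for _, call := range choice.ToolCalls {
		if call.FunctionCall != nil {
			fmt.Printf("tool: %s args: %s\n", call.FunctionCall.Name, call.FunctionCall.Arguments)
		}
	}
}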

Multi-modal inputs (text and images):

parts := []llms.ContentPart{
	llms.TextPart("What's in this image?"),
	llms.ImageURLPart("data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQ..."),
}
content, err := llm.GenerateContent(ctx, []llms.MessageContent{
	{Role: llms.ChatMessageTypeHuman, Parts: parts},
})
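
For local images, the raw bytes can be attached as a binary part instead of a data URL. A brief sketch (the file path is illustrative, and not every provider accepts binary image input):

import (
	"log"
	"os"

	"github.com/vxcontrol/langchaingo/llms"
)

imgBytes, err := os.ReadFile("photo.jpg") // illustrative path
if err != nil {
	log.Fatal(err)
}
parts := []llms.ContentPart{
	llms.TextPart("What's in this image?"),
	llms.BinaryPart("image/jpeg", imgBytes),
}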

Configuration and Environment

Most providers require API keys set as environment variables:

export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export GOOGLE_API_KEY="your-google-key"
export HUGGINGFACEHUB_API_TOKEN="your-hf-token"
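
Keys can also be passed explicitly through provider options instead of environment variables; a minimal sketch for the OpenAI client (option names differ per provider, and the model name is only an example):

import "github.com/vxcontrol/langchaingo/llms/openai"

llm, err := openai.New(
	openai.WithToken("your-openai-key"), // or load it from your own configuration
	openai.WithModel("gpt-4o"),          // optional model override; example value
)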

Error Handling

LangchainGo provides standardized error handling:

import "github.com/vxcontrol/langchaingo/llms"

if err != nil {
	if llms.IsAuthenticationError(err) {
		log.Fatal("Invalid API key")
	}
	if llms.IsRateLimitError(err) {
		log.Println("Rate limited, retrying...")
	}
}

Testing

LangchainGo includes comprehensive testing utilities, including HTTP record/replay for its internal tests. The httprr package provides deterministic testing of HTTP interactions:

import "github.com/vxcontrol/langchaingo/internal/httprr"

func TestMyFunction(t *testing.T) {
	rr := httprr.OpenForTest(t, http.DefaultTransport)
	defer rr.Close()

	client := rr.Client()
	// Use client for HTTP requests - they'll be recorded/replayed for deterministic testing
}

Examples

See the examples/ directory for complete working examples including:

  • Basic LLM usage
  • RAG (Retrieval Augmented Generation)
  • Agent workflows
  • Vector database integration
  • Multi-modal applications
  • Streaming responses
  • Function calling

Contributing

LangchainGo welcomes contributions! The project follows Go best practices and includes comprehensive testing, linting, and documentation standards.

See CONTRIBUTING.md for detailed guidelines.

Directories

  • agents: Package agents contains the standard interface all agents must implement, implementations of this interface, and an agent executor.
  • callbacks: Package callbacks includes a standard interface for hooking into various stages of your LLM application.
  • chains: Package chains contains a standard interface for chains, a number of built-in chains and functions for calling and running chains.
  • documentloaders: Package documentloaders includes a standard interface for loading documents from a source and implementations of this interface.
  • embeddings: Package embeddings contains helpers for creating vector embeddings from text using different providers.
  • exp: Package exp contains experimental code that is subject to change or removal.
  • httputil: Package httputil provides HTTP transport and client utilities for LangChainGo.
  • internal/devtools/lint (command): Package lint provides architectural linting for the LangChain Go codebase.
  • internal/devtools/rrtool (command)
  • internal/httprr: Package httprr implements HTTP record and replay, mainly for use in tests.
  • internal/testutil/testctr: Package testctr provides utilities for setting up testcontainers in tests.
  • jsonschema: Package jsonschema provides very simple functionality for representing a JSON schema as a (nested) struct.
  • llms: Package llms provides unified support for interacting with different Language Models (LLMs) from various providers.
  • llms/cache: Package cache provides a generic wrapper that adds caching to a `llms.Model`.
  • llms/compliance: Package compliance provides a test suite to verify provider implementations.
  • llms/ernie: Package ernie is a wrapper around the Baidu Large Language Model Platform APIs.
  • llms/googleai: Package googleai implements a langchaingo provider for Google AI LLMs.
  • llms/googleai/internal/cmd (command): Code generator for vertex.go from googleai.go.
  • llms/googleai/palm: Package palm implements a langchaingo provider for Google Vertex AI legacy PaLM models.
  • llms/googleai/vertex: Package vertex implements a langchaingo provider for Google Vertex AI LLMs, including the new Gemini models.
  • llms/local/internal/localclient: Package localclient provides a client for local LLMs.
  • llms/reasoning: Package reasoning provides primitives for working with reasoning content.
  • llms/streaming: Package streaming provides a streaming interface for LLMs.
  • memory: Package memory provides an interface for managing conversational data and a variety of implementations for storing and retrieving that data.
  • memory/sqlite3: Package sqlite3 adds support for chat message history using sqlite3.
  • memory/zep
  • outputparser: Package outputparser provides a set of output parsers to process structured or unstructured data from language models (LLMs).
  • prompts: Package prompts contains types, prompt templates, loading utilities, output parsers, example selectors, and other utilities for working with LLM prompts.
  • prompts/internal/fstring: Package fstring contains the f-string template format.
  • schema: Package schema implements a shared core set of data types for use in langchaingo.
  • textsplitter: Package textsplitter provides tools for splitting long texts into smaller chunks based on configurable rules and parameters.
  • tools: Package tools defines a standard interface for tools to be used by agents.
  • tools/duckduckgo: Package duckduckgo contains an implementation of the tool interface with the duckduckgo api client.
  • tools/metaphor: Package metaphor contains an implementation of the tool interface with the metaphor search api client.
  • tools/perplexity: Package perplexity provides integration with Perplexity AI's API for AI agents.
  • tools/scraper: Package scraper contains an implementation of the tool interface for a web scraping tool.
  • tools/serpapi: Package serpapi contains an implementation of the tool interface with the serpapi API.
  • tools/wikipedia: Package wikipedia contains an implementation of the tool interface with the wikipedia api.
  • tools/zapier: Package zapier contains an implementation of the tool interface with the zapier NLA api client.
  • util
  • vectorstores: Package vectorstores contains the implementation of VectorStore, an interface for saving and querying documents as vector embeddings.
  • vectorstores/azureaisearch: Package azureaisearch contains an implementation of the VectorStore interface that connects to Azure AI Search.
  • vectorstores/chroma: Package chroma contains an implementation of the VectorStore interface that connects to an external Chroma database.
  • vectorstores/inmemory: Package inmemory contains an in-memory implementation of the VectorStore interface.
  • vectorstores/mongovector: Package mongovector implements a vector store using MongoDB as the backend.
  • vectorstores/opensearch: Package opensearch contains an implementation of the VectorStore interface that connects to OpenSearch.
  • vectorstores/pgvector: Package pgvector contains an implementation of the VectorStore interface using pgvector.
  • vectorstores/pinecone: Package pinecone contains an implementation of the VectorStore interface using Pinecone.
  • vectorstores/qdrant: Package qdrant contains an implementation of the VectorStore interface using Qdrant.
  • vectorstores/redisvector: Package redisvector contains an implementation of the VectorStore interface using Redis.
  • vectorstores/weaviate: Package weaviate contains an implementation of the VectorStore interface using Weaviate.
