langfuse

package module
v1.3.2 Latest
Published: Apr 28, 2026 License: MIT Imports: 14 Imported by: 0

README

Langfuse Go SDK


Maintained by Optible

This is an unofficial Go client for Langfuse, designed to let you use Langfuse's services easily from your own applications.

Langfuse

Langfuse provides traces, evals, prompt management and metrics to debug and improve your LLM application.

Features

  • OpenTelemetry-native ingestion: Traces, Generations, Spans, and Events are exported as OTel spans via OTLP HTTP to Langfuse's /api/public/otel/v1/traces endpoint. The legacy /api/public/ingestion endpoint is no longer used.
  • Isolated TracerProvider: The SDK builds and owns its own sdktrace.TracerProvider and never registers it as the global provider, so it coexists cleanly with other OTel pipelines (Sentry, Datadog, etc.) without spans leaking in either direction.
  • Prompt Management: Fetch and cache prompts with version control and labels
  • Smart Caching: Built-in caching with configurable TTL for prompts
  • Fallback Support: Graceful degradation with fallback prompts
  • Type Safety: Strongly typed models for all API interactions
  • Context Support: Full Go context support for cancellation and timeouts

API Support

| Feature | Status | Description |
| --- | --- | --- |
| Trace | 🟢 | Root span exported via OTLP. Call TraceEnd to close. |
| Generation | 🟢 | LLM generation span with model, usage, and cost attributes |
| Span | 🟢 | Generic observation span within a trace |
| Event | 🟢 | Instantaneous observation (single point in time) |
| Score | 🟢 | Evaluations on traces/sessions. Sent via REST POST /api/public/scores. |
| DeleteScore | 🟢 | Delete scores by ID via REST |
| GetPrompt | 🟢 | Fetch prompts with caching, versioning, and labels |

Getting started

Installation

Add langfuse-go to your project with:

go get github.com/optible/langfuse-go
Configuration

Configure the SDK via environment variables:

| Variable | Required | Description |
| --- | --- | --- |
| LANGFUSE_HOST | no | Base URL of your Langfuse deployment. Defaults to https://cloud.langfuse.com. The SDK appends /api/public/otel/v1/traces for OTLP ingestion and /api/public/... paths for prompts and scores. |
| LANGFUSE_PUBLIC_KEY | yes | Public key (used as the Basic Auth username on every request). |
| LANGFUSE_SECRET_KEY | yes | Secret key (used as the Basic Auth password). |
| LANGFUSE_VERSION_4 | no | Set to true to send the x-langfuse-ingestion-version: 4 header on OTLP exports. Enables real-time appearance in the Langfuse v4 unified observations table. Leave unset for self-hosted Langfuse v3 deployments. |
| OTEL_SERVICE_NAME | no | Reported as the OTel service.name resource attribute. Defaults to langfuse-go. |
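The public/secret key pair is sent as standard HTTP Basic Auth. The SDK handles this internally from the environment variables above; the sketch below only illustrates how the Authorization header value is derived (the key values are made up):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// basicAuth builds an Authorization header value of the form
// "Basic base64(PUBLIC_KEY:SECRET_KEY)", which is how Basic Auth
// credentials are encoded on every Langfuse request.
func basicAuth(publicKey, secretKey string) string {
	creds := publicKey + ":" + secretKey
	return "Basic " + base64.StdEncoding.EncodeToString([]byte(creds))
}

func main() {
	fmt.Println(basicAuth("pk-lf-123", "sk-lf-456"))
}
```

This is useful mainly for debugging with curl or a proxy: the same header should appear on both OTLP exports and REST calls.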
Coexisting with Sentry or other OTel tracers

The SDK builds its own TracerProvider and never calls otel.SetTracerProvider. Sentry's tracing keeps using the global provider; Langfuse uses ours. No span filtering rules are required — Langfuse spans never reach Sentry, and Sentry's HTTP/DB spans never reach Langfuse. The tradeoff is that parent-child relationships do not span the two providers, which is fine for typical usage where Langfuse only needs LLM-specific operations.

Usage

Please refer to the examples folder to see how to use the SDK.

Lifecycle
  • langfuse.New(ctx) constructs the client. The OTel TracerProvider is built lazily on the first observation call.
  • Always call defer l.Shutdown(ctx) on process exit. Shutdown flushes pending spans and tears down the exporter. Flush(ctx) alone does not close open root spans.
  • For accurate trace duration in the Langfuse UI, call l.TraceEnd(traceID) when you're done with a trace. Otherwise the root span remains open until Shutdown.
Behavior changes from earlier versions
  • OTLP transport. Observations are no longer sent through /api/public/ingestion (deprecated by Langfuse). The SDK now emits OTel spans to /api/public/otel/v1/traces.
  • Score is synchronous. It now performs a POST /api/public/scores and returns errors directly. Previously, score creation was queued in a background batcher and failures were silently swallowed.
  • TraceEnd(traceID) is new. Without it, the root span only ends on Shutdown, which makes traces look longer than they were.
  • Shutdown(ctx) is new and required for clean process exit.
  • WithFlushInterval is a no-op retained for compatibility. The OTel batcher schedules itself.
Basic Ingestion Example

Here's a simple example showing how to create traces, spans, generations, events, and scores:

package main

import (
	"context"

	"github.com/optible/langfuse-go"
	"github.com/optible/langfuse-go/model"
)

func main() {
	ctx := context.Background()
	l := langfuse.New(ctx)
	defer l.Shutdown(ctx)

	// Create a trace
	trace, err := l.Trace(&model.Trace{
		Name:      "my-llm-app",
		SessionID: "user-session-123",
	})
	if err != nil {
		panic(err)
	}
	defer l.TraceEnd(trace.ID)

	// Create a span within the trace
	span, err := l.Span(&model.Span{
		Name:    "data-processing",
		TraceID: trace.ID,
	}, nil)
	if err != nil {
		panic(err)
	}

	// Track an LLM generation
	generation, err := l.Generation(
		&model.Generation{
			TraceID: trace.ID,
			Name:    "chat-completion",
			Model:   "gpt-3.5-turbo",
			ModelParameters: model.M{
				"maxTokens":   "1000",
				"temperature": "0.9",
			},
			Input: []model.M{
				{
					"role":    "system",
					"content": "You are a helpful assistant.",
				},
				{
					"role":    "user",
					"content": "Please generate a summary of the following documents...",
				},
			},
			Metadata: model.M{
				"environment": "production",
			},
		},
		&span.ID,
	)
	if err != nil {
		panic(err)
	}

	// Log an event
	_, err = l.Event(
		&model.Event{
			Name:    "user-feedback",
			TraceID: trace.ID,
			Input: model.M{
				"feedback": "positive",
			},
		},
		&generation.ID,
	)
	if err != nil {
		panic(err)
	}

	// Update generation with output
	generation.Output = model.M{
		"completion": "Here is the summary...",
	}
	_, err = l.GenerationEnd(generation)
	if err != nil {
		panic(err)
	}

	// Add a score
	score, err := l.Score(
		&model.Score{
			TraceID: trace.ID,
			Name:    "quality-score",
			Value:   0.95,
			Comment: "High quality response",
		},
	)
	if err != nil {
		panic(err)
	}
	_ = score // keep a handle for the optional deletion below

	// Delete a score (optional)
	// err = l.DeleteScore(ctx, score.ID)
	// if err != nil {
	// 	panic(err)
	// }

	// End the span
	_, err = l.SpanEnd(span)
	if err != nil {
		panic(err)
	}

	// l.TraceEnd(trace.ID) and l.Shutdown(ctx) run via defer above.
	// Flush() can be called explicitly to force-export pending spans, but
	// Shutdown() also flushes, so it's only needed for long-running processes.
	l.Flush(ctx)
}
Score Deletion Example

The SDK supports deleting scores after they've been created. This is useful for removing incorrect or outdated scores:

package main

import (
	"context"

	"github.com/optible/langfuse-go"
	"github.com/optible/langfuse-go/model"
)

func main() {
	ctx := context.Background()
	l := langfuse.New(ctx)
	defer l.Shutdown(ctx)

	// Create a trace
	trace, err := l.Trace(&model.Trace{
		Name: "my-llm-app",
	})
	if err != nil {
		panic(err)
	}
	defer l.TraceEnd(trace.ID)

	// Add a score and capture its ID. Score is synchronous and returns
	// any HTTP error directly.
	score, err := l.Score(&model.Score{
		TraceID: trace.ID,
		Name:    "quality-score",
		Value:   0.95,
		Comment: "Initial quality assessment",
	})
	if err != nil {
		panic(err)
	}

	// Delete the score if needed (e.g., if it was incorrect)
	if err := l.DeleteScore(ctx, score.ID); err != nil {
		panic(err)
	}
}

Note: Score deletion is asynchronous on the Langfuse backend. The score may remain visible for a short time after deletion.

Prompt Management Example

The SDK includes powerful prompt management capabilities with caching, versioning, and fallback support:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/optible/langfuse-go"
	"github.com/optible/langfuse-go/model"
)

func main() {
	ctx := context.Background()

	// Create client with custom cache TTL
	l := langfuse.New(ctx).WithPromptCacheTTL(10 * time.Minute)
	defer l.Shutdown(ctx)

	// Fetch a prompt (defaults to "production" label)
	prompt, err := l.GetPrompt(ctx, "movie-critic", nil)
	if err != nil {
		panic(err)
	}

	fmt.Printf("Fetched prompt: %s (version %d)\n", prompt.GetName(), prompt.GetVersion())

	// Use text prompts
	if prompt.IsText() {
		compiled := prompt.TextPrompt.Compile(map[string]string{
			"movie": "The Matrix",
			"style": "technical",
		})
		fmt.Printf("Compiled prompt: %s\n", compiled)
	}

	// Use chat prompts
	if prompt.IsChat() {
		messages := prompt.ChatPrompt.Compile(map[string]string{
			"movie": "The Matrix",
			"style": "technical",
		})
		for i, msg := range messages {
			fmt.Printf("Message %d [%s]: %s\n", i+1, msg.Role, msg.Content)
		}
	}

	// Fetch a specific version
	version := 2
	promptV2, err := l.GetPrompt(ctx, "movie-critic", &langfuse.GetPromptOptions{
		Version: &version,
	})
	if err != nil {
		panic(err)
	}
	_ = promptV2 // pinned to version 2

	// Fetch by label (e.g., "staging")
	label := "staging"
	promptStaging, err := l.GetPrompt(ctx, "movie-critic", &langfuse.GetPromptOptions{
		Label: &label,
	})
	if err != nil {
		panic(err)
	}
	_ = promptStaging // "staging"-labeled prompt

	// Use fallback prompt for high availability
	promptWithFallback, err := l.GetPrompt(ctx, "movie-critic", &langfuse.GetPromptOptions{
		FallbackPrompt: &model.Prompt{
			TextPrompt: &model.TextPrompt{
				Name:    "movie-critic",
				Version: 0,
				Prompt:  "Please review {{movie}} in a {{style}} style.",
				Type:    model.PromptTypeText,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	_ = promptWithFallback // falls back to the inline prompt if the fetch fails

	// Force refresh (bypass cache)
	promptFresh, err := l.GetPrompt(ctx, "movie-critic", &langfuse.GetPromptOptions{
		ForceRefresh: true,
	})
	if err != nil {
		panic(err)
	}
	_ = promptFresh // freshly fetched, cache bypassed

	// Clear cache when needed
	l.ClearPromptCache()
}
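The Compile calls above substitute `{{variable}}` placeholders into the prompt text. As a rough sketch of that behavior (assuming double-brace placeholders; the SDK's actual implementation may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// compile replaces each {{name}} placeholder in template with the
// corresponding value from vars. Placeholders with no matching
// variable are left untouched.
func compile(template string, vars map[string]string) string {
	for name, value := range vars {
		template = strings.ReplaceAll(template, "{{"+name+"}}", value)
	}
	return template
}

func main() {
	out := compile("Please review {{movie}} in a {{style}} style.", map[string]string{
		"movie": "The Matrix",
		"style": "technical",
	})
	fmt.Println(out)
}
```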

Documentation

Overview

Package langfuse is an OpenTelemetry-backed client for Langfuse. Trace, Generation, Span, and Event observations are exported as OTel spans via OTLP HTTP to /api/public/otel/v1/traces. Prompts and Scores continue to use Langfuse's REST endpoints.

The SDK builds its own TracerProvider and never registers it as the global provider, so it can run alongside other OTel pipelines (e.g. Sentry) without cross-pollination.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type GetPromptOptions

type GetPromptOptions struct {
	// Version specifies which version of the prompt to fetch.
	// If not set, fetches by label (defaults to "production").
	Version *int

	// Label specifies which label to fetch (e.g., "production", "staging").
	// If not set and Version is not set, defaults to "production".
	Label *string

	// FallbackPrompt is returned if the prompt cannot be fetched from the API.
	// This provides guaranteed availability even during network issues.
	FallbackPrompt *model.Prompt

	// CacheTTL overrides the default cache TTL for this specific request.
	// If not set, uses the client's default cache TTL.
	CacheTTL *time.Duration

	// FetchTimeout sets the timeout for the API request.
	// If not set, uses the context's deadline.
	FetchTimeout *time.Duration

	// ForceRefresh bypasses the cache and fetches directly from the API.
	ForceRefresh bool
}

GetPromptOptions contains options for fetching a prompt

type Langfuse

type Langfuse struct {
	// contains filtered or unexported fields
}

func New

func New(ctx context.Context) *Langfuse

New constructs a Langfuse client. The TracerProvider is built lazily on the first observation call so that misconfiguration (e.g. missing keys in tests) surfaces only when the SDK is actually used.

func (*Langfuse) ClearPromptCache

func (l *Langfuse) ClearPromptCache()

ClearPromptCache clears all cached prompts.

func (*Langfuse) DeleteScore added in v1.2.0

func (l *Langfuse) DeleteScore(ctx context.Context, scoreID string) error

DeleteScore deletes a score by its ID.

func (*Langfuse) Event

func (l *Langfuse) Event(e *model.Event, parentID *string) (*model.Event, error)

Event records an instantaneous observation.

func (*Langfuse) Flush

func (l *Langfuse) Flush(ctx context.Context)

Flush blocks until queued spans have been exported. Equivalent to OTel's ForceFlush on the underlying TracerProvider.

func (*Langfuse) Generation

func (l *Langfuse) Generation(g *model.Generation, parentID *string) (*model.Generation, error)

Generation opens a generation span. If g.EndTime is set at call time, the span is started and ended in one shot; otherwise it stays open until GenerationEnd is called.

func (*Langfuse) GenerationEnd

func (l *Langfuse) GenerationEnd(g *model.Generation) (*model.Generation, error)

GenerationEnd closes a generation span previously opened by Generation.

func (*Langfuse) GetPrompt

func (l *Langfuse) GetPrompt(ctx context.Context, name string, opts *GetPromptOptions) (*model.Prompt, error)

GetPrompt fetches a prompt by name. By default, fetches the "production" labeled version. Uses caching to minimize API calls.

func (*Langfuse) Score

func (l *Langfuse) Score(s *model.Score) (*model.Score, error)

Score creates a score via Langfuse's REST API. Scores are not part of the OTel ingestion path.

func (*Langfuse) Shutdown added in v1.2.3

func (l *Langfuse) Shutdown(ctx context.Context) error

Shutdown flushes and tears down the OTel TracerProvider. Call on process exit. After Shutdown, the Langfuse client must not be reused.

func (*Langfuse) Span

func (l *Langfuse) Span(s *model.Span, parentID *string) (*model.Span, error)

Span opens a span observation.

func (*Langfuse) SpanEnd

func (l *Langfuse) SpanEnd(s *model.Span) (*model.Span, error)

SpanEnd closes a span observation previously opened by Span.

func (*Langfuse) Trace

func (l *Langfuse) Trace(t *model.Trace) (*model.Trace, error)

Trace opens a root span representing a Langfuse trace. The returned trace's ID is the caller-supplied ID (or a fresh UUID) and is used as the key for later observations and TraceEnd.

func (*Langfuse) TraceEnd added in v1.2.3

func (l *Langfuse) TraceEnd(traceID string) error

TraceEnd closes the root span for traceID. Calling it ensures the trace's duration reflects real work rather than the SDK's process lifetime.

func (*Langfuse) WithFlushInterval

func (l *Langfuse) WithFlushInterval(_ time.Duration) *Langfuse

WithFlushInterval is retained for API compatibility. The OTel batcher uses its own scheduling; this is a no-op.

func (*Langfuse) WithPromptCacheTTL

func (l *Langfuse) WithPromptCacheTTL(ttl time.Duration) *Langfuse

WithPromptCacheTTL sets the default cache TTL for prompts. The default is 5 minutes.

Directories

Path Synopsis
examples
cmd/ingestion command
cmd/prompt command
internal
pkg/api
Package api is a thin REST client for the Langfuse endpoints that are not covered by OTLP ingestion: prompt management and scores.
pkg/otel
Package otel wires up an isolated OpenTelemetry TracerProvider that exports spans to Langfuse via OTLP HTTP.
