sochdb

package module
v0.4.5
Published: Feb 16, 2026 License: Apache-2.0 Imports: 20 Imported by: 0

README

SochDB Go SDK

LLM-Optimized Embedded Database with Native Vector Search


Installation

Step 1: Install the Native Library
# Automatic installation (installs to /usr/local/lib)
SOCHDB_ROOT=/path/to/sochdb ./install-lib.sh

If automatic installation doesn't work, see manual installation steps in Troubleshooting.

Step 2: Install the Go SDK
go get github.com/sochdb/sochdb-go
Step 3: Install pkg-config (if not already installed)

macOS:

brew install pkg-config

Linux:

# Ubuntu/Debian
sudo apt-get install pkg-config

# Fedora/RHEL
sudo yum install pkgconfig
Verify Installation
# Check if library is found
pkg-config --libs libsochdb_storage

# Expected output: -L/usr/local/lib -lsochdb_storage

Architecture: Flexible Deployment

Flexible architecture: embedded (FFI), concurrent embedded, and server (gRPC/IPC) modes.
Choose the deployment mode that fits your needs.


SochDB Go SDK Documentation

Version 0.4.5 — LLM-Optimized Embedded Database with Native Vector Search


Table of Contents

  1. Quick Start
  2. Features
  3. Architecture
  4. System Requirements
  5. Troubleshooting
  6. API Reference

1. Quick Start

Concurrent Embedded Mode

For web applications with multiple workers/processes:

import "github.com/sochdb/sochdb-go/embedded"

// Open in concurrent mode - multiple processes can access simultaneously
db, err := embedded.OpenConcurrent("./web_db")
if err != nil {
    log.Fatal(err)
}
defer db.Close()

// Reads are lock-free and parallel (~100ns)
value, err := db.Get([]byte("user:123"))

// Writes are automatically coordinated
err = db.Put([]byte("user:123"), []byte(`{"name": "Alice"}`))

// Check if concurrent mode is active
fmt.Printf("Concurrent mode: %v\n", db.IsConcurrent())  // true
Gin/Echo Example (Multiple Workers)
package main

import (
    "github.com/gin-gonic/gin"
    "github.com/sochdb/sochdb-go/embedded"
    "log"
)

func main() {
    // Open database in concurrent mode
    db, err := embedded.OpenConcurrent("./gin_db")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
    
    r := gin.Default()
    
    r.GET("/user/:id", func(c *gin.Context) {
        // Multiple concurrent requests can read simultaneously
        key := []byte("user:" + c.Param("id"))
        data, err := db.Get(key)
        if err != nil || data == nil {
            c.JSON(404, gin.H{"error": "not found"})
            return
        }
        c.Data(200, "application/json", data)
    })
    
    r.POST("/user/:id", func(c *gin.Context) {
        // Writes are serialized automatically
        key := []byte("user:" + c.Param("id"))
        body, _ := c.GetRawData()
        if err := db.Put(key, body); err != nil {
            c.JSON(500, gin.H{"error": err.Error()})
            return
        }
        c.JSON(200, gin.H{"status": "ok"})
    })
    
    // Start with multiple workers (e.g., via systemd or Docker)
    // Each worker process can access the database concurrently
    r.Run(":8080")
}
Performance
Operation               Standard Mode    Concurrent Mode
Read (single process)   ~100ns           ~100ns
Read (multi-process)    Blocked          ~100ns ✅
Write                   ~5ms (fsync)     ~60µs (amortized)
Max concurrent readers  1                1024
Deployment with Systemd (Multiple Workers)
# /etc/systemd/system/myapp@.service
[Unit]
Description=MyApp Worker %i
After=network.target

[Service]
Type=simple
User=appuser
WorkingDirectory=/opt/myapp
ExecStart=/opt/myapp/server
Restart=always

[Install]
WantedBy=multi-user.target
# Start 4 worker instances (all can access same DB concurrently)
sudo systemctl start myapp@1
sudo systemctl start myapp@2
sudo systemctl start myapp@3
sudo systemctl start myapp@4

# Enable on boot
sudo systemctl enable myapp@{1,2,3,4}
Docker Compose Example
version: '3.8'
services:
  app:
    build: .
    deploy:
      replicas: 4  # 4 workers share the same database
    volumes:
      - ./data:/app/data  # Shared database volume
    ports:
      - "8080-8083:8080"

Engine Status

Component              Status
Cost-based optimizer   ✅ Production-ready — full cost model, cardinality estimation, plan caching
Adaptive group commit  ✅ Implemented — Little's Law-based batch sizing
WAL compaction         ⚠️ Partial — manual Checkpoint() + TruncateWAL() available
HNSW vector index      ✅ Production-ready — CGo FFI bindings

Features

Semantic Cache - LLM Response Caching

Vector similarity-based caching for LLM responses to reduce costs and latency:

import (
    sochdb "github.com/sochdb/sochdb-go"
    "github.com/sochdb/sochdb-go/embedded"
)

db, _ := embedded.Open("./mydb")
defer db.Close()

cache := sochdb.NewSemanticCache(db, "llm_responses")

// Store LLM response with embedding
cache.Put(
    "What is machine learning?",
    "Machine learning is a subset of AI...",
    []float32{0.1, 0.2, ...},  // 384-dim vector
    3600,  // TTL in seconds
    map[string]interface{}{"model": "gpt-4", "tokens": 150},
)

// Check cache before calling LLM
hit, _ := cache.Get(queryEmbedding, 0.85)
if hit != nil {
    fmt.Printf("Cache HIT! Similarity: %.4f\n", hit.Score)
    fmt.Printf("Response: %s\n", hit.Value)
}

// Get statistics
stats, _ := cache.Stats()
fmt.Printf("Hit rate: %.1f%%\n", stats.HitRate*100)

Key Benefits:

  • ✅ Cosine similarity matching (0-1 threshold)
  • ✅ TTL-based expiration
  • ✅ Hit/miss statistics tracking
  • ✅ Memory usage monitoring
  • ✅ Automatic expired entry purging
Context Query Builder - Token-Aware LLM Context

Assemble LLM context with priority-based truncation and token budgeting:

import sochdb "github.com/sochdb/sochdb-go"

builder := sochdb.NewContextQueryBuilder().
    WithBudget(4096).  // Token limit
    SetFormat(sochdb.FormatTOON).
    SetTruncation(sochdb.TailDrop)

builder.
    Literal("SYSTEM", 0, "You are a helpful AI assistant.").
    Literal("USER_PROFILE", 1, "User: Alice, Role: Engineer").
    Literal("HISTORY", 2, "Recent conversation context...").
    Literal("KNOWLEDGE", 3, "Retrieved documents...")

result, _ := builder.Execute()
fmt.Printf("Tokens: %d/%d\n", result.TokenCount, 4096)
fmt.Printf("Context:\n%s\n", result.Text)

Key Benefits:

  • ✅ Priority-based section ordering (lower = higher priority)
  • ✅ Token budget enforcement
  • ✅ Multiple truncation strategies (tail drop, head drop, proportional)
  • ✅ Multiple output formats (TOON, JSON, Markdown)
  • ✅ Token count estimation
Namespace API - Multi-Tenant Isolation

First-class namespace handles for secure multi-tenancy and data isolation:

import (
    sochdb "github.com/sochdb/sochdb-go"
    "github.com/sochdb/sochdb-go/embedded"
)

db, _ := embedded.Open("./mydb")
defer db.Close()

// Create an isolated namespace for each tenant
ns, _ := db.CreateNamespace("tenant_1")

// Create vector collection
collection, _ := ns.CreateCollection(sochdb.CollectionConfig{
    Name:      "documents",
    Dimension: 384,
    Metric:    sochdb.DistanceMetricCosine,
    Indexed:   true,
})

// Insert and search vectors
collection.Insert([]float32{1.0, 2.0, ...}, map[string]interface{}{"title": "Doc 1"}, "")
results, _ := collection.Search(sochdb.SearchRequest{
    QueryVector: []float32{...},
    K:           10,
})

→ See Full Example

Priority Queue API - Task Processing

Efficient priority queue with ordered-key storage (no O(N) blob rewrites):

import (
    sochdb "github.com/sochdb/sochdb-go"
    "github.com/sochdb/sochdb-go/embedded"
)

db, _ := embedded.Open("./queue_db")
defer db.Close()

queue := sochdb.NewPriorityQueue(db, "tasks", nil)

// Enqueue with priority (lower = higher urgency)
taskID, _ := queue.Enqueue(1, []byte("urgent task"), map[string]interface{}{"type": "payment"})

// Worker processes tasks
task, _ := queue.Dequeue("worker-1")
if task != nil {
    // Process task...
    queue.Ack(task.TaskID)
}

// Get statistics
stats, _ := queue.Stats()
fmt.Printf("Pending: %d, Completed: %d\n", stats.Pending, stats.Completed)

→ See Full Example

Key Benefits:

  • ✅ O(log N) enqueue/dequeue with ordered scans
  • ✅ Atomic claim protocol for concurrent workers
  • ✅ Visibility timeout for crash recovery
  • ✅ Dead letter queue for failed tasks
  • ✅ Multiple queues per database

Architecture: Flexible Deployment

┌─────────────────────────────────────────────────────────────┐
│                    DEPLOYMENT OPTIONS                        │
├─────────────────────────────────────────────────────────────┤
│                                                               │
│  1. EMBEDDED MODE (FFI)          2. SERVER MODE (gRPC)      │
│  ┌─────────────────────┐         ┌─────────────────────┐   │
│  │   Go App            │         │   Go App            │   │
│  │   ├─ Database.open()│         │   ├─ SochDBClient() │   │
│  │   └─ Direct FFI     │         │   └─ gRPC calls     │   │
│  │         │           │         │         │           │   │
│  │         ▼           │         │         ▼           │   │
│  │   libsochdb_storage │         │   sochdb-grpc       │   │
│  │   (Rust native)     │         │   (Rust server)     │   │
│  └─────────────────────┘         └─────────────────────┘   │
│                                                               │
│  ✅ No server needed               ✅ Multi-language          │
│  ✅ Local files                    ✅ Centralized logic      │
│  ✅ Simple deployment              ✅ Production scale       │
└─────────────────────────────────────────────────────────────┘
When to Use Each Mode

Embedded Mode (FFI):

  • ✅ Local development and testing
  • ✅ Jupyter notebooks and data science
  • ✅ Single-process applications
  • ✅ Edge deployments without network
  • ✅ No server setup required

Server Mode (gRPC):

  • ✅ Production deployments
  • ✅ Multi-language teams (Python, Node.js, Go)
  • ✅ Distributed systems
  • ✅ Centralized business logic
  • ✅ Horizontal scaling


System Requirements

For Concurrent Mode
  • SochDB Core: Latest version
  • Go Version: 1.18+
  • CGO: Required (uses C bindings)
  • Native Library: libsochdb_storage.{dylib,so}

Operating Systems:

  • ✅ Linux (Ubuntu 20.04+, RHEL 8+)
  • ✅ macOS (10.15+, both Intel and Apple Silicon)
  • ⚠️ Windows (requires WSL2 or native builds with MinGW)

File Descriptors:

  • Default limit: 1024 (sufficient for most workloads)
  • For high concurrency: Increase with ulimit -n 4096

Memory:

  • Standard mode: ~50MB base + data
  • Concurrent mode: +4KB per concurrent reader slot (1024 slots = ~4MB overhead)

Troubleshooting

"Database is locked" Error (Standard Mode)
Error: database is locked by another process

Solution: Use concurrent mode for multi-process access:

// ❌ Standard mode - only one process allowed
db, _ := embedded.Open("./data.db")

// ✅ Concurrent mode - multiple processes (up to 1024 concurrent readers)
db, _ := embedded.OpenConcurrent("./data.db")
Library Not Found Error
ld: library not found for -lsochdb_storage

Solution 1 - Manual installation:

# Build the native library
cd /path/to/sochdb
cargo build --release

# Copy to system location (macOS/Linux)
sudo cp target/release/libsochdb_storage.{dylib,so} /usr/local/lib/
sudo ldconfig  # Linux only

# Create pkg-config file
cat > /tmp/libsochdb_storage.pc <<EOF
prefix=/usr/local
exec_prefix=\${prefix}
libdir=\${exec_prefix}/lib
includedir=\${prefix}/include

Name: libsochdb_storage
Description: SochDB Native Storage Library
Libs: -L\${libdir} -lsochdb_storage
Cflags: -I\${includedir}
EOF

sudo mv /tmp/libsochdb_storage.pc /usr/local/lib/pkgconfig/

Solution 2 - Development mode (no installation):

export DYLD_LIBRARY_PATH=/path/to/sochdb/target/release  # macOS
export LD_LIBRARY_PATH=/path/to/sochdb/target/release    # Linux
export CGO_LDFLAGS="-L/path/to/sochdb/target/release -lsochdb_storage"
CGO Not Found
go: C compiler "gcc" not found

macOS:

xcode-select --install

Linux:

# Ubuntu/Debian
sudo apt-get install build-essential

# RHEL/Fedora
sudo yum groupinstall "Development Tools"
Performance Issues

Symptom: Concurrent reads slower than expected

Check 1 - Verify concurrent mode is active:

if !db.IsConcurrent() {
    log.Fatal("Database opened in standard mode!")
}

Check 2 - Monitor reader slot usage:

# Enable debug logging in SochDB core
export RUST_LOG=sochdb_storage=debug

Check 3 - Tune write batching:

// Batch writes for better throughput
value := []byte("payload")
tx := db.Begin()
for i := 0; i < 1000; i++ {
    tx.Put([]byte(fmt.Sprintf("key%d", i)), value)
}
tx.Commit() // Single fsync for entire batch


API Reference

Version 0.4.5 — Complete API documentation with Go examples.

All core logic runs in the Rust engine via CGo FFI. The SDK is a thin client.


Core Key-Value Operations
package main

import (
    "fmt"
    "log"
    "github.com/sochdb/sochdb-go/embedded"
)

func main() {
    db, err := embedded.Open("./mydb")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Put / Get / Delete
    db.Put([]byte("user:1"), []byte(`{"name":"Alice"}`))
    value, _ := db.Get([]byte("user:1"))
    fmt.Println(string(value)) // {"name":"Alice"}
    db.Delete([]byte("user:1"))

    // Path-based keys (hierarchical)
    db.PutPath("/users/alice/profile", []byte(`{"age":30}`))
    profile, _ := db.GetPath("/users/alice/profile")
    fmt.Println(string(profile))
}

Transactions (ACID with SSI)

SochDB uses Serializable Snapshot Isolation for full ACID transactions:

db, _ := embedded.Open("./mydb")
defer db.Close()

// Auto-managed transaction
err := db.WithTransaction(func(txn *embedded.Transaction) error {
    txn.Put([]byte("key1"), []byte("val1"))
    txn.Put([]byte("key2"), []byte("val2"))
    val, _ := txn.Get([]byte("key1"))
    fmt.Println(string(val))
    return nil // auto-commits; return error to abort
})

// Manual transaction control
txn := db.Begin()
txn.Put([]byte("balance:alice"), []byte("100"))
txn.Put([]byte("balance:bob"), []byte("200"))
if err := txn.Commit(); err != nil {
    // SSI conflict — retry
    fmt.Println("Conflict:", err)
}

Prefix Scanning
db, _ := embedded.Open("./mydb")
defer db.Close()

// Insert test data
db.Put([]byte("user:1"), []byte("Alice"))
db.Put([]byte("user:2"), []byte("Bob"))
db.Put([]byte("user:3"), []byte("Charlie"))

// Scan all keys with prefix (auto-transaction)
iter := db.ScanPrefix([]byte("user:"))
defer iter.Close()

for {
    key, value, ok := iter.Next()
    if !ok {
        break
    }
    fmt.Printf("%s = %s\n", key, value)
}

// Transaction-scoped scan
txn := db.Begin()
it := txn.ScanPrefix([]byte("order:"))
for {
    k, v, ok := it.Next()
    if !ok {
        break
    }
    fmt.Printf("%s = %s\n", k, v)
}
it.Close() // close the iterator before committing
txn.Commit()

Namespaces & Collections

Multi-tenant isolation with vector-enabled collections:

import (
    sochdb "github.com/sochdb/sochdb-go"
    "github.com/sochdb/sochdb-go/embedded"
)

db, _ := embedded.Open("./mydb")
defer db.Close()

// Create a namespace
ns := sochdb.NewNamespace(db, "tenant_1", sochdb.NamespaceConfig{
    Name:        "tenant_1",
    DisplayName: "Tenant One",
    Labels:      map[string]string{"tier": "premium"},
})

// Create a collection with vector search
docs, _ := ns.CreateCollection(sochdb.CollectionConfig{
    Name:      "documents",
    Dimension: 384,
    Metric:    sochdb.DistanceMetricCosine,
    Indexed:   true,
})

// Insert vectors with metadata
id, _ := docs.Insert(
    []float32{0.1, 0.2, 0.3 /* ...384 dims */},
    map[string]interface{}{"title": "Introduction to AI", "author": "Alice"},
    "", // auto-generate ID
)

// Batch insert
ids, _ := docs.InsertMany(
    [][]float32{{0.1, 0.2}, {0.3, 0.4}},
    []map[string]interface{}{{"title": "Doc 1"}, {"title": "Doc 2"}},
    nil, // auto-generate IDs
)

// Search with filter
results, _ := docs.Search(sochdb.SearchRequest{
    QueryVector:     []float32{0.1, 0.2, 0.3 /* ... */},
    K:               5,
    Filter:          map[string]interface{}{"author": "Alice"},
    IncludeMetadata: true,
})
for _, r := range results {
    fmt.Printf("%s: score=%.4f\n", r.ID, r.Score)
}

// Collection management
count, _ := docs.Count()
collections, _ := ns.ListCollections()
ns.DeleteCollection("documents")

Priority Queues

Ordered task queues with priority-based dequeue, ack/nack, and dead-letter support:

import (
    sochdb "github.com/sochdb/sochdb-go"
    "github.com/sochdb/sochdb-go/embedded"
)

db, _ := embedded.Open("./mydb")
defer db.Close()

// Create a queue
queue := sochdb.CreateQueue(db, "background-jobs", &sochdb.QueueConfig{
    VisibilityTimeout: 30,
    MaxRetries:        3,
    DeadLetterQueue:   "failed-jobs",
})

// Enqueue tasks (lower priority = higher urgency)
taskID, _ := queue.Enqueue(
    1,
    []byte(`{"action":"send_email","to":"alice@example.com"}`),
    map[string]interface{}{"source": "api"},
)

queue.Enqueue(10, []byte("low priority"), nil)
queue.Enqueue(1, []byte("high priority"), nil)

// Dequeue highest priority task
task, _ := queue.Dequeue("worker-1")
if task != nil {
    fmt.Printf("Processing: %s (priority: %d)\n", task.TaskID, task.Priority)
    fmt.Printf("Payload: %s\n", task.Payload)

    // Process task...
    if err := processTask(task); err != nil {
        queue.Nack(task.TaskID) // Re-queue for retry
    } else {
        queue.Ack(task.TaskID) // Mark completed
    }
}

// Queue statistics
stats, _ := queue.Stats()
fmt.Printf("Pending: %d, Claimed: %d, Completed: %d\n",
    stats.Pending, stats.Claimed, stats.Completed)

// Purge completed/dead-lettered tasks
purged, _ := queue.Purge()
fmt.Printf("Purged %d tasks\n", purged)

Semantic Cache

Cache LLM responses with vector-similarity retrieval:

import (
    sochdb "github.com/sochdb/sochdb-go"
    "github.com/sochdb/sochdb-go/embedded"
)

db, _ := embedded.Open("./cache_db")
defer db.Close()

cache := sochdb.NewSemanticCache(db, "llm_responses")

// Store response with embedding
cache.Put(
    "What is machine learning?",
    "Machine learning is a subset of AI...",
    []float32{0.1, 0.2, 0.3 /* ... */}, // embedding
    3600,  // TTL seconds
    map[string]interface{}{"model": "gpt-4", "tokens": 42},
)

// Check cache before calling LLM
hit, _ := cache.Get(queryEmbedding, 0.85)
if hit != nil {
    fmt.Printf("Cache HIT (score: %.4f)\n", hit.Score)
    fmt.Printf("Response: %s\n", hit.Value)
}

// Cache management
stats, _ := cache.Stats()
fmt.Printf("Hit rate: %.1f%%\n", stats.HitRate*100)
cache.PurgeExpired()
cache.Clear()

Graph Operations (IPC / gRPC)

Graph overlay via IPC or gRPC client:

// IPC Client
client, _ := sochdb.Connect("/tmp/sochdb.sock")
defer client.Close()

// Add nodes
client.AddNode("social", "alice", "person", map[string]string{"name": "Alice"})
client.AddNode("social", "bob", "person", map[string]string{"name": "Bob"})

// Add edges
client.AddEdge("social", "alice", "works_at", "acme", nil)
client.AddEdge("social", "alice", "knows", "bob", nil)

// Graph traversal (BFS/DFS)
result, _ := client.Traverse("social", "alice", 2, "bfs")
fmt.Printf("Nodes: %d, Edges: %d\n", len(result.Nodes), len(result.Edges))

// gRPC Client
grpc, _ := sochdb.GrpcConnect("localhost:50051")
defer grpc.Close()

grpc.AddGraphNode("alice", "person", map[string]string{"name": "Alice"}, "social")
grpc.AddGraphEdge("alice", "knows", "bob", nil, "social")
nodes, edges, _ := grpc.TraverseGraph("alice", 2, "bfs", "social")

Context Query Builder

Token-budget-aware context assembly for LLM prompts:

builder := sochdb.NewContextQueryBuilder()

result, _ := builder.
    ForSession("session_123").
    WithBudget(4096).
    SetFormat(sochdb.FormatMarkdown).
    SetTruncation(sochdb.Proportional).
    Literal("system", 100, "You are a helpful assistant.").
    Literal("user_query", 90, "Tell me about SochDB").
    Execute()

fmt.Printf("Tokens used: %d\n", result.TokenCount)
fmt.Println(result.Text)

Memory System (LLM-Native)

Complete memory for AI agents — extraction, consolidation, hybrid retrieval:

import (
    sochdb "github.com/sochdb/sochdb-go"
    "github.com/sochdb/sochdb-go/embedded"
)

db, _ := embedded.Open("./memory_db")
defer db.Close()

// 1. Extract entities and relations
pipeline := sochdb.NewExtractionPipeline(db, "user_123", &sochdb.ExtractionSchema{
    EntityTypes:   []string{"person", "organization"},
    MinConfidence: 0.7,
})

extracted, _ := pipeline.ExtractAndCommit(
    "Alice joined Acme Corp",
    func(text string) (map[string]interface{}, error) {
        return map[string]interface{}{
            "entities": []map[string]interface{}{
                {"name": "Alice", "entity_type": "person", "confidence": 0.95},
                {"name": "Acme Corp", "entity_type": "organization", "confidence": 0.9},
            },
        }, nil
    },
)
fmt.Printf("Extracted %d entities\n", len(extracted.Entities))

// 2. Consolidate facts
consolidator := sochdb.NewConsolidator(db, "user_123", nil)
consolidator.Add(&sochdb.RawAssertion{
    Fact:       map[string]interface{}{"subject": "Alice", "predicate": "lives_in", "object": "SF"},
    Source:     "conversation_1",
    Confidence: 0.9,
})
consolidator.Consolidate()
facts, _ := consolidator.GetCanonicalFacts()

// 3. Hybrid retrieval (vector + BM25 with RRF fusion)
retriever := sochdb.NewHybridRetriever(db, "user_123", nil)
retriever.IndexDocuments(map[string]map[string]interface{}{
    "doc1": {"content": "Alice is a software engineer", "embedding": []float32{0.1, 0.2}},
})
results, _ := retriever.Retrieve("Who is Alice?", sochdb.NewAllAllowedSet())

Data Formats (TOON/JSON/Columnar)
// Wire format types
wire, _ := sochdb.ParseWireFormat("toon")
ctx, _ := sochdb.ParseContextFormat("json")

caps := sochdb.NewFormatCapabilities()
roundTrip := caps.SupportsRoundTrip(wire)
fmt.Printf("TOON round-trip: %v\n", roundTrip) // true

Server Mode (IPC / gRPC)
// IPC Client (Unix socket)
client, _ := sochdb.Connect("/tmp/sochdb.sock")
defer client.Close()

client.Put([]byte("key"), []byte("value"))
value, _ := client.Get([]byte("key"))
client.Delete([]byte("key"))

// Scan with prefix
kvs, _ := client.Scan("user:")
for _, kv := range kvs {
    fmt.Printf("%s = %s\n", kv.Key, kv.Value)
}

// Transactions
txnID, _ := client.BeginTransaction()
// ... operations ...
client.CommitTransaction(txnID)

// Statistics
stats, _ := client.Stats()
fmt.Printf("Memtable: %d bytes, WAL: %d bytes\n",
    stats.MemtableSizeBytes, stats.WALSizeBytes)

// gRPC Client
grpc, _ := sochdb.GrpcConnect("localhost:50051")
defer grpc.Close()

grpc.CreateIndex("embeddings", 384, "cosine")
grpc.InsertVectors("embeddings", []uint64{1, 2}, vectors)
results, _ := grpc.GrpcSearch("embeddings", queryVector, 5)

Checkpoints & Statistics
db, _ := embedded.Open("./mydb")
defer db.Close()

// Force checkpoint (flush memtable → disk)
lsn, _ := db.Checkpoint()
fmt.Printf("Checkpoint LSN: %d\n", lsn)

// Database statistics
stats, _ := db.Stats()
fmt.Printf("Memtable: %d bytes\n", stats.MemtableSizeBytes)
fmt.Printf("WAL: %d bytes\n", stats.WalSizeBytes)
fmt.Printf("Active txns: %d\n", stats.ActiveTransactions)

Index Policies
db, _ := embedded.Open("./mydb")
defer db.Close()

// Set index policy for a table
db.SetTableIndexPolicy("events", embedded.IndexAppendOnly)

// Get current policy
policy, _ := db.GetTableIndexPolicy("events")
// IndexWriteOptimized(0), IndexBalanced(1), IndexScanOptimized(2), IndexAppendOnly(3)

Error Handling
import (
    "errors"
    "fmt"
    "log"

    sochdb "github.com/sochdb/sochdb-go"
    "github.com/sochdb/sochdb-go/embedded"
)

db, err := embedded.Open("./mydb")
if err != nil {
    var lockErr *sochdb.DatabaseLockedError
    if errors.As(err, &lockErr) {
        fmt.Printf("Database locked by PID %d\n", lockErr.HolderPID)
    }
    log.Fatal(err)
}

err = db.WithTransaction(func(txn *embedded.Transaction) error {
    return txn.Put([]byte("key"), []byte("value"))
})
if err != nil {
    // SSI conflict error: retry
    fmt.Println("Transaction error:", err)
}

Error types:

  • ConnectionError — socket/network failures
  • ProtocolError — wire protocol issues
  • TransactionError — SSI conflicts
  • DatabaseLockedError — database locked by another process
  • LockTimeoutError — lock acquisition timeout
  • EpochMismatchError — stale epoch
  • SplitBrainError — concurrent mode conflict
  • FormatConversionError — format conversion failure

Performance
Operation                         Latency
KV Read                           ~100ns
KV Write (fsync)                  ~5ms
KV Write (concurrent, amortized)  ~60µs
Vector Search (HNSW, 1M vectors)  <5ms
Prefix Scan (per item)            ~200ns
Max Concurrent Readers            1024

Contributing

See CONTRIBUTING.md for development setup, building from source, and pull request guidelines.


License

Apache License 2.0

Documentation

Overview

Package sochdb provides unified output format semantics.

This module provides format enums for query results and LLM context packaging. It mirrors the Rust sochdb-client format module for consistency.

Package sochdb provides a thin gRPC client wrapper for the SochDB server. All business logic runs on the server (Thick Server / Thin Client architecture).

The client is ~300 lines of code, delegating all operations to the server.

Package sochdb provides a Memory System for LLM applications.

The memory system enables structured knowledge extraction, multi-source fact consolidation, and hybrid retrieval (BM25 + semantic search). It follows an event-sourced architecture with immutable assertions and derived canonical facts.

Package sochdb provides namespace and collection APIs for type-safe isolation

Provides type-safe namespace isolation with first-class namespace handles.

Example:

import "github.com/sochdb/sochdb-go"
import "github.com/sochdb/sochdb-go/embedded"

db, err := embedded.Open("./mydb")
if err != nil {
    log.Fatal(err)
}
defer db.Close()

ns, err := db.CreateNamespace("tenant_123")
if err != nil {
    log.Fatal(err)
}

collection, err := ns.CreateCollection(CollectionConfig{
    Name:      "documents",
    Dimension: 384,
})
if err != nil {
    log.Fatal(err)
}

err = collection.Insert([]float32{1.0, 2.0, ...}, map[string]interface{}{"source": "web"}, "")

Package sochdb provides priority queue functionality

First-class queue API with ordered-key task entries, providing efficient priority queue operations without the O(N) blob rewrite anti-pattern.

Features:

  • Ordered-key representation: each task has its own key, no blob parsing
  • O(log N) enqueue/dequeue with ordered scans
  • Atomic claim protocol for concurrent workers
  • Visibility timeout for crash recovery

Example:

import "github.com/sochdb/sochdb-go"
import "github.com/sochdb/sochdb-go/embedded"

db, err := embedded.Open("./queue_db")
if err != nil {
    log.Fatal(err)
}
defer db.Close()

queue := sochdb.NewPriorityQueue(db, "tasks", nil)

// Enqueue task
taskID, err := queue.Enqueue(1, []byte("high priority task"), nil)
if err != nil {
    log.Fatal(err)
}

// Dequeue and process
task, err := queue.Dequeue("worker-1")
if err != nil {
    log.Fatal(err)
}
if task != nil {
    // Process task...
    err = queue.Ack(task.TaskID)
}

Package sochdb provides the Go SDK for SochDB v0.4.5

Dual-mode architecture: Embedded (FFI) + Server (gRPC/IPC)

Architecture: Flexible Deployment

This SDK supports BOTH modes:

1. Embedded Mode (FFI) - For single-process apps:

  • Direct FFI bindings to Rust libraries (via CGO)
  • No server required - just import and run
  • Best for: Local development, simple apps, edge deployments

2. Server Mode (gRPC) - For distributed systems:

  • Thin client connecting to sochdb-grpc server
  • Best for: Production, multi-language, scalability

Example (Server Mode - gRPC):

import "github.com/sochdb/sochdb-go"

// Connect to server
client, err := sochdb.GrpcConnect("localhost:50051")
if err != nil {
    log.Fatal(err)
}
defer client.Close()

// Use the client
err = client.PutKv(context.Background(), "namespace", "key", []byte("value"))

Example (Embedded Mode - IPC):

import "github.com/sochdb/sochdb-go"

// Connect via Unix socket
client, err := sochdb.Connect("/path/to/db/sochdb.sock")
if err != nil {
    log.Fatal(err)
}
defer client.Close()

// Use the client
value, err := client.Get([]byte("key"))

Example (Namespace & Collections - v0.4.1):

import "github.com/sochdb/sochdb-go"
import "github.com/sochdb/sochdb-go/embedded"

// Open embedded database
db, err := embedded.Open("./mydb")
if err != nil {
    log.Fatal(err)
}
defer db.Close()

// Create namespace
ns, err := db.CreateNamespace("tenant_123")
if err != nil {
    log.Fatal(err)
}

// Create collection
collection, err := ns.CreateCollection(sochdb.CollectionConfig{
    Name:      "documents",
    Dimension: 384,
})
if err != nil {
    log.Fatal(err)
}

// Insert vectors
err = collection.Insert([]float32{1.0, 2.0, 3.0}, map[string]interface{}{"source": "web"}, "")

Example (Priority Queue - v0.4.1):

import "github.com/sochdb/sochdb-go"
import "github.com/sochdb/sochdb-go/embedded"

// Open database
db, err := embedded.Open("./queue_db")
if err != nil {
    log.Fatal(err)
}
defer db.Close()

// Create queue
queue := sochdb.NewPriorityQueue(db, "tasks", nil)

// Enqueue task
taskID, err := queue.Enqueue(1, []byte("high priority task"), nil)
if err != nil {
    log.Fatal(err)
}

// Dequeue and process
task, err := queue.Dequeue("worker-1")
if err != nil {
    log.Fatal(err)
}
if task != nil {
    // Process task...
    err = queue.Ack(task.TaskID)
}

Index

Constants

const Version = "0.4.5"

Version is the current SDK version.

Variables

var (
	// ErrClosed is returned when operating on a closed connection.
	ErrClosed = errors.New("connection closed")

	// ErrNotFound is returned when a key is not found.
	ErrNotFound = errors.New("key not found")

	// ErrInvalidResponse is returned when the server response is invalid.
	ErrInvalidResponse = errors.New("invalid server response")

	// ErrDatabaseLocked is returned when the database is locked by another process.
	ErrDatabaseLocked = errors.New("database locked by another process")

	// ErrLockTimeout is returned when timed out waiting for database lock.
	ErrLockTimeout = errors.New("timed out waiting for database lock")

	// ErrEpochMismatch is returned when WAL epoch mismatch detected.
	ErrEpochMismatch = errors.New("epoch mismatch: stale writer detected")

	// ErrSplitBrain is returned when split-brain condition detected.
	ErrSplitBrain = errors.New("split-brain: multiple active writers")
)

Common errors

Functions

This section is empty.

Types

type AllAllowedSet added in v0.4.1

type AllAllowedSet struct{}

AllAllowedSet permits all documents (no filtering).

func NewAllAllowedSet added in v0.4.1

func NewAllAllowedSet() *AllAllowedSet

NewAllAllowedSet creates a new pass-through filter.

func (*AllAllowedSet) IsAllowed added in v0.4.1

func (s *AllAllowedSet) IsAllowed(_ string, _ map[string]interface{}) bool

IsAllowed always returns true.

type AllowedSet added in v0.4.1

type AllowedSet interface {
	// IsAllowed returns true if the document should be considered for retrieval.
	IsAllowed(id string, metadata map[string]interface{}) bool
}

AllowedSet defines the interface for document pre-filtering. Implementations control which documents are eligible for retrieval.

type Assertion added in v0.4.1

type Assertion struct {
	ID         string  `json:"id,omitempty"`         // Unique identifier
	Subject    string  `json:"subject"`              // Subject entity
	Predicate  string  `json:"predicate"`            // Predicate/relation
	Object     string  `json:"object"`               // Object value
	Confidence float64 `json:"confidence,omitempty"` // Extraction confidence [0-1]
	Provenance string  `json:"provenance,omitempty"` // Source reference
	Timestamp  int64   `json:"timestamp,omitempty"`  // Unix timestamp
}

Assertion represents a subject-predicate-object triple. Assertions capture factual statements in RDF-like format.

type BM25Scorer added in v0.4.1

type BM25Scorer struct {
	// contains filtered or unexported fields
}

BM25Scorer implements BM25 scoring

func NewBM25Scorer added in v0.4.1

func NewBM25Scorer(k1, b float64) *BM25Scorer

NewBM25Scorer creates a new BM25 scorer

func (*BM25Scorer) IndexDocuments added in v0.4.1

func (bm *BM25Scorer) IndexDocuments(docs map[string]string)

IndexDocuments indexes documents for BM25

func (*BM25Scorer) Score added in v0.4.1

func (bm *BM25Scorer) Score(query string, docID string) float64

Score calculates BM25 score for a query against a document

type CacheEntry

type CacheEntry struct {
	Key       string    `json:"key"`
	Value     string    `json:"value"`
	Embedding []float32 `json:"embedding"`
	ExpiresAt int64     `json:"expires_at"`
}

CacheEntry represents an entry in the semantic cache.

type CanonicalFact added in v0.4.1

type CanonicalFact struct {
	ID         string                 `json:"id"`                    // Unique identifier
	MergedFact map[string]interface{} `json:"merged_fact"`           // Consolidated fact
	Confidence float64                `json:"confidence"`            // Merged confidence
	Sources    []string               `json:"sources"`               // Contributing sources
	ValidFrom  int64                  `json:"valid_from"`            // Validity start time
	ValidUntil *int64                 `json:"valid_until,omitempty"` // Validity end time
}

CanonicalFact represents the consolidated truth derived from multiple assertions. Canonical facts are recomputed during consolidation from raw assertion events.

type CanonicalFormat

type CanonicalFormat int

CanonicalFormat represents canonical storage format (server-side only).

This is the format used for internal storage and is optimized for storage efficiency and query performance.

const (
	// CanonicalFormatToon is TOON canonical format.
	CanonicalFormatToon CanonicalFormat = iota
)

func (CanonicalFormat) String

func (f CanonicalFormat) String() string

String returns the string representation of CanonicalFormat.

type Collection added in v0.4.1

type Collection struct {
	// contains filtered or unexported fields
}

Collection represents a vector collection

func (*Collection) Count added in v0.4.1

func (c *Collection) Count() (int, error)

Count returns the number of vectors in the collection

func (*Collection) Delete added in v0.4.1

func (c *Collection) Delete(id string) error

Delete removes a vector by ID

func (*Collection) Get added in v0.4.1

func (c *Collection) Get(id string) (*vectorData, error)

Get retrieves a vector by ID

func (*Collection) Insert added in v0.4.1

func (c *Collection) Insert(vector []float32, metadata map[string]interface{}, id string) (string, error)

Insert adds a vector to the collection

func (*Collection) InsertMany added in v0.4.1

func (c *Collection) InsertMany(vectors [][]float32, metadatas []map[string]interface{}, ids []string) ([]string, error)

InsertMany adds multiple vectors to the collection

func (*Collection) Search added in v0.4.1

func (c *Collection) Search(request SearchRequest) ([]SearchResult, error)

Search finds similar vectors using brute-force cosine similarity

type CollectionConfig added in v0.4.1

type CollectionConfig struct {
	Name               string                 `json:"name"`
	Dimension          int                    `json:"dimension,omitempty"`
	Metric             DistanceMetric         `json:"metric,omitempty"`
	Indexed            bool                   `json:"indexed"`
	HNSWM              int                    `json:"hnsw_m,omitempty"`
	HNSWEfConstruction int                    `json:"hnsw_ef_construction,omitempty"`
	Metadata           map[string]interface{} `json:"metadata,omitempty"`
}

CollectionConfig represents collection configuration

type CollectionExistsError added in v0.4.1

type CollectionExistsError struct {
	Collection string
}

CollectionExistsError is returned when trying to create an existing collection

func (*CollectionExistsError) Error added in v0.4.1

func (e *CollectionExistsError) Error() string

type CollectionNotFoundError added in v0.4.1

type CollectionNotFoundError struct {
	Collection string
}

CollectionNotFoundError is returned when a collection doesn't exist

func (*CollectionNotFoundError) Error added in v0.4.1

func (e *CollectionNotFoundError) Error() string

type ConnectionError

type ConnectionError struct {
	Address string
	Err     error
}

ConnectionError represents a connection failure.

func (*ConnectionError) Error

func (e *ConnectionError) Error() string

func (*ConnectionError) Unwrap

func (e *ConnectionError) Unwrap() error

type ConsolidationConfig added in v0.4.1

type ConsolidationConfig struct {
	SimilarityThreshold float64 `json:"similarity_threshold,omitempty"` // Fact similarity threshold [0-1]
	UseTemporalUpdates  bool    `json:"use_temporal_updates,omitempty"` // Enable time-based superseding
	MaxConflictAge      int64   `json:"max_conflict_age,omitempty"`     // Conflict validity in seconds
}

ConsolidationConfig controls consolidation behavior.

type Consolidator added in v0.4.1

type Consolidator struct {
	// contains filtered or unexported fields
}

Consolidator manages fact consolidation

func NewConsolidator added in v0.4.1

func NewConsolidator(db *embedded.Database, namespace string, config *ConsolidationConfig) *Consolidator

NewConsolidator creates a new consolidator

func (*Consolidator) Add added in v0.4.1

func (c *Consolidator) Add(assertion *RawAssertion) (string, error)

Add records a raw assertion (an immutable event).

func (*Consolidator) AddWithContradiction added in v0.4.1

func (c *Consolidator) AddWithContradiction(newAssertion *RawAssertion, contradicts []string) (string, error)

AddWithContradiction adds assertion with contradiction handling

func (*Consolidator) Consolidate added in v0.4.1

func (c *Consolidator) Consolidate() (int, error)

Consolidate runs consolidation to update canonical view

func (*Consolidator) Explain added in v0.4.1

func (c *Consolidator) Explain(factID string) (map[string]interface{}, error)

Explain returns the provenance of a fact.

func (*Consolidator) GetCanonicalFacts added in v0.4.1

func (c *Consolidator) GetCanonicalFacts() ([]CanonicalFact, error)

GetCanonicalFacts retrieves canonical facts

type ContextFormat

type ContextFormat int

ContextFormat represents output format for LLM context packaging.

These formats are optimized for readability and token efficiency when constructing prompts for language models.

const (
	// ContextFormatToon is TOON format (default, token-efficient).
	// Structured data with minimal syntax overhead.
	ContextFormatToon ContextFormat = iota

	// ContextFormatJSON is JSON format.
	// Widely understood by LLMs, good for structured data.
	ContextFormatJSON

	// ContextFormatMarkdown is Markdown format.
	// Best for human-readable context with formatting.
	ContextFormatMarkdown
)

func ParseContextFormat

func ParseContextFormat(s string) (ContextFormat, error)

ParseContextFormat parses a ContextFormat from a string.

func (ContextFormat) String

func (f ContextFormat) String() string

String returns the string representation of ContextFormat.

type ContextOutputFormat added in v0.4.1

type ContextOutputFormat string

ContextOutputFormat represents the output format for context

const (
	FormatTOON     ContextOutputFormat = "toon"
	FormatJSON     ContextOutputFormat = "json"
	FormatMarkdown ContextOutputFormat = "markdown"
)

type ContextQueryBuilder added in v0.4.1

type ContextQueryBuilder struct {
	// contains filtered or unexported fields
}

ContextQueryBuilder builds LLM context with token awareness

func NewContextQueryBuilder added in v0.4.1

func NewContextQueryBuilder() *ContextQueryBuilder

NewContextQueryBuilder creates a new context builder

func (*ContextQueryBuilder) Execute added in v0.4.1

func (b *ContextQueryBuilder) Execute() (*ContextResult, error)

Execute builds the context

func (*ContextQueryBuilder) ForSession added in v0.4.1

func (b *ContextQueryBuilder) ForSession(sessionID string) *ContextQueryBuilder

ForSession sets the session ID

func (*ContextQueryBuilder) Literal added in v0.4.1

func (b *ContextQueryBuilder) Literal(name string, priority int, text string) *ContextQueryBuilder

Literal adds a literal text section

func (*ContextQueryBuilder) SetFormat added in v0.4.1

SetFormat sets the output format

func (*ContextQueryBuilder) SetTruncation added in v0.4.1

func (b *ContextQueryBuilder) SetTruncation(strategy TruncationStrategy) *ContextQueryBuilder

SetTruncation sets the truncation strategy

func (*ContextQueryBuilder) WithBudget added in v0.4.1

func (b *ContextQueryBuilder) WithBudget(tokens int) *ContextQueryBuilder

WithBudget sets the token budget

type ContextResult added in v0.4.1

type ContextResult struct {
	Text       string           `json:"text"`
	TokenCount int              `json:"token_count"`
	Sections   []ContextSection `json:"sections"`
	Truncated  bool             `json:"truncated"`
}

ContextResult represents the built context

type ContextSection added in v0.4.1

type ContextSection struct {
	Name       string `json:"name"`
	TokenCount int    `json:"token_count"`
	Truncated  bool   `json:"truncated"`
}

ContextSection represents a section in the result

type DatabaseLockedError added in v0.4.1

type DatabaseLockedError struct {
	Path      string
	HolderPID int
}

DatabaseLockedError is returned when the database is locked by another process.

func (*DatabaseLockedError) Error added in v0.4.1

func (e *DatabaseLockedError) Error() string

func (*DatabaseLockedError) Is added in v0.4.1

func (e *DatabaseLockedError) Is(target error) bool

type DistanceMetric added in v0.4.1

type DistanceMetric string

DistanceMetric represents the distance metric for vector similarity

const (
	DistanceMetricCosine     DistanceMetric = "cosine"
	DistanceMetricEuclidean  DistanceMetric = "euclidean"
	DistanceMetricDotProduct DistanceMetric = "dot"
)

type Entity added in v0.4.1

type Entity struct {
	ID         string                 `json:"id,omitempty"`         // Unique identifier
	Name       string                 `json:"name"`                 // Entity name
	EntityType string                 `json:"entity_type"`          // Type classification
	Properties map[string]interface{} `json:"properties,omitempty"` // Additional attributes
	Confidence float64                `json:"confidence,omitempty"` // Extraction confidence [0-1]
	Provenance string                 `json:"provenance,omitempty"` // Source reference
	Timestamp  int64                  `json:"timestamp,omitempty"`  // Unix timestamp
}

Entity represents a named entity extracted from text. Entities are typed objects with properties and confidence scores.

type EpochMismatchError added in v0.4.1

type EpochMismatchError struct {
	Expected uint64
	Actual   uint64
}

EpochMismatchError is returned when WAL epoch mismatch detected.

func (*EpochMismatchError) Error added in v0.4.1

func (e *EpochMismatchError) Error() string

func (*EpochMismatchError) Is added in v0.4.1

func (e *EpochMismatchError) Is(target error) bool

type ExtractionPipeline added in v0.4.1

type ExtractionPipeline struct {
	// contains filtered or unexported fields
}

ExtractionPipeline compiles LLM outputs into typed facts

func NewExtractionPipeline added in v0.4.1

func NewExtractionPipeline(db *embedded.Database, namespace string, schema *ExtractionSchema) *ExtractionPipeline

NewExtractionPipeline creates a new extraction pipeline

func (*ExtractionPipeline) Commit added in v0.4.1

func (p *ExtractionPipeline) Commit(result *ExtractionResult) error

Commit writes an extraction result to the database.

func (*ExtractionPipeline) Extract added in v0.4.1

func (p *ExtractionPipeline) Extract(text string, extractor ExtractorFunction) (*ExtractionResult, error)

Extract extracts entities, relations, and assertions from text using the provided extractor.

func (*ExtractionPipeline) ExtractAndCommit added in v0.4.1

func (p *ExtractionPipeline) ExtractAndCommit(text string, extractor ExtractorFunction) (*ExtractionResult, error)

ExtractAndCommit extracts and immediately commits

func (*ExtractionPipeline) GetAssertions added in v0.4.1

func (p *ExtractionPipeline) GetAssertions() ([]Assertion, error)

GetAssertions retrieves all assertions

func (*ExtractionPipeline) GetEntities added in v0.4.1

func (p *ExtractionPipeline) GetEntities() ([]Entity, error)

GetEntities retrieves all entities

func (*ExtractionPipeline) GetRelations added in v0.4.1

func (p *ExtractionPipeline) GetRelations() ([]Relation, error)

GetRelations retrieves all relations

type ExtractionResult added in v0.4.1

type ExtractionResult struct {
	Entities   []Entity    `json:"entities"`   // Extracted entities
	Relations  []Relation  `json:"relations"`  // Extracted relations
	Assertions []Assertion `json:"assertions"` // Extracted assertions
}

ExtractionResult contains all knowledge extracted from text. This is typically returned by LLM extraction functions.

type ExtractionSchema added in v0.4.1

type ExtractionSchema struct {
	EntityTypes       []string `json:"entity_types,omitempty"`       // Allowed entity types
	RelationTypes     []string `json:"relation_types,omitempty"`     // Allowed relation types
	MinConfidence     float64  `json:"min_confidence,omitempty"`     // Minimum confidence threshold
	RequireProvenance bool     `json:"require_provenance,omitempty"` // Require source tracking
}

ExtractionSchema defines validation rules for extraction. Schemas ensure type safety and quality control.

type ExtractorFunction added in v0.4.1

type ExtractorFunction func(text string) (map[string]interface{}, error)

ExtractorFunction is supplied by the caller to invoke their LLM and return raw extraction output.

type FilterAllowedSet added in v0.4.1

type FilterAllowedSet struct {
	FilterFn func(string, map[string]interface{}) bool
}

FilterAllowedSet filters documents using a custom function.

func NewFilterAllowedSet added in v0.4.1

func NewFilterAllowedSet(filterFn func(string, map[string]interface{}) bool) *FilterAllowedSet

NewFilterAllowedSet creates a new function-based filter.

func (*FilterAllowedSet) IsAllowed added in v0.4.1

func (s *FilterAllowedSet) IsAllowed(id string, metadata map[string]interface{}) bool

IsAllowed applies the custom filter function.

type FormatCapabilities

type FormatCapabilities struct{}

FormatCapabilities provides helpers to check format capabilities and conversions.

func NewFormatCapabilities

func NewFormatCapabilities() FormatCapabilities

NewFormatCapabilities creates a new FormatCapabilities instance.

func (FormatCapabilities) ContextToWire

func (FormatCapabilities) ContextToWire(ctx ContextFormat) *WireFormat

ContextToWire converts ContextFormat to WireFormat if compatible. Returns nil if the conversion is not possible.

func (FormatCapabilities) SupportsRoundTrip

func (FormatCapabilities) SupportsRoundTrip(fmt WireFormat) bool

SupportsRoundTrip checks if format supports round-trip: decode(encode(x)) = x.

func (FormatCapabilities) SupportsRoundTripConversion

func (FormatCapabilities) SupportsRoundTripConversion(wire WireFormat, ctx ContextFormat) bool

SupportsRoundTripConversion checks if conversion between wire and context formats is lossless.

func (FormatCapabilities) WireToContext

func (FormatCapabilities) WireToContext(wire WireFormat) *ContextFormat

WireToContext converts WireFormat to ContextFormat if compatible. Returns nil if the conversion is not possible.

type FormatConversionError

type FormatConversionError struct {
	FromFormat string
	ToFormat   string
	Reason     string
}

FormatConversionError represents an error when format conversion fails.

func (*FormatConversionError) Error

func (e *FormatConversionError) Error() string

type GraphEdge

type GraphEdge struct {
	FromID     string            `json:"from_id"`
	EdgeType   string            `json:"edge_type"`
	ToID       string            `json:"to_id"`
	Properties map[string]string `json:"properties"`
}

GraphEdge represents an edge in the graph overlay.

type GraphNode

type GraphNode struct {
	ID         string            `json:"id"`
	NodeType   string            `json:"node_type"`
	Properties map[string]string `json:"properties"`
}

GraphNode represents a node in the graph overlay.

type GrpcClient

type GrpcClient struct {
	// contains filtered or unexported fields
}

GrpcClient provides thin gRPC client for SochDB. All operations are delegated to the SochDB gRPC server.

func GrpcConnect

func GrpcConnect(address string) (*GrpcClient, error)

GrpcConnect is a convenience function to connect to a SochDB gRPC server.

func NewGrpcClient

func NewGrpcClient(opts GrpcClientOptions) (*GrpcClient, error)

NewGrpcClient creates a new gRPC client connected to SochDB server.

func (*GrpcClient) AddDocuments

func (c *GrpcClient) AddDocuments(collectionName string, documents []GrpcDocument, namespace string) ([]string, error)

AddDocuments adds documents to a collection.

func (*GrpcClient) AddEdge

func (c *GrpcClient) AddEdge(ctx context.Context, namespace string, edge GrpcGraphEdge) error

AddEdge adds an edge to the graph (convenience wrapper).

func (*GrpcClient) AddGraphEdge

func (c *GrpcClient) AddGraphEdge(fromID, edgeType, toID string, properties map[string]string, namespace string) error

AddGraphEdge adds an edge between nodes.

func (*GrpcClient) AddGraphNode

func (c *GrpcClient) AddGraphNode(nodeID, nodeType string, properties map[string]string, namespace string) error

AddGraphNode adds a node to the graph.

func (*GrpcClient) CacheGet

func (c *GrpcClient) CacheGet(cacheName string, queryEmbedding []float32, threshold float32) (string, bool, error)

CacheGet retrieves from semantic cache by similarity.

func (*GrpcClient) CachePut

func (c *GrpcClient) CachePut(cacheName, key, value string, keyEmbedding []float32, ttlSeconds int) error

CachePut stores a value in the semantic cache.

func (*GrpcClient) Close

func (c *GrpcClient) Close() error

Close closes the gRPC connection.

func (*GrpcClient) CreateCollection

func (c *GrpcClient) CreateCollection(name string, dimension int, namespace string) error

CreateCollection creates a new collection.

func (*GrpcClient) CreateIndex

func (c *GrpcClient) CreateIndex(name string, dimension int, metric string) error

CreateIndex creates a new vector index. Note: Requires generated proto code. This is a placeholder implementation.

func (*GrpcClient) EndSpan

func (c *GrpcClient) EndSpan(traceID, spanID, status string) (durationUs int64, err error)

EndSpan ends a span.

func (*GrpcClient) GetKv

func (c *GrpcClient) GetKv(ctx context.Context, namespace, key string) ([]byte, error)

GetKv retrieves a value by key from the specified namespace.

func (*GrpcClient) GrpcDelete

func (c *GrpcClient) GrpcDelete(key []byte, namespace string) error

GrpcDelete removes a key.

func (*GrpcClient) GrpcGet

func (c *GrpcClient) GrpcGet(key []byte, namespace string) ([]byte, bool, error)

GrpcGet retrieves a value by key.

func (*GrpcClient) GrpcPut

func (c *GrpcClient) GrpcPut(key, value []byte, namespace string, ttlSeconds int) error

GrpcPut stores a value.

func (*GrpcClient) GrpcSearch

func (c *GrpcClient) GrpcSearch(indexName string, query []float32, k int) ([]GrpcSearchResult, error)

GrpcSearch performs k-nearest neighbor search.

func (*GrpcClient) InsertVectors

func (c *GrpcClient) InsertVectors(indexName string, ids []uint64, vectors [][]float32) (int, error)

InsertVectors inserts vectors into an index.

func (*GrpcClient) PutKv

func (c *GrpcClient) PutKv(ctx context.Context, namespace, key string, value []byte) error

PutKv stores a key-value pair in the specified namespace.

func (*GrpcClient) QueryGraph

func (c *GrpcClient) QueryGraph(ctx context.Context, namespace, fromID, edgeType string, limit int) ([]GrpcGraphEdge, error)

QueryGraph queries the graph for edges (convenience wrapper).

func (*GrpcClient) SearchCollection

func (c *GrpcClient) SearchCollection(collectionName string, query []float32, k int, namespace string) ([]GrpcDocument, error)

SearchCollection searches a collection for similar documents.

func (*GrpcClient) StartSpan

func (c *GrpcClient) StartSpan(traceID, parentSpanID, name string) (spanID string, err error)

StartSpan starts a span within a trace.

func (*GrpcClient) StartTrace

func (c *GrpcClient) StartTrace(name string) (traceID, rootSpanID string, err error)

StartTrace starts a new trace.

func (*GrpcClient) TraverseGraph

func (c *GrpcClient) TraverseGraph(startNode string, maxDepth int, order string, namespace string) ([]GrpcGraphNode, []GrpcGraphEdge, error)

TraverseGraph performs graph traversal from a starting node.

type GrpcClientOptions

type GrpcClientOptions struct {
	Address string
	Timeout time.Duration
	Secure  bool
}

GrpcClientOptions configures the gRPC client.

type GrpcDocument

type GrpcDocument struct {
	ID        string
	Content   string
	Embedding []float32
	Metadata  map[string]string
}

GrpcDocument represents a document with embedding.

type GrpcGraphEdge

type GrpcGraphEdge struct {
	FromID     string
	EdgeType   string
	ToID       string
	Properties map[string]string
}

GrpcGraphEdge represents an edge in the graph.

type GrpcGraphNode

type GrpcGraphNode struct {
	ID         string
	NodeType   string
	Properties map[string]string
}

GrpcGraphNode represents a node in the graph.

type GrpcSearchResult

type GrpcSearchResult struct {
	ID       uint64
	Distance float32
}

GrpcSearchResult represents a vector search result.

type HybridRetriever added in v0.4.1

type HybridRetriever struct {
	// contains filtered or unexported fields
}

HybridRetriever combines lexical and semantic search

func NewHybridRetriever added in v0.4.1

func NewHybridRetriever(db *embedded.Database, namespace string, config *RetrievalConfig) *HybridRetriever

NewHybridRetriever creates a new hybrid retriever

func (*HybridRetriever) Explain added in v0.4.1

func (hr *HybridRetriever) Explain(query string, docID string) map[string]interface{}

Explain reports retrieval scoring details for a query and document, for debugging.

func (*HybridRetriever) IndexDocuments added in v0.4.1

func (hr *HybridRetriever) IndexDocuments(documents map[string]map[string]interface{}) error

IndexDocuments indexes documents for retrieval

func (*HybridRetriever) Retrieve added in v0.4.1

func (hr *HybridRetriever) Retrieve(query string, allowed AllowedSet) ([]map[string]interface{}, error)

Retrieve performs hybrid retrieval

type IPCClient

type IPCClient struct {
	// contains filtered or unexported fields
}

IPCClient handles low-level IPC communication with the SochDB server.

func Connect

func Connect(socketPath string) (*IPCClient, error)

Connect establishes a connection to the SochDB server.

func ConnectToDatabase

func ConnectToDatabase(dbPath string) (*IPCClient, error)

ConnectToDatabase connects to a database at the given path.

func (*IPCClient) AbortTransaction

func (c *IPCClient) AbortTransaction(txnID uint64) error

AbortTransaction aborts a transaction.

func (*IPCClient) AddEdge

func (c *IPCClient) AddEdge(namespace, fromID, edgeType, toID string, properties map[string]string) error

AddEdge adds an edge between nodes in the graph overlay.

func (*IPCClient) AddNode

func (c *IPCClient) AddNode(namespace, nodeID, nodeType string, properties map[string]string) error

AddNode adds a node to the graph overlay.

func (*IPCClient) BeginTransaction

func (c *IPCClient) BeginTransaction() (uint64, error)

BeginTransaction starts a new transaction.

func (*IPCClient) CacheGet

func (c *IPCClient) CacheGet(cacheName string, queryEmbedding []float32, threshold float32) (string, bool, error)

CacheGet looks up a value in the semantic cache by embedding similarity.

func (*IPCClient) CachePut

func (c *IPCClient) CachePut(cacheName, key, value string, embedding []float32, ttlSeconds int64) error

CachePut stores a value in the semantic cache with its embedding.

func (*IPCClient) Checkpoint

func (c *IPCClient) Checkpoint() error

Checkpoint forces a checkpoint.

func (*IPCClient) Close

func (c *IPCClient) Close() error

Close closes the connection.

func (*IPCClient) CommitTransaction

func (c *IPCClient) CommitTransaction(txnID uint64) error

CommitTransaction commits a transaction.

func (*IPCClient) Delete

func (c *IPCClient) Delete(key []byte) error

Delete removes a key.

func (*IPCClient) EndSpan

func (c *IPCClient) EndSpan(traceID, spanID, status string) (int64, error)

EndSpan ends a span and returns its duration in microseconds.

func (*IPCClient) Get

func (c *IPCClient) Get(key []byte) ([]byte, error)

Get retrieves a value by key.

func (*IPCClient) GetPath

func (c *IPCClient) GetPath(path string) ([]byte, error)

GetPath retrieves a value by path.

func (*IPCClient) Put

func (c *IPCClient) Put(key, value []byte) error

Put stores a key-value pair.

func (*IPCClient) PutPath

func (c *IPCClient) PutPath(path string, value []byte) error

PutPath stores a value at a path.

func (*IPCClient) Query

func (c *IPCClient) Query(prefix string, limit, offset int) ([]KeyValue, error)

Query executes a prefix query. Wire format: path_len(2 LE) + path + limit(4 LE) + offset(4 LE) + cols_count(2 LE)

func (*IPCClient) Scan

func (c *IPCClient) Scan(prefix string) ([]KeyValue, error)

Scan scans keys with a prefix, returning key-value pairs. This is the preferred method for prefix-based iteration.

func (*IPCClient) StartSpan

func (c *IPCClient) StartSpan(traceID, parentSpanID, name string) (string, error)

StartSpan starts a child span within a trace.

func (*IPCClient) StartTrace

func (c *IPCClient) StartTrace(name string) (*TraceInfo, error)

StartTrace starts a new trace.

func (*IPCClient) Stats

func (c *IPCClient) Stats() (*StorageStats, error)

Stats retrieves storage statistics.

func (*IPCClient) Traverse

func (c *IPCClient) Traverse(namespace, startNode string, maxDepth int, order string) (*TraverseResult, error)

Traverse traverses the graph from a starting node. order: "bfs" for breadth-first, "dfs" for depth-first

type IdsAllowedSet added in v0.4.1

type IdsAllowedSet struct {
	// contains filtered or unexported fields
}

IdsAllowedSet filters documents by explicit ID whitelist.

func NewIdsAllowedSet added in v0.4.1

func NewIdsAllowedSet(ids []string) *IdsAllowedSet

NewIdsAllowedSet creates a new ID-based filter.

func (*IdsAllowedSet) IsAllowed added in v0.4.1

func (a *IdsAllowedSet) IsAllowed(id string, _ map[string]interface{}) bool

IsAllowed checks if the document ID is in the whitelist.

type KeyValue

type KeyValue struct {
	Key   []byte
	Value []byte
}

KeyValue represents a key-value pair returned from queries.

type LockError added in v0.4.1

type LockError struct {
	Path        string
	Message     string
	Remediation string
}

LockError represents a lock-related error.

func (*LockError) Error added in v0.4.1

func (e *LockError) Error() string

type LockTimeoutError added in v0.4.1

type LockTimeoutError struct {
	Path        string
	TimeoutSecs float64
}

LockTimeoutError is returned when timed out waiting for database lock.

func (*LockTimeoutError) Error added in v0.4.1

func (e *LockTimeoutError) Error() string

func (*LockTimeoutError) Is added in v0.4.1

func (e *LockTimeoutError) Is(target error) bool

type Namespace added in v0.4.1

type Namespace struct {
	// contains filtered or unexported fields
}

Namespace represents a namespace handle

func (*Namespace) Collection added in v0.4.1

func (ns *Namespace) Collection(name string) (*Collection, error)

Collection gets an existing collection

func (*Namespace) CreateCollection added in v0.4.1

func (ns *Namespace) CreateCollection(config CollectionConfig) (*Collection, error)

CreateCollection creates a new collection in this namespace

func (*Namespace) DeleteCollection added in v0.4.1

func (ns *Namespace) DeleteCollection(name string) error

DeleteCollection deletes a collection and all its data

func (*Namespace) GetConfig added in v0.4.1

func (ns *Namespace) GetConfig() NamespaceConfig

GetConfig returns the namespace config

func (*Namespace) GetName added in v0.4.1

func (ns *Namespace) GetName() string

GetName returns the namespace name

func (*Namespace) GetOrCreateCollection added in v0.4.1

func (ns *Namespace) GetOrCreateCollection(config CollectionConfig) (*Collection, error)

GetOrCreateCollection gets or creates a collection

func (*Namespace) ListCollections added in v0.4.1

func (ns *Namespace) ListCollections() ([]string, error)

ListCollections lists all collections in this namespace

type NamespaceAllowedSet added in v0.4.1

type NamespaceAllowedSet struct {
	// contains filtered or unexported fields
}

NamespaceAllowedSet filters documents by namespace prefix.

func NewNamespaceAllowedSet added in v0.4.1

func NewNamespaceAllowedSet(namespace string) *NamespaceAllowedSet

NewNamespaceAllowedSet creates a new namespace-based filter.

func (*NamespaceAllowedSet) IsAllowed added in v0.4.1

func (s *NamespaceAllowedSet) IsAllowed(id string, _ map[string]interface{}) bool

IsAllowed checks if the document ID has the namespace prefix.

type NamespaceConfig added in v0.4.1

type NamespaceConfig struct {
	Name        string            `json:"name"`
	DisplayName string            `json:"display_name,omitempty"`
	Labels      map[string]string `json:"labels,omitempty"`
	ReadOnly    bool              `json:"read_only"`
}

NamespaceConfig represents namespace configuration

type NamespaceExistsError added in v0.4.1

type NamespaceExistsError struct {
	Namespace string
}

NamespaceExistsError is returned when trying to create an existing namespace

func (*NamespaceExistsError) Error added in v0.4.1

func (e *NamespaceExistsError) Error() string

type NamespaceGrant added in v0.4.1

type NamespaceGrant struct {
	ID            string   `json:"id"`
	FromNamespace string   `json:"from_namespace"`
	ToNamespace   string   `json:"to_namespace"`
	Operations    []string `json:"operations"`
	ExpiresAt     *int64   `json:"expires_at,omitempty"`
	Reason        string   `json:"reason,omitempty"`
}

NamespaceGrant for cross-namespace access

type NamespaceNotFoundError added in v0.4.1

type NamespaceNotFoundError struct {
	Namespace string
}

NamespaceNotFoundError is returned when a namespace doesn't exist

func (*NamespaceNotFoundError) Error added in v0.4.1

func (e *NamespaceNotFoundError) Error() string

type NamespacePolicy added in v0.4.1

type NamespacePolicy string

NamespacePolicy for tenant isolation

const (
	NamespacePolicyStrict     NamespacePolicy = "strict"
	NamespacePolicyExplicit   NamespacePolicy = "explicit"
	NamespacePolicyPermissive NamespacePolicy = "permissive"
)

type OpCode

type OpCode uint8

OpCode represents the wire protocol operation codes. Must match sochdb-storage/src/ipc_server.rs opcodes exactly.

const (
	OpPut        OpCode = 0x01
	OpGet        OpCode = 0x02
	OpDelete     OpCode = 0x03
	OpBeginTxn   OpCode = 0x04
	OpCommitTxn  OpCode = 0x05
	OpAbortTxn   OpCode = 0x06
	OpQuery      OpCode = 0x07
	OpPutPath    OpCode = 0x09
	OpGetPath    OpCode = 0x0A
	OpScan       OpCode = 0x0B
	OpCheckpoint OpCode = 0x0C
	OpStats      OpCode = 0x0D
	OpPing       OpCode = 0x0E
	OpExecuteSQL OpCode = 0x0F
)

Client → Server opcodes

const (
	OpOK        OpCode = 0x80
	OpError     OpCode = 0x81
	OpValue     OpCode = 0x82
	OpTxnID     OpCode = 0x83
	OpRow       OpCode = 0x84
	OpEndStream OpCode = 0x85
	OpStatsResp OpCode = 0x86
	OpPong      OpCode = 0x87
)

Server → Client response opcodes

type PriorityQueue added in v0.4.1

type PriorityQueue struct {
	// contains filtered or unexported fields
}

PriorityQueue represents a priority queue

func CreateQueue added in v0.4.1

func CreateQueue(db interface{}, name string, config *QueueConfig) *PriorityQueue

CreateQueue creates a new queue instance (convenience function)

func NewPriorityQueue added in v0.4.1

func NewPriorityQueue(db interface{}, name string, config *QueueConfig) *PriorityQueue

NewPriorityQueue creates a new priority queue

func (*PriorityQueue) Ack added in v0.4.1

func (pq *PriorityQueue) Ack(taskID string) error

Ack acknowledges task completion

func (*PriorityQueue) Dequeue added in v0.4.1

func (pq *PriorityQueue) Dequeue(workerID string) (*Task, error)

Dequeue returns the highest-priority task, or nil if no tasks are available.

func (*PriorityQueue) Enqueue added in v0.4.1

func (pq *PriorityQueue) Enqueue(priority int64, payload []byte, metadata map[string]interface{}) (string, error)

Enqueue adds a task to the queue with the given priority. A lower priority number means higher urgency.

func (*PriorityQueue) Nack added in v0.4.1

func (pq *PriorityQueue) Nack(taskID string) error

Nack returns a task to the queue (negative acknowledge)

func (*PriorityQueue) Purge added in v0.4.1

func (pq *PriorityQueue) Purge() (int, error)

Purge removes completed and dead-lettered tasks

func (*PriorityQueue) Stats added in v0.4.1

func (pq *PriorityQueue) Stats() (*QueueStats, error)

Stats returns queue statistics
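A typical produce/consume round trip with the queue API above might look like the following sketch. This is illustrative only: it assumes the package is imported as sochdb, that db is a handle accepted by CreateQueue, and that processOK comes from your own handler; error handling is abbreviated.

```go
// Sketch: enqueue, claim, and acknowledge a task (assumed import name sochdb).
queue := sochdb.CreateQueue(db, "email_jobs", &sochdb.QueueConfig{
	Name:              "email_jobs",
	VisibilityTimeout: 30000, // ms before an unacked claim becomes visible again
	MaxRetries:        3,
	DeadLetterQueue:   "email_jobs_dlq", // optional
})

// Lower priority number = higher urgency, so priority 0 runs before priority 10.
taskID, err := queue.Enqueue(0, []byte(`{"to":"alice@example.com"}`),
	map[string]interface{}{"kind": "welcome"})
if err != nil {
	log.Fatal(err)
}
_ = taskID

// A worker claims the highest-priority pending task.
task, err := queue.Dequeue("worker-1")
if err != nil {
	log.Fatal(err)
}
if task != nil {
	// ... process task.Payload ...
	if processOK {
		_ = queue.Ack(task.TaskID) // mark completed
	} else {
		_ = queue.Nack(task.TaskID) // return to the queue for retry
	}
}
```

Unacknowledged tasks become visible again after VisibilityTimeout, and tasks that exhaust MaxRetries move to the configured dead-letter queue.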

type ProtocolError

type ProtocolError struct {
	Message string
}

ProtocolError represents a protocol-level error.

func (*ProtocolError) Error

func (e *ProtocolError) Error() string

type QueueConfig added in v0.4.1

type QueueConfig struct {
	Name              string
	VisibilityTimeout int    // milliseconds, default 30000
	MaxRetries        int    // default 3
	DeadLetterQueue   string // optional
}

QueueConfig represents queue configuration

type QueueKey added in v0.4.1

type QueueKey struct {
	QueueID  string
	Priority int64
	ReadyTs  int64 // timestamp in milliseconds
	Sequence uint64
	TaskID   string
}

QueueKey represents a composite key for queue entries

func DecodeQueueKey added in v0.4.4

func DecodeQueueKey(data []byte) (*QueueKey, error)

DecodeQueueKey decodes a queue key from bytes using positional parsing.

Key format: "queue/" + queueId + "/" + i64BE(priority) + "/" + u64BE(readyTs) + "/" + u64BE(sequence) + "/" + taskId

Binary fields may contain 0x2F ('/'), so split('/') is NOT safe.

func (*QueueKey) Encode added in v0.4.1

func (qk *QueueKey) Encode() []byte

Encode encodes the queue key to bytes

type QueueStats added in v0.4.1

type QueueStats struct {
	Pending       int `json:"pending"`
	Claimed       int `json:"claimed"`
	Completed     int `json:"completed"`
	DeadLettered  int `json:"dead_lettered"`
	TotalEnqueued int `json:"total_enqueued"`
	TotalDequeued int `json:"total_dequeued"`
}

QueueStats represents queue statistics

type RawAssertion added in v0.4.1

type RawAssertion struct {
	ID         string                 `json:"id,omitempty"`        // Unique identifier
	Fact       map[string]interface{} `json:"fact"`                // Factual claim
	Source     string                 `json:"source"`              // Source identifier
	Confidence float64                `json:"confidence"`          // Source confidence [0-1]
	Timestamp  int64                  `json:"timestamp,omitempty"` // Unix timestamp
}

RawAssertion represents an immutable assertion event from a source. Raw assertions are never modified, only superseded by new events.

type Relation added in v0.4.1

type Relation struct {
	ID           string                 `json:"id,omitempty"`         // Unique identifier
	FromEntity   string                 `json:"from_entity"`          // Source entity
	RelationType string                 `json:"relation_type"`        // Relationship type
	ToEntity     string                 `json:"to_entity"`            // Target entity
	Properties   map[string]interface{} `json:"properties,omitempty"` // Relation attributes
	Confidence   float64                `json:"confidence,omitempty"` // Extraction confidence [0-1]
	Provenance   string                 `json:"provenance,omitempty"` // Source reference
	Timestamp    int64                  `json:"timestamp,omitempty"`  // Unix timestamp
}

Relation represents a typed relationship between two entities. Relations capture semantic connections with optional properties.

type RetrievalConfig added in v0.4.1

type RetrievalConfig struct {
	Limit           int     `json:"limit,omitempty"`            // Maximum results to return
	LexicalWeight   float64 `json:"lexical_weight,omitempty"`   // BM25 weight [0-1]
	SemanticWeight  float64 `json:"semantic_weight,omitempty"`  // Vector weight [0-1]
	RRFConstant     int     `json:"rrf_constant,omitempty"`     // Reciprocal Rank Fusion constant
	PrefilterRatio  float64 `json:"prefilter_ratio,omitempty"`  // Pre-filter expansion ratio
	UsePrefiltering bool    `json:"use_prefiltering,omitempty"` // Enable pre-filtering
}

RetrievalConfig controls hybrid search behavior.

type RetrievalResponse added in v0.4.1

type RetrievalResponse struct {
	Results      []RetrievalResult `json:"results"`
	QueryTime    int64             `json:"query_time"` // milliseconds
	TotalResults int               `json:"total_results"`
}

RetrievalResponse is the response returned by a retrieval query.

type RetrievalResult added in v0.4.1

type RetrievalResult struct {
	ID          string                 `json:"id"`
	Score       float64                `json:"score"`
	Content     string                 `json:"content"`
	Metadata    map[string]interface{} `json:"metadata,omitempty"`
	VectorRank  *int                   `json:"vector_rank,omitempty"`
	KeywordRank *int                   `json:"keyword_rank,omitempty"`
}

RetrievalResult is a single result from a hybrid search.

type ScanIteratorInterface added in v0.4.4

type ScanIteratorInterface interface {
	Next() (key []byte, value []byte, ok bool)
	Close()
	Err() error
}

ScanIteratorInterface is the interface for scan iterators

type ScanPrefixDB added in v0.4.4

type ScanPrefixDB interface {
	Put(key, value []byte) error
	Get(key []byte) ([]byte, error)
	ScanPrefix(prefix []byte) ScanIteratorInterface
}

ScanPrefixDB is the interface databases must implement for scan-based operations

type SearchRequest added in v0.4.1

type SearchRequest struct {
	QueryVector     []float32              `json:"query_vector"`
	K               int                    `json:"k"`
	Filter          map[string]interface{} `json:"filter,omitempty"`
	IncludeMetadata bool                   `json:"include_metadata"`
}

SearchRequest represents a vector search request

type SearchResult added in v0.4.1

type SearchResult struct {
	ID       string                 `json:"id"`
	Score    float32                `json:"score"`
	Vector   []float32              `json:"vector,omitempty"`
	Metadata map[string]interface{} `json:"metadata,omitempty"`
}

SearchResult represents a single search result

type SemanticCache added in v0.4.1

type SemanticCache struct {
	// contains filtered or unexported fields
}

SemanticCache provides semantic caching for LLM responses

func NewSemanticCache added in v0.4.1

func NewSemanticCache(db *embedded.Database, cacheName string) *SemanticCache

NewSemanticCache creates a new semantic cache

func (*SemanticCache) Clear added in v0.4.1

func (c *SemanticCache) Clear() (int, error)

Clear removes all entries in this cache

func (*SemanticCache) Delete added in v0.4.1

func (c *SemanticCache) Delete(key string) error

Delete removes a specific cache entry

func (*SemanticCache) Get added in v0.4.1

func (c *SemanticCache) Get(queryEmbedding []float32, threshold float32) (*SemanticCacheHit, error)

Get retrieves cached response by similarity

func (*SemanticCache) PurgeExpired added in v0.4.1

func (c *SemanticCache) PurgeExpired() (int, error)

PurgeExpired removes expired entries

func (*SemanticCache) Put added in v0.4.1

func (c *SemanticCache) Put(key, value string, embedding []float32, ttlSeconds int64, metadata map[string]interface{}) error

Put stores a cached response

func (*SemanticCache) Stats added in v0.4.1

func (c *SemanticCache) Stats() (*SemanticCacheStats, error)

Stats returns cache statistics

type SemanticCacheEntry added in v0.4.1

type SemanticCacheEntry struct {
	Key       string                 `json:"key"`
	Value     string                 `json:"value"`
	Embedding []float32              `json:"embedding"`
	Timestamp int64                  `json:"timestamp"`
	TTL       int64                  `json:"ttl,omitempty"`
	Metadata  map[string]interface{} `json:"metadata,omitempty"`
}

SemanticCacheEntry represents a cached response with embedding

type SemanticCacheHit added in v0.4.1

type SemanticCacheHit struct {
	SemanticCacheEntry
	Score float32 `json:"score"`
}

SemanticCacheHit represents a cache hit with similarity score

type SemanticCacheStats added in v0.4.1

type SemanticCacheStats struct {
	Count       int     `json:"count"`
	Hits        int     `json:"hits"`
	Misses      int     `json:"misses"`
	HitRate     float64 `json:"hit_rate"`
	MemoryUsage int64   `json:"memory_usage"`
}

SemanticCacheStats represents cache statistics

type ServerError

type ServerError struct {
	Message string
}

ServerError represents an error returned by the server.

func (*ServerError) Error

func (e *ServerError) Error() string

type SochDBError

type SochDBError struct {
	Op      string
	Message string
}

SochDBError represents a general SochDB error.

func (*SochDBError) Error

func (e *SochDBError) Error() string

type SpanData

type SpanData struct {
	SpanID       string `json:"span_id"`
	Name         string `json:"name"`
	StartUs      int64  `json:"start_us"`
	ParentSpanID string `json:"parent_span_id"`
	Status       string `json:"status"`
	EndUs        int64  `json:"end_us,omitempty"`
	DurationUs   int64  `json:"duration_us,omitempty"`
}

SpanData represents a span in a trace.

type SplitBrainError added in v0.4.1

type SplitBrainError struct {
	Message string
}

SplitBrainError is returned when a split-brain condition is detected.

func (*SplitBrainError) Error added in v0.4.1

func (e *SplitBrainError) Error() string

func (*SplitBrainError) Is added in v0.4.1

func (e *SplitBrainError) Is(target error) bool

type StorageStats

type StorageStats struct {
	MemtableSizeBytes  uint64
	WALSizeBytes       uint64
	ActiveTransactions int
}

StorageStats contains database storage statistics.

type Task added in v0.4.1

type Task struct {
	TaskID      string                 `json:"task_id"`
	Priority    int64                  `json:"priority"`
	Payload     []byte                 `json:"payload"`
	State       TaskState              `json:"state"`
	EnqueuedAt  int64                  `json:"enqueued_at"`
	ClaimedAt   *int64                 `json:"claimed_at,omitempty"`
	ClaimedBy   string                 `json:"claimed_by,omitempty"`
	CompletedAt *int64                 `json:"completed_at,omitempty"`
	Retries     int                    `json:"retries"`
	Metadata    map[string]interface{} `json:"metadata,omitempty"`
}

Task represents a queue task

type TaskState added in v0.4.1

type TaskState string

TaskState represents the state of a queue task

const (
	TaskStatePending      TaskState = "pending"
	TaskStateClaimed      TaskState = "claimed"
	TaskStateCompleted    TaskState = "completed"
	TaskStateDeadLettered TaskState = "dead_lettered"
)

type TraceInfo

type TraceInfo struct {
	TraceID    string `json:"trace_id"`
	RootSpanID string `json:"root_span_id"`
}

TraceInfo contains trace and span IDs.

type TransactionError

type TransactionError struct {
	Message string
}

TransactionError represents a transaction-related error.

func (*TransactionError) Error

func (e *TransactionError) Error() string

type TraverseResult

type TraverseResult struct {
	Nodes []GraphNode `json:"nodes"`
	Edges []GraphEdge `json:"edges"`
}

TraverseResult contains the result of a graph traversal.

type TruncationStrategy added in v0.4.1

type TruncationStrategy string

TruncationStrategy represents how to truncate context

const (
	TailDrop     TruncationStrategy = "tail_drop"    // Drop from end
	HeadDrop     TruncationStrategy = "head_drop"    // Drop from beginning
	Proportional TruncationStrategy = "proportional" // Proportional across sections
)

type WireFormat

type WireFormat int

WireFormat represents output format for query results sent to clients.

These formats are optimized for transmission efficiency and client-side processing.

const (
	// WireFormatToon is TOON format (default, 40-66% fewer tokens than JSON).
	// Optimized for LLM consumption.
	WireFormatToon WireFormat = iota

	// WireFormatJSON is standard JSON for compatibility.
	WireFormatJSON

	// WireFormatColumnar is raw columnar format for analytics.
	// More efficient for large result sets with projection pushdown.
	WireFormatColumnar
)

func ParseWireFormat

func ParseWireFormat(s string) (WireFormat, error)

ParseWireFormat parses a WireFormat from a string.

func (WireFormat) String

func (f WireFormat) String() string

String returns the string representation of WireFormat.

Directories

Path Synopsis
cmd
direct-posthog-test command
Direct PostHog test - no SochDB dependencies
sochdb-bulk command
sochdb-server command
Package embedded provides direct FFI access to SochDB native library
embedded command
namespace command
queue command
