protoregistry

Published: May 6, 2026 License: MIT Imports: 10 Imported by: 0


A multi-namespace protobuf schema registry for Go, with versioning, staging, backward compatibility enforcement, and hot-swap capabilities.

Status: v0.x pre-stable. The gRPC service in proto/protoregistry/v1/ is the durable integration point; the Go library API may change at minor versions until v1.0. See STABILITY.md for the full contract.

Protoregistry compiles .proto files at runtime using protocompile, stores versioned schemas in PostgreSQL with content-addressable deduplication, and serves compiled descriptors for dynamic message creation and validation via gRPC.

Features

  • Multi-namespace isolation — each namespace is a self-contained scope (chroot model); proto imports resolve only within the same namespace
  • Two-phase staging — publish compiles and stages; promote atomically swaps all staged versions to current, enabling coordinated multi-schema changes
  • Backward compatibility enforcement — breaking changes (field deletion, type changes, cardinality changes) are rejected at promote time
  • Content-addressable storage — proto sources are normalized, SHA-256 hashed, and deduplicated; rollback is a pointer move with zero data duplication
  • Hot-swap — readers access compiled descriptors via atomic.Pointer; swaps are instant and lock-free
  • Dynamic message support — create dynamicpb.Message instances from any registered schema at runtime
  • Custom built-in types — extend the standard Google well-known types with your own shared protos via the reserved __builtins__ namespace
  • Well-known type shadowing protection — publishing files that shadow Google well-known types is rejected by default; requires explicit force flag
  • Startup recovery — rebuilds in-memory state from pre-compiled descriptors in Postgres without recompilation
  • CLI tool — protoregistry binary for managing the registry and running the gRPC server
  • Go client SDK — protoregistry/client provides a remote-backed protoreflect.MessageTypeResolver / protodesc.Resolver with eager population, polling refresh, version pinning, and atomic hot-swap (see Go client SDK)

Quick Start

Running the server
# Build the binary
go build -o protoregistry ./cmd/protoregistry/

# Start the server (runs migrations and listens on :50051)
protoregistry serve --db "postgres://localhost:5432/protoregistry?sslmode=disable" --migrate --listen :50051

# Optionally bootstrap built-in types from a directory
protoregistry serve --db "$DATABASE_URL" --migrate --builtins ./company-types/
Using the CLI
# Create a namespace
protoregistry namespace create acme

# Push proto files (publish + stage)
protoregistry push acme billing ./protos/billing/

# Promote staged changes to current
protoregistry promote acme

# Load an entire directory of proto files in dependency order
protoregistry load acme ./protos/ --promote

# List namespaces and schemas
protoregistry namespace list
protoregistry schema list acme
protoregistry schema info acme billing

# Retrieve source or compiled descriptors
protoregistry schema source acme billing --version 2
protoregistry schema descriptor acme billing --out billing.binpb

# Rollback to a previous version
protoregistry rollback acme billing 1 --promote

# Discard all staged changes
protoregistry discard acme
Using as a Go library
package main

import (
    "context"
    "fmt"
    "log"

    "github.com/jackc/pgx/v5/pgxpool"

    protoregistry "github.com/trendvidia/protoregistry"
    "github.com/trendvidia/protoregistry/store/postgres"
)

func main() {
    ctx := context.Background()

    pool, err := pgxpool.New(ctx, "postgres://localhost:5432/protoregistry?sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    defer pool.Close()

    store := postgres.New(pool)
    reg := protoregistry.New(store)

    if err := reg.Restore(ctx); err != nil {
        log.Fatal(err)
    }

    result, err := reg.Publish(ctx, &protoregistry.PublishRequest{
        NamespaceID: "acme",
        SchemaID:    "billing",
        Sources: map[string][]byte{
            "billing/config.proto": []byte(`
                syntax = "proto3";
                package billing;
                message Config {
                    string name = 1;
                    int32 timeout_ms = 2;
                }
            `),
        },
        CreatedBy: "deploy-bot",
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("Published version %d (no_change=%v)\n", result.Version, result.NoChange)

    promoted, err := reg.Promote(ctx, "acme")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("Promoted %d schema(s)\n", len(promoted.Promoted))

    snap := reg.Current("acme", "billing")
    msg, _ := snap.NewMessage("billing.Config")
    fmt.Printf("Created dynamic message: %s\n", msg.ProtoReflect().Descriptor().FullName())
}

CLI Reference

Global flags
| Flag | Env var | Default | Description |
|---|---|---|---|
| `--server`, `-s` | `PROTOREGISTRY_SERVER` | `localhost:50051` | gRPC server address |
| `--namespace`, `-n` | | | Default namespace for commands |
| `--output`, `-o` | | `table` | Output format: `table` or `json` |
| `--token` | `PROTOREGISTRY_TOKEN` | | Bearer token for authentication |
| `--tls` | | `false` | Connect over TLS using the system root CA pool |
| `--tls-ca` | | | PEM-encoded CA file to verify the server cert (implies `--tls`) |
| `--tls-cert` | | | PEM-encoded client certificate for mTLS (implies `--tls`) |
| `--tls-key` | | | PEM-encoded client key for mTLS (implies `--tls`) |
| `--tls-server-name` | | | Override the server name used for cert verification (implies `--tls`) |
| `--tls-skip-verify` | | `false` | Skip server cert verification — testing only (implies `--tls`) |
Commands
| Command | Description |
|---|---|
| `serve` | Start the gRPC registry server |
| `namespace list` | List all namespaces |
| `namespace create <id>` | Create a namespace |
| `schema list [namespace]` | List schemas in a namespace |
| `schema info [namespace] <schema>` | Show schema details |
| `schema source [namespace] <schema>` | Show proto source files |
| `schema descriptor [namespace] <schema>` | Get compiled FileDescriptorSet |
| `push [namespace] <schema> <path...>` | Publish proto files as a schema version |
| `load [namespace] <path>` | Load all protos from a directory in dependency order |
| `promote [namespace]` | Promote all staged versions to current |
| `discard [namespace]` | Discard all staged versions |
| `rollback [namespace] <schema> <version>` | Stage a previous version |
serve flags
| Flag | Env var | Default | Description |
|---|---|---|---|
| `--db` | `DATABASE_URL` | | PostgreSQL connection URL (required) |
| `--listen` | | `:50051` | gRPC listen address |
| `--builtins` | | | Directory of built-in `.proto` files to bootstrap |
| `--migrate` | | `false` | Run database migrations on startup |
push / load flags
| Flag | Default | Description |
|---|---|---|
| `--created-by` | `$USER` | Author of this version |
| `--promote` | `false` | Promote immediately after publishing |
| `--force` | `false` | Allow shadowing well-known types |
| `--metadata` | | Key=value metadata pairs (`push` only) |

Staging Workflow

Schema updates follow a two-phase model, similar to git staging:

1. Publish    ->  compile + store + stage
2. Promote    ->  compat check + atomic swap (all staged -> current)

Multiple schemas can be staged independently, then promoted together as a coordinated set. The compiler resolves imports against the "proposed" state (staged where available, current otherwise), so cross-schema changes compile against each other before going live.

Developer pushes "common" v3 to staging
Developer pushes "billing" v5 to staging  (compiles against common v3)
Developer promotes                         ->  both go live atomically

Rollback stages a previous version, then promotes it:

protoregistry rollback acme billing 1    # stages v1
protoregistry promote acme               # v1 becomes current

Built-in Types

The compiler provides Google well-known types (google/protobuf/timestamp.proto, etc.) automatically via protocompile. To add your own shared types available to all namespaces, publish them to the reserved __builtins__ namespace:

# Push company-wide shared types as built-ins
protoregistry push __builtins__ company-types ./protos/company/
protoregistry promote __builtins__

# Now any namespace can import them:
#   import "company/base.proto";

The import resolution order during compilation is:

  1. Namespace sources — the schema's own files + other schemas in the same namespace
  2. Built-ins — files from the __builtins__ namespace
  3. Google well-known types — google/protobuf/*.proto (provided by protocompile)
Well-known type protection

Publishing a file that shadows a Google well-known type (e.g., google/protobuf/timestamp.proto) is rejected by default. The check exists because shadowing happens silently — the protocompile resolver picks the namespace-local file before falling back to standard imports, so a typo'd or malicious filename can replace the well-known type for every schema in the namespace and break compilation in confusing ways down the line.

When you genuinely need to substitute a well-known type (for example, to provide a richer Timestamp with extra fields), pass --force:

protoregistry push __builtins__ custom-timestamp ./my-timestamp/ --force

This flag is intended for operator use; it should not be exposed to self-service publishers.

The server can also bootstrap built-ins from a directory on disk at startup:

protoregistry serve --db "$DATABASE_URL" --builtins ./company-types/

Architecture

.proto source -> protocompile -> compiled descriptors
                                      |
                            +-------------------+
                            |     Registry      |
                            |  (orchestrator)   |
                            +--------+----------+
                     +---------------+---------------+
                     v               v               v
              +-------------+ +------------+ +--------------+
              |  Namespace  | |   Store    | |    Compat    |
              | (in-memory) | | (Postgres) | |  (checker)   |
              +-------------+ +------------+ +--------------+
                     v
              +-------------+
              |  Snapshot   |  <- atomic.Pointer, lock-free reads
              | (immutable) |
              +-------------+
                     v
              +-------------+
              |  Resolver   |  <- protobuf-go bridge
              | (dynamicpb) |
              +-------------+

Database

Protoregistry uses PostgreSQL with sqlc for type-safe queries and goose for migrations.

# Run migrations
goose -dir migrations postgres "$DATABASE_URL" up

Storage uses a content-addressable design with a versioning indirection layer:

proto_blobs (namespace_id, sha256) -> original source
     ^
schema_version_files (version, filename) -> blob_sha256
     ^
schema_versions (version) -> compiled FileDescriptorSet + compiler_version
     ^
schemas (namespace_id, schema_id) -> current_version / staged_version

Same content submitted multiple times (or across tenants) is stored once. Rollback is a pointer move — no data is copied.

gRPC API

The RegistryService exposes the full lifecycle over gRPC:

| RPC | Description |
|---|---|
| `Publish` | Compile and stage a new schema version |
| `Promote` | Atomically move all staged versions to current |
| `DiscardStaging` | Clear all staged versions in a namespace |
| `Rollback` | Stage a previous version for promotion |
| `GetSchema` | Get schema metadata and version list |
| `ListSchemas` | List all schemas in a namespace |
| `GetDescriptor` | Get compiled FileDescriptorSet for a version |
| `GetSource` | Get original .proto source files for a version |
| `ListNamespaces` | List all namespaces |
| `CreateNamespace` | Create a new namespace |

See proto/protoregistry/v1/registry.proto for the full definition.

Type Resolution

The resolve package bridges namespace snapshots with protobuf-go's standard resolver interfaces:

import "github.com/trendvidia/protoregistry/resolve"

// Namespace-wide resolver — searches all schemas.
r := resolve.NewResolver(namespace)
mt, _ := r.FindMessageByName("billing.Config")
msg := dynamicpb.NewMessage(mt.Descriptor())

// Schema-scoped resolver.
sr := resolve.NewSchemaResolver(namespace, "billing")
msg, _ := sr.NewMessage("billing.Config")

Resolvers are live — they always read the current snapshot, so hot-swaps are immediately reflected.

Go client SDK

github.com/trendvidia/protoregistry/client is the Go SDK for services that consume descriptors from a running registry, as opposed to embedding the registry library in-process. The client is namespace-scoped and implements the same standard resolver interfaces (protoreflect.MessageTypeResolver, protoregistry.ExtensionTypeResolver, protodesc.Resolver) as the in-process resolve package, so call sites that read descriptors don't change when you swap embedded for remote.

import (
    "context"

    "github.com/trendvidia/protoregistry/client"
)

ctx := context.Background()
r, err := client.Dial(ctx, "registry.internal:50051", "billing")
if err != nil { /* ... */ }
defer r.Close()

desc, _ := r.FindDescriptorByName("billing.Config")
msg, _  := r.NewMessage("billing.Config")

Behavior:

  • Eager population. Dial / client.New fetches every schema in the namespace up front, so lookup misses surface at startup, not in the request path. Restrict to a subset with client.WithSchemas("foo", "bar").
  • Polling refresh (default 30s; client.WithRefreshInterval). A background goroutine re-fetches only schemas whose current version advanced and atomically swaps the snapshot. Failures are logged and survived (stale-while-error). Force a refresh with r.Refresh(ctx).
  • r.Pin(ctx, map[string]uint64) returns a derived resolver frozen at a specific schemaID → version map — useful for reproducible reads or replaying captured payloads against the exact version they were produced with.
  • r.Schema(schemaID) narrows lookups to one schema in the namespace — cheaper and immune to cross-schema FQN collisions.
  • Fail-loud collisions. If two schemas in the namespace export the same fully-qualified type name, client.New returns an error rather than silently picking one.
Hierarchical fallback

A Resolver can fall back to a parent registry when a local lookup misses. Useful for shared / well-known types across multiple namespaces, or for chaining a tenant Resolver behind a "common types" Resolver. Both the namespace-wide aggregate (FindFileByPath, FindExtensionByNumber) and each per-schema view inherit the same parent, so the fallback is reachable from every lookup tier.

Three options:

// 1. Explicit parent registries — most general.
client.WithFallback(parentFiles, parentTypes)

// 2. Chain another Resolver as the parent. Convenience over (1) — passes
//    the parent's nsFiles / nsTypes through. The parent must outlive
//    every child.
client.WithParent(commonsResolver)

// 3. Fall back to upstream protoregistry.GlobalFiles / GlobalTypes,
//    which have generated proto types compiled into the binary.
client.WithGlobalFallback()

Example — every per-tenant Resolver inherits a commons namespace:

commons, _ := client.Dial(ctx, addr, "commons", client.WithRefreshInterval(0))
defer commons.Close()

billing, _ := client.Dial(ctx, addr, "billing",
    client.WithParent(commons),
)
defer billing.Close()

// "shared.Trace" lives in the commons namespace; billing resolves it
// via the fallback chain.
desc, _ := billing.FindDescriptorByName("shared.Trace")

Local entries always shadow the parent — there is no fail-loud collision check across the parent boundary. The same fully-qualified name can therefore exist in both the local namespace and the parent; the local version wins.

Pinned Resolvers (returned by r.Pin(...)) inherit the parent's fallback chain. If the parent refreshes, the pinned view sees the new parent state via fallback even though its own local schemas are frozen. For a fully-frozen view, build an independent frozen parent and pass it via WithFallback.

The client pairs cleanly with protowire-go (the pxf / sbe codecs accept any protoreflect.MessageDescriptor), protojson, anypb, and dynamicpb — no adapter code needed:

import "github.com/trendvidia/protowire-go/encoding/pxf"

desc, _ := r.FindDescriptorByName("billing.Config")
msg, _  := pxf.UnmarshalDescriptor(pxfBytes, desc.(protoreflect.MessageDescriptor))
Note: this module pins the trendvidia/protobuf-go fork

protoregistry/client stores per-schema descriptors in *protoregistry.NamespacedFiles / *protoregistry.NamespacedTypes — the namespace-isolated registry types added in the trendvidia/protobuf-go fork. Those types do not exist in upstream google.golang.org/protobuf, so this module's go.mod carries:

replace google.golang.org/protobuf => github.com/trendvidia/protobuf-go v1.36.12

Go's replace directive does not propagate across module boundaries, so consuming binaries will still pull upstream protobuf-go by default when they depend on protoregistry. Without the same replace in the top-level binary's go.mod, the build fails — the namespace types are referenced by name and have no upstream equivalent.

Add the same replace to the binary's go.mod when you depend on protoregistry/client. The fork keeps the upstream import path, tracks upstream tags closely, and adds only the namespace registry + the dynamicpb.SetUnsafe family used by protowire-go. Code that compiles against upstream compiles against the fork unchanged.

If your project must use upstream google.golang.org/protobuf exactly, do not import protoregistry/client — call the gRPC service directly via the generated stubs in proto/protoregistry/v1 and store descriptors yourself with upstream *protoregistry.Files.

Development

go build ./...                  # Build all packages
go build ./cmd/protoregistry/   # Build the CLI/server binary
go test -race ./...             # All tests (needs Docker for Postgres integration tests)

sqlc generate                   # Regenerate SQL query code
Prerequisites
  • Go 1.26+
  • Docker (for integration tests)
  • protoc + protoc-gen-go + protoc-gen-go-grpc (for proto regeneration)
  • sqlc (for SQL code regeneration)

How protoregistry compares

protoregistry is designed for teams that want a schema registry they can embed, scope per-tenant, and run as a small Go binary against an existing PostgreSQL — without adopting a broader platform.

| Need | protoregistry | Buf Schema Registry | Confluent Schema Registry |
|---|---|---|---|
| Self-hosted (single Go binary + Postgres) | ✓ | hosted / BSR Pro | ✓ (Kafka-coupled) |
| Multi-tenant namespace isolation (chroot) | ✓ | modules | subjects |
| Two-phase staging + atomic multi-schema promote | ✓ | drafts | |
| Backward-compat enforcement at promote | ✓ | | ✓ (per-subject) |
| Embeddable as a Go library | ✓ | | |
| Lock-free hot-swap of compiled descriptors | ✓ | n/a | n/a |
| Built-in dynamic message creation | ✓ (dynamicpb) | | |
| Wire-format support | protobuf | protobuf | Avro / JSON / protobuf |

If you need a polished SaaS, lint rules, code generation, or a wide ecosystem of integrations, the Buf Schema Registry is the better choice. If you are already standardized on Kafka, Confluent's registry integrates natively with the broker. protoregistry's niche is embed-and-control: a small library + service you can run inside your own infrastructure with strong tenant isolation and a coordinated promotion workflow.

License

This project is licensed under the MIT License — Copyright (c) 2026 TrendVidia, LLC.

Documentation

Overview

Package protoregistry provides a multi-namespace protobuf schema registry with versioning, staging, and hot-swap capabilities.

It uses protocompile to compile .proto source files at runtime, stores versioned schemas in Postgres with content-addressable deduplication, and serves compiled descriptors for dynamic message creation and validation.

Each namespace is an isolated scope — proto imports resolve only within the same namespace, similar to a chroot. Schemas within a namespace are versioned independently, with a staging mechanism for coordinated multi-schema promotions.

Index

Constants

const BuiltinsNamespace = "__builtins__"

BuiltinsNamespace is the reserved namespace for built-in proto files. Schemas published here are available to all namespaces during compilation.

Variables

This section is empty.

Functions

This section is empty.

Types

type Option

type Option func(*Registry)

Option configures a Registry.

func WithCompiler

func WithCompiler(c *compiler.Compiler) Option

WithCompiler sets a custom compiler for the registry.

type PromoteResult

type PromoteResult struct {
	Promoted []store.PromotedSchema
}

PromoteResult contains the outcome of a promote operation.

type PublishRequest

type PublishRequest struct {
	NamespaceID string
	SchemaID    string
	Sources     map[string][]byte // filename → proto source content
	CreatedBy   string
	Metadata    map[string]string
	Force       bool // when true, allows publishing files that shadow well-known types
}

PublishRequest contains the parameters for publishing a new schema version.

type PublishResult

type PublishResult struct {
	Version     uint64
	Fingerprint string
	Snapshot    *snapshot.Snapshot
	NoChange    bool // true if the content is identical to the current staged/current version
}

PublishResult contains the outcome of a publish operation.

type Registry

type Registry struct {
	// contains filtered or unexported fields
}

Registry is the top-level orchestrator that ties together compilation, storage, namespace management, and compatibility checking.

func New

func New(s store.Store, opts ...Option) *Registry

New creates a new Registry.

func (*Registry) Current

func (r *Registry) Current(namespaceID, schemaID string) *snapshot.Snapshot

Current returns the current snapshot for a schema, or nil if none.

func (*Registry) DiscardStaging

func (r *Registry) DiscardStaging(ctx context.Context, namespaceID string) error

DiscardStaging clears all staged versions in a namespace without promoting.

func (*Registry) Namespaces

func (r *Registry) Namespaces() *namespace.Registry

Namespaces returns the namespace registry for direct access.

func (*Registry) Promote

func (r *Registry) Promote(ctx context.Context, namespaceID string) (*PromoteResult, error)

Promote atomically moves all staged versions to current within a namespace. Before promoting, it runs backward compatibility checks on each staged schema against its current version.

func (*Registry) Publish

func (r *Registry) Publish(ctx context.Context, req *PublishRequest) (*PublishResult, error)

Publish compiles new proto sources and stages the result. The new version is not yet visible to consumers — call Promote to make it current.

The compilation resolves imports against the namespace's proposed state (staged versions where they exist, current otherwise), enabling coordinated multi-schema changes.

func (*Registry) Restore

func (r *Registry) Restore(ctx context.Context) error

Restore loads all current schema versions from the store and rebuilds the in-memory namespace state. Call this at startup.

func (*Registry) Rollback

func (r *Registry) Rollback(ctx context.Context, namespaceID, schemaID string, version uint64, opts RollbackOptions) error

Rollback stages a previous version of a schema for promotion.

Unless opts.Force is set, Rollback runs the same compat check as a forward Publish, treating the current version as "old" and the target version as "new". This catches the common screw-up of rolling back to a release that lacks API surface (fields, methods, enum values) added in the meantime, which would silently break consumers when the rollback is promoted.

The version must already exist in the store.

func (*Registry) Staged

func (r *Registry) Staged(namespaceID, schemaID string) *snapshot.Snapshot

Staged returns the staged snapshot for a schema, or nil if none.

type RollbackOptions

type RollbackOptions struct {
	// Force bypasses the API-compat check that would otherwise reject a
	// rollback whose target is not backward-compatible with the current
	// version. Use sparingly: skipping the check shifts the cost to
	// downstream consumers, who may break unexpectedly when the rollback
	// is promoted.
	Force bool
}

RollbackOptions configures a Rollback call.

Directories

Path Synopsis
Package client provides a remote-backed protobuf descriptor resolver that fetches schemas from a running protoregistry server over gRPC.
internal/clienttest
Package clienttest is a test-only harness that wires a real protoregistry server (Postgres + gRPC over bufconn) and exposes a *grpc.ClientConn ready to be passed to client.New, plus helpers for the publish/promote dance.
cmd
protoregistry command
Package compat checks backward compatibility between schema versions.
Package compiler wraps protocompile to provide namespace-scoped compilation of proto source files.
internal
ctl
Package ctl implements the protoregistry CLI commands.
Package migrations embeds the SQL migration files for use by goose.
Package namespace provides the isolation boundary for schemas.
proto
Package resolve bridges the protoregistry namespace snapshots with protobuf-go's protoregistry types, enabling runtime type resolution for external consumers.
Package server implements the gRPC RegistryService.
Package snapshot provides immutable, compiled descriptor sets that are safe for concurrent access.
Package store defines the persistence interface for the schema registry.
postgres
Package postgres implements the store.Store interface using PostgreSQL with sqlc-generated query code and pgx as the driver.
postgres/pgtest
Package pgtest provides a shared test helper for spinning up a PostgreSQL container with migrations applied.
