connpool

package module v1.0.0
Published: Mar 26, 2026 License: MIT Imports: 8 Imported by: 0

A production-grade TCP connection pool for Go. Designed for high-throughput, low-latency network workloads where connection reuse matters.

Why This Pool?

Most Go projects that need TCP connection pooling either use archived/unmaintained libraries (like fatih/pool) or build their own. This pool fills the gap with the features that production systems actually need:

  • Health check on Get() — stale/dead connections are detected before use
  • Idle timeout — connections sitting unused are evicted automatically
  • Max lifetime with jitter — prevents thundering herd reconnection storms
  • Background evictor — proactively cleans stale connections (don't wait for Get())
  • Context-aware Get() — respects deadlines and cancellation
  • Zero-alloc fast path — channel-based pool with 0 allocs on Get/Put (~139ns/op)
  • Comprehensive metrics — idle closed, lifetime closed, ping failures, wait time

Install

go get github.com/soyvural/connpool

Quick Start

package main

import (
    "context"
    "fmt"
    "net"
    "time"

    "github.com/soyvural/connpool"
)

func main() {
    cfg := connpool.Config{
        MinSize:     5,
        MaxSize:     20,
        Increment:   2,
        IdleTimeout: 30 * time.Second,
        MaxLifetime: 5 * time.Minute,
        Ping: func(c net.Conn) error {
            // Quick health check: try a zero-byte read with short deadline.
            // Timeout = healthy (nothing to read). Any other error = dead.
            c.SetReadDeadline(time.Now().Add(time.Millisecond))
            buf := make([]byte, 1)
            if _, err := c.Read(buf); err != nil {
                if netErr, ok := err.(net.Error); ok && netErr.Timeout() {
                    c.SetReadDeadline(time.Time{})
                    return nil // timeout = connection is alive
                }
                return err // connection is dead
            }
            c.SetReadDeadline(time.Time{})
            return nil
        },
    }

    factory := func() (net.Conn, error) {
        return net.DialTimeout("tcp", "localhost:9090", 10*time.Second)
    }

    p, err := connpool.New(cfg, factory, connpool.WithName("my-pool"))
    if err != nil {
        panic(err)
    }
    defer p.Stop()

    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    conn, err := p.Get(ctx)
    if err != nil {
        panic(err)
    }
    defer conn.Close() // returns to pool (or destroys if marked unusable)

    // Use conn...
    fmt.Fprintf(conn, "hello\n")

    // If write fails, mark unusable before closing:
    // p.MarkUnusable(conn)

    // Check pool health:
    stats := p.Stats()
    fmt.Printf("Pool %s: size=%d active=%d available=%d idle_closed=%d lifetime_closed=%d ping_failed=%d\n",
        p.Name(), stats.Size(), stats.Active(), stats.Available(),
        stats.IdleClosed(), stats.LifetimeClosed(), stats.PingFailed())
}

Configuration

Field          Type                  Default         Description
MinSize        int                   0               Connections created on startup; maintained by the evictor.
MaxSize        int                   required        Maximum total connections (idle + active).
Increment      int                   required, >= 1  How many connections to create when the pool needs to grow.
IdleTimeout    Duration              0 (disabled)    Close connections idle longer than this.
MaxLifetime    Duration              0 (disabled)    Close connections older than this; 10% jitter applied automatically.
Ping           func(net.Conn) error  nil (disabled)  Health check called on Get() before returning a connection.
EvictInterval  Duration              30s             Background evictor frequency. Set -1 to disable.
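
As a concrete example, a pool for a service that can start cold but must cap fan-out might be configured like this (field names are from the table above; the values are illustrative, not recommendations):

```go
cfg := connpool.Config{
	MaxSize:       50,               // hard cap on total connections (idle + active)
	Increment:     5,                // grow in batches of five
	IdleTimeout:   time.Minute,      // evict connections idle longer than 1m
	MaxLifetime:   15 * time.Minute, // recycle long-lived connections (10% jitter applied)
	EvictInterval: 10 * time.Second, // run the evictor more often than the 30s default
}
```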

How It Works

Get(ctx) flow:
  1. Try non-blocking channel read (fast path, ~139ns)
  2. Check health: lifetime -> idle timeout -> ping
  3. If unhealthy, discard and retry (up to 3 times)
  4. If no idle conn available, grow pool (up to MaxSize)
  5. If at MaxSize, block until conn returned or ctx cancelled

Put (via conn.Close()) flow:
  1. If marked unusable -> destroy, decrement size
  2. If pool stopped -> destroy
  3. Stamp lastUsed time, push to channel
  4. If channel full -> destroy (overflow)

Background evictor (every EvictInterval):
  1. Drain channel, health-check each connection
  2. Discard stale/expired connections
  3. Replenish to MinSize if needed

Metrics

All metrics are available via pool.Stats():

Metric            Description
Size()            Total connections (idle + active)
Available()       Idle connections in the pool
Active()          Connections currently checked out
Request()         Total Get() calls
Success()         Successful Get() calls
IdleClosed()      Connections closed due to idle timeout
LifetimeClosed()  Connections closed due to max lifetime
PingFailed()      Connections discarded due to ping failure
WaitCount()       Get() calls that had to wait (pool at max)
WaitTime()        Cumulative time spent waiting

Benchmarks

goos: darwin
goarch: arm64
cpu: Apple M2 Pro
BenchmarkGetPut_Sequential-10    8,556,068    139.9 ns/op    0 B/op    0 allocs/op
BenchmarkGetPut_Parallel-10      5,075,728    245.1 ns/op    0 B/op    0 allocs/op
BenchmarkGetPut_WithPing-10        216,386   5571 ns/op     81 B/op    2 allocs/op
BenchmarkGetPut_Contended-10     2,405,068    478.7 ns/op    0 B/op    0 allocs/op

Examples

Working examples are in the examples/ directory:

Example        Description
tcp-echo       Basic pool usage with a TCP echo server
redis-proxy    Connection pooling with Ping health checks against Redis
load-balancer  Round-robin load balancing across multiple backends using independent pools

# TCP echo (start: ncat -l -k -p 9090 --sh-exec "cat")
go run ./examples/tcp-echo

# Redis proxy (start: docker run -d -p 6379:6379 redis:alpine)
go run ./examples/redis-proxy

# Load balancer (start two ncat echo servers on ports 9001 and 9002)
go run ./examples/load-balancer

Design Decisions

Channel-based pool over mutex+slice: Channels give us natural blocking semantics for the wait-for-connection path, and the non-blocking select/default pattern makes the fast path extremely cheap (~139ns, 0 allocs).

Max lifetime with jitter: When all connections are created at the same time (startup), they'd all expire at the same time — causing a reconnection storm ("thundering herd"). Adding 10% random jitter spreads expiration across time.

Health check order: Lifetime check -> idle check -> ping. The cheapest checks (time comparisons) run first. The expensive check (network ping) only runs if the connection passed the time-based checks.

Background evictor: Without it, stale connections only get cleaned on Get(). If the pool is idle (no Get() calls), dead connections accumulate. The evictor proactively cleans them and maintains the minimum pool size.

Ideal Connection Pool Properties

This pool was designed based on patterns from production-grade pools at scale:

Property                  Source                            Status
Idle timeout              pgx, go-redis, Vitess             Implemented
Max lifetime with jitter  pgx, Vitess                       Implemented
Health check on borrow    go-redis, Apache Commons Pool     Implemented
Background evictor        pgx, Apache Commons Pool, Vitess  Implemented
Context-aware Get         Vitess, pgx                       Implemented
Zero-alloc fast path      go-redis                          Achieved
Comprehensive metrics     Vitess                            Implemented
Min idle maintenance      pgx                               Implemented (via evictor)
MarkUnusable              fatih/pool                        Implemented

Documentation

Index

Constants

This section is empty.

Variables

var (
	ErrClosed    = fmt.Errorf("pool is closed")
	ErrExhausted = fmt.Errorf("pool is exhausted")
)

Functions

This section is empty.

Types

type Config

type Config struct {
	// MinSize is the number of connections created on startup and maintained
	// by the background evictor. Must be >= 0.
	MinSize int

	// MaxSize is the maximum number of connections (idle + active). Must be > 0.
	MaxSize int

	// Increment is how many connections to create at once when the pool needs
	// to grow. Must be >= 1 and <= MaxSize - MinSize.
	Increment int

	// IdleTimeout is how long a connection can sit idle before being evicted.
	// Zero means no idle timeout.
	IdleTimeout time.Duration

	// MaxLifetime is the maximum duration a connection can live since creation.
	// Connections older than this are closed on Get() and by the evictor.
	// A jitter of up to 10% is applied to prevent thundering herd.
	// Zero means no max lifetime.
	MaxLifetime time.Duration

	// Ping is an optional health check function called on Get() before
	// returning a connection. If it returns an error, the connection is
	// discarded and a new one is tried.
	Ping PingFunc

	// EvictInterval is how often the background evictor runs.
	// Zero defaults to 30 seconds. Set to -1 to disable the evictor.
	EvictInterval time.Duration
}

Config controls pool behavior.

type Factory

type Factory func() (net.Conn, error)

Factory creates a new net.Conn. It is called when the pool needs to grow.

type Option

type Option func(p *pool) error

Option configures a pool.

func WithName

func WithName(name string) Option

WithName sets the pool name. If not provided, an auto-generated name is used.

type PingFunc

type PingFunc func(net.Conn) error

PingFunc checks whether a connection is still alive. Return nil if the connection is healthy, or an error to discard it.

type Pool

type Pool interface {
	// Name returns the pool name.
	Name() string

	// Get returns a healthy connection from the pool.
	// It blocks until a connection is available, the context is cancelled,
	// or the pool is closed.
	Get(ctx context.Context) (net.Conn, error)

	// Stop shuts down the pool and closes all connections.
	Stop() error

	// Stats returns a snapshot of pool statistics.
	Stats() Stats

	// MarkUnusable marks a connection so it will be destroyed instead of
	// returned to the pool on Close().
	MarkUnusable(conn net.Conn)
}

Pool is a thread-safe connection pool for net.Conn.

func New

func New(cfg Config, factory Factory, options ...Option) (Pool, error)

New returns a connection Pool. The factory function is called to create new connections. MinSize connections are created immediately.

type Stats

type Stats interface {
	// Available is the number of idle connections in the pool.
	Available() int

	// Active is the number of connections currently checked out.
	Active() int

	// Size is the total number of connections (idle + active).
	Size() int

	// Request is the total number of Get() calls.
	Request() int

	// Success is the total number of successful Get() calls.
	Success() int

	// IdleClosed is the total number of connections closed due to idle timeout.
	IdleClosed() int

	// LifetimeClosed is the total number of connections closed due to max lifetime.
	LifetimeClosed() int

	// PingFailed is the total number of connections discarded due to ping failure.
	PingFailed() int

	// WaitCount is the total number of Get() calls that had to wait for a connection.
	WaitCount() int

	// WaitTime is the cumulative time spent waiting for connections.
	WaitTime() time.Duration
}

Stats is a snapshot of pool statistics.

Directories

Path Synopsis
examples
load-balancer command
Example: Round-robin load balancer across multiple backends using connpool.
redis-proxy command
Example: Simple Redis PING proxy using connpool.
tcp-echo command
Example: TCP echo client using connpool.
