Published: May 11, 2026 License: MIT Imports: 7 Imported by: 0


go-brrr


Brotli compression library for Go (RFC 7932), with encoder and decoder support.

Highlights

  • No C toolchain. Builds with standard Go tooling.
  • Faster than other pure-Go brotli libraries at every quality level we measure (see Benchmarks).
  • Even faster than cgo brotli (google/brotli/go/cbrotli) on quality levels 2–9.
  • Compound dictionaries.
  • Encoder tuning. LGWin (window size) and SizeHint (expected total input size) are exposed via WriterOptions. SizeHint lets the encoder pick context modeling and hasher parameters tuned for the actual payload size.

Status

The encoder and decoder are covered by compatibility tests and fuzzing, but the public API may still evolve before v1.0.0.

Compatibility

go-brrr implements Brotli RFC 7932 and is tested against the Brotli reference corpus. Encoded output is byte-compatible with the C reference implementation.

Compared to other Go brotli libraries

| | go-brrr | andybalholm | google/brotli/go/brotli | cbrotli |
|---|---|---|---|---|
| Pure Go (no cgo) | ✓ | ✓ | ✓ | ✗ |
| Encoder | ✓ | ✓ | ✗ | ✓ |
| Decoder | ✓ | ✓ | ✓ | ✓ |
| Compound dictionaries (encode) | ✓ | ✗ | n/a | ✗ |
| Compound dictionaries (decode) | ✓ | ✗ | ✗ | ✗ |
| LGWin tuning | ✓ | ✓ | n/a | ✓ |
| SizeHint | ✓ | ✗ | n/a | ✗ |
| Writer Reset | ✓ | ✓ | n/a | ✗ |
| Reader Reset | ✓ | ✓ | ✗ | ✗ |

If you're using andybalholm/brotli, go-brrr is a near drop-in upgrade with higher throughput on both compression and decompression, plus compound-dictionary and SizeHint support. If you're using cbrotli, go-brrr drops the cgo dependency, adds multi-chunk compound dictionaries, and exposes poolable Writer/Reader instances. cbrotli has no Reset, so each stream allocates a fresh encoder/decoder state, which is noticeable on many-small-file workloads.

Install

go get github.com/molecule-man/go-brrr
import "github.com/molecule-man/go-brrr"

The import path is github.com/molecule-man/go-brrr; the package name is brrr.

Examples

Compression
func Example_compress() {
	input := []byte("Hello, brotli!")

	var compressed bytes.Buffer
	w, err := brrr.NewWriter(&compressed, 6)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := w.Write(input); err != nil {
		log.Fatal(err)
	}
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}
}

More examples are available in example_test.go and the Go package docs: round-trip compression and decompression, one-shot decompression, reusing writers and readers, pooling, and compound dictionaries.

When to use go-brrr

The best use case for brotli is static asset compression - CSS, JS, HTML, fonts, WASM - where you compress once at build time and serve the result millions of times. Use quality 11 for this: speed doesn't matter because you pay the cost once, and brotli q11 delivers ratios that neither gzip nor zstd can match. Every browser shipped since 2016 supports Content-Encoding: br.

For on-the-fly compression, brotli q5–6 is a strong choice if you're already using zstd at its highest level: q5 is often faster with a better ratio, and q6 is only slightly slower with an even better ratio. At lower compression levels, zstd is significantly faster - if throughput is your priority and you don't need the best ratio, zstd is the better tool for the job.

If you compress or decompress repeatedly (e.g. per request in a webserver), keep *brrr.Writer and *brrr.Reader instances in sync.Pools and Reset each one to the next stream rather than allocating new instances each time. See the Pool example in the package docs.

Implementation notes

go-brrr is optimized for throughput. Some hot paths intentionally use larger functions, duplicated loops, and specialized code where benchmarks showed measurable wins. These choices stay local to performance-sensitive encoder and decoder internals; public APIs stay small and conventional.

Acknowledgments

This library is a port of the Brotli reference implementation by the Brotli Authors, licensed under the MIT License.

Compression Speed vs Ratio

All benchmarks were taken on the following setup, with turbo boost and other frequency-scaling noise sources disabled via denoise-amd.sh:

goos: linux
goarch: amd64
cpu: AMD Ryzen 5 7535HS with Radeon Graphics

Compared against klauspost/compress zstd (pure Go) and stdlib gzip. Single CPU, no parallelism. These plots measure reused streaming encoders: the timed loop resets a warmed writer and discards compressed output, while ratio is measured from a warmup buffer.

(Plots: compression and decompression speed vs. ratio for three payloads: HTML 522KB, JS 187KB, JSON 58KB.)

Benchmarks

Compared against other Go brotli libraries. go-brrr is the base in all comparisons; smaller numbers are better.

  • andybalholm - github.com/andybalholm/brotli, pure Go encoder and decoder.
  • google-brotli - github.com/google/brotli/go/brotli, Google's official pure Go decoder, transpiled from the Java reference. Decompression only, no encoder.
  • cbrotli - github.com/google/brotli/go/cbrotli, Google's official cgo bindings to the C reference implementation. Including a cgo library in a pure Go comparison isn't apples-to-apples, but it provides a useful reference point: Google's C implementation as exposed through its Go bindings.

One-shot Compression

The table below measures end-to-end throughput through each package's public Go API for many independent brotli streams, not only the inner compression loop. Each payload is written as a complete stream with a fresh public writer instance so cbrotli, which has no resettable writer API, can be included.

go-brrr still benefits from internal reuse in that shape: encoder arenas, hashers, hash tables, and scratch buffers are kept reusable through reset paths and internal sync.Pools. That avoids repeated large allocations and zeroing, which matters for small and mid-size payloads. cbrotli uses the C reference encoder underneath, but each payload creates a new BrotliEncoderState through cbrotli.NewWriter and destroys it on Close, paying setup, teardown, cgo, and allocation costs for every stream.

Read these rows as repeated complete-stream compression through the Go APIs. They are not a claim that every pure-Go compression hot path is faster than the C implementation; the same table shows quality levels where cbrotli is faster.

go-brrr (sec/op) andybalholm (sec/op) cbrotli (sec/op)
CompressOneshot/q=0/payload=VariedPayloads 7.731m ± 1% 12.257m ± 0% +58.55% 6.833m ± 0% -11.62%
CompressOneshot/q=1/payload=VariedPayloads 10.65m ± 0% 20.23m ± 0% +89.93% 10.79m ± 0% +1.27%
CompressOneshot/q=2/payload=VariedPayloads 16.53m ± 0% 38.73m ± 2% +134.26% 17.79m ± 0% +7.59%
CompressOneshot/q=3/payload=VariedPayloads 18.14m ± 0% 43.69m ± 1% +140.88% 20.61m ± 0% +13.63%
CompressOneshot/q=4/payload=VariedPayloads 26.57m ± 0% 60.86m ± 1% +129.10% 29.93m ± 0% +12.66%
CompressOneshot/q=5/payload=VariedPayloads 39.83m ± 0% 79.93m ± 1% +100.70% 47.31m ± 0% +18.78%
CompressOneshot/q=6/payload=VariedPayloads 44.46m ± 0% 90.02m ± 1% +102.49% 54.96m ± 0% +23.62%
CompressOneshot/q=7/payload=VariedPayloads 53.99m ± 0% 127.20m ± 1% +135.59% 107.72m ± 0% +99.52%
CompressOneshot/q=8/payload=VariedPayloads 63.04m ± 0% 146.76m ± 1% +132.82% 82.13m ± 0% +30.28%
CompressOneshot/q=9/payload=VariedPayloads 86.94m ± 0% 212.10m ± 2% +143.98% 234.17m ± 0% +169.36%
CompressOneshot/q=10/payload=VariedPayloads 1236.4m ± 0% 1375.7m ± 1% +11.27% 862.3m ± 0% -30.25%
CompressOneshot/q=11/payload=VariedPayloads 3.081 ± 1% 3.441 ± 1% +11.67% 2.263 ± 0% -26.55%
geomean 57.51m 110.9m +92.75% 67.21m +16.87%

Streaming Decompression

Streaming uses brrr.NewReader + io.ReadAll; one-shot uses brrr.Decompress on a complete in-memory blob. cbrotli has no resettable reader API, so it is excluded from the streaming comparison.

go-brrr (sec/op) andybalholm (sec/op)
Decompress/q=4/payload=VariedPayloads 5.378m ± 0% 9.539m ± 0% +77.36%
Decompress/q=5/payload=VariedPayloads 5.302m ± 0% 9.143m ± 0% +72.43%
Decompress/q=6/payload=VariedPayloads 5.146m ± 0% 8.881m ± 0% +72.56%
Decompress/q=11/payload=VariedPayloads 5.621m ± 0% 8.959m ± 0% +59.37%
geomean 5.359m 9.127m +70.30%

One-shot Decompression
go-brrr (sec/op) andybalholm (sec/op) cbrotli (sec/op) google-brotli (sec/op)
DecompressOneshot/q=4/payload=VariedPayloads 5.458m ± 0% 10.042m ± 0% +84.01% 5.191m ± 2% -4.89% 10.595m ± 0% +94.13%
DecompressOneshot/q=5/payload=VariedPayloads 5.458m ± 1% 9.609m ± 0% +76.07% 5.022m ± 11% ~ 10.541m ± 0% +93.14%
DecompressOneshot/q=6/payload=VariedPayloads 5.329m ± 1% 9.384m ± 0% +76.11% 4.916m ± 4% -7.74% 10.240m ± 1% +92.17%
DecompressOneshot/q=11/payload=VariedPayloads 5.816m ± 1% 9.540m ± 0% +64.03% 6.981m ± 1% +20.03% crashed
geomean 5.512m 9.641m +74.91% 5.469m -0.78% 10.46m +93.14%

The VariedPayloads benchmark rotates through a heterogeneous mix of files, guarding against benchmark-shaped optimizations - wins that only show up when the same input is fed back-to-back should not move these rows. Payloads span small JSON API responses, mid-size HTML and JS bundles, and larger English prose, drawn from the Brotli reference test corpus and the local testdata/ directory.

File Size Source
github_events_2k.json 2.2 KB testdata
github_events_5k.json 5.2 KB testdata
github_events_8k.json 8.3 KB testdata
asyoulik.txt 122 KB brotli-ref
alice29.txt 149 KB brotli-ref
gh_172KB.html 167 KB testdata
reactcore_187KB.js 182 KB testdata
lcet10.txt 417 KB brotli-ref
plrabn12.txt 471 KB brotli-ref

Documentation

Overview

Package brrr implements the Brotli compressed data format (RFC 7932).

Brotli is a lossless compression algorithm that typically produces better ratios than gzip and zstd at the cost of slower compression. Decompression is fast at every quality level. Every major browser has accepted Content-Encoding: br since 2016, which makes brotli a good match for static web asset pipelines.

Basic usage

Streaming compression through an io.Writer:

var buf bytes.Buffer
w, err := brrr.NewWriter(&buf, 6)
if err != nil {
	log.Fatal(err)
}
if _, err := w.Write(data); err != nil {
	log.Fatal(err)
}
if err := w.Close(); err != nil {
	log.Fatal(err)
}

Streaming decompression through an io.Reader:

r := brrr.NewReader(src)
if _, err := io.Copy(dst, r); err != nil {
	log.Fatal(err)
}

One-shot decompression of a complete in-memory blob:

out, err := brrr.Decompress(compressed)

Key types

  • Writer implements io.WriteCloser and compresses data written to an underlying io.Writer. Close must be called to finalize the brotli stream.
  • Reader implements io.Reader and decompresses a brotli stream read from an underlying io.Reader.
  • Decompress is a convenience function for one-shot decompression of a complete in-memory blob.

Writer and Reader both expose Reset methods, so the same instance can be reused across payloads without reallocating per-call buffers. Both also accept compound dictionaries — useful when the payloads share content with a known corpus. The encoder takes pre-built PreparedDictionary values via [WriterOptions.Dictionaries], so the hash table is built once and may be shared across many Writers and goroutines. The decoder takes raw dictionary bytes via [ReaderOptions.Dictionaries].

Choosing a quality level

Quality controls the ratio/speed trade-off on a scale of 0 to 11:

  • Quality 0–1 use fast one-pass and two-pass encoders optimized for throughput.
  • Quality 2–9 use a streaming encoder that spends progressively more work per byte for better ratios.
  • Quality 10–11 use a Zopfli-style optimal-parsing encoder that produces the best ratios but is orders of magnitude slower than the lower levels.

The canonical use case for brotli is static asset compression — CSS, JS, HTML, fonts, WASM — where the data is compressed once at build time and served many times; quality 11 is the right choice there. For on-the-fly compression, quality 5–6 gives a good balance of speed and ratio.

Compatibility

Output is byte-compatible with the reference implementation at https://github.com/google/brotli. Any conforming brotli decoder can read streams produced by this package, and this package can decode any valid brotli stream.

Example (CompoundDictionary)
// A compound dictionary supplies extra reference data that both the
// encoder and decoder can use for backward references. This is useful
// when compressing data that shares content with a known corpus.
dict := []byte(strings.Repeat("common dictionary content that appears in many documents. ", 50))
input := []byte("This document references common dictionary content that appears in many documents. " +
	"It benefits from the shared dictionary because repeated phrases compress better.")

// Build the encoder-side hash table once. The same *PreparedDictionary
// can be shared across many Writers, including across goroutines.
pd, err := brrr.PrepareDictionary(dict)
if err != nil {
	log.Fatal(err)
}

// Compress with compound dictionary.
var compressed bytes.Buffer
w, err := brrr.NewWriterOptions(&compressed, 4, brrr.WriterOptions{
	Dictionaries: []*brrr.PreparedDictionary{pd},
})
if err != nil {
	log.Fatal(err)
}
if _, err := w.Write(input); err != nil {
	log.Fatal(err)
}
if err := w.Close(); err != nil {
	log.Fatal(err)
}

// Decompress with the same compound dictionary.
r, err := brrr.NewReaderOptions(&compressed, brrr.ReaderOptions{
	Dictionaries: [][]byte{dict},
})
if err != nil {
	log.Fatal(err)
}
result, err := io.ReadAll(r)
if err != nil {
	log.Fatal(err)
}
fmt.Println(string(result))
Output:
This document references common dictionary content that appears in many documents. It benefits from the shared dictionary because repeated phrases compress better.
Example (Compress)
input := []byte("Hello, brotli!")

var compressed bytes.Buffer
w, err := brrr.NewWriter(&compressed, 6)
if err != nil {
	log.Fatal(err)
}
if _, err := w.Write(input); err != nil {
	log.Fatal(err)
}
if err := w.Close(); err != nil {
	log.Fatal(err)
}
Example (Decompress)
// Compress some data first.
original := []byte("Decompress restores the original bytes from a brotli-compressed slice.")
var compressed bytes.Buffer
w, err := brrr.NewWriter(&compressed, 4)
if err != nil {
	log.Fatal(err)
}
if _, err = w.Write(original); err != nil {
	log.Fatal(err)
}
if err = w.Close(); err != nil {
	log.Fatal(err)
}

// One-shot decompression from a byte slice.
result, err := brrr.Decompress(compressed.Bytes())
if err != nil {
	log.Fatal(err)
}
fmt.Println(string(result))
Output:
Decompress restores the original bytes from a brotli-compressed slice.
Example (Pool)
// For repeated compression and decompression (e.g. per-request in an
// HTTP server, per-message in a stream processor), keep *brrr.Writer
// and *brrr.Reader instances in sync.Pools. Get, Reset, use, Put back.
// This avoids allocating encoder hash tables and decoder ring buffers
// each time.
writerPool := sync.Pool{
	New: func() any {
		w, err := brrr.NewWriter(io.Discard, 5)
		if err != nil {
			// NewWriter only fails for an invalid level, which is
			// static here.
			panic(err)
		}
		return w
	},
}
readerPool := sync.Pool{
	New: func() any { return brrr.NewReader(nil) },
}

compress := func(dst io.Writer, payload []byte) error {
	w := writerPool.Get().(*brrr.Writer)
	defer writerPool.Put(w)

	w.Reset(dst)
	if _, err := w.Write(payload); err != nil {
		return err
	}
	return w.Close()
}

decompress := func(src io.Reader) ([]byte, error) {
	r := readerPool.Get().(*brrr.Reader)
	defer readerPool.Put(r)

	r.Reset(src)
	return io.ReadAll(r)
}

payloads := []string{
	"First response body.",
	"Second response body.",
}
for _, p := range payloads {
	var buf bytes.Buffer
	if err := compress(&buf, []byte(p)); err != nil {
		log.Fatal(err)
	}
	out, err := decompress(&buf)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}
Output:
First response body.
Second response body.
Example (Reuse)
// Reset lets you reuse a Writer and Reader across multiple payloads,
// avoiding repeated allocations.
payloads := []string{
	"First payload: the quick brown fox jumps over the lazy dog.",
	"Second payload: pack my box with five dozen liquor jugs.",
}

var compressed bytes.Buffer
w, err := brrr.NewWriter(&compressed, 4)
if err != nil {
	log.Fatal(err)
}
r := brrr.NewReader(nil)

for _, payload := range payloads {
	// Compress
	compressed.Reset()
	w.Reset(&compressed)
	if _, err := w.Write([]byte(payload)); err != nil {
		log.Fatal(err)
	}
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}

	// Decompress
	r.Reset(&compressed)
	result, err := io.ReadAll(r)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(result))
}
Output:
First payload: the quick brown fox jumps over the lazy dog.
Second payload: pack my box with five dozen liquor jugs.
Example (Roundtrip)
// Compress
original := []byte("Hello, brotli! This is a round-trip compression example.")
var compressed bytes.Buffer
w, err := brrr.NewWriter(&compressed, 6)
if err != nil {
	log.Fatal(err)
}
if _, err := w.Write(original); err != nil {
	log.Fatal(err)
}
if err := w.Close(); err != nil {
	log.Fatal(err)
}

// Decompress
r := brrr.NewReader(&compressed)
decompressed, err := io.ReadAll(r)
if err != nil {
	log.Fatal(err)
}
fmt.Println(string(decompressed))
Output:
Hello, brotli! This is a round-trip compression example.


Constants

const (
	BestSpeed       = 0
	BestCompression = 11
)

Compression level constants.

Variables

This section is empty.

Functions

func Decompress

func Decompress(data []byte) ([]byte, error)

Decompress decodes the brotli-compressed data and returns the original bytes.

Types

type PreparedDictionary

type PreparedDictionary = encoder.PreparedDictionary

PreparedDictionary is a compound dictionary chunk built once and shared across many Writers. See [WriterOptions.Dictionaries].

func PrepareDictionary

func PrepareDictionary(data []byte) (*PreparedDictionary, error)

PrepareDictionary builds an immutable PreparedDictionary from the given source bytes, suitable for use as a compound dictionary chunk via [WriterOptions.Dictionaries]. The returned dictionary may be shared across any number of Writers and goroutines.

The returned dictionary keeps a reference to data; the caller must not mutate data while any Writer holding the dictionary is still in use.

Returns an error if data is empty.

type Reader

type Reader struct {
	// contains filtered or unexported fields
}

Reader decompresses brotli-compressed data from an underlying io.Reader.

func NewReader

func NewReader(src io.Reader) *Reader

NewReader returns a new Reader reading brotli-compressed data from src.

func NewReaderOptions

func NewReaderOptions(src io.Reader, opts ReaderOptions) (*Reader, error)

NewReaderOptions returns a new Reader reading brotli-compressed data from src with additional options. Compound dictionaries supplied via opts.Dictionaries must match those used by the encoder.

func (*Reader) Close

func (r *Reader) Close() error

Close releases resources held by the Reader.

func (*Reader) Read

func (r *Reader) Read(p []byte) (int, error)

Read decompresses data into p.

func (*Reader) Reset

func (r *Reader) Reset(src io.Reader)

Reset discards internal state and switches to reading from src. Compound dictionaries supplied via ReaderOptions are preserved.

type ReaderOptions

type ReaderOptions struct {
	// Dictionaries are compound dictionary chunks the decoder will use to
	// resolve backward references beyond the ring buffer. They must match
	// the dictionaries supplied to the encoder. Up to 15 chunks are allowed;
	// each must be non-empty. Dictionaries are preserved across Reset.
	Dictionaries [][]byte
}

ReaderOptions configures the brotli decoder.

type Writer

type Writer struct {
	// contains filtered or unexported fields
}

Writer compresses data into brotli format.

Callers must Close the Writer to finalize the brotli stream.

func NewWriter

func NewWriter(dst io.Writer, level int) (*Writer, error)

NewWriter returns a new Writer compressing data to dst at the given quality level. Supported levels are 0 (BestSpeed) through 11 (BestCompression).

func NewWriterOptions

func NewWriterOptions(dst io.Writer, level int, opts WriterOptions) (*Writer, error)

NewWriterOptions returns a new Writer compressing data to dst at the given quality level with additional tuning options. LGWin range is 10–24; 0 selects the default (22). Compound dictionaries supplied via opts.Dictionaries require level >= 2.

func (*Writer) Close

func (w *Writer) Close() error

Close flushes remaining data, finalizes the brotli stream by writing the final empty meta-block, and writes everything to the underlying writer. Close does not close the underlying writer.

func (*Writer) Flush

func (w *Writer) Flush() error

Flush compresses any buffered data and writes it to the underlying writer as one or more non-final meta-blocks. Flush does not finalize the brotli stream; call Close for that.

func (*Writer) Reset

func (w *Writer) Reset(dst io.Writer)

Reset discards internal state and switches to writing to dst. This permits reusing a Writer rather than allocating a new one. Compound dictionaries supplied via WriterOptions are preserved.

func (*Writer) Write

func (w *Writer) Write(p []byte) (int, error)

Write compresses p and writes it to the underlying writer. Data may be buffered internally; call Flush or Close to ensure all data is written.

type WriterOptions

type WriterOptions struct {
	// Dictionaries are compound dictionary chunks the encoder may reference
	// as backward distances beyond the ring buffer, useful when inputs share
	// content with a known corpus. Build each chunk once with
	// [PrepareDictionary] and share the resulting *PreparedDictionary across
	// any number of Writers. Up to 15 chunks are allowed and compound
	// dictionaries require compression level >= 2. Dictionaries are preserved
	// across Reset.
	Dictionaries []*PreparedDictionary

	// LGWin sets the base-2 logarithm of the sliding window size (10–24).
	// 0 selects the default (22).
	LGWin int

	// SizeHint is the expected total input size in bytes. When set, the
	// encoder uses it to make better decisions about context modeling and
	// hasher selection for large inputs. 0 means unknown; the encoder will
	// auto-estimate from the first Write call.
	//
	// SizeHint is advisory: a lower-than-actual value is safe in both
	// correctness and compression ratio. The encoder transparently
	// promotes its internal hasher to the unbounded variant if buffered
	// input crosses the size-tuned threshold, preserving the bucket state
	// that was learned under the small-hint dispatch.
	SizeHint uint
}

WriterOptions configures advanced tuning knobs for the brotli encoder. The compression level is passed positionally to NewWriter / NewWriterOptions.

Directories

Path Synopsis
cmd
genstaticdict command
Generates static dictionary lookup tables for the Brotli encoder.
internal
benchcache
Package benchcache stores precomputed compression outputs on disk so that benchmark and test setup does not have to re-run the (slow) C reference encoder or other expensive paths on every run.
core
Package core holds RFC 7932 spec constants, Huffman primitives, and lookup tables shared between the encoder and the decoder.
cref
Package cref wraps the C reference brotli encoder/decoder for use in tests.
creftest
Package creftest provides shared test helpers that wrap the C reference brotli encoder/decoder.
encoder
Package encoder hosts the brotli streaming/fast encoder implementations and shared codec primitives consumed by the root brrr package's decoder.
