Documentation ¶
Overview ¶
Package brrr implements the Brotli compressed data format (RFC 7932).
Brotli is a lossless compression algorithm that typically produces better ratios than gzip and zstd at the cost of slower compression; decompression is fast at every quality level. Every major browser has supported Content-Encoding: br since 2017, which makes brotli a good fit for static web asset pipelines.
Basic usage ¶
Streaming compression through an io.Writer:
var buf bytes.Buffer
w, err := brrr.NewWriter(&buf, 6)
if err != nil {
log.Fatal(err)
}
if _, err := w.Write(data); err != nil {
log.Fatal(err)
}
if err := w.Close(); err != nil {
log.Fatal(err)
}
Streaming decompression through an io.Reader:
r := brrr.NewReader(src)
if _, err := io.Copy(dst, r); err != nil {
log.Fatal(err)
}
One-shot decompression of a complete in-memory blob:
out, err := brrr.Decompress(compressed)
Key types ¶
- Writer implements io.WriteCloser and compresses data written to an underlying io.Writer. Close must be called to finalize the brotli stream.
- Reader implements io.Reader and decompresses a brotli stream read from an underlying io.Reader.
- Decompress is a convenience function for one-shot decompression of a complete in-memory blob.
Writer and Reader both expose Reset methods, so the same instance can be reused across payloads without reallocating per-call buffers. Both also accept compound dictionaries — useful when the payloads share content with a known corpus. The encoder takes pre-built PreparedDictionary values via [WriterOptions.Dictionaries], so the hash table is built once and may be shared across many Writers and goroutines. The decoder takes raw dictionary bytes via [ReaderOptions.Dictionaries].
Choosing a quality level ¶
Quality controls the ratio/speed trade-off on a scale of 0 to 11:
- Quality 0–1 use fast one-pass and two-pass encoders optimised for throughput.
- Quality 2–9 use a streaming encoder that spends progressively more work per byte for better ratios.
- Quality 10–11 use a Zopfli-style optimal-parsing encoder that produces the best ratios but is orders of magnitude slower than the lower levels.
The canonical use case for brotli is static asset compression — CSS, JS, HTML, fonts, WASM — where the data is compressed once at build time and served many times; quality 11 is the right choice there. For on-the-fly compression, quality 5–6 gives a good balance of speed and ratio.
Compatibility ¶
Output is byte-compatible with the reference implementation at https://github.com/google/brotli. Any conforming brotli decoder can read streams produced by this package, and this package can decode any valid brotli stream.
Example (CompoundDictionary) ¶
// A compound dictionary supplies extra reference data that both the
// encoder and decoder can use for backward references. This is useful
// when compressing data that shares content with a known corpus.
dict := []byte(strings.Repeat("common dictionary content that appears in many documents. ", 50))
input := []byte("This document references common dictionary content that appears in many documents. " +
"It benefits from the shared dictionary because repeated phrases compress better.")
// Build the encoder-side hash table once. The same *PreparedDictionary
// can be shared across many Writers, including across goroutines.
pd, err := brrr.PrepareDictionary(dict)
if err != nil {
log.Fatal(err)
}
// Compress with compound dictionary.
var compressed bytes.Buffer
w, err := brrr.NewWriterOptions(&compressed, 4, brrr.WriterOptions{
Dictionaries: []*brrr.PreparedDictionary{pd},
})
if err != nil {
log.Fatal(err)
}
if _, err := w.Write(input); err != nil {
log.Fatal(err)
}
if err := w.Close(); err != nil {
log.Fatal(err)
}
// Decompress with the same compound dictionary.
r, err := brrr.NewReaderOptions(&compressed, brrr.ReaderOptions{
Dictionaries: [][]byte{dict},
})
if err != nil {
log.Fatal(err)
}
result, err := io.ReadAll(r)
if err != nil {
log.Fatal(err)
}
fmt.Println(string(result))
Output: This document references common dictionary content that appears in many documents. It benefits from the shared dictionary because repeated phrases compress better.
Example (Compress) ¶
input := []byte("Hello, brotli!")
var compressed bytes.Buffer
w, err := brrr.NewWriter(&compressed, 6)
if err != nil {
log.Fatal(err)
}
if _, err := w.Write(input); err != nil {
log.Fatal(err)
}
if err := w.Close(); err != nil {
log.Fatal(err)
}
Example (Decompress) ¶
// Compress some data first.
original := []byte("Decompress restores the original bytes from a brotli-compressed slice.")
var compressed bytes.Buffer
w, err := brrr.NewWriter(&compressed, 4)
if err != nil {
log.Fatal(err)
}
if _, err = w.Write(original); err != nil {
log.Fatal(err)
}
if err = w.Close(); err != nil {
log.Fatal(err)
}
// One-shot decompression from a byte slice.
result, err := brrr.Decompress(compressed.Bytes())
if err != nil {
log.Fatal(err)
}
fmt.Println(string(result))
Output: Decompress restores the original bytes from a brotli-compressed slice.
Example (Pool) ¶
// For repeated compression and decompression (e.g. per-request in an
// HTTP server, per-message in a stream processor), keep *brrr.Writer
// and *brrr.Reader instances in sync.Pools. Get, Reset, use, Put back.
// This avoids allocating encoder hash tables and decoder ring buffers
// each time.
writerPool := sync.Pool{
New: func() any {
w, err := brrr.NewWriter(io.Discard, 5)
if err != nil {
// NewWriter only fails for an invalid level, which is
// static here.
panic(err)
}
return w
},
}
readerPool := sync.Pool{
New: func() any { return brrr.NewReader(nil) },
}
compress := func(dst io.Writer, payload []byte) error {
w := writerPool.Get().(*brrr.Writer)
defer writerPool.Put(w)
w.Reset(dst)
if _, err := w.Write(payload); err != nil {
return err
}
return w.Close()
}
decompress := func(src io.Reader) ([]byte, error) {
r := readerPool.Get().(*brrr.Reader)
defer readerPool.Put(r)
r.Reset(src)
return io.ReadAll(r)
}
payloads := []string{
"First response body.",
"Second response body.",
}
for _, p := range payloads {
var buf bytes.Buffer
if err := compress(&buf, []byte(p)); err != nil {
log.Fatal(err)
}
out, err := decompress(&buf)
if err != nil {
log.Fatal(err)
}
fmt.Println(string(out))
}
Output:
First response body.
Second response body.
Example (Reuse) ¶
// Reset lets you reuse a Writer and Reader across multiple payloads,
// avoiding repeated allocations.
payloads := []string{
"First payload: the quick brown fox jumps over the lazy dog.",
"Second payload: pack my box with five dozen liquor jugs.",
}
var compressed bytes.Buffer
w, err := brrr.NewWriter(&compressed, 4)
if err != nil {
log.Fatal(err)
}
r := brrr.NewReader(nil)
for _, payload := range payloads {
// Compress
compressed.Reset()
w.Reset(&compressed)
if _, err := w.Write([]byte(payload)); err != nil {
log.Fatal(err)
}
if err := w.Close(); err != nil {
log.Fatal(err)
}
// Decompress
r.Reset(&compressed)
result, err := io.ReadAll(r)
if err != nil {
log.Fatal(err)
}
fmt.Println(string(result))
}
Output:
First payload: the quick brown fox jumps over the lazy dog.
Second payload: pack my box with five dozen liquor jugs.
Example (Roundtrip) ¶
// Compress
original := []byte("Hello, brotli! This is a round-trip compression example.")
var compressed bytes.Buffer
w, err := brrr.NewWriter(&compressed, 6)
if err != nil {
log.Fatal(err)
}
if _, err := w.Write(original); err != nil {
log.Fatal(err)
}
if err := w.Close(); err != nil {
log.Fatal(err)
}
// Decompress
r := brrr.NewReader(&compressed)
decompressed, err := io.ReadAll(r)
if err != nil {
log.Fatal(err)
}
fmt.Println(string(decompressed))
Output: Hello, brotli! This is a round-trip compression example.
Index ¶
Examples ¶
Constants ¶
const (
	BestSpeed       = 0
	BestCompression = 11
)
Compression level constants.
Variables ¶
This section is empty.
Functions ¶
func Decompress ¶
func Decompress(data []byte) ([]byte, error)
Decompress decodes the brotli-compressed data and returns the original bytes.
Types ¶
type PreparedDictionary ¶
type PreparedDictionary = encoder.PreparedDictionary
PreparedDictionary is a compound dictionary chunk built once and shared across many Writers. See [WriterOptions.Dictionaries].
func PrepareDictionary ¶
func PrepareDictionary(data []byte) (*PreparedDictionary, error)
PrepareDictionary builds an immutable PreparedDictionary from the given source bytes, suitable for use as a compound dictionary chunk via [WriterOptions.Dictionaries]. The returned dictionary may be shared across any number of Writers and goroutines.
The returned dictionary keeps a reference to data; the caller must not mutate data while any Writer holding the dictionary is still in use.
Returns an error if data is empty.
type Reader ¶
type Reader struct {
// contains filtered or unexported fields
}
Reader decompresses brotli-compressed data from an underlying io.Reader.
func NewReaderOptions ¶
func NewReaderOptions(src io.Reader, opts ReaderOptions) (*Reader, error)
NewReaderOptions returns a new Reader reading brotli-compressed data from src with additional options. Compound dictionaries supplied via opts.Dictionaries must match those used by the encoder.
type ReaderOptions ¶
type ReaderOptions struct {
// Dictionaries are compound dictionary chunks the decoder will use to
// resolve backward references beyond the ring buffer. They must match
// the dictionaries supplied to the encoder. Up to 15 chunks are allowed;
// each must be non-empty. Dictionaries are preserved across Reset.
Dictionaries [][]byte
}
ReaderOptions configures the brotli decoder.
type Writer ¶
type Writer struct {
// contains filtered or unexported fields
}
Writer compresses data into brotli format.
Callers must Close the Writer to finalize the brotli stream.
func NewWriter ¶
func NewWriter(dst io.Writer, level int) (*Writer, error)
NewWriter returns a new Writer compressing data to dst at the given quality level. Supported levels are 0 (BestSpeed) through 11 (BestCompression).
func NewWriterOptions ¶
func NewWriterOptions(dst io.Writer, level int, opts WriterOptions) (*Writer, error)
NewWriterOptions returns a new Writer compressing data to dst at the given quality level with additional tuning options. LGWin range is 10–24; 0 selects the default (22). Compound dictionaries supplied via opts.Dictionaries require level >= 2.
func (*Writer) Close ¶
func (w *Writer) Close() error
Close flushes remaining data, finalizes the brotli stream by writing the final empty meta-block, and writes everything to the underlying writer. Close does not close the underlying writer.
func (*Writer) Flush ¶
func (w *Writer) Flush() error
Flush compresses any buffered data and writes it to the underlying writer as one or more non-final meta-blocks. Flush does not finalize the brotli stream; call Close for that.
type WriterOptions ¶
type WriterOptions struct {
// Dictionaries are compound dictionary chunks the encoder may reference
// as backward distances beyond the ring buffer, useful when inputs share
// content with a known corpus. Build each chunk once with
// [PrepareDictionary] and share the resulting *PreparedDictionary across
// any number of Writers. Up to 15 chunks are allowed and compound
// dictionaries require compression level >= 2. Dictionaries are preserved
// across Reset.
Dictionaries []*PreparedDictionary
// LGWin sets the base-2 logarithm of the sliding window size (10–24).
// 0 selects the default (22).
LGWin int
// SizeHint is the expected total input size in bytes. When set, the
// encoder uses it to make better decisions about context modeling and
// hasher selection for large inputs. 0 means unknown; the encoder will
// auto-estimate from the first Write call.
//
// SizeHint is advisory: a lower-than-actual value is safe in both
// correctness and compression ratio. The encoder transparently
// promotes its internal hasher to the unbounded variant if buffered
// input crosses the size-tuned threshold, preserving the bucket state
// that was learned under the small-hint dispatch.
SizeHint uint
}
WriterOptions configures advanced tuning knobs for the brotli encoder. The compression level is passed positionally to NewWriter / NewWriterOptions.
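A sketch combining the knobs (only names documented on this page are used; dst and input are placeholders, so this fragment is illustrative rather than runnable on its own):

```go
w, err := brrr.NewWriterOptions(dst, 9, brrr.WriterOptions{
	LGWin:    24,               // largest sliding window: better ratio, more memory
	SizeHint: uint(len(input)), // advisory; a low estimate is safe
})
```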
Source Files ¶
Directories ¶
| Path | Synopsis |
|---|---|
| cmd | |
| cmd/genstaticdict (command) | Generates static dictionary lookup tables for the Brotli encoder. |
| internal | |
| internal/benchcache | Package benchcache stores precomputed compression outputs on disk so that benchmark and test setup does not have to re-run the (slow) C reference encoder or other expensive paths on every run. |
| internal/core | Package core holds RFC 7932 spec constants, Huffman primitives, and lookup tables shared between the encoder and the decoder. |
| internal/cref | Package cref wraps the C reference brotli encoder/decoder for use in tests. |
| internal/creftest | Package creftest provides shared test helpers that wrap the C reference brotli encoder/decoder. |
| internal/encoder | Package encoder hosts the brotli streaming/fast encoder implementations and shared codec primitives consumed by the root brrr package's decoder. |