parco

v0.13.0
Published: Feb 7, 2026 License: MIT Imports: 12 Imported by: 0


Parco

Parco is a high-performance binary serializer and deserializer for Go: no reflection, highly extensible, focused on speed and usability through generics, with zero external dependencies for the core API.

Performance: 2-3x faster than JSON, payloads up to 40% smaller than MessagePack, an order of magnitude fewer allocations.

A note on honesty: These numbers might make you suspicious, and you should be. Parco achieves this by trading flexibility for speed: it uses a known schema approach (like Protocol Buffers) rather than self-describing formats (like JSON). Both sender and receiver must agree on the data structure beforehand. This isn't a sneaky benchmark trick--it's a fundamental design decision for specific use cases. If you need to inspect unknown JSON from the wild, Parco won't help you. If you control both ends of the wire and want it fast, keep reading. See the benchmarks section for an honest comparison and guidance on when to use what.

Many packages rely on struct tags and reflection, which is convenient until you need custom types or care about performance. Parco uses a builder API with getters and setters: you describe how each field is read and written, and the library generates efficient, allocation-conscious code. Think of it as Go code that writes Go code, except you're the one writing it.
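The size difference between self-describing and known-schema encodings is easy to see with the standard library alone. The sketch below involves no Parco at all: it encodes the same two-field struct with encoding/json and with encoding/binary. The binary form carries no field names or punctuation, only the eight bytes both sides already agreed on:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"encoding/json"
	"fmt"
)

// Point is a toy model used only for this size comparison.
type Point struct {
	X, Y int32
}

// encodeJSON produces the self-describing form: field names and syntax
// travel with every message.
func encodeJSON(p Point) []byte {
	b, _ := json.Marshal(p)
	return b
}

// encodeBinary produces the known-schema form: both sides agree the payload
// is exactly two little-endian int32s, so only 8 bytes travel.
func encodeBinary(p Point) []byte {
	var buf bytes.Buffer
	binary.Write(&buf, binary.LittleEndian, p.X)
	binary.Write(&buf, binary.LittleEndian, p.Y)
	return buf.Bytes()
}

func main() {
	p := Point{X: 1, Y: 2}
	fmt.Println(len(encodeJSON(p)), len(encodeBinary(p))) // JSON is larger than the 8-byte binary form
}
```

The gap widens as structs grow, since every JSON message repeats every field name.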

Features

  • No reflection -- Types and fields are described via a fluent builder and generics.
  • Composable -- Use primitives, slices, maps, structs, and optional types; nest and reuse builders.
  • Multi-model on the same stream -- Register multiple models with a type ID and parse/compile from a single reader/writer.
  • Control over layout -- Choose byte order (e.g. binary.LittleEndian) for integers and floats; pick header types for length-prefixed data (e.g. UInt8Header() for up to 255 elements).
  • Reusable types -- Build a type once (e.g. parco.Int(binary.LittleEndian)), then use it in structs, slices, maps, or standalone.
  • Optional factory -- Use ObjectFactory[T]() for default zero values or PooledFactory for pooling instances when parsing.
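To make the pooled-factory idea concrete, here is a minimal stdlib sketch of instance reuse with sync.Pool. The names (pooledFactory, Message) are illustrative and not Parco's internals; the point is that a parser can ask a factory for a reset instance instead of allocating a fresh one per parse:

```go
package main

import (
	"fmt"
	"sync"
)

// Message is a toy model for illustration only.
type Message struct {
	Body []byte
}

// pooledFactory mimics the idea behind a PooledFactory: hand parsed-into
// instances back for reuse instead of allocating one per Parse call.
type pooledFactory struct {
	pool sync.Pool
}

func newPooledFactory() *pooledFactory {
	return &pooledFactory{pool: sync.Pool{New: func() any { return new(Message) }}}
}

// Get returns a reset instance ready to be filled by a parser.
func (f *pooledFactory) Get() *Message {
	m := f.pool.Get().(*Message)
	m.Body = m.Body[:0] // reset state, keep capacity
	return m
}

// Put hands the instance back for reuse once the caller is done with it.
func (f *pooledFactory) Put(m *Message) { f.pool.Put(m) }

func main() {
	f := newPooledFactory()
	m := f.Get()
	m.Body = append(m.Body, "hello"...)
	f.Put(m)
	fmt.Println(len(f.Get().Body)) // 0: reused instance comes back reset
}
```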

When to use Parco?

Use Parco when:

  • You control both client and server (shared schema)
  • Performance and bandwidth are critical
  • You want compile-time safety (no reflection)
  • Working with IoT, microservices, or game servers
  • Need predictable memory usage (no hidden allocations)

Consider alternatives when:

  • Schema changes frequently without versioning
  • Need human-readable debug output
  • Third-party integrations require JSON
  • Dynamic/exploratory data analysis

In other words: if you're building a public REST API that strangers will consume, Parco is probably overkill (and arguably the wrong tool). If you're building a game server where every millisecond counts and you control the client, Parco makes sense. Choose your battles.

Why Parco instead of Protocol Buffers or Cap'n Proto?

Fair question. Protobuf and Cap'n Proto are mature, battle-tested, and widely used. Here's why you might still choose Parco:

When Parco makes sense:

Pure Go projects: If your entire stack is Go and you don't need cross-language support, Protobuf's complexity (.proto files, protoc compiler, generated code, versioning generated files) is unnecessary overhead. Parco is just Go code.

No codegen pipeline: Protobuf requires writing .proto files, running protoc, managing generated code, and keeping it in sync with your actual code. Parco's builder API lives directly in your Go code. Change your schema, change your builder, done.

Learning curve: Proto syntax is another DSL to learn. Parco is just Go--if you know Go, you know Parco. No new syntax, no protoc flags, no plugin versions to juggle.

Full Go flexibility: Protobuf limits you to what the Proto language supports. Want custom serialization logic? Conditional fields? Complex validation inline? With Parco, you have the full power of Go (closures, interfaces, generics) directly in your builder.

Zero dependencies (core API): Parco's core has zero external dependencies. Protobuf requires the protobuf runtime library, and Cap'n Proto needs its own runtime. This matters for small binaries, embedded systems, or when you just want fewer moving parts.

Tighter control: With Protobuf, you're at the mercy of what the code generator produces. With Parco, you write exactly what you want. Want to serialize a field conditionally based on runtime state? Just write the code.

When Protobuf/Cap'n Proto make more sense:

Cross-language support: If you need to talk to Rust, Python, C++, or any non-Go language, Protobuf is the obvious choice. Parco is Go-only and will stay that way.

Industry standard: Protobuf is used by Google, gRPC, and thousands of companies. It's proven at massive scale. Parco is a hobby project by comparison.

Tooling ecosystem: Protobuf has validators, linters, documentation generators, schema registries, and IDE plugins. Parco has... this README.

Schema evolution guarantees: Protobuf's backward/forward compatibility rules are well-documented and enforced by the toolchain. With Parco, you're responsible for not breaking things.

Team size: If you have multiple teams working on different services in different languages, Protobuf's schema-as-contract model makes sense. If it's just you or a small Go team, Parco's simpler.

Realistic comparison:

Think of Parco as the "write SQL queries directly" approach, while Protobuf is the "use an ORM" approach. ORMs (Protobuf) give you safety, abstraction, and cross-database (cross-language) support. Raw SQL (Parco) gives you control, performance, and simplicity when you don't need the abstraction.

Neither is universally better. Choose based on your constraints:

  • Pure Go + want simplicity → Parco
  • Multi-language + need tooling → Protobuf
  • Maximum performance + control → Cap'n Proto (zero-copy) or Parco
  • Industry standard + large team → Protobuf

Quick decision matrix:

Your Situation Recommendation
Pure Go, small team, want simplicity Parco
Multi-language microservices Protocol Buffers
Go + some Python/Rust/JS Protocol Buffers
Maximum performance, Go-only, willing to manage schema manually Parco
Need industry standard with proven tooling Protocol Buffers
Want zero-copy, can handle complexity FlatBuffers or Cap'n Proto
Public API, need flexibility JSON or MessagePack
Debugging/logging/config files JSON or YAML

Bottom line: If you're already using Protobuf successfully, there's probably no reason to switch. If you're starting fresh on a pure Go project and find Protobuf's machinery overkill, give Parco a look.

Installation

go get github.com/sonirico/parco

Requires Go 1.19+.

Concepts

Concept Description
Type A Type[T] knows how to Parse and Compile values of type T (e.g. UInt8(), Int(binary.LittleEndian)).
Builder Builder[T](factory) defines the layout of a struct (or model) by adding fields. Each field has a getter (compile) and setter (parse).
Parser Produced by the builder; reads bytes from an io.Reader and fills a T (using the factory to create the value).
Compiler Produced by the builder; writes a T to an io.Writer.
Factory Factory[T] provides new instances when parsing (e.g. ObjectFactory[T]() returns zero value; you can use a pool for reuse).
Header For variable-length data (slices, maps, varchar), a “header” type encodes the length (e.g. UInt8Header() for 0–255, UInt16HeaderLE() for 0–65535).

You typically build once, then reuse the same parser and compiler for many reads/writes.

Quick start

package main

import (
	"bytes"
	"encoding/binary"
	"log"

	"github.com/sonirico/parco"
)

func main() {
	type Point struct {
		X, Y int32
	}

	factory := parco.ObjectFactory[Point]()
	parser, compiler := parco.Builder[Point](factory).
		Int32(binary.LittleEndian,
			func(p *Point) int32 { return p.X },
			func(p *Point, x int32) { p.X = x },
		).
		Int32(binary.LittleEndian,
			func(p *Point) int32 { return p.Y },
			func(p *Point, y int32) { p.Y = y },
		).
		Parco()

	value := Point{X: 1, Y: 2}
	var buf bytes.Buffer
	_ = compiler.Compile(value, &buf)

	parsed, _ := parser.Parse(&buf)
	log.Println(parsed) // {1 2}
}

Usage

Parser & compiler (single model)

Define a model and use the builder to add fields in order. Call Parco() to obtain the parser and compiler.

package main

import (
  "bytes"
  "encoding/binary"
  "encoding/json"
  "log"
  "reflect"
  "time"

  "github.com/sonirico/parco"
)

type (
  Animal struct {
    Age    uint8
    Specie string
  }

  Example struct {
    Greet              string
    LifeSense          uint8
    Friends            []string
    Grades             map[string]uint8
    EvenOrOdd          bool
    Pet                Animal
    Pointer            *int
    Flags              [5]bool
    Balance            float32
    MorePreciseBalance float64
    CreatedAt          time.Time
  }
)

func (e Example) String() string {
  bts, _ := json.MarshalIndent(e, "", "\t")
  return string(bts)
}

func main() {
  animalBuilder := parco.Builder[Animal](parco.ObjectFactory[Animal]()).
    SmallVarchar(
      func(a *Animal) string { return a.Specie },
      func(a *Animal, specie string) { a.Specie = specie },
    ).
    UInt8(
      func(a *Animal) uint8 { return a.Age },
      func(a *Animal, age uint8) { a.Age = age },
    )

  exampleFactory := parco.ObjectFactory[Example]()

  exampleParser, exampleCompiler := parco.Builder[Example](exampleFactory).
    SmallVarchar(
      func(e *Example) string { return e.Greet },
      func(e *Example, s string) { e.Greet = s },
    ).
    UInt8(
      func(e *Example) uint8 { return e.LifeSense },
      func(e *Example, lifeSense uint8) { e.LifeSense = lifeSense },
    ).
    Map(
      parco.MapField[Example, string, uint8](
        parco.UInt8Header(),
        parco.SmallVarchar(),
        parco.UInt8(),
        func(s *Example, grades map[string]uint8) { s.Grades = grades },
        func(s *Example) map[string]uint8 { return s.Grades },
      ),
    ).
    Slice(
      parco.SliceField[Example, string](
        parco.UInt8Header(),  // up to 255 items
        parco.SmallVarchar(), // each item's type
        func(e *Example, friends parco.SliceView[string]) { e.Friends = friends },
        func(e *Example) parco.SliceView[string] { return e.Friends },
      ),
    ).
    Bool(
      func(e *Example) bool { return e.EvenOrOdd },
      func(e *Example, evenOrOdd bool) { e.EvenOrOdd = evenOrOdd },
    ).
    Struct(
      parco.StructField[Example, Animal](
        func(e *Example) Animal { return e.Pet },
        func(e *Example, a Animal) { e.Pet = a },
        animalBuilder,
      ),
    ).
    Option(
      parco.OptionField[Example, int](
        parco.Int(binary.LittleEndian),
        func(e *Example, value *int) { e.Pointer = value },
        func(e *Example) *int { return e.Pointer },
      ),
    ).
    Array(
      parco.ArrayField[Example, bool](
        5,
        parco.Bool(),
        func(e *Example, flags parco.SliceView[bool]) {
          copy(e.Flags[:], flags)
        },
        func(e *Example) parco.SliceView[bool] {
          return e.Flags[:]
        },
      ),
    ).
    Float32(
      binary.LittleEndian,
      func(e *Example) float32 {
        return e.Balance
      },
      func(e *Example, balance float32) {
        e.Balance = balance
      },
    ).
    Float64(
      binary.LittleEndian,
      func(e *Example) float64 {
        return e.MorePreciseBalance
      },
      func(e *Example, balance float64) {
        e.MorePreciseBalance = balance
      },
    ).
    TimeUTC(
      func(e *Example) time.Time {
        return e.CreatedAt
      },
      func(e *Example, createdAt time.Time) {
        e.CreatedAt = createdAt
      },
    ).
    Parco()

  ex := Example{
    Greet:              "hey",
    LifeSense:          42,
    Grades:             map[string]uint8{"math": 5, "english": 6},
    Friends:            []string{"@boliri", "@danirod", "@enrigles", "@f3r"},
    EvenOrOdd:          true,
    Pet:                Animal{Age: 3, Specie: "cat"},
    Pointer:            parco.Ptr(73),
    Flags:              [5]bool{true, false, false, true, false},
    Balance:            234.987,
    MorePreciseBalance: 1234243.5678,
    CreatedAt:          time.Now().UTC(),
  }

  output := bytes.NewBuffer(nil)
  if err := exampleCompiler.Compile(ex, output); err != nil {
    log.Fatal(err)
  }

  log.Println(parco.FormatBytes(output.Bytes()))

  parsed, err := exampleParser.ParseBytes(output.Bytes())

  if err != nil {
    log.Fatal(err)
  }

  log.Println(parsed.String())

  if !reflect.DeepEqual(ex, parsed) {
    panic("not equals")
  }
}
Single types

You can use primitive and composite types without a full struct builder.

Integer
package main

import (
	"bytes"
	"encoding/binary"
	"log"
	"math"

	"github.com/sonirico/parco"
)

func main() {
	intType := parco.Int(binary.LittleEndian)
	buf := bytes.NewBuffer(nil)
	_ = intType.Compile(math.MaxInt, buf)
	n, _ := intType.Parse(buf)
	log.Println(n == math.MaxInt) // true
}
Slice of structs
package main

import (
  "bytes"
  "log"

  "github.com/sonirico/parco"
)

type (
  Animal struct {
    Age    uint8
    Specie string
  }
)

func main() {
  animalBuilder := parco.Builder[Animal](parco.ObjectFactory[Animal]()).
    SmallVarchar(
      func(a *Animal) string { return a.Specie },
      func(a *Animal, specie string) { a.Specie = specie },
    ).
    UInt8(
      func(a *Animal) uint8 { return a.Age },
      func(a *Animal, age uint8) { a.Age = age },
    )

  animalsType := parco.Slice[Animal](
    parco.UInt8Header(), // length prefix: up to 255 items
    parco.Struct[Animal](animalBuilder),
  )

  payload := []Animal{
    {Specie: "cat", Age: 32},
    {Specie: "dog", Age: 12},
  }

  buf := bytes.NewBuffer(nil)
  _ = animalsType.Compile(parco.SliceView[Animal](payload), buf)
  log.Println(buf.Bytes())

  res, _ := animalsType.Parse(buf)
  log.Println(res.Len())
  _ = res.Range(func(animal Animal) error {
    log.Println(animal)
    return nil
  })
}
Multi-model (parsers & compilers)

Serialize and deserialize different models on the same stream by registering them with a type ID. The first field written/read is the model ID (e.g. from UInt8Header()), then the payload.

Models whose type implements ParcoID() (returning the ID type) can use Compile(item, w); otherwise, use CompileAny(id, item, w).

package main

import (
	"bytes"
	"encoding/binary"
	"log"

	"github.com/sonirico/parco"
)

type (
  Animal struct {
    Age    uint8
    Specie string
  }

  Flat struct {
    Price   float32
    Address string
  }
)

const (
  AnimalType int = 0
  FlatType       = 1
)

func (a Animal) ParcoID() int { return AnimalType }
func (a Flat) ParcoID() int   { return FlatType }

func main() {
  animalBuilder := parco.Builder[Animal](parco.ObjectFactory[Animal]()).
    SmallVarchar(
      func(a *Animal) string { return a.Specie },
      func(a *Animal, specie string) { a.Specie = specie },
    ).
    UInt8(
      func(a *Animal) uint8 { return a.Age },
      func(a *Animal, age uint8) { a.Age = age },
    )

  flatBuilder := parco.Builder[Flat](parco.ObjectFactory[Flat]()).
    Float32(
      binary.LittleEndian,
      func(f *Flat) float32 { return f.Price },
      func(f *Flat, price float32) { f.Price = price },
    ).
    SmallVarchar(
      func(f *Flat) string { return f.Address },
      func(f *Flat, address string) { f.Address = address },
    )

  parCo := parco.MultiBuilder(parco.UInt8Header()). // up to 255 model types
    MustRegister(AnimalType, animalBuilder).
    MustRegister(FlatType, flatBuilder)

  buf := bytes.NewBuffer(nil)

  // Compile when model implements ParcoID():
  _ = parCo.Compile(Animal{Age: 10, Specie: "monkeys"}, buf)
  _ = parCo.Compile(Flat{Price: 42, Address: "Plaza mayor"}, buf)

  // Or specify ID explicitly:
  _ = parCo.CompileAny(AnimalType, Animal{Age: 7, Specie: "felix catus"}, buf)

  id, something, _ := parCo.Parse(buf)
  Print(id, something)
  id, something, _ = parCo.Parse(buf)
  Print(id, something)
  id, something, _ = parCo.Parse(buf)
  Print(id, something)
}

func Print(id int, x any) {
  switch id {
  case AnimalType:
    log.Println("animal:", x.(Animal))
  case FlatType:
    log.Println("flat:", x.(Flat))
  }
}

Supported types

Field Size
byte 1
int8 1
uint8 1
int16 2
uint16 2
int32 4
uint32 4
int64 8
uint64 8
float32 4
float64 8
int 4/8 (platform)
bool 1
small varchar dyn (up to 255)
varchar dyn (up to 65535)
text dyn (up to max uint32 chars)
long text dyn (up to max uint64 chars)
string dyn
bytes (blob) dyn
map variable
slice variable
array (fixed) length × element size
struct sum of field sizes
time.Time 8 (+ small varchar if TZ aware)
optional[T] (pointer) 1 + inner size

For length-prefixed types (slices, maps, varchars), use a header type for the length: UInt8Header(), UInt16HeaderLE(), Int32LEHeader(), etc., depending on the maximum count you need.

Error handling

The package uses sentinel errors and a few custom types you can check with errors.Is / type assertions:

Error Description
parco.ErrNotIntegerType Integer type assertion failed.
parco.ErrOverflow Value exceeds the range of the header or type.
parco.ErrCannotRead Fewer bytes read than required.
parco.ErrCannotWrite Fewer bytes written than required.
parco.ErrAlreadyRegistered Multi-model: type ID already registered.
parco.ErrUnknownType Multi-model: unknown type ID when parsing, or CompileAny with wrong type.
parco.ErrUnSufficientBytes Not enough bytes (want/have).
parco.ErrFieldNotFound Field lookup failed.
parco.ErrTypeAssertion Type assertion failed (expected/actual).
parco.ErrCompile Generic compile-time error (reason string).

Examples

Run the examples from the repo root:

Example Description
examples/builder Full builder API: parser + compiler from a single builder.
examples/compiler Compiler-only usage (no parser).
examples/parser Parser-only usage (no compiler).
examples/registry Multi-model: several types on the same stream with MultiBuilder.
examples/registry_singleton Global multi-model registry.
For example: go run ./examples/registry/

Benchmarks

Run benchmarks:

# Using just
just bench

# Direct go test
go test -bench=. -benchmem
Performance Comparison

Parco vs JSON vs MessagePack on an Intel i7-8750H @ 2.20GHz:

| Format | Small (91 bytes) | Medium (742 bytes) | Large (8123 bytes) |
|---|---|---|---|
| Parco | 2373 ns/op, 91 B, 3 allocs | 15727 ns/op, 742 B, 3 allocs | 154303 ns/op, 8123 B, 3 allocs |
| JSON | 2972 ns/op (1.25x slower), 269 B (2.96x larger), 23 allocs (7.6x more) | 28305 ns/op (1.8x slower), 1681 B (2.27x larger), 203 allocs (67x more) | 319851 ns/op (2.07x slower), 16637 B (2.05x larger), 2003 allocs (667x more) |
| MessagePack | 3020 ns/op (1.27x slower), 155 B (1.70x larger), 25 allocs (8.3x more) | 20051 ns/op (1.27x slower), 991 B (1.34x larger), 207 allocs (69x more) | 206755 ns/op (1.34x slower), 10171 B (1.25x larger), 2007 allocs (669x more) |

Key Takeaways:

  • Consistently faster: 25-100% faster than alternatives
  • Minimal allocations: Only 3 allocations regardless of payload size
  • Compact payloads: Up to 3x smaller than JSON
  • Scales linearly: Performance degrades predictably with size
Why is Parco faster?

Design Philosophy: Known Schema vs Self-Describing

Aspect Parco JSON/MessagePack
Schema Known at compile time Embedded in each message
Field names Not transmitted Included in payload
Type info Pre-defined Encoded per value
Use case Client-server with shared schema Dynamic/exploratory data
Similar to Protocol Buffers, FlatBuffers Self-describing formats

This is not a "trick" - it's a deliberate trade-off:

  • If sender and receiver know the schema → Parco saves bandwidth/CPU
  • If schema is unknown or changes frequently → JSON/MessagePack are more flexible

Think of it like HTTP/1.1 (text headers) vs HTTP/2 (binary with HPACK compression) - both valid, different use cases.

Understanding These Benchmarks (or: Why You Should Be Skeptical)

These results might seem "too good to be true"--and in some sense, they are. Let me explain what Parco doesn't do, by design, which is why it's faster:

What Parco Does NOT Include:

No field names in the wire format JSON sends {"name":"Alice","age":30} with the actual strings "name" and "age" embedded. Parco just sends the values. Sender and receiver must agree on field order beforehand, or things will break spectacularly.

No type information per value JSON includes type metadata for every value (string, number, array, etc.). Parco assumes you defined the types at compile time and doesn't waste bytes transmitting them.

No schema discovery You can curl a random JSON endpoint and understand the structure. With Parco, you need the schema definition or you're just staring at binary gibberish.

No reflection at runtime JSON uses reflection to inspect structs (slow but flexible). Parco uses a builder API that generates direct code paths at compile time (fast but rigid).
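The difference between the two access paths can be sketched with the standard library alone: both read the same field, but only the reflection path pays for a runtime name lookup and boxing through reflect.Value:

```go
package main

import (
	"fmt"
	"reflect"
)

// User is a toy struct for the comparison.
type User struct {
	Name string
}

// viaReflect reads the field the way reflection-based encoders do: a runtime
// lookup by field name through reflect.Value.
func viaReflect(u User) string {
	return reflect.ValueOf(u).FieldByName("Name").String()
}

// getName is the builder-style getter: a direct field load the compiler can
// inline, with no runtime type inspection.
func getName(u *User) string { return u.Name }

func main() {
	u := User{Name: "Alice"}
	fmt.Println(viaReflect(u) == getName(&u)) // true
}
```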

What This Means:
// JSON payload example (self-describing):
{
  "user": {
    "id": 42,
    "name": "Alice",
    "active": true
  }
}
// Size: ~60 bytes, includes all metadata

// Parco payload (binary, schema known):
[42]["Alice"][1]
// Size: ~8 bytes, just the raw data

When Parco Makes Sense:

Microservices: Both ends control the schema, deploy together.

Game networking: Performance critical, schema updates via patches. Nobody wants to send field names when you're already fighting latency.

IoT devices: Bandwidth constrained, firmware includes schema. Battery life matters.

High-frequency trading: Every microsecond matters, and you're probably already using FIX or worse.

Internal APIs: Teams coordinate schema changes. If breaking changes require a meeting, you have bigger problems than serialization.

When JSON/MessagePack Make More Sense:

Public REST APIs: Unknown clients, self-documentation expected. You want developers to curl | jq your API and understand it.

Configuration files: Human readable, easy to edit. Good luck hand-editing binary files.

Debugging/logging: Need to inspect data without specialized tools. cat log.json beats xxd log.bin | head -c 1000.

Polyglot systems: Multiple languages without shared schema. Convincing your Rust/Python/JS teams to use the same schema definition is hard enough without binary formats.

Rapid prototyping: Schema changes frequently. If your schema changes three times a day, schema-based serialization will drive you insane.

The Honest Truth:

Parco is not universally better than JSON. It's optimized for a specific use case where:

  • You control both client and server
  • Performance matters more than flexibility
  • Schema is relatively stable or versioned
  • Binary format is acceptable

If that's not you, don't use Parco. Seriously.

If you need JSON's flexibility but better performance:

  • MessagePack: Good middle ground (still self-describing but more compact)
  • JSON with schema validation: Use JSON Schema for safety, keep the flexibility
  • CBOR: Like MessagePack but RFC-standardized (if you care about that)

If you want Parco-level performance with wider tooling:

  • Protocol Buffers: Industry standard, multi-language, mature tooling. Trade-off: .proto files, codegen pipeline, learning curve.
  • FlatBuffers: Zero-copy deserialization, very fast. Trade-off: more complex to use, schema must be stable.
  • Cap'n Proto: Protobuf's successor by its original author, simpler model, fast. Trade-off: smaller ecosystem than Protobuf.

Bottom line: These benchmarks compare different design philosophies honestly. Parco trades flexibility for speed. If that trade works for your use case, great. If not, there are plenty of other tools that might suit you better. Choose wisely.

Development

Parco uses just for task automation. If you don't have it: cargo install just

Common commands:

Command Description
just test Run tests with race detector
just test-coverage Generate HTML coverage report
just bench Run benchmarks
just bench-profile Profile CPU and memory
just lint Run golangci-lint
just format Format code
just ci Run all checks (test + lint)
just setup Install dev tools
just example builder Run specific example

See just --list for all available commands.

Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.

Quick start:

# Setup development tools
just setup

# Run tests
just test

# Run benchmarks
just bench

# Run all checks
just ci

Roadmap

Short-term

  • Comprehensive test coverage (41.8% and climbing)
  • Memory safety improvements (done: limits on allocations)
  • Schema evolution utilities
  • Validation helpers

Long-term

  • Static code generation from struct tags or DSL
  • Replace encoding/binary with faster, zero-alloc primitives
  • Custom Reader/Writer interfaces for single-byte operations
  • Cross-language support (C, Rust bindings)

Contributions welcome, but please open an issue before starting significant work.

License

MIT License - see LICENSE for details.

Acknowledgments

Thanks to all contributors.

Documentation

Constants

const (
	// MaxPoolMapSize limits the number of dynamically created pools to prevent memory leaks
	MaxPoolMapSize = 100
	// DefaultPoolAnySize is the size of buffers returned by GetAny() (8KB)
	DefaultPoolAnySize = 1 << 13
)
const (
	// MaxReasonableMapLength is the maximum allowed length for maps
	// to prevent malicious or corrupted data from causing excessive memory allocation.
	MaxReasonableMapLength = 10_000_000 // 10 million entries
)
const (
	// MaxReasonableSliceLength is the maximum allowed length for slices
	// to prevent malicious or corrupted data from causing excessive memory allocation.
	MaxReasonableSliceLength = 10_000_000 // 10 million elements
)
const (
	// MaxReasonableVarSize is the maximum allowed size for variable-length types
	// to prevent malicious or corrupted data from causing excessive memory allocation.
	// Set to 100MB - adjust based on your use case.
	MaxReasonableVarSize = 100 * 1024 * 1024 // 100MB
)
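These caps exist so that a corrupted or hostile length header cannot force a huge allocation. A stdlib sketch of the guard (the constant and names are mirrored here for illustration and are not Parco's actual code path):

```go
package main

import (
	"errors"
	"fmt"
)

// maxReasonableSliceLength mirrors the constant above: a cap on decoded
// lengths so corrupted or hostile headers cannot trigger huge allocations.
const maxReasonableSliceLength = 10_000_000

var errInvalidLength = errors.New("invalid length")

// checkedMake validates a decoded length before allocating for it.
func checkedMake(n int) ([]byte, error) {
	if n < 0 || n > maxReasonableSliceLength {
		return nil, fmt.Errorf("%w: %d", errInvalidLength, n)
	}
	return make([]byte, n), nil
}

func main() {
	if _, err := checkedMake(maxReasonableSliceLength + 1); err != nil {
		fmt.Println("rejected:", err)
	}
	b, _ := checkedMake(16)
	fmt.Println(len(b)) // 16
}
```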

Variables

var (
	ErrNotIntegerType    = errors.New("not an integer type")
	ErrOverflow          = errors.New("bytes overflow")
	ErrCannotRead        = errors.New("unsufficient bytes read")
	ErrCannotWrite       = errors.New("unsufficient bytes written")
	ErrAlreadyRegistered = errors.New("builder is registered already")
	ErrUnknownType       = errors.New("unknown type")
	ErrInvalidLength     = errors.New("invalid length")
)
var (
	SinglePool = NewPool()
)

Functions

func Bytes2String

func Bytes2String(data []byte) string

func CompileAnyModel added in v0.12.0

func CompileAnyModel(id int, item any, w io.Writer) (err error)

func CompileBlob

func CompileBlob(x []byte, w io.Writer) (err error)

func CompileFloat32 added in v0.9.0

func CompileFloat32(f32 float32, box []byte, order binary.ByteOrder) (err error)

func CompileFloat64 added in v0.9.0

func CompileFloat64(f32 float64, box []byte, order binary.ByteOrder) (err error)

func CompileInt added in v0.5.0

func CompileInt(i int, box []byte, order binary.ByteOrder) (err error)

func CompileInt8 added in v0.5.0

func CompileInt8(i8 int8, box []byte) (err error)

func CompileInt8Header added in v0.5.0

func CompileInt8Header(i int, box []byte) (err error)

func CompileInt16 added in v0.5.0

func CompileInt16(i16 int16, box []byte, order binary.ByteOrder) (err error)

func CompileInt32 added in v0.5.0

func CompileInt32(i32 int32, box []byte, order binary.ByteOrder) (err error)

func CompileInt64 added in v0.5.0

func CompileInt64(i64 int64, box []byte, order binary.ByteOrder) (err error)

func CompileModel added in v0.12.0

func CompileModel(item serializable[int], w io.Writer) (err error)

func CompileString

func CompileString(x string, box []byte) (err error)

func CompileStringWriter

func CompileStringWriter(x string, w io.Writer) (err error)

func CompileTime added in v0.10.0

func CompileTime(t time.Time, box []byte, order binary.ByteOrder) error

func CompileUInt added in v0.5.0

func CompileUInt(u uint, box []byte, order binary.ByteOrder) (err error)

func CompileUInt8

func CompileUInt8(u8 uint8, box []byte) (err error)

func CompileUInt8Header

func CompileUInt8Header(i int, box []byte) (err error)

func CompileUInt16

func CompileUInt16(u16 uint16, box []byte, order binary.ByteOrder) (err error)

func CompileUInt32 added in v0.5.0

func CompileUInt32(u32 uint32, box []byte, order binary.ByteOrder) (err error)

func CompileUInt64 added in v0.5.0

func CompileUInt64(u64 uint64, box []byte, order binary.ByteOrder) (err error)

func FormatBytes added in v0.4.0

func FormatBytes(data []byte) string

func If added in v0.5.0

func If[T any](condition bool, a, b T) T

func MapType added in v0.2.0

func MapType[K comparable, V any](header IntType, keyType Type[K], valueType Type[V]) mapType[K, V]

func MaxByteSize64

func MaxByteSize64(byteLength int) int64

func MaxSize

func MaxSize(byteLength int) int

func ParseBlob

func ParseBlob(data []byte) ([]byte, error)

func ParseFloat32 added in v0.9.0

func ParseFloat32(data []byte, order binary.ByteOrder) (float32, error)

func ParseFloat64 added in v0.9.0

func ParseFloat64(data []byte, order binary.ByteOrder) (float64, error)

func ParseInt added in v0.5.0

func ParseInt(data []byte, order binary.ByteOrder) (int, error)

func ParseInt8 added in v0.5.0

func ParseInt8(data []byte) (int8, error)

func ParseInt8Header added in v0.5.0

func ParseInt8Header(box []byte) (int, error)

func ParseInt16 added in v0.5.0

func ParseInt16(data []byte, order binary.ByteOrder) (int16, error)

func ParseInt32 added in v0.5.0

func ParseInt32(data []byte, order binary.ByteOrder) (int32, error)

func ParseInt64 added in v0.5.0

func ParseInt64(data []byte, order binary.ByteOrder) (int64, error)

func ParseModel added in v0.12.0

func ParseModel(r io.Reader) (int, any, error)

func ParseString

func ParseString(data []byte) (res string, err error)

func ParseTime added in v0.10.0

func ParseTime(box []byte, order binary.ByteOrder) (time.Time, error)

func ParseUInt added in v0.5.0

func ParseUInt(data []byte, order binary.ByteOrder) (uint, error)

func ParseUInt8

func ParseUInt8(data []byte) (uint8, error)

func ParseUInt8Header

func ParseUInt8Header(data []byte) (int, error)

func ParseUInt16

func ParseUInt16(data []byte, order binary.ByteOrder) (uint16, error)

func ParseUInt32 added in v0.5.0

func ParseUInt32(data []byte, order binary.ByteOrder) (uint32, error)

func ParseUInt64 added in v0.5.0

func ParseUInt64(data []byte, order binary.ByteOrder) (uint64, error)

func Ptr added in v0.6.0

func Ptr[T any](t T) *T

func RegisterModel added in v0.12.0

func RegisterModel(id int, builder builderAny) (err error)

func String2Bytes

func String2Bytes(str string) (bts []byte)

func WithResetFunc added in v0.4.1

func WithResetFunc[T any](fn func(*T)) nativePooledFactoryOption[T]

Types

type ArrayType

type ArrayType[T any] struct {
	// contains filtered or unexported fields
}

func Array added in v0.6.0

func Array[T any](length int, inner Type[T]) ArrayType[T]

func (ArrayType[T]) ByteLength

func (t ArrayType[T]) ByteLength() int

func (ArrayType[T]) Compile

func (t ArrayType[T]) Compile(x Iterable[T], w io.Writer) error

func (ArrayType[T]) Parse

func (t ArrayType[T]) Parse(r io.Reader) (res Iterable[T], err error)

type BasicArrayField added in v0.8.0

type BasicArrayField[T, U any] struct {
	// contains filtered or unexported fields
}

func (BasicArrayField[T, U]) Compile added in v0.8.0

func (s BasicArrayField[T, U]) Compile(item *T, w io.Writer) error

func (BasicArrayField[T, U]) ID added in v0.8.0

func (s BasicArrayField[T, U]) ID() string

func (BasicArrayField[T, U]) Parse added in v0.8.0

func (s BasicArrayField[T, U]) Parse(item *T, r io.Reader) error

type BasicSliceField added in v0.7.0

type BasicSliceField[T, U any] struct {
	// contains filtered or unexported fields
}

func (BasicSliceField[T, U]) Compile added in v0.7.0

func (s BasicSliceField[T, U]) Compile(item *T, w io.Writer) error

func (BasicSliceField[T, U]) ID added in v0.7.0

func (s BasicSliceField[T, U]) ID() string

func (BasicSliceField[T, U]) Parse added in v0.7.0

func (s BasicSliceField[T, U]) Parse(item *T, r io.Reader) error

type BufferCursor

type BufferCursor struct {
	// contains filtered or unexported fields
}

func NewBufferCursor

func NewBufferCursor(data []byte, cursor int) BufferCursor

func (*BufferCursor) Read

func (b *BufferCursor) Read(box []byte) (int, error)

type Compiler

type Compiler[T any] struct {
	// contains filtered or unexported fields
}

func CompilerModel

func CompilerModel[T any]() *Compiler[T]

func (*Compiler[T]) Array added in v0.6.0

func (c *Compiler[T]) Array(field fieldCompiler[T]) *Compiler[T]

func (*Compiler[T]) Bool added in v0.6.0

func (c *Compiler[T]) Bool(getter Getter[T, bool]) *Compiler[T]

func (*Compiler[T]) Byte added in v0.6.0

func (c *Compiler[T]) Byte(getter Getter[T, byte]) *Compiler[T]

func (Compiler[T]) Compile

func (c Compiler[T]) Compile(value T, w io.Writer) error

func (*Compiler[T]) Field added in v0.6.0

func (c *Compiler[T]) Field(f fieldCompiler[T]) *Compiler[T]

func (*Compiler[T]) Float32 added in v0.9.0

func (c *Compiler[T]) Float32(order binary.ByteOrder, getter Getter[T, float32]) *Compiler[T]

func (*Compiler[T]) Float64 added in v0.9.0

func (c *Compiler[T]) Float64(order binary.ByteOrder, getter Getter[T, float64]) *Compiler[T]

func (*Compiler[T]) Int added in v0.6.0

func (c *Compiler[T]) Int(order binary.ByteOrder, getter Getter[T, int]) *Compiler[T]

func (*Compiler[T]) Int8 added in v0.6.0

func (c *Compiler[T]) Int8(getter Getter[T, int8]) *Compiler[T]

func (*Compiler[T]) Int16 added in v0.6.0

func (c *Compiler[T]) Int16(order binary.ByteOrder, getter Getter[T, int16]) *Compiler[T]

func (*Compiler[T]) Int32 added in v0.6.0

func (c *Compiler[T]) Int32(order binary.ByteOrder, getter Getter[T, int32]) *Compiler[T]

func (*Compiler[T]) Int64 added in v0.6.0

func (c *Compiler[T]) Int64(order binary.ByteOrder, getter Getter[T, int64]) *Compiler[T]

func (*Compiler[T]) Map added in v0.6.0

func (c *Compiler[T]) Map(field fieldCompiler[T]) *Compiler[T]

func (*Compiler[T]) Option added in v0.6.0

func (c *Compiler[T]) Option(f fieldCompiler[T]) *Compiler[T]

func (*Compiler[T]) Slice added in v0.7.0

func (c *Compiler[T]) Slice(field fieldCompiler[T]) *Compiler[T]

func (*Compiler[T]) SmallVarchar added in v0.6.0

func (c *Compiler[T]) SmallVarchar(getter Getter[T, string]) *Compiler[T]

func (*Compiler[T]) Struct added in v0.6.0

func (c *Compiler[T]) Struct(field fieldCompiler[T]) *Compiler[T]

func (*Compiler[T]) Time added in v0.10.0

func (c *Compiler[T]) Time(withLocation bool, getter Getter[T, time.Time]) *Compiler[T]

func (*Compiler[T]) TimeLocation added in v0.10.0

func (c *Compiler[T]) TimeLocation(getter Getter[T, time.Time]) *Compiler[T]

func (*Compiler[T]) TimeUTC added in v0.10.0

func (c *Compiler[T]) TimeUTC(getter Getter[T, time.Time]) *Compiler[T]

func (*Compiler[T]) UInt8 added in v0.6.0

func (c *Compiler[T]) UInt8(getter Getter[T, uint8]) *Compiler[T]

func (*Compiler[T]) UInt16 added in v0.6.0

func (c *Compiler[T]) UInt16(order binary.ByteOrder, getter Getter[T, uint16]) *Compiler[T]

func (*Compiler[T]) UInt16BE added in v0.6.0

func (c *Compiler[T]) UInt16BE(getter Getter[T, uint16]) *Compiler[T]

func (*Compiler[T]) UInt16LE added in v0.6.0

func (c *Compiler[T]) UInt16LE(getter Getter[T, uint16]) *Compiler[T]

func (*Compiler[T]) UInt32 added in v0.6.0

func (c *Compiler[T]) UInt32(order binary.ByteOrder, getter Getter[T, uint32]) *Compiler[T]

func (*Compiler[T]) UInt64 added in v0.6.0

func (c *Compiler[T]) UInt64(order binary.ByteOrder, getter Getter[T, uint64]) *Compiler[T]

func (*Compiler[T]) Varchar added in v0.6.0

func (c *Compiler[T]) Varchar(getter Getter[T, string]) *Compiler[T]

type CompilerFunc

type CompilerFunc[T any] func(data T, box []byte) error

func CompileStringFactory

func CompileStringFactory() CompilerFunc[string]

func CompileUInt16Factory

func CompileUInt16Factory(order binary.ByteOrder) CompilerFunc[uint16]

func CompileUInt16HeaderFactory

func CompileUInt16HeaderFactory(order binary.ByteOrder) CompilerFunc[int]

type CompilerType added in v0.4.0

type CompilerType[T any] interface {
	Compile(T, io.Writer) error
}

type ErrCompile

type ErrCompile struct {
	// contains filtered or unexported fields
}

func NewErrCompile

func NewErrCompile(reason string) ErrCompile

func (ErrCompile) Error

func (e ErrCompile) Error() string

type ErrFieldNotFound

type ErrFieldNotFound struct {
	// contains filtered or unexported fields
}

func NewErrFieldNotFoundError

func NewErrFieldNotFoundError(field string) ErrFieldNotFound

func (ErrFieldNotFound) Error

func (e ErrFieldNotFound) Error() string

type ErrTypeAssertion

type ErrTypeAssertion struct {
	// contains filtered or unexported fields
}

func NewErrTypeAssertionError

func NewErrTypeAssertionError(want, have string) ErrTypeAssertion

func (ErrTypeAssertion) Error

func (e ErrTypeAssertion) Error() string

type ErrUnSufficientBytes

type ErrUnSufficientBytes struct {
	// contains filtered or unexported fields
}

func NewErrUnSufficientBytesError

func NewErrUnSufficientBytesError(want, have int) ErrUnSufficientBytes

func (ErrUnSufficientBytes) Error

func (e ErrUnSufficientBytes) Error() string

type Factory

type Factory[T any] interface {
	Get() T
}

func ObjectFactory

func ObjectFactory[T any]() Factory[T]

type Field

type Field[T, U any] interface {
	ID() string
	Parse(*T, io.Reader) error
	Compile(*T, io.Writer) error
}

func ArrayField

func ArrayField[T, U any](
	length int,
	inner Type[U],
	setter Setter[T, SliceView[U]],
	getter Getter[T, SliceView[U]],
) Field[T, U]

func ArrayFieldGetter

func ArrayFieldGetter[T any, U any](
	length int,
	inner Type[U],
	getter Getter[T, SliceView[U]],
) Field[T, U]

func ArrayFieldSetter

func ArrayFieldSetter[T, U any](
	length int,
	inner Type[U],
	setter Setter[T, SliceView[U]],
) Field[T, U]

func BoolField added in v0.3.0

func BoolField[T any](
	tp Type[bool],
	getter Getter[T, bool],
	setter Setter[T, bool],
) Field[T, bool]

func BoolFieldGetter added in v0.3.0

func BoolFieldGetter[T any](
	tp Type[bool],
	getter Getter[T, bool],
) Field[T, bool]

func BoolFieldSetter added in v0.3.0

func BoolFieldSetter[T any](
	tp Type[bool],
	setter Setter[T, bool],
) Field[T, bool]

func DefaultSkipField

func DefaultSkipField[T any](pad int) Field[T, any]

func Float32Field added in v0.9.0

func Float32Field[T any](
	tp Type[float32],
	getter Getter[T, float32],
	setter Setter[T, float32],
) Field[T, float32]

func Float32FieldGetter added in v0.9.0

func Float32FieldGetter[T any](
	tp Type[float32],
	getter Getter[T, float32],
) Field[T, float32]

func Float32FieldSetter added in v0.9.0

func Float32FieldSetter[T any](
	tp Type[float32],
	setter Setter[T, float32],
) Field[T, float32]

func Float64Field added in v0.9.0

func Float64Field[T any](
	tp Type[float64],
	getter Getter[T, float64],
	setter Setter[T, float64],
) Field[T, float64]

func Float64FieldGetter added in v0.9.0

func Float64FieldGetter[T any](
	tp Type[float64],
	getter Getter[T, float64],
) Field[T, float64]

func Float64FieldSetter added in v0.9.0

func Float64FieldSetter[T any](
	tp Type[float64],
	setter Setter[T, float64],
) Field[T, float64]

func Int8Field added in v0.5.0

func Int8Field[T any](
	tp Type[int8],
	getter Getter[T, int8],
	setter Setter[T, int8],
) Field[T, int8]

func Int8FieldGetter added in v0.5.0

func Int8FieldGetter[T any](
	tp Type[int8],
	getter Getter[T, int8],
) Field[T, int8]

func Int8FieldSetter added in v0.5.0

func Int8FieldSetter[T any](
	tp Type[int8],
	setter Setter[T, int8],
) Field[T, int8]

func Int16Field added in v0.5.0

func Int16Field[T any](
	tp Type[int16],
	getter Getter[T, int16],
	setter Setter[T, int16],
) Field[T, int16]

func Int16FieldGetter added in v0.5.0

func Int16FieldGetter[T any](
	tp Type[int16],
	getter Getter[T, int16],
) Field[T, int16]

func Int16FieldSetter added in v0.5.0

func Int16FieldSetter[T any](
	tp Type[int16],
	setter Setter[T, int16],
) Field[T, int16]

func Int32Field added in v0.5.0

func Int32Field[T any](
	tp Type[int32],
	getter Getter[T, int32],
	setter Setter[T, int32],
) Field[T, int32]

func Int32FieldGetter added in v0.5.0

func Int32FieldGetter[T any](
	tp Type[int32],
	getter Getter[T, int32],
) Field[T, int32]

func Int32FieldSetter added in v0.5.0

func Int32FieldSetter[T any](
	tp Type[int32],
	setter Setter[T, int32],
) Field[T, int32]

func Int64Field added in v0.5.0

func Int64Field[T any](
	tp Type[int64],
	getter Getter[T, int64],
	setter Setter[T, int64],
) Field[T, int64]

func Int64FieldGetter added in v0.5.0

func Int64FieldGetter[T any](
	tp Type[int64],
	getter Getter[T, int64],
) Field[T, int64]

func Int64FieldSetter added in v0.5.0

func Int64FieldSetter[T any](
	tp Type[int64],
	setter Setter[T, int64],
) Field[T, int64]

func IntField added in v0.5.0

func IntField[T any](
	tp Type[int],
	getter Getter[T, int],
	setter Setter[T, int],
) Field[T, int]

func IntFieldGetter added in v0.5.0

func IntFieldGetter[T any](
	tp Type[int],
	getter Getter[T, int],
) Field[T, int]

func IntFieldSetter added in v0.5.0

func IntFieldSetter[T any](
	tp Type[int],
	setter Setter[T, int],
) Field[T, int]

func MapField added in v0.2.0

func MapField[T any, K comparable, V any](
	header IntType,
	keyType Type[K],
	valueType Type[V],
	setter Setter[T, map[K]V],
	getter Getter[T, map[K]V],
) Field[T, map[K]V]

func MapFieldGetter added in v0.2.0

func MapFieldGetter[T any, K comparable, V any](
	header IntType,
	keyType Type[K],
	valueType Type[V],
	getter Getter[T, map[K]V],
) Field[T, map[K]V]

func MapFieldSetter added in v0.2.0

func MapFieldSetter[T any, K comparable, V any](
	header IntType,
	keyType Type[K],
	valueType Type[V],
	setter Setter[T, map[K]V],
) Field[T, map[K]V]

func SkipField

func SkipField[T any](
	tp Type[any],
) Field[T, any]

func SliceField added in v0.7.0

func SliceField[T, U any](
	header IntType,
	inner Type[U],
	setter Setter[T, SliceView[U]],
	getter Getter[T, SliceView[U]],
) Field[T, U]

func SliceFieldGetter added in v0.7.0

func SliceFieldGetter[T any, U any](
	header IntType,
	inner Type[U],
	getter Getter[T, SliceView[U]],
) Field[T, U]

func SliceFieldSetter added in v0.7.0

func SliceFieldSetter[T, U any](
	header IntType,
	inner Type[U],
	setter Setter[T, SliceView[U]],
) Field[T, U]

func StringField

func StringField[T any](
	tp Type[string],
	getter Getter[T, string],
	setter Setter[T, string],
) Field[T, string]

func StringFieldGetter

func StringFieldGetter[T any](
	tp Type[string],
	getter Getter[T, string],
) Field[T, string]

func StringFieldSetter

func StringFieldSetter[T any](
	tp Type[string],
	setter Setter[T, string],
) Field[T, string]

func StructField added in v0.4.0

func StructField[T, U any](
	getter Getter[T, U],
	setter Setter[T, U],
	inner structTypeI[U],
) Field[T, U]

func StructFieldGetter added in v0.4.0

func StructFieldGetter[T, U any](
	getter Getter[T, U],
	compiler CompilerType[U],
) Field[T, U]

func StructFieldSetter added in v0.4.0

func StructFieldSetter[T, U any](
	setter Setter[T, U],
	parser ParserType[U],
) Field[T, U]

func TimeField added in v0.10.0

func TimeField[T any](
	withLocation bool,
	getter Getter[T, time.Time],
	setter Setter[T, time.Time],
) Field[T, time.Time]

func TimeFieldGetter added in v0.10.0

func TimeFieldGetter[T any](
	withLocation bool,
	getter Getter[T, time.Time],
) Field[T, time.Time]

func TimeFieldSetter added in v0.10.0

func TimeFieldSetter[T any](
	withLocation bool,
	setter Setter[T, time.Time],
) Field[T, time.Time]

func UInt8Field

func UInt8Field[T any](
	tp Type[uint8],
	getter Getter[T, uint8],
	setter Setter[T, uint8],
) Field[T, uint8]

func UInt8FieldGetter

func UInt8FieldGetter[T any](
	tp Type[uint8],
	getter Getter[T, uint8],
) Field[T, uint8]

func UInt8FieldSetter

func UInt8FieldSetter[T any](
	tp Type[uint8],
	setter Setter[T, uint8],
) Field[T, uint8]

func UInt16Field

func UInt16Field[T any](
	tp Type[uint16],
	getter Getter[T, uint16],
	setter Setter[T, uint16],
) Field[T, uint16]

func UInt16FieldGetter

func UInt16FieldGetter[T any](
	tp Type[uint16],
	getter Getter[T, uint16],
) Field[T, uint16]

func UInt16FieldSetter

func UInt16FieldSetter[T any](
	tp Type[uint16],
	setter Setter[T, uint16],
) Field[T, uint16]

func UInt32Field added in v0.5.0

func UInt32Field[T any](
	tp Type[uint32],
	getter Getter[T, uint32],
	setter Setter[T, uint32],
) Field[T, uint32]

func UInt32FieldGetter added in v0.5.0

func UInt32FieldGetter[T any](
	tp Type[uint32],
	getter Getter[T, uint32],
) Field[T, uint32]

func UInt32FieldSetter added in v0.5.0

func UInt32FieldSetter[T any](
	tp Type[uint32],
	setter Setter[T, uint32],
) Field[T, uint32]

func UInt64Field added in v0.5.0

func UInt64Field[T any](
	tp Type[uint64],
	getter Getter[T, uint64],
	setter Setter[T, uint64],
) Field[T, uint64]

func UInt64FieldGetter added in v0.5.0

func UInt64FieldGetter[T any](
	tp Type[uint64],
	getter Getter[T, uint64],
) Field[T, uint64]

func UInt64FieldSetter added in v0.5.0

func UInt64FieldSetter[T any](
	tp Type[uint64],
	setter Setter[T, uint64],
) Field[T, uint64]

type FixedField

type FixedField[T, U any] struct {
	Id     string
	Type   Type[U]
	Setter Setter[T, U]
	Getter Getter[T, U]
}

func (FixedField[T, U]) Compile

func (s FixedField[T, U]) Compile(item *T, w io.Writer) (err error)

func (FixedField[T, U]) ID

func (s FixedField[T, U]) ID() string

func (FixedField[T, U]) Parse

func (s FixedField[T, U]) Parse(item *T, r io.Reader) error

type FuncFactory

type FuncFactory[T any] func() T

func (FuncFactory[T]) Get

func (f FuncFactory[T]) Get() T
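FuncFactory adapts a plain constructor function to the Factory interface, so any `func() T` can be passed where a `Factory[T]` is expected. A self-contained sketch mirroring the documented shapes:

```go
package main

import "fmt"

// Factory and FuncFactory as documented: wrapping a func() T in
// FuncFactory makes it satisfy Factory[T] via the Get method.
type Factory[T any] interface{ Get() T }

type FuncFactory[T any] func() T

func (f FuncFactory[T]) Get() T { return f() }

type User struct{ Name string }

func main() {
	var f Factory[User] = FuncFactory[User](func() User {
		return User{Name: "fresh"}
	})
	fmt.Println(f.Get().Name)
}
```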

type Getter

type Getter[T, U any] func(*T) U

type IntType

type IntType = Type[int]

type Iterable

type Iterable[T any] interface {
	Len() int
	Range(ranger[T]) error
	Unwrap() SliceView[T]
}

type ModelBuilder

type ModelBuilder[T any] struct {
	// contains filtered or unexported fields
}

func Builder

func Builder[T any](factory Factory[T]) ModelBuilder[T]

func (ModelBuilder[T]) Array

func (b ModelBuilder[T]) Array(field fieldBuilder[T]) ModelBuilder[T]

func (ModelBuilder[T]) Bool added in v0.3.0

func (b ModelBuilder[T]) Bool(getter Getter[T, bool], setter Setter[T, bool]) ModelBuilder[T]

func (ModelBuilder[T]) Byte added in v0.5.0

func (b ModelBuilder[T]) Byte(getter Getter[T, byte], setter Setter[T, byte]) ModelBuilder[T]

func (ModelBuilder[T]) Compile

func (b ModelBuilder[T]) Compile(value T, w io.Writer) error

func (ModelBuilder[T]) CompileAny added in v0.11.0

func (b ModelBuilder[T]) CompileAny(value any, w io.Writer) error

func (ModelBuilder[T]) Field

func (b ModelBuilder[T]) Field(f fieldBuilder[T]) ModelBuilder[T]

func (ModelBuilder[T]) Float32 added in v0.9.0

func (b ModelBuilder[T]) Float32(
	order binary.ByteOrder,
	getter Getter[T, float32],
	setter Setter[T, float32],
) ModelBuilder[T]

func (ModelBuilder[T]) Float64 added in v0.9.0

func (b ModelBuilder[T]) Float64(
	order binary.ByteOrder,
	getter Getter[T, float64],
	setter Setter[T, float64],
) ModelBuilder[T]

func (ModelBuilder[T]) Int added in v0.5.0

func (b ModelBuilder[T]) Int(order binary.ByteOrder, getter Getter[T, int], setter Setter[T, int]) ModelBuilder[T]

func (ModelBuilder[T]) Int8 added in v0.5.0

func (b ModelBuilder[T]) Int8(getter Getter[T, int8], setter Setter[T, int8]) ModelBuilder[T]

func (ModelBuilder[T]) Int32 added in v0.5.0

func (b ModelBuilder[T]) Int32(
	order binary.ByteOrder,
	getter Getter[T, int32],
	setter Setter[T, int32],
) ModelBuilder[T]

func (ModelBuilder[T]) Int64 added in v0.5.0

func (b ModelBuilder[T]) Int64(
	order binary.ByteOrder,
	getter Getter[T, int64],
	setter Setter[T, int64],
) ModelBuilder[T]

func (ModelBuilder[T]) Map added in v0.2.0

func (b ModelBuilder[T]) Map(field fieldBuilder[T]) ModelBuilder[T]

func (ModelBuilder[T]) Option added in v0.6.0

func (b ModelBuilder[T]) Option(field fieldBuilder[T]) ModelBuilder[T]

func (ModelBuilder[T]) Parco added in v0.8.0

func (b ModelBuilder[T]) Parco() (*Parser[T], *Compiler[T])

func (ModelBuilder[T]) Parse added in v0.4.0

func (b ModelBuilder[T]) Parse(r io.Reader) (T, error)

func (ModelBuilder[T]) ParseAny added in v0.11.0

func (b ModelBuilder[T]) ParseAny(r io.Reader) (any, error)

func (ModelBuilder[T]) Slice added in v0.7.0

func (b ModelBuilder[T]) Slice(field fieldBuilder[T]) ModelBuilder[T]

func (ModelBuilder[T]) SmallVarchar

func (b ModelBuilder[T]) SmallVarchar(getter Getter[T, string], setter Setter[T, string]) ModelBuilder[T]

func (ModelBuilder[T]) Struct added in v0.4.0

func (b ModelBuilder[T]) Struct(field fieldBuilder[T]) ModelBuilder[T]

func (ModelBuilder[T]) Time added in v0.10.0

func (b ModelBuilder[T]) Time(
	withLocation bool,
	getter Getter[T, time.Time],
	setter Setter[T, time.Time],
) ModelBuilder[T]

func (ModelBuilder[T]) TimeLocation added in v0.10.0

func (b ModelBuilder[T]) TimeLocation(getter Getter[T, time.Time], setter Setter[T, time.Time]) ModelBuilder[T]

func (ModelBuilder[T]) TimeUTC added in v0.10.0

func (b ModelBuilder[T]) TimeUTC(getter Getter[T, time.Time], setter Setter[T, time.Time]) ModelBuilder[T]

func (ModelBuilder[T]) UInt8

func (b ModelBuilder[T]) UInt8(getter Getter[T, uint8], setter Setter[T, uint8]) ModelBuilder[T]

func (ModelBuilder[T]) UInt16

func (b ModelBuilder[T]) UInt16(
	order binary.ByteOrder,
	getter Getter[T, uint16],
	setter Setter[T, uint16],
) ModelBuilder[T]

func (ModelBuilder[T]) UInt16BE

func (b ModelBuilder[T]) UInt16BE(getter Getter[T, uint16], setter Setter[T, uint16]) ModelBuilder[T]

func (ModelBuilder[T]) UInt16LE

func (b ModelBuilder[T]) UInt16LE(getter Getter[T, uint16], setter Setter[T, uint16]) ModelBuilder[T]

func (ModelBuilder[T]) UInt32 added in v0.5.0

func (b ModelBuilder[T]) UInt32(
	order binary.ByteOrder,
	getter Getter[T, uint32],
	setter Setter[T, uint32],
) ModelBuilder[T]

func (ModelBuilder[T]) UInt64 added in v0.5.0

func (b ModelBuilder[T]) UInt64(
	order binary.ByteOrder,
	getter Getter[T, uint64],
	setter Setter[T, uint64],
) ModelBuilder[T]

func (ModelBuilder[T]) Varchar

func (b ModelBuilder[T]) Varchar(getter Getter[T, string], setter Setter[T, string]) ModelBuilder[T]

type ModelMultiBuilder added in v0.11.0

type ModelMultiBuilder[T comparable] struct {
	// contains filtered or unexported fields
}

func MultiBuilder added in v0.11.0

func MultiBuilder[T comparable](header Type[T]) *ModelMultiBuilder[T]

func (*ModelMultiBuilder[T]) Compile added in v0.11.0

func (b *ModelMultiBuilder[T]) Compile(item serializable[T], w io.Writer) (err error)

func (*ModelMultiBuilder[T]) CompileAny added in v0.11.0

func (b *ModelMultiBuilder[T]) CompileAny(id T, item any, w io.Writer) (err error)

func (*ModelMultiBuilder[T]) MustRegister added in v0.11.0

func (b *ModelMultiBuilder[T]) MustRegister(id T, builder builderAny) *ModelMultiBuilder[T]

func (*ModelMultiBuilder[T]) Parse added in v0.11.0

func (b *ModelMultiBuilder[T]) Parse(r io.Reader) (id T, res any, err error)

func (*ModelMultiBuilder[T]) Register added in v0.11.0

func (b *ModelMultiBuilder[T]) Register(id T, builder builderAny) (*ModelMultiBuilder[T], error)

type NativePooledFactory

type NativePooledFactory[T any] struct {
	// contains filtered or unexported fields
}

func (*NativePooledFactory[T]) Get

func (f *NativePooledFactory[T]) Get() T

func (*NativePooledFactory[T]) Put

func (f *NativePooledFactory[T]) Put(t T)

type OptionalField added in v0.6.0

type OptionalField[T, U any] struct {
	// contains filtered or unexported fields
}

func OptionField added in v0.6.0

func OptionField[T, U any](
	tp Type[U],
	setter Setter[T, *U],
	getter Getter[T, *U],
) OptionalField[T, U]

func OptionFieldGetter added in v0.6.0

func OptionFieldGetter[T, U any](
	tp Type[U],
	getter Getter[T, *U],
) OptionalField[T, U]

func OptionFieldSetter added in v0.6.0

func OptionFieldSetter[T, U any](
	tp Type[U],
	setter Setter[T, *U],
) OptionalField[T, U]

func (OptionalField[T, U]) Compile added in v0.6.0

func (s OptionalField[T, U]) Compile(item *T, w io.Writer) (err error)

func (OptionalField[T, U]) ID added in v0.6.0

func (s OptionalField[T, U]) ID() string

func (OptionalField[T, U]) Parse added in v0.6.0

func (s OptionalField[T, U]) Parse(item *T, r io.Reader) error

type OptionalType added in v0.6.0

type OptionalType[T any] struct {
	// contains filtered or unexported fields
}

func Option added in v0.6.0

func Option[T any](inner Type[T]) OptionalType[T]

func (OptionalType[T]) Compile added in v0.6.0

func (i OptionalType[T]) Compile(item *T, w io.Writer) (err error)

func (OptionalType[T]) Parse added in v0.6.0

func (i OptionalType[T]) Parse(r io.Reader) (*T, error)

type Parser

type Parser[T any] struct {
	// contains filtered or unexported fields
}

func ParserModel

func ParserModel[T any](factory Factory[T]) *Parser[T]

func (*Parser[T]) Array added in v0.6.0

func (p *Parser[T]) Array(field fieldParser[T]) *Parser[T]

func (*Parser[T]) Bool added in v0.6.0

func (c *Parser[T]) Bool(setter Setter[T, bool]) *Parser[T]

func (*Parser[T]) Byte added in v0.6.0

func (p *Parser[T]) Byte(setter Setter[T, byte]) *Parser[T]

func (*Parser[T]) Field added in v0.6.0

func (p *Parser[T]) Field(f fieldParser[T]) *Parser[T]

func (*Parser[T]) Float32 added in v0.9.0

func (p *Parser[T]) Float32(order binary.ByteOrder, setter Setter[T, float32]) *Parser[T]

func (*Parser[T]) Float64 added in v0.9.0

func (p *Parser[T]) Float64(order binary.ByteOrder, setter Setter[T, float64]) *Parser[T]

func (*Parser[T]) Int added in v0.6.0

func (p *Parser[T]) Int(order binary.ByteOrder, setter Setter[T, int]) *Parser[T]

func (*Parser[T]) Int8 added in v0.6.0

func (p *Parser[T]) Int8(setter Setter[T, int8]) *Parser[T]

func (*Parser[T]) Int32 added in v0.6.0

func (p *Parser[T]) Int32(order binary.ByteOrder, setter Setter[T, int32]) *Parser[T]

func (*Parser[T]) Int64 added in v0.6.0

func (p *Parser[T]) Int64(order binary.ByteOrder, setter Setter[T, int64]) *Parser[T]

func (*Parser[T]) Map added in v0.6.0

func (p *Parser[T]) Map(field fieldParser[T]) *Parser[T]

func (*Parser[T]) Option added in v0.6.0

func (p *Parser[T]) Option(f fieldParser[T]) *Parser[T]

func (*Parser[T]) Parse

func (p *Parser[T]) Parse(r io.Reader) (T, error)

func (*Parser[T]) ParseBytes

func (p *Parser[T]) ParseBytes(data []byte) (T, error)

func (*Parser[T]) Skip added in v0.6.0

func (p *Parser[T]) Skip(pad int) *Parser[T]

func (*Parser[T]) Slice added in v0.7.0

func (p *Parser[T]) Slice(field fieldParser[T]) *Parser[T]

func (*Parser[T]) SmallVarchar added in v0.6.0

func (p *Parser[T]) SmallVarchar(setter Setter[T, string]) *Parser[T]

func (*Parser[T]) Struct added in v0.6.0

func (p *Parser[T]) Struct(field fieldParser[T]) *Parser[T]

func (*Parser[T]) Time added in v0.10.0

func (p *Parser[T]) Time(withLocation bool, setter Setter[T, time.Time]) *Parser[T]

func (*Parser[T]) TimeLocation added in v0.10.0

func (p *Parser[T]) TimeLocation(setter Setter[T, time.Time]) *Parser[T]

func (*Parser[T]) TimeUTC added in v0.10.0

func (p *Parser[T]) TimeUTC(setter Setter[T, time.Time]) *Parser[T]

func (*Parser[T]) UInt8 added in v0.6.0

func (p *Parser[T]) UInt8(setter Setter[T, uint8]) *Parser[T]

func (*Parser[T]) UInt16 added in v0.6.0

func (p *Parser[T]) UInt16(order binary.ByteOrder, setter Setter[T, uint16]) *Parser[T]

func (*Parser[T]) UInt16BE added in v0.6.0

func (p *Parser[T]) UInt16BE(setter Setter[T, uint16]) *Parser[T]

func (*Parser[T]) UInt16LE added in v0.6.0

func (p *Parser[T]) UInt16LE(setter Setter[T, uint16]) *Parser[T]

func (*Parser[T]) UInt32 added in v0.6.0

func (p *Parser[T]) UInt32(order binary.ByteOrder, setter Setter[T, uint32]) *Parser[T]

func (*Parser[T]) UInt64 added in v0.6.0

func (p *Parser[T]) UInt64(order binary.ByteOrder, setter Setter[T, uint64]) *Parser[T]

func (*Parser[T]) Varchar added in v0.6.0

func (p *Parser[T]) Varchar(setter Setter[T, string]) *Parser[T]

type ParserFunc

type ParserFunc[T any] func([]byte) (T, error)

func ParseStringFactory

func ParseStringFactory() ParserFunc[string]

func ParseUInt16Factory

func ParseUInt16Factory(order binary.ByteOrder) ParserFunc[uint16]

func ParseUInt16HeaderFactory

func ParseUInt16HeaderFactory(order binary.ByteOrder) ParserFunc[int]

type ParserType added in v0.4.0

type ParserType[T any] interface {
	Parse(reader io.Reader) (T, error)
}

type Pool

type Pool struct {
	// contains filtered or unexported fields
}

func NewPool

func NewPool() *Pool

func (*Pool) Get

func (p *Pool) Get(size int) *[]byte

Get returns a byte slice of at least the requested size from the pool. The returned slice may be larger than requested. Defer a call to Put to return the slice when done.

func (*Pool) Get1

func (p *Pool) Get1() *[]byte

func (*Pool) Get2

func (p *Pool) Get2() *[]byte

func (*Pool) Get4

func (p *Pool) Get4() *[]byte

func (*Pool) Get8

func (p *Pool) Get8() *[]byte

func (*Pool) Get256

func (p *Pool) Get256() *[]byte

func (*Pool) GetAny

func (p *Pool) GetAny() *[]byte

func (*Pool) GetAnyMap

func (p *Pool) GetAnyMap(size int) *[]byte

GetAnyMap returns a byte slice of exactly the requested size from a dynamically created pool. This method creates new pools on-demand but limits the total number to MaxPoolMapSize. If the limit is reached, it falls back to GetAny(). Note: Only use this if you need exact sizes and will call PutAnyMap() to return the buffer.

func (*Pool) Put

func (p *Pool) Put(b *[]byte)

func (*Pool) Put1

func (p *Pool) Put1(b *[]byte)

func (*Pool) Put2

func (p *Pool) Put2(b *[]byte)

func (*Pool) Put4

func (p *Pool) Put4(b *[]byte)

func (*Pool) Put8

func (p *Pool) Put8(b *[]byte)

func (*Pool) Put256

func (p *Pool) Put256(b *[]byte)

func (*Pool) PutAny

func (p *Pool) PutAny(b *[]byte)

func (*Pool) PutAnyMap

func (p *Pool) PutAnyMap(b *[]byte)

PutAnyMap returns a byte slice to the dynamically created pool. If no pool exists for this size (e.g., buffer came from GetAny fallback), the buffer is silently discarded to prevent panics.
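The Get*/Put* pairs above follow the usual sync.Pool discipline: every Get should be matched by a Put once the buffer is no longer referenced. A minimal size-bucketed pool sketch (not parco's implementation) showing why the fixed Get1/Get2/Get4/Get8/Get256 buckets exist and why unknown sizes are dropped on Put:

```go
package main

import (
	"fmt"
	"sync"
)

// bucketPool keeps one sync.Pool per fixed size, mirroring the
// Get1/Get2/Get4/Get8/Get256 buckets documented above.
type bucketPool struct {
	buckets map[int]*sync.Pool
}

func newBucketPool(sizes ...int) *bucketPool {
	p := &bucketPool{buckets: map[int]*sync.Pool{}}
	for _, s := range sizes {
		s := s // capture per-iteration value for the closure
		p.buckets[s] = &sync.Pool{New: func() any {
			b := make([]byte, s)
			return &b
		}}
	}
	return p
}

// Get returns a *[]byte; the pointer keeps the slice header off the
// heap when it passes through the pool's interface value.
func (p *bucketPool) Get(size int) *[]byte {
	if b, ok := p.buckets[size]; ok {
		return b.Get().(*[]byte)
	}
	b := make([]byte, size) // no bucket: plain allocation fallback
	return &b
}

// Put drops buffers of unknown size, as PutAnyMap is documented to do.
func (p *bucketPool) Put(size int, b *[]byte) {
	if bucket, ok := p.buckets[size]; ok {
		bucket.Put(b)
	}
}

func main() {
	pool := newBucketPool(1, 2, 4, 8, 256)
	buf := pool.Get(8)
	defer pool.Put(8, buf)
	copy(*buf, "parco")
	fmt.Println(len(*buf))
}
```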

type PoolFactory

type PoolFactory[T any] interface {
	Factory[T]

	Put(T)
}

func PooledFactory

func PooledFactory[T any](inner Factory[T], options ...nativePooledFactoryOption[T]) PoolFactory[T]

type Pooler

type Pooler interface {
	Get(size int) *[]byte
	Put(*[]byte)
}

type Setter

type Setter[T, U any] func(*T, U)

type SizerFunc

type SizerFunc[T any] func(T) int

func (SizerFunc[T]) Len

func (s SizerFunc[T]) Len(item T) int

type SliceType added in v0.7.0

type SliceType[T any] struct {
	// contains filtered or unexported fields
}

func Slice

func Slice[T any](header IntType, inner Type[T]) SliceType[T]

func (SliceType[T]) ByteLength added in v0.7.0

func (t SliceType[T]) ByteLength() int

func (SliceType[T]) Compile added in v0.7.0

func (t SliceType[T]) Compile(x Iterable[T], w io.Writer) error

func (SliceType[T]) Parse added in v0.7.0

func (t SliceType[T]) Parse(r io.Reader) (res Iterable[T], err error)

type SliceView added in v0.7.0

type SliceView[T any] []T

func (SliceView[T]) Len added in v0.7.0

func (s SliceView[T]) Len() int

func (SliceView[T]) Range added in v0.7.0

func (s SliceView[T]) Range(fn ranger[T]) error

func (SliceView[T]) Unwrap added in v0.7.0

func (s SliceView[T]) Unwrap() SliceView[T]
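SliceView adapts a plain slice to the Iterable interface. The real `ranger[T]` type is unexported; the sketch below assumes a `func(T) error` shape purely for illustration:

```go
package main

import "fmt"

// SliceView sketch of the documented Iterable behavior. The real
// ranger type is unexported; func(T) error is an assumed stand-in.
type SliceView[T any] []T

func (s SliceView[T]) Len() int { return len(s) }

// Range visits each element, stopping at the first error.
func (s SliceView[T]) Range(fn func(T) error) error {
	for _, v := range s {
		if err := fn(v); err != nil {
			return err
		}
	}
	return nil
}

func (s SliceView[T]) Unwrap() SliceView[T] { return s }

func main() {
	v := SliceView[int]{1, 2, 3}
	sum := 0
	_ = v.Range(func(n int) error { sum += n; return nil })
	fmt.Println(v.Len(), sum)
}
```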

type StructType added in v0.6.1

type StructType[T any] struct {
	ParserType[T]
	CompilerType[T]
}

func Struct added in v0.6.1

func Struct[T any](b ModelBuilder[T]) StructType[T]

func StructCo added in v0.6.1

func StructCo[T any](compiler CompilerType[T]) StructType[T]

func StructPar added in v0.6.1

func StructPar[T any](parser ParserType[T]) StructType[T]

func StructParco added in v0.6.1

func StructParco[T any](parser ParserType[T], compiler CompilerType[T]) StructType[T]

func (StructType[T]) ByteLength added in v0.6.1

func (s StructType[T]) ByteLength() int

type TimeType added in v0.10.0

type TimeType struct {
	// contains filtered or unexported fields
}

func (TimeType) ByteLength added in v0.10.0

func (t TimeType) ByteLength() int

func (TimeType) Compile added in v0.10.0

func (t TimeType) Compile(tt time.Time, w io.Writer) error

func (TimeType) Parse added in v0.10.0

func (t TimeType) Parse(r io.Reader) (time.Time, error)

type Type

type Type[T any] interface {
	// ByteLength returns the byte length of this type, e.g. uint8=1,
	// uint16=2, uint32=4. For non-fixed types it returns the byte
	// length of the header.
	ByteLength() int

	ParserType[T]
	CompilerType[T]
}

func Blob

func Blob(header IntType) Type[[]byte]

func Bool added in v0.3.0

func Bool() Type[bool]

func Byte added in v0.5.0

func Byte() Type[byte]

func Float32 added in v0.9.0

func Float32(order binary.ByteOrder) Type[float32]

func Float32BE added in v0.9.0

func Float32BE() Type[float32]

func Float32LE added in v0.9.0

func Float32LE() Type[float32]

func Float64 added in v0.9.0

func Float64(order binary.ByteOrder) Type[float64]

func Float64BE added in v0.9.0

func Float64BE() Type[float64]

func Float64LE added in v0.9.0

func Float64LE() Type[float64]

func Int added in v0.5.0

func Int(order binary.ByteOrder) Type[int]

func Int8 added in v0.5.0

func Int8() Type[int8]

func Int8Header added in v0.5.0

func Int8Header() Type[int]

func Int16 added in v0.5.0

func Int16(order binary.ByteOrder) Type[int16]

func Int16BE added in v0.5.0

func Int16BE() Type[int16]

func Int16BEHeader added in v0.5.0

func Int16BEHeader() Type[int]

func Int16Header added in v0.5.0

func Int16Header(order binary.ByteOrder) Type[int]

func Int16LE added in v0.5.0

func Int16LE() Type[int16]

func Int16LEHeader added in v0.5.0

func Int16LEHeader() Type[int]

func Int32 added in v0.5.0

func Int32(order binary.ByteOrder) Type[int32]

func Int32BE added in v0.5.0

func Int32BE() Type[int32]

func Int32BEHeader added in v0.5.0

func Int32BEHeader() Type[int]

func Int32Header added in v0.5.0

func Int32Header(order binary.ByteOrder) Type[int]

func Int32LE added in v0.5.0

func Int32LE() Type[int32]

func Int32LEHeader added in v0.5.0

func Int32LEHeader() Type[int]

func Int64 added in v0.5.0

func Int64(order binary.ByteOrder) Type[int64]

func Int64BE added in v0.5.0

func Int64BE() Type[int64]

func Int64BEHeader added in v0.5.0

func Int64BEHeader() Type[int]

func Int64Header added in v0.5.0

func Int64Header(order binary.ByteOrder) Type[int]

func Int64LE added in v0.5.0

func Int64LE() Type[int64]

func Int64LEHeader added in v0.5.0

func Int64LEHeader() Type[int]

func IntBE added in v0.5.0

func IntBE() Type[int]

func IntBEHeader added in v0.5.0

func IntBEHeader() Type[int]

func IntHeader added in v0.5.0

func IntHeader(order binary.ByteOrder) Type[int]

func IntLE added in v0.5.0

func IntLE() Type[int]

func IntLEHeader added in v0.5.0

func IntLEHeader() Type[int]

func LongText added in v0.5.0

func LongText(order binary.ByteOrder) Type[string]

func NewFixedType

func NewFixedType[T any](
	byteLength int,
	parserFunc ParserFunc[T],
	compilerFunc CompilerFunc[T],
) Type[T]

func NewVarcharType

func NewVarcharType(header IntType) Type[string]

func SkipType

func SkipType(pad int) Type[any]

func SkipTypeFactory

func SkipTypeFactory(pad int) Type[any]

func SmallVarchar

func SmallVarchar() Type[string]

func String

func String(header IntType) Type[string]

func Text added in v0.5.0

func Text(order binary.ByteOrder) Type[string]

func TimeLocation added in v0.10.0

func TimeLocation() Type[time.Time]

func TimeUTC added in v0.10.0

func TimeUTC() Type[time.Time]

func UInt added in v0.5.0

func UInt(order binary.ByteOrder) Type[uint]

func UInt8

func UInt8() Type[uint8]

func UInt8Header

func UInt8Header() Type[int]

func UInt16

func UInt16(order binary.ByteOrder) Type[uint16]

func UInt16BE

func UInt16BE() Type[uint16]

func UInt16Header

func UInt16Header(order binary.ByteOrder) Type[int]

func UInt16HeaderBE

func UInt16HeaderBE() Type[int]

func UInt16HeaderLE

func UInt16HeaderLE() Type[int]

func UInt16LE

func UInt16LE() Type[uint16]

func UInt32 added in v0.5.0

func UInt32(order binary.ByteOrder) Type[uint32]

func UInt32BE added in v0.5.0

func UInt32BE() Type[uint32]

func UInt32BEHeader added in v0.5.0

func UInt32BEHeader() Type[int]

func UInt32Header added in v0.5.0

func UInt32Header(order binary.ByteOrder) Type[int]

func UInt32LE added in v0.5.0

func UInt32LE() Type[uint32]

func UInt32LEHeader added in v0.5.0

func UInt32LEHeader() Type[int]

func UInt64 added in v0.5.0

func UInt64(order binary.ByteOrder) Type[uint64]

func UInt64BE added in v0.5.0

func UInt64BE() Type[uint64]

func UInt64BEHeader added in v0.5.0

func UInt64BEHeader() Type[int]

func UInt64Header added in v0.5.0

func UInt64Header(order binary.ByteOrder) Type[int]

func UInt64LE added in v0.5.0

func UInt64LE() Type[uint64]

func UInt64LEHeader added in v0.5.0

func UInt64LEHeader() Type[int]

func UIntBE added in v0.5.0

func UIntBE() Type[uint]

func UIntBEHeader added in v0.5.0

func UIntBEHeader() Type[int]

func UIntHeader added in v0.5.0

func UIntHeader(order binary.ByteOrder) Type[int]

func UIntLE added in v0.5.0

func UIntLE() Type[uint]

func UIntLEHeader added in v0.5.0

func UIntLEHeader() Type[int]

func Varchar

func Varchar() Type[string]

func VarcharOrder

func VarcharOrder(order binary.ByteOrder) Type[string]
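Beyond the built-in constructors above, NewFixedType builds a custom fixed-width Type from a ParserFunc/CompilerFunc pair. The sketch below writes such a pair for a hypothetical 3-byte RGB color, using the documented function shapes as local stand-ins (it does not import parco, since the module path is not shown here):

```go
package main

import "fmt"

// The documented function shapes, declared locally for the sketch:
type ParserFunc[T any] func([]byte) (T, error)
type CompilerFunc[T any] func(data T, box []byte) error

// RGB packs into exactly 3 bytes, so with the real library it would
// suit a call like NewFixedType[RGB](3, parseRGB, compileRGB).
type RGB struct{ R, G, B uint8 }

func parseRGB(data []byte) (RGB, error) {
	if len(data) < 3 {
		return RGB{}, fmt.Errorf("want 3 bytes, have %d", len(data))
	}
	return RGB{data[0], data[1], data[2]}, nil
}

func compileRGB(c RGB, box []byte) error {
	if len(box) < 3 {
		return fmt.Errorf("want 3 bytes, have %d", len(box))
	}
	box[0], box[1], box[2] = c.R, c.G, c.B
	return nil
}

func main() {
	// Compile-time check that the pair matches the documented shapes.
	var _ ParserFunc[RGB] = parseRGB
	var _ CompilerFunc[RGB] = compileRGB

	box := make([]byte, 3)
	_ = compileRGB(RGB{255, 128, 0}, box)
	c, _ := parseRGB(box)
	fmt.Println(c)
}
```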

Directories

Path                Synopsis
examples
    builder         command
    compiler        command
    parser          command
    registry        command
