
Package deprecated

v0.0.9
Published: Jun 24, 2020 | License: Apache-2.0 | Module: github.com/grailbio/base

Variables

var (
	// MaxPackedItems defines the max items that can be
	// packed into a single record by a PackedWriter.
	MaxPackedItems = uint32(10 * 1024 * 1024)
	// DefaultPackedItems defines the default number of items that can
	// be packed into a single record by a PackedWriter.
	DefaultPackedItems = uint32(16 * 1024)
	// DefaultPackedBytes defines the default number of bytes that can
	// be packed into a single record by a PackedWriter.
	DefaultPackedBytes = uint32(16 * 1024 * 1024)
)

func NewRangeReader

func NewRangeReader(rs io.ReadSeeker, offset, length int64) (io.ReadSeeker, error)

NewRangeReader returns a new RangeReader.

type ItemIndexFunc

type ItemIndexFunc func(itemOffset, itemLength uint64, v interface{}, p []byte) error

ItemIndexFunc is called every time an item is added to a record.

type LegacyPackedScanner

type LegacyPackedScanner interface {
	LegacyScanner
}

LegacyPackedScanner represents an interface that can be used to read items from a recordio file written using a LegacyPackedWriter.

type LegacyPackedScannerOpts

type LegacyPackedScannerOpts struct {
	LegacyScannerOpts

	// Transform is called on the data read from a record to reverse any
	// transformations performed when creating the record. It is intended
	// for decompression, decryption etc.
	Transform func(scratch, in []byte) (out []byte, err error)
}

LegacyPackedScannerOpts represents the options to NewLegacyPackedScanner.

type LegacyPackedWriter

type LegacyPackedWriter interface {
	// Write writes a []byte record to the supplied writer. Each call to Write
	// results in a new record being written.
	// Calls to Write and Marshal may be interspersed.
	io.Writer

	// Marshal marshals an object prior to writing it to the underlying
	// recordio stream.
	Marshal(v interface{}) (n int, err error)

	// Flush is called to write any currently buffered data to the current
	// record. A subsequent write will result in a new record being
	// written. Flush must be called to ensure that the last record is
	// completely written.
	Flush() error
}

LegacyPackedWriter represents an interface that can be used to write multiple items to the same recordio record.

func NewLegacyPackedWriter

func NewLegacyPackedWriter(wr io.Writer, opts LegacyPackedWriterOpts) LegacyPackedWriter

NewLegacyPackedWriter is deprecated. Use NewWriterV2 instead.

NewLegacyPackedWriter returns a writer that will pack up to MaxItems items or MaxBytes bytes, whichever comes first, into a single write to the underlying recordio stream. Callers to Write must guarantee that they will not modify the buffers passed as arguments, since Write does not make an internal copy until the buffered data is written. A caller can count items/bytes or provide a Flushed callback to determine when it is safe to reuse any storage. This scheme avoids an unnecessary copy for []byte writes; most implementations of Marshal will in any case create a new buffer to store the marshaled data.

type LegacyPackedWriterOpts

type LegacyPackedWriterOpts struct {
	// Marshal is called to marshal an object to a byte slice.
	Marshal MarshalFunc

	// Index is called whenever a new record is written.
	Index RecordIndex

	// Flushed is called whenever a record is written.
	Flushed func() error

	// Transform is called when buffered data is about to be written to a record.
	// It is intended for implementing data transformations such as compression
	// and/or encryption. The Transform function specified here must be
	// reversible by the Transform function in the Scanner.
	Transform func(in [][]byte) (buf []byte, err error)

	// MaxItems is the maximum number of items to pack into a single record.
	// It defaults to DefaultPackedItems if set to 0.
	// If MaxItems exceeds MaxPackedItems it will be silently set to MaxPackedItems.
	MaxItems uint32

	// MaxBytes is the maximum number of bytes to pack into a single record.
	// It defaults to DefaultPackedBytes if set to 0.
	MaxBytes uint32
}

LegacyPackedWriterOpts represents the options to NewLegacyPackedWriter.

type LegacyScanner

type LegacyScanner interface {
	// Reset is equivalent to creating a new scanner, but it retains underlying
	// storage. So it is more efficient than NewScanner. Err is reset to nil. Scan
	// and Bytes will read from rd.
	Reset(rd io.Reader)

	// Scan returns true if a new record was read, false otherwise. It will return
	// false on encountering an error; the error may be retrieved using the Err
	// method. Note, that Scan will reuse storage from one invocation to the next.
	Scan() bool

	// Bytes returns the current record as read by a prior call to Scan. It may
	// always be called.
	Bytes() []byte

	// Err returns the first error encountered.
	Err() error

	// Unmarshal unmarshals the raw bytes using a preconfigured UnmarshalFunc.
	// It will return an error if there is no preconfigured UnmarshalFunc.
	// Calls to Bytes and Unmarshal may be interspersed.
	Unmarshal(data interface{}) error
}

LegacyScanner is the interface for reading recordio files as streams of typed records. Each record is available as both raw bytes and as type via Unmarshal.

func NewLegacyPackedScanner

func NewLegacyPackedScanner(rd io.Reader, opts LegacyPackedScannerOpts) LegacyScanner

NewLegacyPackedScanner is deprecated. Use NewScannerV2 instead.

func NewLegacyScanner

func NewLegacyScanner(rd io.Reader, opts LegacyScannerOpts) LegacyScanner

NewLegacyScanner is deprecated. Use NewScannerV2 instead.

type LegacyScannerImpl

type LegacyScannerImpl struct {
	// contains filtered or unexported fields
}

LegacyScannerImpl implements a scanner for the recordio format.

func (*LegacyScannerImpl) Bytes

func (s *LegacyScannerImpl) Bytes() []byte

Bytes implements LegacyScanner.Bytes.

func (*LegacyScannerImpl) Err

func (s *LegacyScannerImpl) Err() error

Err implements LegacyScanner.Err.

func (*LegacyScannerImpl) InternalScan

func (s *LegacyScannerImpl) InternalScan() (internal.MagicBytes, bool)

func (*LegacyScannerImpl) Reset

func (s *LegacyScannerImpl) Reset(rd io.Reader)

Reset implements LegacyScanner.Reset.

func (*LegacyScannerImpl) Scan

func (s *LegacyScannerImpl) Scan() bool

Scan implements LegacyScanner.Scan.

func (*LegacyScannerImpl) Unmarshal

func (s *LegacyScannerImpl) Unmarshal(v interface{}) error

Unmarshal implements LegacyScanner.Unmarshal.

type LegacyScannerOpts

type LegacyScannerOpts struct {
	// Unmarshal is called to unmarshal an object from the supplied byte slice.
	Unmarshal UnmarshalFunc
}

LegacyScannerOpts represents the options accepted by NewLegacyScanner.

type LegacyWriter

type LegacyWriter interface {
	// Write writes a []byte record to the supplied writer. Each call to Write
	// results in a new record being written.
	// Calls to Write and Marshal may be interspersed.
	io.Writer

	// WriteSlices writes out the supplied slices as a single record; it
	// is intended to avoid having to copy slices into a single slice purely
	// to write them out as a single record.
	WriteSlices(hdr []byte, bufs ...[]byte) (n int, err error)

	// Marshal writes a record using a preconfigured MarshalFunc to the supplied
	// writer. Each call to Marshal results in a new record being written.
	// Calls to Write and Marshal may be interspersed.
	Marshal(v interface{}) (n int, err error)
}

LegacyWriter is the interface for writing recordio files as streams of typed records.

func NewLegacyWriter

func NewLegacyWriter(wr io.Writer, opts LegacyWriterOpts) LegacyWriter

NewLegacyWriter is deprecated. Use NewWriterV2 instead.

type LegacyWriterOpts

type LegacyWriterOpts struct {
	// Marshal is called to marshal an object to a byte slice.
	Marshal MarshalFunc

	// Index is called to enable generating an index for a recordio file; it is
	// called whenever a new record or item is written as per the package
	// level comments.
	Index func(offset, length uint64, v interface{}, p []byte) error
}

LegacyWriterOpts represents the options accepted by NewLegacyWriter.

type MarshalFunc

type MarshalFunc func(scratch []byte, v interface{}) ([]byte, error)

type ObjectPacker

type ObjectPacker struct {
	// contains filtered or unexported fields
}

ObjectPacker marshals and buffers objects using a Packer according to the recordio packed record format. It is intended to enable concurrent writing via a ConcurrentPackedWriter. The objects are intended to be recovered using Unpacker and then unmarshaling the byte slices it returns.

func NewObjectPacker

func NewObjectPacker(objects []interface{}, fn MarshalFunc, opts ObjectPackerOpts) *ObjectPacker

NewObjectPacker creates a new ObjectPacker. The objects slice must be large enough to store all of the objects to be marshaled.

func (*ObjectPacker) Contents

func (mp *ObjectPacker) Contents() ([]interface{}, *Packer)

Contents returns the current object contents of the packer and the Packer that can be used to serialize its contents.

func (*ObjectPacker) Marshal

func (mp *ObjectPacker) Marshal(v interface{}) error

Marshal marshals and buffers the supplied object.

type ObjectPackerOpts

type ObjectPackerOpts struct {
	PackerOpts
}

ObjectPackerOpts represents the options for NewObjectPacker.

type Packer

type Packer struct {
	// contains filtered or unexported fields
}

Packer buffers and packs multiple buffers according to the packed recordio format. It is used to implement PackedWriter and to enable concurrent writing via a ConcurrentPackedWriter.

func NewPacker

func NewPacker(opts PackerOpts) *Packer

NewPacker creates a new Packer.

func (*Packer) Pack

func (pbw *Packer) Pack() (hdr []byte, dataSize int, buffers [][]byte, err error)

Pack packs the stored buffers according to the recordio packed record format and resets internal state in preparation for being reused. The packed record is returned as the hdr and buffers results; dataSize is the sum of the bytes in all of the buffers.

func (*Packer) Stored

func (pbw *Packer) Stored() (numItems, numBytes int)

Stored returns the number of buffers and bytes currently stored in the Packer.

func (*Packer) Write

func (pbw *Packer) Write(p []byte) (int, error)

Write implements io.Writer.

type PackerOpts

type PackerOpts struct {
	// Buffers will be used to accumulate the buffers written to the packer
	// rather than the Packer allocating its own storage. However, this
	// supplied slice will be grown when its capacity is exceeded, in which case
	// the underlying array used by the caller will no longer store the
	// buffers being packed. It is left to the caller to allocate a buffer
	// large enough to contain the number of buffers that it writes.
	Buffers [][]byte

	// Transform is called when buffered data is about to be written to a record.
	// It is intended for implementing data transformations such as compression
	// and/or encryption. The Transform function specified here must be
	// reversible by the Transform function in the Scanner.
	Transform func(in [][]byte) (buf []byte, err error)
}

PackerOpts represents the options accepted by NewPacker.

type RangeReader

type RangeReader struct {
	// contains filtered or unexported fields
}

RangeReader represents an io.ReadSeeker that operates over a restricted range of the supplied ReadSeeker. Reading and Seeking are consequently relative to the start of the range used to create the RangeReader.

func (*RangeReader) Read

func (rr *RangeReader) Read(buf []byte) (int, error)

Read implements io.Reader.

func (*RangeReader) Seek

func (rr *RangeReader) Seek(offset int64, whence int) (int64, error)

Seek implements io.Seeker.

type RecordIndex

type RecordIndex func(recordOffset, recordLength, nitems uint64) (ItemIndexFunc, error)

RecordIndex is called every time a new record is added to a stream. It is called with the offset and size of the record, and the number of items being written to the record. It can optionally return a function that will subsequently be called for each item written to this record. This makes it possible to ensure that all calls to index the items in a single record are handled by the same function and object, and hence to index records concurrently.

type UnmarshalFunc

type UnmarshalFunc func(data []byte, v interface{}) error

type Unpacker

type Unpacker struct {
	// contains filtered or unexported fields
}

Unpacker unpacks the format created by Packer.

func NewUnpacker

func NewUnpacker(opts UnpackerOpts) *Unpacker

NewUnpacker creates a new Unpacker.

func (*Unpacker) Unpack

func (up *Unpacker) Unpack(buf []byte) ([][]byte, error)

Unpack unpacks the buffers serialized in buf according to the recordio packed format. The slices it returns point to the bytes stored in the supplied buffer.

type UnpackerOpts

type UnpackerOpts struct {
	// Buffers is used in the same way as by the Packer and PackerOpts.
	Buffers [][]byte

	// Transform is called on the data read from a record to reverse any
	// transformations performed when creating the record. It is intended
	// for decompression, decryption etc.
	Transform func(scratch, in []byte) (out []byte, err error)
}

UnpackerOpts represents the options accepted by NewUnpacker.

Documentation was rendered with GOOS=linux and GOARCH=amd64.
