Documentation ¶
Index ¶
- Constants
- Variables
- func Cleanup(objC obj.Client, chunks *Storage)
- func CopyN(w *Writer, r *Reader, n int64) error
- func RandSeq(n int) []byte
- type Chunk
- func (*Chunk) Descriptor() ([]byte, []int)
- func (m *Chunk) GetHash() string
- func (m *Chunk) Marshal() (dAtA []byte, err error)
- func (m *Chunk) MarshalTo(dAtA []byte) (int, error)
- func (*Chunk) ProtoMessage()
- func (m *Chunk) Reset()
- func (m *Chunk) Size() (n int)
- func (m *Chunk) String() string
- func (m *Chunk) Unmarshal(dAtA []byte) error
- func (m *Chunk) XXX_DiscardUnknown()
- func (m *Chunk) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
- func (m *Chunk) XXX_Merge(src proto.Message)
- func (m *Chunk) XXX_Size() int
- func (m *Chunk) XXX_Unmarshal(b []byte) error
- type DataRef
- func (*DataRef) Descriptor() ([]byte, []int)
- func (m *DataRef) GetChunk() *Chunk
- func (m *DataRef) GetHash() string
- func (m *DataRef) GetOffsetBytes() int64
- func (m *DataRef) GetSizeBytes() int64
- func (m *DataRef) Marshal() (dAtA []byte, err error)
- func (m *DataRef) MarshalTo(dAtA []byte) (int, error)
- func (*DataRef) ProtoMessage()
- func (m *DataRef) Reset()
- func (m *DataRef) Size() (n int)
- func (m *DataRef) String() string
- func (m *DataRef) Unmarshal(dAtA []byte) error
- func (m *DataRef) XXX_DiscardUnknown()
- func (m *DataRef) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
- func (m *DataRef) XXX_Merge(src proto.Message)
- func (m *DataRef) XXX_Size() int
- func (m *DataRef) XXX_Unmarshal(b []byte) error
- type Reader
- type Storage
- type Writer
Constants ¶
const (
    // MB is Megabytes.
    MB = 1024 * 1024
    // AverageBits determines the average chunk size (2^AverageBits).
    AverageBits = 23
    // WindowSize is the size of the rolling hash window.
    WindowSize = 64
)
Variables ¶
var (
    ErrInvalidLengthChunk = fmt.Errorf("proto: negative length found during unmarshaling")
    ErrIntOverflowChunk   = fmt.Errorf("proto: integer overflow")
)
Functions ¶
func Cleanup ¶
func Cleanup(objC obj.Client, chunks *Storage)
func CopyN ¶
func CopyN(w *Writer, r *Reader, n int64) error
func RandSeq ¶
func RandSeq(n int) []byte
Types ¶
type Chunk ¶
type Chunk struct {
    Hash                 string   `protobuf:"bytes,1,opt,name=hash,proto3" json:"hash,omitempty"`
    XXX_NoUnkeyedLiteral struct{} `json:"-"`
    XXX_unrecognized     []byte   `json:"-"`
    XXX_sizecache        int32    `json:"-"`
}
func (*Chunk) Descriptor ¶
func (*Chunk) Descriptor() ([]byte, []int)
func (*Chunk) ProtoMessage ¶
func (*Chunk) ProtoMessage()
func (*Chunk) XXX_DiscardUnknown ¶
func (m *Chunk) XXX_DiscardUnknown()
func (*Chunk) XXX_Marshal ¶
func (m *Chunk) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
func (*Chunk) XXX_Unmarshal ¶
func (m *Chunk) XXX_Unmarshal(b []byte) error
type DataRef ¶
type DataRef struct {
    // The chunk the referenced data is located in.
    Chunk *Chunk `protobuf:"bytes,1,opt,name=chunk,proto3" json:"chunk,omitempty"`
    // The hash of the data being referenced.
    // This field is empty when it is equal to the chunk hash (the ref is the whole chunk).
    Hash string `protobuf:"bytes,2,opt,name=hash,proto3" json:"hash,omitempty"`
    // The offset and size used for accessing the data within the chunk.
    OffsetBytes          int64    `protobuf:"varint,3,opt,name=offset_bytes,json=offsetBytes,proto3" json:"offset_bytes,omitempty"`
    SizeBytes            int64    `protobuf:"varint,4,opt,name=size_bytes,json=sizeBytes,proto3" json:"size_bytes,omitempty"`
    XXX_NoUnkeyedLiteral struct{} `json:"-"`
    XXX_unrecognized     []byte   `json:"-"`
    XXX_sizecache        int32    `json:"-"`
}
DataRef is a reference to data within a chunk.
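To make the fields concrete, here is a hypothetical helper (not part of this package) showing how the offset/size pair addresses a sub-slice of a fetched chunk's contents:

// dataFromChunk returns the bytes a DataRef points at, given the full
// contents of its chunk. Hypothetical helper for illustration only.
func dataFromChunk(ref *DataRef, chunkBytes []byte) []byte {
    return chunkBytes[ref.OffsetBytes : ref.OffsetBytes+ref.SizeBytes]
}

When Hash is empty, the ref covers the whole chunk, so OffsetBytes is 0 and SizeBytes equals the chunk length.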
func (*DataRef) Descriptor ¶
func (*DataRef) Descriptor() ([]byte, []int)
func (*DataRef) GetOffsetBytes ¶
func (m *DataRef) GetOffsetBytes() int64
func (*DataRef) GetSizeBytes ¶
func (m *DataRef) GetSizeBytes() int64
func (*DataRef) ProtoMessage ¶
func (*DataRef) ProtoMessage()
func (*DataRef) XXX_DiscardUnknown ¶
func (m *DataRef) XXX_DiscardUnknown()
func (*DataRef) XXX_Marshal ¶
func (m *DataRef) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
func (*DataRef) XXX_Unmarshal ¶
func (m *DataRef) XXX_Unmarshal(b []byte) error
type Reader ¶
type Reader struct {
// contains filtered or unexported fields
}
Reader reads a set of DataRefs from chunk storage.
func (*Reader) Close ¶
Close closes the reader. Currently a no-op, but will be used when streaming is implemented.
type Storage ¶
type Storage struct {
// contains filtered or unexported fields
}
Storage is the abstraction that manages chunk storage.
func LocalStorage ¶
LocalStorage creates a local chunk storage instance. Useful for storage layer tests.
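A hedged sketch of using this in a test together with Cleanup (Cleanup's signature comes from the index above; LocalStorage's parameters and return values are assumptions for illustration):

import "testing"

func TestChunkStorage(t *testing.T) {
    // Assumption: LocalStorage takes the testing handle and returns the
    // object client and chunk storage that Cleanup expects; the actual
    // signature is not shown in this documentation.
    objC, chunks := LocalStorage(t)
    defer Cleanup(objC, chunks)
    // ... exercise writes and reads against chunks here ...
}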
func (*Storage) NewReader ¶
NewReader creates an io.ReadCloser for a chunk. (bryce) The whole chunk is in memory right now, which could be a problem under concurrency, particularly during the merge process. It may make sense to handle concurrency here (pass in multiple data refs).
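A hedged usage sketch; the hash argument and error return of NewReader are assumptions here, since only the io.ReadCloser behavior is documented above:

import (
    "context"
    "io/ioutil"
)

// readChunk reads an entire chunk back from storage.
func readChunk(ctx context.Context, chunks *Storage, hash string) ([]byte, error) {
    // Assumption: NewReader takes a context and the chunk hash; the real
    // parameters may differ.
    rc, err := chunks.NewReader(ctx, hash)
    if err != nil {
        return nil, err
    }
    defer rc.Close()
    // Reading it all at once mirrors the current in-memory behavior
    // noted above.
    return ioutil.ReadAll(rc)
}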
type Writer ¶
type Writer struct {
// contains filtered or unexported fields
}
Writer splits a byte stream into content-defined chunks that are hashed and deduplicated/uploaded to object storage. Chunk split points are determined by a bit pattern in a rolling hash function (buzhash64 at https://github.com/chmduquesne/rollinghash); a minimal sketch of this split mechanism follows these notes.
(bryce) The chunking/hashing/uploading could be made concurrent by reading ahead a certain amount and splitting the data among chunking/hashing/uploading workers in a circular array, where the first identified chunk (or the whole piece, if there is no chunk split point) in a worker is appended to the prior worker's data. This would handle chunk splits that show up when the rolling hash window spans the data splits. The callback would still be executed sequentially so that the order is correct for the file index.
- An improvement would be to append only WindowSize bytes to the prior worker's data and then stitch together the correct chunks; it doesn't make sense to roll the window over the same data twice.
(bryce) Have someone else double-check the hash resetting strategy. It should be fine in terms of consistent hashing, but it may result in slightly less deduplication across different files. This resetting strategy makes it possible to avoid reading the tail end of a copied chunk to get the correct hasher state for the following data (if it is not another copied chunk).
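To make the split mechanism concrete, here is a minimal, self-contained sketch of content-defined chunking with buzhash64. The exact split predicate (low bits of the hash all set) and the absence of hasher resets are assumptions for illustration; the package's real boundary condition and reset strategy may differ:

package main

import (
    "fmt"
    "math/rand"

    "github.com/chmduquesne/rollinghash/buzhash64"
)

const (
    averageBits = 23 // mirrors AverageBits: expected chunk size of 2^23 bytes (8 MB)
    windowSize  = 64 // mirrors WindowSize: bytes covered by the rolling hash window
)

// split cuts data into content-defined chunks: a 64-byte buzhash window
// rolls over the stream, and a boundary is declared whenever the low
// averageBits bits of the hash are all ones (an assumed predicate), so
// boundaries depend only on local content, not on write offsets.
func split(data []byte) [][]byte {
    if len(data) <= windowSize {
        return [][]byte{data}
    }
    h := buzhash64.New()
    h.Write(data[:windowSize]) // prime the window before rolling
    mask := uint64(1)<<averageBits - 1
    var chunks [][]byte
    start := 0
    for i := windowSize; i < len(data); i++ {
        h.Roll(data[i])
        if h.Sum64()&mask == mask {
            chunks = append(chunks, data[start:i+1])
            start = i + 1
        }
    }
    if start < len(data) {
        chunks = append(chunks, data[start:])
    }
    return chunks
}

func main() {
    data := make([]byte, 32*1024*1024) // 32 MB of noise
    rand.Read(data)                    // math/rand: deterministic without a seed
    fmt.Println("chunks:", len(split(data))) // expect a handful at an 8 MB average
}

With a mask of 2^23-1, a boundary fires with probability 2^-23 per byte, which gives the 8 MB expected chunk size described by the AverageBits comment, and an insertion early in the stream shifts boundaries only locally.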
func (*Writer) Close ¶
Close closes the writer, flushing the remaining bytes to a chunk and finishing the final range.
func (*Writer) RangeCount ¶
RangeCount returns the number of ranges associated with the writer.
func (*Writer) StartRange ¶
StartRange specifies the start of a range within the byte stream that is meaningful to the caller. When this range has ended (by calling StartRange again or Close) and all of the necessary chunks are written, the callback given during initialization will be called with DataRefs that can be used for accessing that range.
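A hedged sketch of the intended call pattern. The constructor, the Write method, and the callback signature are assumptions for illustration (and it assumes import "fmt"); only StartRange, Close, and the DataRef callback behavior are documented above:

// writeTwoRanges writes two caller-meaningful ranges (for example, two
// files) through one Writer. Hypothetical constructor and callback types.
func writeTwoRanges(chunks *Storage, fileA, fileB []byte) error {
    w := chunks.NewWriter(func(dataRefs []*DataRef) error {
        // Invoked once per finished range, in order, with the DataRefs
        // that can later be used to read that range back.
        fmt.Println("range finished with", len(dataRefs), "data refs")
        return nil
    })
    w.StartRange() // begin the first range (assumed zero-argument here)
    if _, err := w.Write(fileA); err != nil {
        return err
    }
    w.StartRange() // ends the first range, begins the second
    if _, err := w.Write(fileB); err != nil {
        return err
    }
    return w.Close() // flushes the tail chunk and finishes the final range
}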