bstore

package
v0.0.0-...-a2e7767
Published: Nov 30, 2020 License: GPL-3.0 Imports: 16 Imported by: 0

Documentation

Index

Constants

const ABSZERO = 1

const FULLZERO = 2

const LatestGeneration = uint64(^(uint64(0)))

const (
	RELOCATION_BASE = 0xFF00000000000000
)

Note to self, if you bump VSIZE such that the max blob goes past 2^16, make sure to adapt providers

const VALUE = 0

These functions allow us to read/write the packed numbers in the datablocks. These are huffman encoded in big endian:

	pattern      extra bytes  payload bits  prefix
	0xxx xxxx    +0            7            0x00
	10xx xxxx    +1           14            0x80
	1100 xxxx    +2           20            0xC0
	1101 xxxx    +3           28            0xD0
	1110 xxxx    +4           36            0xE0
	1111 00xx    +5           42            0xF0
	1111 01xx    +6           50            0xF4
	1111 10xx    +7           58            0xF8
	1111 1100    +8           64            0xFC
	1111 1101    +0           ABSZERO  (special symbol)  0xFD
	1111 1110    +0           FULLZERO (special symbol)  0xFE
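
As an illustration of the scheme above, a big-endian prefix writer could look roughly like the following. This is a hypothetical sketch, not the package's encoder: writePacked and its class table are made up for the example, and the ABSZERO/FULLZERO special symbols are not handled.

	// writePacked appends val to dst using the prefix classes listed above.
	// Hypothetical sketch; the special symbols 0xFD/0xFE are not emitted.
	func writePacked(dst []byte, val uint64) []byte {
		classes := []struct {
			prefix     byte // leading byte with payload bits cleared
			prefixBits uint // payload bits carried in the leading byte
			extra      int  // additional big-endian payload bytes
		}{
			{0x00, 7, 0}, {0x80, 6, 1}, {0xC0, 4, 2}, {0xD0, 4, 3},
			{0xE0, 4, 4}, {0xF0, 2, 5}, {0xF4, 2, 6}, {0xF8, 2, 7},
			{0xFC, 0, 8},
		}
		for _, c := range classes {
			totalBits := c.prefixBits + uint(c.extra)*8
			if totalBits >= 64 || val < uint64(1)<<totalBits {
				// top bits of the value share the prefix byte
				out := append(dst, c.prefix|byte(val>>(uint(c.extra)*8)))
				// remaining bytes follow in big-endian order
				for i := c.extra - 1; i >= 0; i-- {
					out = append(out, byte(val>>(uint(i)*8)))
				}
				return out
			}
		}
		panic("unreachable")
	}

For example, writePacked(nil, 300) yields 0x81 0x2C: the top 6 payload bits go into the 10xx xxxx prefix byte and the remaining 8 bits follow as one extra byte.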

Variables

var ErrDatablockNotFound = errors.New("Coreblock not found")

var ErrGenerationNotFound = errors.New("Generation not found")

Functions

func CreateDatabase

func CreateDatabase(params map[string]string)

func LinkAndStore

func LinkAndStore(uuid []byte, bs *BlockStore, bp bprovider.StorageProvider, vblocks []*Vectorblock, cblocks []*Coreblock) map[uint64]uint64

Returns a translation table mapping the previous virtual addresses to their actual physical addresses.

func UUIDToMapKey

func UUIDToMapKey(id uuid.UUID) [16]byte

Converts the uuid into a byte array.

Types

type BlockStore

type BlockStore struct {
	// contains filtered or unexported fields
}

func NewBlockStore

func NewBlockStore(params map[string]string) (*BlockStore, error)

func (*BlockStore) FreeCoreblock

func (bs *BlockStore) FreeCoreblock(cb **Coreblock)

func (*BlockStore) FreeVectorblock

func (bs *BlockStore) FreeVectorblock(vb **Vectorblock)

func (*BlockStore) GetCacheMaxSize

func (bs *BlockStore) GetCacheMaxSize() uint64

func (*BlockStore) LoadSuperblock

func (bs *BlockStore) LoadSuperblock(id uuid.UUID, generation uint64) *Superblock

Loads the superblock from the storage engine.

func (*BlockStore) ObtainGeneration

func (bs *BlockStore) ObtainGeneration(id uuid.UUID) *Generation

This obtains a generation, blocking if necessary

func (*BlockStore) ReadDatablock

func (bs *BlockStore) ReadDatablock(uuid uuid.UUID, addr uint64, impl_Generation uint64, impl_Pointwidth uint8, impl_StartTime int64, k int, v int) Datablock

type BlockType

type BlockType uint64
const (
	Vector BlockType = 1
	Core   BlockType = 2
	Bad    BlockType = 255
)

func DatablockGetBufferType

func DatablockGetBufferType(buf []byte) BlockType
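
For illustration only, a reader can branch on the block type either by calling DatablockGetBufferType on a raw buffer or, as in this hypothetical sketch, by type-switching on the Datablock returned by ReadDatablock; bs, id, addr, gen, pw, start, k and v are assumed to be known (typically from the superblock and the parent block).

	db := bs.ReadDatablock(id, addr, gen, pw, start, k, v)
	switch blk := db.(type) {
	case *Coreblock:
		_ = blk.Addr // descend further using the child addresses
	case *Vectorblock:
		_ = blk.Time // leaf: read the raw points
	default:
		// DatablockGetBufferType would have reported Bad for an
		// unrecognised buffer
	}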

type CacheItem

type CacheItem struct {
	// contains filtered or unexported fields
}

type Coreblock

type Coreblock struct {
	//Metadata, not copied
	Identifier uint64 "metadata,implicit"
	Generation uint64 "metadata,implicit"

	//Payload, copied
	PointWidth  uint8 "implicit"
	StartTime   int64 "implicit"
	Addr        []uint64
	Count       []uint64
	Min         []float64
	Mean        []float64
	Max         []float64
	CGeneration []uint64
}

func (*Coreblock) CopyInto

func (src *Coreblock) CopyInto(dst *Coreblock)

Copy a core block, only copying the payload, not the metadata

func (*Coreblock) Deserialize

func (c *Coreblock) Deserialize(src []byte)

func (*Coreblock) GetDatablockType

func (*Coreblock) GetDatablockType() BlockType

func (*Coreblock) Serialize

func (c *Coreblock) Serialize(dst []byte) []byte

type Datablock

type Datablock interface {
	GetDatablockType() BlockType
}

type Generation

type Generation struct {
	// State for a single update pass: all the allocated datablocks, the newly
	// created superblock, and so on; flushed to the underlying storage during
	// the Link phase.
	Cur_SB *Superblock
	New_SB *Superblock
	// contains filtered or unexported fields
}

A generation stores all the information acquired during a write pass. A superblock contains all the information required to navigate a tree.

func (*Generation) AllocateCoreblock

func (gen *Generation) AllocateCoreblock() (*Coreblock, error)

The real function is supposed to allocate an address for the data block, reserving it on disk, and then give back the data block that can be filled in. This stub makes up an address, and mongo pretends it's real.

func (*Generation) AllocateVectorblock

func (gen *Generation) AllocateVectorblock() (*Vectorblock, error)

func (*Generation) Commit

func (gen *Generation) Commit() (map[uint64]uint64, error)

The returned address map is primarily for unit testing
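
Taken together with ObtainGeneration and the Allocate* calls above, a single write pass might look roughly like the sketch below. It is an outline only, not the package's real insertion logic: error handling is minimal, the tree linking is skipped, the uuid import is assumed to match the one used by the package, and treating a fresh block's Identifier as its virtual address is an assumption made purely for the example.

	// writePass is a hypothetical outline of one write pass.
	func writePass(bs *BlockStore, id uuid.UUID, times []int64, values []float64) error {
		gen := bs.ObtainGeneration(id) // blocks if another writer holds the generation

		vb, err := gen.AllocateVectorblock()
		if err != nil {
			return err
		}
		vb.Len = uint16(len(times))
		vb.Time = append(vb.Time[:0], times...)
		vb.Value = append(vb.Value[:0], values...)

		// In the real tree the vectorblock would be linked in under coreblocks;
		// here the root is pointed straight at it for illustration (assumption:
		// Identifier doubles as the block's virtual address).
		gen.UpdateRootAddr(vb.Identifier)

		// Commit links the generation to storage and returns the
		// virtual -> physical address map (mainly useful for unit tests).
		_, err = gen.Commit()
		return err
	}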

func (*Generation) IsNewTS

func (g *Generation) IsNewTS() bool

func (*Generation) Number

func (g *Generation) Number() uint64

func (*Generation) UpdateRootAddr

func (g *Generation) UpdateRootAddr(addr uint64)

func (*Generation) Uuid

func (g *Generation) Uuid() *uuid.UUID

type Superblock

type Superblock struct {
	// contains filtered or unexported fields
}

func NewSuperblock

func NewSuperblock(id uuid.UUID) *Superblock

func (*Superblock) Clone

func (s *Superblock) Clone() *Superblock

func (*Superblock) Gen

func (s *Superblock) Gen() uint64

func (*Superblock) InitNewTS

func (s *Superblock) InitNewTS(K uint16, V uint32)

func (*Superblock) K

func (s *Superblock) K() uint16

func (*Superblock) Root

func (s *Superblock) Root() uint64

func (*Superblock) Unlinked

func (s *Superblock) Unlinked() bool

func (*Superblock) Uuid

func (s *Superblock) Uuid() uuid.UUID

func (*Superblock) V

func (s *Superblock) V() uint32

type Vectorblock

type Vectorblock struct {
	//Metadata, not copied on clone
	Identifier uint64 "metadata,implicit"
	Generation uint64 "metadata,implicit"

	//Payload, copied on clone
	Len        uint16
	PointWidth uint8 "implicit"
	StartTime  int64 "implicit"
	Time       []int64
	Value      []float64
}

The leaf datablock type. The tags allow unit tests to work out if clone / serdes are working properly: "metadata" is not copied when a node is cloned; "implicit" is not serialised.

func (*Vectorblock) CopyInto

func (src *Vectorblock) CopyInto(dst *Vectorblock)

func (*Vectorblock) Deserialize

func (v *Vectorblock) Deserialize(src []byte)

func (*Vectorblock) GetDatablockType

func (*Vectorblock) GetDatablockType() BlockType

func (*Vectorblock) Serialize

func (v *Vectorblock) Serialize(dst []byte) []byte

The current algorithm is as follows:

	entry 0:  absolute time and value
	entry 1:  delta time and value since entry 0
	entry 2:  delta since delta 1
	entry 3:  delta from average delta (1+2)
	entry 4+: delta from average delta (n-1, n-2, n-3)

Data written: 1. serialize the datablock type and the number of points; 2. write the first point (value, time).
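
As a rough illustration of that delta scheme, the residual for each entry of the Time array could be computed as below. This is a hypothetical helper, not the package's Serialize: the real code also handles the Value array, the packed-number encoding, and the special zero symbols.

	// residuals applies the delta scheme above to a time series.
	func residuals(times []int64) []int64 {
		out := make([]int64, len(times))
		var deltas []int64
		for i, t := range times {
			switch {
			case i == 0:
				out[i] = t // entry 0: absolute
			case i == 1:
				d := t - times[0]
				out[i] = d // entry 1: delta since entry 0
				deltas = append(deltas, d)
			default:
				d := t - times[i-1]
				// average of up to the three most recent deltas
				n := 3
				if len(deltas) < n {
					n = len(deltas)
				}
				var sum int64
				for _, pd := range deltas[len(deltas)-n:] {
					sum += pd
				}
				out[i] = d - sum/int64(n) // delta from the average delta
				deltas = append(deltas, d)
			}
		}
		return out
	}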
