Documentation ¶
Overview ¶
Package nutsdb implements a simple, fast, embeddable and persistent key/value store written in pure Go. It supports fully serializable transactions, as well as data structures such as list, set, and sorted set.
NutsDB currently works on macOS, Linux and Windows.
Usage ¶
NutsDB has the following main types: DB, BPTree, Entry, DataFile and Tx. NutsDB also supports buckets; a bucket is a collection of unique keys that are associated with values.
All operations happen inside a Tx. Tx represents a transaction, which can be read-only or read-write. Read-only transactions can read values for a given key, or iterate over a set of key-value pairs (prefix scanning or range scanning). Read-write transactions can also update and delete keys from the DB.
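A minimal end-to-end sketch of these two transaction flavors. The module import path, the variadic Open form and the Entry.Value field follow the project README rather than this page; the directory and bucket names are illustrative:

```go
package main

import (
	"log"

	"github.com/nutsdb/nutsdb"
)

func main() {
	// Open the database (variadic Option form assumed from the project README).
	db, err := nutsdb.Open(
		nutsdb.DefaultOptions,
		nutsdb.WithDir("/tmp/nutsdb"),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	bucket := "bucket001"
	key := []byte("name")

	// Read-write transaction: ttl = Persistent (0) means the key never expires.
	if err := db.Update(func(tx *nutsdb.Tx) error {
		return tx.Put(bucket, key, []byte("nutsdb"), nutsdb.Persistent)
	}); err != nil {
		log.Fatal(err)
	}

	// Read-only transaction: read the key back.
	if err := db.View(func(tx *nutsdb.Tx) error {
		e, err := tx.Get(bucket, key)
		if err != nil {
			return err
		}
		log.Printf("value: %s", e.Value)
		return nil
	}); err != nil {
		log.Fatal(err)
	}
}
```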
See the examples for more usage details.
Index ¶
- Constants
- Variables
- func ErrBucketAndKey(bucket string, key []byte) error
- func ErrNotFoundKeyInBucket(bucket string, key []byte) error
- func ErrSeparatorForZSetKey() error
- func ErrWhenBuildListIdx(err error) error
- func GetRandomBytes(length int) []byte
- func GetTestBytes(i int) []byte
- func IsBucketEmpty(err error) bool
- func IsBucketNotFound(err error) bool
- func IsDBClosed(err error) bool
- func IsExpired(ttl uint32, timestamp uint64) bool
- func IsKeyEmpty(err error) bool
- func IsKeyNotFound(err error) bool
- func IsPrefixScan(err error) bool
- func IsPrefixSearchScan(err error) bool
- func MarshalInts(ints []int) ([]byte, error)
- func MatchForRange(pattern, key string, f func(key string) bool) (end bool, err error)
- func NewIndex() *index
- func OneOfUint16Array(value uint16, array []uint16) bool
- func SortFID(BPTreeRootIdxGroup []*BPTreeRootIdx, by sortBy)
- func Truncate(path string, capacity int64, f *os.File) error
- func UnmarshalInts(data []byte) ([]int, error)
- type BPTree
- func (t *BPTree) All() (records Records, err error)
- func (t *BPTree) Find(key []byte) (*Record, error)
- func (t *BPTree) FindLeaf(key []byte) *Node
- func (t *BPTree) FindRange(start, end []byte, f func(key []byte, pointer interface{}) bool) (numFound int, keys [][]byte, pointers []interface{})
- func (t *BPTree) Insert(key []byte, e *Entry, h *Hint, countFlag bool) error
- func (t *BPTree) PrefixScan(prefix []byte, offsetNum int, limitNum int) (records Records, off int, err error)
- func (t *BPTree) PrefixSearchScan(prefix []byte, reg string, offsetNum int, limitNum int) (records Records, off int, err error)
- func (t *BPTree) Range(start, end []byte) (records Records, err error)
- func (t *BPTree) SetKeyPosMap(keyPosMap map[string]int64)
- func (t *BPTree) ToBinary(n *Node) (result []byte, err error)
- func (t *BPTree) WriteNode(n *Node, off int64, syncEnable bool, fd *os.File) (number int, err error)
- func (t *BPTree) WriteNodes(rwMode RWMode, syncEnable bool, flag int) error
- type BPTreeIdx
- type BPTreeRootIdx
- type BPTreeRootIdxWrapper
- type BTree
- func (bt *BTree) All() []*Record
- func (bt *BTree) Count() int
- func (bt *BTree) Delete(key []byte) bool
- func (bt *BTree) Find(key []byte) (*Record, bool)
- func (bt *BTree) Insert(key []byte, e *Entry, h *Hint) bool
- func (bt *BTree) PrefixScan(prefix []byte, offset, limitNum int) []*Record
- func (bt *BTree) PrefixSearchScan(prefix []byte, reg string, offset, limitNum int) []*Record
- func (bt *BTree) Range(start, end []byte) []*Record
- type BTreeIdx
- type BinaryNode
- type BucketMeta
- type BucketMetasIdx
- type CEntries
- type DB
- func (db *DB) Backup(dir string) error
- func (db *DB) BackupTarGZ(w io.Writer) error
- func (db *DB) Begin(writable bool) (tx *Tx, err error)
- func (db *DB) Close() error
- func (db *DB) IsClose() bool
- func (db *DB) Merge() error
- func (db *DB) NewWriteBatch() (*WriteBatch, error)
- func (db *DB) Update(fn func(tx *Tx) error) error
- func (db *DB) View(fn func(tx *Tx) error) error
- type DataFile
- func (df *DataFile) Close() (err error)
- func (df *DataFile) ReadAt(off int) (e *Entry, err error)
- func (df *DataFile) ReadRecord(off int, payloadSize int64) (e *Entry, err error)
- func (df *DataFile) Release() (err error)
- func (df *DataFile) Sync() (err error)
- func (df *DataFile) WriteAt(b []byte, off int64) (n int, err error)
- type Entries
- type Entry
- func (e *Entry) Encode() []byte
- func (e *Entry) GetBucketString() string
- func (e *Entry) GetCrc(buf []byte) uint32
- func (e *Entry) GetTxIDBytes() []byte
- func (e *Entry) IsZero() bool
- func (e *Entry) ParseMeta(buf []byte) error
- func (e *Entry) ParsePayload(data []byte) error
- func (e *Entry) Size() int64
- func (e *Entry) WithBucket(bucket []byte) *Entry
- func (e *Entry) WithKey(key []byte) *Entry
- func (e *Entry) WithMeta(meta *MetaData) *Entry
- func (e *Entry) WithValue(value []byte) *Entry
- type EntryIdxMode
- type ErrorHandler
- type ErrorHandlerFunc
- type FdInfo
- type FileIORWManager
- type Hint
- type Item
- type Iterator
- type IteratorOptions
- type LessFunc
- type List
- func (l *List) GetListTTL(key string) (uint32, error)
- func (l *List) IsEmpty(key string) (bool, error)
- func (l *List) IsExpire(key string) bool
- func (l *List) LPeek(key string) (*Record, error)
- func (l *List) LPop(key string) (*Record, error)
- func (l *List) LPush(key string, r *Record) error
- func (l *List) LRange(key string, start, end int) ([]*Record, error)
- func (l *List) LRem(key string, count int, cmp func(r *Record) (bool, error)) error
- func (l *List) LRemByIndex(key string, indexes []int) error
- func (l *List) LSet(key string, index int, r *Record) error
- func (l *List) LTrim(key string, start, end int) error
- func (l *List) RPeek(key string) (*Record, error)
- func (l *List) RPop(key string) (*Record, error)
- func (l *List) RPush(key string, r *Record) error
- func (l *List) Size(key string) (int, error)
- type ListIdx
- type MMapRWManager
- type MetaData
- func (meta *MetaData) PayloadSize() int64
- func (meta *MetaData) WithBucketSize(bucketSize uint32) *MetaData
- func (meta *MetaData) WithCrc(crc uint32) *MetaData
- func (meta *MetaData) WithDs(ds uint16) *MetaData
- func (meta *MetaData) WithFlag(flag uint16) *MetaData
- func (meta *MetaData) WithKeySize(keySize uint32) *MetaData
- func (meta *MetaData) WithStatus(status uint16) *MetaData
- func (meta *MetaData) WithTTL(ttl uint32) *MetaData
- func (meta *MetaData) WithTimeStamp(timestamp uint64) *MetaData
- func (meta *MetaData) WithTxID(txID uint64) *MetaData
- func (meta *MetaData) WithValueSize(valueSize uint32) *MetaData
- type Node
- type Option
- func WithBufferSizeOfRecovery(size int) Option
- func WithCleanFdsCacheThreshold(threshold float64) Option
- func WithCommitBufferSize(commitBufferSize int64) Option
- func WithDir(dir string) Option
- func WithEntryIdxMode(entryIdxMode EntryIdxMode) Option
- func WithErrorHandler(errorHandler ErrorHandler) Option
- func WithGCWhenClose(enable bool) Option
- func WithLessFunc(lessFunc LessFunc) Option
- func WithMaxBatchCount(count int64) Option
- func WithMaxBatchSize(size int64) Option
- func WithMaxFdNumsInCache(num int) Option
- func WithNodeNum(num int64) Option
- func WithRWMode(rwMode RWMode) Option
- func WithSegmentSize(size int64) Option
- func WithSyncEnable(enable bool) Option
- type Options
- type RWManager
- type RWMode
- type Record
- type Records
- type Set
- func (s *Set) SAdd(key string, values [][]byte, records []*Record) error
- func (s *Set) SAreMembers(key string, values ...[]byte) (bool, error)
- func (s *Set) SCard(key string) int
- func (s *Set) SDiff(key1, key2 string) ([]*Record, error)
- func (s *Set) SHasKey(key string) bool
- func (s *Set) SInter(key1, key2 string) ([]*Record, error)
- func (s *Set) SIsMember(key string, value []byte) (bool, error)
- func (s *Set) SMembers(key string) ([]*Record, error)
- func (s *Set) SMove(key1, key2 string, value []byte) (bool, error)
- func (s *Set) SPop(key string) *Record
- func (s *Set) SRem(key string, values ...[]byte) error
- func (s *Set) SUnion(key1, key2 string) ([]*Record, error)
- type SetIdx
- type SortedSetIdx
- type Throttle
- type Tx
- func (tx *Tx) CheckExpire(bucket string, key []byte) bool
- func (tx *Tx) Commit() (err error)
- func (tx *Tx) CommitWith(cb func(error))
- func (tx *Tx) Delete(bucket string, key []byte) error
- func (tx *Tx) DeleteBucket(ds uint16, bucket string) error
- func (tx *Tx) ExistBucket(ds uint16, bucket string) (bool, error)
- func (tx *Tx) ExpireList(bucket string, key []byte, ttl uint32) error
- func (tx *Tx) FindLeafOnDisk(fID int64, rootOff int64, key, newKey []byte) (bn *BinaryNode, err error)
- func (tx *Tx) FindOnDisk(fID uint64, rootOff uint64, key, newKey []byte) (entry *Entry, err error)
- func (tx *Tx) FindTxIDOnDisk(fID, txID uint64) (ok bool, err error)
- func (tx *Tx) Get(bucket string, key []byte) (e *Entry, err error)
- func (tx *Tx) GetAll(bucket string) (entries Entries, err error)
- func (tx *Tx) GetListTTL(bucket string, key []byte) (uint32, error)
- func (tx *Tx) IterateBuckets(ds uint16, pattern string, f func(key string) bool) error
- func (tx *Tx) LKeys(bucket, pattern string, f func(key string) bool) error
- func (tx *Tx) LPeek(bucket string, key []byte) (item []byte, err error)
- func (tx *Tx) LPop(bucket string, key []byte) (item []byte, err error)
- func (tx *Tx) LPush(bucket string, key []byte, values ...[]byte) error
- func (tx *Tx) LRange(bucket string, key []byte, start, end int) ([][]byte, error)
- func (tx *Tx) LRem(bucket string, key []byte, count int, value []byte) error
- func (tx *Tx) LRemByIndex(bucket string, key []byte, indexes ...int) error
- func (tx *Tx) LSet(bucket string, key []byte, index int, value []byte) error
- func (tx *Tx) LSize(bucket string, key []byte) (int, error)
- func (tx *Tx) LTrim(bucket string, key []byte, start, end int) error
- func (tx *Tx) PrefixScan(bucket string, prefix []byte, offsetNum int, limitNum int) (es Entries, err error)
- func (tx *Tx) PrefixSearchScan(bucket string, prefix []byte, reg string, offsetNum int, limitNum int) (es Entries, err error)
- func (tx *Tx) Put(bucket string, key, value []byte, ttl uint32) error
- func (tx *Tx) PutWithTimestamp(bucket string, key, value []byte, ttl uint32, timestamp uint64) error
- func (tx *Tx) RPeek(bucket string, key []byte) ([]byte, error)
- func (tx *Tx) RPop(bucket string, key []byte) (item []byte, err error)
- func (tx *Tx) RPush(bucket string, key []byte, values ...[]byte) error
- func (tx *Tx) RangeScan(bucket string, start, end []byte) (es Entries, err error)
- func (tx *Tx) Rollback() error
- func (tx *Tx) SAdd(bucket string, key []byte, items ...[]byte) error
- func (tx *Tx) SAreMembers(bucket string, key []byte, items ...[]byte) (bool, error)
- func (tx *Tx) SCard(bucket string, key []byte) (int, error)
- func (tx *Tx) SDiffByOneBucket(bucket string, key1, key2 []byte) ([][]byte, error)
- func (tx *Tx) SDiffByTwoBuckets(bucket1 string, key1 []byte, bucket2 string, key2 []byte) ([][]byte, error)
- func (tx *Tx) SHasKey(bucket string, key []byte) (bool, error)
- func (tx *Tx) SIsMember(bucket string, key, item []byte) (bool, error)
- func (tx *Tx) SKeys(bucket, pattern string, f func(key string) bool) error
- func (tx *Tx) SMembers(bucket string, key []byte) ([][]byte, error)
- func (tx *Tx) SMoveByOneBucket(bucket string, key1, key2, item []byte) (bool, error)
- func (tx *Tx) SMoveByTwoBuckets(bucket1 string, key1 []byte, bucket2 string, key2, item []byte) (bool, error)
- func (tx *Tx) SPop(bucket string, key []byte) ([]byte, error)
- func (tx *Tx) SRem(bucket string, key []byte, items ...[]byte) error
- func (tx *Tx) SUnionByOneBucket(bucket string, key1, key2 []byte) ([][]byte, error)
- func (tx *Tx) SUnionByTwoBuckets(bucket1 string, key1 []byte, bucket2 string, key2 []byte) ([][]byte, error)
- func (tx *Tx) ZAdd(bucket string, key []byte, score float64, val []byte) error
- func (tx *Tx) ZCard(bucket string) (int, error)
- func (tx *Tx) ZCount(bucket string, start, end float64, opts *zset.GetByScoreRangeOptions) (int, error)
- func (tx *Tx) ZGetByKey(bucket string, key []byte) (*zset.SortedSetNode, error)
- func (tx *Tx) ZKeys(bucket, pattern string, f func(key string) bool) error
- func (tx *Tx) ZMembers(bucket string) (map[string]*zset.SortedSetNode, error)
- func (tx *Tx) ZPeekMax(bucket string) (*zset.SortedSetNode, error)
- func (tx *Tx) ZPeekMin(bucket string) (*zset.SortedSetNode, error)
- func (tx *Tx) ZPopMax(bucket string) (*zset.SortedSetNode, error)
- func (tx *Tx) ZPopMin(bucket string) (*zset.SortedSetNode, error)
- func (tx *Tx) ZRangeByRank(bucket string, start, end int) ([]*zset.SortedSetNode, error)
- func (tx *Tx) ZRangeByScore(bucket string, start, end float64, opts *zset.GetByScoreRangeOptions) ([]*zset.SortedSetNode, error)
- func (tx *Tx) ZRank(bucket string, key []byte) (int, error)
- func (tx *Tx) ZRem(bucket, key string) error
- func (tx *Tx) ZRemRangeByRank(bucket string, start, end int) error
- func (tx *Tx) ZRevRank(bucket string, key []byte) (int, error)
- func (tx *Tx) ZScore(bucket string, key []byte) (float64, error)
- type WriteBatch
- func (wb *WriteBatch) Cancel() error
- func (wb *WriteBatch) Delete(bucket string, key []byte) error
- func (wb *WriteBatch) Error() error
- func (wb *WriteBatch) Flush() error
- func (wb *WriteBatch) Put(bucket string, key, value []byte, ttl uint32) error
- func (wb *WriteBatch) Reset() error
- func (wb *WriteBatch) SetMaxPendingTxns(max int)
Constants ¶
const (
	// DefaultInvalidAddress returns default invalid node address.
	DefaultInvalidAddress = -1
	// RangeScan returns range scanMode flag.
	RangeScan = "RangeScan"
	// PrefixScan returns prefix scanMode flag.
	PrefixScan = "PrefixScan"
	// PrefixSearchScan returns prefix and search scanMode flag.
	PrefixSearchScan = "PrefixSearchScan"
	// CountFlagEnabled returns enabled CountFlag.
	CountFlagEnabled = true
	// CountFlagDisabled returns disabled CountFlag.
	CountFlagDisabled = false
	// BPTIndexSuffix returns b+ tree index suffix.
	BPTIndexSuffix = ".bptidx"
	// BPTRootIndexSuffix returns b+ tree root index suffix.
	BPTRootIndexSuffix = ".bptridx"
	// BPTTxIDIndexSuffix returns b+ tree tx ID index suffix.
	BPTTxIDIndexSuffix = ".bpttxid"
	// BPTRootTxIDIndexSuffix returns b+ tree root tx ID index suffix.
	BPTRootTxIDIndexSuffix = ".bptrtxid"
)
const (
	// BucketMetaHeaderSize returns the header size of the BucketMeta.
	BucketMetaHeaderSize = 12
	// BucketMetaSuffix returns the bucket meta file suffix.
	BucketMetaSuffix = ".meta"
)
const (
	// DataSuffix returns the data suffix.
	DataSuffix = ".dat"
	// DataEntryHeaderSize returns the entry header size.
	DataEntryHeaderSize = 42
)
const (
	// DataDeleteFlag represents the data delete flag.
	DataDeleteFlag uint16 = iota
	// DataSetFlag represents the data set flag.
	DataSetFlag
	// DataLPushFlag represents the data LPush flag.
	DataLPushFlag
	// DataRPushFlag represents the data RPush flag.
	DataRPushFlag
	// DataLRemFlag represents the data LRem flag.
	DataLRemFlag
	// DataLPopFlag represents the data LPop flag.
	DataLPopFlag
	// DataRPopFlag represents the data RPop flag.
	DataRPopFlag
	// DataLSetFlag represents the data LSet flag.
	DataLSetFlag
	// DataLTrimFlag represents the data LTrim flag.
	DataLTrimFlag
	// DataZAddFlag represents the data ZAdd flag.
	DataZAddFlag
	// DataZRemFlag represents the data ZRem flag.
	DataZRemFlag
	// DataZRemRangeByRankFlag represents the data ZRemRangeByRank flag.
	DataZRemRangeByRankFlag
	// DataZPopMaxFlag represents the data ZPopMax flag.
	DataZPopMaxFlag
	// DataZPopMinFlag represents the data ZPopMin flag.
	DataZPopMinFlag
	// DataSetBucketDeleteFlag represents the delete Set bucket flag.
	DataSetBucketDeleteFlag
	// DataSortedSetBucketDeleteFlag represents the delete Sorted Set bucket flag.
	DataSortedSetBucketDeleteFlag
	// DataBPTreeBucketDeleteFlag represents the delete BPTree bucket flag.
	DataBPTreeBucketDeleteFlag
	// DataListBucketDeleteFlag represents the delete List bucket flag.
	DataListBucketDeleteFlag
	// DataLRemByIndex represents the data LRemByIndex flag.
	DataLRemByIndex
	// DataExpireListFlag represents setting a TTL for the list.
	DataExpireListFlag
)
const (
	// UnCommitted represents the tx unCommitted status.
	UnCommitted uint16 = 0
	// Committed represents the tx committed status.
	Committed uint16 = 1
	// Persistent represents the data persistent flag.
	Persistent uint32 = 0
	// ScanNoLimit represents the data scan no limit flag.
	ScanNoLimit int = -1

	KvWriteChCapacity = 1000
)
const (
	// DataStructureSet represents the data structure set flag.
	DataStructureSet uint16 = iota
	// DataStructureSortedSet represents the data structure sorted set flag.
	DataStructureSortedSet
	// DataStructureTree represents the data structure b+ tree or b tree flag.
	DataStructureTree
	// DataStructureList represents the data structure list flag.
	DataStructureList
	// DataStructureNone represents no data structure.
	DataStructureNone
)
const (
	B  = 1
	KB = 1024 * B
	MB = 1024 * KB
	GB = 1024 * MB
)
const BPTreeRootIdxHeaderSize = 28
BPTreeRootIdxHeaderSize returns the header size of the root index.
const (
DefaultMaxFileNums = 256
)
const (
DefaultThrottleSize = 16
)
const FLockName = "nutsdb-flock"
const MAX_SIZE = math.MaxInt32
const SeparatorForListKey = "|"
SeparatorForListKey represents separator for listKey
const SeparatorForZSetKey = "|"
SeparatorForZSetKey represents separator for zSet key.
const (
TooManyFileOpenErrSuffix = "too many open files"
)
Variables ¶
var (
	// ErrStartKey is returned when Range is called with a bad start key.
	ErrStartKey = errors.New("err start key")
	// ErrScansNoResult is returned when Range, prefixScan or prefixSearchScan finds no result.
	ErrScansNoResult = errors.New("range scans or prefix or prefix and search scans no result")
	// ErrPrefixSearchScansNoResult is returned when prefixSearchScan finds no result.
	ErrPrefixSearchScansNoResult = errors.New("prefix and search scans no result")
	// ErrKeyNotFound is returned when the key is not in the b+ tree.
	ErrKeyNotFound = errors.New("key not found")
	// ErrBadRegexp is returned when a bad regular expression is given.
	ErrBadRegexp = errors.New("bad regular expression")
)
var (
	// ErrCrcZero is returned when the crc is 0.
	ErrCrcZero = errors.New("error crc is 0")
	// ErrCrc is returned when the crc is wrong.
	ErrCrc = errors.New("crc error")
	// ErrCapacity is returned when the capacity is wrong.
	ErrCapacity = errors.New("capacity error")

	ErrEntryZero = errors.New("entry is zero ")
)
var (
	// ErrDBClosed is returned when the db is closed.
	ErrDBClosed = errors.New("db is closed")
	// ErrBucket is returned when the bucket is not in the HintIdx.
	ErrBucket = errors.New("err bucket")
	// ErrEntryIdxModeOpt is returned when the db EntryIdxMode option is set incorrectly.
	ErrEntryIdxModeOpt = errors.New("err EntryIdxMode option set")
	// ErrFn is returned when fn is nil.
	ErrFn = errors.New("err fn")
	// ErrBucketNotFound is returned when looking for a bucket that does not exist.
	ErrBucketNotFound = errors.New("bucket not found")
	// ErrDataStructureNotSupported is returned when an unsupported data structure is passed.
	ErrDataStructureNotSupported = errors.New("this data structure is not supported for now")
	// ErrNotSupportHintBPTSparseIdxMode is returned when the mode `HintBPTSparseIdxMode` is not supported.
	ErrNotSupportHintBPTSparseIdxMode = errors.New("not support mode `HintBPTSparseIdxMode`")
	// ErrDirLocked is returned when the file lock of the db dir cannot be acquired.
	ErrDirLocked = errors.New("the dir of db is locked")
	// ErrDirUnlocked is returned when the file lock is already unlocked.
	ErrDirUnlocked = errors.New("the dir of db is unlocked")
	// ErrIsMerging is returned when a merge is in progress.
	ErrIsMerging = errors.New("merge in progress")
)
var (
	// ErrListNotFound is returned when the list is not found.
	ErrListNotFound = errors.New("the list not found")
	// ErrIndexOutOfRange is returned when LSet is called with an index out of range.
	ErrIndexOutOfRange = errors.New("index out of range")
)
var (
	// ErrUnmappedMemory is returned when a function is called on unmapped memory
	ErrUnmappedMemory = errors.New("unmapped memory")
	// ErrIndexOutOfBound is returned when given offset out of mapped region
	ErrIndexOutOfBound = errors.New("offset out of mapped region")
)
var (
	// ErrSetNotExist is returned when the key does not exist.
	ErrSetNotExist = errors.New("set not exist")
	// ErrSetMemberNotExist is returned when the member of set does not exist
	ErrSetMemberNotExist = errors.New("set member not exist")
	// ErrMemberEmpty is returned when the item received is nil
	ErrMemberEmpty = errors.New("item empty")
)
var (
	// ErrDataSizeExceed is returned when the given key and value size is too big.
	ErrDataSizeExceed = errors.New("data size too big")
	// ErrTxClosed is returned when committing or rolling back a transaction
	// that has already been committed or rolled back.
	ErrTxClosed = errors.New("tx is closed")
	// ErrTxNotWritable is returned when performing a write operation on
	// a read-only transaction.
	ErrTxNotWritable = errors.New("tx not writable")
	// ErrKeyEmpty is returned if an empty key is passed on an update function.
	ErrKeyEmpty = errors.New("key cannot be empty")
	// ErrBucketEmpty is returned if the bucket is empty.
	ErrBucketEmpty = errors.New("bucket is empty")
	// ErrRangeScan is returned when a range scan finds no result.
	ErrRangeScan = errors.New("range scans not found")
	// ErrPrefixScan is returned when a prefix scan finds no result.
	ErrPrefixScan = errors.New("prefix scans not found")
	// ErrPrefixSearchScan is returned when a prefix and search scan finds no result.
	ErrPrefixSearchScan = errors.New("prefix and search scans not found")
	// ErrNotFoundKey is returned when the key is not found in the bucket on a view function.
	ErrNotFoundKey = errors.New("key not found in the bucket")
	// ErrCannotCommitAClosedTx is returned when committing a closed tx.
	ErrCannotCommitAClosedTx = errors.New("can not commit a closed tx")
	// ErrCannotRollbackACommittingTx is returned when rolling back a committing tx.
	ErrCannotRollbackACommittingTx = errors.New("can not rollback a committing tx")

	ErrCannotRollbackAClosedTx = errors.New("can not rollback a closed tx")

	// ErrNotFoundBucket is returned when the bucket is not found on a view function.
	ErrNotFoundBucket = errors.New("bucket not found")
	// ErrTxnTooBig is returned if too many writes are fit into a single transaction.
	ErrTxnTooBig = errors.New("Txn is too big to fit into one request")
)
var DefaultOptions = func() Options {
	return Options{
		EntryIdxMode:     HintKeyValAndRAMIdxMode,
		SegmentSize:      defaultSegmentSize,
		NodeNum:          1,
		RWMode:           FileIO,
		SyncEnable:       true,
		CommitBufferSize: 4 * MB,
		MergeInterval:    2 * time.Hour,
		MaxBatchSize:     (15 * defaultSegmentSize / 4) / 100,
		MaxBatchCount:    (15 * defaultSegmentSize / 4) / 100 / 100,
	}
}()
DefaultOptions represents the default options.
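For example, a sketch of overriding a few of these defaults at open time (again assuming the variadic Open form from the project README; the directory is illustrative):

```go
db, err := nutsdb.Open(
	nutsdb.DefaultOptions,
	nutsdb.WithDir("/tmp/nutsdb"),        // illustrative directory
	nutsdb.WithRWMode(nutsdb.MMap),       // mmap instead of standard file I/O
	nutsdb.WithSegmentSize(64*nutsdb.MB), // smaller data files
	nutsdb.WithSyncEnable(false),         // faster writes, weaker durability
)
if err != nil {
	log.Fatal(err)
}
defer db.Close()
```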
var ErrCommitAfterFinish = errors.New("Batch commit not permitted after finish")
ErrCommitAfterFinish indicates that a write batch commit was called after finish.
var (
ErrDontNeedMerge = errors.New("the number of files waiting to be merged is at least 2")
)
var (
	// ErrSeparatorForListKey is returned when a List key contains the SeparatorForListKey.
	ErrSeparatorForListKey = errors.Errorf("contain separator (%s) for List key", SeparatorForListKey)
)
Functions ¶
func ErrBucketAndKey ¶
ErrBucketAndKey returns an error when the bucket or key is not found.
func ErrNotFoundKeyInBucket ¶
ErrNotFoundKeyInBucket returns an error when the key is not in the bucket.
func ErrSeparatorForZSetKey ¶
func ErrSeparatorForZSetKey() error
ErrSeparatorForZSetKey returns an error when a zSet key contains the SeparatorForZSetKey.
func ErrWhenBuildListIdx ¶
ErrWhenBuildListIdx returns the error that occurred while building the list index.
func GetRandomBytes ¶ added in v0.12.5
func GetTestBytes ¶ added in v0.12.5
func IsBucketEmpty ¶
IsBucketEmpty is true if the error indicates the bucket is empty.
func IsBucketNotFound ¶
IsBucketNotFound is true if the error indicates the bucket does not exist.
func IsDBClosed ¶
IsDBClosed is true if the error indicates the db was closed.
func IsKeyNotFound ¶
IsKeyNotFound is true if the error indicates the key is not found.
func IsPrefixScan ¶
IsPrefixScan is true if the error indicates a prefix scan found no result.
func IsPrefixSearchScan ¶
IsPrefixSearchScan is true if the error indicates a prefix and search scan found no result.
func MarshalInts ¶
func MatchForRange ¶
func OneOfUint16Array ¶ added in v0.12.4
func SortFID ¶
func SortFID(BPTreeRootIdxGroup []*BPTreeRootIdx, by sortBy)
SortFID sorts BPTreeRootIdx data.
func UnmarshalInts ¶
Types ¶
type BPTree ¶
type BPTree struct {
ValidKeyCount int // the number of the key that not expired or deleted
FirstKey []byte
LastKey []byte
LastAddress int64
Filepath string
// contains filtered or unexported fields
}
BPTree records root node and valid key number.
func NewTree ¶
func NewTree() *BPTree
NewTree returns a newly initialized BPTree object.
func (*BPTree) FindRange ¶
func (t *BPTree) FindRange(start, end []byte, f func(key []byte, pointer interface{}) bool) (numFound int, keys [][]byte, pointers []interface{})
FindRange returns numFound,keys and pointers at the given start key and end key.
func (*BPTree) Insert ¶
Insert inserts a record into the b+ tree; if the key exists, it updates the record and the counter (if countFlag is set to true, counting starts).
func (*BPTree) PrefixScan ¶
func (t *BPTree) PrefixScan(prefix []byte, offsetNum int, limitNum int) (records Records, off int, err error)
PrefixScan returns records matching the given prefix. limitNum limits the number of records returned.
func (*BPTree) PrefixSearchScan ¶
func (t *BPTree) PrefixSearchScan(prefix []byte, reg string, offsetNum int, limitNum int) (records Records, off int, err error)
PrefixSearchScan returns records matching the given prefix and regular expression. limitNum limits the number of records returned.
func (*BPTree) SetKeyPosMap ¶
SetKeyPosMap sets the key offset of all entries in the b+ tree.
func (*BPTree) WriteNode ¶
func (t *BPTree) WriteNode(n *Node, off int64, syncEnable bool, fd *os.File) (number int, err error)
WriteNode writes a binary node to the File starting at byte offset off. It returns the number of bytes written and an error, if any. WriteAt returns a non-nil error when n != len(b).
type BPTreeRootIdx ¶
type BPTreeRootIdx struct {
// contains filtered or unexported fields
}
BPTreeRootIdx represents the b+ tree root index.
func ReadBPTreeRootIdxAt ¶
func ReadBPTreeRootIdxAt(fd *os.File, off int64) (*BPTreeRootIdx, error)
ReadBPTreeRootIdxAt reads BPTreeRootIdx entry from the File starting at byte offset off.
func (*BPTreeRootIdx) Encode ¶
func (bri *BPTreeRootIdx) Encode() []byte
Encode returns the byte slice of the encoded BPTreeRootIdx.
func (*BPTreeRootIdx) GetCrc ¶
func (bri *BPTreeRootIdx) GetCrc(buf []byte) uint32
GetCrc returns the crc at given buf slice.
func (*BPTreeRootIdx) IsZero ¶
func (bri *BPTreeRootIdx) IsZero() bool
IsZero checks if the BPTreeRootIdx entry is zero or not.
func (*BPTreeRootIdx) Persistence ¶
func (bri *BPTreeRootIdx) Persistence(path string, offset int64, syncEnable bool) (number int, err error)
Persistence writes BPTreeRootIdx entry to the File starting at byte offset off.
func (*BPTreeRootIdx) Size ¶
func (bri *BPTreeRootIdx) Size() int64
Size returns the size of the BPTreeRootIdx entry.
type BPTreeRootIdxWrapper ¶
type BPTreeRootIdxWrapper struct {
BSGroup []*BPTreeRootIdx
// contains filtered or unexported fields
}
BPTreeRootIdxWrapper records BSGroup and by, in order to sort.
func (BPTreeRootIdxWrapper) Len ¶
func (bsw BPTreeRootIdxWrapper) Len() int
Len is the number of elements in the collection bsw.BSGroup.
func (BPTreeRootIdxWrapper) Less ¶
func (bsw BPTreeRootIdxWrapper) Less(i, j int) bool
Less reports whether the element with index i should sort before the element with index j.
func (BPTreeRootIdxWrapper) Swap ¶
func (bsw BPTreeRootIdxWrapper) Swap(i, j int)
Swap swaps the elements with indexes i and j.
type BTree ¶ added in v0.13.1
type BTree struct {
// contains filtered or unexported fields
}
func (*BTree) PrefixScan ¶ added in v0.13.1
func (*BTree) PrefixSearchScan ¶ added in v0.13.1
type BinaryNode ¶
type BinaryNode struct {
// hint offset
Keys [order - 1]int64
// the last pointer would point to the previous node
// the next to last one pointer would point to the next node
Pointers [order + 1]int64
IsLeaf uint16
KeysNum uint16
Address int64
NextAddress int64
}
BinaryNode represents binary node.
type BucketMeta ¶
type BucketMeta struct {
// contains filtered or unexported fields
}
BucketMeta represents the bucket's meta-information.
func ReadBucketMeta ¶
func ReadBucketMeta(name string) (bucketMeta *BucketMeta, err error)
ReadBucketMeta returns bucketMeta at given file path name.
func (*BucketMeta) Encode ¶
func (bm *BucketMeta) Encode() []byte
Encode returns the byte slice of the encoded BucketMeta.
func (*BucketMeta) GetCrc ¶
func (bm *BucketMeta) GetCrc(buf []byte) uint32
GetCrc returns the crc at given buf slice.
func (*BucketMeta) Size ¶
func (bm *BucketMeta) Size() int64
Size returns the size of the BucketMeta.
type BucketMetasIdx ¶
type BucketMetasIdx map[string]*BucketMeta
BucketMetasIdx represents the index of the bucket's meta-information
type DB ¶
type DB struct {
BTreeIdx BTreeIdx
BPTreeRootIdxes []*BPTreeRootIdx
BPTreeKeyEntryPosMap map[string]int64 // key = bucket+key val = EntryPos
SetIdx SetIdx
SortedSetIdx SortedSetIdx
Index *index
ActiveFile *DataFile
ActiveBPTreeIdx *BPTree
ActiveCommittedTxIdsIdx *BPTree
MaxFileID int64
KeyCount int // total key number ,include expired, deleted, repeated.
// contains filtered or unexported fields
}
DB represents a collection of buckets that persist on disk.
func (*DB) BackupTarGZ ¶
BackupTarGZ copies the database to the given writer as a tar.gz archive.
func (*DB) Begin ¶
Begin opens a new transaction. Multiple read-only transactions can be opened at the same time but there can only be one read/write transaction at a time. Attempting to open a read/write transaction while another one is in progress will result in blocking until the current read/write transaction is completed. All transactions must be closed by calling Commit() or Rollback() when done.
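A sketch of the manual flow, inside a function that returns an error (bucket and key are illustrative):

```go
tx, err := db.Begin(true) // writable transaction
if err != nil {
	return err
}

if err := tx.Put("bucket001", []byte("k1"), []byte("v1"), nutsdb.Persistent); err != nil {
	// Roll back on failure; the rollback error is dropped here for brevity.
	_ = tx.Rollback()
	return err
}

return tx.Commit()
```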
func (*DB) NewWriteBatch ¶ added in v0.13.1
func (db *DB) NewWriteBatch() (*WriteBatch, error)
type DataFile ¶
type DataFile struct {
ActualSize int64
// contains filtered or unexported fields
}
DataFile records about data file information.
func NewDataFile ¶
NewDataFile will return a new DataFile Object.
func (*DataFile) Close ¶
Close closes the RWManager. If the RWManager is a FileIORWManager, Close closes the file, rendering it unusable for I/O. If the RWManager is an MMapRWManager, Close unmaps the memory-mapped region and flushes any remaining changes.
func (*DataFile) ReadRecord ¶
ReadRecord returns entry at the given off(offset). payloadSize = bucketSize + keySize + valueSize
type Entry ¶
Entry represents the data item.
func (*Entry) Encode ¶
Encode returns the byte slice of the encoded entry.
The entry stored format:

	|--------|-----------|--------|-----------|--------|--------|------------|--------|--------|--------|--------|--------|--------|
	| crc    | timestamp | ksz    | valueSize | flag   | TTL    | bucketSize | status | ds     | txId   | bucket | key    | value  |
	|--------|-----------|--------|-----------|--------|--------|------------|--------|--------|--------|--------|--------|--------|
	| uint32 | uint64    | uint32 | uint32    | uint16 | uint32 | uint32     | uint16 | uint16 | uint64 | []byte | []byte | []byte |
	|--------|-----------|--------|-----------|--------|--------|------------|--------|--------|--------|--------|--------|--------|
func (*Entry) GetBucketString ¶ added in v0.12.4
GetBucketString returns the bucket as a string.
func (*Entry) GetTxIDBytes ¶ added in v0.12.4
GetTxIDBytes returns the TxID as a byte slice.
func (*Entry) ParsePayload ¶
ParsePayload parses a byte slice into the bucket, key and value of an entry.
func (*Entry) WithBucket ¶ added in v0.12.4
WithBucket sets the bucket of the Entry.
type EntryIdxMode ¶
type EntryIdxMode int
EntryIdxMode represents entry index mode.
const (
	// HintKeyValAndRAMIdxMode represents ram index (key and value) mode.
	HintKeyValAndRAMIdxMode EntryIdxMode = iota
	// HintKeyAndRAMIdxMode represents ram index (only key) mode.
	HintKeyAndRAMIdxMode
	// HintBPTSparseIdxMode represents b+ tree sparse index mode.
	HintBPTSparseIdxMode
)
type ErrorHandler ¶ added in v0.13.0
type ErrorHandler interface {
HandleError(err error)
}
An ErrorHandler handles an error that occurred during a transaction.
type ErrorHandlerFunc ¶ added in v0.13.0
type ErrorHandlerFunc func(err error)
The ErrorHandlerFunc type is an adapter to ErrorHandler.
func (ErrorHandlerFunc) HandleError ¶ added in v0.13.0
func (fn ErrorHandlerFunc) HandleError(err error)
type FdInfo ¶
type FdInfo struct {
// contains filtered or unexported fields
}
FdInfo holds base fd info
type FileIORWManager ¶
type FileIORWManager struct {
// contains filtered or unexported fields
}
FileIORWManager represents the RWManager which using standard I/O.
func (*FileIORWManager) Close ¶
func (fm *FileIORWManager) Close() (err error)
Close removes the cached fd of the specified path from the fd manager and calls the Close method of the underlying os.File.
func (*FileIORWManager) ReadAt ¶
func (fm *FileIORWManager) ReadAt(b []byte, off int64) (n int, err error)
ReadAt reads len(b) bytes from the File starting at byte offset off. `ReadAt` is a wrapper of the *File.ReadAt.
func (*FileIORWManager) Release ¶
func (fm *FileIORWManager) Release() (err error)
Release is a wrapper around the reduceUsing method
func (*FileIORWManager) Sync ¶
func (fm *FileIORWManager) Sync() (err error)
Sync commits the current contents of the file to stable storage. Typically, this means flushing the file system's in-memory copy of recently written data to disk. `Sync` is a wrapper of the *File.Sync.
type Hint ¶
Hint represents the index of the key
func (*Hint) WithDataPos ¶ added in v0.12.4
WithDataPos sets DataPos on the Hint.
func (*Hint) WithFileId ¶ added in v0.12.4
WithFileId sets FileID on the Hint.
type Iterator ¶
type Iterator struct {
// contains filtered or unexported fields
}
func NewIterator ¶
func NewIterator(tx *Tx, bucket string, options IteratorOptions) *Iterator
type IteratorOptions ¶
type IteratorOptions struct {
Reverse bool
}
type List ¶ added in v0.13.0
List represents the list.
func (*List) LRange ¶ added in v0.13.0
LRange returns the specified elements of the list stored at key [start,end]
func (*List) LRem ¶ added in v0.13.0
LRem removes the first count occurrences of elements equal to value from the list stored at key. The count argument influences the operation in the following ways:
- count > 0: remove elements equal to value moving from head to tail.
- count < 0: remove elements equal to value moving from tail to head.
- count = 0: remove all elements equal to value.
func (*List) LRemByIndex ¶ added in v0.13.0
LRemByIndex removes the list elements at the specified indexes.
func (*List) LTrim ¶ added in v0.13.0
LTrim trims an existing list so that it contains only the specified range of elements.
type MMapRWManager ¶
type MMapRWManager struct {
// contains filtered or unexported fields
}
MMapRWManager represents the RWManager which using mmap.
func (*MMapRWManager) Close ¶
func (mm *MMapRWManager) Close() (err error)
Close removes the cached fd of the specified path from the fd manager and calls the Close method of the underlying os.File.
func (*MMapRWManager) ReadAt ¶
func (mm *MMapRWManager) ReadAt(b []byte, off int64) (n int, err error)
ReadAt copies data to b slice from mapped region starting at given off and returns number of bytes copied to the b slice.
func (*MMapRWManager) Release ¶
func (mm *MMapRWManager) Release() (err error)
Release deletes the memory mapped region, flushes any remaining changes
func (*MMapRWManager) Sync ¶
func (mm *MMapRWManager) Sync() (err error)
Sync synchronizes the mapping's contents to the file's contents on disk.
type MetaData ¶
type MetaData struct {
KeySize uint32
ValueSize uint32
Timestamp uint64
TTL uint32
Flag uint16 // delete / set
BucketSize uint32
TxID uint64
Status uint16 // committed / uncommitted
Ds uint16 // data structure
Crc uint32
}
MetaData represents the meta information of the data item.
func NewMetaData ¶ added in v0.12.6
func NewMetaData() *MetaData
func (*MetaData) PayloadSize ¶
func (*MetaData) WithBucketSize ¶ added in v0.12.6
func (*MetaData) WithKeySize ¶ added in v0.12.6
func (*MetaData) WithStatus ¶ added in v0.12.6
func (*MetaData) WithTimeStamp ¶ added in v0.12.6
func (*MetaData) WithValueSize ¶ added in v0.12.6
type Node ¶
type Node struct {
Keys [][]byte
KeysNum int
Next *Node
Address int64
// contains filtered or unexported fields
}
Node records keys and pointers and parent node.
type Option ¶
type Option func(*Options)
func WithCommitBufferSize ¶ added in v0.13.0
func WithEntryIdxMode ¶
func WithEntryIdxMode(entryIdxMode EntryIdxMode) Option
func WithErrorHandler ¶ added in v0.13.0
func WithErrorHandler(errorHandler ErrorHandler) Option
func WithGCWhenClose ¶ added in v0.12.4
func WithLessFunc ¶ added in v0.13.0
func WithMaxBatchCount ¶ added in v0.13.1
func WithMaxBatchSize ¶ added in v0.13.1
func WithMaxFdNumsInCache ¶
func WithNodeNum ¶
func WithRWMode ¶
func WithSegmentSize ¶
func WithSyncEnable ¶
type Options ¶
type Options struct {
// Dir represents the directory in which the database is opened.
Dir string
// EntryIdxMode represents using which mode to index the entries.
EntryIdxMode EntryIdxMode
// RWMode represents the read and write mode.
// RWMode includes two options: FileIO and MMap.
// FileIO represents the read and write mode using standard I/O.
// MMap represents the read and write mode using mmap.
RWMode RWMode
SegmentSize int64
// NodeNum represents the node number.
// Default NodeNum is 1. NodeNum range [1,1023].
NodeNum int64
// SyncEnable represents whether to call the Sync() function.
// If SyncEnable is false, writes are faster but data loss is possible.
// If SyncEnable is true, writes are slower but durable.
SyncEnable bool
// MaxFdNumsInCache represents the max numbers of fd in cache.
MaxFdNumsInCache int
// CleanFdsCacheThreshold represents the maximum threshold for recycling fd, it should be between 0 and 1.
CleanFdsCacheThreshold float64
// BufferSizeOfRecovery represents the buffer size of the recoveryReader.
BufferSizeOfRecovery int
// GCWhenClose represents whether to initiate GC when calling db.Close().
GCWhenClose bool
// CommitBufferSize represents the memory allocated for a tx commit buffer.
CommitBufferSize int64
// ErrorHandler handles an error occurred during transaction.
// Example:
// func triggerAlertError(err error) {
// if errors.Is(err, targetErr) {
// alertManager.TriggerAlert()
// }
// }
ErrorHandler ErrorHandler
// LessFunc is a function that sorts keys.
LessFunc LessFunc
// MergeInterval represents the interval for automatic merges; 0 means automatic merging is disabled.
MergeInterval time.Duration
MaxBatchCount int64 // max entries in batch
MaxBatchSize int64 // max batch size in bytes
}
Options records params for creating DB object.
type RWManager ¶
type RWManager interface {
WriteAt(b []byte, off int64) (n int, err error)
ReadAt(b []byte, off int64) (n int, err error)
Sync() (err error)
Release() (err error)
Close() (err error)
}
RWManager represents an interface to a RWManager.
type Record ¶
Record records entry and hint.
func (*Record) UpdateRecord ¶
UpdateRecord updates the record.
func (*Record) WithBucket ¶ added in v0.12.4
WithBucket sets the Bucket of the Record.
type Records ¶
type Records []*Record
Records holds multiple records returned as the result of Range or PrefixScan.
type Set ¶ added in v0.13.1
func (*Set) SAreMembers ¶ added in v0.13.1
SAreMembers Returns if members are members of the set stored at key. For multiple items it returns true only if all the items exist.
func (*Set) SCard ¶ added in v0.13.1
SCard Returns the set cardinality (number of elements) of the set stored at key.
func (*Set) SDiff ¶ added in v0.13.1
SDiff Returns the members of the set resulting from the difference between the first set and all the successive sets.
func (*Set) SInter ¶ added in v0.13.1
SInter Returns the members of the set resulting from the intersection of all the given sets.
func (*Set) SIsMember ¶ added in v0.13.1
SIsMember Returns if member is a member of the set stored at key.
func (*Set) SMembers ¶ added in v0.13.1
SMembers returns all the members of the set value stored at key.
func (*Set) SMove ¶ added in v0.13.1
SMove moves member from the set at source to the set at destination.
func (*Set) SPop ¶ added in v0.13.1
SPop removes and returns one or more random elements from the set value store at key.
type SortedSetIdx ¶
SortedSetIdx represents the sorted set index
type Throttle ¶ added in v0.13.1
type Throttle struct {
// contains filtered or unexported fields
}
Throttle allows a limited number of workers to run at a time. It also provides a mechanism to check for errors encountered by workers and wait for them to finish.
func NewThrottle ¶ added in v0.13.1
NewThrottle creates a new throttle with a max number of workers.
func (*Throttle) Do ¶ added in v0.13.1
Do should be called by workers before they start working. It blocks if there are already maximum number of workers working. If it detects an error from previously Done workers, it would return it.
func (*Throttle) Done ¶ added in v0.13.1
Done should be called by workers when they finish working. They can also pass the error status of work done.
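A sketch of the intended pattern (NewThrottle's exact signature is not shown on this page, so the max-workers parameter is an assumption; jobs, Job and process are hypothetical):

```go
// Allow at most 8 concurrent workers (assumed NewThrottle signature).
th := nutsdb.NewThrottle(8)

for _, job := range jobs { // jobs, Job and process are hypothetical
	// Do blocks while the maximum number of workers are already running,
	// and returns any error already reported by a finished worker.
	if err := th.Do(); err != nil {
		return err
	}
	go func(j Job) {
		th.Done(process(j)) // report this worker's result; nil means success
	}(job)
}
```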
type Tx ¶
type Tx struct {
ReservedStoreTxIDIdxes map[int64]*BPTree
// contains filtered or unexported fields
}
Tx represents a transaction.
func (*Tx) Commit ¶
Commit commits the transaction, following these steps:
1. Check the length of pendingWrites. If there are no writes, return immediately.
2. Check whether the ActiveFile has enough space to store the entry; if not, call the rotateActiveFile function.
3. Write pendingWrites to disk; if a non-nil error occurs, return it.
4. Build the Hint index.
5. Unlock the database and clear the db field.
func (*Tx) CommitWith ¶ added in v0.13.1
func (*Tx) DeleteBucket ¶
DeleteBucket deletes a bucket of the given data structure ds.
func (*Tx) ExistBucket ¶ added in v0.13.0
func (*Tx) FindLeafOnDisk ¶
func (tx *Tx) FindLeafOnDisk(fID int64, rootOff int64, key, newKey []byte) (bn *BinaryNode, err error)
FindLeafOnDisk returns binary leaf node on disk at given fId, rootOff and key.
func (*Tx) FindOnDisk ¶
FindOnDisk returns entry on disk at given fID, rootOff and key.
func (*Tx) FindTxIDOnDisk ¶
FindTxIDOnDisk reports whether the txID exists on disk at the given fID and txID.
func (*Tx) Get ¶
Get retrieves the value for a key in the bucket. The returned value is only valid for the life of the transaction.
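Since the value is only valid for the life of the transaction, copy it out if it is needed afterwards, as in this sketch (bucket and key are illustrative):

```go
var value []byte
err := db.View(func(tx *nutsdb.Tx) error {
	e, err := tx.Get("bucket001", []byte("name"))
	if err != nil {
		return err
	}
	// Copy the bytes so they stay valid after the transaction ends.
	value = append([]byte(nil), e.Value...)
	return nil
})
if err != nil {
	log.Fatal(err)
}
log.Printf("copied value: %s", value)
```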
func (*Tx) GetListTTL ¶ added in v0.12.1
func (*Tx) IterateBuckets ¶
IterateBuckets iterates over all buckets of the given data structure ds whose names match the pattern.
func (*Tx) LPeek ¶
LPeek returns the first element of the list stored in the bucket at given bucket and key.
func (*Tx) LPop ¶
LPop removes and returns the first element of the list stored in the bucket at given bucket and key.
func (*Tx) LPush ¶
LPush inserts the values at the head of the list stored in the bucket at given bucket,key and values.
func (*Tx) LRange ¶
LRange returns the specified elements of the list stored in the bucket at given bucket,key, start and end. The offsets start and stop are zero-based indexes 0 being the first element of the list (the head of the list), 1 being the next element and so on. Start and end can also be negative numbers indicating offsets from the end of the list, where -1 is the last element of the list, -2 the penultimate element and so on.
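For example, a sketch reading the whole list and its last two elements (bucket and key are illustrative):

```go
err := db.View(func(tx *nutsdb.Tx) error {
	bucket, key := "bucketForList", []byte("myList")

	whole, err := tx.LRange(bucket, key, 0, -1) // the whole list
	if err != nil {
		return err
	}
	lastTwo, err := tx.LRange(bucket, key, -2, -1) // the last two elements
	if err != nil {
		return err
	}
	log.Println(len(whole), len(lastTwo))
	return nil
})
if err != nil {
	log.Fatal(err)
}
```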
func (*Tx) LRem ¶
LRem removes the first count occurrences of elements equal to value from the list stored in the bucket at the given bucket, key and count. The count argument influences the operation in the following ways:
- count > 0: remove elements equal to value moving from head to tail.
- count < 0: remove elements equal to value moving from tail to head.
- count = 0: remove all elements equal to value.
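A sketch of the three count cases (bucket, key and values are illustrative):

```go
err := db.Update(func(tx *nutsdb.Tx) error {
	bucket, key := "bucketForList", []byte("myList")

	// List contents after this push: a, b, a, c, a
	if err := tx.RPush(bucket, key, []byte("a"), []byte("b"), []byte("a"), []byte("c"), []byte("a")); err != nil {
		return err
	}
	if err := tx.LRem(bucket, key, 1, []byte("a")); err != nil { // remove one "a" from the head
		return err
	}
	if err := tx.LRem(bucket, key, -1, []byte("a")); err != nil { // remove one "a" from the tail
		return err
	}
	return tx.LRem(bucket, key, 0, []byte("a")) // remove all remaining "a"
})
if err != nil {
	log.Fatal(err)
}
```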
func (*Tx) LRemByIndex ¶
LRemByIndex removes the list elements at the specified indexes.
func (*Tx) LSize ¶
LSize returns the size of the list stored in the bucket at the given bucket and key.
func (*Tx) LTrim ¶
LTrim trims an existing list so that it contains only the specified range of elements. The offsets start and stop are zero-based indexes, 0 being the first element of the list (the head of the list), 1 being the next element, and so on. Start and end can also be negative numbers indicating offsets from the end of the list, where -1 is the last element of the list, -2 the penultimate element, and so on.
func (*Tx) PrefixScan ¶
func (tx *Tx) PrefixScan(bucket string, prefix []byte, offsetNum int, limitNum int) (es Entries, err error)
PrefixScan iterates over a key prefix at the given bucket, prefix and limitNum. limitNum limits the number of entries returned.
func (*Tx) PrefixSearchScan ¶
func (tx *Tx) PrefixSearchScan(bucket string, prefix []byte, reg string, offsetNum int, limitNum int) (es Entries, err error)
PrefixSearchScan iterates over a key prefix at the given bucket, prefix, regular expression and limitNum. limitNum limits the number of entries returned.
func (*Tx) PutWithTimestamp ¶
func (*Tx) RPeek ¶
RPeek returns the last element of the list stored in the bucket at given bucket and key.
func (*Tx) RPop ¶
RPop removes and returns the last element of the list stored in the bucket at given bucket and key.
func (*Tx) RPush ¶
RPush inserts the values at the tail of the list stored in the bucket at given bucket,key and values.
func (*Tx) SAdd ¶
SAdd adds the specified members to the set stored in the bucket at the given bucket, key and items.
func (*Tx) SAreMembers ¶
SAreMembers returns whether the specified members are all members of the set in the bucket at the given bucket, key and items.
func (*Tx) SCard ¶
SCard returns the set cardinality (number of elements) of the set stored in the bucket at given bucket and key.
func (*Tx) SDiffByOneBucket ¶
SDiffByOneBucket returns the members of the set resulting from the difference between the first set and all the successive sets in one bucket.
func (*Tx) SDiffByTwoBuckets ¶
func (tx *Tx) SDiffByTwoBuckets(bucket1 string, key1 []byte, bucket2 string, key2 []byte) ([][]byte, error)
SDiffByTwoBuckets returns the members of the set resulting from the difference between the first set and all the successive sets in two buckets.
func (*Tx) SIsMember ¶
SIsMember returns whether member is a member of the set stored in the bucket at the given bucket, key and item.
func (*Tx) SMembers ¶
SMembers returns all the members of the set value stored in the bucket at the given bucket and key.
func (*Tx) SMoveByOneBucket ¶
SMoveByOneBucket moves member from the set at source to the set at destination in one bucket.
func (*Tx) SMoveByTwoBuckets ¶
func (tx *Tx) SMoveByTwoBuckets(bucket1 string, key1 []byte, bucket2 string, key2, item []byte) (bool, error)
SMoveByTwoBuckets moves member from the set at source to the set at destination in two buckets.
func (*Tx) SPop ¶
SPop removes and returns one or more random elements from the set value stored in the bucket at the given bucket and key.
func (*Tx) SRem ¶
SRem removes the specified members from the set stored in the bucket at the given bucket, key and items.
func (*Tx) SUnionByOneBucket ¶
SUnionByOneBucket returns the members of the set resulting from the union of all the given sets in one bucket.
func (*Tx) SUnionByTwoBuckets ¶
func (tx *Tx) SUnionByTwoBuckets(bucket1 string, key1 []byte, bucket2 string, key2 []byte) ([][]byte, error)
SUnionByTwoBuckets returns the members of the set resulting from the union of all the given sets in two buckets.
func (*Tx) ZAdd ¶
ZAdd adds the specified member key with the specified score and specified val to the sorted set stored at bucket.
func (*Tx) ZCard ¶
ZCard returns the sorted set cardinality (number of elements) of the sorted set stored at bucket.
func (*Tx) ZCount ¶
func (tx *Tx) ZCount(bucket string, start, end float64, opts *zset.GetByScoreRangeOptions) (int, error)
ZCount returns the number of elements in the sorted set at bucket with a score between min and max, subject to opts. opts includes the following parameters:
- Limit int: limit the max nodes to return.
- ExcludeStart bool: exclude the start value, so it searches in the interval (start, end] or (start, end).
- ExcludeEnd bool: exclude the end value, so it searches in the interval [start, end) or (start, end).
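A sketch of a score-range query with options (the zset import path is an assumption; adjust it to the module version in use):

```go
// Assumed import path for the options type:
// import "github.com/nutsdb/nutsdb/ds/zset"

err := db.View(func(tx *nutsdb.Tx) error {
	opts := &zset.GetByScoreRangeOptions{
		Limit:        10,   // return at most 10 nodes
		ExcludeStart: true, // search (start, end] instead of [start, end]
	}
	n, err := tx.ZCount("bucketForZSet", 0, 100, opts)
	if err != nil {
		return err
	}
	log.Println("members in range:", n)
	return nil
})
if err != nil {
	log.Fatal(err)
}
```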
func (*Tx) ZPeekMax ¶
func (tx *Tx) ZPeekMax(bucket string) (*zset.SortedSetNode, error)
ZPeekMax returns the member with the highest score in the sorted set stored at bucket.
func (*Tx) ZPeekMin ¶
func (tx *Tx) ZPeekMin(bucket string) (*zset.SortedSetNode, error)
ZPeekMin returns the member with the lowest score in the sorted set stored at bucket.
func (*Tx) ZPopMax ¶
func (tx *Tx) ZPopMax(bucket string) (*zset.SortedSetNode, error)
ZPopMax removes and returns the member with the highest score in the sorted set stored at bucket.
func (*Tx) ZPopMin ¶
func (tx *Tx) ZPopMin(bucket string) (*zset.SortedSetNode, error)
ZPopMin removes and returns the member with the lowest score in the sorted set stored at bucket.
func (*Tx) ZRangeByRank ¶
ZRangeByRank returns all the elements in the sorted set in one bucket and key with a rank between start and end (including elements with rank equal to start or end).
func (*Tx) ZRangeByScore ¶
func (tx *Tx) ZRangeByScore(bucket string, start, end float64, opts *zset.GetByScoreRangeOptions) ([]*zset.SortedSetNode, error)
ZRangeByScore returns all the elements in the sorted set at bucket with a score between min and max.
func (*Tx) ZRank ¶
ZRank returns the rank of member in the sorted set stored in the bucket at given bucket and key, with the scores ordered from low to high.
func (*Tx) ZRem ¶
ZRem removes the specified members from the sorted set stored in one bucket at given bucket and key.
func (*Tx) ZRemRangeByRank ¶
ZRemRangeByRank removes all elements in the sorted set stored in one bucket at given bucket with rank between start and end. the rank is 1-based integer. Rank 1 means the first node; Rank -1 means the last node.
type WriteBatch ¶ added in v0.13.1
WriteBatch holds the necessary info to perform batched writes.
func (*WriteBatch) Cancel ¶ added in v0.13.1
func (wb *WriteBatch) Cancel() error
func (*WriteBatch) Delete ¶ added in v0.13.1
func (wb *WriteBatch) Delete(bucket string, key []byte) error
func (*WriteBatch) Error ¶ added in v0.13.1
func (wb *WriteBatch) Error() error
Error returns any errors encountered so far. No commits would be run once an error is detected.
func (*WriteBatch) Flush ¶ added in v0.13.1
func (wb *WriteBatch) Flush() error
func (*WriteBatch) Put ¶ added in v0.13.1
func (wb *WriteBatch) Put(bucket string, key, value []byte, ttl uint32) error
func (*WriteBatch) Reset ¶ added in v0.13.1
func (wb *WriteBatch) Reset() error
func (*WriteBatch) SetMaxPendingTxns ¶ added in v0.13.1
func (wb *WriteBatch) SetMaxPendingTxns(max int)
SetMaxPendingTxns sets a limit on maximum number of pending transactions while writing batches. This function should be called before using WriteBatch. Default value of MaxPendingTxns is 16 to minimise memory usage.
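A sketch of a batched write, inside a function that returns an error (bucket and keys are illustrative; fmt is assumed to be imported):

```go
wb, err := db.NewWriteBatch()
if err != nil {
	return err
}
wb.SetMaxPendingTxns(8) // call before queuing writes

for i := 0; i < 1000; i++ {
	key := []byte(fmt.Sprintf("key-%d", i))
	if err := wb.Put("bucket001", key, []byte("value"), nutsdb.Persistent); err != nil {
		_ = wb.Cancel() // stop the batch on error
		return err
	}
}

// Flush waits for the queued writes to be committed.
return wb.Flush()
```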
Source Files ¶
- batch.go
- bptree.go
- bptree_root_idx.go
- btree.go
- bucket_meta.go
- const.go
- datafile.go
- db.go
- doc.go
- entry.go
- errors.go
- fd_manager.go
- file_manager.go
- index.go
- iterator.go
- list.go
- merge.go
- options.go
- record.go
- recovery_reader.go
- rwmanager.go
- rwmanger_fileio.go
- rwmanger_mmap.go
- set.go
- tar.go
- test_utils.go
- throttle.go
- tx.go
- tx_bucket.go
- tx_list.go
- tx_set.go
- tx_tree.go
- tx_zset.go
- utils.go
- value.go
