NutsDB

Version: v0.6.0 · Published: Mar 21, 2021 · License: Apache-2.0

NutsDB is a simple, fast, embeddable and persistent key/value store written in pure Go.

It supports fully serializable transactions and several data structures such as list, set, and sorted set. All operations happen inside a Tx. A Tx represents a transaction, which can be read-only or read-write. Read-only transactions can read values for a given bucket and key or iterate over a set of key-value pairs. Read-write transactions can read, update, and delete keys from the DB.

Motivation

I wanted a simple, fast, embeddable and persistent key/value store written in pure Go, ideally one that also supports richer data structures such as list, set, and sorted set.

There are some options around the embeddable key/value store in Go:

BoltDB is based on a B+ tree. It has good random-read performance and excellent sequential-scan performance, and it supports ACID transactions with serializable isolation, but its random-write performance is poor and it does not support additional data structures such as lists.

GoLevelDB is based on a log-structured merge-tree (LSM tree), but it does not support additional data structures either.

Badger is based on an LSM tree with a value log and is designed for SSDs. It also supports transactions, but its write performance was not as good as I expected, and it does not support additional data structures either.

Moreover, I was curious about how a key/value database is implemented. A database is often the core of a system, and understanding its internals (or implementing one yourself) helps you use similar databases more effectively and makes it easier to build a custom database when the business requires it.

So I tried to build a key/value store myself, and I looked for a simple storage-engine model as a reference. Eventually I found the Bitcask model. It is simple and easy to implement, but it has limitations: range and prefix queries are not efficient. For example, you cannot easily scan over all keys between user000000 and user999999; you have to look up each key individually in the hashmap.

To overcome these limitations, I optimized the model and named the result NutsDB. NutsDB offers high read/write performance and supports transactions. It still has plenty of room for optimization, and contributions to NutsDB are welcome.

Getting Started

Installing

To start using NutsDB, first install Go (version 1.11+ is required), then run go get:

go get -u github.com/xujiajun/nutsdb
Opening a database

To open your database, use the nutsdb.Open() function with the appropriate options. The Dir, EntryIdxMode, and SegmentSize options must be specified by the client. See the Options section below for details.

package main

import (
	"log"

	"github.com/xujiajun/nutsdb"
)

func main() {
	// Open the database located in the /tmp/nutsdb directory.
	// It will be created if it doesn't exist.
	opt := nutsdb.DefaultOptions
	opt.Dir = "/tmp/nutsdb"
	db, err := nutsdb.Open(opt)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	...
}
Options
  • Dir string

Dir specifies the directory in which the database is located.

  • EntryIdxMode EntryIdxMode

EntryIdxMode specifies which mode is used to index entries. There are three options: HintKeyValAndRAMIdxMode (RAM index holding both key and value), HintKeyAndRAMIdxMode (RAM index holding only the key), and HintBPTSparseIdxMode (B+ tree sparse index).

  • RWMode RWMode

RWMode specifies the read-write mode. There are two options: FileIO (standard file I/O) and MMap (memory-mapped I/O).

  • SegmentSize int64

NutsDB starts a new data file once the active file grows larger than SegmentSize. The current default SegmentSize is 8 MB, but you can customize it. Once set, it cannot be changed. See caveats & limitations for details.

  • NodeNum int64

NodeNum specifies the node number. The default NodeNum is 1, and the valid range is [1, 1023].

  • SyncEnable bool

SyncEnable specifies whether to call Sync() on writes. If SyncEnable is false, writes are faster but data may be lost on a crash; if true, writes are slower but durable.

  • StartFileLoadingMode RWMode

StartFileLoadingMode specifies which RWMode is used to load files when opening a database.

Default Options

Using DefaultOptions is recommended unless you know what you are doing.

var DefaultOptions = Options{
	EntryIdxMode:         HintKeyValAndRAMIdxMode,
	SegmentSize:          defaultSegmentSize,
	NodeNum:              1,
	RWMode:               FileIO,
	SyncEnable:           true,
	StartFileLoadingMode: MMap,
}
Transactions

NutsDB allows only one read-write transaction at a time but allows as many read-only transactions as you want at a time. Each transaction has a consistent view of the data as it existed when the transaction started.

When a transaction fails, it rolls back and reverts all changes that occurred to the database during that transaction. If the SyncEnable option is set to true, all changes are persisted to disk when a read-write transaction succeeds.

Creating a transaction from the DB is thread safe.

Read-write transactions
err := db.Update(
	func(tx *nutsdb.Tx) error {
	...
	return nil
})

Read-only transactions
err := db.View(
	func(tx *nutsdb.Tx) error {
	...
	return nil
})

Managing transactions manually

The DB.View() and DB.Update() functions are wrappers around the DB.Begin() function. These helper functions will start the transaction, execute a function, and then safely close your transaction if an error is returned. This is the recommended way to use NutsDB transactions.

However, sometimes you may want to manually start and end your transactions. You can use the DB.Begin() function directly but please be sure to close the transaction.

// Start a write transaction.
tx, err := db.Begin(true)
if err != nil {
    return err
}

bucket := "bucket1"
key := []byte("foo")
val := []byte("bar")

// Use the transaction.
if err = tx.Put(bucket, key, val, nutsdb.Persistent); err != nil {
	// Rollback the transaction.
	tx.Rollback()
} else {
	// Commit the transaction and check for error.
	if err = tx.Commit(); err != nil {
		tx.Rollback()
		return err
	}
}
Using buckets

Buckets are collections of key/value pairs within the database. All keys in a bucket must be unique. A bucket can be interpreted as a table or namespace, so you can store the same key in different buckets.


key := []byte("key001")
val := []byte("val001")

bucket001 := "bucket001"
if err := db.Update(
	func(tx *nutsdb.Tx) error {
		if err := tx.Put(bucket001, key, val, 0); err != nil {
			return err
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}

bucket002 := "bucket002"
if err := db.Update(
	func(tx *nutsdb.Tx) error {
		if err := tx.Put(bucket002, key, val, 0); err != nil {
			return err
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}

A bucket is also tied to the data structure you use: the same bucket name refers to a different index for each data structure. For example, if you define a bucket named bucket_foo and use the list data structure, adding data with tx.RPush, you must query or retrieve that data with list operations such as tx.RPop or tx.LRange. You cannot use tx.Get (or other operations on the same index type, such as tx.GetAll, tx.Put, tx.Delete, and tx.RangeScan) to read the data in bucket_foo, because the index structures are different. The same applies to other data structures such as Set and Sorted Set.

Using key/value pairs

To save a key/value pair to a bucket, use the tx.Put method:


if err := db.Update(
	func(tx *nutsdb.Tx) error {
	key := []byte("name1")
	val := []byte("val1")
	bucket := "bucket1"
	if err := tx.Put(bucket, key, val, 0); err != nil {
		return err
	}
	return nil
}); err != nil {
	log.Fatal(err)
}

This will set the value of the "name1" key to "val1" in the bucket1 bucket.

To update the value of the "name1" key, we can still use the tx.Put function:

if err := db.Update(
	func(tx *nutsdb.Tx) error {
	key := []byte("name1")
	val := []byte("val1-modify") // Update the value
	bucket := "bucket1"
	if err := tx.Put(bucket, key, val, 0); err != nil {
		return err
	}
	return nil
}); err != nil {
	log.Fatal(err)
}

To retrieve this value, we can use the tx.Get function:

if err := db.View(
func(tx *nutsdb.Tx) error {
	key := []byte("name1")
	bucket := "bucket1"
	if e, err := tx.Get(bucket, key); err != nil {
		return err
	} else {
		fmt.Println(string(e.Value)) // "val1-modify"
	}
	return nil
}); err != nil {
	log.Println(err)
}

Use the tx.Delete() function to delete a key from the bucket.

if err := db.Update(
	func(tx *nutsdb.Tx) error {
	key := []byte("name1")
	bucket := "bucket1"
	if err := tx.Delete(bucket, key); err != nil {
		return err
	}
	return nil
}); err != nil {
	log.Fatal(err)
}
Using TTL(Time To Live)

NutsDB supports TTL (Time to Live) for keys; use the tx.Put function with a ttl parameter.

if err := db.Update(
	func(tx *nutsdb.Tx) error {
	key := []byte("name1")
	val := []byte("val1")
	bucket := "bucket1"
	
	// If ttl is 0 or Persistent, this key will never expire.
	// If ttl is 60, the key expires after 60 seconds.
	if err := tx.Put(bucket, key, val, 60); err != nil {
		return err
	}
	return nil
}); err != nil {
	log.Fatal(err)
}
Iterating over keys

NutsDB stores its keys in byte-sorted order within a bucket. This makes sequential iteration over these keys extremely fast.

Prefix scans

To iterate over a key prefix, we can use the PrefixScan function; the parameters offsetNum and limitNum constrain the number of entries returned:


if err := db.View(
	func(tx *nutsdb.Tx) error {
		prefix := []byte("user_")
		bucket := "user_list"
		// Skip the first 25 matching entries and return at most 100.
		if entries, _, err := tx.PrefixScan(bucket, prefix, 25, 100); err != nil {
			return err
		} else {
			for _, entry := range entries {
				fmt.Println(string(entry.Key), string(entry.Value))
			}
		}
		return nil
	}); err != nil {
		log.Fatal(err)
}

Prefix search scans

To iterate over keys with a given prefix while matching the remainder of the key (after the prefix) against a regular expression, we can use the PrefixSearchScan function; the parameters offsetNum and limitNum constrain the number of entries returned:


if err := db.View(
	func(tx *nutsdb.Tx) error {
		prefix := []byte("user_")
		reg := "username"
		bucket := "user_list"
		// Skip the first 25 matching entries and return at most 100.
		if entries, _, err := tx.PrefixSearchScan(bucket, prefix, reg, 25, 100); err != nil {
			return err
		} else {
			for _, entry := range entries {
				fmt.Println(string(entry.Key), string(entry.Value))
			}
		}
		return nil
	}); err != nil {
		log.Fatal(err)
}

Range scans

To scan over a range, we can use the RangeScan function. For example:

if err := db.View(
	func(tx *nutsdb.Tx) error {
		// Assume key from user_0000000 to user_9999999.
		// Query a specific user key range like this.
		start := []byte("user_0010001")
		end := []byte("user_0010010")
		bucket := "user_list"
		if entries, err := tx.RangeScan(bucket, start, end); err != nil {
			return err
		} else {
			for _, entry := range entries {
				fmt.Println(string(entry.Key), string(entry.Value))
			}
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
Get all

To scan all keys and values stored in a bucket, we can use the GetAll function. For example:

if err := db.View(
	func(tx *nutsdb.Tx) error {
		bucket := "user_list"
		entries, err := tx.GetAll(bucket)
		if err != nil {
			return err
		}

		for _, entry := range entries {
			fmt.Println(string(entry.Key), string(entry.Value))
		}

		return nil
	}); err != nil {
	log.Println(err)
}
Merge Operation

NutsDB supports a merge operation. You can use the db.Merge() function to remove dirty data and reduce data redundancy. It runs in a read-write transaction and will affect other write requests, so execute it at an appropriate time.

err := db.Merge()
if err != nil {
    ...
}

Notice: the HintBPTSparseIdxMode mode does not support the merge operation in the current version.

Database backup

NutsDB is easy to back up. You can use the db.Backup() function with a target directory. It runs in a read-only transaction, performs a hot backup, and does not block other database reads and writes.

err = db.Backup(dir)
if err != nil {
   ...
}
Using other data structures

The syntax here is modeled after Redis commands.

List
RPush

Inserts the values at the tail of the list stored in the bucket at given bucket, key and values.

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "bucketForList"
		key := []byte("myList")
		val := []byte("val1")
		return tx.RPush(bucket, key, val)
	}); err != nil {
	log.Fatal(err)
}
LPush

Inserts the values at the head of the list stored in the bucket at given bucket, key and values.

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "bucketForList"
		key := []byte("myList")
		val := []byte("val2")
		return tx.LPush(bucket, key, val)
	}); err != nil {
	log.Fatal(err)
}
LPop

Removes and returns the first element of the list stored in the bucket at given bucket and key.

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "bucketForList"
		key := []byte("myList")
		if item, err := tx.LPop(bucket, key); err != nil {
			return err
		} else {
			fmt.Println("LPop item:", string(item))
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
LPeek

Returns the first element of the list stored in the bucket at given bucket and key.

if err := db.View(
	func(tx *nutsdb.Tx) error {
		bucket := "bucketForList"
		key := []byte("myList")
		if item, err := tx.LPeek(bucket, key); err != nil {
			return err
		} else {
			fmt.Println("LPeek item:", string(item)) //val11
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
RPop

Removes and returns the last element of the list stored in the bucket at given bucket and key.

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "bucketForList"
		key := []byte("myList")
		if item, err := tx.RPop(bucket, key); err != nil {
			return err
		} else {
			fmt.Println("RPop item:", string(item))
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
RPeek

Returns the last element of the list stored in the bucket at given bucket and key.

if err := db.View(
	func(tx *nutsdb.Tx) error {
		bucket := "bucketForList"
		key := []byte("myList")
		if item, err := tx.RPeek(bucket, key); err != nil {
			return err
		} else {
			fmt.Println("RPeek item:", string(item))
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
LRange

Returns the specified elements of the list stored in the bucket at given bucket, key, start and end. The offsets start and end are zero-based indexes: 0 is the first element of the list (the head), 1 the next element, and so on. start and end can also be negative numbers indicating offsets from the end of the list, where -1 is the last element, -2 the penultimate, and so on.

if err := db.View(
	func(tx *nutsdb.Tx) error {
		bucket := "bucketForList"
		key := []byte("myList")
		if items, err := tx.LRange(bucket, key, 0, -1); err != nil {
			return err
		} else {
			//fmt.Println(items)
			for _, item := range items {
				fmt.Println(string(item))
			}
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
LRem

Note: This feature can be used starting from v0.6.0

Removes the first count occurrences of elements equal to value from the list stored in the bucket at given bucket, key and count. The count argument influences the operation in the following ways:

  • count > 0: Remove elements equal to value moving from head to tail.
  • count < 0: Remove elements equal to value moving from tail to head.
  • count = 0: Remove all elements equal to value.
if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "bucketForList"
		key := []byte("myList")
		return tx.LRem(bucket, key, 1, []byte("val11"))
	}); err != nil {
	log.Fatal(err)
}
LSet

Sets the list element at index to value.

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "bucketForList"
		key := []byte("myList")
		if err := tx.LSet(bucket, key, 0, []byte("val11")); err != nil {
			return err
		} else {
			fmt.Println("LSet ok, index 0 item value => val11")
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
Ltrim

Trims an existing list so that it contains only the specified range of elements. The offsets start and end are zero-based indexes: 0 is the first element of the list (the head), 1 the next element, and so on. start and end can also be negative numbers indicating offsets from the end of the list, where -1 is the last element, -2 the penultimate, and so on.

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "bucketForList"
		key := []byte("myList")
		return tx.LTrim(bucket, key, 0, 1)
	}); err != nil {
	log.Fatal(err)
}
LSize

Returns the size of the list stored in the bucket at given bucket and key.

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "bucketForList"
		key := []byte("myList")
		if size, err := tx.LSize(bucket, key); err != nil {
			return err
		} else {
			fmt.Println("myList size is ", size)
		}
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
Set
SAdd

Adds the specified members to the set stored in the bucket at given bucket, key and items.

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "bucketForSet"
		key := []byte("mySet")
		return tx.SAdd(bucket, key, []byte("a"), []byte("b"), []byte("c"))
	}); err != nil {
	log.Fatal(err)
}
SAreMembers

Returns whether the specified members are all members of the set in the bucket at given bucket, key and items.

if err := db.View(
	func(tx *nutsdb.Tx) error {
		bucket := "bucketForSet"
		key := []byte("mySet")
		if ok, err := tx.SAreMembers(bucket, key, []byte("a"), []byte("b"), []byte("c")); err != nil {
			return err
		} else {
			fmt.Println("SAreMembers:", ok)
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
SCard

Returns the set cardinality (number of elements) of the set stored in the bucket at given bucket and key.


if err := db.View(
	func(tx *nutsdb.Tx) error {
		bucket := "bucketForSet"
		key := []byte("mySet")
		if num, err := tx.SCard(bucket, key); err != nil {
			return err
		} else {
			fmt.Println("SCard:", num)
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}

SDiffByOneBucket

Returns the members of the set resulting from the difference between the first set and all the successive sets in one bucket.


key1 := []byte("mySet1")
key2 := []byte("mySet2")
bucket := "bucketForSet"

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		return tx.SAdd(bucket, key1, []byte("a"), []byte("b"), []byte("c"))
	}); err != nil {
	log.Fatal(err)
}

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		return tx.SAdd(bucket, key2, []byte("c"), []byte("d"))
	}); err != nil {
	log.Fatal(err)
}

if err := db.View(
	func(tx *nutsdb.Tx) error {
		if items, err := tx.SDiffByOneBucket(bucket, key1, key2); err != nil {
			return err
		} else {
			fmt.Println("SDiffByOneBucket:", items)
			for _, item := range items {
				fmt.Println("item", string(item))
			}
			//item a
			//item b
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}

SDiffByTwoBuckets

Returns the members of the set resulting from the difference between the first set and all the successive sets in two buckets.

bucket1 := "bucket1"
key1 := []byte("mySet1")

bucket2 := "bucket2"
key2 := []byte("mySet2")

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		return tx.SAdd(bucket1, key1, []byte("a"), []byte("b"), []byte("c"))
	}); err != nil {
	log.Fatal(err)
}

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		return tx.SAdd(bucket2, key2, []byte("c"), []byte("d"))
	}); err != nil {
	log.Fatal(err)
}

if err := db.View(
	func(tx *nutsdb.Tx) error {
		if items, err := tx.SDiffByTwoBuckets(bucket1, key1, bucket2, key2); err != nil {
			return err
		} else {
			fmt.Println("SDiffByTwoBuckets:", items)
			for _, item := range items {
				fmt.Println("item", string(item))
			}
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}

SHasKey

Returns whether a set exists in the bucket at given bucket and key.


bucket := "bucketForSet"

if err := db.View(
	func(tx *nutsdb.Tx) error {
		if ok, err := tx.SHasKey(bucket, []byte("mySet")); err != nil {
			return err
		} else {
			fmt.Println("SHasKey", ok)
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}

SIsMember

Returns whether member is a member of the set stored in the bucket at given bucket, key and item.

bucket := "bucketForSet"

if err := db.View(
	func(tx *nutsdb.Tx) error {
		if ok, err := tx.SIsMember(bucket, []byte("mySet"), []byte("a")); err != nil {
			return err
		} else {
			fmt.Println("SIsMember", ok)
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
SMembers

Returns all the members of the set value stored in the bucket at given bucket and key.

bucket := "bucketForSet"

if err := db.View(
	func(tx *nutsdb.Tx) error {
		if items, err := tx.SMembers(bucket, []byte("mySet")); err != nil {
			return err
		} else {
			fmt.Println("SMembers", items)
			for _, item := range items {
				fmt.Println("item", string(item))
			}
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
SMoveByOneBucket

Moves member from the set at source to the set at destination in one bucket.

bucket3 := "bucket3"

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		return tx.SAdd(bucket3, []byte("mySet1"), []byte("a"), []byte("b"), []byte("c"))
	}); err != nil {
	log.Fatal(err)
}
if err := db.Update(
	func(tx *nutsdb.Tx) error {
		return tx.SAdd(bucket3, []byte("mySet2"), []byte("c"), []byte("d"), []byte("e"))
	}); err != nil {
	log.Fatal(err)
}

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		if ok, err := tx.SMoveByOneBucket(bucket3, []byte("mySet1"), []byte("mySet2"), []byte("a")); err != nil {
			return err
		} else {
			fmt.Println("SMoveByOneBucket", ok)
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}

if err := db.View(
	func(tx *nutsdb.Tx) error {
		if items, err := tx.SMembers(bucket3, []byte("mySet1")); err != nil {
			return err
		} else {
			fmt.Println("after SMoveByOneBucket bucket3 mySet1 SMembers", items)
			for _, item := range items {
				fmt.Println("item", string(item))
			}
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}

if err := db.View(
	func(tx *nutsdb.Tx) error {
		if items, err := tx.SMembers(bucket3, []byte("mySet2")); err != nil {
			return err
		} else {
			fmt.Println("after SMoveByOneBucket bucket3 mySet2 SMembers", items)
			for _, item := range items {
				fmt.Println("item", string(item))
			}
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
SMoveByTwoBuckets

Moves member from the set at source to the set at destination in two buckets.

bucket4 := "bucket4"
bucket5 := "bucket5"
if err := db.Update(
	func(tx *nutsdb.Tx) error {
		return tx.SAdd(bucket4, []byte("mySet1"), []byte("a"), []byte("b"), []byte("c"))
	}); err != nil {
	log.Fatal(err)
}
if err := db.Update(
	func(tx *nutsdb.Tx) error {
		return tx.SAdd(bucket5, []byte("mySet2"), []byte("c"), []byte("d"), []byte("e"))
	}); err != nil {
	log.Fatal(err)
}

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		if ok, err := tx.SMoveByTwoBuckets(bucket4, []byte("mySet1"), bucket5, []byte("mySet2"), []byte("a")); err != nil {
			return err
		} else {
			fmt.Println("SMoveByTwoBuckets", ok)
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}

if err := db.View(
	func(tx *nutsdb.Tx) error {
		if items, err := tx.SMembers(bucket4, []byte("mySet1")); err != nil {
			return err
		} else {
			fmt.Println("after SMoveByTwoBuckets bucket4 mySet1 SMembers", items)
			for _, item := range items {
				fmt.Println("item", string(item))
			}
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}

if err := db.View(
	func(tx *nutsdb.Tx) error {
		if items, err := tx.SMembers(bucket5, []byte("mySet2")); err != nil {
			return err
		} else {
			fmt.Println("after SMoveByTwoBuckets bucket5 mySet2 SMembers", items)
			for _, item := range items {
				fmt.Println("item", string(item))
			}
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
SPop

Removes and returns one or more random elements from the set value stored in the bucket at given bucket and key.

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "bucketForSet"
		key := []byte("mySet")
		if item, err := tx.SPop(bucket, key); err != nil {
			return err
		} else {
			fmt.Println("SPop item from mySet:", string(item))
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
SRem

Removes the specified members from the set stored in the bucket at given bucket, key and items.

bucket6 := "bucket6"
if err := db.Update(
	func(tx *nutsdb.Tx) error {
		return tx.SAdd(bucket6, []byte("mySet"), []byte("a"), []byte("b"), []byte("c"))
	}); err != nil {
	log.Fatal(err)
}

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		if err := tx.SRem(bucket6, []byte("mySet"), []byte("a")); err != nil {
			return err
		} else {
			fmt.Println("SRem ok")
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}

if err := db.View(
	func(tx *nutsdb.Tx) error {
		if items, err := tx.SMembers(bucket6, []byte("mySet")); err != nil {
			return err
		} else {
			fmt.Println("SMembers items:", items)
			for _, item := range items {
				fmt.Println("item:", string(item))
			}
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
SUnionByOneBucket

The members of the set resulting from the union of all the given sets in one bucket.

bucket7 := "bucket1"
key1 := []byte("mySet1")
key2 := []byte("mySet2")

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		return tx.SAdd(bucket7, key1, []byte("a"), []byte("b"), []byte("c"))
	}); err != nil {
	log.Fatal(err)
}

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		return tx.SAdd(bucket7, key2, []byte("c"), []byte("d"))
	}); err != nil {
	log.Fatal(err)
}

if err := db.View(
	func(tx *nutsdb.Tx) error {
		if items, err := tx.SUnionByOneBucket(bucket7, key1, key2); err != nil {
			return err
		} else {
			fmt.Println("SUnionByOneBucket:", items)
			for _, item := range items {
				fmt.Println("item", string(item))
			}
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
SUnionByTwoBuckets

The members of the set resulting from the union of all the given sets in two buckets.

bucket8 := "bucket1"
key1 := []byte("mySet1")

bucket9 := "bucket2"
key2 := []byte("mySet2")

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		return tx.SAdd(bucket8, key1, []byte("a"), []byte("b"), []byte("c"))
	}); err != nil {
	log.Fatal(err)
}

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		return tx.SAdd(bucket9, key2, []byte("c"), []byte("d"))
	}); err != nil {
	log.Fatal(err)
}

if err := db.View(
	func(tx *nutsdb.Tx) error {
		if items, err := tx.SUnionByTwoBuckets(bucket8, key1, bucket9, key2); err != nil {
			return err
		} else {
			fmt.Println("SUnionByTwoBucket:", items)
			for _, item := range items {
				fmt.Println("item", string(item))
			}
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
Sorted Set
ZAdd

Adds the specified member with the specified score and the specified value to the sorted set stored at bucket.

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet1"
		key := []byte("key1")
		return tx.ZAdd(bucket, key, 1, []byte("val1"))
	}); err != nil {
	log.Fatal(err)
}
ZCard

Returns the sorted set cardinality (number of elements) of the sorted set stored at bucket.

if err := db.View(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet1"
		if num, err := tx.ZCard(bucket); err != nil {
			return err
		} else {
			fmt.Println("ZCard num", num)
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
ZCount

Returns the number of elements in the sorted set at bucket with a score between min and max, subject to opts.

Opts includes the following parameters:

  • Limit int // limit the max nodes to return
  • ExcludeStart bool // exclude the start value, so the search interval is (start, end] or (start, end)
  • ExcludeEnd bool // exclude the end value, so the search interval is [start, end) or (start, end)
if err := db.View(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet1"
		if num, err := tx.ZCount(bucket, 0, 1, nil); err != nil {
			return err
		} else {
			fmt.Println("ZCount num", num)
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
ZGetByKey

Returns the node in the bucket at given bucket and key.

if err := db.View(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet1"
		key := []byte("key2")
		if node, err := tx.ZGetByKey(bucket, key); err != nil {
			return err
		} else {
			fmt.Println("ZGetByKey key2 val:", string(node.Value))
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
ZMembers

Returns all the members of the set value stored at bucket.

if err := db.View(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet1"
		if nodes, err := tx.ZMembers(bucket); err != nil {
			return err
		} else {
			fmt.Println("ZMembers:", nodes)

			for _, node := range nodes {
				fmt.Println("member:", node.Key(), string(node.Value))
			}
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
ZPeekMax

Returns the member with the highest score in the sorted set stored at bucket.

if err := db.View(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet1"
		if node, err := tx.ZPeekMax(bucket); err != nil {
			return err
		} else {
			fmt.Println("ZPeekMax:", string(node.Value))
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
ZPeekMin

Returns the member with lowest score in the sorted set stored at bucket.

if err := db.View(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet1"
		if node, err := tx.ZPeekMin(bucket); err != nil {
			return err
		} else {
			fmt.Println("ZPeekMin:", string(node.Value))
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
ZPopMax

Removes and returns the member with the highest score in the sorted set stored at bucket.

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet1"
		if node, err := tx.ZPopMax(bucket); err != nil {
			return err
		} else {
			fmt.Println("ZPopMax:", string(node.Value)) //val3
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
ZPopMin

Removes and returns the member with the lowest score in the sorted set stored at bucket.

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet1"
		if node, err := tx.ZPopMin(bucket); err != nil {
			return err
		} else {
			fmt.Println("ZPopMin:", string(node.Value)) //val1
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
ZRangeByRank

Returns all the elements in the sorted set in one bucket at bucket and key with a rank between start and end (including elements with rank equal to start or end).

// ZAdd add items
if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet2"
		key1 := []byte("key1")
		return tx.ZAdd(bucket, key1, 1, []byte("val1"))
	}); err != nil {
	log.Fatal(err)
}

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet2"
		key2 := []byte("key2")
		return tx.ZAdd(bucket, key2, 2, []byte("val2"))
	}); err != nil {
	log.Fatal(err)
}

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet2"
		key3 := []byte("key3")
		return tx.ZAdd(bucket, key3, 3, []byte("val3"))
	}); err != nil {
	log.Fatal(err)
}

// ZRangeByRank
if err := db.View(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet2"
		if nodes, err := tx.ZRangeByRank(bucket, 1, 2); err != nil {
			return err
		} else {
			fmt.Println("ZRangeByRank nodes :", nodes)
			for _, node := range nodes {
				fmt.Println("item:", node.Key(), node.Score())
			}
			
			//item: key1 1
			//item: key2 2
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
ZRangeByScore

Returns all the elements in the sorted set at key with a score between min and max. And the parameter Opts is the same as ZCount's.

// ZAdd
if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet3"
		key1 := []byte("key1")
		return tx.ZAdd(bucket, key1, 70, []byte("val1"))
	}); err != nil {
	log.Fatal(err)
}

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet3"
		key2 := []byte("key2")
		return tx.ZAdd(bucket, key2, 90, []byte("val2"))
	}); err != nil {
	log.Fatal(err)
}

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet3"
		key3 := []byte("key3")
		return tx.ZAdd(bucket, key3, 86, []byte("val3"))
	}); err != nil {
	log.Fatal(err)
}

// ZRangeByScore
if err := db.View(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet3"
		if nodes, err := tx.ZRangeByScore(bucket, 80, 100, nil); err != nil {
			return err
		} else {
			fmt.Println("ZRangeByScore nodes :", nodes)
			for _, node := range nodes {
				fmt.Println("item:", node.Key(), node.Score())
			}
			//item: key3 86
			//item: key2 90
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
ZRank

Returns the rank of member in the sorted set stored in the bucket at given bucket and key, with the scores ordered from low to high.


// ZAdd
if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet4"
		key1 := []byte("key1")
		return tx.ZAdd(bucket, key1, 70, []byte("val1"))
	}); err != nil {
	log.Fatal(err)
}

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet4"
		key2 := []byte("key2")
		return tx.ZAdd(bucket, key2, 90, []byte("val2"))
	}); err != nil {
	log.Fatal(err)
}

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet4"
		key3 := []byte("key3")
		return tx.ZAdd(bucket, key3, 86, []byte("val3"))
	}); err != nil {
	log.Fatal(err)
}

// ZRank
if err := db.View(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet4"
		key1 := []byte("key1")
		if rank, err := tx.ZRank(bucket, key1); err != nil {
			return err
		} else {
			fmt.Println("key1 ZRank :", rank) // key1 ZRank : 1
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
ZRevRank

Returns the rank of member in the sorted set stored in the bucket at given bucket and key, with the scores ordered from high to low.

// ZAdd
if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet8"
		key1 := []byte("key1")
		return tx.ZAdd(bucket, key1, 10, []byte("val1"))
	}); err != nil {
	log.Fatal(err)
}
if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet8"
		key2 := []byte("key2")
		return tx.ZAdd(bucket, key2, 20, []byte("val2"))
	}); err != nil {
	log.Fatal(err)
}
if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet8"
		key3 := []byte("key3")
		return tx.ZAdd(bucket, key3, 30, []byte("val3"))
	}); err != nil {
	log.Fatal(err)
}

// ZRevRank
if err := db.View(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet8"
		if rank, err := tx.ZRevRank(bucket, []byte("key3")); err != nil {
			return err
		} else {
			fmt.Println("ZRevRank key3 rank:", rank) // ZRevRank key3 rank: 1
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
ZRem

Removes the specified members from the sorted set stored in one bucket at given bucket and key.

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet5"
		key1 := []byte("key1")
		return tx.ZAdd(bucket, key1, 10, []byte("val1"))
	}); err != nil {
	log.Fatal(err)
}

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet5"
		key2 := []byte("key2")
		return tx.ZAdd(bucket, key2, 20, []byte("val2"))
	}); err != nil {
	log.Fatal(err)
}

if err := db.View(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet5"
		if nodes, err := tx.ZMembers(bucket); err != nil {
			return err
		} else {
			fmt.Println("before ZRem key1, ZMembers nodes", nodes)
			for _, node := range nodes {
				fmt.Println("item:", node.Key(), node.Score())
			}
		}
		// before ZRem key1, ZMembers nodes map[key1:0xc00008cfa0 key2:0xc00008d090]
		// item: key1 10
		// item: key2 20
		return nil
	}); err != nil {
	log.Fatal(err)
}

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet5"
		if err := tx.ZRem(bucket, "key1"); err != nil {
			return err
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}

if err := db.View(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet5"
		if nodes, err := tx.ZMembers(bucket); err != nil {
			return err
		} else {
			fmt.Println("after ZRem key1, ZMembers nodes", nodes)
			for _, node := range nodes {
				fmt.Println("item:", node.Key(), node.Score())
			}
			// after ZRem key1, ZMembers nodes map[key2:0xc00008d090]
			// item: key2 20
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}

ZRemRangeByRank

Removes all elements in the sorted set stored in the given bucket with rank between start and end. The rank is a 1-based integer. Rank 1 means the first node; rank -1 means the last node.

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet6"
		key1 := []byte("key1")
		return tx.ZAdd(bucket, key1, 10, []byte("val1"))
	}); err != nil {
	log.Fatal(err)
}

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet6"
		key2 := []byte("key2")
		return tx.ZAdd(bucket, key2, 20, []byte("val2"))
	}); err != nil {
	log.Fatal(err)
}

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet6"
		key3 := []byte("key3")
		return tx.ZAdd(bucket, key3, 30, []byte("val3"))
	}); err != nil {
	log.Fatal(err)
}

if err := db.View(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet6"
		if nodes, err := tx.ZMembers(bucket); err != nil {
			return err
		} else {
			fmt.Println("before ZRemRangeByRank, ZMembers nodes", nodes)
			for _, node := range nodes {
				fmt.Println("item:", node.Key(), node.Score())
			}
			// before ZRemRangeByRank, ZMembers nodes map[key3:0xc00008d450 key1:0xc00008d270 key2:0xc00008d360]
			// item: key1 10
			// item: key2 20
			// item: key3 30
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}

if err := db.Update(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet6"
		if err := tx.ZRemRangeByRank(bucket, 1, 2); err != nil {
			return err
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}

if err := db.View(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet6"
		if nodes, err := tx.ZMembers(bucket); err != nil {
			return err
		} else {
			fmt.Println("after ZRemRangeByRank, ZMembers nodes", nodes)
			for _, node := range nodes {
				fmt.Println("item:", node.Key(), node.Score())
			}
			// after ZRemRangeByRank, ZMembers nodes map[key3:0xc00008d450]
			// item: key3 30
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
ZScore

Returns the score of member in the sorted set in the bucket at given bucket and key.

if err := db.View(
	func(tx *nutsdb.Tx) error {
		bucket := "myZSet7"
		if score, err := tx.ZScore(bucket, []byte("key1")); err != nil {
			return err
		} else {
			fmt.Println("ZScore key1 score:", score)
		}
		return nil
	}); err != nil {
	log.Fatal(err)
}
Comparison with other databases
BoltDB

BoltDB is similar to NutsDB in that both support transactions. However, BoltDB uses a B+ tree internally and stores everything in a single file, while NutsDB is based on the bitcask model with multiple log files. NutsDB supports TTL and many data structures such as list, set and sorted set, which BoltDB does not.

LevelDB, RocksDB

LevelDB and RocksDB are based on a log-structured merge-tree (LSM tree). An LSM tree optimizes random writes by using a write-ahead log and multi-tiered, sorted files called SSTables. LevelDB does not support transactions; it supports batch writing of key/value pairs and read snapshots, but it will not give you the ability to do a compare-and-swap operation safely. NutsDB supports many data structures, which they do not.

Badger

Badger is based on an LSM tree with a value log. It is designed for SSDs and also supports transactions and TTL. But in my benchmark its write performance was not as good as I expected. In addition, NutsDB supports data structures such as list, set and sorted set, which Badger does not.

Benchmarks

Tested kvstore

Selected key/value stores that are embeddable, persistent and support transactions.

  • BadgerDB (master branch with default options)
  • BoltDB (master branch with default options)
  • NutsDB (master branch with default options or custom options)

Benchmark System:

  • Go Version : go1.11.4 darwin/amd64
  • OS: Mac OS X 10.13.6
  • Architecture: x86_64
  • 16 GB 2133 MHz LPDDR3
  • CPU: 3.1 GHz Intel Core i7

Benchmark results:

badger 2019/03/11 18:06:05 INFO: All 0 tables opened in 0s
goos: darwin
goarch: amd64
pkg: github.com/xujiajun/kvstore-bench
BenchmarkBadgerDBPutValue64B-8    	   10000	    112382 ns/op	    2374 B/op	      74 allocs/op
BenchmarkBadgerDBPutValue128B-8   	   20000	     94110 ns/op	    2503 B/op	      74 allocs/op
BenchmarkBadgerDBPutValue256B-8   	   20000	     93480 ns/op	    2759 B/op	      74 allocs/op
BenchmarkBadgerDBPutValue512B-8   	   10000	    101407 ns/op	    3271 B/op	      74 allocs/op
BenchmarkBadgerDBGet-8            	 1000000	      1552 ns/op	     416 B/op	       9 allocs/op
BenchmarkBoltDBPutValue64B-8      	   10000	    203128 ns/op	   21231 B/op	      62 allocs/op
BenchmarkBoltDBPutValue128B-8     	    5000	    229568 ns/op	   13716 B/op	      64 allocs/op
BenchmarkBoltDBPutValue256B-8     	   10000	    196513 ns/op	   17974 B/op	      64 allocs/op
BenchmarkBoltDBPutValue512B-8     	   10000	    199805 ns/op	   17064 B/op	      64 allocs/op
BenchmarkBoltDBGet-8              	 1000000	      1122 ns/op	     592 B/op	      10 allocs/op
BenchmarkNutsDBPutValue64B-8      	   30000	     53614 ns/op	     626 B/op	      14 allocs/op
BenchmarkNutsDBPutValue128B-8     	   30000	     51998 ns/op	     664 B/op	      13 allocs/op
BenchmarkNutsDBPutValue256B-8     	   30000	     53958 ns/op	     920 B/op	      13 allocs/op
BenchmarkNutsDBPutValue512B-8     	   30000	     55787 ns/op	    1432 B/op	      13 allocs/op
BenchmarkNutsDBGet-8              	 2000000	       661 ns/op	      88 B/op	       3 allocs/op
BenchmarkNutsDBGetByHintKey-8     	   50000	     27255 ns/op	     840 B/op	      16 allocs/op
PASS
ok  	github.com/xujiajun/kvstore-bench	83.856s

Conclusions:

Put(write) Performance:

NutsDB is the fastest: roughly 2-5x faster than BoltDB and 0.5-2x faster than BadgerDB, while BadgerDB is 1-3x faster than BoltDB.

Get(read) Performance:

All are fast, with NutsDB roughly 2x faster than the others in its default mode. Note that NutsDB reads via the HintKey option are much slower than via the default option.

The benchmark code can be found in the gokvstore-bench repo.

Caveats & Limitations
Index mode

Since version v0.3.0, NutsDB supports two entry index modes: HintKeyValAndRAMIdxMode and HintKeyAndRAMIdxMode. Since version v0.5.0, NutsDB also supports HintBPTSparseIdxMode.

The default mode is HintKeyValAndRAMIdxMode: entries are fully indexed in RAM, so read/write performance is fast, but the database cannot grow much larger than the available physical RAM. With HintKeyAndRAMIdxMode, the index does not cache the value of the entry; write performance is still fast, but retrieving a key requires seeking to an offset in the data file, so reads are slower than in the RAM mode, while saving memory. HintBPTSparseIdxMode is based on a B+ tree sparse index; this mode saves a great deal of memory (1 billion entries use only about 80MB of memory). Other data structures such as list, set and sorted set are only supported in HintKeyValAndRAMIdxMode. You cannot switch back and forth between modes because the index structures are different.

NutsDB will truncate the data file if the active file grows larger than SegmentSize, so the size of an entry cannot exceed SegmentSize. The default SegmentSize is 8MB; you can set it via opt.SegmentSize before opening the DB. Once set, it cannot be changed.
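As a sketch, both the index mode and the segment size can be set on the options before opening the DB. The directory path below is illustrative:

```go
opt := nutsdb.DefaultOptions
opt.Dir = "/tmp/nutsdb_example" // illustrative path
// Index keys only in RAM; values are read from the data file on demand.
opt.EntryIdxMode = nutsdb.HintKeyAndRAMIdxMode
// SegmentSize also caps the size of a single entry; fixed once set.
opt.SegmentSize = 16 * 1024 * 1024 // 16MB

db, err := nutsdb.Open(opt)
if err != nil {
	log.Fatal(err)
}
defer db.Close()
```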

Supported OS

NutsDB currently works on Mac OS, Linux and Windows.

About merge operation

In the current version, the HintBPTSparseIdxMode mode does not support the merge operation.

About transactions

Using the latest version is recommended.

Contact
Contributing

See CONTRIBUTING for details on submitting patches and the contribution workflow.

Acknowledgements

This package is inspired by the following:

License

The NutsDB is open-sourced software licensed under the Apache 2.0 license.

Documentation

Overview

Package nutsdb implements a simple, fast, embeddable and persistent key/value store written in pure Go. It supports fully serializable transactions, and it also supports data structures such as list, set and sorted set.

NutsDB currently works on Mac OS, Linux and Windows.

Usage

NutsDB has the following main types: DB, BPTree, Entry, DataFile and Tx. NutsDB also supports buckets; a bucket is a collection of unique keys that are associated with values.

All operations happen inside a Tx. Tx represents a transaction, which can be read-only or read-write. Read-only transactions can read values for a given key, or iterate over a set of key-value pairs (prefix scanning or range scanning). Read-write transactions can also update and delete keys from the DB.

See the examples for more usage details.

Index

Constants

View Source
const (

	// DefaultInvalidAddress returns default invalid node address.
	DefaultInvalidAddress = -1

	// RangeScan returns range scanMode flag.
	RangeScan = "RangeScan"

	// PrefixScan returns prefix scanMode flag.
	PrefixScan = "PrefixScan"

	// PrefixSearchScan returns prefix and search scanMode flag.
	PrefixSearchScan = "PrefixSearchScan"

	// CountFlagEnabled returns enabled CountFlag.
	CountFlagEnabled = true

	// CountFlagDisabled returns disabled CountFlag.
	CountFlagDisabled = false

	// BPTIndexSuffix returns b+ tree index suffix.
	BPTIndexSuffix = ".bptidx"

	// BPTRootIndexSuffix returns b+ tree root index suffix.
	BPTRootIndexSuffix = ".bptridx"

	// BPTTxIDIndexSuffix returns b+ tree tx ID index suffix.
	BPTTxIDIndexSuffix = ".bpttxid"

	// BPTRootTxIDIndexSuffix returns b+ tree root tx ID index suffix.
	BPTRootTxIDIndexSuffix = ".bptrtxid"
)
View Source
const (
	// BucketMetaHeaderSize returns the header size of the BucketMeta.
	BucketMetaHeaderSize = 12

	// BucketMetaSuffix returns b+ tree index suffix.
	BucketMetaSuffix = ".meta"
)
View Source
const (
	// DataSuffix returns the data suffix
	DataSuffix = ".dat"

	// DataEntryHeaderSize returns the entry header size
	DataEntryHeaderSize = 42
)
View Source
const (
	// DataDeleteFlag represents the data delete flag
	DataDeleteFlag uint16 = iota

	// DataSetFlag represents the data set flag
	DataSetFlag

	// DataLPushFlag represents the data LPush flag
	DataLPushFlag

	// DataRPushFlag represents the data RPush flag
	DataRPushFlag

	// DataLRemFlag represents the data LRem flag
	DataLRemFlag

	// DataLPopFlag represents the data LPop flag
	DataLPopFlag

	// DataRPopFlag represents the data RPop flag
	DataRPopFlag

	// DataLSetFlag represents the data LSet flag
	DataLSetFlag

	// DataLTrimFlag represents the data LTrim flag
	DataLTrimFlag

	// DataZAddFlag represents the data ZAdd flag
	DataZAddFlag

	// DataZRemFlag represents the data ZRem flag
	DataZRemFlag

	// DataZRemRangeByRankFlag represents the data ZRemRangeByRank flag
	DataZRemRangeByRankFlag

	// DataZPopMaxFlag represents the data ZPopMax flag
	DataZPopMaxFlag

	// DataZPopMinFlag represents the data ZPopMin flag
	DataZPopMinFlag
)
View Source
const (
	// UnCommitted represents the tx unCommitted status
	UnCommitted uint16 = 0

	// Committed represents the tx committed status
	Committed uint16 = 1

	// Persistent represents the data persistent flag
	Persistent uint32 = 0

	// ScanNoLimit represents the data scan no limit flag
	ScanNoLimit int = -1
)
View Source
const (
	// DataStructureSet represents the data structure set flag
	DataStructureSet uint16 = iota

	// DataStructureSortedSet represents the data structure sorted set flag
	DataStructureSortedSet

	// DataStructureBPTree represents the data structure b+ tree flag
	DataStructureBPTree

	// DataStructureList represents the data structure list flag
	DataStructureList
)
View Source
const BPTreeRootIdxHeaderSize = 28

BPTreeRootIdxHeaderSize returns the header size of the root index.

View Source
const SeparatorForListKey = "|"

SeparatorForListKey represents separator for listKey

View Source
const SeparatorForZSetKey = "|"

SeparatorForZSetKey represents separator for zSet key.

Variables

View Source
var (
	// ErrStartKey is returned when Range is called with an invalid start key.
	ErrStartKey = errors.New("err start key")

	// ErrScansNoResult is returned when Range, prefixScan or prefixSearchScan finds no result.
	ErrScansNoResult = errors.New("range scans or prefix or prefix and search scans no result")

	// ErrPrefixScansNoResult is returned when prefixScan finds no result.
	ErrPrefixScansNoResult = errors.New("prefix scans no result")

	// ErrPrefixSearchScansNoResult is returned when prefixSearchScan finds no result.
	ErrPrefixSearchScansNoResult = errors.New("prefix and search scans no result")

	// ErrKeyNotFound is returned when the key is not in the b+ tree.
	ErrKeyNotFound = errors.New("key not found")

	// ErrBadRegexp is returned when bad regular expression given.
	ErrBadRegexp = errors.New("bad regular expression")
)
View Source
var (
	// ErrCrcZero is returned when crc is 0
	ErrCrcZero = errors.New("error crc is 0")

	// ErrCrc is returned when crc is error
	ErrCrc = errors.New(" crc error")

	// ErrCapacity is returned when capacity is error.
	ErrCapacity = errors.New("capacity error")
)
View Source
var (
	// ErrDBClosed is returned when db is closed.
	ErrDBClosed = errors.New("db is closed")

	// ErrBucket is returned when bucket is not in the HintIdx.
	ErrBucket = errors.New("err bucket")

	// ErrEntryIdxModeOpt is returned when set db EntryIdxMode option is wrong.
	ErrEntryIdxModeOpt = errors.New("err EntryIdxMode option set")

	// ErrFn is returned when fn is nil.
	ErrFn = errors.New("err fn")

	// ErrBucketNotFound is returned when looking for bucket that does not exist
	ErrBucketNotFound = errors.New("bucket not found")
)
View Source
var (
	// ErrUnmappedMemory is returned when a function is called on unmapped memory
	ErrUnmappedMemory = errors.New("unmapped memory")

	// ErrIndexOutOfBound is returned when given offset out of mapped region
	ErrIndexOutOfBound = errors.New("offset out of mapped region")
)
View Source
var (
	// ErrKeyAndValSize is returned when given key and value size is too big.
	ErrKeyAndValSize = errors.New("key and value size too big")

	// ErrTxClosed is returned when committing or rolling back a transaction
	// that has already been committed or rolled back.
	ErrTxClosed = errors.New("tx is closed")

	// ErrTxNotWritable is returned when performing a write operation on
	// a read-only transaction.
	ErrTxNotWritable = errors.New("tx not writable")

	// ErrKeyEmpty is returned if an empty key is passed on an update function.
	ErrKeyEmpty = errors.New("key cannot be empty")

	// ErrBucketEmpty is returned if bucket is empty.
	ErrBucketEmpty = errors.New("bucket is empty")

	// ErrRangeScan is returned when range scanning not found the result
	ErrRangeScan = errors.New("range scans not found")

	// ErrPrefixScan is returned when prefix scanning not found the result
	ErrPrefixScan = errors.New("prefix scans not found")

	// ErrPrefixSearchScan is returned when prefix and search scanning not found the result
	ErrPrefixSearchScan = errors.New("prefix and search scans not found")

	// ErrNotFoundKey is returned when the key is not found in the bucket on a view function.
	ErrNotFoundKey = errors.New("key not found in the bucket")
)
View Source
var DefaultOptions = Options{
	EntryIdxMode:         HintKeyValAndRAMIdxMode,
	SegmentSize:          defaultSegmentSize,
	NodeNum:              1,
	RWMode:               FileIO,
	SyncEnable:           true,
	StartFileLoadingMode: MMap,
}

DefaultOptions represents the default options.

Functions

func ErrBucketAndKey added in v0.2.0

func ErrBucketAndKey(bucket string, key []byte) error

ErrBucketAndKey returns an error when the bucket or key is not found.

func ErrNotFoundKeyInBucket added in v0.2.0

func ErrNotFoundKeyInBucket(bucket string, key []byte) error

ErrNotFoundKeyInBucket returns an error when the key is not in the bucket.

func ErrSeparatorForListKey added in v0.2.0

func ErrSeparatorForListKey() error

ErrSeparatorForListKey returns an error when a list key contains the SeparatorForListKey.

func ErrSeparatorForZSetKey added in v0.2.0

func ErrSeparatorForZSetKey() error

ErrSeparatorForZSetKey returns an error when a zSet key contains the SeparatorForZSetKey.

func ErrWhenBuildListIdx added in v0.2.0

func ErrWhenBuildListIdx(err error) error

ErrWhenBuildListIdx returns an error when building the list index fails.

func IsExpired

func IsExpired(ttl uint32, timestamp uint64) bool

IsExpired checks whether the given ttl has expired at the given timestamp.

func SortFID added in v0.5.0

func SortFID(BPTreeRootIdxGroup []*BPTreeRootIdx, by sortBy)

SortFID sorts BPTreeRootIdx data.

func SortedEntryKeys

func SortedEntryKeys(m map[string]*Entry) (keys []string, es map[string]*Entry)

SortedEntryKeys returns sorted entries.

func Truncate added in v0.4.0

func Truncate(path string, capacity int64, f *os.File) error

Truncate changes the size of the file.

Types

type BPTree

type BPTree struct {
	ValidKeyCount int // the number of the key that not expired or deleted
	FirstKey      []byte
	LastKey       []byte
	LastAddress   int64
	Filepath      string
	// contains filtered or unexported fields
}

BPTree records root node and valid key number.

func NewTree

func NewTree() *BPTree

NewTree returns a newly initialized BPTree Object that implements the BPTree.

func (*BPTree) All added in v0.5.0

func (t *BPTree) All() (records Records, err error)

All returns all records in the b+ tree.

func (*BPTree) Find

func (t *BPTree) Find(key []byte) (*Record, error)

Find retrieves record at the given key.

func (*BPTree) FindLeaf

func (t *BPTree) FindLeaf(key []byte) *Node

FindLeaf returns leaf at the given key.

func (*BPTree) Insert

func (t *BPTree) Insert(key []byte, e *Entry, h *Hint, countFlag bool) error

Insert inserts a record into the b+ tree; if the key exists, it updates the record and the counter (if countFlag is set to true, counting starts).

func (*BPTree) PrefixScan

func (t *BPTree) PrefixScan(prefix []byte, offsetNum int, limitNum int) (records Records, off int, err error)

PrefixScan returns records at the given prefix. limitNum limits the number of scanned records returned.

func (*BPTree) PrefixSearchScan added in v0.6.0

func (t *BPTree) PrefixSearchScan(prefix []byte, reg string, offsetNum int, limitNum int) (records Records, off int, err error)

PrefixSearchScan returns records at the given prefix that match the given regular expression. limitNum limits the number of scanned records returned.

func (*BPTree) Range

func (t *BPTree) Range(start, end []byte) (records Records, err error)

Range returns records at the given start key and end key.

func (*BPTree) SetKeyPosMap added in v0.5.0

func (t *BPTree) SetKeyPosMap(keyPosMap map[string]int64)

SetKeyPosMap sets the key offset of all entries in the b+ tree.

func (*BPTree) ToBinary added in v0.5.0

func (t *BPTree) ToBinary(n *Node) (result []byte, err error)

ToBinary represents convert to a binary node.

func (*BPTree) WriteNode added in v0.5.0

func (t *BPTree) WriteNode(n *Node, off int64, syncEnable bool, fd *os.File) (number int, err error)

WriteNode writes a binary node to the File starting at byte offset off. It returns the number of bytes written and an error, if any. WriteAt returns a non-nil error when n != len(b).

func (*BPTree) WriteNodes added in v0.5.0

func (t *BPTree) WriteNodes(rwMode RWMode, syncEnable bool, flag int) error

WriteNodes writes all nodes in the b+ tree to the File starting at byte offset off.

type BPTreeIdx

type BPTreeIdx map[string]*BPTree

BPTreeIdx represents the B+ tree index

type BPTreeRootIdx added in v0.5.0

type BPTreeRootIdx struct {
	// contains filtered or unexported fields
}

BPTreeRootIdx represents the b+ tree root index.

func ReadBPTreeRootIdxAt added in v0.5.0

func ReadBPTreeRootIdxAt(fd *os.File, off int64) (*BPTreeRootIdx, error)

ReadBPTreeRootIdxAt reads BPTreeRootIdx entry from the File starting at byte offset off.

func (*BPTreeRootIdx) Encode added in v0.5.0

func (bri *BPTreeRootIdx) Encode() []byte

Encode returns the slice after the BPTreeRootIdx is encoded.

func (*BPTreeRootIdx) GetCrc added in v0.5.0

func (bri *BPTreeRootIdx) GetCrc(buf []byte) uint32

GetCrc returns the crc at given buf slice.

func (*BPTreeRootIdx) IsZero added in v0.5.0

func (bri *BPTreeRootIdx) IsZero() bool

IsZero checks if the BPTreeRootIdx entry is zero or not.

func (*BPTreeRootIdx) Persistence added in v0.5.0

func (bri *BPTreeRootIdx) Persistence(path string, offset int64, syncEnable bool) (number int, err error)

Persistence writes BPTreeRootIdx entry to the File starting at byte offset off.

func (*BPTreeRootIdx) Size added in v0.5.0

func (bri *BPTreeRootIdx) Size() int64

Size returns the size of the BPTreeRootIdx entry.

type BPTreeRootIdxWrapper added in v0.5.0

type BPTreeRootIdxWrapper struct {
	BSGroup []*BPTreeRootIdx
	// contains filtered or unexported fields
}

BPTreeRootIdxWrapper records BSGroup and by, in order to sort.

func (BPTreeRootIdxWrapper) Len added in v0.5.0

func (bsw BPTreeRootIdxWrapper) Len() int

Len is the number of elements in the collection bsw.BSGroup.

func (BPTreeRootIdxWrapper) Less added in v0.5.0

func (bsw BPTreeRootIdxWrapper) Less(i, j int) bool

Less reports whether the element with index i should sort before the element with index j.

func (BPTreeRootIdxWrapper) Swap added in v0.5.0

func (bsw BPTreeRootIdxWrapper) Swap(i, j int)

Swap swaps the elements with indexes i and j.

type BinaryNode added in v0.5.0

type BinaryNode struct {
	// hint offset
	Keys [order - 1]int64
	// 1. not leaf node represents node address
	// 2. leaf node represents data address
	Pointers    [order]int64
	IsLeaf      uint16
	KeysNum     uint16
	Address     int64
	NextAddress int64
}

BinaryNode represents binary node.

func ReadNode added in v0.5.0

func ReadNode(Filepath string, address int64) (bn *BinaryNode, err error)

ReadNode reads a binary node at given Filepath and address.

type BucketMeta added in v0.5.0

type BucketMeta struct {
	// contains filtered or unexported fields
}

BucketMeta represents the bucket's meta-information.

func ReadBucketMeta added in v0.5.0

func ReadBucketMeta(name string) (bucketMeta *BucketMeta, err error)

ReadBucketMeta returns bucketMeta at given file path name.

func (*BucketMeta) Encode added in v0.5.0

func (bm *BucketMeta) Encode() []byte

Encode returns the slice after the BucketMeta is encoded.

func (*BucketMeta) GetCrc added in v0.5.0

func (bm *BucketMeta) GetCrc(buf []byte) uint32

GetCrc returns the crc at given buf slice.

func (*BucketMeta) Size added in v0.5.0

func (bm *BucketMeta) Size() int64

Size returns the size of the BucketMeta.

type BucketMetasIdx added in v0.5.0

type BucketMetasIdx map[string]*BucketMeta

BucketMetasIdx represents the index of the bucket's meta-information

type DB

type DB struct {
	BPTreeIdx            BPTreeIdx // Hint Index
	BPTreeRootIdxes      []*BPTreeRootIdx
	BPTreeKeyEntryPosMap map[string]int64 // key = bucket+key  val = EntryPos

	SetIdx                  SetIdx
	SortedSetIdx            SortedSetIdx
	ListIdx                 ListIdx
	ActiveFile              *DataFile
	ActiveBPTreeIdx         *BPTree
	ActiveCommittedTxIdsIdx *BPTree

	MaxFileID int64

	KeyCount int // total key number ,include expired, deleted, repeated.
	// contains filtered or unexported fields
}

DB represents a collection of buckets that persist on disk.

func Open

func Open(opt Options) (*DB, error)

Open returns a newly initialized DB object.

func (*DB) Backup

func (db *DB) Backup(dir string) error

Backup copies the database to file directory at the given dir.
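For example, assuming an open db and an illustrative target directory:

```go
// Backup copies the database files into the given directory.
if err := db.Backup("/tmp/nutsdb_backup"); err != nil {
	log.Fatal(err)
}
```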

func (*DB) Begin

func (db *DB) Begin(writable bool) (tx *Tx, err error)

Begin opens a new transaction. Multiple read-only transactions can be opened at the same time but there can only be one read/write transaction at a time. Attempting to open a read/write transactions while another one is in progress will result in blocking until the current read/write transaction is completed. All transactions must be closed by calling Commit() or Rollback() when done.
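A sketch of an unmanaged read/write transaction; unlike Update/View, the caller must close the transaction with Commit or Rollback:

```go
tx, err := db.Begin(true) // true = writable
if err != nil {
	log.Fatal(err)
}

if err := tx.Put("bucket1", []byte("key"), []byte("value"), 0); err != nil {
	// Roll back on failure so the transaction is released.
	tx.Rollback()
	log.Fatal(err)
}

if err := tx.Commit(); err != nil {
	log.Fatal(err)
}
```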

func (*DB) Close

func (db *DB) Close() error

Close releases all db resources.

func (*DB) Merge

func (db *DB) Merge() error

Merge removes dirty data and reduces data redundancy, following these steps:

1. Filter out deleted or expired entries.

2. Write the entry to the active file if the key does not exist; if it exists, skip this write.

3. Keep only entries that are committed.

4. Finally, remove the merged files.

Caveat: calling Merge starts multiple write transactions and will affect other write requests, so execute it at an appropriate time.
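For example, during a low-traffic window:

```go
// Merge reclaims space from deleted and expired entries.
if err := db.Merge(); err != nil {
	log.Fatal(err)
}
```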

func (*DB) Update

func (db *DB) Update(fn func(tx *Tx) error) error

Update executes a function within a managed read/write transaction.

func (*DB) View

func (db *DB) View(fn func(tx *Tx) error) error

View executes a function within a managed read-only transaction.

type DataFile

type DataFile struct {
	ActualSize int64
	// contains filtered or unexported fields
}

DataFile records about data file information.

func NewDataFile

func NewDataFile(path string, capacity int64, rwMode RWMode) (df *DataFile, err error)

NewDataFile returns a newly initialized DataFile object.

func (*DataFile) Close added in v0.4.0

func (df *DataFile) Close() (err error)

Close closes the RWManager. If the RWManager is a FileRWManager, it closes the File, rendering it unusable for I/O. If it is a MMapRWManager, it unmaps the memory-mapped region and flushes any remaining changes.

func (*DataFile) ReadAt

func (df *DataFile) ReadAt(off int) (e *Entry, err error)

ReadAt returns entry at the given off(offset).

func (*DataFile) Sync added in v0.3.0

func (df *DataFile) Sync() (err error)

Sync commits the current contents of the file to stable storage. Typically, this means flushing the file system's in-memory copy of recently written data to disk.

func (*DataFile) WriteAt

func (df *DataFile) WriteAt(b []byte, off int64) (n int, err error)

WriteAt copies data to mapped region from the b slice starting at given off and returns number of bytes copied to the mapped region.

type Entries

type Entries []*Entry

Entries represents entries

type Entry

type Entry struct {
	Key   []byte
	Value []byte
	Meta  *MetaData
	// contains filtered or unexported fields
}

Entry represents the data item.

func (*Entry) Encode

func (e *Entry) Encode() []byte

Encode returns the slice after the entry is encoded.

the entry stored format:
|----------------------------------------------------------------------------------------------------------------|
|  crc  | timestamp | ksz | valueSize | flag  | TTL  |bucketSize| status | ds   | txId |  bucket |  key  | value |
|----------------------------------------------------------------------------------------------------------------|
| uint32| uint64  |uint32 |  uint32 | uint16  | uint32| uint32 | uint16 | uint16 |uint64 |[]byte|[]byte | []byte |
|----------------------------------------------------------------------------------------------------------------|

func (*Entry) GetCrc

func (e *Entry) GetCrc(buf []byte) uint32

GetCrc returns the crc at given buf slice.

func (*Entry) IsZero

func (e *Entry) IsZero() bool

IsZero checks if the entry is zero or not.

func (*Entry) Size

func (e *Entry) Size() int64

Size returns the size of the entry.

type EntryIdxMode

type EntryIdxMode int

EntryIdxMode represents entry index mode.

const (
	// HintKeyValAndRAMIdxMode represents ram index (key and value) mode.
	HintKeyValAndRAMIdxMode EntryIdxMode = iota

	// HintKeyAndRAMIdxMode represents ram index (only key) mode.
	HintKeyAndRAMIdxMode

	// HintBPTSparseIdxMode represents b+ tree sparse index mode.
	HintBPTSparseIdxMode
)

type FileIORWManager added in v0.4.0

type FileIORWManager struct {
	// contains filtered or unexported fields
}

FileIORWManager represents the RWManager which uses standard I/O.

func NewFileIORWManager added in v0.4.0

func NewFileIORWManager(path string, capacity int64) (*FileIORWManager, error)

NewFileIORWManager returns a newly initialized FileIORWManager.

func (*FileIORWManager) Close added in v0.4.0

func (fm *FileIORWManager) Close() (err error)

Close closes the File, rendering it unusable for I/O. On files that support SetDeadline, any pending I/O operations will be canceled and return immediately with an error. `Close` is a wrapper around *os.File.Close.

func (*FileIORWManager) ReadAt added in v0.4.0

func (fm *FileIORWManager) ReadAt(b []byte, off int64) (n int, err error)

ReadAt reads len(b) bytes from the File starting at byte offset off. `ReadAt` is a wrapper around *os.File.ReadAt.

func (*FileIORWManager) Sync added in v0.4.0

func (fm *FileIORWManager) Sync() (err error)

Sync commits the current contents of the file to stable storage. Typically, this means flushing the file system's in-memory copy of recently written data to disk. `Sync` is a wrapper around *os.File.Sync.

func (*FileIORWManager) WriteAt added in v0.4.0

func (fm *FileIORWManager) WriteAt(b []byte, off int64) (n int, err error)

WriteAt writes len(b) bytes to the File starting at byte offset off. `WriteAt` is a wrapper around *os.File.WriteAt.

type Hint

type Hint struct {
	// contains filtered or unexported fields
}

Hint represents the index of the key

type ListIdx added in v0.2.0

type ListIdx map[string]*list.List

ListIdx represents the list index

type MMapRWManager added in v0.4.0

type MMapRWManager struct {
	// contains filtered or unexported fields
}

MMapRWManager represents the RWManager which uses mmap.

func NewMMapRWManager added in v0.4.0

func NewMMapRWManager(path string, capacity int64) (*MMapRWManager, error)

NewMMapRWManager returns a newly initialized MMapRWManager.

func (*MMapRWManager) Close added in v0.4.0

func (mm *MMapRWManager) Close() (err error)

Close flushes any remaining changes and deletes the memory-mapped region.

func (*MMapRWManager) ReadAt added in v0.4.0

func (mm *MMapRWManager) ReadAt(b []byte, off int64) (n int, err error)

ReadAt copies data from the mapped region into b starting at the given off and returns the number of bytes copied.

func (*MMapRWManager) Sync added in v0.4.0

func (mm *MMapRWManager) Sync() (err error)

Sync synchronizes the mapping's contents to the file's contents on disk.

func (*MMapRWManager) WriteAt added in v0.4.0

func (mm *MMapRWManager) WriteAt(b []byte, off int64) (n int, err error)

WriteAt copies data from b into the mapped region starting at the given off and returns the number of bytes copied.

type MetaData

type MetaData struct {
	TTL  uint32
	Flag uint16 // delete / set
	// contains filtered or unexported fields
}

MetaData represents the meta information of the data item.

type Node

type Node struct {
	Keys [][]byte

	KeysNum int
	Next    *Node
	Address int64
	// contains filtered or unexported fields
}

Node records keys, child pointers, and its parent node.

type Options

type Options struct {
	// Dir is the directory in which the database files are stored.
	Dir string

	// EntryIdxMode selects the mode used to index entries.
	EntryIdxMode EntryIdxMode

	// RWMode represents the read and write mode.
	// RWMode includes two options: FileIO and MMap.
	// FileIO uses standard I/O; MMap uses mmap.
	RWMode      RWMode
	SegmentSize int64

	// NodeNum represents the node number.
	// NodeNum must be in the range [1, 1023]; the default is 1.
	NodeNum int64

	// SyncEnable determines whether Sync() is called after writes.
	// If false, writes are faster but data may be lost on a crash.
	// If true, writes are slower but durable.
	SyncEnable bool

	// StartFileLoadingMode is the RWMode used to load data files when the database is opened.
	StartFileLoadingMode RWMode
}

Options records params for creating DB object.
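A typical way to fill in these options and use the resulting DB, sketched from the fields documented above. The module path github.com/xujiajun/nutsdb and the Open/Update calls are this module's published API; the directory, segment size, and ttl value are illustrative assumptions, and error handling is abbreviated.

```go
package main

import (
	"log"

	"github.com/xujiajun/nutsdb"
)

func main() {
	opt := nutsdb.Options{
		Dir:                  "/tmp/nutsdb",                  // where data files live (assumed path)
		EntryIdxMode:         nutsdb.HintKeyValAndRAMIdxMode, // keep key and value in the RAM index
		RWMode:               nutsdb.FileIO,                  // standard I/O for writes
		SegmentSize:          8 * 1024 * 1024,                // 8 MB data files (assumed size)
		NodeNum:              1,                              // must be in [1, 1023]
		SyncEnable:           true,                           // durable but slower writes
		StartFileLoadingMode: nutsdb.MMap,                    // mmap when loading files at Open
	}
	db, err := nutsdb.Open(opt)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// All operations happen inside a Tx; Update opens a read-write transaction.
	if err := db.Update(func(tx *nutsdb.Tx) error {
		return tx.Put("bucket1", []byte("key1"), []byte("val1"), 0)
	}); err != nil {
		log.Fatal(err)
	}
}
```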

type RWManager added in v0.4.0

type RWManager interface {
	WriteAt(b []byte, off int64) (n int, err error)
	ReadAt(b []byte, off int64) (n int, err error)
	Sync() (err error)
	Close() (err error)
}

RWManager is the interface for reading, writing, syncing and closing data files.

type RWMode added in v0.4.0

type RWMode int

RWMode represents the read and write mode.

const (
	//FileIO represents the read and write mode using standard I/O.
	FileIO RWMode = iota

	//MMap represents the read and write mode using mmap.
	MMap
)

type Record

type Record struct {
	H *Hint
	E *Entry
}

Record records entry and hint.

func (*Record) IsExpired added in v0.2.0

func (r *Record) IsExpired() bool

IsExpired returns whether the record has expired.

func (*Record) UpdateRecord added in v0.2.0

func (r *Record) UpdateRecord(h *Hint, e *Entry) error

UpdateRecord updates the record.

type Records

type Records []*Record

Records holds multiple records returned as the result of RangeScan or PrefixScan.

type SetIdx added in v0.2.0

type SetIdx map[string]*set.Set

SetIdx represents the set index

type SortedSetIdx added in v0.2.0

type SortedSetIdx map[string]*zset.SortedSet

SortedSetIdx represents the sorted set index

type Tx

type Tx struct {
	ReservedStoreTxIDIdxes map[int64]*BPTree
	// contains filtered or unexported fields
}

Tx represents a transaction.

func (*Tx) Commit

func (tx *Tx) Commit() error

Commit commits the transaction, following these steps:

1. Check the length of pendingWrites. If there are no writes, return immediately.

2. Check whether the ActiveFile has enough space to store the entry. If not, call the rotateActiveFile function.

3. Write pendingWrites to disk. If a non-nil error occurs, return the error.

4. Build the Hint index.

5. Unlock the database and clear the db field.

func (*Tx) Delete

func (tx *Tx) Delete(bucket string, key []byte) error

Delete removes a key from the bucket at given bucket and key.

func (*Tx) FindLeafOnDisk added in v0.5.0

func (tx *Tx) FindLeafOnDisk(fID int64, rootOff int64, key, newKey []byte) (bn *BinaryNode, err error)

FindLeafOnDisk returns the binary leaf node on disk at the given fID, rootOff and key.

func (*Tx) FindOnDisk added in v0.5.0

func (tx *Tx) FindOnDisk(fID uint64, rootOff uint64, key, newKey []byte) (entry *Entry, err error)

FindOnDisk returns the entry on disk at the given fID, rootOff and key.

func (*Tx) FindTxIDOnDisk added in v0.5.0

func (tx *Tx) FindTxIDOnDisk(fID, txID uint64) (ok bool, err error)

FindTxIDOnDisk reports whether the given txID exists on disk for the given fID.

func (*Tx) Get

func (tx *Tx) Get(bucket string, key []byte) (e *Entry, err error)

Get retrieves the value for a key in the bucket. The returned value is only valid for the life of the transaction.

func (*Tx) GetAll added in v0.5.0

func (tx *Tx) GetAll(bucket string) (entries Entries, err error)

GetAll returns all keys and values of the bucket stored at given bucket.

func (*Tx) LPeek added in v0.2.0

func (tx *Tx) LPeek(bucket string, key []byte) (item []byte, err error)

LPeek returns the first element of the list stored in the bucket at given bucket and key.

func (*Tx) LPop added in v0.2.0

func (tx *Tx) LPop(bucket string, key []byte) (item []byte, err error)

LPop removes and returns the first element of the list stored in the bucket at given bucket and key.

func (*Tx) LPush added in v0.2.0

func (tx *Tx) LPush(bucket string, key []byte, values ...[]byte) error

LPush inserts the values at the head of the list stored in the bucket at given bucket, key and values.

func (*Tx) LRange added in v0.2.0

func (tx *Tx) LRange(bucket string, key []byte, start, end int) (list [][]byte, err error)

LRange returns the specified elements of the list stored in the bucket at given bucket, key, start and end. The offsets start and end are zero-based indexes: 0 is the first element of the list (the head), 1 the next element, and so on. start and end can also be negative numbers indicating offsets from the end of the list, where -1 is the last element, -2 the penultimate, and so on.
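The negative-offset convention above can be sketched in plain Go. This is an independent illustration of the index normalization, not the library's code:

```go
package main

import "fmt"

// normalize converts possibly negative start/end offsets into concrete
// slice bounds for a list of length n, clamping out-of-range values.
func normalize(start, end, n int) (int, int) {
	if start < 0 {
		start += n // -1 becomes the last index, -2 the penultimate, ...
	}
	if end < 0 {
		end += n
	}
	if start < 0 {
		start = 0
	}
	if end >= n {
		end = n - 1
	}
	return start, end
}

func main() {
	list := []string{"a", "b", "c", "d"}
	s, e := normalize(-2, -1, len(list)) // the last two elements
	fmt.Println(list[s : e+1])           // [c d]
}
```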

func (*Tx) LRem added in v0.2.0

func (tx *Tx) LRem(bucket string, key []byte, count int, value []byte) (removedNum int, err error)

LRem removes the first count occurrences of elements equal to value from the list stored in the bucket at given bucket, key and count. The count argument influences the operation in the following ways:

count > 0: remove elements equal to value, moving from head to tail.
count < 0: remove elements equal to value, moving from tail to head.
count = 0: remove all elements equal to value.
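The three count cases can be sketched over a plain slice. This is a standalone illustration of the documented semantics, not the library's implementation:

```go
package main

import "fmt"

// lrem removes occurrences of value from list following LRem's count
// semantics and returns the new list plus the number of removed elements.
func lrem(list []string, count int, value string) ([]string, int) {
	removed := 0
	limit := count
	if limit < 0 {
		limit = -limit
	}
	out := make([]string, 0, len(list))
	if count >= 0 {
		// count >= 0: scan head to tail (count == 0 removes all occurrences).
		for _, v := range list {
			if v == value && (count == 0 || removed < limit) {
				removed++
				continue
			}
			out = append(out, v)
		}
		return out, removed
	}
	// count < 0: scan tail to head, then restore the original order.
	for i := len(list) - 1; i >= 0; i-- {
		if list[i] == value && removed < limit {
			removed++
			continue
		}
		out = append(out, list[i])
	}
	for i, j := 0, len(out)-1; i < j; i, j = i+1, j-1 {
		out[i], out[j] = out[j], out[i]
	}
	return out, removed
}

func main() {
	out, n := lrem([]string{"a", "b", "a", "c", "a"}, 2, "a")
	fmt.Println(out, n) // [b c a] 2
	out, n = lrem([]string{"a", "b", "a", "c", "a"}, -1, "a")
	fmt.Println(out, n) // [a b a c] 1
}
```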

func (*Tx) LSet added in v0.2.0

func (tx *Tx) LSet(bucket string, key []byte, index int, value []byte) error

LSet sets the list element at index to value.

func (*Tx) LSize added in v0.2.0

func (tx *Tx) LSize(bucket string, key []byte) (int, error)

LSize returns the size of the list stored in the bucket at given bucket and key.

func (*Tx) LTrim added in v0.2.0

func (tx *Tx) LTrim(bucket string, key []byte, start, end int) error

LTrim trims an existing list so that it contains only the specified range of elements. The offsets start and end are zero-based indexes: 0 is the first element of the list (the head), 1 the next element, and so on. start and end can also be negative numbers indicating offsets from the end of the list, where -1 is the last element, -2 the penultimate, and so on.

func (*Tx) PrefixScan

func (tx *Tx) PrefixScan(bucket string, prefix []byte, offsetNum int, limitNum int) (es Entries, off int, err error)

PrefixScan iterates over keys with the given prefix in the given bucket, skipping the first offsetNum matches. limitNum limits the number of entries returned.
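The offset/limit paging can be sketched over an ordered set of keys. This is an independent illustration of the scan semantics (prefix matches are contiguous in a sorted index), not the library's B+ tree code:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// prefixScan returns up to limitNum keys with the given prefix, skipping
// the first offsetNum matches, mirroring Tx.PrefixScan's paging shape.
func prefixScan(keys []string, prefix string, offsetNum, limitNum int) []string {
	sort.Strings(keys) // index keys are ordered, so matches are contiguous
	var out []string
	skipped := 0
	for _, k := range keys {
		if !strings.HasPrefix(k, prefix) {
			continue
		}
		if skipped < offsetNum {
			skipped++
			continue
		}
		if len(out) == limitNum {
			break
		}
		out = append(out, k)
	}
	return out
}

func main() {
	keys := []string{"user000003", "user000001", "order42", "user000002"}
	fmt.Println(prefixScan(keys, "user", 1, 2)) // [user000002 user000003]
}
```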

func (*Tx) PrefixSearchScan added in v0.6.0

func (tx *Tx) PrefixSearchScan(bucket string, prefix []byte, reg string, offsetNum int, limitNum int) (es Entries, off int, err error)

PrefixSearchScan iterates over keys with the given prefix in the given bucket that also match the regular expression reg, skipping the first offsetNum matches. limitNum limits the number of entries returned.

func (*Tx) Put

func (tx *Tx) Put(bucket string, key, value []byte, ttl uint32) error

Put sets the value for a key in the bucket. It is a wrapper around the unexported put function.

func (*Tx) PutWithTimestamp added in v0.6.0

func (tx *Tx) PutWithTimestamp(bucket string, key, value []byte, ttl uint32, timestamp uint64) error

PutWithTimestamp sets the value for a key in the bucket with the given timestamp.

func (*Tx) RPeek added in v0.2.0

func (tx *Tx) RPeek(bucket string, key []byte) (item []byte, err error)

RPeek returns the last element of the list stored in the bucket at given bucket and key.

func (*Tx) RPop added in v0.2.0

func (tx *Tx) RPop(bucket string, key []byte) (item []byte, err error)

RPop removes and returns the last element of the list stored in the bucket at given bucket and key.

func (*Tx) RPush added in v0.2.0

func (tx *Tx) RPush(bucket string, key []byte, values ...[]byte) error

RPush inserts the values at the tail of the list stored in the bucket at given bucket, key and values.

func (*Tx) RangeScan

func (tx *Tx) RangeScan(bucket string, start, end []byte) (es Entries, err error)

RangeScan queries a range within the given bucket between the start and end key slices.

func (*Tx) Rollback

func (tx *Tx) Rollback() error

Rollback closes the transaction.

func (*Tx) SAdd added in v0.2.0

func (tx *Tx) SAdd(bucket string, key []byte, items ...[]byte) error

SAdd adds the specified members to the set stored in the bucket at given bucket, key and items.

func (*Tx) SAreMembers added in v0.2.0

func (tx *Tx) SAreMembers(bucket string, key []byte, items ...[]byte) (bool, error)

SAreMembers reports whether all the specified items are members of the set in the bucket at given bucket, key and items.

func (*Tx) SCard added in v0.2.0

func (tx *Tx) SCard(bucket string, key []byte) (int, error)

SCard returns the set cardinality (number of elements) of the set stored in the bucket at given bucket and key.

func (*Tx) SDiffByOneBucket added in v0.2.0

func (tx *Tx) SDiffByOneBucket(bucket string, key1, key2 []byte) (list [][]byte, err error)

SDiffByOneBucket returns the members of the set resulting from the difference between the set at key1 and the set at key2 in one bucket.

func (*Tx) SDiffByTwoBuckets added in v0.2.0

func (tx *Tx) SDiffByTwoBuckets(bucket1 string, key1 []byte, bucket2 string, key2 []byte) (list [][]byte, err error)

SDiffByTwoBuckets returns the members of the set resulting from the difference between the set at key1 in bucket1 and the set at key2 in bucket2.
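The difference operation behind both SDiff variants can be sketched with a plain map. This is a standalone illustration of the set semantics, not the library's implementation:

```go
package main

import "fmt"

// sdiff returns the members of a that are not members of b, the
// semantics behind SDiffByOneBucket and SDiffByTwoBuckets.
func sdiff(a, b []string) []string {
	inB := make(map[string]struct{}, len(b))
	for _, m := range b {
		inB[m] = struct{}{}
	}
	var out []string
	for _, m := range a {
		if _, ok := inB[m]; !ok {
			out = append(out, m)
		}
	}
	return out
}

func main() {
	fmt.Println(sdiff([]string{"a", "b", "c"}, []string{"b", "d"})) // [a c]
}
```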

func (*Tx) SHasKey added in v0.2.0

func (tx *Tx) SHasKey(bucket string, key []byte) (bool, error)

SHasKey reports whether a set exists in the bucket at given bucket and key.

func (*Tx) SIsMember added in v0.2.0

func (tx *Tx) SIsMember(bucket string, key, item []byte) (bool, error)

SIsMember reports whether item is a member of the set stored in the bucket at given bucket, key and item.

func (*Tx) SMembers added in v0.2.0

func (tx *Tx) SMembers(bucket string, key []byte) (list [][]byte, err error)

SMembers returns all the members of the set stored in the bucket at given bucket and key.

func (*Tx) SMoveByOneBucket added in v0.2.0

func (tx *Tx) SMoveByOneBucket(bucket string, key1, key2, item []byte) (bool, error)

SMoveByOneBucket moves member from the set at source to the set at destination in one bucket.

func (*Tx) SMoveByTwoBuckets added in v0.2.0

func (tx *Tx) SMoveByTwoBuckets(bucket1 string, key1 []byte, bucket2 string, key2, item []byte) (bool, error)

SMoveByTwoBuckets moves member from the set at source to the set at destination in two buckets.

func (*Tx) SPop added in v0.2.0

func (tx *Tx) SPop(bucket string, key []byte) ([]byte, error)

SPop removes and returns a random element from the set stored in the bucket at given bucket and key.

func (*Tx) SRem added in v0.2.0

func (tx *Tx) SRem(bucket string, key []byte, items ...[]byte) error

SRem removes the specified members from the set stored in the bucket at given bucket, key and items.

func (*Tx) SUnionByOneBucket added in v0.2.0

func (tx *Tx) SUnionByOneBucket(bucket string, key1, key2 []byte) (list [][]byte, err error)

SUnionByOneBucket returns the members of the set resulting from the union of all the given sets in one bucket.

func (*Tx) SUnionByTwoBuckets added in v0.2.0

func (tx *Tx) SUnionByTwoBuckets(bucket1 string, key1 []byte, bucket2 string, key2 []byte) (list [][]byte, err error)

SUnionByTwoBuckets returns the members of the set resulting from the union of all the given sets in two buckets.

func (*Tx) ZAdd added in v0.2.0

func (tx *Tx) ZAdd(bucket string, key []byte, score float64, val []byte) error

ZAdd adds the specified member key with the specified score and specified val to the sorted set stored at bucket.

func (*Tx) ZCard added in v0.2.0

func (tx *Tx) ZCard(bucket string) (int, error)

ZCard returns the sorted set cardinality (number of elements) of the sorted set stored at bucket.

func (*Tx) ZCount added in v0.2.0

func (tx *Tx) ZCount(bucket string, start, end float64, opts *zset.GetByScoreRangeOptions) (int, error)

ZCount returns the number of elements in the sorted set at bucket with a score between start and end, subject to opts. opts includes the following parameters:

Limit int          // limit the max nodes to return
ExcludeStart bool  // exclude the start value, so it searches the interval (start, end] or (start, end)
ExcludeEnd bool    // exclude the end value, so it searches the interval [start, end) or (start, end)
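The exclusive-bound semantics can be sketched over a plain score slice. The scoreRangeOptions type below is an illustrative stand-in for zset.GetByScoreRangeOptions, not the real type, and the counting logic is an independent sketch:

```go
package main

import "fmt"

// scoreRangeOptions mirrors the documented fields of
// zset.GetByScoreRangeOptions for illustration.
type scoreRangeOptions struct {
	Limit        int  // max nodes to count (<= 0 means no limit)
	ExcludeStart bool // open the interval on the start side
	ExcludeEnd   bool // open the interval on the end side
}

// countByScore counts scores inside [start, end], optionally excluding
// either endpoint and stopping at the Limit.
func countByScore(scores []float64, start, end float64, opts scoreRangeOptions) int {
	n := 0
	for _, s := range scores {
		if s < start || s > end {
			continue
		}
		if opts.ExcludeStart && s == start {
			continue
		}
		if opts.ExcludeEnd && s == end {
			continue
		}
		n++
		if opts.Limit > 0 && n == opts.Limit {
			break
		}
	}
	return n
}

func main() {
	scores := []float64{1, 2, 3, 4, 5}
	fmt.Println(countByScore(scores, 2, 4, scoreRangeOptions{}))                   // 3: scores in [2, 4]
	fmt.Println(countByScore(scores, 2, 4, scoreRangeOptions{ExcludeStart: true})) // 2: scores in (2, 4]
}
```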

func (*Tx) ZGetByKey added in v0.2.0

func (tx *Tx) ZGetByKey(bucket string, key []byte) (*zset.SortedSetNode, error)

ZGetByKey returns node in the bucket at given bucket and key.

func (*Tx) ZMembers added in v0.2.0

func (tx *Tx) ZMembers(bucket string) (map[string]*zset.SortedSetNode, error)

ZMembers returns all the members of the sorted set stored at bucket.

func (*Tx) ZPeekMax added in v0.2.0

func (tx *Tx) ZPeekMax(bucket string) (*zset.SortedSetNode, error)

ZPeekMax returns the member with the highest score in the sorted set stored at bucket.

func (*Tx) ZPeekMin added in v0.2.0

func (tx *Tx) ZPeekMin(bucket string) (*zset.SortedSetNode, error)

ZPeekMin returns the member with the lowest score in the sorted set stored at bucket.

func (*Tx) ZPopMax added in v0.2.0

func (tx *Tx) ZPopMax(bucket string) (*zset.SortedSetNode, error)

ZPopMax removes and returns the member with the highest score in the sorted set stored at bucket.

func (*Tx) ZPopMin added in v0.2.0

func (tx *Tx) ZPopMin(bucket string) (*zset.SortedSetNode, error)

ZPopMin removes and returns the member with the lowest score in the sorted set stored at bucket.

func (*Tx) ZRangeByRank added in v0.2.0

func (tx *Tx) ZRangeByRank(bucket string, start, end int) ([]*zset.SortedSetNode, error)

ZRangeByRank returns all the elements in the sorted set in one bucket and key with a rank between start and end (including elements with rank equal to start or end).

func (*Tx) ZRangeByScore added in v0.2.0

func (tx *Tx) ZRangeByScore(bucket string, start, end float64, opts *zset.GetByScoreRangeOptions) ([]*zset.SortedSetNode, error)

ZRangeByScore returns all the elements in the sorted set at bucket with a score between start and end.

func (*Tx) ZRank added in v0.2.0

func (tx *Tx) ZRank(bucket string, key []byte) (int, error)

ZRank returns the rank of member in the sorted set stored in the bucket at given bucket and key, with the scores ordered from low to high.

func (*Tx) ZRem added in v0.2.0

func (tx *Tx) ZRem(bucket, key string) error

ZRem removes the specified member from the sorted set stored in the bucket at given bucket and key.

func (*Tx) ZRemRangeByRank added in v0.2.0

func (tx *Tx) ZRemRangeByRank(bucket string, start, end int) error

ZRemRangeByRank removes all elements in the sorted set stored in one bucket at given bucket with rank between start and end. The rank is a 1-based integer: rank 1 means the first node and rank -1 means the last node.
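The 1-based, possibly negative rank convention can be sketched as a small conversion to 0-based indexes. This is an independent illustration of the documented semantics, not the library's code:

```go
package main

import "fmt"

// rankBounds converts 1-based start/end ranks (negative counts from the
// end, -1 being the last node) into 0-based slice indexes for n elements.
func rankBounds(start, end, n int) (int, int) {
	if start < 0 {
		start += n + 1 // -1 becomes rank n (the last node)
	}
	if end < 0 {
		end += n + 1
	}
	return start - 1, end - 1 // shift from 1-based ranks to 0-based indexes
}

func main() {
	s, e := rankBounds(1, -1, 5) // ranks 1..-1 cover the whole set
	fmt.Println(s, e)            // 0 4
}
```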

func (*Tx) ZRevRank added in v0.2.0

func (tx *Tx) ZRevRank(bucket string, key []byte) (int, error)

ZRevRank returns the rank of member in the sorted set stored in the bucket at given bucket and key, with the scores ordered from high to low.

func (*Tx) ZScore added in v0.2.0

func (tx *Tx) ZScore(bucket string, key []byte) (float64, error)

ZScore returns the score of member in the sorted set in the bucket at given bucket and key.

Directories

Path
ds/set
examples/set
