graviton

package module
v0.0.0-...-2c248a5
Published: Jan 30, 2022 License: GPL-3.0 Imports: 13 Imported by: 39

README

Graviton Database: ZFS for key-value stores.

Graviton Database is a simple, fast, versioned, authenticated, embeddable key-value store written in pure Go.
In short, Graviton Database is like "ZFS for key-value stores": every write is tracked, versioned, and authenticated with cryptographic proofs. It is also possible to take snapshots of the database, and simple cp or rsync commands can be used for backups even during live updates, without any possibility of database corruption.

Project Status

Graviton is currently alpha software. Extensive unit tests and randomized black-box testing are used to ensure database consistency and thread safety; the project has 100% code coverage. A number of decisions, such as API changes and renames, error handling, and hashing algorithms, are still being evaluated and are open for improvements and suggestions.

Features

Graviton Database in short is "ZFS for key-value stores".

  • Authenticated data store (all keys and values are backed by a BLAKE 256-bit checksum).
  • Append-only data store.
  • Support for 2^64 trees (theoretically) within a single data store. Trees can be named and thus used as buckets.
  • Version tracking for values. All committed changes are versioned, with the ability to visit them at any point in time.
  • Snapshots (multi-tree commits in a single version, causing a multi-bucket sync; each snapshot can be visited, appended to, and further modified: keys deleted, values modified, new keys and values stored).
  • Ability to iterate over all key-value pairs in a tree.
  • Ability to diff 2 trees in linear time and report all changes: insertions, deletions, modifications.
  • Minimal and simplified API.
  • Theoretical support for exabyte data stores; multi-terabyte tested internally.
  • Decoupled storage layer, allowing use of object stores such as Ceph, AWS etc.
  • Ability to generate cryptographic proofs which can prove key existence or non-existence (proofs are around 1 KB).
  • Superfast proof generation time of around 1000 proofs per second per core.
  • Support for disk-based, filesystem-based persistent stores.
  • Support for memory-based non-persistent stores.
  • 100% code coverage.

Table of Contents

  1. Getting Started
  2. Installing
  3. Opening and Using the Database
  4. Graviton Tree
  5. Using key,value pairs
  6. Iterating over keys
  7. Snapshots
  8. Diffing (Diffing of 2 trees to detect changes between versions or compare 2 arbitrary trees in linear time.)
  9. GravitonDB Backups
  10. Stress testing
  11. Graviton Internals
  12. Lines of Code
  13. TODO
  14. Comparison with other databases (Mysql, Postgres, LevelDB, RocksDB, LMDB, Bolt etc.)
  15. License


Getting Started

Installing

To start using Graviton DB, install Go and run go get:

go get github.com/deroproject/graviton/...

This will retrieve and build the library.

Opening and Using the Database

The top-level object in Graviton is a Store. It is represented as a directory with multiple files on the server's disk and always represents a consistent snapshot of your data.

Example code to open database:

package main

import "fmt"
import "github.com/deroproject/graviton"

func main() {
	//store, _ := graviton.NewDiskStore("/tmp/testdb") // create a new testdb in "/tmp/testdb"
	store, _ := graviton.NewMemStore()       // create a new DB in RAM
	ss, _ := store.LoadSnapshot(0)           // load most recent snapshot
	tree, _ := ss.GetTree("root")            // use or create tree named "root"
	tree.Put([]byte("key"), []byte("value")) // insert a value
	graviton.Commit(tree)                    // commit the tree
	value, _ := tree.Get([]byte("key"))
	fmt.Printf("value retrieved from DB \"%s\"\n", string(value))
}

//NOTE: Linux (and other platforms) have a default open-file limit of 1024.
//      The default limit allows up to 2 TB of Graviton databases.

Graviton Tree

A Tree in Graviton DB acts like a bucket in BoltDB or a ZFS dataset. Trees are named, with names limited to 127 bytes (see TREE_NAME_LIMIT). A store can contain a practically unlimited number of trees, and each tree a practically unlimited number of key-value pairs, bounded only by the system's storage space.

Each tree can be accessed by its Merkle root hash using the GetTreeWithRootHash API. Each tree also maintains its own separate version number, and any specific version can be loaded using GetTreeWithVersion. Trees can additionally carry arbitrary tags, and any tagged tree can be accessed by tag using GetTreeWithTag. Finally, 2 arbitrary trees can be diffed in linear time and the relevant changes detected.

NOTE: Tree tags or names cannot start with ':' .
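The tree-access APIs above can be sketched end to end; a minimal, hedged example (the tree name "config" and tag "release-1" are illustrative; error handling elided for brevity):

```go
package main

import (
	"fmt"

	"github.com/deroproject/graviton"
)

func main() {
	store, _ := graviton.NewMemStore()
	ss, _ := store.LoadSnapshot(0)
	tree, _ := ss.GetTree("config")
	tree.Put([]byte("mode"), []byte("fast"))
	tree.Commit("release-1") // commit this tree version and tag it

	// Reload the latest snapshot and access the tree three different ways.
	ss, _ = store.LoadSnapshot(0)

	byTag, _ := ss.GetTreeWithTag("release-1") // by tag
	version := byTag.GetVersion()

	byVersion, _ := ss.GetTreeWithVersion("config", version) // by version number

	root, _ := byVersion.Hash()
	byHash, _ := ss.GetTreeWithRootHash(root[:]) // by merkle root hash

	v, _ := byHash.Get([]byte("mode"))
	fmt.Printf("version %d mode=%s\n", version, v)
}
```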
Using key,value pairs

To save a key/value pair to a tree ( or bucket), use the tree.Put() function:

        tree, _ := ss.GetTree("root") 
        tree.Put([]byte("answer"), []byte("44")) // insert a value
        graviton.Commit(tree)  // make the tree persistent by storing it in the backend disk

This will set the value of the "answer" key to "44" in the root tree. To retrieve this value, we can use the tree.Get() function:

	tree, _ := ss.GetTree("root") 
	v,_ := tree.Get([]byte("answer"))
	fmt.Printf("The answer is: %s\n", v)

The Get() function returns both a value and an error. If the key exists, it returns the key's byte-slice value; if the key does not exist, or some kind of system failure occurs (which we try to report), it returns an error.

Iterating over keys

Graviton stores its keys in hash byte-sorted order within a tree. This makes sequential iteration over these keys extremely fast. To iterate over keys GravitonDB uses a Cursor:

	// Assume "root" tree exists and has keys
	tree, _ := ss.GetTree("root")
	c := tree.Cursor()

	for k, v, err := c.First(); err == nil; k, v, err = c.Next() {
		fmt.Printf("key=%s, value=%s\n", k, v)
	}

The cursor allows you to move to a specific point in the list of keys and move forward or backward through the keys one at a time.

The following functions are available on the cursor:

First()  Move to the first key.
Last()   Move to the last key.
Next()   Move to the next key.
Prev()   Move to the previous key.

Each of these functions has a return signature of (key []byte, value []byte, err error). When you have iterated to the end of the cursor, Next() returns the error ErrNoMoreKeys. You must first seek to a position using First() or Last() before calling Next() or Prev(); otherwise these functions return an error.
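Reverse iteration follows the same pattern: seek with Last() and step with Prev() until an error is returned. A brief sketch, assuming a tree populated as in the earlier examples:

```go
	c := tree.Cursor()

	// iterate over all keys in reverse hash-sorted order
	for k, v, err := c.Last(); err == nil; k, v, err = c.Prev() {
		fmt.Printf("key=%s, value=%s\n", k, v)
	}
```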

Snapshots

A snapshot refers to the collective state of all buckets + data + history. Each commit (tree.Commit() or Commit(tree1, tree2, ...)) creates a new snapshot in the store. Each snapshot is represented by an incrementing uint64 number, where 0 represents the most recent snapshot. Snapshots can be used to access any arbitrary state of the entire database at any point in time.

Example code for snapshots:

package main

import "fmt"
import "github.com/deroproject/graviton"

func main() {
	key := []byte("key1")
	//store, _ := graviton.NewDiskStore("/tmp/testdb") // create a new testdb in "/tmp/testdb"
	store, _ := graviton.NewMemStore()     // create a new DB in RAM
	ss, _ := store.LoadSnapshot(0)         // load most recent snapshot
	tree, _ := ss.GetTree("root")          // use or create tree named "root"
	tree.Put(key, []byte("commit_value1")) // insert a value
	commit1, _ := graviton.Commit(tree)    // commit the tree
	tree.Put(key, []byte("commit_value2")) // overwrite existing value
	commit2, _ := graviton.Commit(tree)    // commit the tree again

	// at this point, you have made 2 commits
	// at the first commit or snapshot, the "root" tree contains "key1 : commit_value1"
	// at the second commit or snapshot, the "root" tree contains "key1 : commit_value2"

	// we will now traverse the commit1 snapshot
	ss, _ = store.LoadSnapshot(commit1)
	tree, _ = ss.GetTree("root")
	value, err := tree.Get(key)
	fmt.Printf(" snapshot%d  key %s value %s err %s\n", ss.GetVersion(), string(key), string(value), err)

	// we will now traverse the commit2 snapshot
	ss, _ = store.LoadSnapshot(commit2)
	tree, _ = ss.GetTree("root")
	value, err = tree.Get(key)
	fmt.Printf(" snapshot%d  key %s value %s err %s\n", ss.GetVersion(), string(key), string(value), err)
}

Diffing
Diffing of 2 trees to detect changes between versions or compare 2 arbitrary trees in linear time.

Two arbitrary trees can be diffed in linear time to detect changes. Changes are of 3 types: insertions, deletions, and modifications (same key but changed value). If the reported changes are applied to the base tree, it becomes equivalent to the head tree being compared.

func Diff(base_tree, head_tree *Tree, deleted, modified, inserted DiffHandler) (err error)

DiffHandler is a callback function of the following type, taking k, v as arguments:

type DiffHandler func(k, v []byte)

The algorithm is linear time in the number of changes. E.g., a tree with a billion key-value pairs can be diffed with its parent almost instantaneously.
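A minimal sketch of diffing two versions of the same tree with the Diff function and DiffHandler callbacks shown above (error handling elided; the tree contents are illustrative):

```go
package main

import (
	"fmt"

	"github.com/deroproject/graviton"
)

func main() {
	store, _ := graviton.NewMemStore()
	ss, _ := store.LoadSnapshot(0)
	tree, _ := ss.GetTree("root")

	tree.Put([]byte("a"), []byte("1"))
	graviton.Commit(tree)
	baseVersion := tree.GetVersion() // remember the base version

	tree.Put([]byte("a"), []byte("2")) // modification
	tree.Put([]byte("b"), []byte("3")) // insertion
	graviton.Commit(tree)

	ss, _ = store.LoadSnapshot(0)
	base, _ := ss.GetTreeWithVersion("root", baseVersion)
	head, _ := ss.GetTree("root")

	// one handler per change class, as per the Diff signature
	report := func(action string) graviton.DiffHandler {
		return func(k, v []byte) { fmt.Printf("%s: key=%s value=%s\n", action, k, v) }
	}
	graviton.Diff(base, head, report("deleted"), report("modified"), report("inserted"))
}
```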

GravitonDB Backups

Use simple commands like cp, copy, or rsync to back up a Graviton database, even while the database is being updated. Since the database may be continuously appending, the backup will always lag slightly behind; however, neither the database nor the backup will EVER be corrupted by copying while commits are in progress.

Stress Testing

A mini tool for single-threaded testing is provided, which can be used to perform various tests on the memory or disk backend.

go run github.com/deroproject/graviton/cmd/stress

See the help using the --help argument. To use the disk backend, pass --memory=false

Graviton Internals

Internally, all trees are stored within a base-2 Merkle tree with collapsing paths. This means that if a tree has 4 billion key-value pairs, it will only be 32 levels deep, which leads to tremendous savings in storage space. It also means that when you modify an existing key-value pair, only a limited number of nodes are touched.
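The depth claim follows from the base-2 structure: each level splits the key-hash space in half, so depth grows only logarithmically with the number of keys. A small self-contained sketch (maxDepth is a hypothetical helper, not part of the Graviton API):

```go
package main

import (
	"fmt"
	"math"
)

// maxDepth estimates the depth of a balanced base-2 tree over n keys:
// each level halves the key space, so depth grows as log2(n).
func maxDepth(n uint64) int {
	return int(math.Ceil(math.Log2(float64(n))))
}

func main() {
	fmt.Println(maxDepth(1 << 32)) // ~4 billion keys -> 32 levels
	fmt.Println(maxDepth(1 << 40)) // ~a trillion keys -> still only 40 levels
}
```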

Lines of Code
~/tools/gocloc   --by-file  node_inner.go tree.go snapshot.go proof.go node_leaf.go  store.go node.go  hash.go  const.go doc.go  diff_tree.go cursor.go 
-----------------------------------------------------------------
File           files          blank        comment           code
-----------------------------------------------------------------
node_inner.go                    76             33            364
store.go                         69             22            250
tree.go                          75             71            250
proof.go                         30             16            171
snapshot.go                      36             18            155
node_leaf.go                     29              3            150
diff_tree.go                     34             33            133
cursor.go                        21             15            106
node.go                           5              3             35
const.go                          4              0             21
hash.go                           7              2             19
doc.go                           16             42              1
-----------------------------------------------------------------
TOTAL             12            402            258           1655
-----------------------------------------------------------------

TODO

  • Currently it is not optimized for speed and GC (garbage collection).
  • Expose/build metrics.
  • Currently, the error-reporting API reports rot bits but says nothing about disk corruption. Should we discard this error design and make the API simpler (no more errors, except for snapshots, tree loading, and committing)? More discussion on hard-disk failures, errors etc. is required.

Comparison with other databases

None of the following databases provides the ability to travel back in time to each and every commit; GravitonDB is the only DB which does. GravitonDB is also presently the only database which can diff 2 trees in linear time. Let's compare some other features of these databases.

Postgres, MySQL, & other relational databases

Relational databases structure data into rows and are only accessible through the use of SQL. This approach provides flexibility in how you store and query your data but also incurs overhead in parsing and planning SQL statements. GravitonDB accesses all data by a byte slice key. This makes GravitonDB fast to read and write data by key but provides no built-in support for joining values together.

Most relational databases (with the exception of SQLite) are standalone servers that run separately from the application. This gives systems flexibility to connect multiple application servers to a single database server but also adds overhead in serializing and transporting data over the network. Graviton runs as a library included in your application so all data access has to go through your application's process. This brings data closer to your application but limits multi-process access to the data.

LevelDB, RocksDB

LevelDB and its derivatives (RocksDB, HyperLevelDB) are similar to Graviton in that they are libraries bundled into the application. However, their underlying structure is a log-structured merge-tree (LSM tree). An LSM tree optimizes random writes by using a write-ahead log and multi-tiered, sorted files called SSTables. Graviton uses a base-2 Merkle tree internally. Both approaches have trade-offs.

If you require a high random write throughput or you need to use spinning disks then LevelDB could be a good choice unless there are requirements of versioning, authenticated proofs or other features of Graviton database.

LMDB, BoltDB

LMDB and Bolt are architecturally similar. Both use a B+ tree, have ACID semantics with fully serializable transactions, and support lock-free MVCC using a single writer and multiple readers.

In addition, LMDB heavily focuses on raw performance while BoltDB focuses on simplicity and ease of use. For example, LMDB allows several unsafe actions, such as direct writes, for the sake of performance, while Bolt opts to disallow actions which can leave the database in a corrupted state (the only exception in Bolt being DB.NoSync). GravitonDB never leaves the database in a corrupted state at any point in time.

Furthermore, neither LMDB nor BoltDB supports versioning, snapshots, linear diffing etc.; for now, only Graviton provides such features.

License

GNU General Public License v3.0

Documentation

Overview

GravitonDB in short is "ZFS for key-value stores".

GravitonDB is a pure Go key/value store with features unmatched by other software (such as BoltDB, Berkeley DB, MySQL, PostgreSQL etc.). The goal of the project is to provide a simple, fast, reliable, versioned, authenticated database for projects which require such features.

Since GravitonDB is meant to be used as a low-level piece of functionality, simplicity is key. The API is small and focuses only on getting and setting values. That's it.

GravitonDB is a key-value store that is:
	1) cryptographically authenticated (a single hash verifies the complete data store)
	2) append-only
	3) versioned
	4) capable of writable snapshots
	5) exabyte-storage capable (tested up to terabytes; if you can help with testing petabyte capability, please discuss with the authors)
	6) able to generate cryptographic proofs of the existence/non-existence of any arbitrary key
	7) able to diff 2 trees in linear time
	8) designed and developed for the DERO blockchain

Features

	* Authenticated data store (all keys and values are backed by a BLAKE 256-bit checksum)
	* Append-only data store
	* Support for 2^64 trees (theoretically) within a single data store; trees can be named and thus used as buckets
	* Versioning support (all committed changes are versioned, with the ability to visit them at any point in time)
	* Snapshots (multi-tree commits in a single version causing a multi-tree sync; each snapshot can be visited, appended to, and further modified: keys deleted, values modified, new keys and values stored)
	* Ability to iterate over all key-value pairs in a tree
	* Ability to diff 2 trees in linear time and report all changes (insertions, deletions, modifications)
	* Minimal, small, simplified API
	* Theoretical support for exabyte data stores; multi-terabyte tested internally
	* Decoupled storage layer, allowing use of object stores such as Ceph, AWS etc.
	* Ability to generate cryptographic proofs which can prove key existence or non-existence (proofs are around 1 KB)
	* Superfast proof generation time of around 1000 proofs per second per core
	* Support for disk-based, filesystem-based persistent stores
	* Support for memory-based non-persistent stores
	* 100% code coverage
	* This is alpha software; we are still evaluating a number of ideas

E.g., minimal code to write and read back a value (error checking is skipped):

store, _ := NewDiskStore("/tmp/testdb")  // create a new testdb in "/tmp/testdb"
ss, _ := store.LoadSnapshot(0)           // load most recent snapshot
tree, _ := ss.GetTree("root")            // use or create tree named "root"
tree.Put([]byte("key"), []byte("value")) // insert a value
Commit(tree)                             // commit the tree
value, _ := tree.Get([]byte("key"))

E.g., for snapshots, see github.com/deroproject/gravitondb/examples/snapshot_example/snapshot_example.go

The design enables infinite trees with infinite snapshots. It supports use cases such as deduplicating backups and blockchains which can prove their data (both state and content) to users, etc.

Index

Constants

View Source
const (
	HASHSIZE_BYTES  = 32 // we currently are using blake hash which is 256 bits or 32 bytes
	HASHSIZE        = HASHSIZE_BYTES
	HASHSIZE_BITS   = HASHSIZE_BYTES * 8     // hash size in bits
	MINBLOCK        = 512                    // max block size excluding value
	MAX_KEYSIZE     = MINBLOCK - 64          // 64 bytes are reserved, keys are limited to 448 bytes
	MAX_FILE_SIZE   = 2 * 1024 * 1024 * 1024 // 2GB since we use split files to store data chunks
	MAX_VALUE_SIZE  = 100 * 1024 * 1024      // values are limited to this size
	TREE_NAME_LIMIT = 127                    // TREE name cannot be larger than this in bytes ( not in utf8 chars)
)
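These limits can be checked client-side before calling Put; a minimal sketch using the documented constants (validKV is a hypothetical helper, not part of the Graviton API):

```go
package main

import "fmt"

// Limits mirrored from the graviton constants above.
const (
	MINBLOCK       = 512
	MAX_KEYSIZE    = MINBLOCK - 64     // 448 bytes
	MAX_VALUE_SIZE = 100 * 1024 * 1024 // 100 MB
)

// validKV reports whether a key/value pair fits within the documented limits.
func validKV(key, value []byte) bool {
	return len(key) <= MAX_KEYSIZE && len(value) <= MAX_VALUE_SIZE
}

func main() {
	fmt.Println(validKV([]byte("answer"), []byte("44"))) // true
	fmt.Println(validKV(make([]byte, 500), []byte("x"))) // false: key > 448 bytes
}
```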

Variables

View Source
var (
	ErrNotFound         = errors.New("leaf not found")
	ErrVersionNotStored = errors.New("no such version")
	ErrCorruption       = errors.New("Data Corruption")
	ErrNoMoreKeys       = errors.New("No more keys exist")
)

Functions

func Commit

func Commit(trees ...*Tree) (committed_version uint64, err error)

Commit the tree (or a number of trees) to persistence, writing a new snapshot which can be accessed henceforth without any modifications. Committing multiple trees or multiple changes as a batch is much more efficient than committing each change independently.

func Diff

func Diff(base_tree, head_tree *Tree, deleted, modified, inserted DiffHandler) (err error)

This function can be used to diff 2 trees and thus find all the keys which have been deleted, modified, or inserted. The algorithm is linear time in the number of changes. E.g., a tree with a billion key-value pairs can be diffed with its parent almost instantaneously. TODO: API redesign; should we report only keys, since values can be obtained later on?

func Sum

func Sum(key []byte) [HASHSIZE]byte

Types

type Cursor

type Cursor struct {
	// contains filtered or unexported fields
}

Cursor represents an iterator that can traverse over all key/value pairs in a tree in hash-sorted order. Cursors can be obtained from a tree and are valid as long as the tree is valid. Keys and values returned from the cursor are only valid for the life of the transaction. Changing the tree (before committing) while traversing with a cursor may invalidate the cursor and cause it to return unexpected keys and/or values. You must reposition your cursor after mutating data.

func (*Cursor) First

func (c *Cursor) First() (k, v []byte, err error)

First moves the cursor to the first item in the tree and returns its key and value. If the tree is empty then an error is returned. The returned key and value are only valid for the life of the tree.

func (*Cursor) Last

func (c *Cursor) Last() (k, v []byte, err error)

Last moves the cursor to the last item in the tree and returns its key and value. If the tree is empty then an error is returned. The returned key and value are only valid for the life of the tree.

func (*Cursor) Next

func (c *Cursor) Next() (k, v []byte, err error)

Next moves the cursor to the next item in the tree and returns its key and value. If the tree is empty, or the cursor is at the end of the tree, an error is returned. The returned key and value are only valid for the life of the tree.

func (*Cursor) Prev

func (c *Cursor) Prev() (k, v []byte, err error)

Prev moves the cursor to the previous item in the tree and returns its key and value. If the tree is empty, or the cursor is at the beginning of the tree, an error is returned. The returned key and value are only valid for the life of the tree.

func (*Cursor) SpecialFirst

func (c *Cursor) SpecialFirst(section []byte, validbits uint) (k, v []byte, err error)

SpecialFirst sets a root for the cursor, so the cursor visits only keys with a specific prefix.

type DiffHandler

type DiffHandler func(k, v []byte)

All changes (deleted, modified, inserted) are reported via callbacks of this type.

type Proof

type Proof struct {
	// contains filtered or unexported fields
}

This structure is used to prove the existence/non-existence of a key.

func NewProof

func NewProof() *Proof

func (*Proof) Marshal

func (p *Proof) Marshal() []byte

Serialize the proof to a byte array

func (*Proof) MarshalTo

func (p *Proof) MarshalTo(b *bytes.Buffer)

Serialize the proof to a bytes Buffer

The size requirements for a proof are as follows:
	- 1 byte for the version
	- 1 byte for the type
	- a varint trace length
	- 32 bytes (HASHSIZE) of trace bits; a bit is set to 1 if the corresponding hash is not the zero hash
	- 32 bytes (HASHSIZE) * the number of trace bits set
	- if there is a collision: a 32-byte (HASHSIZE) key and a 32-byte (HASHSIZE) value
	- if a member: a varint-length-prefixed value
	- dead end = 0
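From this layout, a rough upper bound on the serialized size can be sketched (estimateProofSize is a hypothetical helper; the 10-byte varint worst cases are assumptions, actual varints are usually shorter):

```go
package main

import "fmt"

const HASHSIZE = 32 // blake 256-bit hash

// estimateProofSize gives a rough upper bound on a serialized proof,
// following the layout described above (collision case ignored).
// traceBitsSet: number of non-zero hashes along the trace.
// valueLen: length of the member's value (0 for non-membership proofs).
func estimateProofSize(traceBitsSet, valueLen int) int {
	size := 1 + 1                   // version byte + type byte
	size += 10                      // varint trace length (uint64 worst case)
	size += HASHSIZE                // trace bitmap
	size += HASHSIZE * traceBitsSet // one hash per set trace bit
	size += 10 + valueLen           // varint length prefix + value, if member
	return size
}

func main() {
	// e.g. a membership proof with 20 populated trace levels and a 100-byte value
	fmt.Println(estimateProofSize(20, 100)) // consistent with the "around 1 KB" figure
}
```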

func (*Proof) Reset

func (p *Proof) Reset()

prepare the structure for reuse

func (*Proof) Unmarshal

func (p *Proof) Unmarshal(buf []byte) error

Unmarshal follows reverse of marshal to deserialize the array of bytes to proof for verification.

func (*Proof) Value

func (p *Proof) Value() []byte

If the proof is a proof of existence for a key, the key's associated value can be read here.

func (*Proof) VerifyMembership

func (p *Proof) VerifyMembership(root [HASHSIZE]byte, key []byte) bool

func (*Proof) VerifyNonMembership

func (p *Proof) VerifyNonMembership(root [HASHSIZE]byte, key []byte) bool
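An end-to-end sketch of the proof API above: generate a proof, serialize it, and verify it on the receiving side against the tree's Merkle root (error handling elided for brevity):

```go
package main

import (
	"fmt"

	"github.com/deroproject/graviton"
)

func main() {
	store, _ := graviton.NewMemStore()
	ss, _ := store.LoadSnapshot(0)
	tree, _ := ss.GetTree("root")
	tree.Put([]byte("key"), []byte("value"))
	graviton.Commit(tree)

	root, _ := tree.Hash() // the verifier must obtain this root separately

	proof, _ := tree.GenerateProof([]byte("key"))
	wire := proof.Marshal() // serialize for transport

	// Verifier side: deserialize and check against the trusted root.
	received := graviton.NewProof()
	if err := received.Unmarshal(wire); err == nil {
		fmt.Println("membership:", received.VerifyMembership(root, []byte("key")))
		fmt.Println("value:", string(received.Value()))
	}
}
```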

type Snapshot

type Snapshot struct {
	// contains filtered or unexported fields
}
Snapshots are used to access any arbitrary state of the entire database at any point in time. A snapshot refers to the collective state of all trees + data (key-values) + history. Each commit (tree.Commit() or Commit(tree1, tree2, ...)) creates a new snapshot. Each snapshot is represented by an incrementing uint64 number, where 0 represents the most recent snapshot.

TODO: we may need to provide an API to export the DB at a specific snapshot version.

func (*Snapshot) GetTree

func (s *Snapshot) GetTree(treename string) (*Tree, error)

Gets the most recent version of the named tree committed to the store

func (*Snapshot) GetTreeHighestVersion

func (s *Snapshot) GetTreeHighestVersion(treename string) (uint64, error)

Gets the highest stored version number of the specified tree

func (*Snapshot) GetTreeWithRootHash

func (s *Snapshot) GetTreeWithRootHash(roothash []byte) (*Tree, error)

Gets the tree with the specified root hash

func (*Snapshot) GetTreeWithTag

func (s *Snapshot) GetTreeWithTag(tag string) (*Tree, error)

Gets the tree with the specified tag

NOTE: the same tag might point to different trees in different snapshots of the db

func (*Snapshot) GetTreeWithVersion

func (s *Snapshot) GetTreeWithVersion(treename string, version uint64) (*Tree, error)

Load a versioned tree from the store; all trees have their own version number

func (*Snapshot) GetVersion

func (s *Snapshot) GetVersion() uint64

Gets the snapshot version number

type Store

type Store struct {
	// contains filtered or unexported fields
}

Store is the backend which is used to store the data in serialized form on disk. The data is stored in split files; the total number of files can be 4 billion, and each file is up to 2 GB in size. This limit has been placed to support FAT32, which restricts files to 4 GB.

func NewDiskStore

func NewDiskStore(basepath string) (*Store, error)

Opens or creates a disk-based store; if the directory pre-exists, it is used as-is. Since this is an append-only key-value store, no data is ever deleted.

func NewMemStore

func NewMemStore() (*Store, error)

Starts a new memory-backed store, which may be useful for testing and other temporary use cases.

func (*Store) Close

func (store *Store) Close()

func (*Store) LoadSnapshot

func (store *Store) LoadSnapshot(version uint64) (*Snapshot, error)

Load a specific snapshot from the store. As a special case, version 0 loads the most recent version. Note that the 0th tree is not stored on disk; also note that while commits are being made, version numbers may change.

func (*Store) ReadVersionData

func (s *Store) ReadVersionData(version uint64) (findex uint32, fpos uint32, err error)

versions are 1 based

type Tree

type Tree struct {
	Tags []string // tags used while commit, will get cleaned after commit
	// contains filtered or unexported fields
}

Tree is the structure which is the end result. TODO: the tree does not cache anything currently; just caching the top-level tree entries would increase speed by a large multiplicative factor.

func (*Tree) Commit

func (t *Tree) Commit(tags ...string) error

Commit the current version of the tree to disk

func (*Tree) Cursor

func (t *Tree) Cursor() Cursor

Cursor returns a cursor, used as an iterator that can traverse over all key/value pairs in a tree in hash-sorted order.

func (*Tree) Delete

func (t *Tree) Delete(key []byte) error

delete a specific key from the tree

func (*Tree) Discard

func (t *Tree) Discard() error

Reload the tree from disk, causing all current (uncommitted) changes to be discarded.

func (*Tree) GenerateProof

func (t *Tree) GenerateProof(key []byte) (*Proof, error)

Generate a proof for any key, which can be used to prove whether the key exists or not. Please note that the tree root hash (tree.Hash()) is not part of the proof structure and must be available to the verifier separately. For example, in an encrypted blockchain, the entire state is carried forward from block to block; this state can be queried from a number of sources and then verified.

func (*Tree) Get

func (t *Tree) Get(key []byte) ([]byte, error)

Get the value associated with a specific key. TODO: we need to expose this in other forms, so that memory allocations and better error detection can be done.

func (*Tree) GetKeyValueFromHash

func (t *Tree) GetKeyValueFromHash(keyhashc []byte) (int, []byte, []byte, error)

Used when we only have a key hash and need to get both the key and value

func (*Tree) GetKeyValueFromKey

func (t *Tree) GetKeyValueFromKey(key []byte) (int, []byte, []byte, error)

Used when we have a key and need to get both the key and value

func (*Tree) GetName

func (t *Tree) GetName() string

func (*Tree) GetParentVersion

func (t *Tree) GetParentVersion() uint64

Get the parent version number of the tree from which this tree was derived. Parent versions might not be sequential, but they will be monotonically increasing; this can be used to build out a DAG.

func (*Tree) GetVersion

func (t *Tree) GetVersion() uint64

Get current version number of tree

func (*Tree) Hash

func (t *Tree) Hash() (h [HASHSIZE]byte, err error)

Gives the Merkle hash of the entire tree

func (*Tree) IsDirty

func (t *Tree) IsDirty() bool

Check whether the tree is currently dirty or not

func (*Tree) KeyCountEstimate

func (t *Tree) KeyCountEstimate() (count int64)

Estimates the number of keys that exist in the tree. Very crude; used only for display purposes.

func (*Tree) Put

func (t *Tree) Put(key, value []byte) error

Put a key-value pair in the tree; if the key exists, its value is overwritten. TODO: it should ignore duplicate key-value pairs, e.g. by first doing a Get and then a Put.

func (*Tree) Random

func (t *Tree) Random() (k, v []byte, err error)

Random returns a random key-value pair from the tree, provided the tree has keys. Limitations: a tree containing 0 key-value pairs will return an error, and randomness depends on the number of keys (e.g. if the tree contains only 1 pair, that pair is always returned).

Directories

Path Synopsis
cmd
examples
