badger

package module
v0.8.1
This package is not in the latest version of its module.
Published: Oct 30, 2017 License: Apache-2.0 Imports: 26 Imported by: 0

README

Badger

An embeddable, persistent, simple and fast key-value (KV) store, written purely in Go. It's meant to be a performant alternative to non-Go-based key-value stores like RocksDB.


About

Badger was written out of frustration with existing KV stores, which are either written in pure Go and slow, or fast but require the use of Cgo. Badger aims to provide equal or better speed compared to industry-leading KV stores (like RocksDB), while keeping the entire code base in pure Go.

  1. Introducing Badger: A fast key-value store written natively in Go
  2. Make Badger crash resilient with ALICE


Installation and Usage

go get -v github.com/dgraph-io/badger

If you want to run tests, also fetch the testing dependencies by passing in the -t flag.

go get -t -v github.com/dgraph-io/badger

From here, follow the docs for usage.
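The package documentation further below contains a full example; as a quick start, here is a minimal sketch that uses only the API documented in this page (NewKV, Set, Close). The temporary directory and log.Fatal error handling are purely for illustration.

package main

import (
	"io/ioutil"
	"log"

	"github.com/dgraph-io/badger"
)

func main() {
	// Throwaway directory for both the LSM tree (Dir) and the value log (ValueDir).
	dir, err := ioutil.TempDir("", "badger")
	if err != nil {
		log.Fatal(err)
	}

	opt := badger.DefaultOptions
	opt.Dir = dir
	opt.ValueDir = dir

	kv, err := badger.NewKV(&opt)
	if err != nil {
		log.Fatal(err)
	}
	defer kv.Close() // Close flushes pending updates to disk.

	if err := kv.Set([]byte("answer"), []byte("42"), 0x00); err != nil {
		log.Fatal(err)
	}
}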

Documentation

Badger documentation is located at godoc.org.

Design Goals

Badger has these design goals in mind:

  • Write it purely in Go.
  • Use the latest research to build the fastest KV store for data sets spanning terabytes.
  • Keep it simple, stupid. No support for transactions, versioning or snapshots -- anything that can be done outside of the store should be done outside.
  • Optimize for SSDs (more below).

Non-Goals

  • Try to be a database.

Users

Badger is currently being used by Dgraph.

If you're using Badger in a project, let us know.

Design

Badger is based on the WiscKey paper by the University of Wisconsin, Madison.

In the simplest terms, keys are stored in the LSM tree along with pointers to values, which are stored in write-ahead log files, a.k.a. value logs. Keys are typically kept in RAM, while values are served directly off SSD.

Optimizations for SSD

SSDs are best at doing serial writes (like HDD) and random reads (unlike HDD). Each write reduces the lifetime of an SSD, so Badger aims to reduce the write amplification of a typical LSM tree.

It achieves this by separating the keys from the values. Keys tend to be smaller in size and are stored in the LSM tree. Values (which tend to be larger in size) are stored in value logs, which also double as write-ahead logs for fault tolerance.

Only a pointer to the value is stored along with the key, which significantly reduces the size of each KV pair in the LSM tree. This allows storing many more KV pairs per table. For example, a table of size 64MB can store 2 million KV pairs, assuming an average key size of 16 bytes and a value pointer of 16 bytes (with prefix diffing in Badger, the average key size stored in a table would be even lower). Thus, fewer compactions are required to achieve stability of the LSM tree, which results in fewer writes (all writes being serial).
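To make the arithmetic explicit, here is the back-of-the-envelope calculation as a small Go constant block; the 16-byte key and pointer sizes are just the assumptions stated above.

const (
	tableSize    = 64 << 20 // 64MB table
	avgKeySize   = 16       // bytes, assumed average key size
	valuePointer = 16       // bytes, assumed value pointer size

	// 67,108,864 / 32 = 2,097,152, i.e. roughly 2 million KV pairs per table.
	entriesPerTable = tableSize / (avgKeySize + valuePointer)
)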

It might be a good idea on ext4 to periodically invoke fstrim in case the file system does not quickly reuse space from deleted files.

Nature of LSM trees

Because only keys (and value pointers) are stored in the LSM tree, Badger generates much smaller LSM trees. Even for huge datasets, these smaller trees can fit nicely in RAM, allowing for much quicker access to and iteration through keys. For random gets, keys can be quickly looked up from within RAM, giving access to the value pointer. Then only a single pointed read from SSD (random read) is done to retrieve the value. This improves random get performance significantly compared to the traditional LSM tree design used by other KV stores.

Comparisons

Feature             | Badger                                       | RocksDB            | BoltDB
Design              | LSM tree with value log                      | LSM tree only      | B+ tree
High RW Performance | Yes                                          | Yes                | No
Designed for SSDs   | Yes (with latest research 1)                 | Not specifically 2 | No
Embeddable          | Yes                                          | Yes                | Yes
Sorted KV access    | Yes                                          | Yes                | Yes
Pure Go (no Cgo)    | Yes                                          | No                 | Yes
Transactions        | No (but provides compare and set operations) | Yes (but non-ACID) | Yes, ACID
Snapshots           | No                                           | Yes                | Yes

1 Badger is based on a paper called WiscKey by the University of Wisconsin, Madison, which saw big wins from separating values from keys, significantly reducing write amplification compared to a typical LSM tree.

2 RocksDB is an SSD-optimized version of LevelDB, which was designed specifically for rotating disks. As such, RocksDB's design isn't aimed at SSDs.

Benchmarks

RocksDB (Link to blog post)

RocksDB Benchmarks

Crash Consistency

Badger is crash resilient. Any update which was applied successfully before a crash will be available after the crash. Badger achieves this via its value log.

Badger's value log is a write-ahead log (WAL). Every update to Badger is written to this log first, before being applied to the LSM tree. Badger maintains a monotonically increasing pointer (head) in the LSM tree, pointing to the last update offset in the value log. As and when an LSM table is persisted, the head gets persisted along with it. Thus, the head always points to the latest persisted offset in the value log. Every time Badger opens the directory, it first replays the updates after the head in order, bringing them back into the LSM tree, before it allows any reads or writes. This technique ensures data persistence in the face of crashes.

Furthermore, Badger can be run with the SyncWrites option, which opens the WAL with the O_DSYNC flag, syncing writes to disk on every write.
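For durability-sensitive workloads, a minimal sketch of enabling this, using the DefaultOptions and NewKV API documented below, looks like the following (the directory path is a placeholder):

opt := badger.DefaultOptions
opt.Dir = "/path/to/badger"      // placeholder path
opt.ValueDir = "/path/to/badger" // placeholder path
opt.SyncWrites = true            // open the value log with O_DSYNC; every write is synced to disk

kv, err := badger.NewKV(&opt)
if err != nil {
	// handle error
}
defer kv.Close()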

Frequently Asked Questions

  • My writes are really slow. Why?

You're probably doing writes serially, using Set. To get the best write performance, use BatchSet and call it concurrently from multiple goroutines; a sketch follows after this FAQ.

  • I don't see any disk write. Why?

If you're using Badger with SyncWrites=false, your writes might not be written to the value log and won't get synced to disk immediately. Writes to the LSM tree are done in memory first, before they get compacted to disk. The compaction only happens once MaxTableSize has been reached. So, if you're doing a few writes and then checking, you might not see anything on disk. Once you Close the store, you'll see these writes on disk.

  • Which instances should I use for Badger?

We recommend using instances which provide local SSD storage, without any limit on the maximum IOPS. In AWS, these are storage-optimized instances like i3. They provide local SSDs which can easily clock 100K IOPS over 4KB blocks.

  • Are there any Go specific settings that I should use?

We highly recommend setting a high number for GOMAXPROCS, which allows Go to observe the full IOPS throughput provided by modern SSDs. In Dgraph, we have set it to 128. For more details, see this thread.
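As a rough sketch of the concurrent BatchSet pattern referred to in the first answer above (kv is assumed to be an already-open *badger.KV; the goroutine count, batch size and key format are arbitrary illustrations):

var wg sync.WaitGroup
for w := 0; w < 8; w++ { // arbitrary number of writer goroutines
	wg.Add(1)
	go func(w int) {
		defer wg.Done()
		// Build a batch of entries and write them with a single BatchSet call.
		entries := make([]*badger.Entry, 0, 1000)
		for i := 0; i < 1000; i++ {
			key := []byte(fmt.Sprintf("key-%d-%d", w, i))
			entries = append(entries, &badger.Entry{Key: key, Value: []byte("value")})
		}
		if err := kv.BatchSet(entries); err != nil {
			log.Println(err) // request-level error
		}
		for _, e := range entries {
			if e.Error != nil {
				log.Println(e.Error) // per-entry error
			}
		}
	}(w)
}
wg.Wait()

GOMAXPROCS can be raised either through the environment or programmatically at startup; 128 is the value mentioned above for Dgraph:

runtime.GOMAXPROCS(128) // equivalent to exporting GOMAXPROCS=128 before starting the process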


Documentation

Overview

Package badger implements an embeddable, simple and fast key-value store, written in pure Go. It is designed to be highly performant for both reads and writes simultaneously. Badger uses an LSM tree, along with a value log, to separate keys from values, hence reducing both write amplification and the size of the LSM tree. This allows the LSM tree to be served entirely from RAM, while the values are served from SSD. As values get larger, this results in increasingly faster Set() and Get() performance than top-of-the-line KV stores on the market today.

Example
package main

import (
	"fmt"
	"io/ioutil"

	"github.com/dgraph-io/badger"
)

func main() {
	opt := badger.DefaultOptions
	// Use a throwaway directory for both the LSM tree (Dir) and the value log (ValueDir).
	dir, _ := ioutil.TempDir("", "badger")
	opt.Dir = dir
	opt.ValueDir = dir
	// Errors are ignored here for brevity.
	kv, _ := badger.NewKV(&opt)
	defer kv.Close() // Close flushes pending updates to disk.

	key := []byte("hello")

	// Set the value for the key; the trailing 0x00 is the optional userMeta byte.
	kv.Set(key, []byte("world"), 0x00)
	fmt.Printf("SET %s world\n", key)

	var item badger.KVItem
	if err := kv.Get(key, &item); err != nil {
		fmt.Printf("Error while getting key: %q", key)
		return
	}
	var val []byte
	err := item.Value(func(v []byte) error {
		val = make([]byte, len(v))
		copy(val, v)
		return nil
	})
	if err != nil {
		fmt.Printf("Error while getting value for key: %q", key)
		return
	}

	fmt.Printf("GET %s %s\n", key, val)

	// Expected to fail: 100 does not match the key's current CAS counter,
	// so Badger returns ErrCasMismatch.
	if err := kv.CompareAndSet(key, []byte("venus"), 100); err != nil {
		fmt.Println("CAS counter mismatch")
	} else {
		if err = kv.Get(key, &item); err != nil {
			fmt.Printf("Error while getting key: %q", key)
		}

		err := item.Value(func(v []byte) error {
			val = make([]byte, len(v))
			copy(val, v)
			return nil
		})

		if err != nil {
			fmt.Printf("Error while getting value for key: %q", key)
			return
		}

		fmt.Printf("Set to %s\n", val)
	}
	// Succeeds: item.Counter() is the CAS counter read in the Get above.
	if err := kv.CompareAndSet(key, []byte("mars"), item.Counter()); err == nil {
		fmt.Println("Set to mars")
	} else {
		fmt.Printf("Unsuccessful write. Got error: %v\n", err)
	}

}
Output:

SET hello world
GET hello world
CAS counter mismatch
Set to mars


Constants

const (
	BitDelete       byte  = 1 // Set if the key has been deleted.
	BitValuePointer byte  = 2 // Set if the value is NOT stored directly next to key.
	BitUnused       byte  = 4
	BitSetIfAbsent  byte  = 8 // Set if the key is set using SetIfAbsent.
	M               int64 = 1 << 20
)

Values have their first byte being byteData or byteDelete. This helps us distinguish between a key that has never been seen and a key that has been explicitly deleted.

const (
	// ManifestFilename is the filename for the manifest file.
	ManifestFilename = "MANIFEST"
)

Variables

var (
	// ErrRetry is returned when a log file containing the value is not found.
	// This usually indicates that it may have been garbage collected, and the
	// operation needs to be retried.
	ErrRetry = errors.New("Unable to find log file. Please retry")

	// ErrCasMismatch is returned when a CompareAndSet operation has failed due
	// to a counter mismatch.
	ErrCasMismatch = errors.New("CompareAndSet failed due to counter mismatch")

	// ErrKeyExists is returned when the SetIfAbsent metadata bit is set, but
	// the key already exists in the store.
	ErrKeyExists = errors.New("SetIfAbsent failed since key already exists")

	// ErrThresholdZero is returned if threshold is set to zero, and value log GC is called.
	// In such a case, GC can't be run.
	ErrThresholdZero = errors.New(
		"Value log GC can't run because threshold is set to zero")

	// ErrNoRewrite is returned if a call for value log GC doesn't result in a log file rewrite.
	ErrNoRewrite = errors.New(
		"Value log GC attempt didn't result in any cleanup")

	// ErrRejected is returned if a value log GC is called either while another GC is running, or
	// after KV::Close has been called.
	ErrRejected = errors.New("Value log GC request rejected")

	// ErrInvalidRequest is returned if the user request is invalid.
	ErrInvalidRequest = errors.New("Invalid request")
)
var DefaultIteratorOptions = IteratorOptions{
	PrefetchValues: true,
	PrefetchSize:   100,
	Reverse:        false,
}

DefaultIteratorOptions contains default options when iterating over Badger key-value stores.

var DefaultOptions = Options{
	DoNotCompact:        false,
	LevelOneSize:        256 << 20,
	LevelSizeMultiplier: 10,
	TableLoadingMode:    options.LoadToRAM,

	MaxLevels:               7,
	MaxTableSize:            64 << 20,
	NumCompactors:           3,
	NumLevelZeroTables:      5,
	NumLevelZeroTablesStall: 10,
	NumMemtables:            5,
	SyncWrites:              false,

	ValueLogFileSize: 1 << 30,
	ValueThreshold:   20,
}

DefaultOptions sets a list of recommended options for good performance. Feel free to modify these to suit your needs.

var ErrInvalidDir = errors.New("Invalid Dir, directory does not exist")

ErrInvalidDir is returned when Badger cannot find the directory from where it is supposed to load the key-value store.

var ErrValueLogSize = errors.New("Invalid ValueLogFileSize, must be between 1MB and 2GB")

ErrValueLogSize is returned when opt.ValueLogFileSize option is not within the valid range.

Functions

func OpenDir

func OpenDir(path string) (*os.File, error)

OpenDir opens a directory for syncing.

Types

type DirectoryLockGuard

type DirectoryLockGuard struct {
	// contains filtered or unexported fields
}

DirectoryLockGuard holds a lock on a directory and a pid file inside. The pid file isn't part of the locking mechanism, it's just advisory.

func AcquireDirectoryLock

func AcquireDirectoryLock(dirPath string, pidFileName string) (*DirectoryLockGuard, error)

AcquireDirectoryLock gets an exclusive lock on the directory (using flock). It writes our pid to dirPath/pidFileName for convenience.

func (*DirectoryLockGuard) Release

func (guard *DirectoryLockGuard) Release() error

Release deletes the pid file and releases our lock on the directory.

type Entry

type Entry struct {
	Key             []byte
	Meta            byte
	UserMeta        byte
	Value           []byte
	CASCounterCheck uint64 // If nonzero, we will check if existing casCounter matches.
	Error           error  // Error if any.
	// contains filtered or unexported fields
}

Entry provides Key, Value and, if required, CASCounterCheck to the kv.BatchSet() API. If CASCounterCheck is provided, it is compared against the current casCounter assigned to this key-value pair; the Set is done on this key only if the counters match.

func EntriesDelete

func EntriesDelete(s []*Entry, key []byte) []*Entry

EntriesDelete adds a Del to the list of entries.

func EntriesSet

func EntriesSet(s []*Entry, key, val []byte) []*Entry

EntriesSet adds a Set to the list of entries. Exposing this so that the user does not have to construct the Entry directly.
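A brief sketch of how these helpers feed into kv.BatchSet (assuming kv is an open *badger.KV; the keys and values are illustrative):

entries := make([]*badger.Entry, 0, 3)
entries = badger.EntriesSet(entries, []byte("k1"), []byte("v1"))
entries = badger.EntriesSet(entries, []byte("k2"), []byte("v2"))
entries = badger.EntriesDelete(entries, []byte("stale-key"))

if err := kv.BatchSet(entries); err != nil {
	// request-level error
}
for _, e := range entries {
	if e.Error != nil {
		// per-entry error, e.g. a CAS counter mismatch
	}
}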

type Iterator

type Iterator struct {
	// contains filtered or unexported fields
}

Iterator helps iterating over the KV pairs in a lexicographically sorted order.

func (*Iterator) Close

func (it *Iterator) Close()

Close would close the iterator. It is important to call this when you're done with iteration.

func (*Iterator) Item

func (it *Iterator) Item() *KVItem

Item returns a pointer to the current KVItem. This item is only valid until it.Next() gets called.

func (*Iterator) Next

func (it *Iterator) Next()

Next would advance the iterator by one. Always check it.Valid() after a Next() to ensure you have access to a valid it.Item().

func (*Iterator) Rewind

func (it *Iterator) Rewind()

Rewind would rewind the iterator cursor all the way to zero-th position, which would be the smallest key if iterating forward, and largest if iterating backward. It does not keep track of whether the cursor started with a Seek().

func (*Iterator) Seek

func (it *Iterator) Seek(key []byte)

Seek would seek to the provided key if present. If absent, it would seek to the next smallest key greater than the provided key if iterating in the forward direction. Behavior would be reversed if iterating backwards.

func (*Iterator) Valid

func (it *Iterator) Valid() bool

Valid returns false when iteration is done.

func (*Iterator) ValidForPrefix

func (it *Iterator) ValidForPrefix(prefix []byte) bool

ValidForPrefix returns false when iteration is done or when the current key is not prefixed by the specified prefix.
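A sketch of prefix-scoped iteration built from Seek and ValidForPrefix (assuming kv is an open *badger.KV; the "user/" prefix is illustrative):

prefix := []byte("user/")
itr := kv.NewIterator(badger.DefaultIteratorOptions)
defer itr.Close()
for itr.Seek(prefix); itr.ValidForPrefix(prefix); itr.Next() {
	item := itr.Item()
	// Keys arrive in lexicographically sorted order; copy them if needed beyond this iteration.
	fmt.Printf("key=%s\n", item.Key())
}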

type IteratorOptions

type IteratorOptions struct {
	// Indicates whether we should prefetch values during iteration and store them.
	PrefetchValues bool
	// How many KV pairs to prefetch while iterating. Valid only if PrefetchValues is true.
	PrefetchSize int
	Reverse      bool // Direction of iteration. False is forward, true is backward.
}

IteratorOptions is used to set options when iterating over Badger key-value stores.

type KV

type KV struct {
	sync.RWMutex // Guards list of inmemory tables, not individual reads and writes.
	// contains filtered or unexported fields
}

KV provides the various functions required to interact with Badger. KV is thread-safe.

func NewKV

func NewKV(optParam *Options) (out *KV, err error)

NewKV returns a new KV object.

func (*KV) Backup

func (db *KV) Backup(w io.Writer) error

func (*KV) BatchSet

func (s *KV) BatchSet(entries []*Entry) error

BatchSet applies a list of badger.Entry. If a request-level error occurs, it will be returned. Errors are also set on each Entry and must be checked individually.

Check(kv.BatchSet(entries))
for _, e := range entries {
   Check(e.Error)
}

func (*KV) BatchSetAsync

func (s *KV) BatchSetAsync(entries []*Entry, f func(error))

BatchSetAsync is the asynchronous version of BatchSet. It accepts a callback function which is called when all the sets are complete. If a request level error occurs, it will be passed back via the callback. The caller should still check for errors set on each Entry individually.

kv.BatchSetAsync(entries, func(err error) {
   Check(err)
   for _, e := range entries {
      Check(e.Error)
   }
})
Example
package main

import (
	"fmt"
	"io/ioutil"
	"sync"

	"github.com/dgraph-io/badger"
)

func main() {
	opt := badger.DefaultOptions
	dir, _ := ioutil.TempDir("", "badger")
	opt.Dir = dir
	opt.SyncWrites = true
	opt.ValueDir = dir
	kv, _ := badger.NewKV(&opt)
	wg := new(sync.WaitGroup)
	wb := make([]*badger.Entry, 0, 100)

	wg.Add(1)
	// Async writes would be useful if you want to write some key-value pairs without waiting
	// for them to be complete and perform some cleanup when they are written.

	// In Dgraph we keep flushing posting lists periodically to badger. We do it in an async
	// manner and provide a callback which can do the cleanup when the writes are done.
	f := func(err error) {
		defer wg.Done()
		if err != nil {
			// At this point you can retry writing keys or send error over a channel to handle
			// in some other goroutine.
			fmt.Printf("Got error: %+v\n", err)
		}

		// Check for error in entries which could be non-nil if the user supplies a CasCounter.
		for _, e := range wb {
			if e.Error != nil {
				fmt.Printf("Got error: %+v\n", e.Error)
			}
		}

		// You can do cleanup now. Like deleting keys from cache.
		fmt.Println("All async sets complete.")
	}

	for i := 0; i < 100; i++ {
		k := []byte(fmt.Sprintf("%09d", i))
		wb = append(wb, &badger.Entry{
			Key:   k,
			Value: k,
		})
	}
	kv.BatchSetAsync(wb, f)
	fmt.Println("Finished writing keys to badger.")
	wg.Wait()

}
Output:

Finished writing keys to badger.
All async sets complete.

func (*KV) Close

func (s *KV) Close() (err error)

Close closes a KV. It's crucial to call it to ensure all the pending updates make their way to disk.

func (*KV) CompareAndDelete

func (s *KV) CompareAndDelete(key []byte, casCounter uint64) error

CompareAndDelete deletes a key, ensuring that it has not been changed since the last read. If the existing key has a different casCounter, this would not delete the key and would return an error.

func (*KV) CompareAndDeleteAsync

func (s *KV) CompareAndDeleteAsync(key []byte, casCounter uint64, f func(error))

CompareAndDeleteAsync is the asynchronous version of CompareAndDelete. It accepts a callback function which is called when the CompareAndDelete completes. Any error encountered during execution is passed as an argument to the callback function.

func (*KV) CompareAndSet

func (s *KV) CompareAndSet(key []byte, val []byte, casCounter uint64) error

CompareAndSet sets the given value, ensuring that no other Set operation has happened since the last read. If the key has a different casCounter, this would not update the key and would return an error.

func (*KV) CompareAndSetAsync

func (s *KV) CompareAndSetAsync(key []byte, val []byte, casCounter uint64, f func(error))

CompareAndSetAsync is the asynchronous version of CompareAndSet. It accepts a callback function which is called when the CompareAndSet completes. Any error encountered during execution is passed as an argument to the callback function.

func (*KV) Delete

func (s *KV) Delete(key []byte) error

Delete deletes a key. Exposing this so that the user does not have to specify the Entry directly. For example, BitDelete seems internal to badger.

func (*KV) DeleteAsync

func (s *KV) DeleteAsync(key []byte, f func(error))

DeleteAsync is the asynchronous version of Delete. It calls the callback function after deletion is complete. Any error encountered during the execution is passed as an argument to the callback function.

func (*KV) Exists

func (s *KV) Exists(key []byte) (bool, error)

Exists checks if a key exists. It returns true if the key exists, otherwise false. If err is not nil, an error occurred during the key lookup and the existence of the key is unknown.

func (*KV) Get

func (s *KV) Get(key []byte, item *KVItem) error

Get looks for key and returns a KVItem. If key is not found, item.Value() is nil.

func (*KV) NewIterator

func (s *KV) NewIterator(opt IteratorOptions) *Iterator

NewIterator returns a new iterator. Depending upon the options, either only keys, or both key-value pairs would be fetched. The keys are returned in lexicographically sorted order. Usage:

  opt := badger.DefaultIteratorOptions
  itr := kv.NewIterator(opt)
  for itr.Rewind(); itr.Valid(); itr.Next() {
    item := itr.Item()
    key := item.Key()
    var val []byte
    err := item.Value(func(v []byte) error {
        val = make([]byte, len(v))
        copy(val, v)
        return nil
    })
    // item.Value could block while the value is fetched from the value log.
    // For key-only iteration, set opt.PrefetchValues to false, and don't call
    // item.Value at all.
    if err != nil {
        // handle error
    }

    // Remember that both key and val become invalid in the next iteration of
    // the loop. So, if you need access to them outside, copy or parse them.
  }
  itr.Close()

func (*KV) RunValueLogGC

func (s *KV) RunValueLogGC(discardRatio float64) error

RunValueLogGC would trigger a value log garbage collection with no guarantees that a call would result in a space reclaim. Every run would in the best case rewrite only one log file. So, repeated calls may be necessary.

The way it currently works is that it would randomly pick up a value log file, and sample it. If the sample shows that we can discard at least discardRatio space of that file, it would be rewritten. Else, an ErrNoRewrite error would be returned indicating that the GC didn't result in any file rewrite.

We recommend setting discardRatio to 0.5, thus indicating that a file be rewritten if half the space can be discarded. This results in a lifetime value log write amplification of 2 (1 from original write + 0.5 rewrite + 0.25 + 0.125 + ... = 2). Setting it to a higher value would result in fewer space reclaims, while setting it to a lower value would result in more space reclaims at the cost of increased activity on the LSM tree. discardRatio must be in the range (0.0, 1.0), both endpoints excluded, otherwise an ErrInvalidRequest is returned.

Only one GC is allowed at a time. If another value log GC is running, or KV has been closed, this would return an ErrRejected.

Note: Every time GC is run, it would produce a spike of activity on the LSM tree.
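A hedged sketch of driving this from a periodic loop (the 10-minute interval is an arbitrary choice; the 0.5 ratio follows the recommendation above; kv is an open *badger.KV):

ticker := time.NewTicker(10 * time.Minute)
defer ticker.Stop()
for range ticker.C {
	switch err := kv.RunValueLogGC(0.5); err {
	case nil, badger.ErrNoRewrite:
		// nil: a log file was rewritten; ErrNoRewrite: nothing worth reclaiming this round.
	case badger.ErrRejected:
		return // another GC is already running, or the KV has been closed
	default:
		log.Println(err)
	}
}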

func (*KV) Set

func (s *KV) Set(key, val []byte, userMeta byte) error

Set sets the provided value for a given key. If key is not present, it is created. If it is present, the existing value is overwritten with the one provided. Along with key and value, Set can also take an optional userMeta byte. This byte is stored alongside the key, and can be used as an aid to interpret the value or store other contextual bits corresponding to the key-value pair.
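A small sketch of the userMeta byte round-tripping through Set, Get and KVItem.UserMeta (the 0x01 flag, key and payload are purely illustrative; kv is an open *badger.KV):

const metaJSON byte = 0x01 // caller-defined convention: value is JSON-encoded

payload := []byte(`{"name":"badger"}`)
if err := kv.Set([]byte("profile/42"), payload, metaJSON); err != nil {
	// handle error
}

var item badger.KVItem
if err := kv.Get([]byte("profile/42"), &item); err != nil {
	// handle error
}
if item.UserMeta() == metaJSON {
	// interpret the value as JSON, per our convention
}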

func (*KV) SetAsync

func (s *KV) SetAsync(key, val []byte, userMeta byte, f func(error))

SetAsync is the asynchronous version of Set. It accepts a callback function which is called when the set is complete. Any error encountered during execution is passed as an argument to the callback function.

func (*KV) SetIfAbsent

func (s *KV) SetIfAbsent(key, val []byte, userMeta byte) error

SetIfAbsent sets the value of a key if the key is not present. If it is present, it returns the ErrKeyExists error.

func (*KV) SetIfAbsentAsync

func (s *KV) SetIfAbsentAsync(key, val []byte, userMeta byte, f func(error)) error

SetIfAbsentAsync is the asynchronous version of SetIfAbsent. It accepts a callback function which is called when the operation is complete. Any error encountered during execution is passed as an argument to the callback function.

type KVItem

type KVItem struct {
	// contains filtered or unexported fields
}

KVItem is returned during iteration. Both the Key() and Value() outputs are only valid until iterator.Next() is called.

func (*KVItem) Counter

func (item *KVItem) Counter() uint64

Counter returns the CAS counter associated with the value.

func (*KVItem) EstimatedSize

func (item *KVItem) EstimatedSize() int64

EstimatedSize returns approximate size of the key-value pair.

This can be called while iterating through a store to quickly estimate the size of a range of key-value pairs (without fetching the corresponding values).

func (*KVItem) Key

func (item *KVItem) Key() []byte

Key returns the key. Remember to copy if you need to access it outside the iteration loop.

func (*KVItem) UserMeta

func (item *KVItem) UserMeta() byte

UserMeta returns the userMeta set by the user. Typically, this byte, optionally set by the user, is used to interpret the value.

func (*KVItem) Value

func (item *KVItem) Value(consumer func([]byte) error) error

Value retrieves the value of the item from the value log. It calls the consumer function with a slice argument representing the value. In case of error, the consumer function is not called.

Note that the call to the consumer func happens synchronously.

Remember to parse or copy it if you need to reuse it. DO NOT modify or append to this slice; it would result in a panic.

type LevelManifest

type LevelManifest struct {
	Tables map[uint64]struct{} // Set of table id's
}

LevelManifest contains information about LSM tree levels in the MANIFEST file.

type Manifest

type Manifest struct {
	Levels []LevelManifest
	Tables map[uint64]TableManifest

	// Contains total number of creation and deletion changes in the manifest -- used to compute
	// whether it'd be useful to rewrite the manifest.
	Creations int
	Deletions int
}

Manifest represents the contents of the MANIFEST file in a Badger store.

The MANIFEST file describes the startup state of the db -- all LSM files and what level they're at.

It consists of a sequence of ManifestChangeSet objects. Each of these is treated atomically, and contains a sequence of ManifestChange's (file creations/deletions) which we use to reconstruct the manifest at startup.

func ReplayManifestFile

func ReplayManifestFile(fp *os.File) (ret Manifest, truncOffset int64, err error)

ReplayManifestFile reads the manifest file and constructs two manifest objects. (We need one immutable copy and one mutable copy of the manifest. Easiest way is to construct two of them.) Also, returns the last offset after a completely read manifest entry -- the file must be truncated at that point before further appends are made (if there is a partial entry after that). In normal conditions, truncOffset is the file size.

type Options

type Options struct {
	// 1. Mandatory flags
	// -------------------
	// Directory to store the data in. Should exist and be writable.
	Dir string
	// Directory to store the value log in. Can be the same as Dir. Should
	// exist and be writable.
	ValueDir string

	// 2. Frequently modified flags
	// -----------------------------
	// Sync all writes to disk. Setting this to true would slow down data
	// loading significantly.
	SyncWrites bool

	// How should LSM tree be accessed.
	TableLoadingMode options.FileLoadingMode

	// 3. Flags that user might want to review
	// ----------------------------------------
	// The following affect all levels of LSM tree.
	MaxTableSize        int64 // Each table (or file) is at most this size.
	LevelSizeMultiplier int   // Equals SizeOf(Li+1)/SizeOf(Li).
	MaxLevels           int   // Maximum number of levels of compaction.
	// If value size >= this threshold, only store value offsets in tree.
	ValueThreshold int
	// Maximum number of tables to keep in memory, before stalling.
	NumMemtables int
	// The following affect how we handle LSM tree L0.
	// Maximum number of Level 0 tables before we start compacting.
	NumLevelZeroTables int

	// If we hit this number of Level 0 tables, we will stall until L0 is
	// compacted away.
	NumLevelZeroTablesStall int

	// Maximum total size for L1.
	LevelOneSize int64

	// Size of single value log file.
	ValueLogFileSize int64

	// Number of compaction workers to run concurrently.
	NumCompactors int

	// 4. Flags for testing purposes
	// ------------------------------
	DoNotCompact bool // Stops LSM tree from compactions.
	// contains filtered or unexported fields
}

Options are params for creating a DB object.

type TableManifest

type TableManifest struct {
	Level uint8
}

TableManifest contains information about a specific table in the LSM tree.

Directories

  • cmd/badger_backup: Usage: badger_backup --dir d [--value-dir v] [--backup-file b]. This command makes a version-independent backup of the Badger database in a single file.
  • cmd/badger_info: Usage: badger_info --dir x [--value-dir y]. This command prints information about the badger key-value store.
  • protos: Package protos is a generated protocol buffer package.
