rocksdb

package module
Version: v0.0.0-...-b65d32c
Published: Feb 19, 2015 License: Apache-2.0 Imports: 3 Imported by: 1

README

A Go wrapper for RocksDB

See RocksDB.

This code is a fork of github.com/DanielMorsing/rocksdb to expose more RocksDB options.

Documentation

Overview

Package rocksdb is a fork of the levigo package with the identifiers changed to target rocksdb and the package name changed to rocksdb.

This was accomplished by running a sed script over the source code. Many thanks to Jeff Hodges for creating levigo without which this package would not exist.

Original package documentation follows.

Package rocksdb provides the ability to create and access LevelDB databases.

rocksdb.Open opens and creates databases.

opts := rocksdb.NewOptions()
opts.SetCache(rocksdb.NewLRUCache(3<<30))
opts.SetCreateIfMissing(true)
db, err := rocksdb.Open("/path/to/db", opts)

The DB struct returned by Open provides DB.Get, DB.Put and DB.Delete to modify and query the database.

ro := rocksdb.NewReadOptions()
wo := rocksdb.NewWriteOptions()
// if ro and wo are not used again, be sure to Close them.
data, err := db.Get(ro, []byte("key"))
...
err = db.Put(wo, []byte("anotherkey"), data)
...
err = db.Delete(wo, []byte("key"))

For bulk reads, use an Iterator. If you want to avoid disturbing your live traffic while doing the bulk read, be sure to call SetFillCache(false) on the ReadOptions you use when creating the Iterator.

ro := rocksdb.NewReadOptions()
ro.SetFillCache(false)
it := db.NewIterator(ro)
defer it.Close()
it.Seek(mykey)
for ; it.Valid(); it.Next() {
	munge(it.Key(), it.Value())
}
if err := it.GetError(); err != nil {
	...
}

Batched, atomic writes can be performed with a WriteBatch and DB.Write.

wb := rocksdb.NewWriteBatch()
// defer wb.Close or use wb.Clear and reuse.
wb.Delete([]byte("removed"))
wb.Put([]byte("added"), []byte("data"))
wb.Put([]byte("anotheradded"), []byte("more"))
err := db.Write(wo, wb)

If your working dataset does not fit in memory, you'll want to add a bloom filter to your database. NewBloomFilter and Options.SetFilterPolicy are what you want. NewBloomFilter's argument is the number of bits per key in your database to use in the filter.

filter := rocksdb.NewBloomFilter(10)
opts.SetFilterPolicy(filter)
db, err := rocksdb.Open("/path/to/db", opts)

If you're using a custom comparator in your code, be aware you may have to make your own filter policy object.

This documentation is not a complete discussion of LevelDB. Please read the LevelDB documentation <http://code.google.com/p/rocksdb> for information on its operation. You'll find lots of goodies there.

Index

Constants

View Source
const (
	NoCompression     = CompressionOpt(0)
	SnappyCompression = CompressionOpt(1)
)

Known compression arguments for Options.SetCompression.

Variables

This section is empty.

Functions

func DestroyComparator

func DestroyComparator(cmp *C.rocksdb_comparator_t)

DestroyComparator deallocates a *C.rocksdb_comparator_t.

This is provided as a convenience to advanced users who have implemented their own comparators in C in their own code.

func DestroyDatabase

func DestroyDatabase(dbname string, o *Options) error

DestroyDatabase removes a database entirely, removing everything from the filesystem.

func RepairDatabase

func RepairDatabase(dbname string, o *Options) error

RepairDatabase attempts to repair a database.

If the database is unrepairable, an error is returned.
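As a sketch of how these two functions fit together (the path /tmp/mydb is hypothetical):

```go
opts := rocksdb.NewOptions()
defer opts.Close()

// Try to repair the database in place first.
if err := rocksdb.RepairDatabase("/tmp/mydb", opts); err != nil {
	// Unrepairable; remove everything from the filesystem.
	if err := rocksdb.DestroyDatabase("/tmp/mydb", opts); err != nil {
		log.Fatal(err)
	}
}
```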

Types

type BackupEngine

type BackupEngine struct {
	Engine *C.rocksdb_backup_engine_t
}

func BackupEngineOpen

func BackupEngineOpen(o *Options, path string) (*BackupEngine, error)

func (*BackupEngine) Close

func (be *BackupEngine) Close()

func (*BackupEngine) CreateNewBackup

func (be *BackupEngine) CreateNewBackup(db *DB) error

func (*BackupEngine) GetBackupInfo

func (be *BackupEngine) GetBackupInfo() *BackupEngineInfo

func (*BackupEngine) RestoreDbFromLatestBackup

func (be *BackupEngine) RestoreDbFromLatestBackup(dbDir string, walDir string, options *RestoreOptions) error
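A minimal backup-and-restore sketch, assuming an already-open db and a backup directory of your choosing (the paths here are hypothetical):

```go
be, err := rocksdb.BackupEngineOpen(opts, "/tmp/backups")
if err != nil {
	log.Fatal(err)
}
defer be.Close()

// Take a new backup of the open database.
if err := be.CreateNewBackup(db); err != nil {
	log.Fatal(err)
}

// Later: restore the most recent backup into the db and WAL dirs.
restoreOpts := rocksdb.CreateRestoreOptions()
defer restoreOpts.Destroy()
if err := be.RestoreDbFromLatestBackup("/tmp/mydb", "/tmp/mydb", restoreOpts); err != nil {
	log.Fatal(err)
}
```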

type BackupEngineInfo

type BackupEngineInfo struct {
	Info *C.rocksdb_backup_engine_info_t
}

func (*BackupEngineInfo) BackupId

func (bei *BackupEngineInfo) BackupId(index int) int

func (*BackupEngineInfo) Count

func (bei *BackupEngineInfo) Count() int

func (*BackupEngineInfo) Destroy

func (bei *BackupEngineInfo) Destroy()

func (*BackupEngineInfo) NumberFiles

func (bei *BackupEngineInfo) NumberFiles(index int) int

func (*BackupEngineInfo) Size

func (bei *BackupEngineInfo) Size(index int) int

func (*BackupEngineInfo) Timestamp

func (bei *BackupEngineInfo) Timestamp(index int) int

type Cache

type Cache struct {
	Cache *C.rocksdb_cache_t
}

Cache is an in-memory cache used to store data read from disk.

Typically, NewLRUCache is all you will need, but advanced users may implement their own *C.rocksdb_cache_t and create a Cache.

To prevent memory leaks, a Cache must have Close called on it when it is no longer needed by the program. Note: if the process is shutting down, this may not be necessary and could be avoided to shorten shutdown time.

func NewLRUCache

func NewLRUCache(capacity int) *Cache

NewLRUCache creates a new Cache object with the capacity given.

To prevent memory leaks, Close should be called on the Cache when the program no longer needs it. Note: if the process is shutting down, this may not be necessary and could be avoided to shorten shutdown time.

func (*Cache) Close

func (c *Cache) Close()

Close deallocates the underlying memory of the Cache object.

type CompressionOpt

type CompressionOpt int

CompressionOpt is a value for Options.SetCompression.

type DB

type DB struct {
	Ldb *C.rocksdb_t
}

DB is a reusable handle to a LevelDB database on disk, created by Open.

To avoid memory and file descriptor leaks, call Close when the process no longer needs the handle. Calls to any DB method made after Close will panic.

The DB instance may be shared between goroutines. The usual data race conditions will occur if the same key is written to from more than one goroutine, of course.

func Open

func Open(dbname string, o *Options) (*DB, error)

Open opens a database.

Creating a new database is done by calling SetCreateIfMissing(true) on the Options passed to Open.

It is usually wise to set a Cache object on the Options with SetCache to keep recently used data from that database in memory.

func (*DB) Close

func (db *DB) Close()

Close closes the database, rendering it unusable for I/O, by deallocating the underlying handle.

Any attempts to use the DB after Close is called will panic.

func (*DB) CompactRange

func (db *DB) CompactRange(r Range)

CompactRange runs a manual compaction on the Range of keys given. This is not likely to be needed for typical usage.

func (*DB) Delete

func (db *DB) Delete(wo *WriteOptions, key []byte) error

Delete removes the data associated with the key from the database.

The key byte slice may be reused safely. Delete takes a copy of it before returning.

func (*DB) Get

func (db *DB) Get(ro *ReadOptions, key []byte) ([]byte, error)

Get returns the data associated with the key from the database.

If the key does not exist in the database, a nil []byte is returned. If the key does exist, but the data is zero-length in the database, a zero-length []byte will be returned.

The key byte slice may be reused safely. Get takes a copy of it before returning.

func (*DB) GetApproximateSizes

func (db *DB) GetApproximateSizes(ranges []Range) []uint64

GetApproximateSizes returns the approximate number of bytes of file system space used by one or more key ranges.

The keys counted will begin at Range.Start and end on the key before Range.Limit.
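A sketch of querying disk usage for two key ranges (the key boundaries are illustrative):

```go
ranges := []rocksdb.Range{
	{Start: []byte("a"), Limit: []byte("m")},
	{Start: []byte("m"), Limit: []byte("z")},
}
sizes := db.GetApproximateSizes(ranges)
// sizes[0] approximates the bytes used by keys in [a, m),
// sizes[1] by keys in [m, z).
```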

func (*DB) NewIterator

func (db *DB) NewIterator(ro *ReadOptions) *Iterator

NewIterator returns an Iterator over the database that uses the ReadOptions given.

Often, this is used for large, offline bulk reads while serving live traffic. In that case, it may be wise to disable caching so that the data processed by the returned Iterator does not displace the already cached data. This can be done by calling SetFillCache(false) on the ReadOptions before passing it here.

Similarly, ReadOptions.SetSnapshot is also useful.

func (*DB) NewSnapshot

func (db *DB) NewSnapshot() *Snapshot

NewSnapshot creates a new snapshot of the database.

The snapshot, when used in a ReadOptions, provides a consistent view of the state of the database at the time the snapshot was created.

To prevent memory leaks and resource strain in the database, the snapshot returned must be released with DB.ReleaseSnapshot method on the DB that created it.

See the LevelDB documentation for details.
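A sketch of consistent reads through a snapshot:

```go
snap := db.NewSnapshot()
ro := rocksdb.NewReadOptions()
ro.SetSnapshot(snap)

// Reads through ro see the database as of NewSnapshot,
// regardless of concurrent writes.
data, err := db.Get(ro, []byte("key"))

// Release both when done to avoid resource strain.
ro.Close()
db.ReleaseSnapshot(snap)
```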

func (*DB) PropertyValue

func (db *DB) PropertyValue(propName string) string

PropertyValue returns the value of a database property.

Examples of properties include "rocksdb.stats", "rocksdb.sstables", and "rocksdb.num-files-at-level0".
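For example, dumping internal statistics:

```go
// An empty string is returned for unknown property names.
stats := db.PropertyValue("rocksdb.stats")
fmt.Println(stats)
```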

func (*DB) Put

func (db *DB) Put(wo *WriteOptions, key, value []byte) error

Put writes data associated with a key to the database.

If a nil []byte is passed in as value, it will be returned by Get as a zero-length slice.

The key and value byte slices may be reused safely. Put takes a copy of them before returning.

func (*DB) ReleaseSnapshot

func (db *DB) ReleaseSnapshot(snap *Snapshot)

ReleaseSnapshot removes the snapshot from the database's list of snapshots, and deallocates it.

func (*DB) Write

func (db *DB) Write(wo *WriteOptions, w *WriteBatch) error

Write atomically writes a WriteBatch to disk.

type DatabaseError

type DatabaseError string

func (DatabaseError) Error

func (e DatabaseError) Error() string

type Env

type Env struct {
	Env *C.rocksdb_env_t
}

Env is a system call environment used by a database.

Typically, NewDefaultEnv is all you need. Advanced users may create their own Env with a *C.rocksdb_env_t of their own creation.

To prevent memory leaks, an Env must have Close called on it when it is no longer needed by the program.

func NewDefaultEnv

func NewDefaultEnv() *Env

NewDefaultEnv creates a default environment for use in an Options.

To prevent memory leaks, the Env returned should be deallocated with Close.

func (*Env) Close

func (env *Env) Close()

Close deallocates the Env, freeing the underlying struct.

func (*Env) SetBackgroundThreads

func (e *Env) SetBackgroundThreads(n int)

func (*Env) SetHighPriorityBackgroundThreads

func (e *Env) SetHighPriorityBackgroundThreads(n int)

type FilterPolicy

type FilterPolicy struct {
	Policy *C.rocksdb_filterpolicy_t
}

FilterPolicy is a factory type that allows the LevelDB database to create a filter, such as a bloom filter, that is stored in the sstables and used by DB.Get to reduce reads.

An instance of this struct may be supplied to Options when opening a DB. Typical usage is to call NewBloomFilter to get an instance.

To prevent memory leaks, a FilterPolicy must have Close called on it when it is no longer needed by the program.

func NewBloomFilter

func NewBloomFilter(bitsPerKey int) *FilterPolicy

NewBloomFilter creates a filter policy that will create a bloom filter when necessary with the given number of bits per key.

See the FilterPolicy documentation for more.

func (*FilterPolicy) Close

func (fp *FilterPolicy) Close()

type Iterator

type Iterator struct {
	Iter *C.rocksdb_iterator_t
}

Iterator is a read-only iterator through a LevelDB database. It provides a way to seek to specific keys and iterate through the keyspace from that point, as well as access the values of those keys.

Care must be taken when using an Iterator. If the method Valid returns false, calls to Key, Value, Next, and Prev will result in panics. However, Seek, SeekToFirst, SeekToLast, GetError, Valid, and Close will still be safe to call.

GetError will only return an error in the event of a LevelDB error. It will return nil on iterators that are simply invalid. Given that behavior, GetError is not a replacement for Valid.

A typical use looks like:

db, err := rocksdb.Open(...)

it := db.NewIterator(readOpts)
defer it.Close()
it.Seek(mykey)
for ; it.Valid(); it.Next() {
	useKeyAndValue(it.Key(), it.Value())
}
if err := it.GetError(); err != nil {
	...
}

To prevent memory leaks, an Iterator must have Close called on it when it is no longer needed by the program.

func (*Iterator) Close

func (it *Iterator) Close()

Close deallocates the given Iterator, freeing the underlying C struct.

func (*Iterator) GetError

func (it *Iterator) GetError() error

GetError returns an IteratorError from LevelDB if it had one during iteration.

This method is safe to call when Valid returns false.

func (*Iterator) Key

func (it *Iterator) Key() []byte

Key returns a copy of the key in the database the iterator currently holds.

If Valid returns false, this method will panic.

func (*Iterator) Next

func (it *Iterator) Next()

Next moves the iterator to the next sequential key in the database, as defined by the Comparator in the ReadOptions used to create this Iterator.

If Valid returns false, this method will panic.

func (*Iterator) Prev

func (it *Iterator) Prev()

Prev moves the iterator to the previous sequential key in the database, as defined by the Comparator in the ReadOptions used to create this Iterator.

If Valid returns false, this method will panic.

func (*Iterator) Seek

func (it *Iterator) Seek(key []byte)

Seek moves the iterator to the position of the key given or, if the key doesn't exist, to the next key that does exist in the database. If the key doesn't exist and there is no next key, the Iterator becomes invalid.

This method is safe to call when Valid returns false.

func (*Iterator) SeekToFirst

func (it *Iterator) SeekToFirst()

SeekToFirst moves the iterator to the first key in the database, as defined by the Comparator in the ReadOptions used to create this Iterator.

This method is safe to call when Valid returns false.

func (*Iterator) SeekToLast

func (it *Iterator) SeekToLast()

SeekToLast moves the iterator to the last key in the database, as defined by the Comparator in the ReadOptions used to create this Iterator.

This method is safe to call when Valid returns false.

func (*Iterator) Valid

func (it *Iterator) Valid() bool

Valid returns false only when an Iterator has iterated past either the first or the last key in the database.

func (*Iterator) Value

func (it *Iterator) Value() []byte

Value returns a copy of the value in the database the iterator currently holds.

If Valid returns false, this method will panic.

type IteratorError

type IteratorError string

func (IteratorError) Error

func (e IteratorError) Error() string

type Options

type Options struct {
	Opt *C.rocksdb_options_t
	// contains filtered or unexported fields
}

Options represent all of the available options when opening a database with Open. Options should be created with NewOptions.

It is usually wise to call SetCache with a cache object. Otherwise, all data will be read off disk.

To prevent memory leaks, Close must be called on an Options when the program no longer needs it.

func NewOptions

func NewOptions() *Options

NewOptions allocates a new Options object.

func (*Options) Close

func (o *Options) Close()

Close deallocates the Options, freeing its underlying C struct.

func (*Options) IncreaseParallelism

func (o *Options) IncreaseParallelism(n int)

By default, RocksDB uses only one background thread for flush and compaction. Calling this function sets things up so that a total of n threads is used. A good value for n is the number of cores. You almost definitely want to call this function if your system is bottlenecked by RocksDB.

func (*Options) SetAllowMMapReads

func (o *Options) SetAllowMMapReads(b bool)

Allow the OS to mmap files for reading sst tables. Default: false

func (*Options) SetAllowMMapWrites

func (o *Options) SetAllowMMapWrites(b bool)

Allow the OS to mmap files for writing. Default: false

func (*Options) SetAllowOSBuffer

func (o *Options) SetAllowOSBuffer(b bool)

Data being read from file storage may be buffered in the OS. Default: true

func (*Options) SetBlockRestartInterval

func (o *Options) SetBlockRestartInterval(n int)

SetBlockRestartInterval is the number of keys between restarts points for delta encoding keys.

Most clients should leave this parameter alone. See the LevelDB documentation for details.

func (*Options) SetBlockSize

func (o *Options) SetBlockSize(s int)

SetBlockSize sets the approximate size of user data packed per block.

The default is roughly 4096 uncompressed bytes. A better setting depends on your use case. See the LevelDB documentation for details.

func (*Options) SetBytesPerSync

func (o *Options) SetBytesPerSync(n uint64)

Allows the OS to incrementally sync files to disk while they are being written, asynchronously, in the background. Issues one sync request for every n bytes written. 0 turns it off. Default: 0

func (*Options) SetCache

func (o *Options) SetCache(cache *Cache)

SetCache places a cache object in the database when a database is opened.

This is usually wise to use. See also ReadOptions.SetFillCache.

func (*Options) SetComparator

func (o *Options) SetComparator(cmp *C.rocksdb_comparator_t)

SetComparator sets the comparator to be used for all read and write operations.

The comparator that created a database must be the same one (technically, one with the same name string) that is used to perform read and write operations.

The default comparator is usually sufficient.

func (*Options) SetCompression

func (o *Options) SetCompression(t CompressionOpt)

SetCompression sets whether to compress blocks using the specified compression algorithm.

The default value is SnappyCompression and it is fast enough that it is unlikely you want to turn it off. The other option is NoCompression.

If the LevelDB library was built without Snappy compression enabled, the SnappyCompression setting will be ignored.

func (*Options) SetCreateIfMissing

func (o *Options) SetCreateIfMissing(b bool)

SetCreateIfMissing causes Open to create a new database on disk if it does not already exist.

func (*Options) SetDisableDataSync

func (o *Options) SetDisableDataSync(b bool)

If true, then the contents of data files are not synced to stable storage. Their contents remain in the OS buffers until the OS decides to flush them. This option is good for bulk-loading of data. Once the bulk-loading is complete, issue a sync to the OS to flush all dirty buffers to stable storage. Default: false

func (*Options) SetEnv

func (o *Options) SetEnv(env *Env)

SetEnv sets the Env object for the new database handle.

func (*Options) SetErrorIfExists

func (o *Options) SetErrorIfExists(error_if_exists bool)

SetErrorIfExists, if passed true, will cause Open on a database that already exists to throw an error.

func (*Options) SetFilterPolicy

func (o *Options) SetFilterPolicy(fp *FilterPolicy)

SetFilterPolicy causes Open to create a new database that uses a filter created from the filter policy passed in.

func (*Options) SetInfoLog

func (o *Options) SetInfoLog(log *C.rocksdb_logger_t)

SetInfoLog sets a *C.rocksdb_logger_t object as the informational logger for the database.

func (*Options) SetLevel0FileNumCompactionTrigger

func (o *Options) SetLevel0FileNumCompactionTrigger(n int)

Number of files to trigger level-0 compaction. A value <0 means that level-0 compaction will not be triggered by number of files at all.

Default: 4

func (*Options) SetLogDir

func (o *Options) SetLogDir(dir string)

The info LOG dir. If it is empty, the log files will be in the same dir as the data. If it is non-empty, the log files will be in the specified dir, and the db data dir's absolute path will be used as the log file name's prefix.

func (*Options) SetLogLevel

func (o *Options) SetLogLevel(n int)

func (*Options) SetMaxBackgroundCompactions

func (o *Options) SetMaxBackgroundCompactions(n int)

Maximum number of concurrent background compaction jobs, submitted to the default LOW priority thread pool. If you're increasing this, also consider increasing the number of threads in the LOW priority thread pool. For more information, see Env.SetBackgroundThreads. Default: 1

func (*Options) SetMaxBackgroundFlushes

func (o *Options) SetMaxBackgroundFlushes(n int)

Maximum number of concurrent background memtable flush jobs, submitted to the HIGH priority thread pool.

By default, all background jobs (major compaction and memtable flush) go to the LOW priority pool. If this option is set to a positive number, memtable flush jobs will be submitted to the HIGH priority pool. It is important when the same Env is shared by multiple db instances. Without a separate pool, long running major compaction jobs could potentially block memtable flush jobs of other db instances, leading to unnecessary Put stalls.

If you're increasing this, also consider increasing the number of threads in the HIGH priority thread pool. For more information, see Env.SetBackgroundThreads. Default: 1
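Combining these options with the Env thread pool settings, a tuning sketch might look like the following (the thread counts are illustrative, not recommendations):

```go
env := rocksdb.NewDefaultEnv()
env.SetBackgroundThreads(4)             // LOW priority pool, used by compactions
env.SetHighPriorityBackgroundThreads(2) // HIGH priority pool, used by flushes

opts := rocksdb.NewOptions()
opts.SetEnv(env)
opts.SetMaxBackgroundCompactions(4)
opts.SetMaxBackgroundFlushes(2)
```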

func (*Options) SetMaxOpenFiles

func (o *Options) SetMaxOpenFiles(n int)

SetMaxOpenFiles sets the number of files that can be used at once by the database.

See the LevelDB documentation for details.

func (*Options) SetMaxWriteBufferNumber

func (o *Options) SetMaxWriteBufferNumber(n int)

The maximum number of write buffers that are built up in memory. The default and the minimum number is 2, so that when 1 write buffer is being flushed to storage, new writes can continue to the other write buffer. Default: 2

func (*Options) SetMinWriteBufferNumberToMerge

func (o *Options) SetMinWriteBufferNumberToMerge(n int)

The minimum number of write buffers that will be merged together before writing to storage. If set to 1, then all write buffers are flushed to L0 as individual files; this increases read amplification because a get request has to check all of these files. Also, an in-memory merge may result in writing less data to storage if there are duplicate records in each of these individual write buffers. Default: 1

func (*Options) SetNumLevels

func (o *Options) SetNumLevels(n int)

Number of levels for this database.

func (*Options) SetParanoidChecks

func (o *Options) SetParanoidChecks(pc bool)

SetParanoidChecks, when called with true, will cause the database to do aggressive checking of the data it is processing and will stop early if it detects errors.

See the LevelDB documentation docs for details.

func (*Options) SetReadOnly

func (o *Options) SetReadOnly(b bool)

func (*Options) SetStatsDumpPeriod

func (o *Options) SetStatsDumpPeriod(secs uint)

If not zero, dump rocksdb.stats to the LOG every secs seconds. Default: 3600 (1 hour)

func (*Options) SetTargetFileSizeBase

func (o *Options) SetTargetFileSizeBase(n uint64)

Target file size for compaction. target_file_size_base is the per-file size for level-1. The target file size for level L can be calculated as target_file_size_base * (target_file_size_multiplier ^ (L-1)). For example, if target_file_size_base is 2MB and target_file_size_multiplier is 10, then each file on level-1 will be 2MB, each file on level-2 will be 20MB, and each file on level-3 will be 200MB. By default, target_file_size_base is 2MB.

func (*Options) SetTargetFileSizeMultiplier

func (o *Options) SetTargetFileSizeMultiplier(n int)

By default, target_file_size_multiplier is 1, which means files in different levels will have similar sizes.

func (*Options) SetWalDir

func (o *Options) SetWalDir(dir string)

The absolute dir path for write-ahead logs (WAL). If it is empty, the log files will be in the same dir as the data (dbname is used as the data dir by default). If it is non-empty, the log files will be kept in the specified dir. When destroying the db, all log files in wal_dir and the dir itself are deleted.

func (*Options) SetWriteBufferSize

func (o *Options) SetWriteBufferSize(s int)

SetWriteBufferSize sets the number of bytes the database will build up in memory (backed by an unsorted log on disk) before converting to a sorted on-disk file.

type Range

type Range struct {
	Start []byte
	Limit []byte
}

Range is a range of keys in the database. GetApproximateSizes calls made with it begin at the key Start and end right before the key Limit.

type ReadOptions

type ReadOptions struct {
	Opt *C.rocksdb_readoptions_t
}

ReadOptions represent all of the available options when reading from a database.

To prevent memory leaks, Close must be called on a ReadOptions when the program no longer needs it.

func NewReadOptions

func NewReadOptions() *ReadOptions

NewReadOptions allocates a new ReadOptions object.

func (*ReadOptions) Close

func (ro *ReadOptions) Close()

Close deallocates the ReadOptions, freeing its underlying C struct.

func (*ReadOptions) SetFillCache

func (ro *ReadOptions) SetFillCache(b bool)

SetFillCache controls whether reads performed with this ReadOptions will fill the Cache of the server. It defaults to true.

It is useful to turn this off on ReadOptions for DB.NewIterator (and DB.Get) calls used in offline threads to prevent bulk scans from flushing out live user data in the cache.

See also Options.SetCache

func (*ReadOptions) SetSnapshot

func (ro *ReadOptions) SetSnapshot(snap *Snapshot)

SetSnapshot causes reads to be performed as of the state of the database when the passed-in Snapshot was created by DB.NewSnapshot. This is useful for getting consistent reads during a bulk operation.

See the LevelDB documentation for details.

func (*ReadOptions) SetVerifyChecksums

func (ro *ReadOptions) SetVerifyChecksums(b bool)

SetVerifyChecksums controls whether all data read with this ReadOptions will be verified against corresponding checksums.

It defaults to false. See the LevelDB documentation for details.

type RestoreOptions

type RestoreOptions struct {
	Opt *C.rocksdb_restore_options_t
}

func CreateRestoreOptions

func CreateRestoreOptions() *RestoreOptions

func (*RestoreOptions) Destroy

func (ro *RestoreOptions) Destroy()

func (*RestoreOptions) SetKeepLogFiles

func (ro *RestoreOptions) SetKeepLogFiles(v int)

If v is nonzero (true), restore won't overwrite the existing log files in wal_dir. It will also move all log files from the archive directory to wal_dir. Use this option in combination with BackupableDBOptions::backup_log_files = false for persisting in-memory databases. Default: false

type Snapshot

type Snapshot struct {
	// contains filtered or unexported fields
}

Snapshot provides a consistent view of read operations in a DB. It is set on a ReadOptions via SetSnapshot and passed in to read methods. It is only created by DB.NewSnapshot.

To prevent memory leaks and resource strain in the database, the snapshot returned must be released with DB.ReleaseSnapshot method on the DB that created it.

type WriteBatch

type WriteBatch struct {
	// contains filtered or unexported fields
}

WriteBatch is a batching of Puts and Deletes to be written atomically to a database. A WriteBatch is written when passed to DB.Write.

To prevent memory leaks, call Close when the program no longer needs the WriteBatch object.

func NewWriteBatch

func NewWriteBatch() *WriteBatch

NewWriteBatch creates a fully allocated WriteBatch.

func (*WriteBatch) Clear

func (w *WriteBatch) Clear()

Clear removes all the enqueued Put and Deletes in the WriteBatch.

func (*WriteBatch) Close

func (w *WriteBatch) Close()

Close releases the underlying memory of a WriteBatch.

func (*WriteBatch) Delete

func (w *WriteBatch) Delete(key []byte)

Delete queues a deletion of the data at key, to be performed when the WriteBatch is written.

The key byte slice may be reused safely. Delete takes a copy of it before returning.

func (*WriteBatch) Put

func (w *WriteBatch) Put(key, value []byte)

Put places a key-value pair into the WriteBatch for writing later.

Both the key and value byte slices may be reused as WriteBatch takes a copy of them before returning.

type WriteOptions

type WriteOptions struct {
	Opt *C.rocksdb_writeoptions_t
}

WriteOptions represent all of the available options when writing to a database.

To prevent memory leaks, Close must be called on a WriteOptions when the program no longer needs it.

func NewWriteOptions

func NewWriteOptions() *WriteOptions

NewWriteOptions allocates a new WriteOptions object.

func (*WriteOptions) Close

func (wo *WriteOptions) Close()

Close deallocates the WriteOptions, freeing its underlying C struct.

func (*WriteOptions) DisableWAL

func (wo *WriteOptions) DisableWAL(b bool)

func (*WriteOptions) SetSync

func (wo *WriteOptions) SetSync(b bool)

SetSync controls whether each write performed with this WriteOptions will be flushed from the operating system buffer cache before the write is considered complete.

If called with true, this will significantly slow down writes. If called with false, and the host machine crashes, some recent writes may be lost. The default is false.

See the LevelDB documentation for details.
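A sketch of a durable write (the key and value are illustrative):

```go
wo := rocksdb.NewWriteOptions()
// Durable but slow: the write is flushed from the OS buffer
// cache before Put returns.
wo.SetSync(true)
err := db.Put(wo, []byte("critical-key"), []byte("value"))
wo.Close()
```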
