package table

v2.0.0-rc.2
Published: Jun 20, 2019 License: Apache-2.0 Imports: 18 Imported by: 2

README

Size of table is 127,618,890 bytes for all benchmarks.

BenchmarkRead

$ go test -bench ^BenchmarkRead$ -run ^$ -count 3
goos: linux
goarch: amd64
pkg: github.com/dgraph-io/badger/table
BenchmarkRead-16    	      10	 153281932 ns/op
BenchmarkRead-16    	      10	 153454443 ns/op
BenchmarkRead-16    	      10	 155349696 ns/op
PASS
ok  	github.com/dgraph-io/badger/table	23.549s

Size of table is 127,618,890 bytes, which is ~122MB.

The rate is ~783MB/s using LoadToRAM (when table is in RAM).

To read a 64MB table, this would take ~0.0817s, which is negligible.
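
The pattern being timed is a full sequential scan of the table. Below is a minimal sketch of that pattern, assuming a *table.Table that has already been built and opened (see the Builder example under Documentation); it is an illustration, not badger's actual benchmark code.

package table_test

import (
	"testing"

	"github.com/dgraph-io/badger/table"
)

// benchmarkRead sketches the sequential scan measured by BenchmarkRead:
// a forward iterator over every key/value pair in one table.
func benchmarkRead(b *testing.B, tbl *table.Table) {
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		it := tbl.NewIterator(false) // false = forward iteration
		for it.Rewind(); it.Valid(); it.Next() {
			_ = it.Key()
			_ = it.Value()
		}
		_ = it.Close()
	}
}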

BenchmarkReadAndBuild

$ go test -bench BenchmarkReadAndBuild -run ^$ -count 3
goos: linux
goarch: amd64
pkg: github.com/dgraph-io/badger/table
BenchmarkReadAndBuild-16    	       2	 945041628 ns/op
BenchmarkReadAndBuild-16    	       2	 947120893 ns/op
BenchmarkReadAndBuild-16    	       2	 954909506 ns/op
PASS
ok  	github.com/dgraph-io/badger/table	26.856s

The rate is ~127MB/s. To build a 64MB table, this would take ~0.5s. Note that this does NOT include flushing the table to disk. All we are doing above is reading one table (which is in RAM) and writing one table in memory.

The table building alone therefore takes 0.5s - 0.0817s ≈ 0.4183s.
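
A minimal sketch of that read-and-build pattern (again an illustration, not the benchmark itself), assuming a source table that is already open:

package table_test

import (
	"testing"

	"github.com/dgraph-io/badger/table"
)

// benchmarkReadAndBuild iterates one table that is already in RAM and feeds
// every entry to a fresh Builder; Finish builds the new table in memory only,
// nothing is flushed to disk.
func benchmarkReadAndBuild(b *testing.B, tbl *table.Table) {
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		nb := table.NewTableBuilder()
		it := tbl.NewIterator(false)
		for it.Rewind(); it.Valid(); it.Next() {
			if err := nb.Add(it.Key(), it.Value()); err != nil {
				b.Fatal(err)
			}
		}
		_ = it.Close()
		_ = nb.Finish()
		nb.Close()
	}
}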

BenchmarkReadMerged

Below, we merge 5 tables. The total size remains unchanged at ~122MB.

$ go test -bench ReadMerged -run ^$ -count 3
BenchmarkReadMerged-16   	       2	954475788 ns/op
BenchmarkReadMerged-16   	       2	955252462 ns/op
BenchmarkReadMerged-16  	       2	956857353 ns/op
PASS
ok  	github.com/dgraph-io/badger/table	33.327s

The rate is ~127MB/s. To read a 64MB table using merge iterator, this would take ~0.5s.

BenchmarkRandomRead

$ go test -bench ^BenchmarkRandomRead$ -run ^$ -count 3
goos: linux
goarch: amd64
pkg: github.com/dgraph-io/badger/table
BenchmarkRandomRead-16    	  300000	      3596 ns/op
BenchmarkRandomRead-16    	  300000	      3621 ns/op
BenchmarkRandomRead-16    	  300000	      3596 ns/op
PASS
ok  	github.com/dgraph-io/badger/table	44.727s

For random read benchmarking, we are randomly reading a key and verifying its value.
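
A sketch of that pattern, assuming (purely for illustration) that the table was populated with n fmt-generated keys versioned via y.KeyWithTs, so the expected key and value for a random index can be reconstructed and checked:

package table_test

import (
	"bytes"
	"fmt"
	"math/rand"
	"testing"

	"github.com/dgraph-io/badger/table"
	"github.com/dgraph-io/badger/y"
)

// benchmarkRandomRead seeks to a random key known to exist and verifies the
// key and value that come back. The key/value layout is an assumption made
// for this sketch, not something badger's benchmark is documented to use.
func benchmarkRandomRead(b *testing.B, tbl *table.Table, n int) {
	it := tbl.NewIterator(false)
	defer it.Close()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		idx := rand.Intn(n)
		key := y.KeyWithTs([]byte(fmt.Sprintf("key%06d", idx)), 0)
		want := []byte(fmt.Sprintf("val%06d", idx))

		it.Seek(key)
		if !it.Valid() || !bytes.Equal(y.ParseKey(it.Key()), y.ParseKey(key)) {
			b.Fatalf("key %q not found", key)
		}
		if got := it.Value(); !bytes.Equal(got.Value, want) {
			b.Fatalf("wrong value for key %q", key)
		}
	}
}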

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func IDToFilename

func IDToFilename(id uint64) string

IDToFilename does the inverse of ParseFileID

func NewFilename

func NewFilename(id uint64, dir string) string

NewFilename should be named TableFilepath -- it combines the dir with the ID to make a table filepath.

func ParseFileID

func ParseFileID(name string) (uint64, bool)

ParseFileID reads the file id out of a filename.
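
A small illustration of how these three functions fit together; the concrete on-disk filename format is left to the package, so it is only printed here:

package main

import (
	"fmt"

	"github.com/dgraph-io/badger/table"
)

func main() {
	name := table.IDToFilename(7)        // basename used for table 7
	path := table.NewFilename(7, "/tmp") // same basename joined with a directory
	id, ok := table.ParseFileID(name)    // recover the ID from the basename
	fmt.Println(name, path, id, ok)      // id should be 7, ok should be true
}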

Types

type Builder

type Builder struct {
	// contains filtered or unexported fields
}

Builder is used in building a table.

func NewTableBuilder

func NewTableBuilder() *Builder

NewTableBuilder makes a new TableBuilder.

func (*Builder) Add

func (b *Builder) Add(key []byte, value y.ValueStruct) error

Add adds a key-value pair to the block. If doNotRestart is true, we will not restart even if b.counter >= restartInterval.

func (*Builder) Close

func (b *Builder) Close()

Close closes the TableBuilder.

func (*Builder) Empty

func (b *Builder) Empty() bool

Empty returns whether it's empty.

func (*Builder) Finish

func (b *Builder) Finish() []byte

Finish finishes the table by appending the index.

func (*Builder) ReachedCapacity

func (b *Builder) ReachedCapacity(cap int64) bool

ReachedCapacity returns true if the builder has (approximately) reached the given capacity.
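
A minimal end-to-end sketch of the Builder API: build a small table, write the bytes returned by Finish to a file, and open it with OpenTable. The versioned key layout (y.KeyWithTs), the nil checksum, and options.LoadToRAM are assumptions made for this example, not requirements stated in these docs.

package main

import (
	"fmt"
	"log"
	"os"

	"github.com/dgraph-io/badger/options"
	"github.com/dgraph-io/badger/table"
	"github.com/dgraph-io/badger/y"
)

func main() {
	b := table.NewTableBuilder()
	defer b.Close()

	// Add entries in sorted key order, versioning keys with y.KeyWithTs
	// the way badger does internally (an assumption for this sketch).
	for i := 0; i < 1000; i++ {
		k := y.KeyWithTs([]byte(fmt.Sprintf("key%06d", i)), 0)
		v := y.ValueStruct{Value: []byte(fmt.Sprintf("val%06d", i))}
		if err := b.Add(k, v); err != nil {
			log.Fatal(err)
		}
	}

	// Finish appends the index and returns the complete table contents.
	path := table.NewFilename(1, os.TempDir())
	fd, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0666)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := fd.Write(b.Finish()); err != nil {
		log.Fatal(err)
	}

	// OpenTable takes ownership of fd. A nil checksum (i.e. no verification)
	// is an assumption here. DecrRef drops the initial reference when done.
	t, err := table.OpenTable(fd, options.LoadToRAM, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer t.DecrRef()

	fmt.Printf("table %d: %d bytes, smallest=%q biggest=%q\n",
		t.ID(), t.Size(), t.Smallest(), t.Biggest())
}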

type ConcatIterator

type ConcatIterator struct {
	// contains filtered or unexported fields
}

ConcatIterator concatenates the sequences defined by several iterators. (It only works with TableIterators, probably just because it's faster to not be so generic.)

func NewConcatIterator

func NewConcatIterator(tbls []*Table, reversed bool) *ConcatIterator

NewConcatIterator creates a new concatenated iterator

func (*ConcatIterator) Close

func (s *ConcatIterator) Close() error

Close implements y.Interface.

func (*ConcatIterator) Key

func (s *ConcatIterator) Key() []byte

Key implements y.Interface

func (*ConcatIterator) Next

func (s *ConcatIterator) Next()

Next advances our concat iterator.

func (*ConcatIterator) Rewind

func (s *ConcatIterator) Rewind()

Rewind implements y.Interface

func (*ConcatIterator) Seek

func (s *ConcatIterator) Seek(key []byte)

Seek brings us to element >= key if reversed is false. Otherwise, <= key.

func (*ConcatIterator) Valid

func (s *ConcatIterator) Valid() bool

Valid implements y.Interface

func (*ConcatIterator) Value

func (s *ConcatIterator) Value() y.ValueStruct

Value implements y.Interface
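
A short sketch of the usual iteration pattern, treating a slice of tables as one back-to-back sequence:

package table_test

import (
	"fmt"

	"github.com/dgraph-io/badger/table"
)

// scanTables walks several tables in order as if they were a single table.
func scanTables(tbls []*table.Table) {
	it := table.NewConcatIterator(tbls, false) // false = forward iteration
	defer it.Close()
	for it.Rewind(); it.Valid(); it.Next() {
		fmt.Printf("%q -> %q\n", it.Key(), it.Value().Value)
	}
}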

type Iterator

type Iterator struct {
	// contains filtered or unexported fields
}

Iterator is an iterator for a Table.

func (*Iterator) Close

func (itr *Iterator) Close() error

Close closes the iterator (and it must be called).

func (*Iterator) Key

func (itr *Iterator) Key() []byte

Key follows the y.Iterator interface

func (*Iterator) Next

func (itr *Iterator) Next()

Next follows the y.Iterator interface

func (*Iterator) Rewind

func (itr *Iterator) Rewind()

Rewind follows the y.Iterator interface

func (*Iterator) Seek

func (itr *Iterator) Seek(key []byte)

Seek follows the y.Iterator interface

func (*Iterator) Valid

func (itr *Iterator) Valid() bool

Valid follows the y.Iterator interface

func (*Iterator) Value

func (itr *Iterator) Value() (ret y.ValueStruct)

Value follows the y.Iterator interface
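
A sketch of a point lookup built on Seek. The exact-match check is needed because Seek only positions a forward iterator at the first entry >= key; comparing via y.ParseKey assumes the versioned key layout used in the earlier sketches.

package table_test

import (
	"bytes"

	"github.com/dgraph-io/badger/table"
	"github.com/dgraph-io/badger/y"
)

// lookup returns the value for key if the table contains it, and reports
// whether an exact match was found.
func lookup(t *table.Table, key []byte) (y.ValueStruct, bool) {
	it := t.NewIterator(false)
	defer it.Close()
	it.Seek(key)
	if !it.Valid() || !bytes.Equal(y.ParseKey(it.Key()), y.ParseKey(key)) {
		return y.ValueStruct{}, false
	}
	return it.Value(), true
}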

type Table

type Table struct {
	sync.Mutex

	Checksum []byte
	// contains filtered or unexported fields
}

Table represents a loaded table file with the info we have about it

func OpenTable

func OpenTable(fd *os.File, mode options.FileLoadingMode, cksum []byte) (*Table, error)

OpenTable assumes file has only one table and opens it. Takes ownership of fd upon function entry. Returns a table with one reference count on it (decrementing which may delete the file! -- consider t.Close() instead). The fd has to be writeable because we call Truncate on it before deleting.

func (*Table) Biggest

func (t *Table) Biggest() []byte

Biggest is its biggest key, or nil if there are none

func (*Table) Close

func (t *Table) Close() error

Close closes the open table. (Releases resources back to the OS.)

func (*Table) DecrRef

func (t *Table) DecrRef() error

DecrRef decrements the refcount and possibly deletes the table

func (*Table) DoesNotHave

func (t *Table) DoesNotHave(key []byte) bool

DoesNotHave returns true if (but not "only if") the table does not have the key. It does a bloom filter lookup.
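
A typical use is as a cheap pre-check before the more expensive Seek, reusing the hypothetical lookup helper from the Iterator sketch above:

// get skips the table entirely when the bloom filter says the key is
// definitely absent; a false answer only means "possibly present".
func get(t *table.Table, key []byte) (y.ValueStruct, bool) {
	if t.DoesNotHave(key) {
		return y.ValueStruct{}, false
	}
	return lookup(t, key) // lookup: hypothetical helper defined under Iterator
}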

func (*Table) Filename

func (t *Table) Filename() string

Filename returns the name of the file backing the table.

func (*Table) ID

func (t *Table) ID() uint64

ID is the table's ID number (used to make the file name).

func (*Table) IncrRef

func (t *Table) IncrRef()

IncrRef increments the refcount (having to do with whether the file should be deleted)

func (*Table) NewIterator

func (t *Table) NewIterator(reversed bool) *Iterator

NewIterator returns a new iterator of the Table

func (*Table) Size

func (t *Table) Size() int64

Size is its file size in bytes

func (*Table) Smallest

func (t *Table) Smallest() []byte

Smallest is its smallest key, or nil if there are none

type TableInterface

type TableInterface interface {
	Smallest() []byte
	Biggest() []byte
	DoesNotHave(key []byte) bool
}

TableInterface is useful for testing.
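
For example, a test can stand in a trivial stub where only key ranges and bloom-filter answers matter; every name below is invented for the sketch:

package table_test

import (
	"bytes"

	"github.com/dgraph-io/badger/table"
)

// fakeTable is a made-up stub satisfying TableInterface, handy when a test
// cares about key ranges and bloom-filter answers rather than real files.
type fakeTable struct {
	smallest, biggest []byte
	keys              map[string]bool
}

func (f *fakeTable) Smallest() []byte            { return f.smallest }
func (f *fakeTable) Biggest() []byte             { return f.biggest }
func (f *fakeTable) DoesNotHave(key []byte) bool { return !f.keys[string(key)] }

var _ table.TableInterface = (*fakeTable)(nil) // compile-time interface check

// keyInRange is a hypothetical helper written against the interface rather
// than the concrete *Table, so tests can exercise it with fakeTable.
func keyInRange(t table.TableInterface, key []byte) bool {
	return bytes.Compare(key, t.Smallest()) >= 0 &&
		bytes.Compare(key, t.Biggest()) <= 0
}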
