database

package
v1.2.0
Published: Oct 21, 2019 License: GPL-3.0 Imports: 28 Imported by: 0

Documentation

Overview

Package database implements the various types of databases used in Klaytn. This package is used to read data from and write data to the persistent layer.

Overview of database package

DBManager is the interface used by consumers of the database package. databaseManager is the implementation of the DBManager interface; it contains a cacheManager and a list of Database instances. cacheManager caches data stored in the persistent layer to reduce direct access to it. Database is the interface for persistent-layer implementations. Currently there are four implementations: levelDB, memDB, badgerDB, and partitionedDB.
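
As a quick, hedged sketch of the consumer-facing API, the example below builds an in-memory DBManager via NewMemoryDBManager and round-trips a canonical hash mapping. It uses only functions documented on this page; the import paths and the sample hash/number are illustrative assumptions.

package main

import (
	"fmt"

	"github.com/klaytn/klaytn/common"           // assumed import path
	"github.com/klaytn/klaytn/storage/database" // assumed import path
)

func main() {
	// NewMemoryDBManager returns a DBManager backed by MemDB, handy for tests.
	dbm := database.NewMemoryDBManager()
	defer dbm.Close()

	// Map block number 7 to an arbitrary, illustrative canonical hash.
	hash := common.HexToHash("0x1234")
	dbm.WriteCanonicalHash(hash, 7)

	// Read it back through the same DBManager.
	fmt.Println(dbm.ReadCanonicalHash(7) == hash) // true
}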

Source Files

  • badger_database.go : implementation of badgerDB, which wraps github.com/dgraph-io/badger
  • cache_manager.go : implementation of cacheManager, which manages the cache layer over the persistent layer
  • db_manager.go : contains DBManager and databaseManager
  • interface.go : interfaces used outside the database package
  • leveldb_database.go : implementation of levelDB, which wraps github.com/syndtr/goleveldb
  • memory_database.go : implementation of MemDB, which wraps Go's native map
  • metrics.go : metrics used in the database package, mostly related to cacheManager
  • partitioned_database.go : implementation of partitionedDB, which wraps a list of Database instances
  • schema.go : prefixes and suffixes for database keys, and database key generating functions


Constants

const IdealBatchSize = 100 * 1024
const (
	MinOpenFilesCacheCapacity = 16
)

Variables

var (

	// Chain index prefixes (use `i` + single byte to avoid mixing data types).
	BloomBitsIndexPrefix = []byte("iB") // BloomBitsIndexPrefix is the data table of a chain indexer to track its progress

)

These fields define the low-level database schema prefixing.

var OpenFileLimit = 64

Functions

func BloomBitsKey

func BloomBitsKey(bit uint, section uint64, hash common.Hash) []byte

bloomBitsKey = bloomBitsPrefix + bit (uint16 big endian) + section (uint64 big endian) + hash
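
For illustration, the same byte layout can be assembled by hand as in the sketch below. The one-byte prefix value is an assumption (the real bloomBitsPrefix is unexported in schema.go), so only the shape of the key is meant to be accurate.

package main

import (
	"encoding/binary"
	"fmt"

	"github.com/klaytn/klaytn/common" // assumed import path
)

// bloomBitsKeyManual mirrors the documented layout:
// bloomBitsPrefix + bit (uint16 big endian) + section (uint64 big endian) + hash.
func bloomBitsKeyManual(bit uint, section uint64, hash common.Hash) []byte {
	bloomBitsPrefix := []byte("B") // placeholder; the real prefix is unexported

	key := append(bloomBitsPrefix, make([]byte, 10)...)
	binary.BigEndian.PutUint16(key[len(bloomBitsPrefix):], uint16(bit))
	binary.BigEndian.PutUint64(key[len(bloomBitsPrefix)+2:], section)
	return append(key, hash.Bytes()...)
}

func main() {
	key := bloomBitsKeyManual(3, 42, common.HexToHash("0xabcd"))
	fmt.Println(len(key)) // prefix(1) + 2 + 8 + 32 = 43
}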

func GetDefaultLevelDBOption

func GetDefaultLevelDBOption() *opt.Options

GetDefaultLevelDBOption returns a default LevelDB option copied from defaultLevelDBOption, whose fields are set to minimum values.

func GetOpenFilesLimit

func GetOpenFilesLimit() int

GetOpenFilesLimit raises the number of allowed file handles per process for Klaytn and returns half of the allowance to assign to the database.

func IsPow2

func IsPow2(num uint) bool

IsPow2 checks whether the given number is a power of two.
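
Such a check is commonly implemented with the bit trick sketched below; this is an illustration under that assumption, not necessarily the package's exact code.

// isPow2Sketch reports whether num is a power of two.
// A power of two has exactly one bit set, so num&(num-1) clears it to zero.
// Whether zero counts as a power of two may differ in the actual implementation.
func isPow2Sketch(num uint) bool {
	return num != 0 && num&(num-1) == 0
}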

func NewBadgerDB

func NewBadgerDB(dbDir string) (*badgerDB, error)

func NewLevelDB

func NewLevelDB(dbc *DBConfig, entryType DBEntryType) (*levelDB, error)

func NewLevelDBWithOption

func NewLevelDBWithOption(dbPath string, ldbOption *opt.Options) (*levelDB, error)

NewLevelDBWithOption explicitly receives a LevelDB option to construct a LevelDB object.

func PutAndWriteBatchesOverThreshold

func PutAndWriteBatchesOverThreshold(batch Batch, key, val []byte) error

func SenderTxHashToTxHashKey

func SenderTxHashToTxHashKey(senderTxHash common.Hash) []byte

func TxLookupKey

func TxLookupKey(hash common.Hash) []byte

TxLookupKey = txLookupPrefix + hash

func WriteBatches

func WriteBatches(batches ...Batch) (int, error)

func WriteBatchesOverThreshold

func WriteBatchesOverThreshold(batches ...Batch) (int, error)

Types

type Batch

type Batch interface {
	Putter
	ValueSize() int // amount of data in the batch
	Write() error
	// Reset resets the batch for reuse
	Reset()
}

Batch is a write-only database that commits changes to its host database when Write is called. Batch cannot be used concurrently.
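
A hedged usage sketch (assuming this package is imported as database): accumulate writes in a Batch obtained from DBManager.NewBatch and flush whenever ValueSize passes IdealBatchSize, which is roughly what PutAndWriteBatchesOverThreshold automates. The keys and values are illustrative.

// writeMany writes all pairs through a single Batch, flushing in chunks.
func writeMany(dbm database.DBManager, pairs map[string][]byte) error {
	batch := dbm.NewBatch(database.MiscDB)
	for k, v := range pairs {
		if err := batch.Put([]byte(k), v); err != nil {
			return err
		}
		// Flush once enough data has accumulated, then reuse the batch.
		if batch.ValueSize() >= database.IdealBatchSize {
			if err := batch.Write(); err != nil {
				return err
			}
			batch.Reset()
		}
	}
	return batch.Write() // write any remainder
}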

type DBConfig

type DBConfig struct {
	// General configurations for all types of DB.
	Dir                    string
	DBType                 DBType
	Partitioned            bool
	NumStateTriePartitions uint
	ParallelDBWrite        bool
	OpenFilesLimit         int

	// LevelDB related configurations.
	LevelDBCacheSize   int // LevelDBCacheSize = BlockCacheCapacity + WriteBuffer
	LevelDBCompression LevelDBCompressionType
	LevelDBBufferPool  bool
}

DBConfig handles database-related configurations.
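
For illustration, a DBConfig for a partitioned LevelDB setup might look like the sketch below. The concrete values are arbitrary assumptions, and whether NumStateTriePartitions must be a power of two (cf. IsPow2) is not stated on this page.

dbc := &database.DBConfig{
	Dir:                    "./chaindata", // on-disk location of the databases
	DBType:                 database.LevelDB,
	Partitioned:            true, // one LevelDB per Database (see NewDBManager)
	NumStateTriePartitions: 4,
	ParallelDBWrite:        true,
	OpenFilesLimit:         database.GetOpenFilesLimit(),

	LevelDBCacheSize:   768, // BlockCacheCapacity + WriteBuffer
	LevelDBCompression: database.AllSnappyCompression,
	LevelDBBufferPool:  true,
}
dbm := database.NewDBManager(dbc)
defer dbm.Close()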

type DBEntryType

type DBEntryType uint8
const (
	BodyDB DBEntryType = iota
	ReceiptsDB
	StateTrieDB
	TxLookUpEntryDB
	MiscDB
)

type DBManager

type DBManager interface {
	IsParallelDBWrite() bool

	Close()
	NewBatch(dbType DBEntryType) Batch
	GetMemDB() *MemDB
	GetDBConfig() *DBConfig

	// from accessors_chain.go
	ReadCanonicalHash(number uint64) common.Hash
	WriteCanonicalHash(hash common.Hash, number uint64)
	DeleteCanonicalHash(number uint64)

	ReadHeadHeaderHash() common.Hash
	WriteHeadHeaderHash(hash common.Hash)

	ReadHeadBlockHash() common.Hash
	WriteHeadBlockHash(hash common.Hash)

	ReadHeadFastBlockHash() common.Hash
	WriteHeadFastBlockHash(hash common.Hash)

	ReadFastTrieProgress() uint64
	WriteFastTrieProgress(count uint64)

	HasHeader(hash common.Hash, number uint64) bool
	ReadHeader(hash common.Hash, number uint64) *types.Header
	ReadHeaderRLP(hash common.Hash, number uint64) rlp.RawValue
	WriteHeader(header *types.Header)
	DeleteHeader(hash common.Hash, number uint64)
	ReadHeaderNumber(hash common.Hash) *uint64

	HasBody(hash common.Hash, number uint64) bool
	ReadBody(hash common.Hash, number uint64) *types.Body
	ReadBodyInCache(hash common.Hash) *types.Body
	ReadBodyRLP(hash common.Hash, number uint64) rlp.RawValue
	ReadBodyRLPByHash(hash common.Hash) rlp.RawValue
	WriteBody(hash common.Hash, number uint64, body *types.Body)
	PutBodyToBatch(batch Batch, hash common.Hash, number uint64, body *types.Body)
	WriteBodyRLP(hash common.Hash, number uint64, rlp rlp.RawValue)
	DeleteBody(hash common.Hash, number uint64)

	ReadTd(hash common.Hash, number uint64) *big.Int
	WriteTd(hash common.Hash, number uint64, td *big.Int)
	DeleteTd(hash common.Hash, number uint64)

	ReadReceipt(txHash common.Hash) (*types.Receipt, common.Hash, uint64, uint64)
	ReadReceipts(blockHash common.Hash, number uint64) types.Receipts
	ReadReceiptsByBlockHash(hash common.Hash) types.Receipts
	WriteReceipts(hash common.Hash, number uint64, receipts types.Receipts)
	PutReceiptsToBatch(batch Batch, hash common.Hash, number uint64, receipts types.Receipts)
	DeleteReceipts(hash common.Hash, number uint64)

	ReadBlock(hash common.Hash, number uint64) *types.Block
	ReadBlockByHash(hash common.Hash) *types.Block
	ReadBlockByNumber(number uint64) *types.Block
	HasBlock(hash common.Hash, number uint64) bool
	WriteBlock(block *types.Block)
	DeleteBlock(hash common.Hash, number uint64)

	FindCommonAncestor(a, b *types.Header) *types.Header

	ReadIstanbulSnapshot(hash common.Hash) ([]byte, error)
	WriteIstanbulSnapshot(hash common.Hash, blob []byte) error

	WriteMerkleProof(key, value []byte)

	ReadCachedTrieNode(hash common.Hash) ([]byte, error)
	ReadCachedTrieNodePreimage(secureKey []byte) ([]byte, error)

	ReadStateTrieNode(key []byte) ([]byte, error)
	HasStateTrieNode(key []byte) (bool, error)

	// from accessors_indexes.go
	ReadTxLookupEntry(hash common.Hash) (common.Hash, uint64, uint64)
	WriteTxLookupEntries(block *types.Block)
	WriteAndCacheTxLookupEntries(block *types.Block) error
	PutTxLookupEntriesToBatch(batch Batch, block *types.Block)
	DeleteTxLookupEntry(hash common.Hash)

	ReadTxAndLookupInfo(hash common.Hash) (*types.Transaction, common.Hash, uint64, uint64)

	NewSenderTxHashToTxHashBatch() Batch
	PutSenderTxHashToTxHashToBatch(batch Batch, senderTxHash, txHash common.Hash) error
	ReadTxHashFromSenderTxHash(senderTxHash common.Hash) common.Hash

	ReadBloomBits(bloomBitsKey []byte) ([]byte, error)
	WriteBloomBits(bloomBitsKey []byte, bits []byte) error

	ReadValidSections() ([]byte, error)
	WriteValidSections(encodedSections []byte)

	ReadSectionHead(encodedSection []byte) ([]byte, error)
	WriteSectionHead(encodedSection []byte, hash common.Hash)
	DeleteSectionHead(encodedSection []byte)

	// from accessors_metadata.go
	ReadDatabaseVersion() *uint64
	WriteDatabaseVersion(version uint64)

	ReadChainConfig(hash common.Hash) *params.ChainConfig
	WriteChainConfig(hash common.Hash, cfg *params.ChainConfig)

	ReadPreimage(hash common.Hash) []byte
	WritePreimages(number uint64, preimages map[common.Hash][]byte)

	// The operations below are used on the parent chain side, not the child chain side.
	WriteChildChainTxHash(ccBlockHash common.Hash, ccTxHash common.Hash)
	ConvertChildChainBlockHashToParentChainTxHash(scBlockHash common.Hash) common.Hash

	WriteLastIndexedBlockNumber(blockNum uint64)
	GetLastIndexedBlockNumber() uint64

	// The operations below are used on the child chain side, not the parent chain side.
	WriteAnchoredBlockNumber(blockNum uint64)
	ReadAnchoredBlockNumber() uint64

	WriteReceiptFromParentChain(blockHash common.Hash, receipt *types.Receipt)
	ReadReceiptFromParentChain(blockHash common.Hash) *types.Receipt

	WriteHandleTxHashFromRequestTxHash(rTx, hTx common.Hash)
	ReadHandleTxHashFromRequestTxHash(rTx common.Hash) common.Hash

	// cacheManager related functions.
	ClearHeaderChainCache()
	ClearBlockChainCache()
	ReadTxAndLookupInfoInCache(hash common.Hash) (*types.Transaction, common.Hash, uint64, uint64)
	ReadBlockReceiptsInCache(blockHash common.Hash) types.Receipts
	ReadTxReceiptInCache(txHash common.Hash) *types.Receipt

	// snapshot in Clique (ConsensusClique) consensus
	WriteCliqueSnapshot(snapshotBlockHash common.Hash, encodedSnapshot []byte) error
	ReadCliqueSnapshot(snapshotBlockHash common.Hash) ([]byte, error)

	// Governance related functions
	WriteGovernance(data map[string]interface{}, num uint64) error
	WriteGovernanceIdx(num uint64) error
	ReadGovernance(num uint64) (map[string]interface{}, error)
	ReadRecentGovernanceIdx(count int) ([]uint64, error)
	ReadGovernanceAtNumber(num uint64, epoch uint64) (uint64, map[string]interface{}, error)
	WriteGovernanceState(b []byte) error
	ReadGovernanceState() ([]byte, error)
}

func NewDBManager

func NewDBManager(dbc *DBConfig) DBManager

NewDBManager returns a DBManager. If Partitioned is true, each Database has its own LevelDB; if not, all Databases share one common LevelDB.

func NewLevelDBManagerForTest

func NewLevelDBManagerForTest(dbc *DBConfig, levelDBOption *opt.Options) (DBManager, error)

NewLevelDBManagerForTest returns a DBManager consisting only of LevelDB. It also accepts a LevelDB option, opt.Options.
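
In tests, this pairs naturally with GetDefaultLevelDBOption; a minimal sketch with a placeholder directory:

dbc := &database.DBConfig{Dir: "/tmp/klaytn-test-db", DBType: database.LevelDB}
dbm, err := database.NewLevelDBManagerForTest(dbc, database.GetDefaultLevelDBOption())
if err != nil {
	// handle error
}
defer dbm.Close()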

func NewMemoryDBManager

func NewMemoryDBManager() DBManager

type DBType

type DBType uint8
const (
	LevelDB DBType = iota
	BadgerDB
	MemoryDB
	PartitionedDB
)

func (DBType) String

func (dbType DBType) String() string

type Database

type Database interface {
	Putter
	Get(key []byte) ([]byte, error)
	Has(key []byte) (bool, error)
	Delete(key []byte) error
	Close()
	NewBatch() Batch
	Type() DBType
	Meter(prefix string)
}

Database wraps all database operations. All methods are safe for concurrent use.
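
Since MemDB (below) implements this interface, it can be exercised without any on-disk setup; a minimal sketch (the fmt and database imports are assumed):

// memDBRoundTrip stores, reads, and deletes a single key through the Database interface.
func memDBRoundTrip() error {
	var db database.Database = database.NewMemDB()
	defer db.Close()

	if err := db.Put([]byte("key"), []byte("value")); err != nil {
		return err
	}
	value, err := db.Get([]byte("key")) // value == []byte("value")
	if err != nil {
		return err
	}
	fmt.Printf("stored %q\n", value)
	return db.Delete([]byte("key"))
}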

type LevelDBCompressionType

type LevelDBCompressionType uint8
const (
	AllNoCompression LevelDBCompressionType = iota
	ReceiptOnlySnappyCompression
	StateTrieOnlyNoCompression
	AllSnappyCompression
)

type MemDB

type MemDB struct {
	// contains filtered or unexported fields
}

This is a test memory database. Do not use it in production; it does not get persisted.

func NewMemDB

func NewMemDB() *MemDB

func NewMemDBWithCap

func NewMemDBWithCap(size int) *MemDB

func (*MemDB) Close

func (db *MemDB) Close()

func (*MemDB) Delete

func (db *MemDB) Delete(key []byte) error

func (*MemDB) Get

func (db *MemDB) Get(key []byte) ([]byte, error)

func (*MemDB) Has

func (db *MemDB) Has(key []byte) (bool, error)

func (*MemDB) Keys

func (db *MemDB) Keys() [][]byte

func (*MemDB) Len

func (db *MemDB) Len() int

func (*MemDB) Meter

func (db *MemDB) Meter(prefix string)

func (*MemDB) NewBatch

func (db *MemDB) NewBatch() Batch

func (*MemDB) Put

func (db *MemDB) Put(key []byte, value []byte) error

func (*MemDB) Type

func (db *MemDB) Type() DBType

type Putter

type Putter interface {
	Put(key []byte, value []byte) error
}

Putter wraps the database write operation supported by both batches and regular databases.
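
Because both Database and Batch embed Putter, helpers written against Putter work with either; a small illustrative sketch:

// putPrefixed writes value under prefix+key through any Putter
// (a Database for immediate writes, or a Batch for deferred ones).
func putPrefixed(p database.Putter, prefix, key, value []byte) error {
	return p.Put(append(append([]byte{}, prefix...), key...), value)
}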

type TransactionLookup

type TransactionLookup struct {
	Tx *types.Transaction
	*TxLookupEntry
}

type TxLookupEntry

type TxLookupEntry struct {
	BlockHash  common.Hash
	BlockIndex uint64
	Index      uint64
}

TxLookupEntry is positional metadata to help look up the data content of a transaction or receipt given only its hash.
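
The three values returned by DBManager.ReadTxLookupEntry line up with these fields; the sketch below assumes that correspondence (based on the matching types) and the usual database/common imports.

// lookupEntryFor rebuilds a TxLookupEntry from DBManager.ReadTxLookupEntry.
// The mapping of return values to fields is an assumption, not stated on this page.
func lookupEntryFor(dbm database.DBManager, txHash common.Hash) database.TxLookupEntry {
	blockHash, blockIndex, index := dbm.ReadTxLookupEntry(txHash)
	return database.TxLookupEntry{
		BlockHash:  blockHash,
		BlockIndex: blockIndex,
		Index:      index,
	}
}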
