Reindexer


Reindexer is an embeddable, in-memory, document-oriented database with a high-level Query builder interface.

Reindexer's goal is to provide fast search with complex queries. We at Restream weren't happy with Elasticsearch and created Reindexer as a more performant alternative.

The core is written in C++ and the application level API is in Go.

This document describes the Go connector and its API. For information about the reindexer server and HTTP API, refer to the reindexer documentation.

Versions overview

There are two LTS versions of reindexer available: v3.x.x and v4.x.x.

3.x.x is currently our mainstream branch, and 4.x.x (the release/4 branch) is a beta version with experimental RAFT-cluster and sharding support. Storages are compatible between those versions; however, the replication configs are totally different. Versions 3 and 4 are getting all the same bugfixes and features (except replication-related ones).


Features

Key features:

  • Sortable indices
  • Aggregation queries
  • Indices on array fields
  • Complex primary keys
  • Composite indices
  • Join operations
  • Full-text search
  • Up to 64 indices for one namespace
  • ORM-like query interface
  • SQL queries
Performance

Performance has been our top priority from the start, and we think we managed to do it pretty well. Benchmarks show that Reindexer's performance is on par with a typical key-value database. On a single CPU core, we get:

  • up to 500K queries/sec for queries SELECT * FROM items WHERE id='?'
  • up to 50K queries/sec for queries SELECT * FROM items WHERE year > 2010 AND name = 'string' AND id IN (....)
  • up to 20K queries/sec for queries SELECT * FROM items WHERE year > 2010 AND name = 'string' JOIN subitems ON ...

See benchmarking results and more details in benchmarking section

Memory Consumption

Reindexer aims to consume as little memory as possible; most queries are processed without memory allocs at all.

To achieve that, several optimizations are employed, both on the C++ and Go level:

  • Documents and indices are stored in dense binary C++ structs, so they don't impose any load on Go's garbage collector.

  • String duplicates are merged.

  • Memory overhead is about 32 bytes per document + ≈4-16 bytes per each search index.

  • There is an object cache on the Go level for deserialized documents produced after query execution. Future queries use pre-deserialized documents, which cuts repeated deserialization and allocation costs.

  • The Query interface uses sync.Pool to reuse internal structures and buffers. Combining these techniques lets Reindexer execute most queries without any allocations.

Reindexer has an internal full-text search engine. Full-text search usage documentation and examples are here.

Disk Storage

Reindexer can store documents to and load documents from disk via LevelDB. Documents are automatically written to the storage backend asynchronously, in large batches, in the background.

When a namespace is created, all its documents are kept in RAM, so queries on these documents run entirely in in-memory mode.

Replication

Reindexer supports synchronous and asynchronous replication. Check the replication documentation here.

Sharding

Reindexer has some basic support for sharding. Check the sharding documentation here.

Usage

Here is a complete example of basic Reindexer usage:

package main

// Import package
import (
	"fmt"
	"math/rand"

	"github.com/restream/reindexer"
	// choose how the Reindexer binds to the app (in this case "builtin," which means link Reindexer as a static library)
	_ "github.com/restream/reindexer/bindings/builtin"

	// OR link Reindexer as static library with bundled server.
	// _ "github.com/restream/reindexer/bindings/builtinserver"
	// "github.com/restream/reindexer/bindings/builtinserver/config"

)

// Define struct with reindex tags
type Item struct {
	ID       int64  `reindex:"id,,pk"`    // 'id' is primary key
	Name     string `reindex:"name"`      // add index by 'name' field
	Articles []int  `reindex:"articles"`  // add index by 'articles' array field
	Year     int    `reindex:"year,tree"` // add sortable index by 'year' field
}

func main() {
	// Init a database instance and choose the binding (builtin)
	db := reindexer.NewReindex("builtin:///tmp/reindex/testdb")

	// OR - Init a database instance and choose the binding (connect to server)
	// Database should be created explicitly via reindexer_tool or via WithCreateDBIfMissing option:
	// If server security mode is enabled, then username and password are mandatory
	// db := reindexer.NewReindex("cproto://user:pass@127.0.0.1:6534/testdb", reindexer.WithCreateDBIfMissing())

	// OR - Init a database instance and choose the binding (builtin, with bundled server)
	// serverConfig := config.DefaultServerConfig ()
	// If server security mode is enabled, then username and password are mandatory
	// db := reindexer.NewReindex("builtinserver://user:pass@testdb",reindexer.WithServerConfig(100*time.Second, serverConfig))

	// Create new namespace with name 'items', which will store structs of type 'Item'
	db.OpenNamespace("items", reindexer.DefaultNamespaceOptions(), Item{})

	// Generate dataset
	for i := 0; i < 100000; i++ {
		err := db.Upsert("items", &Item{
			ID:       int64(i),
			Name:     "Vasya",
			Articles: []int{rand.Int() % 100, rand.Int() % 100},
			Year:     2000 + rand.Int()%50,
		})
		if err != nil {
			panic(err)
		}
	}

	// Query a single document
	elem, found := db.Query("items").
		Where("id", reindexer.EQ, 40).
		Get()

	if found {
		item := elem.(*Item)
		fmt.Println("Found document:", *item)
	}

	// Query multiple documents
	query := db.Query("items").
		Sort("year", false).                          // Sort results by 'year' field in ascending order
		WhereString("name", reindexer.EQ, "Vasya").   // 'name' must be 'Vasya'
		WhereInt("year", reindexer.GT, 2020).         // 'year' must be greater than 2020
		WhereInt("articles", reindexer.SET, 6, 1, 8). // 'articles' must contain one of [6,1,8]
		Limit(10).                                    // Return maximum 10 documents
		Offset(0).                                    // from 0 position
		ReqTotal()                                    // Calculate the total count of matching documents

	// Execute the query and return an iterator
	iterator := query.Exec()
	// Iterator must be closed
	defer iterator.Close()

	fmt.Println("Found", iterator.TotalCount(), "total documents, first", iterator.Count(), "documents:")

	// Iterate over results
	for iterator.Next() {
		// Get the next document and cast it to a pointer
		elem := iterator.Object().(*Item)
		fmt.Println(*elem)
	}
	// Check the error
	if err := iterator.Error(); err != nil {
		panic(err)
	}
}

There are also some basic samples for C++ and Go here

SQL compatible interface

As an alternative to the Query builder, Reindexer provides an SQL-compatible query interface. Here is a sample of its usage:

    ...
	iterator := db.ExecSQL ("SELECT * FROM items WHERE name='Vasya' AND year > 2020 AND articles IN (6,1,8) ORDER BY year LIMIT 10")
    ...

Please note that the Query builder interface is the preferred way: it has more features and is faster than the SQL interface.

String literals should be enclosed in single quotes.

Composite indexes should be enclosed in double quotes.

	SELECT * FROM items WHERE "field1+field2" = 'Vasya'

If the field name does not start with a letter, '_' or '#', it must be enclosed in double quotes. Examples:

	UPDATE items DROP "123"
	SELECT * FROM ns WHERE "123" = 'some_value'
	SELECT * FROM ns WHERE "123abc" = 123
	DELETE FROM ns WHERE "123abc123" = 111

Installation

Reindexer can run in 3 different modes:

  • embedded (builtin) – Reindexer is embedded into the application as a static library and does not require a separate server process.
  • embedded with server (builtinserver) – Reindexer is embedded into the application as a static library and also starts a server. In this mode, other clients can connect to the application via cproto or http.
  • standalone – Reindexer runs as a standalone server; the application connects to it via the network.
Installation for server mode
  1. Install Reindexer Server
  2. go get -a github.com/restream/reindexer
Official docker image

The simplest way to get the reindexer server is to pull and run the docker image from Docker Hub.

docker run -p9088:9088 -p6534:6534 -it reindexer/reindexer

Dockerfile

Installation for embedded mode
Prerequisites

Reindexer's core is written in C++17 and uses LevelDB as the storage backend, so CMake, a C++17 toolchain and LevelDB must be installed before installing Reindexer.

To build Reindexer, g++ 8+, clang 7+ or mingw64 is required.

Get Reindexer
go get -a github.com/restream/reindexer
bash $GOPATH/src/github.com/restream/reindexer/dependencies.sh
go generate github.com/restream/reindexer/bindings/builtin
# Optional (build builtin server binding)
go generate github.com/restream/reindexer/bindings/builtinserver

Advanced Usage

Index Types and Their Capabilities

Internally, structs are split into two parts:

  • indexed fields, marked with reindex struct tag
  • tuple of non-indexed fields

Queries are possible only on indexed fields, i.e. those marked with the reindex tag. The reindex tag contains the index name, type, and additional options:

reindex:"<name>[[,<type>],<opts>]"

  • name – index name.
  • type – index type:
    • hash – fast select by EQ and SET match. Used by default. Allows slow and inefficient sorting by field.
    • tree – fast select by RANGE, GT, and LT matches. A bit slower for EQ and SET matches than a hash index. Allows fast sorting of results by field.
    • text – full-text search index. Usage details of full-text search are described here
    • - – column index. Can't perform fast selects, because it is implemented with a full-scan technique. Has the smallest memory overhead.
    • ttl – TTL index that works only with int64 fields. These indexes are quite convenient for representing date fields (stored as UNIX timestamps) that expire after a specified number of seconds.
    • rtree – only the DWITHIN match is available. Applicable only to the [2]float64 field type. For details see the geometry subsection.
  • opts – additional index options:
    • pk – field is part of the primary key. A struct must have at least 1 field tagged with pk
    • composite – create a composite index. The field type must be an empty struct: struct{}.
    • joined – field is a recipient for a join. The field type must be []*SubitemType.
    • dense – reduce index size. For hash and tree it saves 8 bytes per unique key value. For - it saves 4-8 bytes per element. Useful for indexes with high selectivity, but for tree and hash indexes with low selectivity it can seriously decrease update performance. dense will also slow down wide full-scan queries on - indexes, due to the lack of CPU cache optimization.
    • sparse – a row (document) contains a value for a sparse index only if it was set on purpose; there are no empty (or default) records of this index type in the row (document). This saves RAM but costs performance – sparse indexes work a bit slower than regular ones.
    • collate_numeric – create a string index that orders values in numeric sequence. The field type must be a string.
    • collate_ascii – create a case-insensitive string index that works with ASCII. The field type must be a string.
    • collate_utf8 – create a case-insensitive string index that works with UTF8. The field type must be a string.
    • collate_custom=<ORDER> – create a string index with a custom sort order. The field type must be a string. <ORDER> is a sequence of letters which defines the sort order.
    • linear, quadratic, greene or rstar – specify the algorithm for construction of an rtree index (rstar by default). For details see the geometry subsection.

Fields with regular indexes are not nullable. The IS NULL condition is supported only by sparse and array indexes.
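For illustration, here is a hedged sketch of the dense and sparse options from the list above (the struct, namespace and field names are illustrative, not from this README):

	type Promo struct {
		ID    int64  `reindex:"id,,pk"`
		Price int    `reindex:"price,tree,dense"` // dense: saves memory, but may slow down updates
		Code  string `reindex:"code,hash,sparse"` // sparse: no record is stored for unset values
	}

	// Sparse indexes support IS NULL conditions; EMPTY is assumed here to be
	// the Query-builder analogue of SQL's IS NULL (the key argument is ignored).
	db.Query("promos").Where("code", reindexer.EMPTY, 0)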

Nested Structs

By default Reindexer scans all nested structs and adds their fields to the namespace (as well as indexes specified).

type Actor struct {
	Name string `reindex:"actor_name"`
}

type BaseItem struct {
	ID int64 `reindex:"id,hash,pk"`
}

type ComplexItem struct {
	BaseItem         // Index fields of BaseItem will be added to reindex
	actor    []Actor // Index fields of Actor will be added to reindex as arrays
	Name     string  `reindex:"name"`
	Year     int     `reindex:"year,tree"`
	parent   *Item   `reindex:"-"` // Index fields of parent will NOT be added to reindex
}
Sort

Reindexer can sort documents by fields (including nested and fields of joined namespaces) or by expressions in ascending or descending order.

Sort expressions can contain field names (including nested fields and fields of joined namespaces) of int, float or bool type, numbers, the functions rank(), abs() and ST_Distance(), parentheses, and the arithmetic operations +, - (unary and binary), * and /. If a field name is followed by '+', they must be separated by a space (to distinguish it from a composite index name). Fields of joined namespaces are written in the form joined_namespace.field.

abs() means the absolute value of its argument.

rank() means the full-text rank of the match and is applicable only in full-text queries.

ST_Distance() means the distance between geometry points (see the geometry subsection). The points can be fields of the current or joined namespaces, or a fixed point in the format ST_GeomFromText('point(1 -3)').

In an SQL query, the sort expression must be quoted.

type Person struct {
	Name string `reindex:"name"`
	Age  int    `reindex:"age"`
}

type City struct {
	Id                 int        `reindex:"id"`
	NumberOfPopulation int        `reindex:"population"`
	Center             [2]float64 `reindex:"center,rtree,linear"`
}

type Actor struct {
	ID          int        `reindex:"id"`
	PersonData  Person     `reindex:"person"`
	Price       int        `reindex:"price"`
	Description string     `reindex:"description,text"`
	BirthPlace  int        `reindex:"birth_place_id"`
	Location    [2]float64 `reindex:"location,rtree,greene"`
}
....

query := db.Query("actors").Sort("id", true)           // Sort by field
....
query = db.Query("actors").Sort("person.age", true)   // Sort by nested field
....
// Sort by joined field
// Works for inner join only, when each item from left namespace has exactly one joined item from right namespace
query = db.Query("actors").
	InnerJoin(db.Query("cities")).On("birth_place_id", reindexer.EQ, "id").
	Sort("cities.population", true)
....
// Sort by expression:
query = db.Query("actors").Sort("person.age / -10 + price / 1000 * (id - 5)", true)
....
query = db.Query("actors").Where("description", reindexer.EQ, "ququ").
    Sort("rank() + id / 100", true)   // Sort with fulltext rank
....
// Sort by geometry distance
query = db.Query("actors").
    Join(db.Query("cities")).On("birth_place_id", reindexer.EQ, "id").
    Sort("ST_Distance(cities.center, ST_GeomFromText('point(1 -3)'))", true).
    Sort("ST_Distance(location, cities.center)", true)
....
// In SQL query:
iterator := db.ExecSQL ("SELECT * FROM actors ORDER BY person.name ASC")
....
iterator := db.ExecSQL ("SELECT * FROM actors WHERE description = 'ququ' ORDER BY 'rank() + id / 100' DESC")
....
iterator := db.ExecSQL ("SELECT * FROM actors ORDER BY 'ST_Distance(location, ST_GeomFromText(\'point(1 -3)\'))' ASC")

It is also possible to set a custom sort order, like this:

type SortModeCustomItem struct {
	ID      int    `reindex:"id,,pk"`
	InsItem string `reindex:"item_custom,hash,collate_custom=a-zA-Z0-9"`
}

or like this

type SortModeCustomItem struct {
	ID      int    `reindex:"id,,pk"`
	InsItem string `reindex:"item_custom,hash,collate_custom=АаБбВвГгДдЕеЖжЗзИиКкЛлМмНнОоПпРрСсТтУуФфХхЦцЧчШшЩщЪъЫыЬьЭ-ЯAaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz0-9ЁёЙйэ-я"`
}

The very first character in this list has the highest priority; the priority of the last character is the lowest. This means the sorting algorithm will put items that start with the first character before others. If some characters are skipped, their priorities take their usual values (relative to the characters in the list).

Counting

Reindexer supports two kinds of counting aggregation:

  • count() – counts all the documents that match the query's filters;
  • count_cached() – allows reindexer to use cached count results. Generally this method is faster (especially for queries with a low limit() value), but it may return outdated results if the cache was not invalidated after some documents changed.

Go example for count():

query := db.Query("items").
		WhereString("name", reindexer.EQ, "Vasya").
		Limit(10).
		ReqTotal()
it := query.MustExec()
it.TotalCount() // Get total count of items, matching condition "WhereString("name", reindexer.EQ, "Vasya")"

Go example for count_cached():

query := db.Query("items").
		WhereString("name", reindexer.EQ, "Vasya").
		Limit(10).
		CachedTotal()
it := query.MustExec()
it.TotalCount() // Get total count of items, matching condition "WhereString("name", reindexer.EQ, "Vasya"), using queries cache"

If you need the total count field in the JSON representation via the Go API, you have to set a name for this field (otherwise it won't be serialized to JSON):

query := db.Query("items").
		WhereString("name", reindexer.EQ, "Vasya").
		Limit(10).
		ReqTotal("my_total_count_name")
json, err := query.ExecToJson().FetchAll()
Text pattern search with LIKE condition

For simple text pattern search in string fields, the LIKE condition can be used. It searches for strings matching a pattern. In the pattern, _ means any single character and % means any sequence of characters.

Go example:

	query := db.Query("items").
		Where("field", reindexer.LIKE, "pattern")

SQL example:
	SELECT * FROM items WHERE field LIKE 'pattern'

'me_t' matches 'meet', 'meat', 'melt' and so on; '%tion' matches 'tion', 'condition', 'creation' and so on.

CAUTION: the LIKE condition uses a scan method. It should only be used for debugging purposes or within queries that also have other highly selective conditions.

Generally, for full-text search with reasonable speed, we recommend using a fulltext index.

Update queries

UPDATE queries are used to modify existing items of a namespace. There are several kinds of update queries: updating existing fields, adding new fields and dropping existing non-indexed fields.

UPDATE Sql-Syntax

UPDATE nsName
SET field1 = value1, field2 = value2, ..
WHERE condition;

It is also possible to use arithmetic expressions with +, -, /, * and brackets

UPDATE NS SET field1 = field2+field3-(field4+5)/2

including functions like now(), sec() and serial(). To use expressions from Go code, the SetExpression() method needs to be called instead of Set().
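For example, a minimal sketch of calling SetExpression() from Go (the namespace and field names are illustrative):

	// Increase 'price' by 10% for all items of the given year;
	// Set() would assign the literal string, SetExpression() evaluates it
	db.Query("items").
		Where("year", reindexer.EQ, 2010).
		SetExpression("price", "price * 1.1").
		Update()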

To make an array field empty:

UPDATE NS SET arrayfield = [] where id = 100

and to set it to null:

UPDATE NS SET field = null where id > 100

In the case of non-indexed fields, setting a value of a different type will replace the field completely; in the case of indexed fields, it is only possible to convert between adjacent types (integral types and bool), and between numeric strings (like "123456") and integral types. Setting an indexed field to null resets it to the default value.

It is possible to add new fields to existing items

UPDATE Ns set newField = 'Brand new!' where id > 100

and even add a new field by a complex nested path, like this:

UPDATE Ns set nested.nested2.nested3.nested4.newField = 'new nested field!' where id > 100

which will create the nested objects nested, nested2, nested3 and nested4, with newField as a member of object nested4.

Example of using update queries in Go code:

db.Query("items").Where("id", reindexer.EQ, 40).Set("field1", values).Update()

Reindexer can update and add object fields. An object can be set via a struct, a map, or a byte array (i.e., a JSON representation of the object).

type ClientData struct {
	Name          string `reindex:"name" json:"name"`
	Age           int    `reindex:"age" json:"age"`
	Address       int    `reindex:"year" json:"year"`
	Occupation    string `reindex:"occupation" json:"occupation"`
	TaxYear       int    `reindex:"tax_year" json:"tax_year"`
	TaxConsultant string `reindex:"tax_consultant" json:"tax_consultant"`
}
type Client struct {
	ID      int         `reindex:"id" json:"id"`
	Data    ClientData  `reindex:"client_data" json:"client_data"`
	...
}
clientData := updateClientData(clientId)
db.Query("clients").Where("id", reindexer.EQ, 100).SetObject("client_data", clientData).Update()

When passing a map in Go, it can only have string keys; map[string]interface{} is a perfect choice.
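A minimal sketch of setting an object field from a map (field names follow the ClientData example above):

	// SetObject accepts a map with string keys as the object value
	newData := map[string]interface{}{
		"name":       "John Doe",
		"age":        40,
		"occupation": "Bank Manager",
	}
	db.Query("clients").Where("id", reindexer.EQ, 100).SetObject("client_data", newData).Update()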

Updating an object field via an SQL statement:

UPDATE clients SET client_data = {"Name":"John Doe","Age":40,"Address":"Fifth Avenue, Manhattan","Occupation":"Bank Manager","TaxYear":1999,"TaxConsultant":"Jane Smith"} where id = 100;

SQL syntax of UPDATE queries that drop existing non-indexed fields:

UPDATE nsName
DROP field1, field2, ..
WHERE condition;
db.Query("items").Where("id", reindexer.EQ, 40).Drop("field1").Update()

Reindexer's update mechanism can modify array fields: change a particular item of an existing array or even replace the entire field.

To update an item, the subscript operator syntax is used:

update ns set array[*].prices[0] = 9999 where id = 5

where * means all items.

To update an entire array, the following is used:

update ns set prices = [999, 1999, 2999] where id = 9

Any non-indexed field can easily be converted to an array using this syntax.

Reindexer also allows updating items of object arrays:

update ns set extra.objects[0] = {"Id":0,"Description":"Updated!"} where id = 9

or like this, from Go code:

db.Query("clients").Where("id", reindexer.EQ, 100).SetObject("extra.objects[0]", updatedValue).Update()

To add items to an existing array, the following syntax is supported:

update ns set integer_array = integer_array || [5,6,7,8]

and

update ns set integer_array = [1,2,3,4,5] || integer_array

The first one appends elements to the end of integer_array; the second one prepends 5 items to the front of it. To make this work from Go, SetExpression() should be used instead of Set().
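A sketch of the same concatenation from Go, assuming a namespace like the SQL examples above:

	// Append elements to 'integer_array'; note SetExpression, not Set
	db.Query("ns").
		Where("id", reindexer.EQ, 9).
		SetExpression("integer_array", "integer_array || [5,6,7,8]").
		Update()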

To remove an item by index, do the following:

update ns drop array[5]
Transactions and batch update

Reindexer supports transactions. A transaction performs an atomic namespace update. Both synchronous and asynchronous transactions are available. To start a transaction, the db.BeginTx() method is used. It creates a transaction object which provides the usual Update/Upsert/Insert/Delete interface for the application. For RPC clients there is a transaction count limitation: each connection can't have more than 1024 open transactions at the same time.

Synchronous mode

	// Create new transaction object
	tx, err := db.BeginTx("items");
	if err != nil {
		panic(err)
	}
	// Fill transaction object
	tx.Upsert(&Item{ID: 100})
	tx.Upsert(&Item{ID: 101})
	tx.Query().WhereInt("id", reindexer.EQ, 102).Set("Name", "Petya").Update()
	// Apply transaction
	if err := tx.Commit(); err != nil {
		panic(err)
	}

Async batch mode

To speed up the insertion of bulk records, async mode can be used.


	// Create new transaction object
	tx, err := db.BeginTx("items");
	if err != nil {
		panic(err)
	}
	// Prepare transaction object async.
	tx.UpsertAsync(&Item{ID: 100},func(err error) {})
	tx.UpsertAsync(&Item{ID: 101},func(err error) {})
	// Wait for the async operations to finish, and apply the transaction.
	if err := tx.Commit(); err != nil {
		panic(err)
	}

The second argument of UpsertAsync is a completion function which will be called after the server response is received. If any error occurs during the prepare process, tx.Commit will return an error, so it is enough to check the error returned by tx.Commit to know whether all data has been committed successfully.
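For instance, a hedged sketch of a completion callback that records the first async error (the bookkeeping here is an illustration, not part of the official API):

	var (
		mu       sync.Mutex
		firstErr error
	)
	tx.UpsertAsync(&Item{ID: 102}, func(err error) {
		mu.Lock()
		defer mu.Unlock()
		if err != nil && firstErr == nil {
			firstErr = err // remember the first async failure
		}
	})
	// tx.Commit() also surfaces prepare errors, so checking its
	// return value alone is sufficient in most cases.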

Transactions commit strategies

Depending on the number of changes in a transaction, there are 2 possible commit strategies:

  • Locked atomic update. Reindexer locks the namespace and applies all changes under a common lock. This mode is used for small amounts of changes.
  • Copy & atomic replace. In this mode Reindexer takes a snapshot of the namespace, applies all changes to the snapshot, and atomically replaces the namespace without a lock.

The data-size thresholds for choosing the commit strategy can be set in the namespace config. Check the fields StartCopyPolicyTxSize, CopyPolicyMultiplier and TxSizeToAlwaysCopy in struct DBNamespacesConfig (describer.go).

Implementation notes
  1. The transaction object is not thread-safe and can't be used from different goroutines;
  2. The transaction object holds Reindexer's resources, therefore the application should explicitly call Rollback or Commit, otherwise resources will leak (a common pattern is sketched below);
  3. It is safe to call Rollback after Commit;
  4. It is possible to run a query within a transaction by calling tx.Query("ns").Exec() ...;
  5. Only serializable isolation is available, i.e. each transaction takes an exclusive lock over the target namespace until all of the steps of the transaction are committed.
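Since Rollback after Commit is safe (note 3), a common hedged pattern is to defer the rollback right after BeginTx; this sketch reuses the Item type from the earlier examples:

	tx, err := db.BeginTx("items")
	if err != nil {
		panic(err)
	}
	// Rollback is a no-op after a successful Commit,
	// but releases resources on any early return.
	defer tx.Rollback()

	tx.Upsert(&Item{ID: 200})
	if err := tx.Commit(); err != nil {
		panic(err)
	}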
Join

Reindexer can join documents from multiple namespaces into a single result:

type Actor struct {
	ID        int    `reindex:"id"`
	Name      string `reindex:"name"`
	IsVisible bool   `reindex:"is_visible"`
}

type ItemWithJoin struct {
	ID          int      `reindex:"id"`
	Name        string   `reindex:"name"`
	ActorsIDs   []int    `reindex:"actors_ids"`
	ActorsNames []string `reindex:"actors_names"`
	Actors      []*Actor `reindex:"actors,,joined"`
}
....

query := db.Query("items_with_join").Join(
	db.Query("actors").
		WhereBool("is_visible", reindexer.EQ, true),
	"actors"
).On("actors_ids", reindexer.SET, "id")

query.Exec()

In this example, Reindexer uses reflection under the hood to create the Actor slice and copy the Actor structs.

Join query may have from one to several On conditions connected with And (by default), Or or Not operators:

query := db.Query("items_with_join").
	Join(
		db.Query("actors").
			WhereBool("is_visible", reindexer.EQ, true),
		"actors").
	On("actors_ids", reindexer.SET, "id").
	Or().
	On("actors_names", reindexer.SET, "name")

An InnerJoin combines data from two namespaces where there is a match on the joining fields in both namespaces. A LeftJoin returns all valid items from the namespace on the left side of the LeftJoin keyword, along with the values from the namespace on the right side, or nothing if a matching item doesn't exist. Join is an alias for LeftJoin.

InnerJoins can be used as a condition in Where clause:

query1 := db.Query("items_with_join").
	WhereInt("id", reindexer.RANGE, []int{0, 100}).
	Or().
	InnerJoin(db.Query("actors").WhereString("name", reindexer.EQ, "ActorName"), "actors").
	On("actors_ids", reindexer.SET, "id").
	Or().
	InnerJoin(db.Query("actors").WhereInt("id", reindexer.RANGE, []int{100, 200}), "actors").
	On("actors_ids", reindexer.SET, "id")

query2 := db.Query("items_with_join").
	WhereInt("id", reindexer.RANGE, []int{0, 100}).
	Or().
	OpenBracket().
		InnerJoin(db.Query("actors").WhereString("name", reindexer.EQ, "ActorName"), "actors").
		On("actors_ids", reindexer.SET, "id").
		InnerJoin(db.Query("actors").WhereInt("id", reindexer.RANGE, []int{100, 200}), "actors").
		On("actors_ids", reindexer.SET, "id").
	CloseBracket()

query3 := db.Query("items_with_join").
	WhereInt("id", reindexer.RANGE, []int{0, 100}).
	Or().
	InnerJoin(db.Query("actors").WhereInt("id", reindexer.RANGE, []int{100, 200}), "actors").
	On("actors_ids", reindexer.SET, "id").
	Limit(0)

Note that the Or operator usually implements short-circuiting for Where conditions: if the previous condition is true, the next one is not evaluated. InnerJoin works differently: in query1 (from the example above) both InnerJoin conditions are evaluated regardless of the WhereInt result. Limit(0) as part of an InnerJoin (query3 in the example above) does not join any data – it works only as a filter to verify the conditions.

Joinable interface

To avoid using reflection, Item can implement the Joinable interface. If it is implemented, Reindexer uses it instead of the slow reflection-based implementation. This increases overall performance by 10-20% and reduces the number of allocations.

// Joinable interface implementation.
// Join adds items from the joined namespace to the `ItemWithJoin` object.
// When calling Joinable interface, additional context variable can be passed to implement extra logic in Join.
func (item *ItemWithJoin) Join(field string, subitems []interface{}, context interface{}) {

	switch field {
	case "actors":
		for _, joinItem := range subitems {
			item.Actors = append(item.Actors, joinItem.(*Actor))
		}
	}
}
Complex Primary Keys and Composite Indexes

A document can have multiple fields as its primary key. To enable this feature, add a composite index to the struct. A composite index is an index over multiple fields; it can be used instead of several separate indexes.

type Item struct {
	ID    int64 `reindex:"id"`     // 'id' is a part of a primary key
	SubID int   `reindex:"sub_id"` // 'sub_id' is a part of a primary key
	// Fields
	//	....
	// Composite index
	_ struct{} `reindex:"id+sub_id,,composite,pk"`
}

OR

type Item struct {
	ID       int64 `reindex:"id,-"`         // 'id' is a part of primary key, WITHOUT personal searchable index
	SubID    int   `reindex:"sub_id,-"`     // 'sub_id' is a part of a primary key, WITHOUT a personal searchable index
	SubSubID int   `reindex:"sub_sub_id,-"` // 'sub_sub_id' is a part of a primary key WITHOUT a personal searchable index

	// Fields
	// ....

	// Composite index
	_ struct{} `reindex:"id+sub_id+sub_sub_id,,composite,pk"`
}

Composite indexes are also useful for sorting results by multiple fields:

type Item struct {
	ID     int64 `reindex:"id,,pk"`
	Rating int   `reindex:"rating"`
	Year   int   `reindex:"year"`

	// Composite index
	_ struct{} `reindex:"rating+year,tree,composite"`
}

...
	// Sort query results by rating first, then by year
	query := db.Query("items").Sort("rating+year", true)

	// Sort query results by rating first, then by year, and put items where rating == 5 and year == 2010 first
	query := db.Query("items").Sort("rating+year", true,[]interface{}{5,2010})

To query a composite index, pass []interface{} to the .WhereComposite function of the Query builder:

	// Get results where rating == 5 and year == 2010
	query := db.Query("items").WhereComposite("rating+year", reindexer.EQ,[]interface{}{5,2010})
Aggregations

Reindexer can return aggregated results. Currently, Average, Sum, Minimum, Maximum, Facet and Distinct aggregations are supported.

  • AggregateMax - get maximum field value
  • AggregateMin - get minimum field value
  • AggregateSum - get sum field value
  • AggregateAvg - get average field value
  • AggregateFacet - get fields facet value
  • Distinct - get list of unique values of the field

In order to support aggregation, Query has the methods AggregateAvg, AggregateSum, AggregateMin, AggregateMax, AggregateFacet and Distinct; they should be called before query execution to ask reindexer to calculate the data aggregations. The Facet aggregation is applicable to multiple data columns; its result can be sorted by any data column or by 'count', and cut off by offset and limit. To support this functionality, the AggregateFacet method returns an AggregationFacetRequest which has the methods Sort, Limit and Offset.

To get aggregation results, the Iterator has the AggResults method: it is available after query execution and returns a slice of results.

Example code for aggregating items by price and name:


	query := db.Query("items")
	query.AggregateMax("price")
	query.AggregateFacet("name", "price").Sort("name", true).Sort("count", false).Offset(10).Limit(100)
	iterator := query.Exec()

	aggMaxRes := iterator.AggResults()[0]

	fmt.Printf ("max price = %d", aggMaxRes.Value)

	aggFacetRes := iterator.AggResults()[1]

	fmt.Printf ("'name' 'price' -> count")
	for _, facet := range aggFacetRes.Facets {
		fmt.Printf ("'%s' '%s' -> %d", facet.Values[0], facet.Values[1], facet.Count)
	}


	query := db.Query("items")
	query.Distinct("name").Distinct("price")
	iterator := query.Exec()

	aggResults := iterator.AggResults()

	distNames := aggResults[0]
	fmt.Println ("names:")
	for _, name := range distNames.Distincts {
		fmt.Println(name)
	}

	distPrices := aggResults[1]
	fmt.Println ("prices:")
	for _, price := range distPrices.Distincts {
		fmt.Println(price)
	}
Search in array fields with matching array indexes

Reindexer can search data in array fields when the matching values have the same index positions. For instance, suppose we have an array of structures:

type Elem struct {
   F1 int `reindex:"f1"`
   F2 int `reindex:"f2"`
}

type A struct {
   Elems []Elem
}

A common attempt to search values in this array

db.Query("Namespace").Where("f1",EQ,1).Where("f2",EQ,2)

finds all items of array Elem[] where f1 is equal to 1 and f2 is equal to 2.

The EqualPosition function allows searching in array fields with equal indexes. Queries like this:

db.Query("Namespace").Where("f1", reindexer.GE, 5).Where("f2", reindexer.EQ, 100).EqualPosition("f1", "f2")

or

SELECT * FROM Namespace WHERE f1 >= 5 AND f2 = 100 EQUAL_POSITION(f1,f2);

will find all the items of the array Elem[] with equal array indexes where f1 is greater than or equal to 5 and f2 is equal to 100 (for instance, the query returned 5 items where only the 3rd elements of both arrays have the appropriate values).

With complex expressions (expressions with brackets), equal_position() can appear within a bracket:

SELECT * FROM Namespace WHERE (f1 >= 5 AND f2 = 100 EQUAL_POSITION(f1,f2)) OR (f3 = 3 AND f4 < 4 AND f5 = 7 EQUAL_POSITION(f3,f4,f5));
SELECT * FROM Namespace WHERE (f1 >= 5 AND f2 = 100 AND f3 = 3 AND f4 < 4 EQUAL_POSITION(f1,f3) EQUAL_POSITION(f2,f4)) OR (f5 = 3 AND f6 < 4 AND f7 = 7 EQUAL_POSITION(f5,f7));
SELECT * FROM Namespace WHERE f1 >= 5 AND (f2 = 100 AND f3 = 3 AND f4 < 4 EQUAL_POSITION(f2,f3)) AND f5 = 3 AND f6 < 4 EQUAL_POSITION(f1,f5,f6);

equal_position doesn't work with the following conditions: IS NULL, IS EMPTY and IN (with an empty parameter list).

Atomic on update functions

There are atomic functions which execute under the namespace lock and therefore guarantee data consistency:

  • serial() – a sequence of integers, useful for auto-increment keys
  • now() – the current timestamp, useful for data synchronization. It may take one of the following arguments: msec, usec, nsec and sec. The sec argument is used by default.

These functions can be passed to Upsert/Insert/Update as the 3rd and subsequent arguments.

If these functions are provided, the item passed by reference will be updated with the resulting values:

   // set ID field from serial generator
   db.Insert ("items",&item,"id=serial()")

   // set current timestamp in nanoseconds to updated_at field
   db.Update ("items",&item,"updated_at=now(NSEC)")

   // set current timestamp and ID
   db.Upsert ("items",&item,"updated_at=now(NSEC)","id=serial()")

Expire Data from Namespace by Setting TTL

Data expiration is useful for some classes of information, including machine generated event data, logs, and session information that only need to persist for a limited period of time.

Reindexer makes it possible to set TTL (time to live) for Namespace items. Adding TtlIndex to Namespace automatically removes items after a specified number of seconds.

TTL indexes work only with int64 fields and store UNIX timestamp data. Items containing a TTL index expire after expire_after seconds. Example of declaring a TTL index in Go:

type NamespaceExample struct {
	ID   int    `reindex:"id,,pk" json:"id"`
	Date int64  `reindex:"date,ttl,,expire_after=3600" json:"date"`
}
...
ns.Date = time.Now().Unix()

In this case, items of the namespace NamespaceExample expire 3600 seconds after the NamespaceExample.Date field value (which is a UNIX timestamp).

A TTL index supports queries in the same way non-TTL indexes do.
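For example, a hedged sketch of querying by the TTL field like any other index (the namespace and field names follow the example above; the WhereInt64 call is based on the generic Query builder):

	// Select items whose 'date' is within the last hour (a sketch)
	it := db.Query("namespace_example").
		WhereInt64("date", reindexer.GT, time.Now().Add(-time.Hour).Unix()).
		Exec()
	defer it.Close()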

Direct JSON operations
Upsert data in JSON format

If source data is available in JSON format, it is possible to improve the performance of Upsert/Delete operations by passing JSON directly to reindexer. JSON deserialization is done by C++ code, without extra allocations/deserialization in Go code.

The Upsert and Delete functions can process JSON by passing a []byte argument with the JSON data:

	json := []byte (`{"id":1,"name":"test"}`)
	db.Upsert  ("items",json)

It is just a faster equivalent of:

	item := &Item{}
	json.Unmarshal ([]byte (`{"id":1,"name":"test"}`),item)
	db.Upsert ("items",item)
Get Query results in JSON format

If you need to serialize query results in JSON format, you can improve performance by obtaining the results in JSON format directly from reindexer. JSON serialization is done by C++ code, without extra allocations/serialization in Go code.

...
	iterator := db.Query("items").
		Select ("id","name").        // Filter output JSON: select only the "id" and "name" fields of items; other fields will be omitted
		Limit (1).
		ExecToJson ("root_object")   // Name of root object of output JSON

	json,err := iterator.FetchAll()
	// Check the error
	if err != nil {
		panic(err)
	}
	fmt.Printf ("%s\n",string (json))
...

This code will print something like:

{ "root_object": [{ "id": 1, "name": "test" }] }
Using object cache

To avoid race conditions, the object cache is turned off by default, and all objects are allocated and deserialized from reindexer's internal format (called CJSON) on each query. The deserialization uses reflection, so its speed is not optimal (in fact, CJSON deserialization is ~3-10x faster than JSON and ~1.2x faster than GOB), but performance is still seriously limited by reflection overhead.

There are 2 ways to enable object cache:

  • Provide DeepCopy interface
  • Ask the query to return shared objects from the cache
DeepCopy interface

If an object implements the DeepCopy interface, reindexer turns on the object cache and uses DeepCopy to copy objects from the cache into query results. The DeepCopy implementation is responsible for making a deep copy of the source object.

Here is a sample DeepCopy implementation:

func (item *Item) DeepCopy() interface{} {
	copyItem := &Item{
		ID:   item.ID,
		Name: item.Name,
		// make takes length before capacity (the original had them swapped)
		Articles: make([]int, len(item.Articles), cap(item.Articles)),
		Year: item.Year,
	}
	copy(copyItem.Articles, item.Articles)
	return copyItem
}

There is a code generation tool, gencopy, which can automatically generate DeepCopy implementations for structs.

Get shared objects from object cache (USE WITH CAUTION)

To speed up queries and avoid allocating new objects on each query, it is possible to ask the query to return objects directly from the object cache. To enable this behavior, call AllowUnsafe(true) on the Iterator.

WARNING: with AllowUnsafe(true), queries return shared pointers to structs in the object cache. Therefore the application MUST NOT modify the returned objects.

	res, err := db.Query("items").WhereInt ("id",reindexer.EQ,1).Exec().AllowUnsafe(true).FetchAll()
	if err != nil {
		panic (err)
	}

	if len(res) > 0 {
		// item is a SHARED pointer to a struct in the object cache
		item := res[0].(*Item)

		// It's OK - fmt.Printf will not modify item
		fmt.Printf ("%v",item)

		// It's WRONG - can race, and will corrupt data in object cache
		item.Name = "new name"
	}
Limit size of object cache

By default, the maximum size of the object cache is 256000 items per namespace. To change the maximum size, use the ObjCacheSize method of NamespaceOptions passed to OpenNamespace, e.g.

	// Set object cache limit to 4096 items
	db.OpenNamespace("items_with_huge_cache", reindexer.DefaultNamespaceOptions().ObjCacheSize(4096), Item{})
Geometry

The only supported geometry data type is a 2D point, which is implemented in Go as [2]float64.

In SQL, a point can be created as ST_GeomFromText('point(1 -3)').

The only supported request for a geometry field is to find all points within a given distance of a point: DWithin(field_name, point, distance), as in the example below.

The corresponding SQL function is ST_DWithin(field_name, point, distance).

An RTree index can be created for points. To do so, the rtree tag plus one of linear, quadratic, greene or rstar should be declared; the latter selects the RTree construction algorithm. The algorithms are listed in order from optimized-for-insertion to optimized-for-search, but it depends on the data, so test which one is more appropriate for you. The default algorithm is rstar.

type Item struct {
	id              int        `reindex:"id,,pk"`
	pointIndexed    [2]float64 `reindex:"point_indexed,rtree,linear"`
	pointNonIndexed [2]float64 `json:"point_non_indexed"`
}

query1 := db.Query("items").DWithin("point_indexed", [2]float64{-1.0, 1.0}, 4.0)
SELECT * FROM items WHERE ST_DWithin(point_non_indexed, ST_GeomFromText('point(1 -3.5)'), 5.0);

Logging, debug and profiling

Turn on logger

The Reindexer logger can be turned on with the db.SetLogger() method, as in this snippet of code:

type Logger struct {
}
func (Logger) Printf(level int, format string, msg ...interface{}) {
	log.Printf(format, msg...)
}
...
	db.SetLogger (Logger{})
Debug queries

Another useful feature is debug printing of processed queries. There are 2 methods to print query details:

  • db.SetDefaultQueryDebug(namespace string, level int) – globally enables printing the details of all queries in a namespace

  • query.Debug(level int) – prints the details of a single query execution; level is the level of verbosity:
    • reindexer.INFO – will print only query conditions
    • reindexer.TRACE – will print query conditions and execution details with timings

  • query.Explain() – calculates and stores query execution details.

  • iterator.GetExplainResults() – returns the stored query execution details
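A short sketch combining these methods (the namespace name is illustrative, and the TotalUs field is an assumption; see ExplainResults in the package docs for the exact shape):

	// Print TRACE-level details for every query on the "items" namespace
	db.SetDefaultQueryDebug("items", reindexer.TRACE)

	// Debug and explain a single query
	it := db.Query("items").
		WhereInt("year", reindexer.GT, 2010).
		Debug(reindexer.TRACE).
		Explain().
		Exec()
	defer it.Close()
	if explain, err := it.GetExplainResults(); err == nil && explain != nil {
		fmt.Println("total query time (us):", explain.TotalUs)
	}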

Custom allocators support

Reindexer has support for TCMalloc (which is also a part of GPerfTools) and JEMalloc allocators (check ENABLE_TCMALLOC and ENABLE_JEMALLOC in CMakeLists.txt).

If you have built the standalone server from sources, available allocators will be detected and used automatically.

In go:generate builds and prebuilt packages, reindexer has TCMalloc support; however, none of the TCMalloc libraries will be linked automatically. To force linkage of the allocator library, LD_PRELOAD with the required library has to be used:

LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc_and_profiler.so ./my_executable

A custom allocator may be handy to track memory consumption, profile heap/CPU usage, or improve general performance.

Profiling
Heap profiling

Because the reindexer core is written in C++, calls into reindexer and their memory consumption are not visible to the Go profiler. To profile the reindexer core, a cgo profiler is available. The cgo profiler is now part of reindexer, but it can be used with any other cgo code.

Usage of the cgo profiler is very similar to the Go profiler:

  1. Add import:
import _ "github.com/restream/reindexer/pprof"
  2. If your application is not already running an http server, you need to start one. Add "net/http" and "log" to your imports and the following code to your main function:
go func() {
	log.Println(http.ListenAndServe("localhost:6060", nil))
}()
  3. Run the application with the environment variable HEAPPROFILE=/tmp/pprof
  4. Then use the pprof tool to look at the heap profile:
pprof -symbolize remote http://localhost:6060/debug/cgo/pprof/heap
CPU profiling

Reindexer's internal profiler is based on the gperf_tools library and cannot get a CPU profile via the Go runtime. However, the Go profiler may be used with a symbolizer to retrieve C++ CPU usage.

  1. Add import:
import _ "net/http/pprof"
  2. If your application is not already running an http server, you need to start one. Add "net/http" and "log" to your imports and the following code to your main function:
go func() {
	log.Println(http.ListenAndServe("localhost:6060", nil))
}()
  3. Run the application with the environment variable REINDEXER_CGOBACKTRACE=1
  4. Then use the pprof tool to get the CPU profile:
pprof -symbolize remote http://localhost:6060/debug/pprof/profile?seconds=10
Known issues

Due to Go runtime internals, it is not recommended to collect CPU and heap profiles simultaneously, as this may cause a deadlock inside the profiler.

Integration with other programming languages

A list of connectors for working with Reindexer from other programming languages (to be continued):

Pyreindexer for Python

Pyreindexer is the official connector, maintained by Reindexer's team. It supports both builtin and standalone modes. Before installation, reindexer-dev (version >= 2.10) should be installed. See the installation instructions for details.

  • Support modes: standalone, builtin
  • API Used: binary ABI, cproto
  • Dependency on reindexer library (reindexer-dev package): yes

To install, run:

pip3 install pyreindexer

https://github.com/Restream/reindexer-py
https://pypi.org/project/pyreindexer/
Note: Python version >= 3.6 is required.

Reindexer for Java
  • Support modes: standalone, builtin, builtinserver
  • API Used: binary ABI, cproto
  • Dependency on reindexer library (reindexer-dev package): yes, for builtin & builtinserver

Reindexer for Java is the official connector, maintained by Reindexer's team. It supports both builtin and standalone modes. To enable builtin mode support, reindexer-dev (version >= 3.1.0) should be installed. See the installation instructions for details.

To add reindexer to a Java or Kotlin project, add the following lines to the Maven project file:

<dependency>
    <groupId>com.github.restream</groupId>
    <artifactId>rx-connector</artifactId>
    <version>[LATEST_VERSION]</version>
</dependency>

https://github.com/Restream/reindexer-java
Note: Java version >= 1.8 is required.

Spring wrapper

Spring wrapper for Java-connector: https://github.com/evgeniycheban/spring-data-reindexer

3rd party open source connectors
PHP

https://github.com/Smolevich/reindexer-client

  • Support modes: standalone only
  • API Used: HTTP REST API
  • Dependency on reindexer library (reindexer-dev package): no
Rust

https://github.com/coinrust/reindexer-rs

  • Support modes: standalone, builtin
  • API Used: binary ABI, cproto
  • Dependency on reindexer library (reindexer-dev package): yes
.NET

https://github.com/oruchreis/ReindexerNet

  • Support modes: builtin
  • API Used: binary ABI
  • Dependency on reindexer library (reindexer-dev package): yes

Limitations and known issues

Currently, Reindexer is stable and production-ready, but it is still a work in progress, so there are some limitations and issues:

  • Internal C++ API is not stabilized and is subject to change.

Getting help

You can get help in several ways:

  1. Join Reindexer Telegram group
  2. Write an issue

Documentation

Index

Constants

View Source
const (
	ConfigNamespaceName           = "#config"
	MemstatsNamespaceName         = "#memstats"
	NamespacesNamespaceName       = "#namespaces"
	PerfstatsNamespaceName        = "#perfstats"
	QueriesperfstatsNamespaceName = "#queriesperfstats"
	ClientsStatsNamespaceName     = "#clientsstats"
	ReplicationStatsNamespaceName = "#replicationstats"
)
View Source
const (
	// Reconnect to the next node in the list
	ReconnectStrategyNext = ReconnectStrategy("next")
	// Reconnect to the random node in the list
	ReconnectStrategyRandom = ReconnectStrategy("random")
	// Reconnect to the synchronized node (which was part of the last consensus in the synchronous cluster)
	ReconnectStrategySynchronized = ReconnectStrategy("synchronized")
	// Always choose cluster's leader
	ReconnectStrategyPrefferWrite = ReconnectStrategy("preffer_write")
	// Always choose cluster's follower
	ReconnectStrategyReadOnly = ReconnectStrategy("read_only")
	// Choose follower, when it's possible. Otherwise reconnect to leader
	ReconnectStrategyPrefferRead = ReconnectStrategy("preffer_read")
)
View Source
const (
	QueryStrictModeNone    = bindings.QueryStrictModeNone    // Allows any fields in conditions, but doesn't check actual values for non-existing names
	QueryStrictModeNames   = bindings.QueryStrictModeNames   // Allows only valid fields and indexes in conditions. Otherwise query will return error
	QueryStrictModeIndexes = bindings.QueryStrictModeIndexes // Allows only indexes in conditions. Otherwise query will return error
)
View Source
const (
	CollateNone    = bindings.CollateNone
	CollateASCII   = bindings.CollateASCII
	CollateUTF8    = bindings.CollateUTF8
	CollateNumeric = bindings.CollateNumeric
	CollateCustom  = bindings.CollateCustom
)
View Source
const (
	// Equal '='
	EQ = bindings.EQ
	// Greater '>'
	GT = bindings.GT
	// Lower '<'
	LT = bindings.LT
	// Greater or equal '>=' (GT|EQ)
	GE = bindings.GE
	// Lower or equal '<=' (LT|EQ)
	LE = bindings.LE
	// One of set 'IN []'
	SET = bindings.SET
	// All of set
	ALLSET = bindings.ALLSET
	// In range
	RANGE = bindings.RANGE
	// Any value
	ANY = bindings.ANY
	// Empty value (usually a zero-length array)
	EMPTY = bindings.EMPTY
	// String like pattern
	LIKE = bindings.LIKE
	// Geometry DWithin
	DWITHIN = bindings.DWITHIN
)

Condition types

View Source
const (
	// ERROR Log level
	ERROR = bindings.ERROR
	// WARNING Log level
	WARNING = bindings.WARNING
	// INFO Log level
	INFO = bindings.INFO
	// TRACE Log level
	TRACE = bindings.TRACE
)
View Source
const (
	AggAvg      = bindings.AggAvg
	AggSum      = bindings.AggSum
	AggFacet    = bindings.AggFacet
	AggMin      = bindings.AggMin
	AggMax      = bindings.AggMax
	AggDistinct = bindings.AggDistinct
)

Aggregation funcs

View Source
const (
	ErrCodeOK               = bindings.ErrOK
	ErrCodeParseSQL         = bindings.ErrParseSQL
	ErrCodeQueryExec        = bindings.ErrQueryExec
	ErrCodeParams           = bindings.ErrParams
	ErrCodeLogic            = bindings.ErrLogic
	ErrCodeParseJson        = bindings.ErrParseJson
	ErrCodeParseDSL         = bindings.ErrParseDSL
	ErrCodeConflict         = bindings.ErrConflict
	ErrCodeParseBin         = bindings.ErrParseBin
	ErrCodeForbidden        = bindings.ErrForbidden
	ErrCodeWasRelock        = bindings.ErrWasRelock
	ErrCodeNotValid         = bindings.ErrNotValid
	ErrCodeNetwork          = bindings.ErrNetwork
	ErrCodeNotFound         = bindings.ErrNotFound
	ErrCodeStateInvalidated = bindings.ErrStateInvalidated
	ErrCodeTimeout          = bindings.ErrTimeout
)

Reindexer error codes

Variables

View Source
var (
	ErrEmptyNamespace    = bindings.NewError("rq: empty namespace name", ErrCodeParams)
	ErrEmptyFieldName    = bindings.NewError("rq: empty field name in filter", ErrCodeParams)
	ErrEmptyAggFieldName = bindings.NewError("rq: empty field name in aggregation", ErrCodeParams)
	ErrCondType          = bindings.NewError("rq: cond type not found", ErrCodeParams)
	ErrOpInvalid         = bindings.NewError("rq: op is invalid", ErrCodeParams)
	ErrAggInvalid        = bindings.NewError("rq: agg is invalid", ErrCodeParams)
	ErrNoPK              = bindings.NewError("rq: No pk field in struct", ErrCodeParams)
	ErrWrongType         = bindings.NewError("rq: Wrong type of item", ErrCodeParams)
	ErrMustBePointer     = bindings.NewError("rq: Argument must be a pointer to element, not element", ErrCodeParams)
	ErrNotFound          = bindings.NewError("rq: Not found", ErrCodeNotFound)
	ErrDeepCopyType      = bindings.NewError("rq: DeepCopy() returns wrong type", ErrCodeParams)
)

Functions

func CreateInt64FromLSN

func CreateInt64FromLSN(v LsnT) int64

func GetCondType

func GetCondType(name string) (int, error)

func WithAppName

func WithAppName(appName string) interface{}

func WithCgoLimit added in v1.9.3

func WithCgoLimit(cgoLimit int) interface{}

func WithConnPoolSize added in v1.9.3

func WithConnPoolSize(connPoolSize int) interface{}

func WithCreateDBIfMissing

func WithCreateDBIfMissing() interface{}

func WithDedicatedServerThreads

func WithDedicatedServerThreads() interface{}

func WithNetCompression

func WithNetCompression() interface{}

func WithReconnectionStrategy

func WithReconnectionStrategy(strategy ReconnectStrategy, allowUnknownNodes bool) interface{}

WithReconnectionStrategy allows configuring the reconnect behavior after an error. Strategy is used to reconnect to a server on a connection error. AllowUnknownNodes allows adding DSNs of cluster nodes that were not set in the client's DSN list. Warning: you should not mix async and sync nodes' DSNs in the initial DSN list, unless you really know what you are doing.

func WithRetryAttempts added in v1.9.5

func WithRetryAttempts(read int, write int) interface{}

func WithServerConfig added in v1.9.6

func WithServerConfig(startupTimeout time.Duration, serverConfig *config.ServerConfig) interface{}

func WithTimeouts

func WithTimeouts(loginTimeout time.Duration, requestTimeout time.Duration) interface{}
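A hedged sketch of combining several of these options when connecting over cproto (the DSN and the option values are illustrative):

	// Connect with auto-created DB, a larger connection pool and explicit timeouts
	db := reindexer.NewReindex(
		"cproto://user:pass@127.0.0.1:6534/testdb",
		reindexer.WithCreateDBIfMissing(),
		reindexer.WithConnPoolSize(8),
		reindexer.WithTimeouts(5*time.Second, 20*time.Second),
	)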

Types

type AggregateFacetRequest

type AggregateFacetRequest struct {
	// contains filtered or unexported fields
}

func (*AggregateFacetRequest) Limit

func (r *AggregateFacetRequest) Limit(limit int) *AggregateFacetRequest

func (*AggregateFacetRequest) Offset

func (r *AggregateFacetRequest) Offset(offset int) *AggregateFacetRequest

func (*AggregateFacetRequest) Sort

Use field 'count' to sort by facet's count value.

type AggregationResult added in v1.10.0

type AggregationResult struct {
	Fields []string `json:"fields"`
	Type   string   `json:"type"`
	Value  float64  `json:"value,omitempty"`
	Facets []struct {
		Values []string `json:"values"`
		Count  int      `json:"count"`
	} `json:"facets,omitempty"`
	Distincts []string `json:"distincts,omitempty"`
}

type CacheMemStat

type CacheMemStat struct {
	// Total memory consumption by this cache
	TotalSize int64 `json:"total_size"`
	// Count of used elements stored in this cache
	ItemsCount int64 `json:"items_count"`
	// Count of empty elements slots in this cache
	EmptyCount int64 `json:"empty_count"`
	// Number of hits of queries, to store results in cache
	HitCountLimit int64 `json:"hit_count_limit"`
}

CacheMemStat information about reindexer's cache memory consumption

type ClientConnectionStat

type ClientConnectionStat struct {
	// Connection identifier
	ConnectionId int64 `json:"connection_id"`
	// client ip address
	Ip string `json:"ip"`
	// User name
	UserName string `json:"user_name"`
	// User right
	UserRights string `json:"user_rights"`
	// Database name
	DbName string `json:"db_name"`
	// Current activity
	CurrentActivity string `json:"current_activity"`
	// Server start time in unix timestamp
	StartTime int64 `json:"start_time"`
	// Receive bytes
	RecvBytes int64 `json:"recv_bytes"`
	// Sent bytes
	SentBytes int64 `json:"sent_bytes"`
	// Client version string
	ClientVersion string `json:"client_version"`
	// Send buffer size
	SendBufBytes int64 `json:"send_buf_bytes"`
	// Timestamp of last send operation (ms)
	LastSendTs int64 `json:"last_send_ts"`
	// Timestamp of last recv operation (ms)
	LastRecvTs int64 `json:"last_recv_ts"`
	// Current send rate (bytes/s)
	SendRate int `json:"send_rate"`
	// Current recv rate (bytes/s)
	RecvRate int `json:"recv_rate"`
	// Active transactions count
	TxCount int `json:"tx_count"`
}

ClientConnectionStat is information about client connection

type DBAsyncReplicationConfig

type DBAsyncReplicationConfig struct {
	// Replication role. One of: none, leader, follower
	Role string `json:"role"`
	// Replication mode for mixed 'sync cluster + async replication' configs. One of: default, from_sync_leader
	ReplicationMode string `json:"replication_mode"`
	// force resync on logic error conditions
	ForceSyncOnLogicError bool `json:"force_sync_on_logic_error"`
	// force resync on wrong data hash conditions
	ForceSyncOnWrongDataHash bool `json:"force_sync_on_wrong_data_hash"`
	// Network timeout for online updates (s)
	UpdatesTimeout int `json:"online_updates_timeout_sec"`
	// Network timeout for wal/force syncs (s)
	SyncTimeout int `json:"sync_timeout_sec"`
	// Number of parallel replication threads
	SyncThreads int `json:"sync_threads"`
	// Max number of concurrent force/wal syncs per replication thread
	ConcurrentSyncsPerThread int `json:"syncs_per_thread"`
	// Number of coroutines for online-updates batching (per each namespace of each node)
	BatchingReoutines int `json:"batching_routines_count"`
	// Enable compression for replication network operations
	EnableCompression bool `json:"enable_compression"`
	// List of namespaces for replication. If empty, all namespaces. All replicated namespaces will become read only for slave
	Namespaces []string `json:"namespaces"`
	// Reconnect interval after replication error (ms)
	RetrySyncInterval int `json:"retry_sync_interval_msec"`
	// List of follower-nodes for async replication
	Nodes []DBAsyncReplicationNode `json:"nodes"`
}

DBAsyncReplicationConfig is part of reindexer configuration contains async replication options

type DBAsyncReplicationNode

type DBAsyncReplicationNode struct {
	// Node's DSN. It must have cproto format (e.g. 'cproto://<ip>:<port>/<db>')
	DSN string `json:"dsn"`
	// List of namespaces to replicate on this specific node. If nil, list from main replication config will be used
	Namespaces []string `json:"namespaces"`
}

DBAsyncReplicationNode describes a single follower node in the async replication config

type DBConfigItem added in v1.9.3

type DBConfigItem struct {
	Type             string                    `json:"type"`
	Profiling        *DBProfilingConfig        `json:"profiling,omitempty"`
	Namespaces       *[]DBNamespacesConfig     `json:"namespaces,omitempty"`
	Replication      *DBReplicationConfig      `json:"replication,omitempty"`
	AsyncReplication *DBAsyncReplicationConfig `json:"async_replication,omitempty"`
}

DBConfigItem is a structure stored in the system '#config' namespace

type DBNamespacesConfig

type DBNamespacesConfig struct {
	// Name of namespace, or `*` for setting to all namespaces
	Namespace string `json:"namespace"`
	// Log level of queries core logger
	LogLevel string `json:"log_level"`
	// Join cache mode. Can be one of on, off, aggressive
	JoinCacheMode string `json:"join_cache_mode"`
	// Enable namespace lazy load (namespace should be loaded from disk on first call, not at reindexer startup)
	Lazyload bool `json:"lazyload"`
	// Unload namespace data from RAM after this idle timeout in seconds. If 0, then data should not be unloaded
	UnloadIdleThreshold int `json:"unload_idle_threshold"`
	// Enable namespace copying for transactions with steps count greater than this value (if copy_policy_multiplier also allows it)
	StartCopyPolicyTxSize int `json:"start_copy_policy_tx_size"`
	// Disables copy policy if namespace size is greater than copy_policy_multiplier * start_copy_policy_tx_size
	CopyPolicyMultiplier int `json:"copy_policy_multiplier"`
	// Force namespace copying for transaction with steps count greater than this value
	TxSizeToAlwaysCopy int `json:"tx_size_to_always_copy"`
	// Timeout before background indexes optimization start after last update. 0 - disable optimizations
	OptimizationTimeout int `json:"optimization_timeout_ms"`
	// Maximum number of background threads of sort indexes optimization. 0 - disable sort optimizations
	OptimizationSortWorkers int `json:"optimization_sort_workers"`
	// Maximum WAL size for this namespace (maximum count of WAL records)
	WALSize int64 `json:"wal_size"`
	// Minimum preselect size for optimization of inner join by injection of filters. It is used if (MaxPreselectPart * ns.size) is less than this value
	MinPreselectSize int64 `json:"min_preselect_size"`
	// Maximum preselect size for optimization of inner join by injection of filters
	MaxPreselectSize int64 `json:"max_preselect_size"`
	// Maximum preselect part of namespace's items for optimization of inner join by injection of filters
	MaxPreselectPart float64 `json:"max_preselect_part"`
	// Enables 'simple counting mode' for index updates tracker. This will increase index optimization time, however may reduce insertion time
	IndexUpdatesCountingMode bool `json:"index_updates_counting_mode"`
	// Enables synchronous storage flush inside write-calls, if async updates count is more than SyncStorageFlushLimit
	// 0 - disables synchronous storage flush (default). In this case storage will be flushed in background thread only
	SyncStorageFlushLimit int `json:"sync_storage_flush_limit"`
}

DBNamespacesConfig is the part of the reindexer configuration that contains namespaces options

type DBProfilingConfig added in v1.9.3

type DBProfilingConfig struct {
	// Minimum query execution time to be recorded in the #queriesperfstats namespace
	QueriesThresholdUS int `json:"queries_threshold_us"`
	// Enables tracking memory statistics
	MemStats bool `json:"memstats"`
	// Enables tracking overall performance statistics
	PerfStats bool `json:"perfstats"`
	// Enables recording of queries performance statistics
	QueriesPerfStats bool `json:"queriesperfstats"`
	// Enables recording of activity statistics into #activitystats namespace
	ActivityStats bool `json:"activitystats"`
}

DBProfilingConfig is the part of the reindexer configuration that contains profiling options
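
For example, a sketch of enabling performance statistics by writing a DBConfigItem into the system '#config' namespace (db is assumed to be an open *reindexer.Reindexer):

	profCfg := reindexer.DBConfigItem{
		Type: "profiling", // assumed to match the JSON key of the Profiling field
		Profiling: &reindexer.DBProfilingConfig{
			PerfStats:          true,
			QueriesPerfStats:   true,
			QueriesThresholdUS: 10000, // record queries slower than 10 ms
		},
	}
	err := db.Upsert("#config", &profCfg)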

type DBReplicationConfig

type DBReplicationConfig struct {
	// Server ID - must be unique for each node (available values: 0-999)
	ServerID int `json:"server_id"`
	// Cluster ID - must be same for client and for master
	ClusterID int `json:"cluster_id"`
}

DBReplicationConfig is the part of the reindexer configuration that contains general node settings for replication

type DeepCopy

type DeepCopy interface {
	DeepCopy() interface{}
}

type Error

type Error interface {
	Error() string
	Code() int
}

Error - reindexer Error interface

type ExplainResults added in v1.10.0

type ExplainResults struct {
	// Total query execution time
	TotalUs int `json:"total_us"`
	// Query prepare and optimize time
	PrepareUs int `json:"prepare_us"`
	// Indexes keys selection time
	IndexesUs int `json:"indexes_us"`
	// Query post process time
	PostprocessUS int `json:"postprocess_us"`
	// Intersection loop time
	LoopUs int `json:"loop_us"`
	// Index which was used to sort results
	SortIndex string `json:"sort_index"`
	// General sort time
	GeneralSortUs int `json:"general_sort_us"`
	// Whether sort-by-uncommitted-index optimization has been performed
	SortByUncommittedIndex bool `json:"sort_by_uncommitted_index"`
	// Filter selectors, used to process query conditions
	Selectors []ExplainSelector `json:"selectors"`
}

ExplainResults presents the query plan

type ExplainSelector

type ExplainSelector struct {
	// Field or index name
	Field string `json:"field"`
	// Method, used to process condition
	Method string `json:"method"`
	// Number of unique keys processed by this selector (may be incorrect in case of internal query optimization/caching)
	Keys int `json:"keys"`
	// Count of comparators used for this selector
	Comparators int `json:"comparators"`
	// Cost expectation of this selector
	Cost float64 `json:"cost"`
	// Count of processed documents that matched this selector
	Matched int `json:"matched"`
	// Count of documents scanned by this selector
	Items int `json:"items"`
	// Explanation of the preselect execution in the joined namespace
	ExplainPreselect *ExplainResults `json:"explain_preselect,omitempty"`
	// Explanation of one of the selects executed in the joined namespace
	ExplainSelect *ExplainResults   `json:"explain_select,omitempty"`
	Selectors     []ExplainSelector `json:"selectors,omitempty"`
}

type FtFastConfig

type FtFastConfig struct {
	// boost of bm25 ranking. default value 1.
	Bm25Boost float64 `json:"bm25_boost"`
	// weight of bm25 rank in the final rank.
	// 0: bm25 will not change the final rank.
	// 1: bm25 will affect the final rank in the 0 - 100% range
	Bm25Weight float64 `json:"bm25_weight"`
	// boost of search query term distance in the found document. default value 1
	DistanceBoost float64 `json:"distance_boost"`
	// weight of search query terms distance in the found document in the final rank.
	// 0: distance will not change the final rank.
	// 1: distance will affect the final rank in the 0 - 100% range
	DistanceWeight float64 `json:"distance_weight"`
	// boost of search query term length. default value 1
	TermLenBoost float64 `json:"term_len_boost"`
	// weight of search query term length in the final rank.
	// 0: term length will not change the final rank.
	// 1: term length will affect the final rank in the 0 - 100% range
	TermLenWeight float64 `json:"term_len_weight"`
	// boost of search query term position. default value 1
	PositionBoost float64 `json:"position_boost"`
	// weight of search query term position in the final rank.
	// 0: term position will not change the final rank.
	// 1: term position will affect the final rank in the 0 - 100% range
	PositionWeight float64 `json:"position_weight"`
	// Boost of full match of search phrase with doc
	FullMatchBoost float64 `json:"full_match_boost"`
	// Relevancy step of partial match: relevancy = kFullMatchProc - partialMatchDecrease * (non matched symbols) / (matched symbols)
	// For example: partialMatchDecrease: 15, word in index 'terminator', pattern 'termin'. matched: 6 symbols, unmatched: 4. relevancy = 100 - (15*4)/6 = 90
	PartialMatchDecrease int `json:"partial_match_decrease"`
	// Minimum rank of found documents
	MinRelevancy float64 `json:"min_relevancy"`
	// Maximum possible typos in word.
	// 0: typos are disabled, words with typos will not match
	// N: words with up to N typos will match
	// It is not recommended to set more than 2 possible typos: it will seriously increase RAM usage and decrease search speed
	MaxTypos int `json:"max_typos"`
	// Maximum word length for building and matching variants with typos. Default value is 15
	MaxTypoLen int `json:"max_typo_len"`
	// Maximum commit steps - set it to 1 to always perform a full rebuild - valid values are from 1 to 500
	MaxRebuildSteps int `json:"max_rebuild_steps"`
	// Maximum words in one commit - it can be from 5 to DOUBLE_MAX
	MaxStepSize int `json:"max_step_size"`
	// Maximum documents which will be processed in merge query results
	// Default value is 20000. Increasing this value may refine ranking
	// of queries with high frequency words
	MergeLimit int `json:"merge_limit"`
	// List of used stemmers
	Stemmers []string `json:"stemmers"`
	// Enable translit variants processing
	EnableTranslit bool `json:"enable_translit"`
	// Enable wrong keyboard layout variants processing
	EnableKbLayout bool `json:"enable_kb_layout"`
	// List of stop words. Words from this list will be ignored in documents and queries
	StopWords []string `json:"stop_words"`
	// List of synonyms for replacement
	Synonyms []struct {
		// List of source tokens in the query which will be replaced with alternatives
		Tokens []string `json:"tokens"`
		// List of alternatives which will be used for document search
		Alternatives []string `json:"alternatives"`
	} `json:"synonyms"`
	// Log level of full text search engine
	LogLevel int `json:"log_level"`
	// Enable search by numbers as words and backwards
	EnableNumbersSearch bool `json:"enable_numbers_search"`
	// Enable auto index warmup after atomic namespace copy on transaction
	EnableWarmupOnNsCopy bool `json:"enable_warmup_on_ns_copy"`
	// Extra symbols, which will be treated as parts of words in addition to letters and digits
	ExtraWordSymbols string `json:"extra_word_symbols"`
	// Ratio for summation of ranks when a single term matches multiple fields
	SumRanksByFieldsRatio float64 `json:"sum_ranks_by_fields_ratio"`
	// Max number of highlighted areas for each field in each document (for snippet() and highlight()). '-1' means unlimited
	MaxAreasInDoc int `json:"max_areas_in_doc"`
	// Max total number of highlighted areas in the ft result for the result to remain cacheable. '-1' means unlimited
	MaxTotalAreasToCache int `json:"max_total_areas_to_cache"`
	// Configuration for individual fields
	FieldsCfg []FtFastFieldConfig `json:"fields,omitempty"`
	// Optimize the index by memory or by CPU
	Optimization string `json:"optimization,omitempty"`
	// Enable execution of the other query conditions before the fulltext query
	EnablePreselectBeforeFt bool `json:"enable_preselect_before_ft"`
}

FtFastConfig is the configuration of a fulltext search index

func DefaultFtFastConfig

func DefaultFtFastConfig() FtFastConfig
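
A usage sketch: take the defaults, tweak a few fields, and apply the config to a fulltext index (namespace and index names are assumptions; ConfigureIndex is deprecated in favor of UpdateIndex, but accepts the config struct directly):

	ftCfg := reindexer.DefaultFtFastConfig()
	ftCfg.MaxTypos = 1                           // allow at most one typo per word
	ftCfg.StopWords = []string{"the", "a", "an"} // ignore these words in docs and queries
	err := db.ConfigureIndex("items", "description", ftCfg)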

type FtFastFieldConfig

type FtFastFieldConfig struct {
	FieldName string `json:"field_name"`
	// boost of bm25 ranking. default value 1.
	Bm25Boost float64 `json:"bm25_boost"`
	// weight of bm25 rank in the final rank.
	// 0: bm25 will not change the final rank.
	// 1: bm25 will affect the final rank in the 0 - 100% range
	Bm25Weight float64 `json:"bm25_weight"`
	// boost of search query term length. default value 1
	TermLenBoost float64 `json:"term_len_boost"`
	// weight of search query term length in the final rank.
	// 0: term length will not change the final rank.
	// 1: term length will affect the final rank in the 0 - 100% range
	TermLenWeight float64 `json:"term_len_weight"`
	// boost of search query term position. default value 1
	PositionBoost float64 `json:"position_boost"`
	// weight of search query term position in the final rank.
	// 0: term position will not change the final rank.
	// 1: term position will affect the final rank in the 0 - 100% range
	PositionWeight float64 `json:"position_weight"`
}

func DefaultFtFastFieldConfig

func DefaultFtFastFieldConfig(fieldName string) FtFastFieldConfig

type FtFuzzyConfig

type FtFuzzyConfig struct {
	// max proc obtained from the src request
	MaxSrcProc float64 `json:"max_src_proc"`
	// max proc obtained from the dst request
	// usually MaxDstProc = 100 - MaxSrcProc, but this is not required
	MaxDstProc float64 `json:"max_dst_proc"`
	// proc increase when found positions are near in both the source and dst strings (0.0001-2)
	PosSourceBoost float64 `json:"pos_source_boost"`
	// minimum coefficient for positions that are near in both src and dst (0.0001-2)
	PosSourceDistMin float64 `json:"pos_source_dist_min"`
	// proc increase when found positions are near in the source string (0.0001-2)
	PosSourceDistBoost float64 `json:"pos_source_dist_boost"`
	// proc increase when found positions are near in the dst string (0.0001-2)
	PosDstBoost float64 `json:"pos_dst_boost"`
	// proc decrease when an incomplete trigram is found - only start and end (0.0001-2)
	StartDecreeseBoost float64 `json:"start_decreese_boost"`
	// base proc decrease when an incomplete trigram is found - only start and end (0.0001-2)
	StartDefaultDecreese float64 `json:"start_default_decreese"`
	// minimum relevancy for a document to be returned
	MinOkProc float64 `json:"min_ok_proc"`
	// size of the gram (1-10), for example:
	// terminator BufferSize=3: __t _te ter erm rmi ...
	// terminator BufferSize=4: __te _ter term ermi rmin
	BufferSize int `json:"buffer_size"`
	// size of the space at the start and end of the gram (0-9), for example:
	// terminator SpaceSize=2: __t _te ter ... tor or_ r__
	// terminator SpaceSize=1: _te ter ... tor or_
	SpaceSize int `json:"space_size"`
	// Maximum documents which will be processed in merge query results
	// Default value is 20000. Increasing this value may refine ranking
	// of queries with high frequency words
	MergeLimit int `json:"merge_limit"`
	// List of used stemmers
	Stemmers []string `json:"stemmers"`
	// Enable translit variants processing
	EnableTranslit bool `json:"enable_translit"`
	// Enable wrong keyboard layout variants processing
	EnableKbLayout bool `json:"enable_kb_layout"`
	// List of stop words. Words from this list will be ignored in documents and queries
	StopWords []string `json:"stop_words"`
	// Log level of full text search engine
	LogLevel int `json:"log_level"`
	// Extra symbols, which will be threated as parts of word to addition to letters and digits
	ExtraWordSymbols string `json:"extra_word_symbols"`
}

FtFuzzyConfig is the configuration of a fuzzy fulltext search index

func DefaultFtFuzzyConfig

func DefaultFtFuzzyConfig() FtFuzzyConfig

type IndexDef added in v1.10.0

type IndexDef bindings.IndexDef

IndexDef - Index definition struct

type IndexDescription

type IndexDescription struct {
	IndexDef

	IsSortable bool     `json:"is_sortable"`
	IsFulltext bool     `json:"is_fulltext"`
	Conditions []string `json:"conditions"`
}

type Iterator

type Iterator struct {
	// contains filtered or unexported fields
}

Iterator presents query results
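
A typical iteration loop might look like this (db, the Item type, and the GT condition constant are assumptions):

	it := db.Query("items").WhereInt("year", reindexer.GT, 2010).Exec()
	defer it.Close() // frees CGO resources held by the iterator
	for it.Next() {
		item := it.Object().(*Item) // safe: Next() was called before Object()
		fmt.Println(item)
	}
	if err := it.Error(); err != nil {
		panic(err)
	}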

func (*Iterator) AggResults

func (it *Iterator) AggResults() (v []AggregationResult)

AggResults returns aggregation results (if present)

func (*Iterator) AllowUnsafe

func (it *Iterator) AllowUnsafe(allow bool) *Iterator

AllowUnsafe takes a bool that enables or disables unsafe behavior.

When AllowUnsafe is true and the object cache is enabled, resulting objects will not be copied for each query. This makes race conditions possible, but gives a significant speedup by avoiding copy overhead.

By default reindexer guarantees that every returned object is safe to use from multiple threads.

func (*Iterator) Close

func (it *Iterator) Close()

Close closes the iterator and frees CGO resources

func (*Iterator) Count

func (it *Iterator) Count() int

Count returns the count of query results

func (*Iterator) Error

func (it *Iterator) Error() error

Error returns query error if it's present.

func (*Iterator) FetchAll

func (it *Iterator) FetchAll() (items []interface{}, err error)

FetchAll returns all query results as a slice []interface{} and closes the iterator.

func (*Iterator) FetchAllWithRank

func (it *Iterator) FetchAllWithRank() (items []interface{}, ranks []int, err error)

FetchAllWithRank returns the resulting slice of objects and a slice of their ranks, then closes the iterator.

func (*Iterator) FetchOne

func (it *Iterator) FetchOne() (item interface{}, err error)

FetchOne returns the first element and closes the iterator. When there are no results (count is 0), err will be ErrNotFound.

func (*Iterator) GetAggreatedValue

func (it *Iterator) GetAggreatedValue(idx int) float64

GetAggreatedValue returns the value of the aggregation result at the given index

func (*Iterator) GetExplainResults added in v1.10.0

func (it *Iterator) GetExplainResults() (*ExplainResults, error)

GetExplainResults returns the explain results of the executed query

func (*Iterator) GetTagsMatcherInfo

func (it *Iterator) GetTagsMatcherInfo(nsName string) (stateToken int32, version int32)

GetTagsMatcherInfo returns the namespace's tagsmatcher info (state token and version)

func (*Iterator) HasRank

func (it *Iterator) HasRank() bool

HasRank indicates if this iterator has info about search ranks.

func (*Iterator) JoinedObjects

func (it *Iterator) JoinedObjects(field string) (objects []interface{}, err error)

JoinedObjects returns the slice of objects that resulted from the join on the given field

func (*Iterator) Next

func (it *Iterator) Next() (hasNext bool)

func (*Iterator) NextObj

func (it *Iterator) NextObj(obj interface{}) (hasNext bool)

NextObj moves the iterator pointer to the next element and decodes the result into the given struct. Returns a bool that indicates the availability of the next element.

func (*Iterator) Object

func (it *Iterator) Object() interface{}

Object returns the current object. It panics if the iterator pointer has not been moved; Next() must be called first.

func (*Iterator) Rank

func (it *Iterator) Rank() int

Rank returns the current object's search rank. It panics if the iterator pointer has not been moved; Next() must be called first.

func (*Iterator) TotalCount

func (it *Iterator) TotalCount() int

TotalCount returns the total count of matching objects (ignoring limit and offset)

type JSONIterator

type JSONIterator struct {
	// contains filtered or unexported fields
}

JSONIterator is an iterator whose results are presented as JSON documents

func (*JSONIterator) Close

func (it *JSONIterator) Close()

Close closes the iterator.

func (*JSONIterator) Count

func (it *JSONIterator) Count() int

Count returns the count of query results

func (*JSONIterator) Error

func (it *JSONIterator) Error() error

Error returns query error if it's present.

func (*JSONIterator) FetchAll

func (it *JSONIterator) FetchAll() (json []byte, err error)

FetchAll returns a byte slice containing a JSON array with the results

func (*JSONIterator) GetExplainResults added in v1.10.0

func (it *JSONIterator) GetExplainResults() (*ExplainResults, error)

GetExplainResults returns the explain results of the executed query

func (*JSONIterator) JSON

func (it *JSONIterator) JSON() (json []byte)

JSON returns JSON bytes with current document

func (*JSONIterator) Next

func (it *JSONIterator) Next() bool

Next moves the iterator pointer to the next element. Returns a bool that indicates the availability of the next element.

type JoinHandler

type JoinHandler func(field string, item interface{}, subitems []interface{}) (useAutomaticJoinStrategy bool)

JoinHandler is a function for handling join results. It returns a bool that indicates whether the automatic join strategy still needs to be applied. If `useAutomaticJoinStrategy` is false, the JoinHandler takes full responsibility for performing the join. If `useAutomaticJoinStrategy` is true, the JoinHandler performs only part of the join work and the rest is done by the automatic join strategy. The automatic join strategy is defined as:
- use the Join method to perform the join (in case the item implements the Joinable interface)
- use reflection to perform the join otherwise
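
A sketch of a handler that takes over the join completely (the Item/Price types and the 'prices' field are assumptions):

	q.JoinHandler("prices", func(field string, item interface{}, subitems []interface{}) bool {
		main := item.(*Item)
		for _, s := range subitems {
			main.Prices = append(main.Prices, s.(*Price))
		}
		return false // join fully handled here; skip the automatic strategy
	})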

type Joinable

type Joinable interface {
	Join(field string, subitems []interface{}, context interface{})
}

Joinable is an interface for appending joined items
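
A sketch of implementing Joinable on an item type (types and tags are assumptions; the `joined` index tag follows the convention described under InnerJoin below):

	type Item struct {
		ID     int      `reindex:"id,,pk"`
		Prices []*Price `reindex:"prices,,joined"`
	}

	// Join appends joined sub-items to the parent item without reflection.
	func (i *Item) Join(field string, subitems []interface{}, context interface{}) {
		if field == "prices" {
			for _, s := range subitems {
				i.Prices = append(i.Prices, s.(*Price))
			}
		}
	}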

type Logger

type Logger interface {
	Printf(level int, fmt string, msg ...interface{})
}

Logger interface for reindexer
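
For example, a minimal adapter over the standard library logger:

	type stdLogger struct{}

	// Printf forwards reindexer log records to the standard library logger,
	// ignoring the level in this sketch.
	func (stdLogger) Printf(level int, format string, msg ...interface{}) {
		log.Printf(format, msg...)
	}

	// db.SetLogger(stdLogger{})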

type LsnT

type LsnT struct {
	// Operation counter
	Counter int64 `json:"counter"`
	// Node identifier
	ServerId int `json:"server_id"`
}

LsnT contains an operation counter and a server id

func CreateLSNFromInt64

func CreateLSNFromInt64(v int64) LsnT

func (*LsnT) IsCompatibleWith

func (lsn *LsnT) IsCompatibleWith(o LsnT) bool

func (*LsnT) IsEmpty

func (lsn *LsnT) IsEmpty() bool

func (*LsnT) IsNewerThen

func (lsn *LsnT) IsNewerThen(o LsnT) bool

type NamespaceDescription

type NamespaceDescription struct {
	Name           string             `json:"name"`
	Indexes        []IndexDescription `json:"indexes"`
	StorageEnabled bool               `json:"storage_enabled"`
}

type NamespaceMemStat added in v1.9.3

type NamespaceMemStat struct {
	// Name of namespace
	Name string `json:"name"`
	// Deprecated: do not use
	StorageError string `json:"storage_error"`
	// Filesystem path to namespace storage
	StoragePath string `json:"storage_path"`
	// Status of disk storage
	StorageOK bool `json:"storage_ok"`
	// Background indexes optimization has been completed
	OptimizationCompleted bool `json:"optimization_completed"`
	// Total count of documents in namespace
	ItemsCount int64 `json:"items_count,omitempty"`
	// Count of empty (unused) slots in namespace
	EmptyItemsCount int64 `json:"empty_items_count"`
	// Size of strings deleted from namespace, but still used in queryResults
	StringsWaitingToBeDeletedSize int64 `json:"strings_waiting_to_be_deleted_size"`
	// Summary of total namespace memory consumption
	Total struct {
		// Total memory size of stored documents, including system structures
		DataSize int64 `json:"data_size"`
		// Total memory consumption of namespace's indexes
		IndexesSize int64 `json:"indexes_size"`
		// Total memory consumption of namespace's caches. e.g. idset and join caches
		CacheSize int64 `json:"cache_size"`
		// Total memory size occupied by the index optimizer (in bytes)
		IndexOptimizerMemory int64 `json:"index_optimizer_memory"`
	} `json:"total"`
	// Replication status of namespace
	Replication struct {
		// Last Log Sequence Number (LSN) of applied namespace modification
		LastLSN LsnT `json:"last_lsn_v2"`
		// Namespace version counter
		NSVersion LsnT `json:"ns_version"`
		// Temporary namespace flag
		Temporary bool `json:"temporary"`
		// Number of storage's master <-> slave switches
		IncarnationCounter int64 `json:"incarnation_counter"`
		// Hashsum of all records in namespace
		DataHash uint64 `json:"data_hash"`
		// Data count
		DataCount int `json:"data_count"`
		// Write Ahead Log (WAL) records count
		WalCount int64 `json:"wal_count"`
		// Total memory consumption of Write Ahead Log (WAL)
		WalSize int64 `json:"wal_size"`
		// Data updated timestamp
		UpdatedUnixNano int64 `json:"updated_unix_nano"`
		// Cluster info
		ClusterizationStatus struct {
			// Current leader server ID (for RAFT-cluster only)
			LeadeID int `json:"leader_id"`
			// Current role in cluster: cluster_replica, simple_replica or none
			Role string `json:"role"`
		} `json:"clusterization_status"`
	} `json:"replication"`

	// Indexes memory statistic
	Indexes []struct {
		// Name of index. There is a special index named `-tuple`; it stores the original document's JSON structure with non-indexed fields
		Name string `json:"name"`
		// Count of unique key values stored in the index
		UniqKeysCount int64 `json:"unique_keys_count"`
		// Total memory consumption of document data held by this index
		DataSize int64 `json:"data_size"`
		// Total memory consumption of structures optimized for SORT statements and `GT`, `LT` conditions. Applicable only to `tree` indexes
		SortOrdresSize int64 `json:"sort_orders_size"`
		// Total memory consumption of reverse index vectors. For `store` indexes always 0
		IDSetPlainSize int64 `json:"idset_plain_size"`
		// Total memory consumption of reverse index b-tree structures. For `dense` and `store` indexes always 0
		IDSetBTreeSize int64 `json:"idset_btree_size"`
		// Total memory consumption of fulltext search structures
		FulltextSize int64 `json:"fulltext_size"`
		// Idset cache stats. Stores merged reverse index results of SELECT field IN(...) by IN(...) keys
		IDSetCache CacheMemStat `json:"idset_cache"`
		// Updates count, pending in index updates tracker
		TrackedUpdatesCount int64 `json:"tracked_updates_count"`
		// Buckets count in index updates tracker map
		TrackedUpdatesBuckets int64 `json:"tracked_updates_buckets"`
		// Updates tracker map size in bytes
		TrackedUpdatesSize int64 `json:"tracked_updates_size"`
	} `json:"indexes"`
	// Join cache stats. Stores results of selects to right table by ON condition
	JoinCache CacheMemStat `json:"join_cache"`
	// Query cache stats. Stores results of SELECT COUNT(*) by Where conditions
	QueryCache CacheMemStat `json:"query_cache"`
}

NamespaceMemStat is information about a reindexer namespace's memory statistics; it is located in the '#memstats' system namespace

type NamespaceOptions

type NamespaceOptions struct {
	// contains filtered or unexported fields
}

NamespaceOptions are options for a namespace

func DefaultNamespaceOptions

func DefaultNamespaceOptions() *NamespaceOptions

DefaultNamespaceOptions returns default namespace options

func (*NamespaceOptions) DisableObjCache

func (opts *NamespaceOptions) DisableObjCache() *NamespaceOptions

func (*NamespaceOptions) DropOnFileFormatError

func (opts *NamespaceOptions) DropOnFileFormatError() *NamespaceOptions

func (*NamespaceOptions) DropOnIndexesConflict

func (opts *NamespaceOptions) DropOnIndexesConflict() *NamespaceOptions

func (*NamespaceOptions) NoStorage

func (opts *NamespaceOptions) NoStorage() *NamespaceOptions

func (*NamespaceOptions) ObjCacheSize

func (opts *NamespaceOptions) ObjCacheSize(count int) *NamespaceOptions

ObjCacheSize sets the maximum items count in the object cache. Default is 256000

type NamespacePerfStat added in v1.9.3

type NamespacePerfStat struct {
	// Name of namespace
	Name string `json:"name"`
	// Performance statistics for update operations
	Updates PerfStat `json:"updates"`
	// Performance statistics for select operations
	Selects PerfStat `json:"selects"`
	// Performance statistics for transactions
	Transactions TxPerfStat `json:"transactions"`
}

NamespacePerfStat is information about a namespace's performance statistics; it is located in the '#perfstats' system namespace

type PerfStat added in v1.9.3

type PerfStat struct {
	// Total count of queries to this object
	TotalQueriesCount int64 `json:"total_queries_count"`
	// Average latency (execution time) for queries to this object
	TotalAvgLatencyUs int64 `json:"total_avg_latency_us"`
	// Average waiting time for acquiring a lock on this object
	TotalAvgLockTimeUs int64 `json:"total_avg_lock_time_us"`
	// Count of queries to this object during the last second
	LastSecQPS int64 `json:"last_sec_qps"`
	// Average latency (execution time) for queries to this object during the last second
	LastSecAvgLatencyUs int64 `json:"last_sec_avg_latency_us"`
	// Average waiting time for acquiring a lock on this object during the last second
	LastSecAvgLockTimeUs int64 `json:"last_sec_avg_lock_time_us"`
	// Minimal latency value
	MinLatencyUs int64 `json:"min_latency_us"`
	// Maximum latency value
	MaxLatencyUs int64 `json:"max_latency_us"`
	// Standard deviation of latency values
	LatencyStddev int64 `json:"latency_stddev"`
}

PerfStat is information about the performance statistics of various reindexer objects

type Query

type Query struct {
	Namespace string
	// contains filtered or unexported fields
}

Query to DB object

func (*Query) AggregateAvg

func (q *Query) AggregateAvg(field string)

func (*Query) AggregateFacet

func (q *Query) AggregateFacet(fields ...string) *AggregateFacetRequest

AggregateFacet adds a facet aggregation request; fields should not be empty.
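
A facet request and reading its results back from the iterator might look like this (namespace and field names are assumptions):

	q := db.Query("items")
	q.AggregateFacet("genre", "year")
	it := q.Exec()
	defer it.Close()
	for _, agg := range it.AggResults() {
		for _, f := range agg.Facets {
			fmt.Println(f.Values, f.Count) // one entry per unique (genre, year) pair
		}
	}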

func (*Query) AggregateMax

func (q *Query) AggregateMax(field string)

func (*Query) AggregateMin

func (q *Query) AggregateMin(field string)

func (*Query) AggregateSum

func (q *Query) AggregateSum(field string)

func (*Query) CachedTotal

func (q *Query) CachedTotal(totalNames ...string) *Query

CachedTotal requests cached total items calculation

func (*Query) CloseBracket

func (q *Query) CloseBracket() *Query

CloseBracket - Close a bracket for where conditions in the DB query

func (*Query) DWithin

func (q *Query) DWithin(index string, point [2]float64, distance float64) *Query

DWithin - Add DWithin condition to DB query

func (*Query) Debug

func (q *Query) Debug(level int) *Query

Debug - Set debug level

func (*Query) Delete

func (q *Query) Delete() (int, error)

Delete will execute the query and delete all matching items. On success it returns the number of deleted elements.

func (*Query) DeleteCtx

func (q *Query) DeleteCtx(ctx context.Context) (int, error)

DeleteCtx will execute the query and delete all matching items. On success it returns the number of deleted elements.

func (*Query) Distinct

func (q *Query) Distinct(distinctIndex string) *Query

Distinct - Return only items with a unique value of the field

func (*Query) Drop

func (q *Query) Drop(field string) *Query

Drop removes field from item within Update statement

func (*Query) EqualPosition added in v1.10.0

func (q *Query) EqualPosition(fields ...string) *Query

EqualPosition adds equal position fields to arrays

func (*Query) Exec

func (q *Query) Exec() *Iterator

Exec will execute the query and return an iterator over the resulting items

func (*Query) ExecCtx

func (q *Query) ExecCtx(ctx context.Context) *Iterator

ExecCtx will execute the query and return an iterator over the resulting items

func (*Query) ExecToJson

func (q *Query) ExecToJson(jsonRoots ...string) *JSONIterator

ExecToJson will execute the query and return a JSON iterator

func (*Query) ExecToJsonCtx

func (q *Query) ExecToJsonCtx(ctx context.Context, jsonRoots ...string) *JSONIterator

ExecToJsonCtx will execute the query and return a JSON iterator

func (*Query) Explain added in v1.10.0

func (q *Query) Explain() *Query

Explain - Request explain for query

func (*Query) FetchCount added in v1.5.0

func (q *Query) FetchCount(n int) *Query

FetchCount sets the number of items that will be fetched by one operation. When n <= 0, the query will fetch all results in one operation.

func (*Query) Functions added in v1.9.2

func (q *Query) Functions(fields ...string) *Query

Functions adds optional select functions (e.g. highlight or snippet) to fields of the resulting objects

func (*Query) Get

func (q *Query) Get() (item interface{}, found bool)

Get will execute the query and return the first item; panics on error

func (*Query) GetCtx

func (q *Query) GetCtx(ctx context.Context) (item interface{}, found bool)

GetCtx will execute the query and return the first item; panics on error

func (*Query) GetJson

func (q *Query) GetJson() (json []byte, found bool)

GetJson will execute the query and return the first item as JSON; panics on error

func (*Query) GetJsonCtx

func (q *Query) GetJsonCtx(ctx context.Context) (json []byte, found bool)

GetJsonCtx will execute the query and return the first item as JSON; panics on error

func (*Query) InnerJoin

func (q *Query) InnerJoin(q2 *Query, field string) *Query

InnerJoin joins 2 queries. Items from the first query are filtered by and expanded with the data from the second query.

The `field` parameter serves as a unique identifier for the join between `q` and `q2`. One of the conditions below must hold for the `field` parameter in order for InnerJoin to work:
- the namespace of `q2` contains `field` as one of its fields marked as `joined`
- `q` has a join handler (registered via a `q.JoinHandler(...)` call) with the same `field` value
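
A sketch of an inner join with an ON condition (namespaces, field names, and the EQ/GT condition constants are assumptions):

	q := db.Query("items").
		InnerJoin(db.Query("subitems").WhereInt("amount", reindexer.GT, 0), "subitems").
		On("id", reindexer.EQ, "item_id")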

func (*Query) Join

func (q *Query) Join(q2 *Query, field string) *Query

Join is an alias for LeftJoin

func (*Query) JoinHandler

func (q *Query) JoinHandler(field string, handler JoinHandler) *Query

JoinHandler registers a join handler that will be called when the join registered on the `field` value finds a match

func (*Query) LeftJoin

func (q *Query) LeftJoin(q2 *Query, field string) *Query

LeftJoin joins 2 queries. Items from the first query are expanded with the data from the second query.

The `field` parameter serves as a unique identifier for the join between `q` and `q2`. One of the conditions below must hold for the `field` parameter in order for LeftJoin to work:
- the namespace of `q2` contains `field` as one of its fields marked as `joined`
- `q` has a join handler (registered via a `q.JoinHandler(...)` call) with the same `field` value

func (*Query) Limit

func (q *Query) Limit(limitItems int) *Query

Limit - Set limit (count) of returned items

func (*Query) MakeCopy

func (q *Query) MakeCopy(db *Reindexer) *Query

MakeCopy - makes a copy of the query with the same or another db; resets the query context

func (*Query) Match

func (q *Query) Match(index string, keys ...string) *Query

Match - Add where condition to DB query with string args

func (*Query) Merge

func (q *Query) Merge(q2 *Query) *Query

Merge 2 queries

func (*Query) MustExec

func (q *Query) MustExec() *Iterator

MustExec will execute the query and return an iterator; panics on error

func (*Query) MustExecCtx

func (q *Query) MustExecCtx(ctx context.Context) *Iterator

MustExecCtx will execute the query and return an iterator; panics on error

func (*Query) Not

func (q *Query) Not() *Query

Not - the next condition will be added with NOT AND. Implements short-circuiting: if the previous condition failed, the next will not be evaluated.

func (*Query) Offset

func (q *Query) Offset(startOffset int) *Query

Offset - Set start offset of returned items

func (*Query) On

func (q *Query) On(index string, condition int, joinIndex string) *Query

On specifies join condition

The `index` parameter specifies which field from the `q` namespace should be used during the join. The `condition` parameter specifies how `q` will be joined with the latest join query issued on `q` (e.g. `EQ`/`GT`/`SET`/...). The `joinIndex` parameter specifies which field from the namespace of the latest join query issued on `q` should be used during the join.

func (*Query) OpenBracket

func (q *Query) OpenBracket() *Query

OpenBracket - Open a bracket for where conditions in the DB query

func (*Query) Or

func (q *Query) Or() *Query

Or - the next condition will be added with OR. Implements short-circuiting: if the previous condition succeeded, the next will not be evaluated, except for Join conditions.

func (*Query) ReqTotal

func (q *Query) ReqTotal(totalNames ...string) *Query

ReqTotal requests total items calculation

func (*Query) Select

func (q *Query) Select(fields ...string) *Query

Select adds a filter for the fields of the resulting objects

func (*Query) Set

func (q *Query) Set(field string, values interface{}) *Query

Set adds update field request for update query

func (*Query) SetContext

func (q *Query) SetContext(ctx interface{}) *Query

SetContext sets an interface which will be passed to the Joinable interface

func (*Query) SetExpression

func (q *Query) SetExpression(field string, value string) *Query

SetExpression updates an indexed field by an arithmetic expression
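
A combined update sketch using Set and SetExpression (namespace and field names are assumptions):

	res := db.Query("items").
		WhereInt("id", reindexer.EQ, 42).
		Set("name", "new name").            // assign a constant value
		SetExpression("views", "views + 1"). // evaluate an expression server-side
		Update()
	defer res.Close()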

func (*Query) SetObject

func (q *Query) SetObject(field string, values interface{}) *Query

SetObject adds an object field update request for an update query

func (*Query) Sort

func (q *Query) Sort(sortIndex string, desc bool, values ...interface{}) *Query

Sort - Apply sort order to items returned from the query. If the values argument is specified, then items equal to those values (if found) will be placed in the top positions. For composite indexes, values must be []interface{} with a value for each subindex. Forced sort is supported for the first sorting field only.
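
For example, a forced sort that places items from year 2020 first and sorts the rest by year descending (namespace and field name are assumptions):

	q := db.Query("items").Sort("year", true, 2020)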

func (*Query) Strict

func (q *Query) Strict(mode QueryStrictMode) *Query

Strict - Set query strict mode

func (*Query) Update

func (q *Query) Update() *Iterator

Update will execute the query and update fields in items matching the query. On success it returns an iterator over the updated elements.

func (*Query) UpdateCtx

func (q *Query) UpdateCtx(ctx context.Context) *Iterator

UpdateCtx will execute the query and update fields in items matching the query. On success it returns an iterator over the updated elements.

func (*Query) Where

func (q *Query) Where(index string, condition int, keys interface{}) *Query

Where - Add where condition to DB query. For composite indexes, keys must be []interface{} with a value for each subindex.

func (*Query) WhereBetweenFields

func (q *Query) WhereBetweenFields(firstField string, condition int, secondField string) *Query

WhereBetweenFields - Add a where condition comparing two fields to the DB query

func (*Query) WhereBool

func (q *Query) WhereBool(index string, condition int, keys ...bool) *Query

WhereBool - Add where condition to DB query with bool args

func (*Query) WhereComposite added in v1.9.2

func (q *Query) WhereComposite(index string, condition int, keys ...interface{}) *Query

WhereComposite - Add where condition to DB query with interface args for composite indexes

func (*Query) WhereDouble

func (q *Query) WhereDouble(index string, condition int, keys ...float64) *Query

WhereDouble - Add where condition to DB query with float args

func (*Query) WhereInt

func (q *Query) WhereInt(index string, condition int, keys ...int) *Query

WhereInt - Add where condition to DB query with int args

func (*Query) WhereInt32 added in v1.10.0

func (q *Query) WhereInt32(index string, condition int, keys ...int32) *Query

WhereInt32 - Add where condition to DB query with int32 args

func (*Query) WhereInt64

func (q *Query) WhereInt64(index string, condition int, keys ...int64) *Query

WhereInt64 - Add where condition to DB query with int64 args

func (*Query) WhereString

func (q *Query) WhereString(index string, condition int, keys ...string) *Query

WhereString - Add where condition to DB query with string args

func (*Query) WithRank

func (q *Query) WithRank() *Query

WithRank - Output the fulltext rank. Allowed only with a fulltext query.

type QueryPerfStat added in v1.9.3

type QueryPerfStat struct {
	Query string `json:"query"`
	PerfStat
}

QueryPerfStat is information about a query's performance statistics; it is located in the '#queriesperfstats' system namespace

type QueryStrictMode

type QueryStrictMode int

Strict modes for queries

type ReconnectStrategy

type ReconnectStrategy string

type Reindexer

type Reindexer struct {
	// contains filtered or unexported fields
}

Reindexer is the reindexer state struct

func NewReindex

func NewReindex(dsn interface{}, options ...interface{}) *Reindexer

NewReindex creates a new instance of the Reindexer DB and returns a pointer to the created instance.

func (*Reindexer) AddIndex added in v1.9.7

func (db *Reindexer) AddIndex(namespace string, indexDef ...IndexDef) error

AddIndex - add index.

func (*Reindexer) BeginTx

func (db *Reindexer) BeginTx(namespace string) (*Tx, error)

func (*Reindexer) Close added in v1.9.7

func (db *Reindexer) Close()

func (*Reindexer) CloseNamespace

func (db *Reindexer) CloseNamespace(namespace string) error

CloseNamespace - close namespace, but keep storage

func (*Reindexer) ConfigureIndex

func (db *Reindexer) ConfigureIndex(namespace, index string, config interface{}) error

ConfigureIndex - configure an index. The config argument must be a struct with the index configuration. Deprecated: Use UpdateIndex instead.

func (*Reindexer) Delete

func (db *Reindexer) Delete(namespace string, item interface{}, precepts ...string) error

Delete - remove a single item from the namespace by PK. Item must be the same type as the item passed to OpenNamespace, or []byte with JSON data. If precepts are provided and the item is a pointer, the value pointed to by item will be updated.

func (*Reindexer) DescribeNamespace

func (db *Reindexer) DescribeNamespace(namespace string) (*NamespaceDescription, error)

DescribeNamespace makes a 'SELECT * FROM #namespaces' query to the database. Returns NamespaceDescription results and an error.

func (*Reindexer) DescribeNamespaces

func (db *Reindexer) DescribeNamespaces() ([]*NamespaceDescription, error)

DescribeNamespaces makes a 'SELECT * FROM #namespaces' query to the database. Returns NamespaceDescription results and an error.

func (*Reindexer) DropIndex added in v1.9.3

func (db *Reindexer) DropIndex(namespace, index string) error

DropIndex - drop index.

func (*Reindexer) DropNamespace

func (db *Reindexer) DropNamespace(namespace string) error

DropNamespace - drop whole namespace from DB

func (*Reindexer) EnableStorage

func (db *Reindexer) EnableStorage(storagePath string) error

EnableStorage enables persistent storage of data. Deprecated: the storage path should be passed as part of the DSN to reindexer.NewReindex, e.g. reindexer.NewReindex("builtin:///tmp/reindex").

func (*Reindexer) ExecSQL

func (db *Reindexer) ExecSQL(query string) *Iterator

ExecSQL makes a query to the database. The query is a SQL statement. Returns an Iterator.
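
For example (namespace and fields are assumptions):

	it := db.ExecSQL("SELECT * FROM items WHERE year > 2010 LIMIT 10")
	defer it.Close()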

func (*Reindexer) ExecSQLToJSON

func (db *Reindexer) ExecSQLToJSON(query string) *JSONIterator

ExecSQLToJSON makes a query to the database. The query is a SQL statement. Returns a JSONIterator.

func (*Reindexer) GetMeta added in v1.10.0

func (db *Reindexer) GetMeta(namespace, key string) ([]byte, error)

func (*Reindexer) GetNamespaceMemStat added in v1.9.3

func (db *Reindexer) GetNamespaceMemStat(namespace string) (*NamespaceMemStat, error)

GetNamespaceMemStat makes a 'SELECT * FROM #memstats' query to the database. Returns NamespaceMemStat results and an error.

func (*Reindexer) GetNamespacesMemStat added in v1.9.3

func (db *Reindexer) GetNamespacesMemStat() ([]*NamespaceMemStat, error)

GetNamespacesMemStat makes a 'SELECT * FROM #memstats' query to the database. Returns NamespaceMemStat results and an error.

func (*Reindexer) GetStats

func (db *Reindexer) GetStats() bindings.Stats

GetStats gets local thread reindexer usage stats. Deprecated: Use SELECT * FROM '#perfstats' to get performance statistics.

func (*Reindexer) Insert

func (db *Reindexer) Insert(namespace string, item interface{}, precepts ...string) (int, error)

Insert an item into the namespace by PK. Item must be the same type as the item passed to OpenNamespace, or []byte with JSON data. Returns 0 if no item was inserted, 1 if the item was inserted. If precepts are provided and the item is a pointer, the value pointed to by item will be updated.
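
A sketch of inserting with precepts (the serial() and now() precept functions follow reindexer's precept syntax; the Item type and field names are assumptions):

	item := Item{Name: "example"}
	// id is filled from an auto-increment sequence, updated_at from the current time;
	// since &item is a pointer, the precept results are written back into item.
	count, err := db.Insert("items", &item, "id=serial()", "updated_at=now()")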

func (*Reindexer) MustBeginTx

func (db *Reindexer) MustBeginTx(namespace string) *Tx

MustBeginTx - start an update transaction; panics on error

func (*Reindexer) OpenNamespace

func (db *Reindexer) OpenNamespace(namespace string, opts *NamespaceOptions, s interface{}) (err error)

OpenNamespace opens or creates a new namespace and indexes based on the passed struct. Index fields of the struct are marked by the `reindex:` tag.
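
A typical namespace definition and opening might look like this (the struct and its tags are a sketch following the `reindex:` tag convention):

	type Item struct {
		ID   int64  `reindex:"id,,pk"`    // primary key
		Name string `reindex:"name"`      // regular (hash) index
		Year int    `reindex:"year,tree"` // tree index, suitable for range queries
	}

	err := db.OpenNamespace("items", reindexer.DefaultNamespaceOptions(), Item{})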

func (*Reindexer) Ping

func (db *Reindexer) Ping() error

Ping checks connection with reindexer

func (*Reindexer) PutMeta added in v1.10.0

func (db *Reindexer) PutMeta(namespace, key string, data []byte) error

func (*Reindexer) Query

func (db *Reindexer) Query(namespace string) *Query

Query creates a new Query for building a request

func (*Reindexer) QueryFrom

func (db *Reindexer) QueryFrom(d dsl.DSL) (*Query, error)

QueryFrom - create a query from DSL

func (*Reindexer) RegisterNamespace

func (db *Reindexer) RegisterNamespace(namespace string, opts *NamespaceOptions, s interface{}) (err error)

RegisterNamespace registers a Go type against a namespace. No data or index changes will be performed.

func (*Reindexer) RenameNamespace

func (db *Reindexer) RenameNamespace(srcNsName string, dstNsName string) error

RenameNamespace - Rename namespace. If namespace with dstNsName exists, then it is replaced.

func (*Reindexer) RenameNs

func (db *Reindexer) RenameNs(srcNsName string, dstNsName string)

func (*Reindexer) ReopenLogFiles

func (db *Reindexer) ReopenLogFiles() error

ReopenLogFiles reopens log files

func (*Reindexer) ResetCaches

func (db *Reindexer) ResetCaches()

func (*Reindexer) ResetStats

func (db *Reindexer) ResetStats()

ResetStats resets local thread reindexer usage stats. Deprecated: no longer used.

func (*Reindexer) SetDefaultQueryDebug

func (db *Reindexer) SetDefaultQueryDebug(namespace string, level int) error

SetDefaultQueryDebug sets default debug level for queries to namespaces

func (*Reindexer) SetLogger

func (db *Reindexer) SetLogger(log Logger)

SetLogger sets the logger interface for outputting reindexer logs

func (*Reindexer) Status added in v1.10.0

func (db *Reindexer) Status() bindings.Status

Status will return current db status

func (*Reindexer) TruncateNamespace

func (db *Reindexer) TruncateNamespace(namespace string) error

TruncateNamespace - delete all items from namespace

func (*Reindexer) Update

func (db *Reindexer) Update(namespace string, item interface{}, precepts ...string) (int, error)

Update an item in the namespace by PK. Item must be the same type as the item passed to OpenNamespace, or []byte with JSON data. Returns 0 if no item was updated, 1 if the item was updated. If precepts are provided and the item is a pointer, the value pointed to by item will be updated.

func (*Reindexer) UpdateIndex added in v1.9.7

func (db *Reindexer) UpdateIndex(namespace string, indexDef IndexDef) error

UpdateIndex - update index.

func (*Reindexer) Upsert

func (db *Reindexer) Upsert(namespace string, item interface{}, precepts ...string) error

Upsert (insert or update) an item in the namespace. Item must be the same type as the item passed to OpenNamespace, or []byte with JSON. If precepts are provided and the item is a pointer, the value pointed to by item will be updated.

func (*Reindexer) WithContext

func (db *Reindexer) WithContext(ctx context.Context) *Reindexer

WithContext adds a context to the next method call

type ReplicationStat

type ReplicationStat struct {
	// Replication type: either "async" or "cluster"
	Type string `json:"type"`
	// Global WAL-syncs' stats
	WALSync ReplicationSyncStat `json:"wal_sync"`
	// Global force-syncs' stats
	ForceSync ReplicationSyncStat `json:"force_sync"`
	// Leader's initial sync statistic (for "cluster" type only)
	InitialSyncStat struct {
		// WAL-syncs' stats
		WALSync ReplicationSyncStat `json:"wal_sync"`
		// Force-syncs' stats
		ForceSync ReplicationSyncStat `json:"force_sync"`
		// Total initial sync time
		TotalTimeUs int64 `json:"total_time_us"`
	} `json:"initial_sync"`
	// Count of online updates awaiting replication
	PendingUpdatesCount int64 `json:"pending_updates_count"`
	// Total allocated online updates count (including those which were already replicated but not yet deallocated)
	AllocatedUpdatesCount int64 `json:"allocated_updates_count"`
	// Total allocated online updates size in bytes
	AllocatedUpdatesSize int64 `json:"allocated_updates_size"`
	// Info about each node
	ReplicationNodeStat []struct {
		// Node's DSN
		DSN string `json:"dsn"`
		// Node's server ID
		ServerID int `json:"server_id"`
		// Online updates, awaiting replication to this node
		PendingUpdatesCount int64 `json:"pending_updates_count"`
		// Network status: "none", "online", "offline", "raft_error"
		Status string `json:"status"`
		// Replication role: "none", "follower", "leader", "candidate"
		Role string `json:"role"`
		// Node's sync state: "none", "syncing", "awaiting_resync", "online_replication", "initial_leader_sync"
		SyncState string `json:"sync_state"`
		// Shows the synchronization state for a raft-cluster node (false if the node is outdated)
		IsSynchronized bool `json:"is_synchronized"`
	} `json:"nodes"`
}

ReplicationStat is replication statistics

type ReplicationSyncStat

type ReplicationSyncStat struct {
	// Syncs count
	Count int64 `json:"count"`
	// Average sync time
	AvgTimeUs int64 `json:"avg_time_us"`
	// Max sync time
	MaxTimeUs int64 `json:"max_time_us"`
}

ReplicationSyncStat is WAL/force sync statistics

type Tx

type Tx struct {
	// contains filtered or unexported fields
}

Tx is a transaction object. A transaction performs an atomic namespace update. Both synchronous and asynchronous operations are available. To start a transaction, the `db.BeginTx()` method is used; it creates the transaction object.
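
A minimal transaction sketch (the Item type is an assumption; Commit waits for the async operations queued before it):

	tx, err := db.BeginTx("items")
	if err != nil {
		panic(err)
	}
	for _, item := range items {
		item := item // capture the loop variable
		tx.UpsertAsync(&item, func(err error) {
			if err != nil {
				fmt.Println("upsert failed:", err)
			}
		})
	}
	if err := tx.Commit(); err != nil { // waits for async ops, then applies atomically
		panic(err)
	}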

func (*Tx) AwaitResults

func (tx *Tx) AwaitResults() *Tx

AwaitResults waits for completion of async requests

func (*Tx) Commit

func (tx *Tx) Commit() error

Commit - apply changes. Commit also waits for all async operations to complete and then applies the changes. If any error occurred during the prepare process, tx.Commit returns that error, so checking the error returned by Commit is enough to be sure whether all data has been committed successfully or not.

func (*Tx) CommitWithCount

func (tx *Tx) CommitWithCount() (count int, err error)

CommitWithCount applies changes and returns the count of changed items

func (*Tx) Delete

func (tx *Tx) Delete(item interface{}, precepts ...string) error

Delete - remove item by id from namespace

func (*Tx) DeleteAsync

func (tx *Tx) DeleteAsync(item interface{}, cmpl bindings.Completion, precepts ...string) error

DeleteAsync - remove item by id from namespace. Calls completion on result

func (*Tx) DeleteJSON

func (tx *Tx) DeleteJSON(json []byte, precepts ...string) error

DeleteJSON - remove item by id from namespace

func (*Tx) DeleteJSONAsync

func (tx *Tx) DeleteJSONAsync(json []byte, cmpl bindings.Completion, precepts ...string) error

DeleteJSONAsync - remove item by id from namespace. Calls completion on result

func (*Tx) Insert

func (tx *Tx) Insert(item interface{}, precepts ...string) error

func (*Tx) InsertAsync

func (tx *Tx) InsertAsync(item interface{}, cmpl bindings.Completion, precepts ...string) error

InsertAsync inserts an item into the namespace. Calls completion on result

func (*Tx) MustCommit

func (tx *Tx) MustCommit() int

MustCommit applies changes and panics on errors

func (*Tx) Query

func (tx *Tx) Query() *Query

func (*Tx) Rollback

func (tx *Tx) Rollback() error

Rollback rolls back the transaction. It is safe to call Rollback after Commit.

func (*Tx) Update

func (tx *Tx) Update(item interface{}, precepts ...string) error

func (*Tx) UpdateAsync

func (tx *Tx) UpdateAsync(item interface{}, cmpl bindings.Completion, precepts ...string) error

UpdateAsync updates an item in the namespace. Calls completion on result

func (*Tx) Upsert

func (tx *Tx) Upsert(item interface{}, precepts ...string) error

Upsert (Insert or Update) item to namespace

func (*Tx) UpsertAsync

func (tx *Tx) UpsertAsync(item interface{}, cmpl bindings.Completion, precepts ...string) error

UpsertAsync (Insert or Update) item to namespace. Calls completion on result

func (*Tx) UpsertJSON

func (tx *Tx) UpsertJSON(json []byte, precepts ...string) error

UpsertJSON (Insert or Update) item to namespace

func (*Tx) UpsertJSONAsync

func (tx *Tx) UpsertJSONAsync(json []byte, cmpl bindings.Completion, precepts ...string) error

UpsertJSONAsync (Insert or Update) item in the namespace. Calls completion on result

type TxPerfStat

type TxPerfStat struct {
	// Total transactions count for namespace
	TotalCount int64 `json:"total_count"`
	// Total namespace copy operations
	TotalCopyCount int64 `json:"total_copy_count"`
	// Average steps count in transactions for this namespace
	AvgStepsCount int64 `json:"avg_steps_count"`
	// Minimum steps count in transactions for this namespace
	MinStepsCount int64 `json:"min_steps_count"`
	// Maximum steps count in transactions for this namespace
	MaxStepsCount int64 `json:"max_steps_count"`
	// Average transaction preparation time usec
	AvgPrepareTimeUs int64 `json:"avg_prepare_time_us"`
	// Minimum transaction preparation time usec
	MinPrepareTimeUs int64 `json:"min_prepare_time_us"`
	// Maximum transaction preparation time usec
	MaxPrepareTimeUs int64 `json:"max_prepare_time_us"`
	// Average transaction commit time usec
	AvgCommitTimeUs int64 `json:"avg_commit_time_us"`
	// Minimum transaction commit time usec
	MinCommitTimeUs int64 `json:"min_commit_time_us"`
	// Maximum transaction commit time usec
	MaxCommitTimeUs int64 `json:"max_commit_time_us"`
	// Average namespace copy time usec
	AvgCopyTimeUs int64 `json:"avg_copy_time_us"`
	// Minimum namespace copy time usec
	MinCopyTimeUs int64 `json:"min_copy_time_us"`
	// Maximum namespace copy time usec
	MaxCopyTimeUs int64 `json:"max_copy_time_us"`
}

TxPerfStat is information about transaction performance statistics

Directories

Path Synopsis
Package jsonschema uses reflection to generate JSON Schemas from Go types [1].
samples
go
test
ft
