bigtable

package
v0.15.5
Published: Aug 10, 2018 | License: MIT | Imports: 22 | Imported by: 2

README

Google BigTable Data source

Provides a SQL layer on top of Google BigTable storage.

(Diagram: Dataux BigTable)


SQL -> BigTable

  • Key: BigTable primarily has a single index, based on the row key.
BigTable API               SQL Query
-------------------------  -----------------------------------------------------------------------------
Tables                     show tables;
Column Families            describe mytable;
WHERE                      select count(*) from table WHERE exists(a);
                           (some of these predicates are pushed down to BigTable if available)
filter: terms              select * from table WHERE year IN (2015, 2014, 2013);
filter: gte, range         select * from table WHERE year BETWEEN 2012 AND 2014;
aggs min, max, avg, sum    select min(year), max(year), avg(year), sum(year) from table WHERE exists(a);
                           (these are poly-filled in the distributed query engine)
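
For a rough sense of what such a pushdown amounts to, the sketch below expresses the BETWEEN filter from the table directly against the native cloud.google.com/go/bigtable client. The project, instance, table, and the "v"/"year" family/column names are illustrative assumptions, not dataux's actual configuration, and the byte-wise value range only lines up with a numeric BETWEEN when years are stored as fixed-width strings.

package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/bigtable"
)

func main() {
	ctx := context.Background()
	client, err := bigtable.NewClient(ctx, "my-project", "my-instance")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	tbl := client.Open("article")

	// Roughly what "WHERE year BETWEEN 2012 AND 2014" can push down to:
	// restrict to one family/column and a byte-range of values.
	filter := bigtable.ChainFilters(
		bigtable.FamilyFilter("v"),
		bigtable.ColumnFilter("year"),
		bigtable.ValueRangeFilter([]byte("2012"), []byte("2014")),
	)

	err = tbl.ReadRows(ctx, bigtable.InfiniteRange(""), func(r bigtable.Row) bool {
		fmt.Println("row key:", r.Key())
		return true // continue reading
	}, bigtable.RowFilter(filter), bigtable.LimitRows(5000))
	if err != nil {
		log.Fatal(err)
	}
}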

Details

  • Table vs Column Families: Data is stored very differently in BigTable compared to traditional table-oriented databases. Different column families (data types) can share a row key. Think of them as a group of SQL tables that share a foreign key, as sketched below.
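
To make that concrete, here is a minimal sketch using the native cloud.google.com/go/bigtable client (same imports as the sketch above) that writes two column families under one row key. The family, column, and key names are illustrative assumptions only.

// writeSharedKey writes two column families under the same row key,
// analogous to two SQL tables sharing a foreign key.
func writeSharedKey(ctx context.Context, tbl *bigtable.Table) error {
	mut := bigtable.NewMutation()
	now := bigtable.Now()
	mut.Set("profile", "email", now, []byte("jane@example.com"))
	mut.Set("profile", "name", now, []byte("Jane"))
	mut.Set("activity", "last_login", now, []byte("2018-08-10"))

	// Both families live under the single key "user#1234"; reads can select
	// either family independently.
	return tbl.Apply(ctx, "user#1234", mut)
}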

Documentation

Overview

Package bigtable implements a data source (backend) that allows dataux to query Google BigTable.

Index

Constants

const (
	DataSourceLabel = "bigtable"
)

Variables

var (
	ErrNoSchema = fmt.Errorf("No schema or configuration exists")

	SchemaRefreshInterval = time.Duration(time.Minute * 5)
)

var (
	// DefaultLimit for rows from bigtable
	DefaultLimit = 5000

	// Timeout default
	Timeout = 10 * time.Second
)
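
These defaults are plain package variables, so they can be overridden before the source is set up. A minimal sketch, assuming the package is imported from dataux's backends tree; the import path and the values are illustrative, not prescribed.

package main

import (
	"time"

	btsource "github.com/dataux/dataux/backends/bigtable" // assumed import path
)

func init() {
	btsource.DefaultLimit = 10000                     // rows per read
	btsource.Timeout = 30 * time.Second               // per-request timeout
	btsource.SchemaRefreshInterval = 10 * time.Minute // refresh table schemas less often
}

func main() {}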

Functions

func Mutation

func Mutation(fam string, vals []driver.Value, cols []string) *bigtable.Mutation

Mutation builds a *bigtable.Mutation for the given column family from the supplied values and column names.
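
A hedged usage sketch, assuming vals and cols correspond by index, and reusing the btsource alias and native-client types from the earlier sketches (driver is database/sql/driver). Family, column, and row-key names are illustrative.

// buildAndApply builds a mutation for column family "v" with this package's
// Mutation helper and applies it with the native client.
func buildAndApply(ctx context.Context, tbl *bigtable.Table) error {
	cols := []string{"name", "year"}
	vals := []driver.Value{"dataux", int64(2018)}

	mut := btsource.Mutation("v", vals, cols)
	return tbl.Apply(ctx, "article#42", mut)
}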

Types

type Mutator

type Mutator struct {
	// contains filtered or unexported fields
}

Mutator is a BigTable mutator connection.

type ResultReader

type ResultReader struct {
	*exec.TaskBase

	Total int
	Req   *SqlToBT
	// contains filtered or unexported fields
}

ResultReader implements result paging and reading.

func NewResultReader

func NewResultReader(req *SqlToBT) *ResultReader

func (*ResultReader) Close

func (m *ResultReader) Close() error

func (*ResultReader) Run

func (m *ResultReader) Run() error

Run runs the Google BigTable exec.

type ResultReaderNext

type ResultReaderNext struct {
	*ResultReader
}

ResultReaderNext is a wrapper allowing us to implement the sql/driver Next() interface, which is different from the qlbridge/datasource Next().

type Source

type Source struct {
	// contains filtered or unexported fields
}

Source is a BigTable datasource that provides reads, inserts, updates, and deletes.
 - singleton shared instance
 - creates clients to BigTable (clients perform queries)
 - provides schema info about BigTable tables/column-families
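
Once a Source has been configured via Setup (normally driven by dataux's schema/config loading, which is not shown here), the methods documented below can be used directly. A minimal sketch, assuming src is an already-configured *Source and btsource is an alias for this package; the table name "article" is illustrative.

// listAndOpen enumerates the tables the source discovered from BigTable
// and opens a read connection to one of them.
func listAndOpen(src *btsource.Source) error {
	for _, name := range src.Tables() {
		if _, err := src.Table(name); err != nil { // per-table schema info
			return err
		}
		fmt.Println("table:", name)
	}

	conn, err := src.Open("article") // returns a schema.Conn for reads
	if err != nil {
		return err
	}
	return conn.Close()
}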

func (*Source) Close

func (m *Source) Close() error

func (*Source) DataSource

func (m *Source) DataSource() schema.Source

func (*Source) Init

func (m *Source) Init()

func (*Source) Open

func (m *Source) Open(tableName string) (schema.Conn, error)

func (*Source) Setup

func (m *Source) Setup(ss *schema.Schema) error

func (*Source) Table

func (m *Source) Table(table string) (*schema.Table, error)

func (*Source) Tables

func (m *Source) Tables() []string

type SqlToBT

type SqlToBT struct {
	*exec.TaskBase
	// contains filtered or unexported fields
}

SqlToBT converts a SQL query to BigTable read/write rows.
 - responsible for pushing down as much logic to BigTable as possible
 - dialect translator

func NewSqlToBT

func NewSqlToBT(s *Source, t *schema.Table) *SqlToBT

NewSqlToBT creates a SQL AST -> BigTable rows/filters/mutations converter.

func (*SqlToBT) CreateMutator

func (m *SqlToBT) CreateMutator(pc interface{}) (schema.ConnMutator, error)

CreateMutator is part of the Mutator interface, allowing data sources to create a stateful mutation context for update/delete operations.

func (*SqlToBT) Delete

func (m *SqlToBT) Delete(key driver.Value) (int, error)

Delete deletes a row by its row key.
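
A minimal sketch of a delete by key, assuming conn is a *SqlToBT for the table (for example from NewSqlToBT) and reusing the btsource alias from the earlier sketches; the key value is illustrative and driver is database/sql/driver.

// deleteByKey removes a single row by its row key.
func deleteByKey(conn *btsource.SqlToBT) error {
	n, err := conn.Delete(driver.Value("user#1234"))
	if err != nil {
		return err
	}
	fmt.Println("rows deleted:", n)
	return nil
}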

func (*SqlToBT) DeleteExpression

func (m *SqlToBT) DeleteExpression(p interface{}, where expr.Node) (int, error)

DeleteExpression deletes by expression (WHERE clause). For WHERE columns contained in the partition keys the delete can be pushed down to BigTable; for others we may have to do a select -> delete.

func (*SqlToBT) Put

func (m *SqlToBT) Put(ctx context.Context, key schema.Key, val interface{}) (schema.Key, error)

Put is the interface for mutation (insert, update).

func (*SqlToBT) PutMulti

func (m *SqlToBT) PutMulti(ctx context.Context, keys []schema.Key, src interface{}) ([]schema.Key, error)

PutMulti writes multiple rows.

func (*SqlToBT) WalkExecSource

func (m *SqlToBT) WalkExecSource(p *plan.Source) (exec.Task, error)

WalkExecSource is part of the executor interface; it allows this source to create its own execution Task so that it can push down as much work as possible to BigTable.

func (*SqlToBT) WalkSourceSelect

func (m *SqlToBT) WalkSourceSelect(planner plan.Planner, p *plan.Source) (plan.Task, error)

WalkSourceSelect is an interface implemented by this connection, allowing the planner to push down as much logic into this source as possible.
