client

package
v0.33.3
Published: Mar 26, 2024 · License: Apache-2.0

Documentation

Overview

This package allows you to interface with a Deephaven server over a network connection using Go. It can upload, manipulate, and download tables, among other features.

To get started, use client.NewClient to connect to the server. The Client can then be used to perform operations. See the provided examples in the examples/ folder or the individual code documentation for more.

Online docs for the client can be found at https://pkg.go.dev/github.com/deephaven/deephaven-core/go/pkg/client

The Go API uses Records from the Apache Arrow package as tables. The docs for the Arrow package can be found at the following link: https://pkg.go.dev/github.com/apache/arrow/go/v8

All methods for all structs in this package are goroutine-safe unless otherwise specified.

Example (FetchTable)

If you want to access tables from previous sessions or from the web UI, you will need to use OpenTable.

This example requires a Deephaven server to connect to, so it will not work on pkg.go.dev.

package main

import (
	"context"
	"fmt"

	"github.com/deephaven/deephaven-core/go/internal/test_tools"
	"github.com/deephaven/deephaven-core/go/pkg/client"
)

func main() {
	// A context is used to set timeouts and deadlines for requests or cancel requests.
	// If you don't have any specific requirements, context.Background() is a good default.
	ctx := context.Background()

	// Let's start a client connection using python as the script language ("groovy" is the other option).
	// Note that the client language must match the language the server was started with.
	cl, err := client.NewClient(ctx, test_tools.GetHost(), test_tools.GetPort(), test_tools.GetAuthType(), test_tools.GetAuthToken(), client.WithConsole("python"))
	if err != nil {
		fmt.Println("error when connecting to server:", err.Error())
		return
	}

	// First, let's make an empty table with ten rows.
	tbl, err := cl.EmptyTable(ctx, 10)
	if err != nil {
		fmt.Println("error when making table:", err.Error())
		return
	}

	// We can bind the table to a variable, so that it doesn't disappear when the client closes.
	err = cl.BindToVariable(ctx, "my_table", tbl)
	if err != nil {
		fmt.Println("error when binding table:", err.Error())
		return
	}

	// Now we can close the table and client locally, but the table will stick around on the server.
	tbl.Release(ctx)
	cl.Close()

	// Now let's make a new connection, completely unrelated to the old one.
	cl, err = client.NewClient(ctx, test_tools.GetHost(), test_tools.GetPort(), test_tools.GetAuthType(), test_tools.GetAuthToken())
	if err != nil {
		fmt.Println("error when connecting to localhost port 10000:", err.Error())
		return
	}

	// Now, we can open the table from the previous session, and it will work fine.
	tbl, err = cl.OpenTable(ctx, "my_table")
	if err != nil {
		fmt.Println("error when opening table:", err.Error())
		return
	}

	fmt.Println("Successfully opened the old table!")

	tbl.Release(ctx)
	cl.Close()

}
Output:

Successfully opened the old table!
Example (ImportTable)

This example shows off the ability to upload tables to the Deephaven server, perform some operations on them, and then download them to access the modified data.

This example requires a Deephaven server to connect to, so it will not work on pkg.go.dev.

package main

import (
	"context"
	"fmt"

	"github.com/deephaven/deephaven-core/go/internal/test_tools"
	"github.com/deephaven/deephaven-core/go/pkg/client"
)

func main() {
	// A context is used to set timeouts and deadlines for requests or cancel requests.
	// If you don't have any specific requirements, context.Background() is a good default.
	ctx := context.Background()

	cl, err := client.NewClient(ctx, test_tools.GetHost(), test_tools.GetPort(), test_tools.GetAuthType(), test_tools.GetAuthToken())
	if err != nil {
		fmt.Println("error when connecting to server:", err.Error())
		return
	}
	defer cl.Close()

	// First, we need some Arrow record we want to upload.
	sampleRecord := test_tools.ExampleRecord()
	// Note that Arrow records should be eventually released.
	defer sampleRecord.Release()

	fmt.Println("Data Before:")
	test_tools.RecordPrint(sampleRecord)

	// Now we upload the record so that we can manipulate its data using the server.
	// We get back a TableHandle, which is a reference to a table on the server.
	table, err := cl.ImportTable(ctx, sampleRecord)
	if err != nil {
		fmt.Println("error when importing table:", err.Error())
		return
	}
	// Any tables you create should be eventually released.
	defer table.Release(ctx)

	// Now we can do a bunch of operations on the table we imported, if we like...

	// Note that table operations return new tables; they don't modify old tables.
	sortedTable, err := table.Sort(ctx, "Close")
	if err != nil {
		fmt.Println("error when sorting:", err.Error())
		return
	}
	defer sortedTable.Release(ctx)
	filteredTable, err := sortedTable.Where(ctx, "Volume >= 20000")
	if err != nil {
		fmt.Println("error when filtering:", err.Error())
		return
	}
	defer filteredTable.Release(ctx)

	// If we want to see the data we sorted and filtered, we can snapshot the table to get a Record back.
	filteredRecord, err := filteredTable.Snapshot(ctx)
	if err != nil {
		fmt.Println("error when filtering:", err.Error())
		return
	}
	defer filteredRecord.Release()

	fmt.Println("Data After:")
	test_tools.RecordPrint(filteredRecord)

}
Output:

Data Before:
record:
  schema:
  fields: 3
    - Ticker: type=utf8
    - Close: type=float32
    - Volume: type=int32
  rows: 7
  col[0][Ticker]: ["XRX" "XYZZY" "IBM" "GME" "AAPL" "ZNGA" "T"]
  col[1][Close]: [53.8 88.5 38.7 453 26.7 544.9 13.4]
  col[2][Volume]: [87000 6060842 138000 138000000 19000 48300 1500]

Data After:
record:
  schema:
  fields: 3
    - Ticker: type=utf8, nullable
        metadata: ["deephaven:isDateFormat": "false", "deephaven:isNumberFormat": "false", "deephaven:isPartitioning": "false", "deephaven:isRowStyle": "false", "deephaven:isSortable": "true", "deephaven:isStyle": "false", "deephaven:type": "java.lang.String"]
    - Close: type=float32, nullable
       metadata: ["deephaven:isDateFormat": "false", "deephaven:isNumberFormat": "false", "deephaven:isPartitioning": "false", "deephaven:isRowStyle": "false", "deephaven:isSortable": "true", "deephaven:isStyle": "false", "deephaven:type": "float"]
    - Volume: type=int32, nullable
        metadata: ["deephaven:isDateFormat": "false", "deephaven:isNumberFormat": "false", "deephaven:isPartitioning": "false", "deephaven:isRowStyle": "false", "deephaven:isSortable": "true", "deephaven:isStyle": "false", "deephaven:type": "int"]
  metadata: ["deephaven:attribute.AddOnly": "true", "deephaven:attribute.AppendOnly": "true", "deephaven:attribute.SortedColumns": "Close=Ascending", "deephaven:attribute_type.AddOnly": "java.lang.Boolean", "deephaven:attribute_type.AppendOnly": "java.lang.Boolean", "deephaven:attribute_type.SortedColumns": "java.lang.String"]
  rows: 5
  col[0][Ticker]: ["IBM" "XRX" "XYZZY" "GME" "ZNGA"]
  col[1][Close]: [38.7 53.8 88.5 453 544.9]
  col[2][Volume]: [138000 87000 6060842 138000000 48300]
Example (InputTable)

This example shows how to use Input Tables. Input Tables are a generic interface for streaming data from any source, so you can use Deephaven's streaming table processing power for anything.

This example requires a Deephaven server to connect to, so it will not work on pkg.go.dev.

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/apache/arrow/go/v8/arrow"
	"github.com/deephaven/deephaven-core/go/internal/test_tools"
	"github.com/deephaven/deephaven-core/go/pkg/client"
)

func main() {
	// A context is used to set timeouts and deadlines for requests or cancel requests.
	// If you don't have any specific requirements, context.Background() is a good default.
	ctx := context.Background()

	cl, err := client.NewClient(ctx, test_tools.GetHost(), test_tools.GetPort(), test_tools.GetAuthType(), test_tools.GetAuthToken())
	if err != nil {
		fmt.Println("error when connecting to server:", err.Error())
		return
	}
	defer cl.Close()

	// First, let's make a schema our input table is going to use.
	// This describes the name and data types each of its columns will have.
	schema := arrow.NewSchema(
		[]arrow.Field{
			{Name: "Ticker", Type: arrow.BinaryTypes.String},
			{Name: "Close", Type: arrow.PrimitiveTypes.Float32},
			{Name: "Volume", Type: arrow.PrimitiveTypes.Int32},
		},
		nil,
	)

	// Then we can actually make the input table.
	// It will start empty, but we're going to add more data to it.
	// This is a key-backed input table, so it will make sure the "Ticker" column stays unique.
	// This is in contrast to an append-only table, which will append rows to the end of the table.
	inputTable, err := cl.NewKeyBackedInputTableFromSchema(ctx, schema, "Ticker")
	if err != nil {
		fmt.Println("error when creating InputTable", err.Error())
		return
	}
	// Any tables you create should be eventually released.
	defer inputTable.Release(ctx)

	// Now let's create a table derived from the input table.
	// When we update the input table, this table will update too.
	outputTable, err := inputTable.Where(ctx, "Close > 50.0")
	if err != nil {
		fmt.Println("error when filtering input table", err.Error())
		return
	}
	defer outputTable.Release(ctx)

	// Now, let's get some new data to add to the input table.
	// We import the data so that it is available on the server.
	newDataRec := test_tools.ExampleRecord()
	// Note that Arrow records must be eventually released.
	defer newDataRec.Release()
	newDataTable, err := cl.ImportTable(ctx, newDataRec)
	if err != nil {
		fmt.Println("error when importing new data", err.Error())
		return
	}
	defer newDataTable.Release(ctx)

	// Now we can add the new data we just imported to our input table.
	// Since this is a key-backed table, it will add any rows with new keys
	// and replace any rows with keys that already exist.
	// Since there's currently nothing in the table,
	// this call will add all the rows of the new data to the input table.
	err = inputTable.AddTable(ctx, newDataTable)
	if err != nil {
		fmt.Println("error when adding new data to table", err.Error())
		return
	}

	// Changes made to an input table may not propagate to other tables immediately.
	// Thus, we need to check the output in a loop to see if our output table has updated.
	// In a future version of the API, streaming table updates will make this kind of check unnecessary.
	timeout := time.After(time.Second * 5)
	for {
		// If this loop is still running after five seconds,
		// it will terminate because of this timer.
		select {
		case <-timeout:
			fmt.Println("the output table did not update in time")
			return
		default:
		}

		// Now, we take a snapshot of the outputTable to see what data it currently contains.
		// We should see the new rows we added, filtered by the condition we specified when creating outputTable.
		// However, we might just see an empty table if the new rows haven't been processed yet.
		outputRec, err := outputTable.Snapshot(ctx)
		if err != nil {
			fmt.Println("error when snapshotting table", err.Error())
			return
		}

		if outputRec.NumRows() == 0 {
			// The new rows we added haven't propagated to the output table yet.
			// We just discard this record and snapshot again.
			outputRec.Release()
			continue
		}

		fmt.Println("Got the output table!")
		test_tools.RecordPrint(outputRec)
		outputRec.Release()
		break
	}

}
Output:

Got the output table!
record:
  schema:
  fields: 3
    - Ticker: type=utf8, nullable
        metadata: ["deephaven:inputtable.isKey": "true", "deephaven:isDateFormat": "false", "deephaven:isNumberFormat": "false", "deephaven:isPartitioning": "false", "deephaven:isRowStyle": "false", "deephaven:isSortable": "true", "deephaven:isStyle": "false", "deephaven:type": "java.lang.String"]
    - Close: type=float32, nullable
       metadata: ["deephaven:inputtable.isKey": "false", "deephaven:isDateFormat": "false", "deephaven:isNumberFormat": "false", "deephaven:isPartitioning": "false", "deephaven:isRowStyle": "false", "deephaven:isSortable": "true", "deephaven:isStyle": "false", "deephaven:type": "float"]
    - Volume: type=int32, nullable
        metadata: ["deephaven:inputtable.isKey": "false", "deephaven:isDateFormat": "false", "deephaven:isNumberFormat": "false", "deephaven:isPartitioning": "false", "deephaven:isRowStyle": "false", "deephaven:isSortable": "true", "deephaven:isStyle": "false", "deephaven:type": "int"]
  metadata: ["deephaven:unsent.attribute.InputTable": ""]
  rows: 4
  col[0][Ticker]: ["XRX" "XYZZY" "GME" "ZNGA"]
  col[1][Close]: [53.8 88.5 453 544.9]
  col[2][Volume]: [87000 6060842 138000000 48300]
Example (RunScript)

This example shows how you can run a server-side script directly via the client and how you can use the script results in the client.

This example requires a Deephaven server to connect to, so it will not work on pkg.go.dev.

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/deephaven/deephaven-core/go/internal/test_tools"
	"github.com/deephaven/deephaven-core/go/pkg/client"
)

func main() {
	// A context is used to set timeouts and deadlines for requests or cancel requests.
	// If you don't have any specific requirements, context.Background() is a good default.
	ctx := context.Background()

	// Let's start a client connection using python as the script language ("groovy" is the other option).
	// Note that the client language must match the language the server was started with.
	cl, err := client.NewClient(ctx, test_tools.GetHost(), test_tools.GetPort(), test_tools.GetAuthType(), test_tools.GetAuthToken(), client.WithConsole("python"))
	if err != nil {
		fmt.Println("error when connecting to server:", err.Error())
		return
	}
	defer cl.Close()

	// First, let's create a new TimeTable, starting one second ago, that gets a new row every 100 ms.
	startTime := time.Now().Add(time.Duration(-1) * time.Second)
	timeTable, err := cl.TimeTable(ctx, time.Duration(100)*time.Millisecond, startTime)
	if err != nil {
		fmt.Println("error when creating new time table:", err.Error())
		return
	}
	// Any tables you create should be eventually released.
	defer timeTable.Release(ctx)

	// Next, let's bind the table to a variable so we can use it in the script.
	// This also makes the table visible to other clients or to the web UI.
	err = cl.BindToVariable(ctx, "my_example_table", timeTable)
	if err != nil {
		fmt.Println("error when binding table to variable:", err.Error())
		return
	}

	// Now, let's run a script to do some arbitrary operations on my_example_table...
	err = cl.RunScript(ctx,
		`
example_table_2 = my_example_table.update(["UpperBinned = upperBin(Timestamp, SECOND)"]).head(5)
`)
	if err != nil {
		fmt.Println("error when running script:", err.Error())
		return
	}

	// Now, we can open example_table_2 to use locally.
	exampleTable2, err := cl.OpenTable(ctx, "example_table_2")
	if err != nil {
		fmt.Println("error when opening table:", err.Error())
		return
	}
	// Don't forget to release it!
	defer exampleTable2.Release(ctx)

	// And if we want to see what data is currently in example_table_2, we can take a snapshot.
	exampleSnapshot, err := exampleTable2.Snapshot(ctx)
	if err != nil {
		fmt.Println("error when snapshotting table:", err.Error())
		return
	}
	// Arrow records must also be released when not used anymore.
	defer exampleSnapshot.Release()

	fmt.Println("Got table snapshot!")
	fmt.Printf("It has %d rows and %d columns", exampleSnapshot.NumRows(), exampleSnapshot.NumCols())

}
Output:

Got table snapshot!
It has 5 rows and 2 columns
Example (TableOps)

This example shows how to manipulate tables using the client.

There are two different ways to manipulate tables: immediate table operations, and query-graph table operations. See the doc comments for doQueryOps and doImmediateOps for an explanation of each. Don't be afraid to mix and match both as the situation requires!

This example requires a Deephaven server to connect to, so it will not work on pkg.go.dev.

package main

import (
	"context"
	"fmt"

	"github.com/deephaven/deephaven-core/go/internal/test_tools"
	"github.com/deephaven/deephaven-core/go/pkg/client"
)

// This example shows how to manipulate tables using the client.
//
// There are two different ways to manipulate tables: immediate table operations, and query-graph table operations.
// See the doc comments for doQueryOps and doImmediateOps for an explanation of each.
// Don't be afraid to mix and match both as the situation requires!
//
// This example requires a Deephaven server to connect to, so it will not work on pkg.go.dev.
func main() {
	normalResult, err := doImmediateOps()
	if err != nil {
		fmt.Println("encountered an error:", err.Error())
		return
	}

	queryResult, err := doQueryOps()
	if err != nil {
		fmt.Println("encountered an error:", err.Error())
		return
	}

	if normalResult != queryResult {
		fmt.Println("results differed!")
		return
	}

	fmt.Println(queryResult)

}

// This function demonstrates how to use immediate table operations.
//
// Immediate table operations take in tables as inputs, and immediately return a table (or an error) as an output.
// They allow for more fine-grained error handling and debugging than query-graph table operations, at the cost of being more verbose.
func doImmediateOps() (string, error) {
	// A context is used to set timeouts and deadlines for requests or cancel requests.
	// If you don't have any specific requirements, context.Background() is a good default.
	ctx := context.Background()

	cl, err := client.NewClient(ctx, test_tools.GetHost(), test_tools.GetPort(), test_tools.GetAuthType(), test_tools.GetAuthToken())
	if err != nil {
		fmt.Println("error when connecting to server:", err.Error())
		return "", err
	}
	defer cl.Close()

	// First, let's create some example data to manipulate.
	sampleRecord := test_tools.ExampleRecord()
	// Note that Arrow records must eventually be released.
	defer sampleRecord.Release()

	// Now we upload the record as a table on the server.
	// We get back a TableHandle, which is a reference to a table on the server.
	baseTable, err := cl.ImportTable(ctx, sampleRecord)
	if err != nil {
		fmt.Println("error when uploading table:", err.Error())
		return "", err
	}
	// Table handles should be released when they are no longer needed.
	defer baseTable.Release(ctx)

	// Now, let's start doing table operations.
	// Maybe I don't like companies whose names are too long or too short, so let's keep only the ones in the middle.
	midStocks, err := baseTable.Where(ctx, "Ticker.length() == 3 || Ticker.length() == 4")
	if err != nil {
		fmt.Println("error when filtering table:", err.Error())
		return "", err
	}
	defer midStocks.Release(ctx)

	// We can also create completely new tables with the client too.
	// Let's make a table whose columns are powers of ten.
	powTenTable, err := cl.EmptyTable(ctx, 10)
	if err != nil {
		fmt.Println("error when creating an empty table:", err.Error())
		return "", err
	}
	defer powTenTable.Release(ctx)
	powTenTable, err = powTenTable.Update(ctx, "Magnitude = (int)pow(10, ii)")
	if err != nil {
		fmt.Println("error when updating a table:", err.Error())
		return "", err
	}
	defer powTenTable.Release(ctx)

	// What if I want to bin the companies according to the magnitude of the Volume column?
	// We can perform an as-of join between two tables to produce another table.
	magStocks, err := midStocks.
		AsOfJoin(ctx, powTenTable, []string{"Volume = Magnitude"}, nil, client.MatchRuleLessThanEqual)
	if err != nil {
		fmt.Println("error when doing an as-of join:", err.Error())
		return "", err
	}
	defer magStocks.Release(ctx)

	// Now, if we want to see the data in each of our tables, we can take snapshots.
	midRecord, err := midStocks.Snapshot(ctx)
	if err != nil {
		fmt.Println("error when snapshotting:", err.Error())
		return "", err
	}
	defer midRecord.Release()
	magRecord, err := magStocks.Snapshot(ctx)
	if err != nil {
		fmt.Println("error when snapshotting:", err.Error())
		return "", err
	}
	defer magRecord.Release()

	return fmt.Sprintf("Data Before:\n%s\nNew data:\n%s\n%s", test_tools.RecordString(sampleRecord), test_tools.RecordString(midRecord), test_tools.RecordString(magRecord)), nil
}

// This function demonstrates how to use query-graph table operations.
//
// Query-graph operations allow you to build up an arbitrary number of table operations into a single object (known as the "query graph")
// and then execute all of the table operations at once.
// This simplifies error handling, is much more concise, and can be more efficient than doing immediate table operations.
func doQueryOps() (string, error) {
	// A context is used to set timeouts and deadlines for requests or cancel requests.
	// If you don't have any specific requirements, context.Background() is a good default.
	ctx := context.Background()

	cl, err := client.NewClient(ctx, test_tools.GetHost(), test_tools.GetPort(), test_tools.GetAuthType(), test_tools.GetAuthToken())
	if err != nil {
		fmt.Println("error when connecting to server:", err.Error())
		return "", err
	}
	defer cl.Close()

	// First, let's create some example data to manipulate.
	sampleRecord := test_tools.ExampleRecord()
	// Note that Arrow records must eventually be released.
	defer sampleRecord.Release()

	// Now we upload the record as a table on the server.
	// We get back a TableHandle, which is a reference to a table on the server.
	baseTable, err := cl.ImportTable(ctx, sampleRecord)
	if err != nil {
		fmt.Println("error when uploading table:", err.Error())
		return "", err
	}
	// Table handles should be released when they are no longer needed.
	defer baseTable.Release(ctx)

	// Now, let's start building a query graph.
	// Maybe I don't like companies whose names are too long or too short, so let's keep only the ones in the middle.
	// Unlike with immediate operations, here midStocks is a QueryNode instead of an actual TableHandle.
	// A QueryNode just holds a list of operations to be performed; it doesn't do anything until it's executed (see below).
	midStocks := baseTable.Query().
		Where("Ticker.length() == 3 || Ticker.length() == 4")

	// We can create completely new tables in the query graph too.
	// Let's make a table whose columns are powers of ten.
	powTenTable := cl.
		EmptyTableQuery(10).
		Update("Magnitude = (int)pow(10, ii)")

	// What if I want to bin the companies according to the magnitude of the Volume column?
	// Query-graph methods can take other query nodes as arguments to build up arbitrarily complicated requests,
	// so we can perform an as-of join between two query nodes just fine.
	magStocks := midStocks.
		AsOfJoin(powTenTable, []string{"Volume = Magnitude"}, nil, client.MatchRuleLessThanEqual)

	// And now, we can execute the query graph we have built.
	// This turns our QueryNodes into usable TableHandles.
	tables, err := cl.ExecBatch(ctx, midStocks, magStocks)
	if err != nil {
		fmt.Println("error when executing query:", err.Error())
		return "", err
	}
	// The order of the tables in the returned list is the same as the order of the QueryNodes passed as arguments.
	midTable, magTable := tables[0], tables[1]
	defer midTable.Release(ctx)
	defer magTable.Release(ctx)

	// Now, if we want to see the data in each of our tables, we can take snapshots.
	midRecord, err := midTable.Snapshot(ctx)
	if err != nil {
		fmt.Println("error when snapshotting:", err.Error())
		return "", err
	}
	defer midRecord.Release()
	magRecord, err := magTable.Snapshot(ctx)
	if err != nil {
		fmt.Println("error when snapshotting:", err.Error())
		return "", err
	}
	defer magRecord.Release()

	return fmt.Sprintf("Data Before:\n%s\nNew data:\n%s\n%s", test_tools.RecordString(sampleRecord), test_tools.RecordString(midRecord), test_tools.RecordString(magRecord)), nil
}
Output:

Data Before:
record:
  schema:
  fields: 3
    - Ticker: type=utf8
    - Close: type=float32
    - Volume: type=int32
  rows: 7
  col[0][Ticker]: ["XRX" "XYZZY" "IBM" "GME" "AAPL" "ZNGA" "T"]
  col[1][Close]: [53.8 88.5 38.7 453 26.7 544.9 13.4]
  col[2][Volume]: [87000 6060842 138000 138000000 19000 48300 1500]

New data:
record:
  schema:
  fields: 3
    - Ticker: type=utf8, nullable
        metadata: ["deephaven:isDateFormat": "false", "deephaven:isNumberFormat": "false", "deephaven:isPartitioning": "false", "deephaven:isRowStyle": "false", "deephaven:isSortable": "true", "deephaven:isStyle": "false", "deephaven:type": "java.lang.String"]
    - Close: type=float32, nullable
       metadata: ["deephaven:isDateFormat": "false", "deephaven:isNumberFormat": "false", "deephaven:isPartitioning": "false", "deephaven:isRowStyle": "false", "deephaven:isSortable": "true", "deephaven:isStyle": "false", "deephaven:type": "float"]
    - Volume: type=int32, nullable
        metadata: ["deephaven:isDateFormat": "false", "deephaven:isNumberFormat": "false", "deephaven:isPartitioning": "false", "deephaven:isRowStyle": "false", "deephaven:isSortable": "true", "deephaven:isStyle": "false", "deephaven:type": "int"]
  metadata: ["deephaven:attribute.AddOnly": "true", "deephaven:attribute.AppendOnly": "true", "deephaven:attribute_type.AddOnly": "java.lang.Boolean", "deephaven:attribute_type.AppendOnly": "java.lang.Boolean"]
  rows: 5
  col[0][Ticker]: ["XRX" "IBM" "GME" "AAPL" "ZNGA"]
  col[1][Close]: [53.8 38.7 453 26.7 544.9]
  col[2][Volume]: [87000 138000 138000000 19000 48300]

record:
  schema:
  fields: 4
    - Ticker: type=utf8, nullable
        metadata: ["deephaven:isDateFormat": "false", "deephaven:isNumberFormat": "false", "deephaven:isPartitioning": "false", "deephaven:isRowStyle": "false", "deephaven:isSortable": "true", "deephaven:isStyle": "false", "deephaven:type": "java.lang.String"]
    - Close: type=float32, nullable
       metadata: ["deephaven:isDateFormat": "false", "deephaven:isNumberFormat": "false", "deephaven:isPartitioning": "false", "deephaven:isRowStyle": "false", "deephaven:isSortable": "true", "deephaven:isStyle": "false", "deephaven:type": "float"]
    - Volume: type=int32, nullable
        metadata: ["deephaven:isDateFormat": "false", "deephaven:isNumberFormat": "false", "deephaven:isPartitioning": "false", "deephaven:isRowStyle": "false", "deephaven:isSortable": "true", "deephaven:isStyle": "false", "deephaven:type": "int"]
    - Magnitude: type=int32, nullable
           metadata: ["deephaven:isDateFormat": "false", "deephaven:isNumberFormat": "false", "deephaven:isPartitioning": "false", "deephaven:isRowStyle": "false", "deephaven:isSortable": "true", "deephaven:isStyle": "false", "deephaven:type": "int"]
  rows: 5
  col[0][Ticker]: ["XRX" "IBM" "GME" "AAPL" "ZNGA"]
  col[1][Close]: [53.8 38.7 453 26.7 544.9]
  col[2][Volume]: [87000 138000 138000000 19000 48300]
  col[3][Magnitude]: [10000 100000 100000000 10000 10000]

Constants

const DefaultAuth = "Anonymous"

DefaultAuth is the default authentication method.

const TokenTimeoutConfigConstant = "http.session.durationMs"

TokenTimeoutConfigConstant is the configuration constant specifying the token timeout interval.

Variables

var ErrClosedClient = errors.New("client is closed")

ErrClosedClient is returned as an error when trying to perform a network operation on a client that has been closed.

var ErrDifferentClients = errors.New("tried to use tables from different clients")

ErrDifferentClients is returned when performing a table operation on handles that come from different Client structs.

var ErrEmptyMerge = errors.New("no non-nil tables were provided to merge")

ErrEmptyMerge is returned by merge operations when all of the table arguments are nil (or when no table arguments are provided at all).

var ErrInvalidTableHandle = errors.New("tried to use a nil, zero-value, or released table handle")

ErrInvalidTableHandle is returned by most table methods when called on a table handle that contains its zero value or has been already released.

var ErrNoConsole = errors.New("the client was not started with console support (see WithConsole)")

ErrNoConsole is returned by script-related methods such as RunScript and BindToVariable when the client was not created with console support (see WithConsole).

Functions

This section is empty.

Types

type AggBuilder

type AggBuilder struct {
	// contains filtered or unexported fields
}

AggBuilder is the main way to construct aggregations with multiple parts in them. Each one of the methods is the same as the corresponding method on a QueryNode. The columns to group by are selected in AggBy.
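
For example, here is a rough sketch of combining several aggregations into one AggBy call, assuming ctx and a *TableHandle named table with the Ticker, Close, and Volume columns used in the examples above:

agg := client.NewAggBuilder().
	Avg("Close").    // mean Close within each group
	Sum("Volume").   // total Volume within each group
	Count("NumRows") // number of rows in each group, stored in a new "NumRows" column
grouped, err := table.AggBy(ctx, agg, "Ticker")
if err != nil {
	fmt.Println("error when aggregating:", err.Error())
	return
}
defer grouped.Release(ctx)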

func NewAggBuilder

func NewAggBuilder() *AggBuilder

func (*AggBuilder) AbsSum

func (b *AggBuilder) AbsSum(cols ...string) *AggBuilder

AbsSum creates an aggregator that computes the total sum of absolute values, within an aggregation group, for each input column. The source columns are specified by cols.

func (*AggBuilder) Avg

func (b *AggBuilder) Avg(cols ...string) *AggBuilder

Avg creates an aggregator that computes the average (mean) of values, within an aggregation group, for each input column. The source columns are specified by cols.

func (*AggBuilder) Count

func (b *AggBuilder) Count(col string) *AggBuilder

Count returns an aggregator that computes the number of elements within an aggregation group. The count of each group is stored in a new column named after the col argument.

func (*AggBuilder) First

func (b *AggBuilder) First(cols ...string) *AggBuilder

First creates an aggregator that computes the first value, within an aggregation group, for each input column. The source columns are specified by cols.

func (*AggBuilder) Group

func (b *AggBuilder) Group(cols ...string) *AggBuilder

Group creates an aggregator that computes an array of all values within an aggregation group, for each input column. The source columns are specified by cols.

func (*AggBuilder) Last

func (b *AggBuilder) Last(cols ...string) *AggBuilder

Last creates an aggregator that computes the last value, within an aggregation group, for each input column. The source columns are specified by cols.

func (*AggBuilder) Max

func (b *AggBuilder) Max(cols ...string) *AggBuilder

Max returns an aggregator that computes the maximum value, within an aggregation group, for each input column. The source columns are specified by cols.

func (*AggBuilder) Median

func (b *AggBuilder) Median(cols ...string) *AggBuilder

Median creates an aggregator that computes the median value, within an aggregation group, for each input column. The source columns are specified by cols.

func (*AggBuilder) Min

func (b *AggBuilder) Min(cols ...string) *AggBuilder

Min creates an aggregator that computes the minimum value, within an aggregation group, for each input column. The source columns are specified by cols.

func (*AggBuilder) Percentile

func (b *AggBuilder) Percentile(percentile float64, cols ...string) *AggBuilder

Percentile returns an aggregator that computes the designated percentile of values, within an aggregation group, for each input column. The source columns are specified by cols.

func (*AggBuilder) StdDev

func (b *AggBuilder) StdDev(cols ...string) *AggBuilder

StdDev returns an aggregator that computes the sample standard deviation of values, within an aggregation group, for each input column. The source columns are specified by cols.

Sample standard deviation is calculated using Bessel's correction (https://en.wikipedia.org/wiki/Bessel%27s_correction), which ensures that the sample variance will be an unbiased estimator of population variance.

func (*AggBuilder) Sum

func (b *AggBuilder) Sum(cols ...string) *AggBuilder

Sum returns an aggregator that computes the total sum of values, within an aggregation group, for each input column. The source columns are specified by cols.

func (*AggBuilder) Variance

func (b *AggBuilder) Variance(cols ...string) *AggBuilder

Variance returns an aggregator that computes the sample variance of values, within an aggregation group, for each input column. The source columns are specified by cols.

Sample variance is calculated using Bessel's correction (https://en.wikipedia.org/wiki/Bessel%27s_correction), which ensures that the sample variance will be an unbiased estimator of population variance.

func (*AggBuilder) WeightedAvg

func (b *AggBuilder) WeightedAvg(weightCol string, cols ...string) *AggBuilder

WeightedAvg returns an aggregator that computes the weighted average of values, within an aggregation group, for each input column. The column to weight by is specified by weightCol. The source columns are specified by cols.

type AppendOnlyInputTable

type AppendOnlyInputTable struct {
	*TableHandle
}

AppendOnlyInputTable is a handle to an append-only input table on the server. The only difference between this handle and a normal table handle is the ability to add data to it using AddTable.

func (*AppendOnlyInputTable) AddTable

func (th *AppendOnlyInputTable) AddTable(ctx context.Context, toAdd *TableHandle) error

AddTable appends data from the given table to the end of this table. This will automatically update all tables derived from this one.

type Client

type Client struct {
	// contains filtered or unexported fields
}

Client maintains a connection to a Deephaven server. It can be used to run scripts, create new tables, execute queries, etc. Check the various methods of Client to learn more.

func NewClient

func NewClient(ctx context.Context, host string, port string, authType string, authToken string, options ...ClientOption) (client *Client, err error)

NewClient starts a connection to a Deephaven server.

The client should be closed using Close() after it is done being used.

Keepalive messages are sent automatically by the client to the server at a regular interval so that the connection remains open. The provided context is saved and used to send keepalive messages.

host, port, and auth are used to connect to the Deephaven server. host and port are the Deephaven server host and port.

authType is the type of authentication to use. This can be 'Anonymous', 'Basic', or any custom-built authenticator in the server, such as "io.deephaven.authentication.psk.PskAuthenticationHandler". The default is 'Anonymous'. To see what authentication methods are available on the Deephaven server, navigate to: http://<host>:<port>/jsapi/authentication/.

authToken is the authentication token string. When authType is 'Basic', it must be "user:password"; when authType is DefaultAuth, it will be ignored; when authType is a custom-built authenticator, it must conform to the specific requirements of the authenticator.

The option arguments can be used to specify other settings for the client. See the With<XYZ> methods (e.g. WithConsole) for details on what options are available.

func (Client) BindToVariable

func (console Client) BindToVariable(ctx context.Context, name string, table *TableHandle) error

BindToVariable binds a table reference to a given name on the server so that it can be referenced by other clients or the web UI.

If WithConsole was not passed when creating the client, this will return ErrNoConsole.

func (*Client) Close

func (client *Client) Close() error

Close closes the connection to the server and frees any associated resources. Once this method is called, the client and any TableHandles from it cannot be used.

func (*Client) Closed

func (client *Client) Closed() bool

Closed checks if the client is closed, i.e. it can no longer perform operations on the server.

func (Client) EmptyTable

func (ts Client) EmptyTable(ctx context.Context, numRows int64) (*TableHandle, error)

EmptyTable creates a new empty table in the global scope.

The table will have zero columns and the specified number of rows.

func (Client) EmptyTableQuery

func (ts Client) EmptyTableQuery(numRows int64) QueryNode

EmptyTableQuery is like EmptyTable, except it can be used as part of a query graph.

func (*Client) ExecBatch

func (client *Client) ExecBatch(ctx context.Context, nodes ...QueryNode) ([]*TableHandle, error)

ExecBatch executes a query graph on the server and returns the resulting tables.

All of the operations in the query graph will be performed in a single request, so ExecBatch is usually more efficient than ExecSerial.

If this function completes successfully, the number of tables returned will always match the number of query nodes passed. The first table in the returned list corresponds to the first node argument, the second table in the returned list corresponds to the second node argument, etc.

This may return a QueryError if the query is invalid.

See the TableOps example and the QueryNode docs for more details on how this method should be used.

func (*Client) ExecSerial

func (client *Client) ExecSerial(ctx context.Context, nodes ...QueryNode) ([]*TableHandle, error)

ExecSerial executes a query graph on the server and returns the resulting tables.

This function makes a request for each table operation in the query graph. Consider using ExecBatch to batch all of the table operations into a single request, which can be more efficient.

If this function completes successfully, the number of tables returned will always match the number of query nodes passed. The first table in the returned list corresponds to the first node argument, the second table in the returned list corresponds to the second node argument, etc.

This may return a QueryError if the query is invalid.

See the TableOps example and the QueryNode docs for more details on how this method should be used.
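
As a rough sketch (assuming cl, ctx, and a *TableHandle named baseTable as in the TableOps example), ExecSerial is called exactly like ExecBatch; only the number of server requests differs:

node := baseTable.Query().
	Where("Volume >= 20000").
	Sort("Close")
// Each operation in the node above is sent to the server as its own request.
tables, err := cl.ExecSerial(ctx, node)
if err != nil {
	fmt.Println("error when executing query:", err.Error())
	return
}
result := tables[0]
defer result.Release(ctx)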

func (Client) ImportTable

func (fs Client) ImportTable(ctx context.Context, rec arrow.Record) (*TableHandle, error)

ImportTable uploads a table to the Deephaven server. The table can then be manipulated and referenced using the returned TableHandle.

func (*Client) ListOpenableTables

func (client *Client) ListOpenableTables(ctx context.Context) ([]string, error)

ListOpenableTables returns a list of the (global) tables that can be opened with OpenTable. Tables bound to variables by other clients or the web UI will show up in this list, though it is not guaranteed how long it will take for new tables to appear.
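
A rough sketch, assuming cl and ctx as in the examples above:

names, err := cl.ListOpenableTables(ctx)
if err != nil {
	fmt.Println("error when listing tables:", err.Error())
	return
}
for _, name := range names {
	fmt.Println("openable table:", name)
}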

func (Client) NewAppendOnlyInputTableFromSchema

func (its Client) NewAppendOnlyInputTableFromSchema(ctx context.Context, schema *arrow.Schema) (*AppendOnlyInputTable, error)

NewAppendOnlyInputTableFromSchema creates a new append-only input table with columns according to the provided schema.
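
A rough sketch, assuming cl and ctx as in the examples above and reusing the Ticker/Close/Volume schema from the InputTable example; newDataTable is a hypothetical *TableHandle with matching columns:

schema := arrow.NewSchema(
	[]arrow.Field{
		{Name: "Ticker", Type: arrow.BinaryTypes.String},
		{Name: "Close", Type: arrow.PrimitiveTypes.Float32},
		{Name: "Volume", Type: arrow.PrimitiveTypes.Int32},
	},
	nil,
)
appendTable, err := cl.NewAppendOnlyInputTableFromSchema(ctx, schema)
if err != nil {
	fmt.Println("error when creating input table:", err.Error())
	return
}
defer appendTable.Release(ctx)

// Unlike a key-backed input table, new rows are simply appended to the end.
err = appendTable.AddTable(ctx, newDataTable)
if err != nil {
	fmt.Println("error when adding data:", err.Error())
	return
}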

func (Client) NewAppendOnlyInputTableFromTable

func (its Client) NewAppendOnlyInputTableFromTable(ctx context.Context, table *TableHandle) (*AppendOnlyInputTable, error)

NewAppendOnlyInputTableFromTable creates a new append-only input table with the same columns as the provided table.

func (Client) NewKeyBackedInputTableFromSchema

func (its Client) NewKeyBackedInputTableFromSchema(ctx context.Context, schema *arrow.Schema, keyColumns ...string) (*KeyBackedInputTable, error)

NewKeyBackedInputTableFromSchema creates a new key-backed input table with columns according to the provided schema. The columns to use as the keys are specified by keyColumns.

func (Client) NewKeyBackedInputTableFromTable

func (its Client) NewKeyBackedInputTableFromTable(ctx context.Context, table *TableHandle, keyColumns ...string) (*KeyBackedInputTable, error)

NewKeyBackedInputTableFromTable creates a new key-backed input table with the same columns as the provided table. The columns to use as the keys are specified by keyColumns.

func (Client) OpenTable

func (ts Client) OpenTable(ctx context.Context, name string) (*TableHandle, error)

OpenTable opens a globally-scoped table with the given name on the server.

func (*Client) RunScript

func (client *Client) RunScript(ctx context.Context, script string) error

RunScript executes a script on the Deephaven server.

The script language depends on the argument passed to WithConsole when creating the client. If WithConsole was not provided when creating the client, this method will return ErrNoConsole.

func (Client) TimeTable

func (ts Client) TimeTable(ctx context.Context, period any, startTime any) (*TableHandle, error)

TimeTable creates a ticking time table in the global scope. The period is the time between adding new rows to the table. It needs to be either a signed integer type, time.Duration, or a string. The startTime is the time of the first row in the table. It needs to be either a signed integer type, time.Time, or a string.

func (Client) TimeTableQuery

func (ts Client) TimeTableQuery(period time.Duration, startTime time.Time) QueryNode

TimeTableQuery is like TimeTable, except it can be used as part of a query graph.

type ClientOption

type ClientOption interface {
	// contains filtered or unexported methods
}

A ClientOption configures some aspect of a client connection when passed to NewClient. See the With<XYZ> methods for possible client options.

func WithConsole

func WithConsole(scriptLanguage string) ClientOption

WithConsole allows the client to run scripts on the server using the RunScript method and bind tables to variables using BindToVariable.

The script language can be either "python" or "groovy", and must match the language used on the server.

func WithNoTableLeakWarning

func WithNoTableLeakWarning() ClientOption

WithNoTableLeakWarning disables the automatic TableHandle leak check.

Normally, a warning is printed whenever a TableHandle is forgotten without calling Release on it, and a GC finalizer automatically frees the table. However, TableHandles are automatically released by the server whenever a client connection closes. So, it can be okay for short-lived clients that don't create large tables to forget their TableHandles and rely on them being freed when the client closes.

There is no guarantee on when the GC will run, so long-lived clients that forget their TableHandles can end up exhausting the server's resources before any of the handles are GCed automatically.
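
A rough sketch of passing the option, for a short-lived client that relies on the server releasing its handles when the connection closes; host and port here are placeholders:

cl, err := client.NewClient(ctx, host, port, client.DefaultAuth, "",
	client.WithNoTableLeakWarning())
if err != nil {
	fmt.Println("error when connecting to server:", err.Error())
	return
}
defer cl.Close()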

type KeyBackedInputTable

type KeyBackedInputTable struct {
	*TableHandle
}

KeyBackedInputTable is a handle to a key-backed input table on the server. The only difference between this handle and a normal table handle is the ability to add and remove data to and from it using AddTable and DeleteTable.

func (*KeyBackedInputTable) AddTable

func (th *KeyBackedInputTable) AddTable(ctx context.Context, toAdd *TableHandle) error

AddTable merges the keys from the given table into this table. If a key does not exist in the current table, the entire row is added, otherwise the new data row replaces the existing key. This will automatically update all tables derived from this one.

func (*KeyBackedInputTable) DeleteTable

func (th *KeyBackedInputTable) DeleteTable(ctx context.Context, toDelete *TableHandle) error

DeleteTable deletes the rows with the given keys from this table. The provided table must consist only of columns that were specified as key columns in the input table. This will automatically update all tables derived from this one.
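
A rough sketch, assuming cl, ctx, an input table keyed on "Ticker", and a hypothetical Arrow record keysRec containing only a "Ticker" column that lists the keys to remove:

keysTable, err := cl.ImportTable(ctx, keysRec)
if err != nil {
	fmt.Println("error when importing keys:", err.Error())
	return
}
defer keysTable.Release(ctx)

// Every row of the input table whose key appears in keysTable is removed.
err = inputTable.DeleteTable(ctx, keysTable)
if err != nil {
	fmt.Println("error when deleting rows:", err.Error())
	return
}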

type MatchRule

type MatchRule int

MatchRule is a comparison rule for use with AsOfJoin. See its documentation for more details.

const (
	MatchRuleLessThanEqual MatchRule = iota // Less-than-or-equal, used for an as-of join.
	MatchRuleLessThan
	MatchRuleGreaterThanEqual // Greater-than-or-equal, used for a reverse as-of join.
	MatchRuleGreaterThan
)

func (MatchRule) String

func (mr MatchRule) String() string

type QueryError

type QueryError struct {
	// contains filtered or unexported fields
}

A QueryError may be returned by ExecSerial or ExecBatch as the result of an invalid query.

func (QueryError) Error

func (err QueryError) Error() string

Error returns detailed information about all of the sub-errors that occurred inside this query error. Each sub-error is given a pseudo-traceback of the query operations that caused it.

func (QueryError) Unwrap

func (err QueryError) Unwrap() error

Unwrap returns the first part of the query error.
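
A rough sketch of inspecting such an error with the standard errors package, assuming cl, ctx, and a QueryNode named node:

tables, err := cl.ExecBatch(ctx, node)
if err != nil {
	// The error may be a QueryError describing which operations failed.
	var qErr client.QueryError
	if errors.As(err, &qErr) {
		fmt.Println("query failed:", qErr.Error())
	}
	return
}
defer tables[0].Release(ctx)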

type QueryNode

type QueryNode struct {
	// contains filtered or unexported fields
}

A QueryNode is a pointer somewhere into a "query graph". A "query graph" is effectively a list of table operations that can be executed all at once.

Table operations on a QueryNode return other QueryNodes. Several operations can be chained together to build up an entire query graph, which can then be executed using Client.ExecSerial or Client.ExecBatch to turn the QueryNode into a TableHandle. See the TableOps example for more details on how to use query-graph table operations.

All QueryNode methods are goroutine-safe.

func MergeQuery

func MergeQuery(sortBy string, tables ...QueryNode) QueryNode

MergeQuery combines two or more tables into one table as part of a query. This essentially appends the tables on top of each other.

If sortBy is provided, the resulting table will be sorted based on that column.

"nil Table" nodes (i.e. QueryNodes returned from calling Query() on a nil *TableHandle) are ignored.

At least one non-nil query node must be provided, otherwise an ErrEmptyMerge will be returned by ExecSerial or ExecBatch.
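
A rough sketch, assuming cl, ctx, and two *TableHandles t1 and t2 with matching schemas; an empty sortBy is taken here to mean no sorting:

merged := client.MergeQuery("", t1.Query(), t2.Query())
tables, err := cl.ExecBatch(ctx, merged)
if err != nil {
	fmt.Println("error when merging:", err.Error())
	return
}
mergedTable := tables[0]
defer mergedTable.Release(ctx)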

func (QueryNode) AbsSumBy

func (qb QueryNode) AbsSumBy(by ...string) QueryNode

AbsSumBy returns the total sum of absolute values for each group. Null values are ignored. Columns not used in the grouping must be numeric.

func (QueryNode) AggBy

func (qb QueryNode) AggBy(agg *AggBuilder, columnsToGroupBy ...string) QueryNode

AggBy applies a list of aggregations to table data. See the docs on AggBuilder for details on what each of the aggregation types do.

func (QueryNode) AsOfJoin

func (qb QueryNode) AsOfJoin(rightTable QueryNode, matchColumns []string, joins []string, matchRule MatchRule) QueryNode

AsOfJoin joins data from a pair of tables - a left and right table - based upon one or more match columns. The match columns establish key identifiers in the left table that will be used to find data in the right table. Any data types can be chosen as keys.

When using AsOfJoin, the first N-1 match columns are exactly matched. The last match column is used to find the key values from the right table that are closest to the values in the left table without going over the left value. For example, when using MatchRuleLessThanEqual, if the right table contains a value 5 and the left table contains values 4 and 6, the right table's 5 will be matched on the left table's 6.

The output table contains all of the rows and columns of the left table plus additional columns containing data from the right table. For columns optionally appended to the left table, row values equal the row values from the right table where the keys from the left table most closely match the keys from the right table, as defined above. If there is no matching key in the right table, appended row values are NULL.

matchColumns is the columns to match.

joins is the columns to add from the right table.

matchRule is the match rule for the join. Use MatchRuleLessThanEqual for a normal as-of join, or MatchRuleGreaterThanEqual for a reverse as-of join.

func (QueryNode) AvgBy

func (qb QueryNode) AvgBy(by ...string) QueryNode

AvgBy returns the average (mean) of each non-key column for each group. Null values are ignored. Columns not used in the grouping must be numeric.

func (QueryNode) Count

func (qb QueryNode) Count(col string) QueryNode

Count counts the number of values in the specified column and returns it as a table with one row and one column.

func (QueryNode) CountBy

func (qb QueryNode) CountBy(resultCol string, by ...string) QueryNode

CountBy returns the number of rows for each group. The count of each group is stored in a new column named after the resultCol argument.

func (QueryNode) DropColumns

func (qb QueryNode) DropColumns(cols ...string) QueryNode

DropColumns creates a table with the same number of rows as the source table but omits any columns included in the arguments.

func (QueryNode) ExactJoin

func (qb QueryNode) ExactJoin(rightTable QueryNode, matchOn []string, joins []string) QueryNode

ExactJoin joins data from a pair of tables - a left and right table - based upon a set of match columns. The match columns establish key identifiers in the left table that will be used to find data in the right table. Any data types can be chosen as keys, and keys can be constructed from multiple values.

The output table contains all of the rows and columns of the left table plus additional columns containing data from the right table. For columns appended to the left table, row values equal the row values from the right table where the key values in the left and right tables are equal. If there are zero or multiple matches, the operation will fail.

matchOn is the columns to match.

joins is the columns to add from the right table.

func (QueryNode) FirstBy

func (qb QueryNode) FirstBy(by ...string) QueryNode

FirstBy returns the first row for each group. If no columns are given, only the first row of the table is returned.

func (QueryNode) GroupBy

func (qb QueryNode) GroupBy(by ...string) QueryNode

GroupBy groups column content into arrays. Columns not in the aggregation become array-type. If no group-by columns are given, the content of each column is grouped into its own array.

func (QueryNode) Head

func (qb QueryNode) Head(numRows int64) QueryNode

Head returns a table with a specific number of rows from the beginning of the source table.

func (QueryNode) HeadBy

func (qb QueryNode) HeadBy(numRows int64, columnsToGroupBy ...string) QueryNode

HeadBy returns the first numRows rows for each group.

func (QueryNode) Join

func (qb QueryNode) Join(rightTable QueryNode, matchOn []string, joins []string, reserveBits int32) QueryNode

Join joins data from a pair of tables - a left and right table - based upon a set of match columns. The match columns establish key identifiers in the left table that will be used to find data in the right table. Any data types can be chosen as keys, and keys can be constructed from multiple values.

The output table contains rows that have matching values in both tables. Rows that do not have matching criteria will not be included in the result. If there are multiple matches between a row from the left table and rows from the right table, all matching combinations will be included. If no match columns are specified, every combination of left and right table rows is included.

matchOn is the columns to match.

joins is the columns to add from the right table.

reserveBits is the number of bits of key-space to initially reserve per group. Set it to 10 if you are unsure.

func (QueryNode) LastBy

func (qb QueryNode) LastBy(by ...string) QueryNode

LastBy returns the last row for each group. If no columns are given, only the last row of the table is returned.

func (QueryNode) LazyUpdate

func (qb QueryNode) LazyUpdate(formulas ...string) QueryNode

LazyUpdate creates a new table containing a new, cached, formula column for each argument. The returned table also includes all the original columns from the source table.

func (QueryNode) MaxBy

func (qb QueryNode) MaxBy(by ...string) QueryNode

MaxBy returns the maximum value for each group. Null values are ignored. Columns not used in the grouping must be numeric.

func (QueryNode) MedianBy

func (qb QueryNode) MedianBy(by ...string) QueryNode

MedianBy returns the median value for each group. Null values are ignored. Columns not used in the grouping must be numeric.

func (QueryNode) MinBy

func (qb QueryNode) MinBy(by ...string) QueryNode

MinBy returns the minimum value for each group. Null values are ignored. Columns not used in the grouping must be numeric.

func (QueryNode) NaturalJoin

func (qb QueryNode) NaturalJoin(rightTable QueryNode, matchOn []string, joins []string) QueryNode

NaturalJoin joins data from a pair of tables - a left and right table - based upon one or more match columns. The match columns establish key identifiers in the left table that will be used to find data in the right table. Any data types can be chosen as keys.

The output table contains all of the rows and columns of the left table plus additional columns containing data from the right table. For columns appended to the left table, row values equal the row values from the right table where the key values in the left and right tables are equal. If there is no matching key in the right table, appended row values are NULL. If there are multiple matches, the operation will fail.

matchOn is the columns to match.

joins is the columns to add from the right table.

func (QueryNode) Select

func (qb QueryNode) Select(formulas ...string) QueryNode

Select creates a new in-memory table that includes one column for each argument. Any columns not specified in the arguments will not appear in the resulting table.

func (QueryNode) SelectDistinct

func (qb QueryNode) SelectDistinct(columnNames ...string) QueryNode

SelectDistinct creates a new table containing all of the unique values for a set of key columns. When SelectDistinct is used on multiple columns, it looks for distinct sets of values in the selected columns.

func (QueryNode) Sort

func (qb QueryNode) Sort(cols ...string) QueryNode

Sort returns a new table with rows sorted in a smallest to largest order based on the listed column(s).

func (QueryNode) SortBy

func (qb QueryNode) SortBy(cols ...SortColumn) QueryNode

SortBy returns a new table with rows sorted in the order specified by the listed column(s).

func (QueryNode) StdBy

func (qb QueryNode) StdBy(by ...string) QueryNode

StdBy returns the sample standard deviation for each group. Null values are ignored. Columns not used in the grouping must be numeric.

Sample standard deviation is calculated using Bessel's correction (https://en.wikipedia.org/wiki/Bessel%27s_correction), which ensures that the sample variance will be an unbiased estimator of population variance.

func (QueryNode) SumBy

func (qb QueryNode) SumBy(by ...string) QueryNode

SumBy returns the total sum for each group. Null values are ignored. Columns not used in the grouping must be numeric.

func (QueryNode) Tail

func (qb QueryNode) Tail(numRows int64) QueryNode

Tail returns a table with a specific number of rows from the end of the source table.

func (QueryNode) TailBy

func (qb QueryNode) TailBy(numRows int64, columnsToGroupBy ...string) QueryNode

TailBy returns the last numRows rows for each group.

func (QueryNode) Ungroup

func (qb QueryNode) Ungroup(colsToUngroupBy []string, nullFill bool) QueryNode

Ungroup ungroups column content. It is the inverse of the GroupBy method. Ungroup unwraps columns containing either Deephaven arrays or Java arrays. nullFill indicates whether or not missing cells may be filled with null. Set it to true if you are unsure.
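
A rough sketch of a GroupBy/Ungroup round trip, assuming cl, ctx, and a *TableHandle named table with Ticker, Close, and Volume columns:

node := table.Query().
	GroupBy("Ticker").                         // one array row of Close/Volume per Ticker
	Ungroup([]string{"Close", "Volume"}, true) // expand the arrays back into rows
tables, err := cl.ExecBatch(ctx, node)
if err != nil {
	fmt.Println("error when executing query:", err.Error())
	return
}
defer tables[0].Release(ctx)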

func (QueryNode) Update

func (qb QueryNode) Update(formulas ...string) QueryNode

Update creates a new table containing a new, in-memory column for each argument. The returned table also includes all the original columns from the source table.

func (QueryNode) UpdateView

func (qb QueryNode) UpdateView(formulas ...string) QueryNode

UpdateView creates a new table containing a new, formula column for each argument. When using UpdateView, the new columns are not stored in memory. Rather, a formula is stored that is used to recalculate each cell every time it is accessed. The returned table also includes all the original columns from the source table.

func (QueryNode) VarBy

func (qb QueryNode) VarBy(by ...string) QueryNode

VarBy returns the sample variance for each group. Null values are ignored. Columns not used in the grouping must be numeric.

Sample variance is calculated using Bessel's correction (https://en.wikipedia.org/wiki/Bessel%27s_correction), which ensures that the sample variance will be an unbiased estimator of population variance.

func (QueryNode) View

func (qb QueryNode) View(formulas ...string) QueryNode

View creates a new formula table that includes one column for each argument. When using View, the data being requested is not stored in memory. Rather, a formula is stored that is used to recalculate each cell every time it is accessed.

func (QueryNode) Where

func (qb QueryNode) Where(filters ...string) QueryNode

Where filters rows of data from the source table. It returns a new table with only the rows meeting the filter criteria of the source table.

type SortColumn

type SortColumn struct {
	// contains filtered or unexported fields
}

SortColumn is a pair of a column and a direction to sort it by.

func SortAsc

func SortAsc(colName string) SortColumn

SortAsc specifies that a particular column should be sorted in ascending order.

func SortDsc

func SortDsc(colName string) SortColumn

SortDsc specifies that a particular column should be sorted in descending order.
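
A minimal sketch of SortAsc and SortDsc used together with SortBy (the handle tbl, the context ctx, and the Symbol and Price columns are assumed for illustration; error handling abbreviated):

// Sort by Symbol ascending, then by Price descending.
sorted, err := tbl.SortBy(ctx, client.SortAsc("Symbol"), client.SortDsc("Price"))
if err != nil {
	fmt.Println("error when sorting:", err.Error())
	return
}
defer sorted.Release(ctx)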

type TableHandle

type TableHandle struct {
	// contains filtered or unexported fields
}

A TableHandle is a reference to a table stored on the Deephaven server.

It should eventually be released using Release() once it is no longer needed on the client. Releasing a table handle does not affect table handles derived from it. Once a TableHandle has been released, no other methods should be called on it.

All TableHandle methods are goroutine-safe.

A TableHandle's zero value acts identically to a TableHandle that has been released. A nil TableHandle pointer also acts like a released table with one key exception: The Merge and MergeQuery methods will simply ignore nil handles.

See the TableOps example for more details on how to manipulate and use TableHandles.
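
As a rough sketch of the release discipline described above (tbl is an existing *TableHandle, ctx a context.Context, and the Price column is hypothetical; error handling abbreviated):

// Derive a new handle, then release the source handle.
// Derived handles stay valid after their parent is released.
filtered, err := tbl.Where(ctx, "Price > 100")
if err != nil {
	fmt.Println("error when filtering:", err.Error())
	return
}
tbl.Release(ctx) // does not affect the table behind filtered
defer filtered.Release(ctx)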

func Merge

func Merge(ctx context.Context, sortBy string, tables ...*TableHandle) (*TableHandle, error)

Merge combines two or more tables into one table. This essentially appends the tables on top of each other.

If sortBy is provided, the resulting table will be sorted based on that column.

Any nil TableHandle pointers passed in are ignored. At least one non-nil *TableHandle must be provided, otherwise an ErrEmptyMerge will be returned.
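
A sketch of Merge, assuming t1, t2, and t3 are existing *TableHandle values with a common schema and a Timestamp column (error handling abbreviated):

// Stack the three tables and sort the result by Timestamp.
// Any nil handles in the argument list would simply be skipped.
merged, err := client.Merge(ctx, "Timestamp", t1, t2, t3)
if err != nil {
	fmt.Println("error when merging:", err.Error())
	return
}
defer merged.Release(ctx)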

func (*TableHandle) AbsSumBy

func (th *TableHandle) AbsSumBy(ctx context.Context, cols ...string) (*TableHandle, error)

AbsSumBy returns the total sum of absolute values for each group. Null values are ignored. Columns not used in the grouping must be numeric.

func (*TableHandle) AggBy

func (th *TableHandle) AggBy(ctx context.Context, agg *AggBuilder, by ...string) (*TableHandle, error)

AggBy applies a list of aggregations to table data. See the docs on AggBuilder for details on what each of the aggregation types do.
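
A hypothetical AggBy call, assuming an AggBuilder constructed with client.NewAggBuilder whose Avg and Sum methods accept column specifications as documented earlier on this page (tbl and the Symbol, Price, and Size columns are illustrative; error handling abbreviated):

// Compute a per-Symbol average price and total size in one pass.
agg := client.NewAggBuilder().Avg("AvgPrice = Price").Sum("TotalSize = Size")
result, err := tbl.AggBy(ctx, agg, "Symbol")
if err != nil {
	fmt.Println("error when aggregating:", err.Error())
	return
}
defer result.Release(ctx)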

func (*TableHandle) AsOfJoin

func (th *TableHandle) AsOfJoin(ctx context.Context, rightTable *TableHandle, on []string, joins []string, matchRule MatchRule) (*TableHandle, error)

AsOfJoin joins data from a pair of tables - a left and right table - based upon one or more match columns. The match columns establish key identifiers in the left table that will be used to find data in the right table. Any data types can be chosen as keys.

When using AsOfJoin, the first N-1 match columns are exactly matched. The last match column is used to find the key values from the right table that are closest to the values in the left table without going over the left value. For example, when using MatchRuleLessThanEqual, if the right table contains a value 5 and the left table contains values 4 and 6, the right table's 5 will be matched on the left table's 6.

The output table contains all of the rows and columns of the left table plus additional columns containing data from the right table. For columns optionally appended to the left table, row values equal the row values from the right table where the keys from the left table most closely match the keys from the right table, as defined above. If there is no matching key in the right table, appended row values are NULL.

on specifies the columns used to match rows between the two tables.

joins specifies the columns to add from the right table.

matchRule is the match rule for the join. Use MatchRuleLessThanEqual for a normal as-of join, or MatchRuleGreaterThanEqual for a reverse as-of join.
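
A sketch of an as-of join, assuming quotes and trades are existing *TableHandle values that share Symbol and Timestamp columns (error handling abbreviated):

// For each quote, pull in the price of the latest trade at or before
// the quote's Timestamp for the same Symbol.
joined, err := quotes.AsOfJoin(ctx, trades,
	[]string{"Symbol", "Timestamp"}, // match columns; the last one is matched inexactly
	[]string{"TradePrice = Price"},  // columns appended from the right table
	client.MatchRuleLessThanEqual,
)
if err != nil {
	fmt.Println("error when joining:", err.Error())
	return
}
defer joined.Release(ctx)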

func (*TableHandle) AvgBy

func (th *TableHandle) AvgBy(ctx context.Context, cols ...string) (*TableHandle, error)

AvgBy returns the average (mean) of each non-key column for each group. Null values are ignored. Columns not used in the grouping must be numeric.

func (*TableHandle) Count

func (th *TableHandle) Count(ctx context.Context, col string) (*TableHandle, error)

Count counts the number of values in the specified column and returns it as a table with one row and one column.

func (*TableHandle) CountBy

func (th *TableHandle) CountBy(ctx context.Context, resultCol string, cols ...string) (*TableHandle, error)

CountBy returns the number of rows for each group. The count of each group is stored in a new column named after the resultCol argument.
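
For example, assuming tbl has a Symbol column (error handling abbreviated):

// Count the rows in each Symbol group; the counts appear in a new column named N.
counts, err := tbl.CountBy(ctx, "N", "Symbol")
if err != nil {
	fmt.Println("error when counting:", err.Error())
	return
}
defer counts.Release(ctx)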

func (*TableHandle) DropColumns

func (th *TableHandle) DropColumns(ctx context.Context, cols ...string) (*TableHandle, error)

DropColumns creates a table with the same number of rows as the source table but omits any columns included in the arguments.

func (*TableHandle) ExactJoin

func (th *TableHandle) ExactJoin(ctx context.Context, rightTable *TableHandle, on []string, joins []string) (*TableHandle, error)

ExactJoin joins data from a pair of tables - a left and right table - based upon a set of match columns. The match columns establish key identifiers in the left table that will be used to find data in the right table. Any data types can be chosen as keys, and keys can be constructed from multiple values.

The output table contains all of the rows and columns of the left table plus additional columns containing data from the right table. For columns appended to the left table, row values equal the row values from the right table where the key values in the left and right tables are equal. If there are zero or multiple matches, the operation will fail.

on specifies the columns used to match rows between the two tables.

joins specifies the columns to add from the right table.

func (*TableHandle) FirstBy

func (th *TableHandle) FirstBy(ctx context.Context, cols ...string) (*TableHandle, error)

FirstBy returns the first row for each group. If no columns are given, only the first row of the table is returned.

func (*TableHandle) GroupBy

func (th *TableHandle) GroupBy(ctx context.Context, by ...string) (*TableHandle, error)

GroupBy groups column content into arrays. Columns not in the aggregation become array-type. If no group-by columns are given, the content of each column is grouped into its own array.

func (*TableHandle) Head

func (th *TableHandle) Head(ctx context.Context, numRows int64) (*TableHandle, error)

Head returns a table with a specific number of rows from the beginning of the source table.

func (*TableHandle) HeadBy

func (th *TableHandle) HeadBy(ctx context.Context, numRows int64, columnsToGroupBy ...string) (*TableHandle, error)

HeadBy returns the first numRows rows for each group.

func (*TableHandle) IsStatic

func (th *TableHandle) IsStatic() bool

IsStatic returns true for static tables and false for dynamic tables, such as streaming tables or time tables.

func (*TableHandle) IsValid

func (th *TableHandle) IsValid() bool

IsValid returns true if the handle is valid, i.e. table operations can be performed on it. No methods can be called on invalid TableHandles except for Release.

func (*TableHandle) Join

func (th *TableHandle) Join(ctx context.Context, rightTable *TableHandle, on []string, joins []string, reserveBits int32) (*TableHandle, error)

Join joins data from a pair of tables - a left and right table - based upon a set of match columns. The match columns establish key identifiers in the left table that will be used to find data in the right table. Any data types can be chosen as keys, and keys can be constructed from multiple values.

The output table contains rows that have matching values in both tables. Rows that do not have matching criteria will not be included in the result. If there are multiple matches between a row from the left table and rows from the right table, all matching combinations will be included. If no match columns are specified, every combination of left and right table rows is included.

on specifies the columns used to match rows between the two tables.

joins specifies the columns to add from the right table.

reserveBits is the number of bits of key-space to initially reserve per group. Set it to 10 if unsure.

func (*TableHandle) LastBy

func (th *TableHandle) LastBy(ctx context.Context, cols ...string) (*TableHandle, error)

LastBy returns the last row for each group. If no columns are given, only the last row of the table is returned.

func (*TableHandle) LazyUpdate

func (th *TableHandle) LazyUpdate(ctx context.Context, formulas ...string) (*TableHandle, error)

LazyUpdate creates a new table containing a new, cached, formula column for each argument. The returned table also includes all the original columns from the source table.

func (*TableHandle) MaxBy

func (th *TableHandle) MaxBy(ctx context.Context, cols ...string) (*TableHandle, error)

MaxBy returns the maximum value for each group. Null values are ignored. Columns not used in the grouping must be numeric.

func (*TableHandle) MedianBy

func (th *TableHandle) MedianBy(ctx context.Context, cols ...string) (*TableHandle, error)

MedianBy returns the median value for each group. Null values are ignored. Columns not used in the grouping must be numeric.

func (*TableHandle) MinBy

func (th *TableHandle) MinBy(ctx context.Context, cols ...string) (*TableHandle, error)

MinBy returns the minimum value for each group. Null values are ignored. Columns not used in the grouping must be numeric.

func (*TableHandle) NaturalJoin

func (th *TableHandle) NaturalJoin(ctx context.Context, rightTable *TableHandle, on []string, joins []string) (*TableHandle, error)

NaturalJoin joins data from a pair of tables - a left and right table - based upon one or more match columns. The match columns establish key identifiers in the left table that will be used to find data in the right table. Any data types can be chosen as keys.

The output table contains all of the rows and columns of the left table plus additional columns containing data from the right table. For columns appended to the left table, row values equal the row values from the right table where the key values in the left and right tables are equal. If there is no matching key in the right table, appended row values are NULL. If there are multiple matches, the operation will fail.

on specifies the columns used to match rows between the two tables.

joins specifies the columns to add from the right table.
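
A sketch of a natural join, assuming trades has a Symbol column and companies is a reference table with at most one row per Symbol (error handling abbreviated):

// Append the CompanyName column to each trade by looking up its Symbol.
enriched, err := trades.NaturalJoin(ctx, companies,
	[]string{"Symbol"},      // match columns
	[]string{"CompanyName"}, // columns appended from the right table
)
if err != nil {
	fmt.Println("error when joining:", err.Error())
	return
}
defer enriched.Release(ctx)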

func (*TableHandle) NumRows

func (th *TableHandle) NumRows() (numRows int64, ok bool)

NumRows returns the number of rows in the table. The return value is only ok if IsStatic() is true, since only static tables have a fixed number of rows.
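
For example:

// The row count is only available for static tables.
if rows, ok := tbl.NumRows(); ok {
	fmt.Println("table has", rows, "rows")
} else {
	fmt.Println("table is dynamic; it has no fixed row count")
}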

func (*TableHandle) Query

func (th *TableHandle) Query() QueryNode

Query creates a new QueryNode based on this table.

Table operations can be performed on query nodes to get more query nodes. The nodes can then be turned back into TableHandles using the Client.ExecSerial or Client.ExecBatch methods.

See the docs for QueryNode or the TableOps example for more details on how to use query-graph operations.
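
A sketch of the query-graph workflow, assuming cl is a connected *Client and that ExecBatch accepts query nodes and returns the corresponding table handles as described in the Client docs above (column names are illustrative; error handling abbreviated):

// Derive two queries from one handle and execute them in a single batched request.
base := tbl.Query()
filtered := base.Where("Price > 100")
averaged := base.AvgBy("Symbol")

tables, err := cl.ExecBatch(ctx, filtered, averaged)
if err != nil {
	fmt.Println("error when executing batch:", err.Error())
	return
}
for _, t := range tables {
	defer t.Release(ctx)
}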

func (*TableHandle) Release

func (th *TableHandle) Release(ctx context.Context) error

Release releases this table handle's resources on the server. The TableHandle is no longer usable after Release is called. It is safe to call Release multiple times.

func (*TableHandle) Select

func (th *TableHandle) Select(ctx context.Context, formulas ...string) (*TableHandle, error)

Select creates a new in-memory table that includes one column for each argument. Any columns not specified in the arguments will not appear in the resulting table.

func (*TableHandle) SelectDistinct

func (th *TableHandle) SelectDistinct(ctx context.Context, columns ...string) (*TableHandle, error)

SelectDistinct creates a new table containing all of the unique values for a set of key columns. When SelectDistinct is used on multiple columns, it looks for distinct sets of values in the selected columns.

func (*TableHandle) Snapshot

func (th *TableHandle) Snapshot(ctx context.Context) (arrow.Record, error)

Snapshot downloads the current state of the table from the server and returns it as an Arrow Record.

If a Record is returned successfully, it must eventually be freed by calling its Release method.
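
For example (error handling abbreviated):

// Download the table's current contents as an Arrow Record.
rec, err := tbl.Snapshot(ctx)
if err != nil {
	fmt.Println("error when snapshotting:", err.Error())
	return
}
defer rec.Release() // free the Record when done with it

fmt.Println("rows:", rec.NumRows(), "columns:", rec.NumCols())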

func (*TableHandle) Sort

func (th *TableHandle) Sort(ctx context.Context, cols ...string) (*TableHandle, error)

Sort returns a new table with rows sorted in ascending (smallest to largest) order based on the listed column(s).

func (*TableHandle) SortBy

func (th *TableHandle) SortBy(ctx context.Context, cols ...SortColumn) (*TableHandle, error)

SortBy returns a new table with rows sorted in the order specified by the listed column(s).

func (*TableHandle) StdBy

func (th *TableHandle) StdBy(ctx context.Context, cols ...string) (*TableHandle, error)

StdBy returns the sample standard deviation for each group. Null values are ignored. Columns not used in the grouping must be numeric.

Sample standard deviation is calculated using Bessel's correction (https://en.wikipedia.org/wiki/Bessel%27s_correction), which ensures that the sample variance will be an unbiased estimator of population variance.

func (*TableHandle) SumBy

func (th *TableHandle) SumBy(ctx context.Context, cols ...string) (*TableHandle, error)

SumBy returns the total sum for each group. Null values are ignored. Columns not used in the grouping must be numeric.

func (*TableHandle) Tail

func (th *TableHandle) Tail(ctx context.Context, numRows int64) (*TableHandle, error)

Tail returns a table with a specific number of rows from the end of the source table.

func (*TableHandle) TailBy

func (th *TableHandle) TailBy(ctx context.Context, numRows int64, columnsToGroupBy ...string) (*TableHandle, error)

TailBy returns the last numRows rows for each group.

func (*TableHandle) Ungroup

func (th *TableHandle) Ungroup(ctx context.Context, cols []string, nullFill bool) (*TableHandle, error)

Ungroup ungroups column content. It is the inverse of the GroupBy method. Ungroup unwraps columns containing either Deephaven arrays or Java arrays. nullFill indicates whether or not missing cells may be filled with null. Set it to true if unsure.
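
A sketch of a GroupBy/Ungroup round trip, assuming tbl has Symbol and Price columns (error handling abbreviated):

// Collect the Price values into one array per Symbol, then expand them
// back to one row per element. nullFill is true, per the recommendation above.
grouped, err := tbl.GroupBy(ctx, "Symbol")
if err != nil {
	fmt.Println("error when grouping:", err.Error())
	return
}
defer grouped.Release(ctx)

ungrouped, err := grouped.Ungroup(ctx, []string{"Price"}, true)
if err != nil {
	fmt.Println("error when ungrouping:", err.Error())
	return
}
defer ungrouped.Release(ctx)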

func (*TableHandle) Update

func (th *TableHandle) Update(ctx context.Context, formulas ...string) (*TableHandle, error)

Update creates a new table containing a new, in-memory column for each argument. The returned table also includes all the original columns from the source table.

func (*TableHandle) UpdateView

func (th *TableHandle) UpdateView(ctx context.Context, formulas ...string) (*TableHandle, error)

UpdateView creates a new table containing a new, formula column for each argument. When using UpdateView, the new columns are not stored in memory. Rather, a formula is stored that is used to recalculate each cell every time it is accessed. The returned table also includes all the original columns from the source table.
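
To contrast Update and UpdateView (assuming tbl has Price and Size columns; error handling abbreviated):

// Update materializes the new column in memory;
// UpdateView stores only the formula and recomputes cells on access.
materialized, err := tbl.Update(ctx, "Total = Price * Size")
if err != nil {
	fmt.Println("error when updating:", err.Error())
	return
}
defer materialized.Release(ctx)

lazy, err := tbl.UpdateView(ctx, "Total = Price * Size")
if err != nil {
	fmt.Println("error when updating view:", err.Error())
	return
}
defer lazy.Release(ctx)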

func (*TableHandle) VarBy

func (th *TableHandle) VarBy(ctx context.Context, cols ...string) (*TableHandle, error)

VarBy returns the sample variance for each group. Null values are ignored. Columns not used in the grouping must be numeric.

Sample variance is calculated using Bessel's correction (https://en.wikipedia.org/wiki/Bessel%27s_correction), which ensures that the sample variance will be an unbiased estimator of population variance.

func (*TableHandle) View

func (th *TableHandle) View(ctx context.Context, formulas ...string) (*TableHandle, error)

View creates a new formula table that includes one column for each argument. When using view, the data being requested is not stored in memory. Rather, a formula is stored that is used to recalculate each cell every time it is accessed.

func (*TableHandle) Where

func (th *TableHandle) Where(ctx context.Context, filters ...string) (*TableHandle, error)

Where filters rows of data from the source table. It returns a new table with only the rows meeting the filter criteria of the source table.
