graph

package
v0.1.0 Latest Latest
Warning

This package is not in the latest version of its module.

Go to latest
Published: Apr 28, 2023 License: Apache-2.0 Imports: 12 Imported by: 0

Documentation

Overview

Package graph is the core package for GoMLX. It is used to create and run computation graphs on XLA -- a just-in-time compiler that allows for very efficient numerical computations. It also includes an autograd system and many useful higher level machine learning tools.

The main elements in the package are:

  • Manager: holds information on the device where numerical computations will be run. Everything runs within the scope of a Manager. Currently, it supports "Host" (using the CPU for numerical computation, using the Eigen library) and "CUDA" for GPUs. TPU support is also planned.

  • Graph: created by the Manager, this is used to construct a computation graph that can then be "just-in-time" compiled and executed efficiently. To construct a Graph one put together nodes or "ops" defining the operations.

  • Node: represents the result of an operation ("op" for short). E.g: Add, Sub, Mul, Sigmoid, ReshapeWithShape, etc. Each node has a fixed shape that is known in "graph building time" (see discussion below).

  • Context: created by the Manager, a higher level abstraction convenient when building gradient descent based models (like Neural Networks). It organizes Variable objects into "scope", which usually holds the learnable weights for ML. It also allows for loading/saving of these values.

## Deferred error Handling

Graph (and its Nodes) and Context methods instead of returning error stores them -- or stores the first error that happened during the building of a Graph. This way the user doesn't need to check for errors at every op -- which severely impact readability. Instead, the user can check for the error only at the very end of building a Graph. Since the stack trace is preserved, it's easy to trace where and what caused the error.

Notice that unfortunately there is no way to statically, in compile time, check for many of the errors that for human would be relatively easy to spot without running the program. There is no way in Go (and any other language I know) to run arbitrary logic in compile time.

## Delayed Execution

When using ML frameworks, it usually requires the user to think about different "times" that things are happening. The same is true for GoMLX, and it is helpful to keep those in mind upfront, to have the right mental model:

  • **Compile time**: this is during Go compilation. Some amount of type checking is done here, but most of the tensor compatibility cannot be done statically here, unfortunately. Even if for a human it would be obvious without running a program that some operation among different shaped tensors shouldn't be allowed, there is no way in Go (and any other language I know) to run arbitrary logic in compile time to validate tensor shape compatibility. So most of the checking is left to "graph building time".

  • **Graph building time**: this is when one is building a computation Graph, using the various ops (Add, Sub, Mul, ReduceSum, etc.). No actual computation happens here, just the building of the Graph (it's a kind of program) that will be executed later. This happens in "runtime", meaning after the Go program is compiled. And only in graph building time that proper shapes are checked, and good error (with stack traces) are reported back. This means that development often involves coding, and then running the Graph building to see if shapes are correct and what one wants -- Graph building is very fast, since not data is actually manipulated. Creating tests that just build the graph is the recommended way to develop.

  • **Computation/Training/Evaluation time**: this happens after the Graph is built and compiled, and all one does is feed values in, and get the computation out -- using a very fast just-in-time compiled code. error reports here are terser and harder to debug, but usually most of the issues are caught in Graph building time. TODO: improve handling and reporting of things like NaNs appearing in the computation.

TODO: Allow some of the ops to accept constant values directly, so one can write something like

`Add(x, 1.0)` as opposed to `Add(x, Const(g, 1.0))`

Index

Constants

View Source
const DefaultExecMaxCacheSize = 10
View Source
const InvalidNodeId = NodeId(-1)

InvalidNodeId indicates a node that failed to be created.

View Source
const InvalidNodeXlaHandle = NodeXlaHandle(-1)

InvalidNodeXlaHandle is returned when trying to add an invalid node, or a node that has no XLA counterpart (usually some form of No-op).

View Source
const InvalidParameterHandle = ParameterHandle(-1)

InvalidParameterHandle represents an invalid (or non-existent) parameter.

View Source
const (
	MaxSizeToPrint = 5
)
View Source
const NotAParameterStr = "NOT_A_PARAMETER"

Variables

View Source
var GetDefaultPlatform = xla.GetDefaultPlatform

GetDefaultPlatform returns the default list of platforms. Returns `(string, error)`.

View Source
var GetPlatforms = xla.GetPlatforms

GetPlatforms lists the available platforms. Returns `([]string, error)`.

View Source
var VJPRegistration = map[xla.NodeType]VJP{
	xla.ConstantNode:           nilVJP,
	xla.ParameterNode:          nilVJP,
	xla.WhereNode:              whereVJP,
	xla.NegNode:                negVJP,
	xla.AbsNode:                absVJP,
	xla.ExpNode:                expVJP,
	xla.LogNode:                logVJP,
	xla.Log1pNode:              log1pVJP,
	xla.TanhNode:               tanhVJP,
	xla.AddNode:                addVJP,
	xla.SubNode:                subVJP,
	xla.MulNode:                mulVJP,
	xla.DivNode:                divVJP,
	xla.SqrtNode:               sqrtVJP,
	xla.MaxNode:                minMaxVJP,
	xla.MinNode:                minMaxVJP,
	xla.ReshapeNode:            reshapeVJP,
	xla.ReduceSumNode:          reduceSumVJP,
	xla.ReduceMaxNode:          reduceMaxVJP,
	xla.LogisticNode:           logisticVJP,
	xla.DotNode:                dotVJP,
	xla.DotGeneralNode:         dotGeneralVJP,
	xla.SliceNode:              sliceVJP,
	xla.GatherNode:             gatherVJP,
	xla.ConcatenateNode:        concatenateVJP,
	xla.ConvGeneralDilatedNode: convGeneralDilatedVJP,
	xla.ReduceWindowNode:       reduceWindowVJP,
	xla.BatchNormTrainingNode:  batchNormVJP,
	xla.TransposeNode:          transposeVJP,
	xla.BroadcastInDimNode:     broadcastInDimVJP,
}

VJPRegistration maps each node type to its implementation of VJP. If implementing a new op, or for experimentation, one can dynamically change this.

Notice xla.GetTupleElementNode is specialized inside the main reverse autodiff code, and is not in the table here.

Functions

func AdjustAxis

func AdjustAxis(operand *Node, axis int) int

AdjustAxis returns the axis adjusted to the operand shape, in case the axis given is negative.

Types

type AxisRangeDef

type AxisRangeDef struct {
	Start, End, StrideValue int
	Full, NoEnd             bool
}

AxisRangeDef defines the range of an axis to include in a Slice.

Use AxisRange below to create it.

Full means to include the whole range (and ignore Start/End), and NoEnd means from Start to the full dimension of the axis.

Optional (if Stride != 0) it can set the stride for the axis as well.

Consider using AxisRange below to construct AxisRangeDef values.

TODO: Add strides.

func AxisRange

func AxisRange(indices ...int) AxisRangeDef

AxisRange creates a AxisRangeDef to be used in Slice. The indices can have 0, 1 or 2 elements: - If no elements are given, it's assumed to be full. - If one element is given, it's assumed to be the start, and the range should be taken to the end. - If two elements are given, they should be the start and end. - If more than 2 elements are given, they are ignored.

func (AxisRangeDef) Stride

func (ar AxisRangeDef) Stride(stride int) AxisRangeDef

Stride returns a copy of the AxisRangeDef with Stride set to the given stride.

type ConvolutionBuilder

type ConvolutionBuilder struct {
	// contains filtered or unexported fields
}

ConvolutionBuilder is a helper to build a convolution computation. Create it with Convolve, set the desired parameters and when set, call `IsNil()`.

func Convolve

func Convolve(x, kernel *Node) *ConvolutionBuilder

Convolve prepares a convolution on x with the given kernel for arbitrary number of spatial dimensions (1D, 2D, 3D, etc.).

It is very flexible and to ease setting its parameters it returns a ConvolutionBuilder for configuration. Once it is set up call `ConvolutionBuilder.Done` and it will return the convolved x. Browse through ConvolutionBuilder to see the capabilities, and the defaults.

The shape of x should be `[batch, <spatial_dimensions...>, input_channels]` if configured with `ConvolutionBuilder.ChannelsLast()`, the default. If one sets `ConvolutionBuilder.ChannelsFirst()`, the shape should be `[batch, input_channels, <spatial_dimensions...>]` instead.

The shape of kernel should be `[<spatial_dimensions...>, input_channels, output_channels]` if configured with `ConvolutionBuilder.ChannelsLast()`, the default. If one sets `ConvolutionBuilder.ChannelsFirst()`, the shape should be `[input_channels, <spatial_dimensions...>, output_channels]` instead.

Notice x and kernel must have the same rank.

We follow the Keras convention of calling the depth/feature/channels dimension **channels** and likewise we use **kernel** instead of filters (but they mean the same).

func (*ConvolutionBuilder) AxesConfig

AxesConfig specify the exact configuration of the axes on the input (x/input and kernel) and output of the Convolve operation. This is advanced (and may not be supported in every backend), but it's powerful. Consider using ChannelsAfter or ChannelsFirst instead. The default is ChannelsAfter.

func (*ConvolutionBuilder) ChannelsAfter

func (conv *ConvolutionBuilder) ChannelsAfter() *ConvolutionBuilder

ChannelsAfter specify the order of the dimensions for x and kernel. This is the default. For more fine control see AxesConfig.

If this is set x should be shaped `[batch, <spatial_dimensions...>, channels]`, and kernel should be shaped `[<spatial_dimensions...>, channels, output_channels]`.

func (*ConvolutionBuilder) ChannelsFirst

func (conv *ConvolutionBuilder) ChannelsFirst() *ConvolutionBuilder

ChannelsFirst specify the order of the dimensions for x and kernel. The default is ChannelsAfter, and for more fine control see AxesConfig.

If this is set x should be shaped `[batch, channels, <spatial_dimensions...>]`, and kernel should be shaped `[channels, <spatial_dimensions...>, output_channels]`.

func (*ConvolutionBuilder) DilationPerDim

func (conv *ConvolutionBuilder) DilationPerDim(dilations ...int) *ConvolutionBuilder

DilationPerDim sets the kernel dilations for each spatial dimension of the convolution. The default is 1 for every dimension.

Specifies the kernel up-sampling rate. In the literature, the same parameter is sometimes called input stride or dilation. The effective kernel size used for the convolution will be `kernel_shape + (kernel_shape - 1) * (dilation - 1)`, obtained by inserting (dilation-1) zeros between consecutive elements of the original filter in the spatial dimension.

One cannot use strides and dilation at the same time.

func (*ConvolutionBuilder) Dilations

func (conv *ConvolutionBuilder) Dilations(dilation int) *ConvolutionBuilder

Dilations sets the dilations of the convolution. It sets the same value for every dimension. The default is 1.

Specifies the kernel up-sampling rate. In the literature, the same parameter is sometimes called input stride or dilation. The effective kernel size used for the convolution will be `kernel_shape + (kernel_shape - 1) * (dilation - 1)`, obtained by inserting (dilation-1) zeros between consecutive elements of the original filter in the spatial dimension.

One cannot use strides and dilation at the same time.

func (*ConvolutionBuilder) Done

func (conv *ConvolutionBuilder) Done() *Node

Done indicates that the convolve operation is finished being configured and it updates the computation graph with convolution, and returns the resulting Node.

func (*ConvolutionBuilder) InputDilationPerDim

func (conv *ConvolutionBuilder) InputDilationPerDim(dilations ...int) *ConvolutionBuilder

InputDilationPerDim is used when generating the gradient of a convolution with strides. It effectively inserts zeros in the input, making it effectively larger than it actually is. The gradient of Convolve with input dilation is not implemented yet, careful.

func (*ConvolutionBuilder) NoPadding

func (conv *ConvolutionBuilder) NoPadding() *ConvolutionBuilder

NoPadding removes any paddings, so if the kernel spatial dimensions > 1, the output shape will be reduced on the edges. This is the default.

See also PadSame and PaddingPerDim.

func (*ConvolutionBuilder) PadSame

func (conv *ConvolutionBuilder) PadSame() *ConvolutionBuilder

PadSame adds paddings on the edges of x such that in the end the output of the convolution has the same shape as the input (assuming strides=1).

The default is no padding. See also NoPadding and PaddingPerDim.

func (*ConvolutionBuilder) PaddingPerDim

func (conv *ConvolutionBuilder) PaddingPerDim(paddings [][2]int) *ConvolutionBuilder

PaddingPerDim specifies the paddings at the start and at the end to use per spatial dimension, that means one pair ([2]int) per spatial dimension.

The default is no padding. See also NoPadding and PadSame.

func (*ConvolutionBuilder) StridePerDim

func (conv *ConvolutionBuilder) StridePerDim(strides ...int) *ConvolutionBuilder

StridePerDim sets the strides for each spatial dimension of the convolution. The default is 1 for every dimension.

The stride is how many steps to move after a convolution. A value of 2 will half the input size, since a convolution will be done at every other position, and so on. It can be defined separately per dimension.

One cannot use strides and dilation at the same time.

func (*ConvolutionBuilder) Strides

func (conv *ConvolutionBuilder) Strides(strides int) *ConvolutionBuilder

Strides sets the strides of the convolution. It sets the same value for every dimension. The default is 1.

The stride is how many steps to move after a convolution. A value of 2 will half the input size, since a convolution will be done at every other position, and so on. It can be defined separately per dimension.

One cannot use strides and dilation at the same time.

type ConvolveAxesConfig

type ConvolveAxesConfig struct {
	InputBatch, InputChannel int
	InputSpatial             []int

	KernelInputChannel, KernelOutputChannel int
	KernelSpatial                           []int

	OutputBatch, OutputChannel int
	OutputSpatial              []int
}

ConvolveAxesConfig defines the interpretation of each axis of the input/kernel/output tensors. There must be the same number of spatial dimensions (axes) for each of the 3 tensors. Input and output has batch and channel axes. Kernel has inputChannel and outputChannel axes.

type Exec

type Exec struct {
	// contains filtered or unexported fields
}

Exec creates and executes computation graphs as needed based on the inputs shapes.

It simplifies the process of executing a graph building function with real values. For example, assume you wrote:

def LengthGraph(x *Node) *Node {
  return Sqrt(ReduceAllSum(Mul(x, x)))
}

To actually use it with real values, one need to build the graph to a specific shape of x, and then execute it, which is not straight forward -- JIT compilation makes things faster, but it imposes some bureaucracy.

With Exec one can do:

var Length = NewExec(LengthGraph)
x0 := []float32{4}
fmt.Printf("Length(%v) = %v\n", x0, Length.Call(x0)[0].Value())
x1 := []float64{1, 2, 3}
fmt.Printf("Length(%v) = %v\n", x1, Length.Call(x1)[0].Value())

Notice that both calls to Length.Call will need to create different graphs (for different shapes of the input), but they will be cached, and if the same shapes are used in Call again, the cached compiled graph is reused.

Also Call outputs a slice with all the outputs, even when there is only one output.

If there are no inputs (for instance for some initialization function), then one needs to take a *Graph as the first parameter of graphFn. Example:

```
iotaMatrixExec := NewExec(func (g *Graph) *Node {
	return IotaFull(g, types.Make(types.Float32, 3, 3))
})
fmt.Printf("IotaFull(3x3 matrix, float32)=%v\n", iotaMatrixExec.Call()[0].Value().([][]float32))
```

The need to build different graphs for different shapes can be expensive when sizes of the inputs varies a lot. The usual solution is to use shapes with size in a power scale (for instance powers of 2) and masking of tensors for unused slices. For safety concerns there are a maximum number of different instantiations of the graph. It can be set or disabled with SetMaxCache.

Errors are returned inside the returned tensors.

There is concurrency safety with the cache, but XLA concurrency is not documented. TODO: figure it out.

func NewExec

func NewExec[F ExecGraphFn](manager *Manager, graphFn F) *Exec

NewExec constructs an Exec object that uses the given graphFn to build computation graphs. graphFn should take *Node as input and return a *Node. It's a wrapper for NewExecAny, but uses generics to type check that graphFn is valid.

func NewExecAny

func NewExecAny(manager *Manager, graphFn any) (*Exec, error)

NewExecAny constructs an Exec object that uses the given graphFn to build computation graphs. graphFn take only *Node parameters as input and return one or more *Node. Except if there are no inputs, in which case graphFn needs to take a *Graph as the first parameter.

If any input or output parameter of graphFn is not a *Node (or *Graph is there are no inputs), or if there are no inputs or outputs, it returns an error.

func (*Exec) Call

func (e *Exec) Call(args ...any) ([]tensor.Tensor, error)

Call parses the arguments into tensors (if they are not yet) and executes the graph corresponding to the shapes of the arguments. If a graph does not yet exist one is created, compiled and cached for the shapes.

It returns the outputs in a slice, even if there is only one output, or an error.

func (*Exec) CallWithGraph

func (e *Exec) CallWithGraph(args ...any) (results []tensor.Tensor, g *Graph, err error)

CallWithGraph is similar to Call, but it also returns the computation graph used in the call. Since Exec creates different computation graphs for different set of parameters, this can help disambiguate in case the user needs to use the Graph for something else.

It returns the outputs in a slice, even if there is only one output, and the graph used to execute the computation or an error.

func (*Exec) Finalize

func (e *Exec) Finalize()

Finalize clears the cache, finalizing the graphs. The Exec object shouldn't be used after that.

func (*Exec) GetNodeLogger

func (e *Exec) GetNodeLogger() LoggerFn

GetNodeLogger returns the currently registered LoggerFn.

func (*Exec) InDevice

func (e *Exec) InDevice(deviceNum int) *Exec

InDevice sets the device num to be used by graphs constructed by Exec. This should be called before any invocations of Call(). It returns a reference to itself so calls can be cascaded.

func (*Exec) Name

func (e *Exec) Name() string

Name returns the Exec name, a string used as prefix for Graph construction.

func (*Exec) SetMaxCache

func (e *Exec) SetMaxCache(maxCacheSize int) *Exec

SetMaxCache sets the maximum size of the cache. Set it to -1 to have unlimited cache size. It returns a reference to itself so calls can be cascaded.

func (*Exec) SetName

func (e *Exec) SetName(name string) *Exec

SetName sets the name of Exec, used to provide the name to graphs created. This should be called before any invocations of Call(). It returns a reference to itself so calls can be cascaded.

func (*Exec) SetNodeLogger

func (e *Exec) SetNodeLogger(loggerFn LoggerFn)

SetNodeLogger with the function to be called for the nodes marked for logging during execution. If set to nil nothing will be logged.

func (*Exec) SetSideParamsHook

func (e *Exec) SetSideParamsHook(fn SideParamsFn) *Exec

SetSideParamsHook makes Exec call the given function everytime before executing a graph with the list of parameters.

Side parameters are parameters created by the graphFn itself, and are not passed to it as input parameters. These could be variables in a model, or some global values. Exec has no knowledge of them, hence cannot set their values, and this serves as a hook to set them up just before the graph is executed.

The function is called anyway, even if there are no side parameters to be set, so it can be used as a hook just before graph execution.

SideParamsFn is a function that takes as input a slice of Device tensors that will be passed as input to graph execution. The first elements of the slice are the input parameters to graphFn function (given during the construction of Exec), and they will be filled already with the correct values.

type ExecGraphFn

type ExecGraphFn interface {
	func(*Graph) *Node |
		func(*Node) *Node |
		func(*Node, *Node) *Node |
		func(*Node, *Node, *Node) *Node |
		func(*Node, *Node, *Node, *Node) *Node |
		func(*Node, *Node, *Node, *Node, *Node) *Node |
		func(*Node, *Node, *Node, *Node, *Node, *Node) *Node |
		func([]*Node) *Node |

		func(*Graph) (*Node, *Node) |
		func(*Node) (*Node, *Node) |
		func(*Node, *Node) (*Node, *Node) |
		func(*Node, *Node, *Node) (*Node, *Node) |
		func(*Node, *Node, *Node, *Node) (*Node, *Node) |
		func(*Node, *Node, *Node, *Node, *Node) (*Node, *Node) |
		func(*Node, *Node, *Node, *Node, *Node, *Node) (*Node, *Node) |
		func([]*Node) (*Node, *Node) |

		func(*Graph) (*Node, *Node, *Node) |
		func(*Node) (*Node, *Node, *Node) |
		func(*Node, *Node) (*Node, *Node, *Node) |
		func(*Node, *Node, *Node) (*Node, *Node, *Node) |
		func(*Node, *Node, *Node, *Node) (*Node, *Node, *Node) |
		func(*Node, *Node, *Node, *Node, *Node) (*Node, *Node, *Node) |
		func(*Node, *Node, *Node, *Node, *Node, *Node) (*Node, *Node, *Node) |
		func([]*Node) (*Node, *Node, *Node) |

		func(*Graph) []*Node |
		func(*Node) []*Node |
		func(*Node, *Node) []*Node |
		func(*Node, *Node, *Node) []*Node |
		func(*Node, *Node, *Node, *Node) []*Node |
		func(*Node, *Node, *Node, *Node, *Node) []*Node |
		func(*Node, *Node, *Node, *Node, *Node, *Node) []*Node |
		func([]*Node) []*Node
}

ExecGraphFn is a type parameter for accepted function types for NewExec constructor.

type Graph

type Graph struct {
	// contains filtered or unexported fields
}

Graph with the operations and dependencies needed to run a computation.

It uses a deferred error reporting model, where if any error happens during the building of a model the first error is stored, and all further operations become no-ops. At the very end one can check with Graph.Error() if any error occurred and report that: it includes a stack trace. See discussion on package documentation.

func (*Graph) AOTCompile

func (g *Graph) AOTCompile() ([]byte, error)

AOTCompile returns the Ahead-Of-Time compiled version of the graph, that can be used for execution later.

The graph needs to be compiled. And it is AOT-compiled to the same platform it was already compiled -- TODO: cross-compile.

It returns a binary serialized format that can be executed later, without linking the whole GoMLX machinery. See tutorial on instructions and an example of how to do this.

func (*Graph) Client

func (g *Graph) Client() *xla.Client

Client return the client where this Graph is located, it's given by its manager and indicates the device for the computations.

func (*Graph) Compile

func (g *Graph) Compile(outputs ...*Node)

Compile just-in-time (JIT) compiles the Graph into a Computation that can be executed. If the output node is not given, it assumes it's the last node created in the graph. If more than one output is provided, it creates a tuple of those elements, and when executed the graph will output a Tuple.

func (*Graph) ConvertToStableHLO

func (g *Graph) ConvertToStableHLO() (*xla.StableHLO, error)

ConvertToStableHLO returns the StableHLO C++ object for the compiled graph. The graph needs to be compiled.

func (*Graph) DeviceNum

func (g *Graph) DeviceNum() int

DeviceNum returns the device number where this Graph is executed/built.

func (*Graph) Error

func (g *Graph) Error() error

Error returns the first error that happened during the building of the Graph. It's just a convenience method to report errors, so they can be handled at the end of Graph building (as opposed to at every step). See also `Ok`, which reports whether there were any errors. Node creation methods (all the math ops) become no-op if the graph has an error.

func (*Graph) Finalize

func (g *Graph) Finalize()

Finalize frees the associated data with the compiled graph (if it is compiled) and all the nodes. The graph is left in an unusable state.

func (*Graph) GraphId

func (g *Graph) GraphId() GraphId

GraphId is a unique id (within a Manager) of the graph. It's a counter that starts with 0.

func (*Graph) InvalidNode

func (g *Graph) InvalidNode() *Node

InvalidNode returns an empty node. This is usually what is returned by operations when the graph is in error.

func (*Graph) LoggedNodes

func (g *Graph) LoggedNodes() (nodes []*Node)

LoggedNodes returns all nodes from the graph marked to be logged. Exec object makes use of this information and logs those values when executing the graph.

func (*Graph) Manager

func (g *Graph) Manager() *Manager

Manager this Graph is attached to.

func (*Graph) MustCompile

func (g *Graph) MustCompile(output ...*Node)

MustCompile calls Compile and panics if an error happened.

func (*Graph) MustOk

func (g *Graph) MustOk()

MustOk panics if graph is not ok, printing stack of where the error happened. Otherwise, it's a no-op.

func (*Graph) MustRun

func (g *Graph) MustRun(params ParamsMap) *tensor.Device

MustRun is an alias to RunError that panics in case of error.

func (*Graph) Name

func (g *Graph) Name() string

Name of the computation this Graph defines, set during its construction.

func (*Graph) NodeById

func (g *Graph) NodeById(id NodeId) *Node

func (*Graph) NumParameters

func (g *Graph) NumParameters() int

NumParameters returns the number of parameters created for this graph.

func (*Graph) Ok

func (g *Graph) Ok() bool

Ok returns whether there were no errors during the computation Graph building so far.

func (*Graph) Parameter

func (g *Graph) Parameter(name string, shape shapes.Shape) (node *Node)

Parameter registers an input parameter for a computation Graph (e.g: a feature used as input). It can be used in two different ways: as a Node when building the Graph, so when defining a function that uses the parameter, or as the key in the map of the inputs when executing the computation Graph (see Manager.RunError).

func (*Graph) ParameterByIndex

func (g *Graph) ParameterByIndex(ii int) *Node

ParameterByIndex returns the ii-th parameter, in order of creation, registered for this graph.

func (*Graph) ParameterByName

func (g *Graph) ParameterByName(name string) (node *Node)

ParameterByName returns the parameter registered with the given name. Returns nil if the parameter with the given name hasn't been registered (see Parameter method).

func (*Graph) ResetError

func (g *Graph) ResetError()

ResetError clears the Graph error state. This will not fix any underlying causes of the error, and may leave the Graph in an unstable, undefined state. Used only for convenience for testing, when Graph errors are deliberately (for testing) being created, and we want to reset them (as opposed to creating a new Graph).

func (*Graph) Run

func (g *Graph) Run(params ParamsMap) *tensor.Device

Run runs the graph with the given parameters, and optionally the output node to calculate.

The params can use Go values, Local tensors or Device tensors. Go values and Local tensors will be transferred to Device tensors (located in the Manager's accelerator memory) before the graph is executed.

Any errors are reported in the returned tensor.Device.

func (*Graph) RunError

func (g *Graph) RunError(params ParamsMap) (*tensor.Device, error)

RunError runs the graph with the given parameters, and optionally the output node to calculate.

The params can use Go values, Local tensors or Device tensors. Go values and Local tensors will be transferred to Device tensors (located in the Manager's accelerator memory) before the graph is executed.

func (*Graph) RunWithTensors

func (g *Graph) RunWithTensors(params []*tensor.Device) (*tensor.Device, error)

RunWithTensors is a slightly faster execution path for the graph, but inputs must be provided already in Device tensors and in order.

func (*Graph) SetError

func (g *Graph) SetError(err error)

SetError for the Graph. After an error is set, most operations become no-ops. Only the first error is kept.

func (*Graph) SetErrorf

func (g *Graph) SetErrorf(format string, args ...any)

SetErrorf is similar to SetError, but allows formatting in place. It also automatically adds stack trace.

func (*Graph) SetTraced

func (g *Graph) SetTraced(tracked bool)

SetTraced defines whether each node creation is traced. If true, every node will save a stack-trace of where it was created, which is helpful for debugging. See Node.Track().

func (*Graph) String

func (g *Graph) String() string

String converts the Graph to a multi-Graph string.

type GraphId

type GraphId int

GraphId is a unique Graph id within a manager.

type LoggerFn

type LoggerFn func(messages []string, values []tensor.Tensor)

LoggerFn is the function used to log nodes marked for logging. It is called after the Call method, with the list of messages and corresponding values of the evaluated nodes.

type Manager

type Manager struct {
	// contains filtered or unexported fields
}

Manager sets up an execution "server" (?? whatever runs stuff in XLA ?), including managing the memory in the accelerator.

The Manager is used to create computation graphs (Graph), and then JIT-compile them to a "computation" (in XLA), that can be executed.

func (*Manager) Client

func (m *Manager) Client() *xla.Client

func (*Manager) ClientId

func (m *Manager) ClientId() xla.ClientId

func (*Manager) DefaultDeviceNum

func (m *Manager) DefaultDeviceNum() int

DefaultDeviceNum returns the default device number for the device associated with this Manager.

func (*Manager) DeviceCount

func (m *Manager) DeviceCount() int

DeviceCount returns the device count associated with this Manager.

func (*Manager) NewGraph

func (m *Manager) NewGraph(name string) *Graph

NewGraph constructs an empty Graph. If name is set to "", a unique name is picked. Uses DeviceNumber == 0.

func (*Manager) NewGraphWithDeviceNum

func (m *Manager) NewGraphWithDeviceNum(name string, deviceNum int) *Graph

NewGraphWithDeviceNum constructs an empty Graph, and sets to use the given device number. If name is set to "", a unique name is picked.

func (*Manager) Platform

func (m *Manager) Platform() string

Platform returns the platform used by manager -- which may be different from the one requested, depending on availability.

type ManagerBuilder

type ManagerBuilder struct {
	// contains filtered or unexported fields
}

ManagerBuilder allow setting of options to build a Manager object.

func BuildManager

func BuildManager() *ManagerBuilder

BuildManager allows the creations a Manager object, used to create computation graphs and execute them. Optional parameters are Platform, NumReplicas and NumThreads (see ManagerBuilder methods). At the end call IsNil().

func (*ManagerBuilder) Done

func (b *ManagerBuilder) Done() (m *Manager, err error)

Done constructs the Manager.

func (*ManagerBuilder) MustDone

func (b *ManagerBuilder) MustDone() *Manager

MustDone constructs the Manager. It panics if there was an error.

func (*ManagerBuilder) NumReplicas

func (b *ManagerBuilder) NumReplicas(n int) *ManagerBuilder

NumReplicas sets number of replicas to use when building Manager. Defaults to 1.

func (*ManagerBuilder) NumThreads

func (b *ManagerBuilder) NumThreads(n int) *ManagerBuilder

NumThreads sets number of threads to use when building Manager. Defaults to -1, which indicates to use what is available.

func (*ManagerBuilder) Platform

func (b *ManagerBuilder) Platform(p string) *ManagerBuilder

Platform can be left empty (it will pick one per GetDefaultPlatform) or can be selected from one returned by GetPlatforms.

type Node

type Node struct {
	// contains filtered or unexported fields
}

Node implements Node and is a standard node implementation that using a xla.SerializedNode definition can be used by most ops (node types).

Almost every new node type implementation will rely on the Node.

func Abs

func Abs(x *Node) *Node

Abs adds to the graph the corresponding operation on the input node x.

func Add

func Add(x, y *Node) *Node

Add adds a node that sums the two nodes. Standard broadcasting rules apply (see documentation).

func AddScalar

func AddScalar(x *Node, scalar float64) *Node

AddScalar converts scalar to a constant with x's DType and returns `x + scalar` with proper broadcasting.

func And

func And(x, y *Node) *Node

And adds to the graph the corresponding operation on the two input nodes x and y. Only integer types. Standard broadcasting rules apply (see documentation).

func BatchNormInferenceXLA

func BatchNormInferenceXLA(operand, scale, offset, mean, variance *Node, epsilon float32, axis int) *Node

BatchNormInferenceXLA implements Batch Norm for inference. See details in https://www.tensorflow.org/xla/operation_semantics#batchnorminference.

The recommendation is not to use this function directly, instead rely on layers.BatchNorm(), which will create and maintain the necessary variables.

Based on paper "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" (Sergey Ioffe, Christian Szegedy), https://arxiv.org/abs/1502.03167.

func BatchNormTrainingXLA

func BatchNormTrainingXLA(operand, scale, offset *Node, epsilon float32, axis int) (normalized, batchMean, batchVariance *Node)

BatchNormTrainingXLA implements Batch Norm for training. See details in https://www.tensorflow.org/xla/operation_semantics#batchnormtraining.

It returns the normalized tensor, the batchMean and the batchVariance.

The recommendation is not to use this function directly, instead rely on layers.BatchNorm(), which will create and maintain the necessary variables.

Based on paper "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" (Sergey Ioffe, Christian Szegedy), https://arxiv.org/abs/1502.03167.

func BroadcastPrefix

func BroadcastPrefix(x *Node, dims []int) *Node

BroadcastPrefix adds dimensions to an array by duplicating the data in the array.

The new dimensions dims are inserted on the left, i.e. if broadcast_sizes has values {a0, ..., aN} and the operand shape has dimensions {b0, ..., bM} then the shape of the output has dimensions {a0, ..., aN, b0, ..., bM}.

The new dimensions id into copies of the operand, i.e.

output[i0, ..., iN, j0, ..., jM] = operand[j0, ..., jM]

func BroadcastToDims

func BroadcastToDims(x *Node, dimensions ...int) *Node

BroadcastToDims broadcasts x to the given dimensions. They must both have the same rank, and the dimensions in x being broadcast (that is, where it's corresponding requested dimension is different) must be of size 1.

This is a convenient wrapper for BroadcastToShape.

func BroadcastToShape

func BroadcastToShape(x *Node, shape shapes.Shape) *Node

BroadcastToShape broadcasts x to the given shape. They must both have the same rank, and the dimensions in x being broadcast (that is, where its corresponding dimension is shape is different) must be of size 1.

func Ceil

func Ceil(x *Node) *Node

Ceil adds to the graph the corresponding operation on the input node x.

func Clip

func Clip(x, min, max *Node) *Node

Clip is a short cut to `Min(max, Max(x, min))`, which returns the values of x cliped between min and max.

func Clz

func Clz(x *Node) *Node

Clz adds to the graph the "count leading zeroes" operation on the input node x.

func Concatenate

func Concatenate(operands []*Node, axis int) *Node

Concatenate results on the given dimension.

func Const

func Const(g *Graph, x interface{}) *Node

Const creates constant nodes in the Graph. It can take Local tensors as well as multidimensional slices (or scalars). It uses introspection (reflect package) to figure out the shape given a Go scalar/slice/array. If the value is unsupported, it sets the error in the Graph.

A tensor.Device (e.g., generated by another computation) will be converted to local first.

func ConstAs

func ConstAs(base *Node, x interface{}) *Node

ConstAs creates a constant (slice or scalar) of the same DType and on the same Graph as the given base.

func ConstAsDType

func ConstAsDType(g *Graph, dtype shapes.DType, x interface{}) *Node

ConstAsDType creates a constant of the given DType. It adds the convenience of converting x (slice or scalar) to the appropriate type. E.g:

Pi := ConstScalar(g, myDType, math.Pi)
PiAndE := ConstScalar(g, myDType, []float64{math.Pi, math.E})

func ConstLocal

func ConstLocal(g *Graph, x *tensor.Local) *Node

ConstLocal returns a newly created constant node for the tensor x.

func ConvertType

func ConvertType(x *Node, dtype shapes.DType) *Node

ConvertType converts x to a different primitive type. See shapes.Supported for the supported types.

func Cos

func Cos(x *Node) *Node

Cos adds to the graph the corresponding operation on the input node x.

func Diagonal

func Diagonal(g *Graph, dim int) *Node

Diagonal returns a diagonal boolean square matrix of shape `[dim, dim]`.

This can be combined with `Where` to select values of any arbitrary other matrix.

func DiagonalWithValue

func DiagonalWithValue(scalar *Node, dim int) *Node

DiagonalWithValue returns a diagonal matrix of shape `[dim, dim]` with scalar in the diagonal and zero elsewhere.

func Div

func Div(x, y *Node) *Node

Div adds to the graph the corresponding operation on the two input nodes x and y. Standard broadcasting rules apply (see documentation).

func Dot

func Dot(lhs, rhs *Node) *Node

Dot adds to the graph the corresponding operation on the two input nodes x and y. The exact semantics of this operation depend on the ranks of the operands:

| Input | Output | Semantics | | vector [n] dot vector [n] | scalar | vector dot product | | matrix [m x k] dot vector [k] | vector [m] matrix-vector multiplication | | matrix [m x k] dot matrix [k x n] | matrix [m x n] | matrix-matrix multiplication |

lhs -> left-hand-side; rhs -> right-hand-side The operation performs sum of products over the second dimension of lhs (or the first if it has rank 1) and the first dimension of rhs. These are the "contracted" dimensions. The contracted dimensions of lhs and rhs must be of the same size. In practice, it can be used to perform dot products between vectors, vector/matrix multiplications or matrix/matrix multiplications.

func Einsum

func Einsum(equation string, lhs, rhs *Node) *Node

Einsum evaluates the "Einstein summation" various types of products (inner/outer/batched) between 2 tensors, on arbitrary dimensions.

This is inspired on numpy's Einsum, a description of which can be seen in https://stackoverflow.com/questions/26089893/understanding-numpys-einsum/33641428#33641428.

The equation string describes what it to be made with each dimension, for each operand, separated by ",", and the format of the result after the "->" describes what is to be made for each dimension.

Examples:

* `Einsum("ij,jk->ik", matrixA, matrixB)` performs the usual matrix multiplication. * `Einsum("bij,bjk->bik", batchedMatrixA, batchedMatrixB)` performs a batched matrix multiplication. * `Einsum("i,i->", vectorA, vectorB)` performs a dot product. * `Einsum("i,j->ij", vectorA, vectorB)` performs an outer (cross) product between two vectors.

It also works for higher dimension tensors. Dimensions missing on the output (after "->") are reduce summed.

More examples in TensorFlow documentation: https://www.tensorflow.org/api_docs/python/tf/einsum

Notice though that this Einsum is only defined for operations between 2 operands:

* lhs -> left-hand-side operand. * rhs -> right-hand-side operand.

func Equal

func Equal(x, y *Node) *Node

Equal returns the element-wise operation to the graph.

// Standard broadcasting rules apply (see documentation).                                                                                                                //

The "TotalOrder" version of the operation enforces `-NaN < -Inf < -Finite < -0 < +0 < +Finite < +Inf < +NaN`.

func EqualTotalOrder

func EqualTotalOrder(x, y *Node) *Node

EqualTotalOrder returns the element-wise operation to the graph.

Standard broadcasting rules apply (see documentation).

The "TotalOrder" version of the operation enforces `-NaN < -Inf < -Finite < -0 < +0 < +Finite < +Inf < +NaN`.

func Exp

func Exp(x *Node) *Node

Exp adds to the graph the corresponding operation on the input node x.

func ExpandDims

func ExpandDims(x *Node, axes ...int) *Node

ExpandDims expands x creating new axes just before the axes given. If axes[ii] < 0, then they are counted from the end -- -1 represents a new axis after the end of the original shape. The new axes will be of dimension 1 (so the total size of and contents of the tensor remains the same), and the rank is increased by `len(axes)`.

Maybe it should be called ExpandAxes ... but to follow Tensorflow nomenclature.

func Expm1

func Expm1(x *Node) *Node

Expm1 adds to the graph the corresponding operation on the input node x.

func Fill

func Fill[T shapes.Number](g *Graph, shape shapes.Shape, value T) *Node

Fill creates a Node with a value with the given shape, filled with the given value. It's implemented indirectly using other nodes.

func Floor

func Floor(x *Node) *Node

Floor adds to the graph the corresponding operation on the input node x.

func Gather

func Gather(params, indices *Node) *Node

Gather values in params from pointer in indices. The output are slices of `params` selected by `indices`, stitched together.

Let's assume params has shape `[i_0, ..., i_M, s_0, ..., s_o]`, where:

  • `i_0, ..., i_N` are the N "indexed dimensions", that is, the dimensions indexed by `indices`.
  • `s_0, ..., s_S` are the S dimensions of the slices that are going to be "gathered" (copied over).

And let's assume indices has shape `[o_0,...,o_O, N]`, where:

  • `o_0, ..., o_O` are enumerations of the slices from `params` to gather. E.g: let's say O=1, and o_0=3, that means there will be 3 slices to gather.
  • Last dimension `N`: this is the number of indices in `params` to point to. `N` is the number of dimensions indexed `i_0, ..., i_N` in `params` above.

The output will have shape `[o_0,...,o_O, s_0, ... s_S]`, where:

  • `o_0, ..., o_O` come from indices, and are enumerations of the slices from params to gather.
  • `s_0, ..., s_S` are the slice sizes copied from params.

For example:

params := [][]float32{{0, 1, 2}, {3, 4, 5}, {6, 7, 8}}
indices := [][]int{{1}, {0}}
Gather(params, indices) would return {{3, 4, 5}, {0, 1, 2}}

In the case above params shape is interpreted as `[i_0=3, s_0=3]`, and indices' shape is `[o_0=2, N=1]`. The output shape is `[o_0=2, s_0=3]`.

func GetTupleElement

func GetTupleElement(tuple *Node, index int) *Node

GetTupleElement extracts one element from a Tuple.

func Gradient

func Gradient(output *Node, gradientNodes ...*Node) []*Node

Gradient creates new nodes for the gradients of the output with respect to each node in gradientNodes. The output must be a scalar -- otherwise this would be called Jacobian. TODO: Define a Jacobian.

func GreaterOrEqual

func GreaterOrEqual(x, y *Node) *Node

GreaterOrEqual returns the element-wise operation to the graph.

Standard broadcasting rules apply (see documentation).

The "TotalOrder" version of the operation enforces `-NaN < -Inf < -Finite < -0 < +0 < +Finite < +Inf < +NaN`.

func GreaterOrEqualTotalOrder

func GreaterOrEqualTotalOrder(x, y *Node) *Node

GreaterOrEqualTotalOrder returns the element-wise operation to the graph.

Standard broadcasting rules apply (see documentation).

The "TotalOrder" version of the operation enforces `-NaN < -Inf < -Finite < -0 < +0 < +Finite < +Inf < +NaN`.

func GreaterThan

func GreaterThan(x, y *Node) *Node

GreaterThan returns the element-wise operation to the graph.

Standard broadcasting rules apply (see documentation).

The "TotalOrder" version of the operation enforces `-NaN < -Inf < -Finite < -0 < +0 < +Finite < +Inf < +NaN`.

func GreaterThanTotalOrder

func GreaterThanTotalOrder(x, y *Node) *Node

GreaterThanTotalOrder returns the element-wise operation to the graph.

Standard broadcasting rules apply (see documentation).

The "TotalOrder" version of the operation enforces `-NaN < -Inf < -Finite < -0 < +0 < +Finite < +Inf < +NaN`.

func IndicesForShape

func IndicesForShape(g *Graph, shape shapes.Shape) *Node

IndicesForShape enumerates a list of indices for all elements of the given shape. It will always return a node with shape [shape.Size(), shape.Rank()]. E.g: if shape=[3, 2], it returns `[[0 0] [0 1] [1 0] [1 1] [2 0] [2 1]]`.

func Inverse

func Inverse(x *Node) *Node

Inverse returns (1/x).

func Iota

func Iota(g *Graph, shape shapes.Shape, iotaDimension int) *Node

Iota creates a constant of the given shape with increasing numbers (starting from 0) on the given dimension. So Iota([2,2], 1) returns [[0 1][0 1]], while Iota([2,2], 0) returns [[0 0][1 1]].

func IotaFull

func IotaFull(g *Graph, shape shapes.Shape) *Node

IotaFull creates a constant of the given shape with increasing numbers for all values. So `IotaFull([2,2])` returns `[[0 1][2 3]]`.

func L1Norm

func L1Norm(x *Node) *Node

L1Norm returns the L1 norm.

func L2Norm

func L2Norm(x *Node) *Node

L2Norm returns the L2 Norm (same as euclidean length) of x, given by Sqrt(\Sum{x_i^2}).

func LessOrEqual

func LessOrEqual(x, y *Node) *Node

LessOrEqual returns the element-wise operation to the graph.

Standard broadcasting rules apply (see documentation).

The "TotalOrder" version of the operation enforces `-NaN < -Inf < -Finite < -0 < +0 < +Finite < +Inf < +NaN`.

func LessOrEqualTotalOrder

func LessOrEqualTotalOrder(x, y *Node) *Node

LessOrEqualTotalOrder returns the element-wise operation to the graph.

Standard broadcasting rules apply (see documentation).

The "TotalOrder" version of the operation enforces `-NaN < -Inf < -Finite < -0 < +0 < +Finite < +Inf < +NaN`.

func LessThan

func LessThan(x, y *Node) *Node

LessThan returns the element-wise operation to the graph.

Standard broadcasting rules apply (see documentation).

The "TotalOrder" version of the operation enforces `-NaN < -Inf < -Finite < -0 < +0 < +Finite < +Inf < +NaN`.

func LessThanTotalOrder

func LessThanTotalOrder(x, y *Node) *Node

LessThanTotalOrder returns the element-wise operation to the graph.

Standard broadcasting rules apply (see documentation).

The "TotalOrder" version of the operation enforces `-NaN < -Inf < -Finite < -0 < +0 < +Finite < +Inf < +NaN`.

func Log

func Log(x *Node) *Node

Log adds to the graph the corresponding operation on the input node x.

func Log1p

func Log1p(x *Node) *Node

Log1p adds to the graph the corresponding operation on the input node x.

func Logistic

func Logistic(x *Node) *Node

Logistic returns a node with $1/(1+exp(-x))$. Alias to the Sigmoid function.

func LowerTriangular

func LowerTriangular(g *Graph, dim int) *Node

LowerTriangular returns a lower-triangular boolean square matrix of shape `[dim, dim]`.

This can be combined with `Where` to select values of any arbitrary other matrix.

func MaskedSoftmax

func MaskedSoftmax(logits, mask *Node, axes ...int) *Node

MaskedSoftmax computes softmax activations. It's the equivalent to ```

Exp(logits) / ExpandDims(ReduceSum(Exp(logits), -1), -1)

```

But implemented in a numerical stable way.

The list axes defines which axes is it supposed to run the softmax over (the axes that will be summed over). If no axes are given, it is assumed to be [-1], meaning, the last axes.

It ignores values for which the corresponding mask is false, and will return 0 for those fields. mask and logits must have the same shape.

func Max

func Max(lhs, rhs *Node) *Node

Max returns element-wise the max from lhs and rhs. Standard broadcasting rules apply (see documentation).

func Min

func Min(lhs, rhs *Node) *Node

Min returns the min from lhs and rhs for each element. Standard broadcasting rules apply (see documentation).

func MinusOne

func MinusOne(x *Node) *Node

MinusOne returns (x-1).

func Mod

func Mod(x, y *Node) *Node

Mod adds to the graph the module operation on the two input nodes x and y. Standard broadcasting rules apply (see documentation).

func Mul

func Mul(x, y *Node) *Node

Mul adds a node that multiplies the two nodes. Standard broadcasting rules apply (see documentation).

func MulScalar

func MulScalar(x *Node, scalar float64) *Node

MulScalar converts scalar to a constant with x's DType and returns `x * scalar` with proper broadcasting.

func Neg

func Neg(x *Node) *Node

Neg adds to the graph negative of the given node x.

func NoOp

func NoOp(x *Node) *Node

NoOp creates a new Node whose output equals the input. No new XLA op is created, so no costs are actually impose.

func Not

func Not(x *Node) *Node

Not adds to the graph the corresponding operation on the input node x.

func NotEqual

func NotEqual(x, y *Node) *Node

NotEqual returns the element-wise operation to the graph.

Standard broadcasting rules apply (see documentation).

The "TotalOrder" version of the operation enforces `-NaN < -Inf < -Finite < -0 < +0 < +Finite < +Inf < +NaN`.

func NotEqualTotalOrder

func NotEqualTotalOrder(x, y *Node) *Node

NotEqualTotalOrder returns the element-wise operation to the graph.

Standard broadcasting rules apply (see documentation).

The "TotalOrder" version of the operation enforces `-NaN < -Inf < -Finite < -0 < +0 < +Finite < +Inf < +NaN`.

func OneHot

func OneHot(indices *Node, depth int, dtype shapes.DType) *Node

OneHot converts an integer numbers representing indices to it's "one-hot" representation, that is an expanded tensor with the indices position set to 1, and the other positions set to 0. The returned tensor has one extra dimension at the end. For example `OneHot([][]INT64{1, 0, 3}, 4, types.Float32)` returns `[][]F32{{0, 1, 0, 0}, {1, 0, 0, 0}, {0, 0, 0, 1}}` TODO: implement with Select once it's implemented, since it's likely going to be faster (TensorFlow uses that).

func OneMinus

func OneMinus(x *Node) *Node

OneMinus returns (1-x).

func OnePlus

func OnePlus(x *Node) *Node

OnePlus returns (1+x).

func Ones

func Ones(g *Graph, shape shapes.Shape) *Node

Ones creates a computation with the same shape as the input, but with the value 1. It's implemented indirectly using other nodes.

func OnesLike

func OnesLike(x *Node) *Node

OnesLike returns a tensor with the same shape of x, filled with 1's.

func Or

func Or(x, y *Node) *Node

Or adds to the graph the corresponding operation on the two input nodes x and y. Only integer types. Standard broadcasting rules apply (see documentation).

func Pad

func Pad(operand, fillValue *Node, axesConfig ...PadAxis) *Node

Pad injects padding on the start, end or interior (in between each element) of the given operand. There must be at most `operand.Rank()` axesConfig values. Missing PadAxis are assumed to zeros, that is, no padding for those axes.

func PositiveIndicator

func PositiveIndicator(x *Node) *Node

PositiveIndicator returns 1 where x >= 0, 0 otherwise. See also StrictlyPositiveIndicator. E.g: PositiveIndicator({1.0, 0.0001, 0, -0.2, -3.0}) -> [1, 1, 1, 0, 0], with the same shape/dtype as x.

func Pow

func Pow(lhs, rhs *Node) *Node

Pow adds lhs^(rhs) to the graph. Standard broadcasting rules apply (see documentation).

func RSqrt

func RSqrt(x *Node) *Node

RSqrt adds the 1/sqrt(x) operation to the graph.

func ReduceAllMaskedMax

func ReduceAllMaskedMax(x, mask *Node) *Node

ReduceAllMaskedMax reduces all dimensions to a scalar by taking the max.

It ignores values for which the corresponding mask is false. mask and x must have the same shape.

func ReduceAllMaskedSum

func ReduceAllMaskedSum(x, mask *Node) *Node

ReduceAllMaskedSum reduces all dimensions to a scalar by summing.

It ignores values for which the corresponding mask is false. mask and x must have the same shape.

func ReduceAllMax

func ReduceAllMax(x *Node) *Node

ReduceAllMax reduces all dimensions to a scalar by taking the max.

func ReduceAllMean

func ReduceAllMean(x *Node) *Node

ReduceAllMean reduces all dimensions to a scalar by taking the mean.

func ReduceAllMultiply

func ReduceAllMultiply(x *Node) *Node

ReduceAllMultiply reduces all dimensions to a scalar by multiplying.

func ReduceAllSum

func ReduceAllSum(x *Node) *Node

ReduceAllSum reduces all dimensions to a scalar by summing.

func ReduceAndKeep

func ReduceAndKeep(x *Node, reduceFn func(x *Node, reduceAxes ...int) *Node, reduceAxes ...int) *Node

ReduceAndKeep applies the given reduction function but regenerate the reduced dimensions with size 1.

func ReduceAndKeepMasked

func ReduceAndKeepMasked(x, mask *Node, reduceFn func(x, mask *Node, reduceAxes ...int) *Node, reduceAxes ...int) *Node

ReduceAndKeepMasked applies the given masked reduction function but regenerates the reduced dimensions with size 1.

func ReduceMaskedMax

func ReduceMaskedMax(x, mask *Node, reduceAxes ...int) *Node

ReduceMaskedMax reduces by taking the max over the elements of the selected axes of the x. If reduceAxes is nil, reduce over all dimensions to a scalar.

It ignores values for which the corresponding mask is false. mask and x must have the same shape.

func ReduceMaskedSum

func ReduceMaskedSum(x, mask *Node, reduceAxes ...int) *Node

ReduceMaskedSum reduces by summing over the elements of the selected axes of the x. If reduceAxes is nil, reduce over all dimensions to a scalar.

It ignores values for which the corresponding mask is false. mask and x must have the same shape.

func ReduceMax

func ReduceMax(x *Node, reduceAxes ...int) *Node

ReduceMax reduces by taking the max over the elements of the selected axes of the x. If reduceAxes is nil, reduce over all dimensions to a scalar.

func ReduceMean

func ReduceMean(x *Node, reduceAxes ...int) *Node

ReduceMean reduces by taking the mean over the elements of the selected axes of the x.

func ReduceMultiply

func ReduceMultiply(x *Node, reduceAxes ...int) *Node

ReduceMultiply reduces by summing over the elements of the selected axes of the x. If reduceAxes is nil, reduce over all dimensions to a scalar.

func ReduceSum

func ReduceSum(x *Node, reduceAxes ...int) *Node

ReduceSum reduces by summing over the elements of the selected axes of the x. If reduceAxes is nil, reduce over all dimensions to a scalar.

func Reshape

func Reshape(x *Node, dimensions ...int) *Node

Reshape x to the given dimensions. Total size cannot change. One dimension can be left as -1, in which case it will be set to match the size, if possible.

func ReshapeWithShape

func ReshapeWithShape(x *Node, shape shapes.Shape) *Node

ReshapeWithShape reshapes x to the dimensions given by shape. Total size cannot change, neither the DType is allowed to change. Conceptually, this is a limited form of "shape casting".

func Reverse

func Reverse(x *Node, axes ...int) *Node

Reverse returns x with the values for the given dimensions reversed, that is, the value indexed at `i` will be swapped with the value at indexed `(dimension_size - 1 - i)`. The shape remains the same.

func RngNormal

func RngNormal(mu, sigma *Node, shape shapes.Shape) *Node

RngNormal constructs an output of a given shape with random numbers generated following the normal distribution. The parameters mu and sigma, and output shape have to have a floating point elemental type. The parameters furthermore have to be scalar valued.

func RngUniform

func RngUniform(a, b *Node, shape shapes.Shape) *Node

RngUniform constructs an output of a given shape with random numbers generated following the uniform distribution over the interval "$[a,b($". The parameters and output element type have to be a boolean type, an integral type or a floating point types, and the types have to be consistent. Furthermore, the parameters need to be scalar valued. If `b <= a` the result is implementation-defined.

func Round

func Round(x *Node) *Node

Round adds to the graph the corresponding operation on the input node x.

func Scalar

func Scalar(g *Graph, dtype shapes.DType, value float64) *Node

Scalar returns a constant scalar with the given value.

func ScalarOne

func ScalarOne(g *Graph, dtype shapes.DType) *Node

ScalarOne returns a scalar constant 1 for the given DType.

func ScalarZero

func ScalarZero(g *Graph, dtype shapes.DType) *Node

ScalarZero returns a scalar constant 0 for the given DType.

func Scatter

func Scatter(indices, updates *Node, shape shapes.Shape) *Node

Scatter sums up the slices in updates into a new tensor of the given shape, at the locations pointed to by indices. It does the opposite of Gather.

func ScatterAdd

func ScatterAdd(operand, indices, updates *Node) *Node

ScatterAdd adds up the slices in updates into the given operand tensor, at the locations pointed to by indices. It does the opposite of Gather.
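
As an illustrative sketch of the shapes involved -- every name and shape here is an assumption, and the exact indexing convention should be checked against Gather:

```

// counts assumed shaped [numClasses], indices [n, 1] (integer class ids),
// updates [n]: each updates[i] is added at counts[indices[i][0]].
counts = ScatterAdd(counts, indices, updates)

```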

func Sigmoid

func Sigmoid(x *Node) *Node

Sigmoid returns a node with $1/(1+exp(-x))$. Alias to the Logistic function.

func Sign

func Sign(x *Node) *Node

Sign adds to the graph the corresponding operation on the input node x.

func SignPlusOrMinus

func SignPlusOrMinus(x *Node) *Node

SignPlusOrMinus returns +1 or -1 depending on whether x >= 0 or x < 0. It's similar to Sign, except that 0s are considered positive.

func Sin

func Sin(x *Node) *Node

Sin adds to the graph the corresponding operation on the input node x.

func Slice

func Slice(x *Node, axesRanges ...AxisRangeDef) *Node

Slice takes slices of the operand.

Each axis can have a range defined as (start, end) pairs. Any axis for which a range is not specified is assumed to be taken in full. Consider using the shortcut AxisRange to define the ranges.

Examples:

- For `x = {1, 2, 3, 4}`:

  • `Slice(x) = {1, 2, 3, 4}` // AxisRangeDef not given is taken in full.
  • `Slice(x, AxisRange()) = {1, 2, 3, 4}` // Default for AxisRange is the full range.
  • `Slice(x, AxisRange(2)) = {3, 4}` // If only start is given, it is taken to the end.
  • `Slice(x, AxisRange(1,-1)) = {2, 3}` // Negative values are taken from the end of the axis dimension.

- For `x = {{1, 2, 3}, {4, 5, 6}}`:

  • `Slice(x, AxisRange(), AxisRange(0,1)) = {{1}, {4}}` // First axis taken in full, second axis only the first element.
  • `Slice(x, AxisRange(1,2)) = {{4, 5, 6}}` // Missing second AxisRangeDef, assumed to be taken in full.

For example, calling Slice with `x.shape = [5, 5, 5, 5]` and `axesRanges=AxisRange(1,2), AxisRange(), AxisRange(2), AxisRange(0,2)` returns a node shaped `[1, 5, 3, 2]`.

It also works with strides, use the AxisRangeDef.Stride() method to conveniently set it.

Example:

- For `x = {1, 2, 3, 4}`:

  • `Slice(x, AxisRange().Stride(2)) = {1, 3}` // The whole range, but with a stride of 2.

- For `x = {{1, 2, 3}, {4, 5, 6}}`:

  • `Slice(x, AxisRange().Stride(2), AxisRange(-1)) = {{3}}` // Take every 2nd row (so only the 1st here), the last column.

func SliceWithStridesXLA

func SliceWithStridesXLA(x *Node, starts, limits, strides []int) *Node

SliceWithStridesXLA is identical to SliceXLA but allows one to define the strides in each dimension. The length of starts, limits and strides must match the rank of x.

func SliceXLA

func SliceXLA(x *Node, starts, limits []int) *Node

SliceXLA slices the operand from the start indices to the limit indices; e.g.

```

x = [ 0 1 2 3 ]
    [ 4 5 6 7 ]
    [ 8 9 a b ]

y = slice(x, start={1, 1}, limit={2, 3}) => [ 5 6 ]

```

Note that "limit" means up-to-but-not-including; i.e. [start, limit) in 1D range notation.

The length of starts and limits must match the rank of x.

func Softmax

func Softmax(logits *Node, axes ...int) *Node

Softmax computes softmax activations. It is equivalent to

```

Exp(logits) / ExpandDims(ReduceSum(Exp(logits), -1), -1)

```

but implemented in a numerically stable way.

The list axes defines which axes the softmax is computed over (the axes that will be summed over). If no axes are given, it defaults to [-1], meaning the last axis.
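
A common numerically stable formulation shifts by the per-row maximum before exponentiating. The sketch below is illustrative, not necessarily the package's exact implementation; it assumes Div (element-wise division) and that reductions accept negative axes, as Softmax's default suggests:

```

func stableSoftmax(logits *Node) *Node {
	// Subtracting the max doesn't change the result but avoids overflow in Exp.
	maxes := StopGradient(ReduceAndKeep(logits, ReduceMax, -1))
	exps := Exp(Sub(logits, maxes))
	return Div(exps, ReduceAndKeep(exps, ReduceSum, -1))
}

```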

func SplitTuple

func SplitTuple(tuple *Node) []*Node

SplitTuple is a convenience wrapper around GetTupleElement; it returns a slice with all the nodes of the tuple.

func Sqrt

func Sqrt(x *Node) *Node

Sqrt adds to the graph the corresponding operation on the input node x.

func Square

func Square(x *Node) *Node

Square returns x^2 point-wise. Same as `Mul(x, x)`.

func Squeeze

func Squeeze(x *Node, axes ...int) *Node

Squeeze removes `axes` of dimension 1. If `axes` is not set, all axes of dimension 1 are removed. Otherwise, only the provided `axes` are removed. If any of the given `axes` is not of dimension 1, an error is raised in the Graph and an invalid node is returned.

If all dimensions are reduced, it returns a scalar.
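
For example, dropping the size-1 axis left by ReduceAndKeep -- a sketch where the shape is an assumption:

```

sums := ReduceAndKeep(x, ReduceSum, 1) // x assumed [batch, features] -> [batch, 1]
flat := Squeeze(sums, 1)               // shape [batch]

```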

func StopGradient

func StopGradient(x *Node) *Node

StopGradient creates a new NoOp Node through which gradients don't back-propagate. No new XLA op is created, so no cost is actually imposed.

func StrictlyPositiveIndicator

func StrictlyPositiveIndicator(x *Node) *Node

StrictlyPositiveIndicator returns 1 where x > 0, 0 otherwise. E.g: StrictlyPositiveIndicator({1.0, 0.0001, 0, -0.2, -3.0}) -> [1, 1, 0, 0, 0], with the same shape/dtype as x.

func Sub

func Sub(x, y *Node) *Node

Sub adds to the graph the corresponding operation on the two input nodes x and y. Standard broadcasting rules apply (see documentation).

func Tanh

func Tanh(x *Node) *Node

Tanh adds to the graph the corresponding operation on the input node x.

func Transpose

func Transpose(x *Node, axisA, axisB int) *Node

Transpose returns x with the axes axisA and axisB transposed.

func TransposeAllDims

func TransposeAllDims(x *Node, permutation ...int) *Node

TransposeAllDims allows one to transpose any or all dimensions. It permutes the operand axes with the given permutation, so ∀ i, 0 ≤ i < rank ⇒ input_dimensions[permutation[i]] = output_dimensions[i].
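
For example, converting a channels-last image batch to channels-first -- a sketch where the layout is an assumption:

```

// x assumed shaped [batch, height, width, channels]; output axis i takes input
// axis permutation[i], so the result is shaped [batch, channels, height, width].
nchw := TransposeAllDims(x, 0, 3, 1, 2)

```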

func Tuple

func Tuple(nodes ...*Node) *Node

Tuple creates a tuple of several values. It is the means to return several values from one Graph computation.

func UpperTriangular

func UpperTriangular(g *Graph, dim int) *Node

UpperTriangular returns an upper-triangular boolean square matrix of shape `[dim, dim]`.

This can be combined with `Where` to select values of an arbitrary other matrix.

func Where

func Where(condition, onTrue, onFalse *Node) *Node

Where takes element-wise values from onTrue or onFalse depending on the value of condition (expected to be boolean).
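
For example, combining Where with UpperTriangular to zero out the lower triangle of a square matrix -- a sketch, assuming x is shaped `[dim, dim]`:

```

mask := UpperTriangular(x.Graph(), dim) // boolean [dim, dim]
upper := Where(mask, x, ZerosLike(x))   // keeps the upper triangle of x, 0 elsewhere

```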

func Xor

func Xor(x, y *Node) *Node

Xor adds to the graph the corresponding operation on the two input nodes x and y. Only integer types. Standard broadcasting rules apply (see documentation).

func Zeros

func Zeros(g *Graph, shape shapes.Shape) *Node

Zeros creates a node of the given shape, filled with the value 0. It's implemented indirectly using other nodes.

func ZerosLike

func ZerosLike(x *Node) *Node

ZerosLike returns a tensor with the same shape of x, filled with 0's.

func (*Node) AssertDims

func (n *Node) AssertDims(dimensions ...int) bool

AssertDims checks whether the shape has the given dimensions and rank. A value of -1 in dimensions means it can take any value and is not checked.

If the shape is not what was expected, it sets an error in the associated Graph and returns false. If the Graph is already in error state, it also returns false.

This often serves as documentation for the code when implementing complex computational graphs. It allows the reader of the code to corroborate the expected shape of a node.

Example:

```

batchSize := inputs[0].Shape().Dimensions[0]
...
layer := Concatenate(allEmbeddings, -1)
if !layer.AssertDims(batchSize, -1) {  // 2D tensor, with batch size as the leading dimension.
    return nil
}

```

func (*Node) AssertRank

func (n *Node) AssertRank(rank int) bool

AssertRank checks whether the shape has the given rank.

If the shape is not what was expected, it sets an error in the associated Graph and returns false. If the Graph is already in error state, it also returns false.

It can be used in a similar fashion as AssertDims.

func (*Node) AssertScalar

func (n *Node) AssertScalar() bool

AssertScalar checks whether the shape is a scalar.

If the shape is not what was expected, it sets an error in the associated Graph and returns false. If the Graph is already in error state, it also returns false.

It can be used in a similar fashion as AssertDims.

func (*Node) DType

func (n *Node) DType() shapes.DType

DType returns the DType of the node's shape.

func (*Node) Graph

func (n *Node) Graph() *Graph

Graph that holds this Node.

func (*Node) Id

func (n *Node) Id() NodeId

Id is the unique id of this node within the Graph.

func (*Node) Inputs

func (n *Node) Inputs() []*Node

Inputs are the other nodes that are direct inputs to this node. This doesn't include static inputs of some operations, which are not given by other nodes.

func (*Node) IsLogged

func (n *Node) IsLogged() bool

IsLogged returns whether node is marked to be logged.

func (*Node) LogMessage

func (n *Node) LogMessage() string

LogMessage associated with node, if any.

func (*Node) NodeType

func (n *Node) NodeType() xla.NodeType

NodeType identifies the operation performed by the node.

func (*Node) Ok

func (n *Node) Ok() bool

Ok indicates whether the Node was created successfully.

func (*Node) ParameterHandle

func (n *Node) ParameterHandle() ParameterHandle

ParameterHandle returns the parameter id in the graph. Returns InvalidParameterHandle if node is not a parameter.

func (*Node) ParameterName

func (n *Node) ParameterName() string

ParameterName returns the parameter name, if this node is a parameter.

func (*Node) Rank

func (n *Node) Rank() int

Rank returns the rank of the node's shape.

func (*Node) SetLogged

func (n *Node) SetLogged(message string)

SetLogged indicates that a node should be logged by executors.

func (*Node) Shape

func (n *Node) Shape() shapes.Shape

Shape of the output of the Node. It can be nil, for nodes that simply have a side effect, like a "Print" Node.

func (*Node) String

func (n *Node) String() (str string)

String implements Stringer interface. Logged nodes are marked with (*).

func (*Node) Trace

func (n *Node) Trace() error

Trace returns stack-trace in form of an error, of when the node was created. Only available if enabled by `Graph.SetTraced(true)`.

func (*Node) XlaHandle

func (n *Node) XlaHandle() NodeXlaHandle

XlaHandle is used internally to refer to the node counterpart in the XLA implementation.

type NodeId

type NodeId int

NodeId is the unique id of a Node within a Graph.

type NodeXlaHandle

type NodeXlaHandle int

NodeXlaHandle is used by the underlying XLA implementation.

type PadAxis

type PadAxis struct {
	Start, End, Interior int
}

PadAxis defines the amount of padding preceding one axis (Start), at the end of the axis (End), or in between the inputs (Interior). This is used as a parameter for the Pad function.

type ParameterHandle

type ParameterHandle int

ParameterHandle is a key to be used by Graph implementations to refer to its internal parameters.

type ParamsMap

type ParamsMap map[*Node]any

ParamsMap is a shortcut for the map of parameters and their values passed to a graph execution. The values are anything that is accepted by tensor.FromAnyValue().

type PoolBuilder

type PoolBuilder struct {
	// contains filtered or unexported fields
}

PoolBuilder is a helper to build a pool computation. Create it with {Max|Sum|Prod}Pool, set the desired parameters, and when done, call `Done()`.

func MaxPool

func MaxPool(x *Node) *PoolBuilder

MaxPool prepares a max pooling on x with the configured pooling window, for an arbitrary number of spatial dimensions (1D, 2D, 3D, etc.). It returns the max value for the selected window, on the given strides.

It is very flexible, and to ease setting its parameters it returns a PoolBuilder for configuration. Once it is set up, call `PoolBuilder.Done` and it will return the pooled x. Browse through PoolBuilder to see the capabilities and the defaults.

The window sizes must be set with PoolBuilder.Window or PoolBuilder.WindowPerDim.

The shape of x should be `[batch, <spatial_dimensions...>, channels]` if configured with `PoolBuilder.ChannelsAfter()`, the default. If one sets `PoolBuilder.ChannelsFirst()`, the shape should be `[batch, channels, <spatial_dimensions...>]` instead.

There is no separate kernel tensor: the window sizes set with Window or WindowPerDim play that role, one per spatial dimension.

We follow the Keras convention of calling the depth/feature/channels dimension **channels**.
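
For example, a typical 2x2 max pooling -- a sketch; per the Strides documentation below, the stride defaults to the window size, so this halves the spatial dimensions:

```

// x assumed shaped [batch, height, width, channels] (the channels-last default).
pooled := MaxPool(x).Window(2).Done()

```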

func (*PoolBuilder) ChannelsAfter

func (pool *PoolBuilder) ChannelsAfter() *PoolBuilder

ChannelsAfter specifies the order of the axes for x. This is the default.

If this is set x should be shaped `[batch, <spatial_dimensions...>, channels]`.

func (*PoolBuilder) ChannelsFirst

func (pool *PoolBuilder) ChannelsFirst() *PoolBuilder

ChannelsFirst specifies the order of the axes for x. The default is ChannelsAfter.

If this is set x should be shaped `[batch, channels, <spatial_dimensions...>]`.

func (*PoolBuilder) Done

func (pool *PoolBuilder) Done() *Node

Done indicates that the pooling operation is finished being configured; it updates the computation graph with the pooling and returns the resulting Node.

func (*PoolBuilder) NoPadding

func (pool *PoolBuilder) NoPadding() *PoolBuilder

NoPadding removes any padding, so if the window spatial dimensions are > 1, the output shape will be reduced on the edges. This is the default.

func (*PoolBuilder) PadSame

func (pool *PoolBuilder) PadSame() *PoolBuilder

PadSame adds padding on the edges of x such that the output of the pooling has the same shape as the input (assuming strides=1). The default is NoPadding.

func (*PoolBuilder) PaddingPerDim

func (pool *PoolBuilder) PaddingPerDim(paddings [][2]int) *PoolBuilder

PaddingPerDim specifies the padding at the start and at the end of each spatial dimension, that is, one pair ([2]int) per spatial dimension. The default is NoPadding.

func (*PoolBuilder) StridePerDim

func (pool *PoolBuilder) StridePerDim(strides ...int) *PoolBuilder

StridePerDim sets the strides for each spatial dimension of the pooling.

The default is the same value as the window size (set with Window or WindowPerDim).

The stride is how many steps to move after a pooling. A value of 2 halves the input size, since a pooling is done at every other position, and so on. It can be defined separately per dimension.

One cannot use strides and dilation at the same time.

func (*PoolBuilder) Strides

func (pool *PoolBuilder) Strides(strides int) *PoolBuilder

Strides sets the strides of the pooling. It sets the same value for every spatial dimension.

The default is the same value as the window size (set with Window or WindowPerDim).

The stride is how many steps to move after pooling a window. A value of 2 halves the input size, since pooling is done at every other position, and so on. It can be defined separately per dimension with StridePerDim.

One cannot use strides and dilation at the same time.

func (*PoolBuilder) Window

func (pool *PoolBuilder) Window(windowSize int) *PoolBuilder

Window sets the pooling window size for all spatial dimensions to the same windowSize.

There is no default, and this must be set either with Window or WindowPerDim.

func (*PoolBuilder) WindowPerDim

func (pool *PoolBuilder) WindowPerDim(sizes ...int) *PoolBuilder

WindowPerDim sets the pooling window size for each spatial dimension.

There is no default, and this must be set either with Window or WindowPerDim.

type SideParamsFn

type SideParamsFn func(graph *Graph, params []*tensor.Device)

SideParamsFn is a function that sets side parameters during execution, for Graphs that define them. Typically, this is used to set the variables.

type VJP

type VJP func(node, v *Node, outputShape shapes.Shape) []*Node

VJP returns the vector-Jacobian product $v \cdot \text{Jacobian}$ of the given node, with respect to each of its inputs (given in node.Inputs()). outputShape is the shape of the value for which the gradient is being calculated. For now this is only used by Gradient, so one can expect outputShape to be scalar and `v.Shape()` to be the same as `output.Shape()`. But this won't hold once Jacobian functionality (like a Gradient where the output is a non-scalar tensor) is defined.

Directories

Path Synopsis
graphtest: Package graphtest holds test utilities for packages that depend on the graph package.
