gorgonia.org/gorgonia

# Package gorgonia

v0.9.14

Published: Sep 10, 2020

## Overview ¶

Package gorgonia is a library that helps facilitate machine learning in Go. Write and evaluate mathematical equations involving multidimensional arrays easily. Do differentiation with them just as easily.

Example (Autodiff)

Autodiff showcases automatic differentiation

Code:

```g := NewGraph()

var x, y, z *Node
var err error

// define the expression
x = NewScalar(g, Float64, WithName("x"))
y = NewScalar(g, Float64, WithName("y"))
if z, err = Add(x, y); err != nil {
	log.Fatal(err)
}

// set initial values then run
Let(x, 2.0)
Let(y, 2.5)

// by default, LispMachine performs forward mode and backwards mode execution
m := NewLispMachine(g)
defer m.Close()
if err = m.RunAll(); err != nil {
	log.Fatal(err)
}

fmt.Printf("z: %v\n", z.Value())

if xgrad, err := x.Grad(); err == nil {
	fmt.Printf("dz/dx: %v\n", xgrad)
}

if ygrad, err := y.Grad(); err == nil {
	fmt.Printf("dz/dy: %v\n", ygrad)
}
```
```z: 4.5
dz/dx: 1
dz/dy: 1
```
Example (Basic)

Basic example of representing mathematical equations as graphs.

In this example, we want to represent the following equation

```z = x + y
```

Code:

```g := NewGraph()

var x, y, z *Node
var err error

// define the expression
x = NewScalar(g, Float64, WithName("x"))
y = NewScalar(g, Float64, WithName("y"))
if z, err = Add(x, y); err != nil {
	log.Fatal(err)
}

// create a VM to run the program on
machine := NewTapeMachine(g)
defer machine.Close()

// set initial values then run
Let(x, 2.0)
Let(y, 2.5)
if err = machine.RunAll(); err != nil {
	log.Fatal(err)
}

fmt.Printf("%v", z.Value())
```
```4.5
```
Example (ConcurrentTraining)

Code:

```xV, yV, bs := prep()
concurrentTraining(xV, yV, bs, epochs)

fmt.Printf("x:\n%1.1v", xV)
fmt.Printf("y:\n%1.1v", yV)
```
```x:
⎡    6      7      8      9  ... 5e+01  5e+01  5e+01  5e+01⎤
⎢7e+01  7e+01  7e+01  7e+01  ... 1e+02  1e+02  1e+02  1e+02⎥
⎢1e+02  1e+02  1e+02  1e+02  ... 2e+02  2e+02  2e+02  2e+02⎥
⎢2e+02  2e+02  2e+02  2e+02  ... 2e+02  2e+02  2e+02  2e+02⎥
.
.
.
⎢4e+07  4e+07  4e+07  4e+07  ... 4e+07  4e+07  4e+07  4e+07⎥
⎢4e+07  4e+07  4e+07  4e+07  ... 4e+07  4e+07  4e+07  4e+07⎥
⎢4e+07  4e+07  4e+07  4e+07  ... 4e+07  4e+07  4e+07  4e+07⎥
⎣4e+07  4e+07  4e+07  4e+07  ... 4e+07  4e+07  4e+07  4e+07⎦
y:
[-1e+02  -4e+02  -7e+02  -9e+02  ... -2e+08  -2e+08  -2e+08  -2e+08]
```
Example (ErrorHandling)

Gorgonia provides an API that is fairly idiomatic - most of the functions in the API return (T, error). This is useful for many cases, such as an interactive shell for deep learning. However, it must also be acknowledged that this makes composing functions together a bit cumbersome.

To that end, Gorgonia provides two alternative methods: first, the `Lift`-based functions; second, the `Must` function.

Code:

```// Lift
g := NewGraph()
x := NewMatrix(g, Float32, WithShape(2, 3), WithInit(RangedFrom(0)), WithName("a"))
y := NewMatrix(g, Float32, WithShape(3, 2), WithInit(ValuesOf(float32(2))), WithName("b"))
z := NewMatrix(g, Float32, WithShape(2, 1), WithInit(Zeroes()), WithName("bias"))
wrong := NewMatrix(g, Float64, WithShape(2, 3), WithInit(RangedFrom(0)), WithName("wrong"))

// Different LiftXXX functions exist for different API signatures
// A good way to do this is to have some instantiated functions at the top level of the package
mul := Lift2(Mul)
add := Lift2(Add)
addB := Lift2Broadcast(BroadcastAdd)
sq := Lift1(Square)
sm := Lift1Axial(SoftMax)

nn := sm(sq(addB(mul(x, y), z, nil, []byte{1}))) // OK
nnPlusWrong := add(nn, wrong)                    // Wrong types. Will Error
fmt.Printf("nn: %v\nAn error occurs: %v\n", nn, nnPlusWrong.Err())

// Must()
h := NewGraph()
a := NewMatrix(h, Float32, WithShape(2, 3), WithInit(RangedFrom(0)), WithName("a"))
b := NewMatrix(h, Float32, WithShape(3, 2), WithInit(ValuesOf(float32(2))), WithName("b"))
c := NewMatrix(h, Float32, WithShape(2, 1), WithInit(RangedFrom(0)), WithName("c"))
wrong2 := NewMatrix(h, Float64, WithShape(2, 3), WithInit(RangedFrom(0)), WithName("wrong"))

// This is OK
nn2 := Must(SoftMax(
	Must(Square(
		Must(BroadcastAdd(
			Must(Mul(a, b)),
			c,
			nil, []byte{1},
		)),
	)),
))
fmt.Printf("nn2: %v\n", nn2)

defer func() {
	if r := recover(); r != nil {
		fmt.Printf("An error occurs (caught by recover()): %v\n", r)
	}
}()
nn2PlusWrong := Must(Add(nn2, wrong2))
_ = nn2PlusWrong
```
```nn: ÷ false(%a, %f) :: Matrix float32
An error occurs: Type inference error. Op: + false. Children: [Matrix float32, Matrix float64], OpType:Matrix a → Matrix a → Matrix a: Unable to unify while inferring type of + false: Unification Fail: float64 ~ float32 cannot be unified
nn2: ÷ false(%a, %f) :: Matrix float32
An error occurs (caught by recover()): Type inference error. Op: + false. Children: [Matrix float32, Matrix float64], OpType:Matrix a → Matrix a → Matrix a: Unable to unify while inferring type of + false: Unification Fail: float64 ~ float32 cannot be unified
```
Example (KeepDims)

Code:

```g := NewGraph()
a := NodeFromAny(g, tensor.New(tensor.WithShape(2, 3), tensor.WithBacking([]float64{1, 2, 3, 4, 5, 6})))
m1, _ := Mean(a, 1)
m2, _ := KeepDims(a, false, func(a *Node) (*Node, error) { return Mean(a, 1) })
m3, _ := Mean(a, 0)
m4, _ := KeepDims(a, true, func(a *Node) (*Node, error) { return Mean(a, 0) })
m5, _ := KeepDims(a, true, func(a *Node) (*Node, error) { return Mean(a) })

// these reads are necessary as the VM may feel free to clobber the underlying data.
// e.g. if m1.Value() is used in the print statement below, the answer will be wrong.
// This is because before the VM executes the operations, a check is done to see if unsafe
// operations may be done. Unsafe operations are useful in saving memory.
// In this example, Reshape can be unsafely done if no other node is "using" m1,
// so m1.Value() will have its shape clobbered. Thus if m1.Value() is read after the VM has run,
// there is no guarantee that the data is correct. The only way around this is to "use" m1, via the Read() function.
var m1v, m2v, m3v, m4v Value
Read(m1, &m1v)
Read(m2, &m2v)
Read(m3, &m3v)
Read(m4, &m4v)

vm := NewTapeMachine(g)
if err := vm.RunAll(); err != nil {
	panic(err)
}

fmt.Printf("a:\n%v\n", a.Value())
fmt.Printf("m1 (shape: %v):\n%v\n", m1.Value().Shape(), m1v)
fmt.Printf("m2 (shape: %v):\n%v\n", m2.Value().Shape(), m2v)
fmt.Printf("m3 (shape: %v):\n%v\n", m3.Value().Shape(), m3v)
fmt.Printf("m4 (shape: %v):\n%v\n", m4.Value().Shape(), m4v)
fmt.Printf("m5 (shape: %v):\n%v\n", m5.Value().Shape(), m5.Value())
```
```a:
⎡1  2  3⎤
⎣4  5  6⎦

m1 (shape: (2)):
[2  5]
m2 (shape: (2, 1)):
C[2  5]
m3 (shape: (3)):
[2.5  3.5  4.5]
m4 (shape: (1, 3)):
R[2.5  3.5  4.5]
m5 (shape: (1, 1)):
⎡3.5⎤
```
Example (LinearRegression)

### Linear Regression Example¶

The formula for a straight line is

```y = mx + c
```

We want to find an `m` and a `c` that fits the equation well. We'll do it in both float32 and float64 to showcase the extensibility of Gorgonia

Code:

```package main

import (
	"fmt"
	"log"
	"math/rand"
	"runtime"

	. "gorgonia.org/gorgonia"
	"gorgonia.org/tensor"
)

const (
	vecSize = 1000000
)

// manually generate a fake dataset which is y=2x+random
func xy(dt tensor.Dtype) (x tensor.Tensor, y tensor.Tensor) {
	var xBack, yBack interface{}
	switch dt {
	case Float32:
		xBack = tensor.Range(tensor.Float32, 1, vecSize+1).([]float32)
		yBackC := tensor.Range(tensor.Float32, 1, vecSize+1).([]float32)

		for i, v := range yBackC {
			yBackC[i] = v*2 + rand.Float32()
		}
		yBack = yBackC
	case Float64:
		xBack = tensor.Range(tensor.Float64, 1, vecSize+1).([]float64)
		yBackC := tensor.Range(tensor.Float64, 1, vecSize+1).([]float64)

		for i, v := range yBackC {
			yBackC[i] = v*2 + rand.Float64()
		}
		yBack = yBackC
	}

	x = tensor.New(tensor.WithBacking(xBack), tensor.WithShape(vecSize))
	y = tensor.New(tensor.WithBacking(yBack), tensor.WithShape(vecSize))
	return
}

func random(dt tensor.Dtype) interface{} {
	rand.Seed(13370)
	switch dt {
	case tensor.Float32:
		return rand.Float32()
	case tensor.Float64:
		return rand.Float64()
	default:
		panic("Unhandled dtype")
	}
}

func linregSetup(Float tensor.Dtype) (m, c *Node, machine VM) {
	var xT, yT Value
	xT, yT = xy(Float)

	g := NewGraph()
	x := NewVector(g, Float, WithShape(vecSize), WithName("x"), WithValue(xT))
	y := NewVector(g, Float, WithShape(vecSize), WithName("y"), WithValue(yT))
	m = NewScalar(g, Float, WithName("m"), WithValue(random(Float)))
	c = NewScalar(g, Float, WithName("c"), WithValue(random(Float)))

	pred := Must(Add(Must(Mul(x, m)), c))
	se := Must(Square(Must(Sub(pred, y))))
	cost := Must(Mean(se))

	if _, err := Grad(cost, m, c); err != nil {
		log.Fatalf("Failed to backpropagate: %v", err)
	}

	// machine := NewLispMachine(g)  // you can use a LispMachine, but it'll be VERY slow.
	machine = NewTapeMachine(g, BindDualValues(m, c))
	return m, c, machine
}

func linregRun(m, c *Node, machine VM, iter int, autoCleanup bool) (retM, retC Value) {
	if autoCleanup {
		defer machine.Close()
	}
	model := []ValueGrad{m, c}
	solver := NewVanillaSolver(WithLearnRate(0.001), WithClip(5)) // good idea to clip

	if CUDA {
	}
	var err error
	for i := 0; i < iter; i++ {
		if err = machine.RunAll(); err != nil {
			fmt.Printf("Error during iteration: %v: %v\n", i, err)
			break
		}

		if err = solver.Step(model); err != nil {
			log.Fatal(err)
		}

		machine.Reset() // Reset is necessary in a loop like this
	}
	return m.Value(), c.Value()
}

func linearRegression(Float tensor.Dtype, iter int) (retM, retC Value) {
	defer runtime.GC()
	m, c, machine := linregSetup(Float)
	return linregRun(m, c, machine, iter, true)
}

// Linear Regression Example
//
// The formula for a straight line is
//	y = mx + c
// We want to find an `m` and a `c` that fits the equation well. We'll do it in both float32 and float64 to showcase the extensibility of Gorgonia
func main() {
	var m, c Value
	// Float32
	m, c = linearRegression(Float32, 500)
	fmt.Printf("float32: y = %3.3fx + %3.3f\n", m, c)

	// Float64
	m, c = linearRegression(Float64, 500)
	fmt.Printf("float64: y = %3.3fx + %3.3f\n", m, c)
}
```
```float32: y = 2.001x + 2.001
float64: y = 2.001x + 2.001
```

This example showcases the reasons for the more confusing functions.

Code:

```// The main reason for the following functions is to make it easier to create APIs.
// Gorgonia's APIs are very explicit, hence not very user friendly.

const (
	n        = 32
	features = 784
	size     = 100
)

// The following is an example of how to set up a neural network

// First, we set up the components
g := NewGraph()
w1 := NewMatrix(g, Float32, WithShape(features, size), WithName("w"), WithInit(GlorotU(1)))
b1 := NewMatrix(g, Float32, WithShape(1, size), WithName("b"), WithInit(Zeroes()))
x1 := NewMatrix(g, Float32, WithShape(n, features), WithName("x"))

// Then we write the expression:
var xw, xwb, act *Node
var err error
if xw, err = Mul(x1, w1); err != nil {
	fmt.Printf("Err while Mul: %v\n", err)
}
if xwb, err = BroadcastAdd(xw, b1, nil, []byte{0}); err != nil {
	fmt.Printf("Err while Add: %v\n", err)
}
if act, err = Tanh(xwb); err != nil {
	fmt.Printf("Err while Tanh: %v\n", err)
}
fmt.Printf("act is a %T\n", act)

// The following is how to set up the exact same network

// First we set up our environment
//
// These LiftXXX functions transform Gorgonia's default API into functions that return `Result`
var mul = Lift2(Mul)                   // Lift2 lifts a func(a, b *Node) (*Node, error)
var tanh = Lift1(Tanh)                 // Lift1 lifts a func(a *Node) (*Node, error)
var add = Lift2Broadcast(BroadcastAdd) // Lift2Broadcast lifts a broadcasting function

// First we set up the components
h := NewGraph()
w2 := NewMatrix(h, Float32, WithShape(features, size), WithName("w"), WithInit(GlorotU(1)))
b2 := NewMatrix(h, Float32, WithShape(1, size), WithName("b"), WithInit(Zeroes()))
x2 := NewMatrix(h, Float32, WithShape(n, features), WithName("x"))

// Then we write the expression
act2 := tanh(add(mul(x2, w2), b2, nil, []byte{0}))
fmt.Printf("act2 is a %T (note it's wrapped in the `Result` type)\n", act2)
fmt.Println()
// both g and h contain the same graph, but the expression for act2 is easier to write
fmt.Printf("Both g and h are the same graph:\ng: %v\nh: %v\n", g.AllNodes(), h.AllNodes())
```
```act is a *gorgonia.Node
act2 is a *gorgonia.Node (note it's wrapped in the `Result` type)

Both g and h are the same graph:
g: [w, b, x, A × B(%2, %0), Reshape(1, 100)(%1), SizeOf=32(%3), Repeat0(%4, %5), + false(%3, %6), tanh(%7)]
h: [w, b, x, A × B(%2, %0), Reshape(1, 100)(%1), SizeOf=32(%3), Repeat0(%4, %5), + false(%3, %6), tanh(%7)]
```

This example showcases dealing with errors. This is part 2 of the raison d'être of the more complicated functions - dealing with errors

Code:

```// Observe that in a similar example, errors are manually controllable in the original case,
// but automated in the second case
const (
	n        = 32
	features = 784
	size     = 100
)

// The following is an example of how to set up a neural network

// First, we set up the components
g := NewGraph()
w1 := NewMatrix(g, Float32, WithShape(features, size), WithName("w"), WithInit(GlorotU(1)))
b1 := NewMatrix(g, Float32, WithShape(1, size), WithName("b"), WithInit(Zeroes()))
x1 := NewMatrix(g, Float32, WithShape(n, features), WithName("x"))

// Then we write the expression:
var xw, xwb, act *Node
var err error
if xw, err = Mul(x1, w1); err != nil {
	fmt.Printf("Err while Mul: %v\n", err)
}
// we introduce an error here - it should be []byte{0}
if xwb, err = BroadcastAdd(xw, b1, nil, []byte{1}); err != nil {
	fmt.Printf("Err while Add: %v\n", err)
	goto case2
}
if act, err = Tanh(xwb); err != nil {
	fmt.Printf("Err while Tanh: %v\n", err)
}
_ = act // will never happen

case2:

// The following is how to set up the exact same network

// First we set up our environment
//
// Now, remember all these functions no longer return (*Node, error). Instead they return `Result`
var mul = Lift2(Mul)
var tanh = Lift1(Tanh)
var add = Lift2Broadcast(BroadcastAdd)

// First we set up the components
h := NewGraph()
w2 := NewMatrix(h, Float32, WithShape(features, size), WithName("w"), WithInit(GlorotU(1)))
b2 := NewMatrix(h, Float32, WithShape(1, size), WithName("b"), WithInit(Zeroes()))
x2 := NewMatrix(h, Float32, WithShape(n, features), WithName("x"))

// Then we write the expression
act2 := tanh(add(mul(x2, w2), b2, nil, []byte{1}))

// REMEMBER: act2 is not a *Node! It is a Result
fmt.Printf("act2: %v\n", act2)

// To extract error, use CheckOne
fmt.Printf("error: %v\n", CheckOne(act2))

// If you extract the *Node from an error, you get nil
fmt.Printf("Node: %v\n", act2.Node())
```
```Err while Add: Failed to infer shape. Op: + false: Shape mismatch: (32, 100) and (1, 10000)
act2: Failed to infer shape. Op: + false: Shape mismatch: (32, 100) and (1, 10000)
error: Failed to infer shape. Op: + false: Shape mismatch: (32, 100) and (1, 10000)
Node: <nil>
```
Example (NonConcurrentTraining)

Code:

```xV, yV, _ := prep()
nonConcurrentTraining(xV, yV, epochs)

fmt.Printf("x:\n%1.1v", xV)
fmt.Printf("y:\n%1.1v", yV)
```
```x:
⎡    6      7      8      9  ... 5e+01  5e+01  5e+01  5e+01⎤
⎢7e+01  7e+01  7e+01  7e+01  ... 1e+02  1e+02  1e+02  1e+02⎥
⎢1e+02  1e+02  1e+02  1e+02  ... 2e+02  2e+02  2e+02  2e+02⎥
⎢2e+02  2e+02  2e+02  2e+02  ... 2e+02  2e+02  2e+02  2e+02⎥
.
.
.
⎢4e+07  4e+07  4e+07  4e+07  ... 4e+07  4e+07  4e+07  4e+07⎥
⎢4e+07  4e+07  4e+07  4e+07  ... 4e+07  4e+07  4e+07  4e+07⎥
⎢4e+07  4e+07  4e+07  4e+07  ... 4e+07  4e+07  4e+07  4e+07⎥
⎣4e+07  4e+07  4e+07  4e+07  ... 4e+07  4e+07  4e+07  4e+07⎦
y:
[-1e+02  -4e+02  -7e+02  -9e+02  ... -2e+08  -2e+08  -2e+08  -2e+08]
```
Example (SymbolicDiff)

SymbolicDiff showcases symbolic differentiation

Code:

```g := NewGraph()

var x, y, z *Node
var err error

// define the expression
x = NewScalar(g, Float64, WithName("x"))
y = NewScalar(g, Float64, WithName("y"))
if z, err = Add(x, y); err != nil {
	log.Fatal(err)
}

// symbolically differentiate z with regards to x and y
// this adds the gradient nodes to the graph g
var grads Nodes
if grads, err = Grad(z, x, y); err != nil {
	log.Fatal(err)
}

// create a VM to run the program on
machine := NewTapeMachine(g)
defer machine.Close()

// set initial values then run
Let(x, 2.0)
Let(y, 2.5)
if err = machine.RunAll(); err != nil {
	log.Fatal(err)
}

fmt.Printf("z: %v\n", z.Value())

if xgrad, err := x.Grad(); err == nil {
	fmt.Printf("dz/dx: %v | %v\n", xgrad, grads[0])
}

if ygrad, err := y.Grad(); err == nil {
	fmt.Printf("dz/dy: %v | %v\n", ygrad, grads[1])
}
```
```z: 4.5
dz/dx: 1 | 1
dz/dy: 1 | 1
```

## Index ¶

### Constants ¶

`const CUDA = false`

CUDA indicates if this build is using CUDA

`const DEBUG = false`

DEBUG indicates if this build is in debug mode. It is not.

### Variables ¶

```var (
	// Float64 ...
	Float64 = tensor.Float64
	// Float32 ...
	Float32 = tensor.Float32
	// Int ...
	Int = tensor.Int
	// Int64 ...
	Int64 = tensor.Int64
	// Int32 ...
	Int32 = tensor.Int32
	// Byte ...
	Byte = tensor.Uint8
	// Bool ...
	Bool = tensor.Bool

	// Ptr is equivalent to interface{}. Ugh Ugh Ugh
	Ptr = tensor.UnsafePointer
)```

### func BatchNorm¶

`func BatchNorm(x, scale, bias *Node, momentum, epsilon float64) (retVal, γ, β *Node, op *BatchNormOp, err error)`

BatchNorm applies batch normalization. This operator can be used for the forward pass or for training. For evaluation only, the "op" output can be discarded. In the training phase, γ and β can be discarded, and the op should be used.
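A minimal sketch of wiring BatchNorm into a graph. The 4D shape, momentum (0.9) and epsilon (1e-5) are illustrative assumptions, and the sketch assumes scale and bias may be passed as nil, since γ and β are created by BatchNorm itself (see BatchNormOp below):

```g := NewGraph()
x := NewTensor(g, Float64, 4, WithShape(4, 3, 2, 2), WithInit(GlorotU(1)), WithName("x"))

// γ and β are created by BatchNorm; passing nil scale and bias is an
// assumption of this sketch
y, γ, β, op, err := BatchNorm(x, nil, nil, 0.9, 1e-5)
if err != nil {
	log.Fatal(err)
}
op.SetTraining() // training phase; call op.SetTesting() before running inference
_, _, _ = y, γ, β
```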

### func Binomial32¶

`func Binomial32(trials, prob float64, s ...int) []float32`

Binomial32 returns a []float32 drawn from a binomial distribution given the trial and probability parameters.

### func Binomial64¶

`func Binomial64(trials, prob float64, s ...int) []float64`

Binomial64 returns a []float64 drawn from a binomial distribution given the trial and probability parameters.

### func Broadcast¶

`func Broadcast(a, b *Node, pattern BroadcastPattern) (*Node, *Node, error)`

Broadcast applies the pattern to the input nodes and returns two nodes suitable for a binary operator. Broadcast works somewhat like Numpy's broadcast, except it's now exposed as a function.

### func CheckOne¶

`func CheckOne(in Input) error`

CheckOne checks whether an input is an error

### func Compile¶

`func Compile(g *ExprGraph) (prog *program, locMap map[*Node]register, err error)`

Compile takes a graph and outputs a program suitable for *tapeMachine to run

### func CompileFunction¶

`func CompileFunction(g *ExprGraph, inputs, outputs Nodes) (prog *program, locMap map[*Node]register, err error)`

CompileFunction takes a graph, subsets it based on the input and output nodes provided, and outputs a program suitable for *tapeMachine to run. It is analogous to theano.Function(). If some input nodes are not used or are not reachable, this function will return an error.
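A sketch of the compile-then-run workflow; it assumes the compiled program and location map are handed to a tape machine via the WithPrecompiled VMOpt:

```g := NewGraph()
x := NewScalar(g, Float64, WithName("x"))
y := NewScalar(g, Float64, WithName("y"))
z := Must(Add(x, y))

// compile once...
prog, locMap, err := Compile(g)
if err != nil {
	log.Fatal(err)
}

// ...then hand the compiled artifacts to a tape machine (WithPrecompiled is assumed here)
vm := NewTapeMachine(g, WithPrecompiled(prog, locMap))
defer vm.Close()

Let(x, 2.0)
Let(y, 2.5)
if err := vm.RunAll(); err != nil {
	log.Fatal(err)
}
fmt.Println(z.Value()) // 4.5
```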

### func DebugDerives¶

`func DebugDerives()`

DebugDerives turns on the derivation debug option when printing a graph

### func DimSizersToShapes¶

`func DimSizersToShapes(ds []DimSizer) ([]tensor.Shape, error)`

DimSizersToShapes is a convenience function to convert a slice of DimSizer to a slice of tensor.Shape. It will return an error if any of them isn't a tensor.Shape

### func DontDebugDerives¶

`func DontDebugDerives()`

DontDebugDerives turns off derivation debug option when printing a graph. It is off by default

### func Err¶

`func Err(e error) gErr`

Err is a function that returns a gErr. It wraps errors with stack information. A gErr implements Result, as well as error. This way, the Err() method acts as an unwrapper.

### func FmtNodeMap¶

`func FmtNodeMap(m interface{}) mapFmt`

FmtNodeMap is a convenience function to print map[*Node]<T>

The fmt flag that makes it all nicely formatted is "-". Because a map consists of two types (key's type and val's type), and the Go fmt verb doesn't quite allow us to do something like "%ds", a hack is introduced to enable nicer printing of map[*Node]<T>

Here's the hack: The "#" flag is used to indicate if the map will use the Node's ID or Name when formatting the map.

```%-v 	nodeName:%v
%-#v	nodeID:%v
%-d 	nodeName:%x
%-#d 	nodeID: %x
%-p 	nodeName:%p
%-#p	nodeID:%p
```

If the "-" flag is not found, then the formatter returns the default Go format for map[<T>]<T2>

### func Gaussian32¶

`func Gaussian32(mean, stdev float64, s ...int) []float32`

Gaussian32 returns a []float32 drawn from a gaussian distribution as defined by the mean and stdev

### func Gaussian64¶

`func Gaussian64(mean, stdev float64, s ...int) []float64`

Gaussian64 returns a []float64 drawn from a gaussian distribution as defined by the mean and stdev

### func GlorotEtAlN32¶

`func GlorotEtAlN32(gain float64, s ...int) []float32`

GlorotEtAlN32 returns float32 weights sampled from a normal distribution using the methods specified in Glorot et al. (2010). See also: http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf

### func GlorotEtAlN64¶

`func GlorotEtAlN64(gain float64, s ...int) []float64`

GlorotEtAlN64 returns float64 weights sampled from a normal distribution using the methods specified in Glorot et al. (2010). See also: http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf

### func GlorotEtAlU32¶

`func GlorotEtAlU32(gain float64, s ...int) []float32`

GlorotEtAlU32 returns float32 weights sampled from a uniform distribution using the methods specified in Glorot et al. (2010). See also: http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf

For best results, use:

```1.0 for gain for weights that will be used in linear and/or sigmoid units
math.Sqrt(2.0) for gain for weights that will be used in ReLU units
math.Sqrt(2.0 / (1+alpha*alpha)) for ReLU that are leaky with alpha
```

### func GlorotEtAlU64¶

`func GlorotEtAlU64(gain float64, s ...int) []float64`

GlorotEtAlU64 returns float64 weights sampled from a uniform distribution using the methods specified in Glorot et al. (2010). See also: http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf

For best results, use:

```1.0 for gain for weights that will be used in linear and/or sigmoid units
math.Sqrt(2.0) for gain for weights that will be used in ReLU units
math.Sqrt(2.0 / (1+alpha*alpha)) for ReLU that are leaky with alpha
```

### func GraphCollisionStats¶

`func GraphCollisionStats() (int, int, int)`

GraphCollisionStats returns the collisions in the graph only when built with the debug tag, otherwise it's a noop that returns 0

### func HeEtAlN64¶

`func HeEtAlN64(gain float64, s ...int) []float64`

HeEtAlN64 returns float64 weights sampled from a normal distro, using the methods described in He et al (2015). The formula is:

```randn(n) * sqrt(2/n)
```

For best results, use:

```1.0 for gain for weights that will be used in linear and/or sigmoid units
math.Sqrt(2.0) for gain for weights that will be used in ReLU units
math.Sqrt(2.0 / (1+alpha*alpha)) for ReLU that are leaky with alpha
```

### func HeEtAlU64¶

`func HeEtAlU64(gain float64, s ...int) []float64`

HeEtAlU64 returns float64 weights sampled from a uniform distro, using the methods described in He et al (2015). The formula is:

```randn(n) * sqrt(2/n)
```

For best results, use:

```1.0 for gain for weights that will be used in linear and/or sigmoid units
math.Sqrt(2.0) for gain for weights that will be used in ReLU units
math.Sqrt(2.0 / (1+alpha*alpha)) for ReLU that are leaky with alpha
```

### func Let¶

`func Let(n *Node, be interface{}) error`

Let binds a Value to a node that is a variable. A variable is represented as a *Node with no Op. It is equivalent to:

```x = 2
```

### func Lift1¶

`func Lift1(fn func(a *Node) (*Node, error)) func(a Input) Result`

Lift1 decorates a function with a precheck and post function lifting

### func Lift1Axial¶

`func Lift1Axial(fn func(a *Node, axes ...int) (*Node, error)) func(a Input, axes ...int) Result`

Lift1Axial decorates a function with a precheck and post function lifting

### func Lift2¶

`func Lift2(fn func(a, b *Node) (*Node, error)) func(a, b Input) Result`

Lift2 decorates a function with a precheck and post function lifting

### func Lift2Broadcast¶

`func Lift2Broadcast(fn func(a, b *Node, pat1, pat2 []byte) (*Node, error)) func(a, b Input, pat1, pat2 []byte) Result`

Lift2Broadcast decorates a function with a precheck and post function lifting

### func NewLispMachine¶

`func NewLispMachine(g *ExprGraph, opts ...VMOpt) *lispMachine`

NewLispMachine creates a VM that executes the graph as it is traversed. Depending on the VMOpts passed in this VM is also capable of performing automatic differentiation.

### func NewTapeMachine¶

`func NewTapeMachine(g *ExprGraph, opts ...VMOpt) *tapeMachine`

NewTapeMachine creates a VM that compiles a graph into a prog.

### func ReturnNode¶

`func ReturnNode(n *Node)`

ReturnNode returns a node to the pool. It does not check that the *Node has been removed from the graph. USE WITH CAUTION.

### func ReturnType¶

`func ReturnType(t hm.Type)`

ReturnType ...

### func S¶

`func S(start int, opt ...int) tensor.Slice`

S creates a tensor.Slice. end is optional. It should be passed in as the first param of the optionals. step is optional. It should be passed in as the second param of the optionals.

Default end is start+1. Default step is 1, unless end == start+1, then it defaults to 0
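A few illustrative calls, using the symbolic Slice function to consume the resulting tensor.Slice (shapes and values are arbitrary):

```s0 := S(1)       // [1:2] - end defaults to start+1
s1 := S(1, 5)    // [1:5] - step defaults to 1
s2 := S(1, 5, 2) // [1:5:2]

g := NewGraph()
x := NewVector(g, Float64, WithShape(10), WithInit(RangedFrom(0)), WithName("x"))
sliced, err := Slice(x, s1) // symbolic x[1:5]
if err != nil {
	log.Fatal(err)
}
_, _, _ = s0, s2, sliced
```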

### func SetDerivOf¶

`func SetDerivOf(deriv, of *Node)`

SetDerivOf is used to hack around the fundamental limitations of Gorgonia.

Specifically it is used to set a node as the derivative of another node, used in the cuDNN version of batch norm.

The cuDNN BatchNorm operation produces the derivatives for the scale and bias as a side effect of calculating the derivative of the input. Because Gorgonia's Ops are modelled as pure functions (with no tuples), this causes a bit of trouble. With the clever use of scratch space ops, multireturn can be simulated. But this causes derivatives to not be set correctly.

### func SetOptimizationLevel¶

`func SetOptimizationLevel(i int)`

SetOptimizationLevel sets the fast math optimization level. By default, fast math is turned off, and this function is a no-op.

Use the `fastmath` build tag to use fast math

### func TransformResult¶

`func TransformResult(ins ...Input) func(a Input, err error) Result`

TransformResult is like LiftResult, but allows for custom data types that fulfil Mker

### func TypeOf¶

`func TypeOf(v Value) hm.Type`

TypeOf returns the Type of the value

### func Uniform32¶

`func Uniform32(low, high float64, s ...int) []float32`

Uniform32 returns a []float32 drawn from a uniform distribution between [low, high) that is provided

### func Uniform64¶

`func Uniform64(low, high float64, s ...int) []float64`

Uniform64 returns a []float64 drawn from a uniform distribution between [low, high) that is provided

### func UnsafeLet¶

`func UnsafeLet(n *Node, be interface{}) error`

UnsafeLet binds a Value to any node, not just a variable node. This means that you can use it to change any node's value at the runtime of the graph. UNSAFE!

Additional notes: if `be` is a tensor.Slice, and the node's op is a sliceOp or sliceIncrOp, the op's slice will be replaced with the new slice.

### func Use¶

`func Use(b BLAS)`

Use defines which BLAS implementation gorgonia should use. The default is Gonum's Native. These are the other options:

```Use(blase.Implementation())
Use(cubone.Implementation())
Use(cgo.Implementation)
```

Note the differences in the brackets. The blase and cubone ones are functions.

### func UseNonStable¶

`func UseNonStable()`

UseNonStable turns off the stabilization functions when building graphs.

### func UseStabilization¶

`func UseStabilization()`

UseStabilization sets the global option to invoke stabilization functions when building the graph. Numerical stabilization is on by default

### func ValueClose¶

`func ValueClose(a, b Value) bool`

ValueClose checks whether two values are close to one another. It's predominantly used as an alternative equality test for floats

### func ValueEq¶

`func ValueEq(a, b Value) bool`

ValueEq is the equality function for values

### func WalkGraph¶

`func WalkGraph(start *Node) <-chan *Node`

WalkGraph walks a graph. It returns a channel of *Nodes, so be sure to consume the channel or there may be a deadlock
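A sketch of consuming the channel fully with range, which avoids the deadlock mentioned above:

```g := NewGraph()
x := NewScalar(g, Float64, WithName("x"))
y := NewScalar(g, Float64, WithName("y"))
z := Must(Add(x, y))

// ranging over the channel consumes it fully
for n := range WalkGraph(z) {
	fmt.Println(n.Name())
}
```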

### func WithGraphName¶

`func WithGraphName(name string) graphconopt`

WithGraphName is a ExprGraph construction option that provides a name.

### type ADOp¶

```type ADOp interface {
	Op

	DoDiff(ctx ExecutionContext, inputs Nodes, output *Node) error
}```

An ADOp is an Op that supports automatic differentiation.

### type AdaGradSolver¶

```type AdaGradSolver struct {
	// contains filtered or unexported fields
}```

### func NewAdaGradSolver¶

`func NewAdaGradSolver(opts ...SolverOpt) *AdaGradSolver`

### func (*AdaGradSolver) Step¶

`func (s *AdaGradSolver) Step(model []ValueGrad) (err error)`

Step steps through each node in the model and applies the Adaptive Gradient gradient descent algorithm on the value.

This function will error out if the nodes do not have an associated Grad value.

### type AdamSolver¶

```type AdamSolver struct {
	// contains filtered or unexported fields
}```

AdamSolver is the Adaptive Moment Estimation solver (basically RMSProp on steroids). Paper: http://arxiv.org/abs/1412.6980

We overload the purpose of the existing *dualValue data structure. Instead of just holding a value and its derivative, the cache's *dualValues hold the means of the gradients (in .Value) and the variances of the gradients (in .d).

### func NewAdamSolver¶

`func NewAdamSolver(opts ...SolverOpt) *AdamSolver`

NewAdamSolver creates an Adam solver with these default values:

```eta (learn rate)       : 0.001
eps (smoothing factor) : 1e-8
beta1                  : 0.9
beta2                  : 0.999
batch                  : 1
```
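A minimal sketch of a single training step with the Adam solver; the graph, learn rate, batch size and clip value here are illustrative:

```g := NewGraph()
w := NewScalar(g, Float64, WithName("w"), WithValue(0.5))
cost := Must(Square(w))
if _, err := Grad(cost, w); err != nil {
	log.Fatal(err)
}

vm := NewTapeMachine(g, BindDualValues(w))
defer vm.Close()
if err := vm.RunAll(); err != nil {
	log.Fatal(err)
}

solver := NewAdamSolver(WithLearnRate(0.01), WithBatchSize(32), WithClip(5))
if err := solver.Step([]ValueGrad{w}); err != nil {
	log.Fatal(err)
}
fmt.Println(w.Value()) // w has taken one Adam step away from 0.5
```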

### func (*AdamSolver) Step¶

`func (s *AdamSolver) Step(model []ValueGrad) (err error)`

Step steps through each node in the model and applies the Adaptive Moment Estimation gradient descent algorithm on the value.

This function will error out if the nodes do not have an associated Grad value.

### type Arena¶

```type Arena interface {
	Get(dev Device, size int64) (tensor.Memory, error)       // Get returns a NoOpError when it cannot get a memory. Please allocate
	GetFromValue(dev Device, v Value) (tensor.Memory, error) // Gets a memory and copies the values into the memory and returns it.
	Put(dev Device, mem tensor.Memory, size int64)           // puts the memory back into the arena
	PutValue(dev Device, v Value)                            // puts the memory back into the arena

	// Transfers memory from device to device
	Transfer(toDev, fromDev Device, v Value, synchronous bool) (retVal Value, err error)
}```

Arena is a representation of a pool of tensor.Memory

### type AutoDiffError¶

`type AutoDiffError struct{}`

AutoDiffError is an error which should be passed if the function is not differentiable. This is useful for Op implementations

### func (AutoDiffError) Error¶

`func (err AutoDiffError) Error() string`

### type B¶

`type B bool`

B represents a bool value.

### func (*B) Data¶

`func (v *B) Data() interface{}`

Data returns the original representation of the Value

### func (*B) Dtype¶

`func (v *B) Dtype() tensor.Dtype`

Dtype returns the Dtype of the value

### func (*B) Format¶

`func (v *B) Format(s fmt.State, c rune)`

Format implements fmt.Formatter

### func (*B) MemSize¶

`func (v *B) MemSize() uintptr`

MemSize satisfies the tensor.Memory interface

### func (*B) Pointer¶

`func (v *B) Pointer() unsafe.Pointer`

Pointer returns the pointer as an unsafe.Pointer. Satisfies the tensor.Memory interface

### func (*B) Shape¶

`func (v *B) Shape() tensor.Shape`

Shape returns a scalar shape for all scalar values

### func (*B) Size¶

`func (v *B) Size() int`

Size returns 0 for all scalar Values

### func (*B) Uintptr¶

`func (v *B) Uintptr() uintptr`

Uintptr satisfies the tensor.Memory interface

### type BLAS¶

```type BLAS interface {
blas.Float32
blas.Float64
blas.Complex64
blas.Complex128
}```

BLAS represents all the possible implementations of BLAS. The default is Gonum's Native

### func WhichBLAS¶

`func WhichBLAS() BLAS`

WhichBLAS returns the BLAS that gorgonia uses.

### type BarzilaiBorweinSolver¶

```type BarzilaiBorweinSolver struct {
// contains filtered or unexported fields
}```

BarzilaiBorweinSolver (Barzilai-Borwein) performs gradient descent in the steepest descent direction. It solves 0 = F(x) by:

```xᵢ₊₁ = xᵢ - eta * Grad(F)(xᵢ)
```

Where the learn rate eta is calculated by the Barzilai-Borwein method:

```eta(xᵢ) = <(xᵢ - xᵢ₋₁), (Grad(F)(xᵢ) - Grad(F)(xᵢ₋₁))> / ∥Grad(F)(xᵢ) - Grad(F)(xᵢ₋₁)∥²
```

The input learn rate is used for the first iteration.

TODO: Check out stochastic implementations, e.g. "Barzilai-Borwein Step Size for Stochastic Gradient Descent" https://arxiv.org/abs/1605.04131

### func NewBarzilaiBorweinSolver¶

`func NewBarzilaiBorweinSolver(opts ...SolverOpt) *BarzilaiBorweinSolver`

NewBarzilaiBorweinSolver creates a new Barzilai-Borwein solver with some default values: the learn rate is set to 0.001 and the solver does not use clipping.

### func (*BarzilaiBorweinSolver) Step¶

`func (s *BarzilaiBorweinSolver) Step(model []ValueGrad) (err error)`

Step steps through each node in the model and applies the Barzilai-Borwein gradient descent algorithm on the value.

This function will error out if the nodes do not have an associated Grad value.

### type BatchNormOp¶

```type BatchNormOp struct {
// contains filtered or unexported fields
}```

BatchNormOp is a batch normalization process as described by Ioffe and Szegedy (2015) - http://arxiv.org/abs/1502.03167

Normalization is done as:

```γ(x - μ) / σ + β
```

γ is the scaling factor and β is the offset factor. These are created by BatchNorm()

### func (*BatchNormOp) Arity¶

`func (op *BatchNormOp) Arity() int`

Arity returns 1

### func (*BatchNormOp) CallsExtern¶

`func (op *BatchNormOp) CallsExtern() bool`

CallsExtern is false

### func (*BatchNormOp) DiffWRT¶

`func (op *BatchNormOp) DiffWRT(inputs int) []bool`

DiffWRT ...

### func (*BatchNormOp) Do¶

`func (op *BatchNormOp) Do(values ...Value) (retVal Value, err error)`

Do performs the batchnorm computation on the values

### func (*BatchNormOp) DoDiff¶

`func (op *BatchNormOp) DoDiff(ctx ExecutionContext, inputs Nodes, output *Node) error`

DoDiff does the gradient computation

### func (*BatchNormOp) Hashcode¶

`func (op *BatchNormOp) Hashcode() uint32`

Hashcode ...

### func (*BatchNormOp) InferShape¶

`func (op *BatchNormOp) InferShape(ns ...DimSizer) (tensor.Shape, error)`

InferShape from the input values

### func (*BatchNormOp) OverwritesInput¶

`func (op *BatchNormOp) OverwritesInput() int`

OverwritesInput is -1 (operator doesn't overwrite any input value)

### func (*BatchNormOp) Reset¶

`func (op *BatchNormOp) Reset() error`

Reset the operator by zeroing the internal scratch spaces

### func (*BatchNormOp) ReturnsPtr¶

`func (op *BatchNormOp) ReturnsPtr() bool`

ReturnsPtr is true

### func (*BatchNormOp) SetTesting¶

`func (op *BatchNormOp) SetTesting()`

SetTesting configures the op for testing mode

### func (*BatchNormOp) SetTraining¶

`func (op *BatchNormOp) SetTraining()`

SetTraining configures the op for training mode. A call to this function implicitly calls the Reset() method

### func (*BatchNormOp) String¶

`func (op *BatchNormOp) String() string`

### func (*BatchNormOp) SymDiff¶

`func (op *BatchNormOp) SymDiff(inputs Nodes, output *Node, grad *Node) (retVal Nodes, err error)`

SymDiff ...

### func (*BatchNormOp) Type¶

`func (op *BatchNormOp) Type() hm.Type`

Type ...

### func (*BatchNormOp) UsePreallocDo¶

`func (op *BatchNormOp) UsePreallocDo(prealloc Value, inputs ...Value) (retVal Value, err error)`

UsePreallocDo ...

### func (*BatchNormOp) WriteHash¶

`func (op *BatchNormOp) WriteHash(h hash.Hash)`

WriteHash ...

### type Batched¶

```type Batched interface {
WorkAvailable() <-chan struct{}
DoWork()
}```

Batched interface describes any object that can process batch work

### type BatchedBLAS¶

```type BatchedBLAS interface {
Batched
BLAS
}```

BatchedBLAS interface describes any object that can process BLAS work in batch

### type BatchedDevice¶

```type BatchedDevice interface {
Batched
Retval() interface{}
Errors() error
}```

BatchedDevice is the superset of BatchedBLAS and the batched CUDA workflow.

### type BinaryOp¶

```type BinaryOp interface {
Op

IsBinary() bool
}```

A BinaryOp is an Op that takes only two inputs

### type BroadcastPattern¶

`type BroadcastPattern byte`

BroadcastPattern is actually a bit array. It's split into 2 nibbles - the left nibble represents the left operand, the right nibble represents the right operand:

```xxxx|xxxx
```

The least significant bit of each nibble is elem 0. Concrete examples:

```00000010 (0x02) = broadcast axis 1 of the right operand
00000001 (0x01) = broadcast axis 0 of the right operand
00000101 (0x05) = broadcast axis 0 AND axis 2 of the right operand
00010000 (0x10) = broadcast axis 0 of the left operand
00110000 (0x30) = broadcast axis 0 and axis 1 of the left operand
```

You get the drill.

Do note that the current limitation of the BroadcastPattern allows only up to 4 dimensions per operand.

### func NewBroadcastPattern¶

`func NewBroadcastPattern(leftAxes, rightAxes []byte) BroadcastPattern`

NewBroadcastPattern is a helper function to create broadcast patterns
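A sketch tying the bit layout above to this helper: broadcasting axis 0 of the right operand so a (1, 3) matrix can be added to a (2, 3) matrix (the shapes are illustrative):

```g := NewGraph()
a := NewMatrix(g, Float64, WithShape(2, 3), WithInit(RangedFrom(0)), WithName("a"))
b := NewMatrix(g, Float64, WithShape(1, 3), WithInit(ValuesOf(1.0)), WithName("b"))

// 0b00000001: broadcast axis 0 of the right operand, per the bit layout above
pat := NewBroadcastPattern(nil, []byte{0})
a2, b2, err := Broadcast(a, b, pat)
if err != nil {
	log.Fatal(err)
}
sum := Must(Add(a2, b2)) // both operands now behave as (2, 3)
_ = sum
```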

### type CLDoer¶

```type CLDoer interface {
CLDo(inputs ...Value) (Value, error)
}```

CLDoer uses OpenCL to perform the Op. As of now, there are NO Ops that support OpenCL

### type CUDAADOp¶

```type CUDAADOp interface {
	CUDADoDiff(extern External, dev Device, inputs Nodes, output *Node) error
}```

A CUDAADOp operation has a specific method to run with CUDA

### type CUDADoer¶

```type CUDADoer interface {
	CUDADo(extern External, dev Device, prealloc Value, inputs ...Value) (retVal Value, err error)
}```

CUDADoer uses CUDA to perform the Op.

### type CloneErrorer¶

```type CloneErrorer interface {
Clone() (interface{}, error)
}```

CloneErrorer represents any type that can clone itself and return an error if necessary

### type Cloner¶

```type Cloner interface {
Clone() interface{}
}```

Cloner represents any type that can clone itself.

### type CopierFrom¶

```type CopierFrom interface {
CopyFrom(src interface{}) error
}```

CopierFrom represents any type that can copy data from the source provided.

### type CopierTo¶

```type CopierTo interface {
CopyTo(dest interface{}) error
}```

CopierTo represents any type that can copy data to the destination.

### type Device¶

`type Device int`

Device represents the device where the code will be executed on. In this build, all code will run on the CPU

```const (
// CPU the only device the graph will be executed on
CPU Device = 0
)```

### func (Device) Alloc¶

`func (d Device) Alloc(extern External, size int64) (tensor.Memory, error)`

Alloc allocates memory on the device. This is currently a NO-OP in this build

### func (Device) Free¶

`func (d Device) Free(extern External, mem tensor.Memory, sie uint) error`

Free frees the memory on the device. This is currently a NO-OP in this build

### func (Device) IsGPU¶

`func (d Device) IsGPU() bool`

IsGPU will always return false in this build

### func (Device) String¶

`func (d Device) String() string`

String implements fmt.Stringer and runtime.Stringer

### type DimSizer¶

```type DimSizer interface {
DimSize(int) (int, error)
}```

DimSizer is any type (typically a tensor.Shape) that allows querying for a dimension size given an input dimension.

### func ShapesToDimSizers¶

`func ShapesToDimSizers(shapes []tensor.Shape) []DimSizer`

ShapesToDimSizers is a convenience function to convert a slice of tensor.Shape to a slice of DimSizer

### type Dtyper¶

```type Dtyper interface {
Dtype() tensor.Dtype
}```

Dtyper represents any type (typically a Value) that knows its own Dtype

### type Errer¶

```type Errer interface {
Err() error
}```

Errer is an interface that can return an error.

### type ExecutionContext¶

```type ExecutionContext struct {
External
Device
}```

ExecutionContext informs how an op should be executed

### type ExprGraph¶

```type ExprGraph struct {
// contains filtered or unexported fields
}```

ExprGraph is a data structure for a directed acyclic graph (of expressions). This structure is the main entry point for Gorgonia.

### func NewGraph¶

`func NewGraph(opts ...graphconopt) *ExprGraph`

NewGraph creates a new graph. Duh

### func (*ExprGraph) AddNode¶

`func (g *ExprGraph) AddNode(n *Node) (retVal *Node)`

AddNode adds n to the graph. It panics if the added node ID matches an existing node ID.

### func (*ExprGraph) AllNodes¶

`func (g *ExprGraph) AllNodes() Nodes`

AllNodes is like Nodes, but returns Nodes instead of []graph.Node. Nodes() has been reserved for the graph.Directed interface, so this one is named AllNodes instead

### func (*ExprGraph) ByName¶

`func (g *ExprGraph) ByName(name string) (retVal Nodes)`

ByName returns nodes that have the name provided. Bear in mind that the name that is compared to is the internal name, not the result of calling node.Name(). The reason for doing this is for ease of finding only names that are user-supplied, instead of autogenerated names

### func (*ExprGraph) Clone¶

`func (g *ExprGraph) Clone() interface{}`

Clone clones the graph. All nodes get cloned, and their values are cloned as well.

### func (*ExprGraph) Constant¶

`func (g *ExprGraph) Constant(v Value) *Node`

Constant returns a constant that may be found in the graph. If no constant is found, a new one is created instead

### func (*ExprGraph) Edge¶

`func (g *ExprGraph) Edge(u, v int64) graph.Edge`

Edge returns the edge from u to v if such an edge exists and nil otherwise. The node v must be directly reachable from u as defined by the From method.

### func (*ExprGraph) Edges¶

`func (g *ExprGraph) Edges() graph.Edges`

Edges returns all the edges in the graph.

### func (*ExprGraph) ExactSubgraphRoots¶

`func (g *ExprGraph) ExactSubgraphRoots(ns ...*Node) *ExprGraph`

ExactSubgraphRoots creates a subgraph from the roots provided. The difference between SubgraphRoots and ExactSubgraphRoots is that ExactSubGraphRoots will not attempt to discover if any nodes are missing.

Given a function like the following:

```z = x + y
set(x, -x.Grad) // setting the value of x to the negative of the gradient
```

When SubgraphRoots is used on z, the `-x.Grad` will be included. When using ExactSubgraphRoots, only `x` and `y` are included in the subgraph
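A minimal sketch of the two calls side by side (the names are illustrative):

```g := NewGraph()
x := NewScalar(g, Float64, WithName("x"))
y := NewScalar(g, Float64, WithName("y"))
z := Must(Add(x, y))

sub := g.SubgraphRoots(z)        // walks from z and pulls in any nodes it discovers are needed
exact := g.ExactSubgraphRoots(z) // contains only z and its ancestors x and y
_, _ = sub, exact
```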

### func (*ExprGraph) From¶

`func (g *ExprGraph) From(nodeid int64) graph.Nodes`

From returns all nodes in g that can be reached directly from n.

### func (*ExprGraph) Has¶

`func (g *ExprGraph) Has(nodeid int64) bool`

Has returns whether the node exists within the graph.

### func (*ExprGraph) HasEdgeBetween¶

`func (g *ExprGraph) HasEdgeBetween(x, y int64) bool`

HasEdgeBetween returns whether an edge exists between nodes x and y without considering direction.

### func (*ExprGraph) HasEdgeFromTo¶

`func (g *ExprGraph) HasEdgeFromTo(u, v int64) bool`

HasEdgeFromTo returns whether an edge exists in the graph from u to v.

### func (*ExprGraph) Inputs¶

`func (g *ExprGraph) Inputs() (retVal Nodes)`

Inputs returns a list of nodes which are inputs (that is to say, the user is required to set a value in them)

### func (*ExprGraph) Node¶

`func (g *ExprGraph) Node(id int64) graph.Node`

Node returns the node in the graph with the given ID.

### func (*ExprGraph) Nodes¶

`func (g *ExprGraph) Nodes() graph.Nodes`

Nodes returns all the nodes in the graph.

### func (*ExprGraph) RemoveNode¶

`func (g *ExprGraph) RemoveNode(node graph.Node)`

RemoveNode removes n from the graph, as well as any edges attached to it. If the node is not in the graph it is a no-op.

### func (*ExprGraph) Roots¶

`func (g *ExprGraph) Roots() (retVal Nodes)`

Roots returns a list of nodes that are not children of any other nodes

### func (*ExprGraph) SetEdge¶

`func (g *ExprGraph) SetEdge(e graph.Edge)`

SetEdge adds e, an edge from one node to another. If the nodes do not exist, they are added. It will panic if the IDs of the e.From and e.To are equal.

### func (*ExprGraph) String¶

`func (g *ExprGraph) String() string`

### func (*ExprGraph) Subgraph¶

`func (g *ExprGraph) Subgraph(ns ...*Node) *ExprGraph`

Subgraph subsets a graph. This function has overloaded meanings: if only one node is passed in, it assumes that the one node is the root; otherwise, it treats ns as the subset of nodes to be included in the subgraph

### func (*ExprGraph) SubgraphRoots¶

`func (g *ExprGraph) SubgraphRoots(ns ...*Node) *ExprGraph`

SubgraphRoots creates a subgraph, assuming the provided nodes are roots to the new subgraph.

### func (*ExprGraph) To¶

`func (g *ExprGraph) To(nid int64) graph.Nodes`

To returns all nodes in g that can reach directly to n.

### func (*ExprGraph) ToDot¶

`func (g *ExprGraph) ToDot() string`

ToDot generates the graph in graphviz format. Its purpose is to generate a representation of the entire graph, which may have multiple trees with different roots. TODO: This is getting unwieldy. Perhaps refactor out into a ToDot(...Opt)?

### func (*ExprGraph) UnbindAll¶

`func (g *ExprGraph) UnbindAll()`

UnbindAll unbinds all the values from the nodes

### func (*ExprGraph) UnbindAllNonInputs¶

`func (g *ExprGraph) UnbindAllNonInputs()`

UnbindAllNonInputs unbinds all the values from nodes that aren't input nodes

### type ExternMetadata¶

```type ExternMetadata struct {
	tensor.Engine
	// contains filtered or unexported fields
}```

ExternMetadata is used to hold metadata about external execution devices. In this build, it's an empty struct because the default build doesn't use external devices to execute the graph on

### func (*ExternMetadata) Cleanup¶

`func (m *ExternMetadata) Cleanup()`

Cleanup cleans up the ancillary allocations made during the calling of batched external device function.

The reason for this method is due to the fact that there is currently no way to free memory while the context is still running without causing some weirdness to the CUDA calls.

This is a No-op in this build

### func (*ExternMetadata) DoWork¶

`func (m *ExternMetadata) DoWork() error`

DoWork flushes any batched cgo calls. In this build it only flushes the batched BLAS calls.

### func (*ExternMetadata) Get¶

`func (m *ExternMetadata) Get(dev Device, size int64) (tensor.Memory, error)`

Get allocates a memory of the size. In this build it returns a NoOpError.

### func (*ExternMetadata) GetFromValue¶

`func (m *ExternMetadata) GetFromValue(dev Device, v Value) (tensor.Memory, error)`

GetFromValue allocates a memory of the size of v. In this build it returns a NoOpError, and v itself

### func (ExternMetadata) HasFunc¶

`func (m ExternMetadata) HasFunc(name string) bool`

HasFunc will always return false in this build

### func (*ExternMetadata) Put¶

`func (m *ExternMetadata) Put(dev Device, mem tensor.Memory, size int64)`

Put puts a previously allocated memory slab of the provided size back into the pool. Currently this is a No-op in this build.

### func (*ExternMetadata) PutValue¶

`func (m *ExternMetadata) PutValue(dev Device, v Value)`

PutValue puts a previously allocated value into the pool. In this build, it is a noop.

### func (*ExternMetadata) Reset¶

`func (m *ExternMetadata) Reset()`

Reset is a noop function for compatibility with the Cuda build

### func (*ExternMetadata) Signal¶

`func (m *ExternMetadata) Signal()`

Signal sends a signal down the workavailable channel, telling the VM to call the DoWork method. Signal is a synchronous method

### func (*ExternMetadata) Sync¶

`func (m *ExternMetadata) Sync() chan struct{}`

Sync returns the sync channel

### func (*ExternMetadata) Transfer¶

`func (m *ExternMetadata) Transfer(toDev, fromDev Device, v Value, synchronous bool) (retVal Value, err error)`

Transfer transfers a value from device to device. In this build, it's a noop, returning the input value, and a nil error

### func (*ExternMetadata) WorkAvailable¶

`func (m *ExternMetadata) WorkAvailable() <-chan bool`

WorkAvailable returns a channel of empty struct, which is used to signal to the VM when there is work available. The VM will then call the DoWork method.

### type External¶

```type External interface {
	Arena
	Signal() // signals the machine to do work
	Sync() chan struct{}
}```

External is a representation of an external device (cuda/cgo/openCL), conceptually modelled as a machine.

### type ExternalOp¶

```type ExternalOp struct {
	Op
	ExecutionContext

	Prealloc  Value
	Incr      Value // is this an Incr? IncrDoers have higher precedence over PreallocDo
	UseUnsafe bool  // Is this an unsafe op? Lowest of all "special" Dos
}```

ExternalOp is an op that contains an external context. This allows for ops to be run without needing a VM

### func NewAddOp¶

`func NewAddOp(a, b *Node, ctx ExecutionContext) *ExternalOp`

NewAddOp creates a new *ExternalOp that wraps an add op

### func NewExternalOp¶

`func NewExternalOp(op Op, ctx ExecutionContext, prealloc Value) *ExternalOp`

NewExternalOp creates a new *ExternalOp.

### func NewHadamardProdOp¶

`func NewHadamardProdOp(a, b *Node, ctx ExecutionContext) *ExternalOp`

NewHadamardProdOp creates a new *ExternalOp that wraps a mul op

### func NewSubOp¶

`func NewSubOp(a, b *Node, ctx ExecutionContext) *ExternalOp`

NewSubOp creates a new *ExternalOp that wraps a sub op

### func (*ExternalOp) DetermineDevice¶

`func (op *ExternalOp) DetermineDevice(inputs Nodes, output *Node) error`

DetermineDevice ...

### func (*ExternalOp) Do¶

`func (op *ExternalOp) Do(vals ...Value) (Value, error)`

Do performs the op.

### func (*ExternalOp) String¶

`func (op *ExternalOp) String() string`

### type F32¶

`type F32 float32`

F32 represents a float32 value.

### func (*F32) Data¶

`func (v *F32) Data() interface{}`

Data returns the original representation of the Value

### func (*F32) Dtype¶

`func (v *F32) Dtype() tensor.Dtype`

Dtype returns the Dtype of the value

### func (*F32) Format¶

`func (v *F32) Format(s fmt.State, c rune)`

Format implements fmt.Formatter

### func (*F32) MemSize¶

`func (v *F32) MemSize() uintptr`

MemSize satisfies the tensor.Memory interface

### func (*F32) Pointer¶

`func (v *F32) Pointer() unsafe.Pointer`

Pointer returns the pointer as an unsafe.Pointer. Satisfies the tensor.Memory interface

### func (*F32) Shape¶

`func (v *F32) Shape() tensor.Shape`

Shape returns a scalar shape for all scalar values

### func (*F32) Size¶

`func (v *F32) Size() int`

Size returns 0 for all scalar Values

### func (*F32) Uintptr¶

`func (v *F32) Uintptr() uintptr`

Uintptr satisfies the tensor.Memory interface

### type F64¶

`type F64 float64`

F64 represents a float64 value.

### func (*F64) Data¶

`func (v *F64) Data() interface{}`

Data returns the original representation of the Value

### func (*F64) Dtype¶

`func (v *F64) Dtype() tensor.Dtype`

Dtype returns the Dtype of the value

### func (*F64) Format¶

`func (v *F64) Format(s fmt.State, c rune)`

Format implements fmt.Formatter

### func (*F64) MemSize¶

`func (v *F64) MemSize() uintptr`

MemSize satisfies the tensor.Memory interface

### func (*F64) Pointer¶

`func (v *F64) Pointer() unsafe.Pointer`

Pointer returns the pointer as an unsafe.Pointer. Satisfies the tensor.Memory interface

### func (*F64) Shape¶

`func (v *F64) Shape() tensor.Shape`

Shape returns a scalar shape for all scalar values

### func (*F64) Size¶

`func (v *F64) Size() int`

Size returns 0 for all scalar Values

### func (*F64) Uintptr¶

`func (v *F64) Uintptr() uintptr`

Uintptr satisfies the tensor.Memory interface

### type I¶

`type I int`

I represents an int value.

### func (*I) Data¶

`func (v *I) Data() interface{}`

Data returns the original representation of the Value

### func (*I) Dtype¶

`func (v *I) Dtype() tensor.Dtype`

Dtype returns the Dtype of the value

### func (*I) Format¶

`func (v *I) Format(s fmt.State, c rune)`

Format implements fmt.Formatter

### func (*I) MemSize¶

`func (v *I) MemSize() uintptr`

MemSize satisfies the tensor.Memory interface

### func (*I) Pointer¶

`func (v *I) Pointer() unsafe.Pointer`

Pointer returns the pointer as an unsafe.Pointer. Satisfies the tensor.Memory interface

### func (*I) Shape¶

`func (v *I) Shape() tensor.Shape`

Shape returns a scalar shape for all scalar values

### func (*I) Size¶

`func (v *I) Size() int`

Size returns 0 for all scalar Values

### func (*I) Uintptr¶

`func (v *I) Uintptr() uintptr`

Uintptr satisfies the tensor.Memory interface

### type I32¶

`type I32 int32`

I32 represents an int32 value.

### func (*I32) Data¶

`func (v *I32) Data() interface{}`

Data returns the original representation of the Value

### func (*I32) Dtype¶

`func (v *I32) Dtype() tensor.Dtype`

Dtype returns the Dtype of the value

### func (*I32) Format¶

`func (v *I32) Format(s fmt.State, c rune)`

Format implements fmt.Formatter

### func (*I32) MemSize¶

`func (v *I32) MemSize() uintptr`

MemSize satisfies the tensor.Memory interface

### func (*I32) Pointer¶

`func (v *I32) Pointer() unsafe.Pointer`

Pointer returns the pointer as an unsafe.Pointer. Satisfies the tensor.Memory interface

### func (*I32) Shape¶

`func (v *I32) Shape() tensor.Shape`

Shape returns a scalar shape for all scalar values

### func (*I32) Size¶

`func (v *I32) Size() int`

Size returns 0 for all scalar Values

### func (*I32) Uintptr¶

`func (v *I32) Uintptr() uintptr`

Uintptr satisfies the tensor.Memory interface

### type I64¶

`type I64 int64`

I64 represents an int64 value.

### func (*I64) Data¶

`func (v *I64) Data() interface{}`

Data returns the original representation of the Value

### func (*I64) Dtype¶

`func (v *I64) Dtype() tensor.Dtype`

Dtype returns the Dtype of the value

### func (*I64) Format¶

`func (v *I64) Format(s fmt.State, c rune)`

Format implements fmt.Formatter

### func (*I64) MemSize¶

`func (v *I64) MemSize() uintptr`

MemSize satisfies the tensor.Memory interface

### func (*I64) Pointer¶

`func (v *I64) Pointer() unsafe.Pointer`

Pointer returns the pointer as an unsafe.Pointer. Satisfies the tensor.Memory interface

### func (*I64) Shape¶

`func (v *I64) Shape() tensor.Shape`

Shape returns a scalar shape for all scalar values

### func (*I64) Size¶

`func (v *I64) Size() int`

Size returns 0 for all scalar Values

### func (*I64) Uintptr¶

`func (v *I64) Uintptr() uintptr`

Uintptr satisfies the tensor.Memory interface

### type IncrDoer¶

```type IncrDoer interface {
IncrDo(toIncr Value, inputs ...Value) error
}```

IncrDoer increments the toIncr with the result of doing

### type InitWFn¶

`type InitWFn func(dt tensor.Dtype, s ...int) interface{}`

InitWFn is a type of helper function that helps initialize weight vectors/matrices. It generates the backing required for the tensors.

It's typically used in closures
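A sketch of a hand-rolled InitWFn, here a hypothetical constant-fill initializer that only handles Float64 (tensor is gorgonia.org/tensor):

```g := NewGraph()

// constFill is a hypothetical initializer, shown only to illustrate the InitWFn shape
constFill := func(val float64) InitWFn {
	return func(dt tensor.Dtype, s ...int) interface{} {
		if dt != tensor.Float64 {
			panic("constFill: this sketch only handles Float64")
		}
		size := 1
		for _, d := range s {
			size *= d
		}
		backing := make([]float64, size)
		for i := range backing {
			backing[i] = val
		}
		return backing
	}
}

w := NewMatrix(g, Float64, WithShape(2, 2), WithName("w"), WithInit(constFill(3.14)))
_ = w
```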

### func Gaussian¶

`func Gaussian(mean, stdev float64) InitWFn`

Gaussian creates an InitWFn with the specified parameters. Example Usage:

```w := NewMatrix(g, Float64, WithName("w"), WithShape(2,2), WithInit(Gaussian(0, 1)))
```

This will create a backing slice of []float64, with the length of 4, and its values are drawn from a gaussian distro

### func GlorotN¶

`func GlorotN(gain float64) InitWFn`

GlorotN creates an InitWFn that populates a Value with weights normally sampled using Glorot et al.'s algorithm

### func GlorotU¶

`func GlorotU(gain float64) InitWFn`

GlorotU creates an InitWFn that populates a Value with weights uniformly sampled using Glorot et al.'s algorithm

### func HeN¶

`func HeN(gain float64) InitWFn`

### func HeU¶

`func HeU(gain float64) InitWFn`

### func Ones¶

`func Ones() InitWFn`

Ones creates an InitWFn that populates a Value with ones. See Zeroes() for more explanation.

### func RangedFrom¶

`func RangedFrom(start int) InitWFn`

RangedFrom creates an InitWFn that populates a Value starting with the provided start, incrementing the number for each element in the value by 1

### func Uniform¶

`func Uniform(low, high float64) InitWFn`

Uniform creates an InitWFn with the specified parameters. Example Usage:

```w := NewMatrix(g, Float64, WithName("w"), WithShape(2,2), WithInit(Uniform(-1, 1)))
```

This will create a backing slice of []float64, with the length of 4, and its values are drawn from a uniform distro

### func ValuesOf¶

`func ValuesOf(val interface{}) InitWFn`

ValuesOf creates an InitWFn that populates a Value with val. This function will cause a panic if val's type is incompatible with the Value's type.

### func Zeroes¶

`func Zeroes() InitWFn`

Zeroes creates an InitWFn that populates a Value with... zeroes. I don't know what you expected.

### type Input¶

```type Input interface {
Node() *Node
Nodes() Nodes
}```

Input is something that can produce both a *Node and Nodes. Returning nil is OK.

### type Mker¶

```type Mker interface {
Mk(...Input) Input
}```

Mker is an interface of any Input that can make a new version of itself

### type Momentum¶

```type Momentum struct {
// contains filtered or unexported fields
}```

Momentum is the stochastic gradient descent optimizer with momentum item.

### func NewMomentum¶

`func NewMomentum(opts ...SolverOpt) *Momentum`

NewMomentum creates a new Momentum with sane-ish default values

### func (*Momentum) Step¶

`func (s *Momentum) Step(model []ValueGrad) (err error)`

Step steps through each node in the model and applies the Momentum stochastic gradient descent algorithm on the value.

This function will error out if the nodes do not have an associated Grad value.
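
Putting this together with Grad(), BindDualValues and a TapeMachine, a training loop might look like the following sketch (NodesToValueGrads adapts a Nodes slice to []ValueGrad; the tiny model here is purely illustrative):

```g := NewGraph()
w := NewMatrix(g, Float64, WithShape(2, 2), WithInit(GlorotN(1)), WithName("w"))
x := NewMatrix(g, Float64, WithShape(2, 2), WithInit(RangedFrom(0)), WithName("x"))
cost := Must(Mean(Must(Mul(w, x))))

// build the gradient nodes symbolically before compiling the graph
if _, err := Grad(cost, w); err != nil {
	log.Fatal(err)
}

solver := NewMomentum(WithLearnRate(0.01), WithMomentum(0.9))
m := NewTapeMachine(g, BindDualValues(w))
defer m.Close()

for i := 0; i < 10; i++ {
	if err := m.RunAll(); err != nil {
		log.Fatal(err)
	}
	// apply the momentum update to w using the gradients from this run
	if err := solver.Step(NodesToValueGrads(Nodes{w})); err != nil {
		log.Fatal(err)
	}
	m.Reset()
}
```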

### type Namer¶

```type Namer interface {
Name() string
}```

Namer is anything that has a name

### type NoOpError¶

```type NoOpError interface {
NoOp() bool
}```

NoOpError is an error returned when an operation does nothing.

### type NoRetOp¶

```type NoRetOp interface {
Op

ReturnsNothing() bool
}```

A NoRetOp is an Op that reads a value, but does not return any value. It's a representation of an impure function

### type Node¶

```type Node struct {
// contains filtered or unexported fields
}```

A Node is a node in the computation graph

### func Abs¶

`func Abs(a *Node) (*Node, error)`

Abs performs a pointwise abs.

### func Add¶

`func Add(a, b *Node) (*Node, error)`

Add performs pointwise addition.

### func ApplyOp¶

`func ApplyOp(op Op, children ...*Node) (retVal *Node, err error)`

ApplyOp is the generic function application - for when no specialization is required

### func ApplyOpWithName¶

`func ApplyOpWithName(op Op, name string, children ...*Node) (retVal *Node, err error)`

ApplyOpWithName applies the op, and then gives the node the given name

### func At¶

`func At(a *Node, coords ...int) (retVal *Node, err error)`

At is a symbolic operation for getting a value at the provided coordinates. If the input is a scalar, all the coordinates MUST be 0, or else an error will be returned.
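
A minimal sketch of At in use:

```g := NewGraph()
a := NewMatrix(g, Float64, WithShape(2, 2), WithInit(RangedFrom(0)), WithName("a"))
v, err := At(a, 1, 0) // the symbolic equivalent of a[1, 0]
if err != nil {
	log.Fatal(err)
}

m := NewTapeMachine(g)
defer m.Close()
if err := m.RunAll(); err != nil {
	log.Fatal(err)
}
fmt.Println(v.Value()) // 2
```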

### func BatchedMatMul¶

`func BatchedMatMul(a, b *Node, transes ...bool) (retVal *Node, err error)`

BatchedMatMul returns a node representing the batched mat mul operation.

A list of transpose options is allowed: the first bool indicates whether to transpose the last two axes of a, and the second whether to transpose those of b (as in the backprop example below, where b is transposed).

Example

Code:

```g := NewGraph()
a := NewTensor(g, Float64, 3, WithShape(2, 2, 3), WithInit(RangedFrom(1)), WithName("a"))
b := NewTensor(g, Float64, 3, WithShape(2, 3, 2), WithInit(RangedFrom(13)), WithName("b"))
c, err := BatchedMatMul(a, b)
if err != nil {
log.Fatal(err)
}
x := NewTensor(g, Float64, 4, WithShape(3, 2, 2, 3), WithInit(RangedFrom(1)), WithName("x"))
y := NewTensor(g, Float64, 4, WithShape(3, 2, 3, 2), WithInit(RangedFrom(37)), WithName("y"))
z, err := BatchedMatMul(x, y)
if err != nil {
log.Fatal(err)
}

m := NewTapeMachine(g)
if err := m.RunAll(); err != nil {
log.Fatal(err)
}

fmt.Printf("a: %v\n%v\n", a.Value().Shape(), a.Value().Data())
fmt.Printf("b: %v\n%v\n", b.Value().Shape(), b.Value().Data())
fmt.Printf("c: %v\n%v\n", c.Value().Shape(), c.Value().Data())
fmt.Printf("x: %v\n%v\n", x.Value().Shape(), x.Value().Data())
fmt.Printf("y: %v\n%v\n", y.Value().Shape(), y.Value().Data())
fmt.Printf("z: %v\n%v\n", z.Value().Shape(), z.Value().Data())
```
```a: (2, 2, 3)
[1 2 3 4 5 6 7 8 9 10 11 12]
b: (2, 3, 2)
[13 14 15 16 17 18 19 20 21 22 23 24]
c: (2, 2, 2)
[94 100 229 244 508 532 697 730]
x: (3, 2, 2, 3)
[1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36]
y: (3, 2, 3, 2)
[37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72]
z: (3, 2, 2, 2)
[238 244 589 604 1084 1108 1489 1522 2146 2188 2605 2656 3424 3484 3937 4006 4918 4996 5485 5572 6628 6724 7249 7354]
```
Example (WithBackprop)

Code:

```g := NewGraph()
a := NewTensor(g, Float64, 4, WithShape(2, 4, 3, 9), WithInit(RangedFrom(1)), WithName("a"))
b := NewTensor(g, Float64, 4, WithShape(2, 4, 3, 9), WithInit(RangedFrom(13)), WithName("b"))
c, err := BatchedMatMul(a, b, false, true)
if err != nil {
log.Fatal(err)
}
s, err := Sum(c)
if err != nil {
log.Fatal(err)
}
grads, err := Grad(s, a, b)
if err != nil {
log.Fatal(err)
}

m := NewTapeMachine(g)
if err := m.RunAll(); err != nil {
log.Fatal(err)
}

fmt.Printf("a: %v\n%v\n", a.Value().Shape(), a.Value().Data())
fmt.Printf("b: %v\n%v\n", b.Value().Shape(), b.Value().Data())
fmt.Printf("c: %v\n%v\n", c.Value().Shape(), c.Value().Data())
```
```a: (2, 4, 3, 9)
[1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216]
b: (2, 4, 3, 9)
[13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228]
c: (2, 4, 3, 3)
[825 1230 1635 2202 3336 4470 3579 5442 7305 12732 15324 17916 16296 19617 22938 19860 23910 27960 37761 42540 47319 43512 49020 54528 49263 55500 61737 75912 82878 89844 83850 91545 99240 91788 100212 108636 127185 136338 145491 137310 147192 157074 147435 158046 168657 191580 202920 214260 203892 215961 228030 216204 229002 241800 269097 282624 296151 283596 297852 312108 298095 313080 328065 359736 375450 391164 376422 392865 409308 393108 410280 427452]
grads[0]:(2, 4, 3, 9)
[66 69 72 75 78 81 84 87 90 66 69 72 75 78 81 84 87 90 66 69 72 75 78 81 84 87 90 147 150 153 156 159 162 165 168 171 147 150 153 156 159 162 165 168 171 147 150 153 156 159 162 165 168 171 228 231 234 237 240 243 246 249 252 228 231 234 237 240 243 246 249 252 228 231 234 237 240 243 246 249 252 309 312 315 318 321 324 327 330 333 309 312 315 318 321 324 327 330 333 309 312 315 318 321 324 327 330 333 390 393 396 399 402 405 408 411 414 390 393 396 399 402 405 408 411 414 390 393 396 399 402 405 408 411 414 471 474 477 480 483 486 489 492 495 471 474 477 480 483 486 489 492 495 471 474 477 480 483 486 489 492 495 552 555 558 561 564 567 570 573 576 552 555 558 561 564 567 570 573 576 552 555 558 561 564 567 570 573 576 633 636 639 642 645 648 651 654 657 633 636 639 642 645 648 651 654 657 633 636 639 642 645 648 651 654 657]
```

### func BinaryXent¶

`func BinaryXent(output, target *Node) (retVal *Node, err error)`

BinaryXent is a convenience function for doing binary crossentropy stuff. The formula is as below:

```-(y * log(prob) + (1-y) * log(1 - prob))
```

### func BinomialRandomNode¶

`func BinomialRandomNode(g *ExprGraph, dt tensor.Dtype, trials, prob float64, shape ...int) *Node`

BinomialRandomNode creates an input node that has a random op, so that every time the node is passed, random values will be plucked from a binomial distribution with the trials and probability provided. The type of the node depends on the shape passed in. To get a scalar value at run time, don't pass in any shapes

Whilst technically the number of trials of a binomial distribution should be a discrete value (you can't have half a trial), to keep with API uniformity, trials is passed in as a float64, but will be truncated to an int at runtime.

### func BroadcastAdd¶

`func BroadcastAdd(a, b *Node, leftPattern, rightPattern []byte) (*Node, error)`

BroadcastAdd performs addition. The operation is precomposed with a broadcast such that the shapes match before the operation commences.

Example

By default, Gorgonia operations do not perform broadcasting. To do broadcasting, you would need to manually specify the operation

Code:

```g := NewGraph()
a := NewVector(g, tensor.Float64, WithShape(2), WithName("a"), WithValue(tensor.New(tensor.WithBacking([]float64{100, 100}))))
b := NewMatrix(g, tensor.Float64, WithShape(2, 2), WithName("b"), WithValue(tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]float64{1, 1, 2, 2}))))

fmt.Printf("a = %v\nb =\n%v\n", a.Value(), b.Value())

_, err := Add(a, b)
fmt.Printf("a + b yields an error: %v\n\n", err)

// Note here the broadcasting of a is on the first axis, not the zeroth axis. Simply put, assume that it's already a (2,1) matrix.
ab, err := BroadcastAdd(a, b, []byte{1}, nil)
if err != nil {
fmt.Printf("uh oh, something went wrong: %v\n", err)
}

ba, err := BroadcastAdd(b, a, nil, []byte{1})
if err != nil {
fmt.Printf("uh oh, something went wrong: %v\n", err)
}

// Now, let's run the program
machine := NewTapeMachine(g)
defer machine.Close()
if err = machine.RunAll(); err != nil {
log.Fatal(err)
}

fmt.Printf("a +⃗ b =\n%v\n", ab.Value())
fmt.Printf("b +⃗ a =\n%v", ba.Value())
```
```a = [100  100]
b =
⎡1  1⎤
⎣2  2⎦

a + b yields an error: Failed to infer shape. Op: + false: Shape mismatch: (2) and (2, 2)

a +⃗ b =
⎡101  101⎤
⎣102  102⎦

b +⃗ a =
⎡101  101⎤
⎣102  102⎦
```

### func BroadcastEq¶

`func BroadcastEq(a, b *Node, retSame bool, leftPattern, rightPattern []byte) (*Node, error)`

BroadcastEq performs an elementwise equality check. The operation is precomposed with a broadcast such that the shapes match before the operation commences.

### func BroadcastGt¶

`func BroadcastGt(a, b *Node, retSame bool, leftPattern, rightPattern []byte) (*Node, error)`

BroadcastGt performs an elementwise greater-than check. The operation is precomposed with a broadcast such that the shapes match before the operation commences.

### func BroadcastGte¶

`func BroadcastGte(a, b *Node, retSame bool, leftPattern, rightPattern []byte) (*Node, error)`

BroadcastGte performs an elementwise greater-than-or-equal check. The operation is precomposed with a broadcast such that the shapes match before the operation commences.

Example (CreatingTriangleMatrices)

Code:

```// Broadcasting is useful. We can create triangular dense matrices simply

g := NewGraph()
a := NewMatrix(g, tensor.Float64, WithShape(3, 1), WithName("a"), WithInit(RangedFrom(0)))
b := NewMatrix(g, tensor.Float64, WithShape(1, 4), WithName("b"), WithInit(RangedFrom(0)))
tl, err := BroadcastGte(a, b, true, []byte{1}, []byte{0})
if err != nil {
log.Fatalf("uh oh. Something went wrong %v", err)
}

tu, err := BroadcastLt(a, b, true, []byte{1}, []byte{0})
if err != nil {
log.Fatalf("uh oh. Something went wrong %v", err)
}

m := NewTapeMachine(g)

// PEDAGOGICAL:
// Uncomment the following code if you want to see what happens behind the scenes
// m.Close()
// logger := log.New(os.Stderr, "",0)
// m = NewTapeMachine(g, WithLogger(logger), WithWatchlist())

defer m.Close()
if err = m.RunAll(); err != nil {
log.Fatal(err)
}

fmt.Printf("triangular, lower:\n%v\n", tl.Value())
fmt.Printf("triangular, upper:\n%v\n", tu.Value())
```
```triangular, lower:
⎡1  0  0  0⎤
⎢1  1  0  0⎥
⎣1  1  1  0⎦

triangular, upper:
⎡0  1  1  1⎤
⎢0  0  1  1⎥
⎣0  0  0  1⎦
```

### func BroadcastHadamardDiv¶

`func BroadcastHadamardDiv(a, b *Node, leftPattern, rightPattern []byte) (*Node, error)`

BroadcastHadamardDiv performs an elementwise (Hadamard) division. The operation is precomposed with a broadcast such that the shapes match before the operation commences.

### func BroadcastHadamardProd¶

`func BroadcastHadamardProd(a, b *Node, leftPattern, rightPattern []byte) (*Node, error)`

BroadcastHadamardProd performs an elementwise (Hadamard) product. The operation is precomposed with a broadcast such that the shapes match before the operation commences.

### func BroadcastLt¶

`func BroadcastLt(a, b *Node, retSame bool, leftPattern, rightPattern []byte) (*Node, error)`

BroadcastLt performs an elementwise less-than check. The operation is precomposed with a broadcast such that the shapes match before the operation commences.

### func BroadcastLte¶

`func BroadcastLte(a, b *Node, retSame bool, leftPattern, rightPattern []byte) (*Node, error)`

BroadcastLte performs an elementwise less-than-or-equal check. The operation is precomposed with a broadcast such that the shapes match before the operation commences.

### func BroadcastNe¶

`func BroadcastNe(a, b *Node, retSame bool, leftPattern, rightPattern []byte) (*Node, error)`

BroadcastNe performs an elementwise not-equal check. The operation is precomposed with a broadcast such that the shapes match before the operation commences.

### func BroadcastPow¶

`func BroadcastPow(a, b *Node, leftPattern, rightPattern []byte) (*Node, error)`

BroadcastPow performs an elementwise pow. The operation is precomposed with a broadcast such that the shapes match before the operation commences.

### func BroadcastSub¶

`func BroadcastSub(a, b *Node, leftPattern, rightPattern []byte) (*Node, error)`

BroadcastSub performs subtraction. The operation is precomposed with a broadcast such that the shapes match before the operation commences.

### func Ceil¶

`func Ceil(a *Node) (*Node, error)`

Ceil performs a pointwise ceil.

### func Concat¶

`func Concat(axis int, ns ...*Node) (retVal *Node, err error)`

Concat performs a concatenate on the provided axis and inputs.

Example

Code:

```g := NewGraph()
x := NewTensor(g, Float64, 4, WithShape(2, 3, 4, 5), WithInit(RangedFrom(0)), WithName("x"))
y := NewTensor(g, Float64, 4, WithShape(2, 3, 4, 5), WithInit(RangedFrom(120)), WithName("y"))

z, err := Concat(2, x, y)
if err != nil {
log.Fatal(err)
}

m := NewTapeMachine(g)
if err := m.RunAll(); err != nil {
log.Fatal(err)
}
tmp := fmt.Sprintf("z %v\n%v", z.Value().Shape(), z.Value())
fmt.Println(strings.Replace(tmp, "\n\n", "\n", -1)) // this is because when Go runs example tests, it strips excess newlines
```
```z (2, 3, 8, 5)
⎡  0    1    2    3    4⎤
⎢  5    6    7    8    9⎥
⎢ 10   11   12   13   14⎥
⎢ 15   16   17   18   19⎥
⎢120  121  122  123  124⎥
⎢125  126  127  128  129⎥
⎢130  131  132  133  134⎥
⎣135  136  137  138  139⎦

⎡ 20   21   22   23   24⎤
⎢ 25   26   27   28   29⎥
⎢ 30   31   32   33   34⎥
⎢ 35   36   37   38   39⎥
⎢140  141  142  143  144⎥
⎢145  146  147  148  149⎥
⎢150  151  152  153  154⎥
⎣155  156  157  158  159⎦

⎡ 40   41   42   43   44⎤
⎢ 45   46   47   48   49⎥
⎢ 50   51   52   53   54⎥
⎢ 55   56   57   58   59⎥
⎢160  161  162  163  164⎥
⎢165  166  167  168  169⎥
⎢170  171  172  173  174⎥
⎣175  176  177  178  179⎦

⎡ 60   61   62   63   64⎤
⎢ 65   66   67   68   69⎥
⎢ 70   71   72   73   74⎥
⎢ 75   76   77   78   79⎥
⎢180  181  182  183  184⎥
⎢185  186  187  188  189⎥
⎢190  191  192  193  194⎥
⎣195  196  197  198  199⎦

⎡ 80   81   82   83   84⎤
⎢ 85   86   87   88   89⎥
⎢ 90   91   92   93   94⎥
⎢ 95   96   97   98   99⎥
⎢200  201  202  203  204⎥
⎢205  206  207  208  209⎥
⎢210  211  212  213  214⎥
⎣215  216  217  218  219⎦

⎡100  101  102  103  104⎤
⎢105  106  107  108  109⎥
⎢110  111  112  113  114⎥
⎢115  116  117  118  119⎥
⎢220  221  222  223  224⎥
⎢225  226  227  228  229⎥
⎢230  231  232  233  234⎥
⎣235  236  237  238  239⎦
```

### func Conv1d¶

`func Conv1d(in, filter *Node, kernel, pad, stride, dilation int) (*Node, error)`

Conv1d is a 1D convolution. It relies on Conv2d

### func Conv2d¶

`func Conv2d(im, filter *Node, kernelShape tensor.Shape, pad, stride, dilation []int) (retVal *Node, err error)`

Conv2d is a simple 2D convolution, to be used for CPU computation only. If CuDNN is used, use the CUDAConv2D function. These are the properties the inputs must fulfil:

```im: must have a 4D shape. The expected format is BCHW (batch, channel, height, width)
filter: must have a 4D shape: (batch, kernel, height, width)
kernelShape: shape of the filter kernel
pad: len(pad) == 2
stride: len(stride) == 2
dilation: len(dilation) == 2
```
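
A minimal sketch of a single-channel convolution satisfying these properties (the shapes are purely illustrative):

```g := NewGraph()
// a batch of one single-channel 5x5 image, in BCHW format
im := NewTensor(g, Float64, 4, WithShape(1, 1, 5, 5), WithInit(RangedFrom(0)), WithName("im"))
// one 3x3 filter over one input channel
filter := NewTensor(g, Float64, 4, WithShape(1, 1, 3, 3), WithInit(Ones()), WithName("filter"))

out, err := Conv2d(im, filter, tensor.Shape{3, 3}, []int{1, 1}, []int{1, 1}, []int{1, 1})
if err != nil {
	log.Fatal(err)
}

m := NewTapeMachine(g)
defer m.Close()
if err := m.RunAll(); err != nil {
	log.Fatal(err)
}
fmt.Println(out.Value().Shape()) // (1, 1, 5, 5): pad 1 and stride 1 preserve H and W
```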

### func Cos¶

`func Cos(a *Node) (*Node, error)`

Cos performs a pointwise cos.

### func Cube¶

`func Cube(a *Node) (*Node, error)`

Cube performs a pointwise cube.

### func Div¶

`func Div(a, b *Node) (retVal *Node, err error)`

Div is a shortcut function for HadamardDiv for scalar values. For matrix/tensor values, the matrix division operation is not yet handled, and will panic.

### func Dropout¶

`func Dropout(x *Node, dropProb float64) (retVal *Node, err error)`

Dropout is a convenience function to implement dropout. It randomly zeroes out elements of a *Tensor with probability dropProb, using a mask drawn from a uniform distribution

### func Eq¶

`func Eq(a, b *Node, retSame bool) (*Node, error)`

Eq performs a pointwise eq operation. retSame indicates if the data type of the return value should be the same as the input data type. It defaults to Bool otherwise.

### func Exp¶

`func Exp(a *Node) (*Node, error)`

Exp performs a pointwise exp.

### func Expm1¶

`func Expm1(a *Node) (*Node, error)`

Expm1 performs a pointwise expm1.

### func Floor¶

`func Floor(a *Node) (*Node, error)`

Floor performs a pointwise floor.

### func GaussianRandomNode¶

`func GaussianRandomNode(g *ExprGraph, dt tensor.Dtype, mean, stdev float64, shape ...int) *Node`

GaussianRandomNode creates an input node that has a random op, so that every time the node is passed, random values will be plucked from a gaussian distribution with the mean and stdev provided. The type of the node depends on the shape passed in. To get a scalar value at run time, don't pass in any shapes

### func GlobalAveragePool2D¶

`func GlobalAveragePool2D(x *Node) (*Node, error)`

GlobalAveragePool2D consumes an input tensor X and applies average pooling across the values in the same channel. The expected input shape is BCHW where B is the batch size, C is the number of channels, and H and W are the height and the width of the data.

### func Gt¶

`func Gt(a, b *Node, retSame bool) (*Node, error)`

Gt performs a pointwise gt operation. retSame indicates if the data type of the return value should be the same as the input data type. It defaults to Bool otherwise.

### func Gte¶

`func Gte(a, b *Node, retSame bool) (*Node, error)`

Gte performs a pointwise gte operation. retSame indicates if the data type of the return value should be the same as the input data type. It defaults to Bool otherwise.

### func HadamardDiv¶

`func HadamardDiv(a, b *Node) (*Node, error)`

HadamardDiv performs a pointwise division.

### func HadamardProd¶

`func HadamardProd(a, b *Node) (*Node, error)`

HadamardProd performs a pointwise multiplication.

### func Im2Col¶

`func Im2Col(n *Node, kernel, pad, stride, dilation tensor.Shape) (retVal *Node, err error)`

Im2Col converts a BCHW image block to columns. The kernel, pad and stride parameters must be shapes of size 2, no more, no less. This poor naming scheme clearly comes from MATLAB

### func Inverse¶

`func Inverse(a *Node) (*Node, error)`

Inverse performs a pointwise inverse.

### func InverseSqrt¶

`func InverseSqrt(a *Node) (*Node, error)`

InverseSqrt performs a pointwise inversesqrt.

### func KeepDims¶

`func KeepDims(a *Node, expandLeft bool, fn func(a *Node) (*Node, error)) (*Node, error)`

KeepDims is a function that ensures that input and output dimensions are the same though the shape may change.

The expandLeft flag in the function indicates if any shape expansion should be done leftwards or rightwards. For example, if fn() returns a tensor with a shape (3) and the desired dimension is 2, then if `expandLeft` is true the result will be `(1, 3)`. Otherwise the result will be `(3, 1)`.

At the moment, results that turn into scalars cannot have their dimensions kept - the semantics aren't well established yet and this is a work in progress.
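
A sketch of the behaviour described above, wrapping a Mean reduction so the result keeps the input's rank:

```g := NewGraph()
a := NewMatrix(g, Float64, WithShape(2, 3), WithInit(RangedFrom(0)), WithName("a"))

// Mean along axis 1 alone would yield shape (2);
// KeepDims with expandLeft=false re-expands it to (2, 1)
b, err := KeepDims(a, false, func(a *Node) (*Node, error) {
	return Mean(a, 1)
})
if err != nil {
	log.Fatal(err)
}

m := NewTapeMachine(g)
defer m.Close()
if err := m.RunAll(); err != nil {
	log.Fatal(err)
}
fmt.Println(b.Value().Shape()) // (2, 1)
```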

### func LeakyRelu¶

`func LeakyRelu(x *Node, alpha float64) (*Node, error)`

LeakyRelu returns a node whose underlying value is:

```f(x) = alpha * x if x < 0
f(x) = x for x ⩾ 0
```

applied elementwise.

### func Log¶

`func Log(a *Node) (*Node, error)`

Log performs a pointwise log.

### func Log1p¶

`func Log1p(a *Node) (*Node, error)`

Log1p performs a pointwise log1p.

### func Log2¶

`func Log2(a *Node) (*Node, error)`

Log2 performs a pointwise log2.

### func LogSumExp¶

`func LogSumExp(a *Node, axis int) (retVal *Node, err error)`

LogSumExp performs addition in the log domain

### func Lt¶

`func Lt(a, b *Node, retSame bool) (*Node, error)`

Lt performs a pointwise lt operation. retSame indicates if the data type of the return value should be the same as the input data type. It defaults to Bool otherwise.

### func Lte¶

`func Lte(a, b *Node, retSame bool) (*Node, error)`

Lte performs a pointwise lte operation. retSame indicates if the data type of the return value should be the same as the input data type. It defaults to Bool otherwise.

### func Max¶

`func Max(a *Node, along ...int) (retVal *Node, err error)`

Max performs a max() on the input and the provided axes.

### func MaxPool1D¶

`func MaxPool1D(x *Node, kernel, pad, stride int) (*Node, error)`

MaxPool1D applies a maxpool on the node x.

### func MaxPool2D¶

`func MaxPool2D(x *Node, kernel tensor.Shape, pad, stride []int) (*Node, error)`

MaxPool2D applies the kernel filter to the input node. The pad slice can have two different lengths.

- if len(pad) == 2, padding is assumed to be symmetric, and padding is added both up *and* down each dimension

```paddedOutputH = pad[0] + inputH + pad[0]
```

- if len(pad) == 4, padding is explicit and can be asymmetric.

```paddedOutputH = pad[0] + inputH + pad[1]
```

### func Mean¶

`func Mean(a *Node, along ...int) (retVal *Node, err error)`

Mean performs a mean() on the input and the provided axes.

### func Mish¶

`func Mish(a *Node) (retVal *Node, err error)`

Mish is a novel activation function that is self regularizing.

### func Mul¶

`func Mul(a, b *Node) (retVal *Node, err error)`

Mul is the general handler for multiplication of nodes. It is extremely overloaded. Only use if you know what you're doing

If any of the nodes are ScalarType, then it'll be redirected to HadamardProd() instead. If the nodes are both vectors (that is, have a shape of (x, 1) or (1, x)), then the operator used will be a vectorDot. If only one of the nodes is a vector, then a matrix-vector multiplication will be used, and most importantly, a transpose will be used when necessary. If both nodes are matrices, then, well, matrix multiplication will be done.

### func Must¶

`func Must(n *Node, err error, opts ...NodeConsOpt) *Node`

Must indicates a node must be created. If there isn't a node created, or there was an error, it subsumes the error, and immediately panics
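
A minimal sketch of composing with Must (appropriate when a failure would be a programming error rather than a runtime condition):

```g := NewGraph()
x := NewScalar(g, Float64, WithName("x"))

// each of Square and Add returns (*Node, error);
// Must panics on error, which allows direct nesting
y := Must(Add(Must(Square(x)), x)) // y = x^2 + x

Let(x, 3.0)
m := NewTapeMachine(g)
defer m.Close()
if err := m.RunAll(); err != nil {
	log.Fatal(err)
}
fmt.Println(y.Value()) // 12
```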

### func Ne¶

`func Ne(a, b *Node, retSame bool) (*Node, error)`

Ne performs a pointwise ne operation. retSame indicates if the data type of the return value should be the same as the input data type. It defaults to Bool otherwise.

### func Neg¶

`func Neg(a *Node) (*Node, error)`

Neg performs a pointwise neg.

### func NegNegOptimization¶

`func NegNegOptimization(a *Node) (retVal *Node, err error)`

NegNegOptimization optimizes away -(-x) to just return x. (This pass is placed before neg.)

### func NewConstant¶

`func NewConstant(v interface{}, opts ...NodeConsOpt) *Node`

NewConstant takes in any reasonable value and makes it a constant node.

### func NewMatrix¶

`func NewMatrix(g *ExprGraph, t tensor.Dtype, opts ...NodeConsOpt) *Node`

NewMatrix creates a Node representing a variable that holds a matrix (nxm)

### func NewScalar¶

`func NewScalar(g *ExprGraph, t tensor.Dtype, opts ...NodeConsOpt) *Node`

NewScalar creates a Node representing a variable that holds a scalar value

### func NewTensor¶

`func NewTensor(g *ExprGraph, t tensor.Dtype, dims int, opts ...NodeConsOpt) *Node`

NewTensor creates a Node representing a variable that holds a tensor (any n-dimensional array with dimensions greater than 2)

### func NewUniqueNode¶

`func NewUniqueNode(opts ...NodeConsOpt) *Node`

NewUniqueNode creates a new unique node in a graph. If no graph was specified in the construction options then it will just return a graphless node.

### func NewVector¶

`func NewVector(g *ExprGraph, t tensor.Dtype, opts ...NodeConsOpt) *Node`

NewVector creates a Node representing a variable that holds a vector (nx1 matrix)

### func NodeFromAny¶

`func NodeFromAny(g *ExprGraph, any interface{}, opts ...NodeConsOpt) *Node`

NodeFromAny creates a Node from a tensor.Tensor, automatically filling in shape and type info

### func Norm¶

`func Norm(a *Node, axis, p int) (retVal *Node, err error)`

Norm returns the p-norm of a Value. Use p=2 if you want to use unordered norms.

This is a simpler version of the norms found in the Tensor package, which specializes and optimizes even more (well, given it's adapted from Numpy, it is clearly way more optimized)

### func OneHotVector¶

`func OneHotVector(id, classes int, t tensor.Dtype, opts ...NodeConsOpt) *Node`

OneHotVector creates a node representing a one hot vector

### func OuterProd¶

`func OuterProd(a, b *Node) (retVal *Node, err error)`

OuterProd returns a Node representing the outer product of two vectors. This function will return an error if both input nodes are not vectors

### func Pow¶

`func Pow(a, b *Node) (*Node, error)`

Pow performs a pointwise pow operation.

### func Read¶

`func Read(n *Node, into *Value) (retVal *Node)`

Read allows for extraction of the value of the *Node at runtime into a Value. To achieve this, a pointer to a Value (*Value) is passed into this function, not a Value. The 'into' value remains nil until the execution of the graph (via a call to the Run() methods of the VM)
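
The usual pattern is sketched below: declare a Value, wire Read into the graph, and inspect the Value after the VM has run.

```g := NewGraph()
x := NewScalar(g, Float64, WithName("x"))
y := Must(Square(x))

var yVal Value
Read(y, &yVal) // yVal remains nil until the machine runs

Let(x, 3.0)
m := NewTapeMachine(g)
defer m.Close()
if err := m.RunAll(); err != nil {
	log.Fatal(err)
}
fmt.Println(yVal) // 9
```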

### func Rectify¶

`func Rectify(x *Node) (retVal *Node, err error)`

Rectify is a convenience function for creating rectified linear units activation functions. This function uses ⩾, which is the canonical version. If you want to use >, you can create your own by just following this.

### func ReduceAdd¶

`func ReduceAdd(nodes Nodes, opts ...NodeConsOpt) (retVal *Node, err error)`

ReduceAdd takes a slice of *Nodes, and folds them into one by adding

### func ReduceMul¶

`func ReduceMul(nodes Nodes, opts ...NodeConsOpt) (retVal *Node, err error)`

ReduceMul is like foldl(*, nodes)

### func Reshape¶

`func Reshape(n *Node, to tensor.Shape) (retVal *Node, err error)`

Reshape reshapes a node and returns a new node with the new shape

### func Set¶

`func Set(a, b *Node) (retVal *Node)`

Set is the equivalent of doing this:

```a = b
```

where a and b are both variables

### func Sigmoid¶

`func Sigmoid(a *Node) (*Node, error)`

Sigmoid performs a pointwise sigmoid.

### func Sign¶

`func Sign(a *Node) (*Node, error)`

Sign performs a pointwise sign.

### func Sin¶

`func Sin(a *Node) (*Node, error)`

Sin performs a pointwise sin.

### func SizeOf¶

`func SizeOf(axis int, x *Node) (retVal *Node, err error)`

SizeOf returns the size of a value along an axis

### func Slice¶

`func Slice(n *Node, slices ...tensor.Slice) (retVal *Node, err error)`

Slice slices a *Node. For T[:] slices, pass in nil. Will error out if the node's type is not a Tensor
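
A minimal sketch, assuming the package's S() helper to construct a tensor.Slice:

```g := NewGraph()
x := NewMatrix(g, Float64, WithShape(3, 4), WithInit(RangedFrom(0)), WithName("x"))

row, err := Slice(x, S(1)) // the equivalent of x[1, :]
if err != nil {
	log.Fatal(err)
}

m := NewTapeMachine(g)
defer m.Close()
if err := m.RunAll(); err != nil {
	log.Fatal(err)
}
fmt.Println(row.Value()) // [4  5  6  7]
```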

### func SoftMax¶

`func SoftMax(a *Node, axes ...int) (retVal *Node, err error)`

SoftMax performs softmax on the input. Specifically this is used:

```e^(a[i]) / sum((e^(a[i])))
```

For a more numerically stable SoftMax, use StableSoftMax. TODO: MULTI RANK SOFTMAX

Example

Code:

```g := NewGraph()
t := tensor.New(tensor.WithShape(2, 3), tensor.WithBacking([]float64{1, 3, 2, 3, 2, 1}))
u := t.Clone().(*tensor.Dense)
v := tensor.New(tensor.WithShape(2, 2, 3), tensor.WithBacking([]float64{
1, 3, 2,
4, 2, 1,

3, 5, 3,
2, 1, 5,
}))

a := NodeFromAny(g, t, WithName("a"))
b := NodeFromAny(g, u, WithName("b"))
c := NodeFromAny(g, v, WithName("c"))

sm1 := Must(SoftMax(a))
sm0 := Must(SoftMax(b, 0))
sm := Must(SoftMax(c))
m := NewTapeMachine(g)
if err := m.RunAll(); err != nil {
log.Fatal(err)
}
fmt.Printf("a:\n%v\nsoftmax(a) - along last axis (default behaviour):\n%1.2f", a.Value(), sm1.Value())
fmt.Printf("b:\n%v\nsoftmax(b) - along axis 0:\n%1.2f", b.Value(), sm0.Value())
tmp := fmt.Sprintf("c %v:\n%v\nsoftmax(c) - along last axis (default behaviour) %v:\n%1.2f", c.Value().Shape(), c.Value(), sm.Value().Shape(), sm.Value())
fmt.Println(strings.Replace(tmp, "\n\n\n", "\n\n", -1))
// the requirement to use tmp and strings.Replace is because when Go runs example tests, it strips excess newlines.
```
```a:
⎡1  3  2⎤
⎣3  2  1⎦

softmax(a) - along last axis (default behaviour):
⎡0.09  0.67  0.24⎤
⎣0.67  0.24  0.09⎦
b:
⎡1  3  2⎤
⎣3  2  1⎦

softmax(b) - along axis 0:
⎡0.12  0.73  0.73⎤
⎣0.88  0.27  0.27⎦
c (2, 2, 3):
⎡1  3  2⎤
⎣4  2  1⎦

⎡3  5  3⎤
⎣2  1  5⎦

softmax(c) - along last axis (default behaviour) (2, 2, 3):
⎡0.09  0.67  0.24⎤
⎣0.84  0.11  0.04⎦

⎡0.11  0.79  0.11⎤
⎣0.05  0.02  0.94⎦
```

### func Softplus¶

`func Softplus(a *Node) (*Node, error)`

Softplus performs a pointwise softplus.

### func Sqrt¶

`func Sqrt(a *Node) (*Node, error)`

Sqrt performs a pointwise sqrt.

### func Square¶

`func Square(a *Node) (*Node, error)`

Square performs a pointwise square.

### func StableSoftMax¶

`func StableSoftMax(a *Node) (retVal *Node, err error)`

StableSoftMax performs a numerically stable softmax on the input. Specifically this is the formula used:

```e^(a - max(a)) / sum(e^(a - max(a)))
```

### func Sub¶

`func Sub(a, b *Node) (*Node, error)`

Sub performs a pointwise sub operation.

### func Sum¶

`func Sum(a *Node, along ...int) (retVal *Node, err error)`

Sum performs a sum() on the input and the provided axes.

### func Tanh¶

`func Tanh(a *Node) (*Node, error)`

Tanh performs a pointwise tanh.

### func Tensordot¶

`func Tensordot(aAxes []int, bAxes []int, a, b *Node) (retVal *Node, err error)`

Tensordot performs a tensor contraction of a and b along specified axes.
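
As a sketch, contracting a's axis 1 against b's axis 0 reproduces ordinary matrix multiplication:

```g := NewGraph()
a := NewMatrix(g, Float64, WithShape(2, 3), WithInit(RangedFrom(0)), WithName("a"))
b := NewMatrix(g, Float64, WithShape(3, 2), WithInit(RangedFrom(0)), WithName("b"))

c, err := Tensordot([]int{1}, []int{0}, a, b) // the same contraction as Mul(a, b)
if err != nil {
	log.Fatal(err)
}

m := NewTapeMachine(g)
defer m.Close()
if err := m.RunAll(); err != nil {
	log.Fatal(err)
}
fmt.Println(c.Value().Shape()) // (2, 2)
```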

### func Transpose¶

`func Transpose(n *Node, axes ...int) (retVal *Node, err error)`

Transpose performs a transpose on the input and provided permutation axes.

### func UniformRandomNode¶

`func UniformRandomNode(g *ExprGraph, dt tensor.Dtype, low, high float64, shape ...int) *Node`

UniformRandomNode creates an input node that has a random op, so that every time the node is passed, random values will be plucked from a uniform distribution. The type of the node depends on the shape passed in. To get a scalar value at run time, don't pass in any shapes

### func Upsample2D¶

`func Upsample2D(x *Node, scale int) (*Node, error)`

Upsample2D upscales a Tensor by the given scale factor.

```1, 2
3, 4
converts to
1,1,2,2
1,1,2,2
3,3,4,4
3,3,4,4
```

### func YOLOv3¶

`func YOLOv3(input *Node, anchors []float32, masks []int, netSize, numClasses int, ignoreTresh float32, targets ...*Node) (*Node, error)`

### func (*Node) Clone¶

`func (n *Node) Clone() (retVal interface{})`

Clone clones the node. There are some caveats:

```- the graph is not copied over - the node essentially does not belong to a collection
- there is no ID
- the children are not cloned
```

### func (*Node) CloneTo¶

`func (n *Node) CloneTo(g *ExprGraph) *Node`

CloneTo clones the node into a new graph. If CloneTo() is called with the same graph that n is in, it will return n. The reason this is done is because at any given time, every node should be unique in the *ExprGraph.

TODO: clone children as well (this means that CloneTo() is currently only suitable for input nodes)

### func (*Node) DataSize¶

`func (n *Node) DataSize() int`

### func (*Node) Deriv¶

`func (n *Node) Deriv() *Node`

### func (*Node) DerivOf¶

`func (n *Node) DerivOf() Nodes`

### func (*Node) Device¶

`func (n *Node) Device() Device`

Device returns the device the data will be on

### func (*Node) Dims¶

`func (n *Node) Dims() int`

Dims indicates how many dimensions the node's result has

### func (*Node) Dtype¶

`func (n *Node) Dtype() tensor.Dtype`

Dtype returns the dtype of the node

### func (*Node) Err¶

`func (n *Node) Err() error`

Err always returns nil. However, this method is implemented to enable nicer composition of functions

### func (*Node) Grad¶

`func (n *Node) Grad() (Value, error)`

Grad returns the gradient if there is one.

### func (*Node) GradOnDevice¶

`func (n *Node) GradOnDevice(dev Device, extern External) (retVal Value, allocOnExtern bool, err error)`

GradOnDevice gets the gradient value of the node as a Value but on the desired device. In this build the device is always CPU, so it's equivalent to calling .Grad()

### func (*Node) Graph¶

`func (n *Node) Graph() *ExprGraph`

Graph returns the graph of the node

### func (*Node) Groups¶

`func (n *Node) Groups() encoding.Groups`

Groups to fulfil the encoding Grouper interface

### func (*Node) Hashcode¶

`func (n *Node) Hashcode() uint32`

Hashcode provides the hash for the tree, assuming that the node is the root of the tree. Original implementation was here by Vatine (who's apparently 80 years old and using SO!?!):

```http://stackoverflow.com/questions/1988665/hashing-a-tree-structure
```

### func (*Node) ID¶

`func (n *Node) ID() int64`

ID returns the ID of the node. This satisfies the gonum/graph.Node interface

### func (*Node) IsColVec¶

`func (n *Node) IsColVec() bool`

IsColVec indicates if a node represents a Column Vector. This is based on the type of the node, not the actual value associated with the node

### func (*Node) IsMatrix¶

`func (n *Node) IsMatrix() bool`

IsMatrix indicates if a node represents a matrix. This is based on the type of the node, not the actual value associated with the node

### func (*Node) IsRowVec¶

`func (n *Node) IsRowVec() bool`

IsRowVec indicates if a node represents a Row Vector. This is based on the type of the node, not the actual value associated with the node

### func (*Node) IsScalar¶

`func (n *Node) IsScalar() bool`

IsScalar indicates if a node represents a scalar value. This is based on the type of the node, not the actual value associated with the node

### func (*Node) IsVar¶

`func (n *Node) IsVar() bool`

IsVar returns true if the node represents a differentiable variable (i.e. it's an argument to the function that is not a statement)

### func (*Node) IsVec¶

`func (n *Node) IsVec() bool`

IsVec returns whether this node is a vector

### func (*Node) IsVector¶

`func (n *Node) IsVector() bool`

IsVector indicates if a node represents a vector value. This is based on the type of the node, not the actual value associated with the node

### func (*Node) Name¶

`func (n *Node) Name() string`

Name returns the name of the node. If a name was specified and it is too long, the short name will be used instead (except in inputs)

The short name is typically of the form: OpName(%1, %2 ...), making it read more like a function call

### func (*Node) Node¶

`func (n *Node) Node() *Node`

Node returns itself. This sort of monoidal pattern is useful for compositions via interfaces.

### func (*Node) Nodes¶

`func (n *Node) Nodes() Nodes`

Nodes returns n as a slice of *Node. Again, this is mostly useful for interfaces

### func (*Node) Op¶

`func (n *Node) Op() Op`

Op returns the Op of the node

### func (*Node) RestrictedToDot¶

`func (n *Node) RestrictedToDot(up, down int) string`

RestrictedToDot prints the graphviz-compatible string but does not print the entire tree. up and down indicate how many levels to look up and how many levels to look down

### func (*Node) Shape¶

`func (n *Node) Shape() tensor.Shape`

Shape returns the shape of the node

### func (*Node) Strides¶

`func (n *Node) Strides() []int`

Strides returns the strides of the value of the node

### func (*Node) String¶

`func (n *Node) String() string`

String() implements the fmt.Stringer interface

### func (*Node) ToDot¶

`func (n *Node) ToDot() string`

ToDot returns the graph as a graphviz compatible string. DEPRECATED: This function will be removed in the next release, please use the encoding/dot package

### func (*Node) Type¶

`func (n *Node) Type() hm.Type`

Type returns the type of the node

### func (*Node) Value¶

`func (n *Node) Value() Value`

Value returns the value bound to the node. May return nil

### func (*Node) ValueOnDevice¶

`func (n *Node) ValueOnDevice(dev Device, extern External) (retVal Value, allocOnExtern bool, err error)`

ValueOnDevice gets the value of the node as a Value but on the desired device. In this build the device is always CPU, so it's equivalent to calling .Value()

### func (*Node) WriteHash¶

`func (n *Node) WriteHash(h hash.Hash32)`

WriteHash writes the hash to the provided Hash32.

### type NodeConsOpt¶

`type NodeConsOpt func(*Node)`

NodeConsOpt is a function that provides construction options for any Node.

### func In¶

`func In(g *ExprGraph) NodeConsOpt`

In is a node construction option to set a node's graph. A `*Node`'s graph is immutable. If the graph has already been set, a check will be made that the specified *ExprGraph and the *ExprGraph set in the *Node are the same. If they are not, the function will panic.

### func WithChildren¶

`func WithChildren(children Nodes) NodeConsOpt`

WithChildren sets the children of a node to the specified children. This construction option does NOT check if existing children exist, and will overwrite the existing children.

### func WithGrad¶

`func WithGrad(any interface{}) NodeConsOpt`

WithGrad is a node construction option that binds the value to the *Node. This function may panic if:

```- There isn't already a value associated with the node (.boundTo == nil)
- The type of the Value does not match the value of the node.
```

### func WithGroupName¶

`func WithGroupName(name string) NodeConsOpt`

WithGroupName is a node construction option to group a *Node within a particular group. This option is useful for debugging with graphs. This function is deprecated and will probably be removed in the next version.

### func WithInit¶

`func WithInit(fn InitWFn) NodeConsOpt`

WithInit is a node construction option to initialize a *Node with the InitWFn provided.

### func WithName¶

`func WithName(name string) NodeConsOpt`

WithName is a node construction option that gives the *Node the provided name. This is especially useful in debugging graphs.

### func WithOp¶

`func WithOp(op Op) NodeConsOpt`

WithOp is a node construction option to set a node's Op to the specified Op. `Op`s in `*Node`s are immutable once set and cannot be changed. If the node already has an Op specified, a check will be made to see if the provided Op and the one already specified in the `*Node` are the same - do note that comparison of Ops is done using the `Hashcode()` method of Ops, and hash collisions MAY occur. If the two ops are different, this function will panic.

### func WithShape¶

`func WithShape(shp ...int) NodeConsOpt`

WithShape is a node construction option to initialize a *Node with a particular shape. This function panics if the shape's dimensions do not match the specified dimensions of the *Node.

### func WithType¶

`func WithType(t hm.Type) NodeConsOpt`

WithType is a node construction option to set a node to the specified type. Types in *Node are immutable once set. If the type has already been specified in the node, a check will be made to see if both types are the same. If they aren't, it will panic.

### func WithValue¶

`func WithValue(any interface{}) NodeConsOpt`

WithValue is a node construction option that binds the value to the *Node. This function may panic if:

```- Gorgonia was unable to convert interface{} into a Value.
- The type of the Value does not match the type of the node.
```

### type NodeSet¶

`type NodeSet map[*Node]struct{}`

NodeSet is the primary type that represents a set

### func NewNodeSet¶

`func NewNodeSet(a ...*Node) NodeSet`

NewNodeSet creates and returns a reference to an empty set.

### func (NodeSet) Add¶

`func (set NodeSet) Add(i *Node) bool`

Add adds an item to the current set if it doesn't already exist in the set.

### func (NodeSet) Cardinality¶

`func (set NodeSet) Cardinality() int`

Cardinality returns how many items are currently in the set.

### func (*NodeSet) Clear¶

`func (set *NodeSet) Clear()`

Clear clears the entire set to be the empty set.

### func (NodeSet) Clone¶

`func (set NodeSet) Clone() NodeSet`

Clone returns a clone of the set. Does NOT clone the underlying elements.

### func (NodeSet) Contains¶

`func (set NodeSet) Contains(i *Node) bool`

Contains determines if a given item is already in the set.

### func (NodeSet) ContainsAll¶

`func (set NodeSet) ContainsAll(i ...*Node) bool`

ContainsAll determines if the given items are all in the set

### func (NodeSet) Difference¶

`func (set NodeSet) Difference(other NodeSet) NodeSet`

Difference returns a new set with items in the current set but not in the other set

### func (NodeSet) Equal¶

`func (set NodeSet) Equal(other NodeSet) bool`

Equal determines if two sets are equal to each other. If they both are the same size and have the same items they are considered equal. Order of items is not relevant for sets to be equal.

### func (NodeSet) Intersect¶

`func (set NodeSet) Intersect(other NodeSet) NodeSet`

Intersect returns a new set with items that exist only in both sets.

### func (NodeSet) IsSubset¶

`func (set NodeSet) IsSubset(other NodeSet) bool`

IsSubset determines if every item in the other set is in this set.

### func (NodeSet) IsSuperset¶

`func (set NodeSet) IsSuperset(other NodeSet) bool`

IsSuperset determines if every item of this set is in the other set.

### func (NodeSet) Iter¶

`func (set NodeSet) Iter() <-chan *Node`

Iter returns a channel of type *Node that you can range over.

### func (NodeSet) Remove¶

`func (set NodeSet) Remove(i *Node)`

Remove allows the removal of a single item in the set.

### func (NodeSet) SymmetricDifference¶

`func (set NodeSet) SymmetricDifference(other NodeSet) NodeSet`

SymmetricDifference returns a new set with items in the current set or the other set but not in both.

### func (NodeSet) ToSlice¶

`func (set NodeSet) ToSlice() Nodes`

ToSlice returns the elements of the current set as a slice

### func (NodeSet) Union¶

`func (set NodeSet) Union(other NodeSet) NodeSet`

Union returns a new set with all items in both sets.

### type Nodes¶

`type Nodes []*Node`

Nodes is a slice of nodes, but it also acts as a set of nodes by implementing the Sort interface

### func Backpropagate¶

`func Backpropagate(outputs, gradOutputs, wrt Nodes) (retVal Nodes, err error)`

Backpropagate backpropagates errors by performing reverse-mode symbolic differentiation, starting from the outputs, and working its way towards the inputs.

This is the rough algorithm:

```1. Filter out nodes that are unreachable
2. Forwards analysis, where a list of nodes affecting the output is added to consideration
3. Backwards analysis, where a list of nodes affected by differentiating the output are added to the consideration
4. If there is a difference in both sets, it will cause an error (both sets should be the same)
5. Traverse the graph from output towards input. On each visit, perform the symbolic differentiation
```

For most cases, Grad() should be used instead of Backpropagate(), as Grad() performs several checks which would be the general use case, before calling Backpropagate()

### func Grad¶

`func Grad(cost *Node, WRTs ...*Node) (retVal Nodes, err error)`

Grad takes a scalar cost node and a list of with-regards-to, and returns the gradient
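
A minimal sketch: differentiate y = x^2 with regards to x, then run the graph to materialize dy/dx:

```g := NewGraph()
x := NewScalar(g, Float64, WithName("x"))
y := Must(Square(x))

grads, err := Grad(y, x) // builds the symbolic derivative dy/dx
if err != nil {
	log.Fatal(err)
}

Let(x, 3.0)
m := NewTapeMachine(g)
defer m.Close()
if err := m.RunAll(); err != nil {
	log.Fatal(err)
}
fmt.Println(grads[0].Value()) // 6
```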

### func NodesFromInputs¶

`func NodesFromInputs(xs ...Input) (Nodes, error)`

NodesFromInputs creates a Nodes from a list of Input.

### func Sort¶

`func Sort(g *ExprGraph) (sorted Nodes, err error)`

Sort topologically sorts an ExprGraph: the root of the graph will be first. Nodes are sorted using gonum's SortStabilized function.

### func Unconcat¶

`func Unconcat(a *Node, along int, n int) (Nodes, error)`

Unconcat is the opposite of the built-in Concat function. TODO: port this back to Gorgonia and use Gorgonia's sli instead

Example

Code:

```g := NewGraph()
x := NewTensor(g, Float64, 4, WithShape(2, 3, 4, 5), WithInit(RangedFrom(0)), WithName("x"))
y := NewTensor(g, Float64, 4, WithShape(2, 3, 4, 5), WithInit(RangedFrom(120)), WithName("y"))

z, err := Concat(2, x, y)
if err != nil {
log.Fatal(err)
}

unconcats, err := Unconcat(z, 2, 2)
if err != nil {
log.Fatal(err)
}
a, b := unconcats[0], unconcats[1]

m := NewTapeMachine(g)
if err := m.RunAll(); err != nil {
log.Fatal(err)
}
tmp := fmt.Sprintf("a %v\n%v\nb %v\n%v", a.Value().Shape(), a.Value(), b.Value().Shape(), b.Value())
fmt.Println(strings.Replace(tmp, "\n\n", "\n", -1))
```
```a (2, 3, 4, 5)
⎡  0    1    2    3    4⎤
⎢  5    6    7    8    9⎥
⎢ 10   11   12   13   14⎥
⎣ 15   16   17   18   19⎦

⎡ 20   21   22   23   24⎤
⎢ 25   26   27   28   29⎥
⎢ 30   31   32   33   34⎥
⎣ 35   36   37   38   39⎦

⎡ 40   41   42   43   44⎤
⎢ 45   46   47   48   49⎥
⎢ 50   51   52   53   54⎥
⎣ 55   56   57   58   59⎦

⎡ 60   61   62   63   64⎤
⎢ 65   66   67   68   69⎥
⎢ 70   71   72   73   74⎥
⎣ 75   76   77   78   79⎦

⎡ 80   81   82   83   84⎤
⎢ 85   86   87   88   89⎥
⎢ 90   91   92   93   94⎥
⎣ 95   96   97   98   99⎦

⎡100  101  102  103  104⎤
⎢105  106  107  108  109⎥
⎢110  111  112  113  114⎥
⎣115  116  117  118  119⎦

b (2, 3, 4, 5)
⎡120  121  122  123  124⎤
⎢125  126  127  128  129⎥
⎢130  131  132  133  134⎥
⎣135  136  137  138  139⎦

⎡140  141  142  143  144⎤
⎢145  146  147  148  149⎥
⎢150  151  152  153  154⎥
⎣155  156  157  158  159⎦

⎡160  161  162  163  164⎤
⎢165  166  167  168  169⎥
⎢170  171  172  173  174⎥
⎣175  176  177  178  179⎦

⎡180  181  182  183  184⎤
⎢185  186  187  188  189⎥
⎢190  191  192  193  194⎥
⎣195  196  197  198  199⎦

⎡200  201  202  203  204⎤
⎢205  206  207  208  209⎥
⎢210  211  212  213  214⎥
⎣215  216  217  218  219⎦

⎡220  221  222  223  224⎤
⎢225  226  227  228  229⎥
⎢230  231  232  233  234⎥
⎣235  236  237  238  239⎦
```

### func UnstableSort¶

`func UnstableSort(g *ExprGraph) (sorted Nodes, err error)`

UnstableSort performs a topological sort of the directed graph g returning the 'from' to 'to' sort order. If a topological ordering is not possible, an Unorderable error is returned listing cyclic components in g with each cyclic component's members sorted by ID. When an Unorderable error is returned, each cyclic component's topological position within the sorted nodes is marked with a nil graph.Node.

### func (Nodes) Add¶

`func (ns Nodes) Add(n *Node) Nodes`

### func (Nodes) AllSameGraph¶

`func (ns Nodes) AllSameGraph() bool`

AllSameGraph returns true if all the nodes in the slice belong to the same graph. Note that constants do not have to belong to the same graph.

### func (Nodes) Contains¶

`func (ns Nodes) Contains(want *Node) bool`

Contains checks if the wanted node is in the set

### func (Nodes) Difference¶

`func (ns Nodes) Difference(other Nodes) Nodes`

Difference is ns - other. Bear in mind it is NOT commutative

### func (Nodes) Equals¶

`func (ns Nodes) Equals(other Nodes) bool`

Equals returns true if two Nodes are the same

### func (Nodes) Err¶

`func (ns Nodes) Err() error`

Err returns nil always

### func (Nodes) Format¶

`func (ns Nodes) Format(s fmt.State, c rune)`

Format implements fmt.Formatter, which allows Nodes to be differently formatted depending on the verbs

### func (Nodes) Intersect¶

`func (ns Nodes) Intersect(other Nodes) Nodes`

Intersect performs an intersection with other Nodes

### func (Nodes) Len¶

`func (ns Nodes) Len() int`

### func (Nodes) Less¶

`func (ns Nodes) Less(i, j int) bool`

### func (Nodes) Node¶

`func (ns Nodes) Node() *Node`

Node returns nil. Always. This is bound to cause a panic somewhere if a program is not using it correctly. The reason for implementing this is so that it may fulfil common interfaces.

### func (Nodes) Nodes¶

`func (ns Nodes) Nodes() Nodes`

Nodes returns itself. This is useful for interfaces

### func (Nodes) Set¶

`func (ns Nodes) Set() Nodes`

Set returns a uniquified slice. It mutates the slice.

### func (Nodes) Swap¶

`func (ns Nodes) Swap(i, j int)`

### type Op¶

```type Op interface {

// Arity returns the number of inputs the Op expects. -1 indicates that it's n-ary and will be determined at runtime
Arity() int

// Informs the type of the Op (not the node). This will be used by the type system to infer the final type of the node
Type() hm.Type

// returns the output shape as a function of the inputs
InferShape(...DimSizer) (tensor.Shape, error)

// executes the op
Do(...Value) (Value, error)

// indicates if the Op will return a pointer (allowing possible inplace edits) or by value
// if it's false, the return value of the Op will be a copy of its input
ReturnsPtr() bool

// Does this op potentially call external (cgo or cuda) functions (thereby requiring extra overhead for Go's trampolining thing)
CallsExtern() bool

// overwriteInput() is a method which states which input the output will be overwriting.
// This allows for some efficiency gains as the underlying arrays wouldn't have to be re-allocated.
// The method returns an int instead of a bool because potentially different operations may be allowed
// to overwrite certain inputs. For example, consider an operation to increment a value:
// the IncrementOp would be a unary operator, and assuming we would like to overwrite the input,
// the retVal of overwriteInput() will be 0 (inputs[0]).
// -1 is returned if overwriting of input is disallowed
OverwritesInput() int

/* Other methods */
WriteHash(h hash.Hash)
Hashcode() uint32
fmt.Stringer
}```

An Op is a symbolic representation of an operation. Think of them as functions, taking an input (or multiple inputs), and outputting something

All Ops have type signatures that look like this:

```OpName :: (Floats a) ⇒ Tensor a → Tensor a → Tensor a
```

### type RMSPropSolver¶

```type RMSPropSolver struct {
// contains filtered or unexported fields
}```

RMSPropSolver is a solver that implements Geoffrey Hinton's RMSProp gradient descent optimization algorithm. http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf

### func NewRMSPropSolver¶

`func NewRMSPropSolver(opts ...SolverOpt) *RMSPropSolver`

NewRMSPropSolver creates an RMSProp solver with these default values:

```eta (learn rate)	  : 0.001
eps (smoothing factor): 1e-8
rho (decay factor)    : 0.999
```

### func (*RMSPropSolver) Step¶

`func (s *RMSPropSolver) Step(model []ValueGrad) (err error)`

Step steps through each node in the model and applies the RMSProp gradient descent algorithm on the value.

This function will error out if the nodes do not have an associated Grad value.

### type ReductionOp¶

```type ReductionOp interface {
Op

IsReduction() bool
}```

ReductionOp changes the shape of the node

### type Result¶

```type Result interface {
Input
Errer
}```

Result is either a Node or Nodes or error. It's a poor man's sum type, and it's not sealed for good reason

### func LiftResult¶

`func LiftResult(a Input, err error) Result`

LiftResult creates a Result from an Input and error pair. If the error is not nil, the Input is discarded.

The usual use case is in a function that returns a `(*Node, error)`, e.g. LiftResult(Add(a, b)).
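
A minimal sketch; since Result embeds Errer, the error can be checked after composing:

```g := NewGraph()
a := NewScalar(g, Float64, WithName("a"))
b := NewScalar(g, Float64, WithName("b"))

res := LiftResult(Add(a, b))
if err := res.Err(); err != nil {
	log.Fatal(err)
}
sum := res.Node() // recover the *Node from the Result
_ = sum
```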

### type SDOp¶

```type SDOp interface {
Op

// DiffWRT indicates if the op is differentiable with regards to the given number of inputs
// returns []bool to indicate which input it is differentiable to
DiffWRT(inputs int) []bool

// SymDiff symbolically differentiates the op
SymDiff(inputs Nodes, output, grad *Node) (retVal Nodes, err error)
}```

A SDOp is an Op that supports symbolic differentiation

### type Scalar¶

```type Scalar interface {
Value
// contains filtered or unexported methods
}```

Scalar represents a scalar(non-array-based) value. Do note that it's the pointers of the scalar types (F64, F32, etc) that implement the Scalar interface. The main reason is primarily due to optimizations with regards to memory allocation and copying for device interoperability.

### type Solver¶

```type Solver interface {
}```

Solver is anything that does gradient updates. The name Solver is stolen from Caffe; it's a much shorter name than GradientUpdaters

### type SolverOpt¶

`type SolverOpt func(s Solver)`

SolverOpt is a function that provides construction options for a Solver

### func WithBatchSize¶

`func WithBatchSize(batch float64) SolverOpt`

WithBatchSize sets the batch size for the solver. Currently only Adam and Vanilla (basic SGD) have batch size support

### func WithBeta1¶

`func WithBeta1(beta1 float64) SolverOpt`

WithBeta1 sets the beta1 param of the solver. Only works with Adam

### func WithBeta2¶

`func WithBeta2(beta2 float64) SolverOpt`

WithBeta2 sets the beta2 param of the solver. Only works with Adam

### func WithClip¶

`func WithClip(clip float64) SolverOpt`

WithClip clips the gradient if it gets too crazy. By default all solvers do not have any clips attached

### func WithEps¶

`func WithEps(eps float64) SolverOpt`

WithEps sets the smoothing factor for the solver.

### func WithL1Reg¶

`func WithL1Reg(l1reg float64) SolverOpt`

WithL1Reg adds an L1 regularization parameter to the solver. By default, the solvers do not use any regularization param

### func WithL2Reg¶

`func WithL2Reg(l2reg float64) SolverOpt`

WithL2Reg adds an L2 regularization parameter to the solver. By default, the solvers do not use any regularization param

### func WithLearnRate¶

`func WithLearnRate(eta float64) SolverOpt`

WithLearnRate sets the learn rate or step size for the solver.

### func WithMomentum¶

`func WithMomentum(momentum float64) SolverOpt`

WithMomentum sets the momentum of the solver. It is a no-op if the solver's type is not Momentum

### func WithRho¶

`func WithRho(rho float64) SolverOpt`

WithRho sets the decay parameter of the RMSProp solver

### type StandardEngine¶

```type StandardEngine struct {
tensor.StdEng
}```

StandardEngine is the default CPU engine for gorgonia

### func (StandardEngine) Transpose¶

`func (e StandardEngine) Transpose(a tensor.Tensor, expStrides []int) error`

Transpose tensor a according to expStrides

### type SymDiffError¶

```type SymDiffError struct {
// contains filtered or unexported fields
}```

SymDiffError provides the context at which an error occurred

### func (SymDiffError) Error¶

`func (err SymDiffError) Error() string`

### func (SymDiffError) Grad¶

`func (err SymDiffError) Grad() *Node`

Grad returns a specific grad involved in the error

### func (SymDiffError) Grads¶

`func (err SymDiffError) Grads() map[*Node]Nodes`

Grads returns the grads involved in the error

### func (SymDiffError) Node¶

`func (err SymDiffError) Node() *Node`

Node returns a specific node involved in the error

### func (SymDiffError) Nodes¶

`func (err SymDiffError) Nodes() Nodes`

Nodes returns the nodes involved in the error

### type Tensor¶

```type Tensor interface {
// info about the ndarray
Shape() tensor.Shape
Strides() []int
Dtype() tensor.Dtype
Dims() int
Size() int
DataSize() int

IsScalar() bool
ScalarValue() interface{}

// engine/memory related stuff
// all Tensors should be able to be expressed of as a slab of memory
// Note: the size of each element can be acquired by T.Dtype().Size()
Engine() tensor.Engine      // Engine can be nil
MemSize() uintptr           // the size in memory
Uintptr() uintptr           // the pointer to the first element, as a uintptr
Pointer() unsafe.Pointer    // the pointer to the first element, as an unsafe.Pointer
IsNativelyAccessible() bool // Can Go access the memory
IsManuallyManaged() bool    // Must Go manage the memory
}```

Tensor is an interface that describes an ndarray

### type TensorType¶

```type TensorType struct {
Dims int // dims

Of hm.Type
}```

TensorType is a type constructor for tensors.

Think of it as something like this:

```data Tensor a = Tensor d a
```

The shape of the Tensor is not part of TensorType. Shape checking is relegated to the dynamic part of the program run

### func (TensorType) Apply¶

`func (t TensorType) Apply(sub hm.Subs) hm.Substitutable`

Apply applies the substitutions on the types. Satisfies the hm.Type interface.

### func (TensorType) Eq¶

`func (t TensorType) Eq(other hm.Type) bool`

Eq is the equality function of this type. The type of Tensor has to be the same, and for now, only the dimensions are compared. Shape may be compared in the future for tighter type inference. Satisfies the hm.Type interface.

### func (TensorType) Format¶

`func (t TensorType) Format(state fmt.State, c rune)`

Format implements fmt.Formatter. It is also required for the satisfaction of the hm.Type interface.

### func (TensorType) FreeTypeVar¶

`func (t TensorType) FreeTypeVar() hm.TypeVarSet`

FreeTypeVar returns any free (unbound) type variables in this type. Satisfies the hm.Type interface.

### func (TensorType) Name¶

`func (t TensorType) Name() string`

Name returns the name of the type, which will always be "Tensor". Satisfies the hm.Type interface.

### func (TensorType) Normalize¶

`func (t TensorType) Normalize(k, v hm.TypeVarSet) (hm.Type, error)`

Normalize normalizes the type variable names (if any) in the TensorType. Satisfies the hm.Type interface.

### func (TensorType) String¶

`func (t TensorType) String() string`

String implements fmt.Stringer and runtime.Stringer. Satisfies the hm.Type interface.

### func (TensorType) Types¶

`func (t TensorType) Types() hm.Types`

Types returns a list of types that TensorType contains - in this case, the type of Tensor (float64, float32, etc). Satisfies the hm.Type interface.

### type Typer¶

```type Typer interface {
Type() hm.Type
}```

Typer represents any type (typically a Op) that knows its own Type

### type U8¶

`type U8 byte`

U8 represents a byte value.

### func (*U8) Data¶

`func (v *U8) Data() interface{}`

Data returns the original representation of the Value

### func (*U8) Dtype¶

`func (v *U8) Dtype() tensor.Dtype`

Dtype returns the Dtype of the value

### func (*U8) Format¶

`func (v *U8) Format(s fmt.State, c rune)`

Format implements fmt.Formatter

### func (*U8) MemSize¶

`func (v *U8) MemSize() uintptr`

MemSize satisfies the tensor.Memory interface

### func (*U8) Pointer¶

`func (v *U8) Pointer() unsafe.Pointer`

Pointer returns the pointer as an unsafe.Pointer. Satisfies the tensor.Memory interface

### func (*U8) Shape¶

`func (v *U8) Shape() tensor.Shape`

Shape returns a scalar shape for all scalar values

### func (*U8) Size¶

`func (v *U8) Size() int`

Size returns 0 for all scalar Values

### func (*U8) Uintptr¶

`func (v *U8) Uintptr() uintptr`

Uintptr satisfies the tensor.Memory interface

### type UnaryOp¶

```type UnaryOp interface {
Op

IsUnary() bool
}```

A UnaryOp is an Op that takes only one input

### type UnsafeDoer¶

```type UnsafeDoer interface {
UnsafeDo(inputs ...Value) (Value, error)
}```

UnsafeDoer is an op that will overwrite the underlying value.

### type UsePreallocDoer¶

```type UsePreallocDoer interface {
UsePreallocDo(prealloc Value, inputs ...Value) (Value, error)
}```

UsePreallocDoer is an op that works when a preallocated value is provided

### type VM¶

```type VM interface {
RunAll() error
Reset()

// Close closes all the machine resources (CUDA, if any, loggers if any)
Close() error
}```

VM represents a structure that can execute a graph or program. There are two VMs (both unexported):

```- *tapeMachine
- *lispMachine
```

The *tapeMachine pre-compiles a graph into a list of instructions, then executes the instructions linearly and sequentially. The main tradeoff is dynamism: graphs cannot be dynamically created on the fly, as a re-compilation process is required (and compilation is relatively expensive). However, graphs executed with the *tapeMachine run much faster, as plenty of optimizations have been done in the code generation stage.

The *lispMachine allows for graphs to be dynamically built and executed upon. The tradeoff is that executing a graph on *lispMachine is generally slower than on *tapeMachine, given the same static "image" of a graph.

### type VMOpt¶

`type VMOpt func(m VM)`

VMOpt is a VM creation option
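
Options are passed variadically at VM construction. A minimal sketch using options documented below (graph construction elided):

```g := NewGraph()
// ... define the expression on g ...

// a *lispMachine that only runs the forward pass
m := NewLispMachine(g, ExecuteFwdOnly())
defer m.Close()

// a *tapeMachine that keeps immutable copies of executed values
tm := NewTapeMachine(g, TraceExec())
defer tm.Close()
```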

### func BindDualValues¶

`func BindDualValues(nodes ...*Node) VMOpt`

BindDualValues is an option for *tapeMachine only. This is useful to set when using a Solver

### func ExecuteBwdOnly¶

`func ExecuteBwdOnly() VMOpt`

ExecuteBwdOnly creates a VM that will execute a graph by doing back propagation only. The assumption is, of course, that the forward graph has already been executed and that there are already values associated with the nodes. This option is only for *lispMachine. Try it on any other VM and it will panic.

### func ExecuteFwdOnly¶

`func ExecuteFwdOnly() VMOpt`

ExecuteFwdOnly creates a VM that will execute a graph forwards only - it will not do back propagation. This option is only for *lispMachine. Try it on any other VM and it will panic.

### func LogBothDir¶

`func LogBothDir() VMOpt`

LogBothDir logs both directions of the execution of the graph. This option is only available for *lispMachine.

### func LogBwd¶

`func LogBwd() VMOpt`

LogBwd logs the backwards execution of a graph. This option is only for *lispMachine. Try it on any other VM and it will panic.

### func LogFwd¶

`func LogFwd() VMOpt`

LogFwd logs the forward execution of a graph. This option is only for *lispMachine. Try it on any other VM and it will panic.

### func TraceExec¶

`func TraceExec() VMOpt`

TraceExec is an option for *tapeMachine only. It stores an immutable copy of the executed value into the node, instead of a mutable value, which may be clobbered

### func UseCudaFor¶

`func UseCudaFor(ops ...string) VMOpt`

UseCudaFor is an option for *tapeMachine. This function is NO-OP unless the program is built with the `cuda` tag.

### func WithEngine¶

`func WithEngine(e tensor.Engine) VMOpt`

WithEngine sets the tensor engine for computation inside the VM.

### func WithInfWatch¶

`func WithInfWatch() VMOpt`

WithInfWatch creates a VM that will watch for Infs when executing. It watches for both +Inf and -Inf. No choice there. This slows the execution down.

### func WithLogger¶

`func WithLogger(logger *log.Logger) VMOpt`

WithLogger creates a VM with the supplied logger. If the logger is nil, a default logger writing to os.Stderr will be created.

### func WithManualGradient¶

`func WithManualGradient() VMOpt`

WithManualGradient allows the user to set the gradient of the root, before backprop. The root gradients should be set using the SetDeriv method

### func WithNaNWatch¶

`func WithNaNWatch() VMOpt`

WithNaNWatch creates a VM that will watch for NaNs when executing. This slows the execution down.

### func WithPrecompiled¶

`func WithPrecompiled(prog *program, locMap map[*Node]register) VMOpt`

WithPrecompiled is an option to pass in compiled programs. This is useful for users who use the CompileFunction function

### func WithValueFmt¶

`func WithValueFmt(format string) VMOpt`

WithValueFmt defines how the logger will output the values. It defaults to "%3.3f"

### func WithWatchlist¶

`func WithWatchlist(list ...interface{}) VMOpt`

WithWatchlist creates a VM with a watchlist. When the execution touches the things in the watchlist, the VM's logger will log it. This allows for watching and fine-tuning of the algorithm. When nothing is passed in, the VM will default to watching and logging every single execution object.

The watchlist allows for different things to be watched, depending on VM type:

```*lispMachine will ONLY take *Node
*tapeMachine will take int (for register IDs) or *Node.
```
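
A hedged sketch of watching a single node on a *tapeMachine (the logger and watchlist options compose):

```g := NewGraph()
x := NewScalar(g, Float64, WithName("x"))
y := NewScalar(g, Float64, WithName("y"))
z := Must(Add(x, y))

logger := log.New(os.Stderr, "", 0)
m := NewTapeMachine(g, WithLogger(logger), WithWatchlist(z))
defer m.Close()
```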

### type Value¶

```type Value interface {
Shape() tensor.Shape // Shape  returns the shape of the Value. Scalar values return ScalarShape()
Size() int           // Size represents the number of elements in the Value. Note that in cases such as a *tensor.Dense, the underlying slice MAY have more elements than the Size() reports. This is correct.
Data() interface{}   // Data returns the original representation of the Value
Dtype() tensor.Dtype // Dtype returns the Dtype of the value

tensor.Memory
fmt.Formatter
}```

Value represents a value that Gorgonia accepts. At this point it is implemented by:

```- all scalar value types (F64, F32... etc)
- *tensor.Dense
- *dualValue
```

A Value is essentially anything that knows its own type and shape. Most importantly, though, a Value is a pointer and can be converted into a tensor.Memory. This is done for the sake of interoperability with external devices like cgo, CUDA, or OpenCL. It also means that, for the most part, Values will be allocated on the heap. There are some performance tradeoffs made in this decision, but ultimately this is better than having to manually manage blocks of memory.
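
A minimal sketch of inspecting a Value, assuming that an input node created with WithInit carries a Value immediately:

```g := NewGraph()
x := NewMatrix(g, Float64, WithShape(2, 2), WithInit(RangedFrom(0)), WithName("x"))

v := x.Value() // a Value, here backed by a *tensor.Dense
fmt.Println(v.Shape()) // (2, 2)
fmt.Println(v.Dtype()) // float64
fmt.Println(v.Data())  // [0 1 2 3]
```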

### func CloneValue¶

`func CloneValue(v Value) (Value, error)`

CloneValue clones a value. For scalars, since Go copies scalars by value, the value itself is returned.

### func Copy¶

`func Copy(dest, src Value) (Value, error)`

Copy copies the src Value into the dest Value. For scalars, the value itself is simply returned.
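
A hedged sketch of both functions, assuming NewF64 constructs a scalar F64 Value and that *tensor.Dense also satisfies Value:

```a := NewF64(3.14)
b, err := CloneValue(a) // scalars are cloned by Go's copy semantics
if err != nil {
log.Fatal(err)
}
fmt.Println(b.Data()) // 3.14

// Copy writes src into a preexisting dest of matching shape and dtype
dest := tensor.New(tensor.WithShape(2), tensor.Of(tensor.Float64))
src := tensor.New(tensor.WithBacking([]float64{1, 2}))
if _, err := Copy(dest, src); err != nil {
log.Fatal(err)
}
fmt.Println(dest.Data()) // [1 2]
```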

### func ScalarAsTensor¶

`func ScalarAsTensor(v Value, dims int, e tensor.Engine) Value`

ScalarAsTensor returns the tensor representation of a scalar. It is particularly useful as a "reshape" of tensors of sorts

The Value passed in must be a Scalar, a tensor.Tensor, or a *dualValue. Anything else will panic.

### func ZeroValue¶

`func ZeroValue(v Value) Value`

ZeroValue returns the zero value of a type
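
A quick sketch, again assuming NewF64 constructs a scalar Value:

```v := NewF64(3.14)
z := ZeroValue(v)
fmt.Println(z.Data()) // 0
```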

### type ValueCloser¶

```type ValueCloser interface {
ValueClose(interface{}) bool
}```

ValueCloser represents any type that can perform a close-value check

### type ValueEqualer¶

```type ValueEqualer interface {
ValueEq(Value) bool
}```

ValueEqualer represents any type that can perform an equal-value check

### type ValueGrad¶

```type ValueGrad interface {
Valuer
}```

ValueGrad is any type that has a value and a grad. This is used for Solvers

### func NodesToValueGrads¶

`func NodesToValueGrads(in Nodes) (out []ValueGrad)`

NodesToValueGrads is a utility function that converts a Nodes to a slice of ValueGrad for the solvers

### type Valuer¶

```type Valuer interface {
Value() Value
}```

Valuer is any type that can return a Value

### type VanillaSolver¶

```type VanillaSolver struct {
// contains filtered or unexported fields
}```

VanillaSolver is your bog standard stochastic gradient descent optimizer. There are no fancy features to this

### func NewVanillaSolver¶

`func NewVanillaSolver(opts ...SolverOpt) *VanillaSolver`

NewVanillaSolver creates a new VanillaSolver with sane-ish default values

### func (*VanillaSolver) Step¶

`func (s *VanillaSolver) Step(model []ValueGrad) (err error)`

Step steps through each node in the model and applies the most basic gradient descent algorithm on the value.

This function will error out if the nodes do not have an associated Grad value.
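
A minimal training-step sketch tying Step to BindDualValues, Grad, and NodesToValueGrads; the expression here is illustrative, not canonical:

```g := NewGraph()
w := NewMatrix(g, Float64, WithShape(2, 2), WithInit(GlorotU(1)), WithName("w"))
x := NewMatrix(g, Float64, WithShape(2, 2), WithInit(RangedFrom(0)), WithName("x"))
cost := Must(Mean(Must(Mul(x, w))))

// symbolically differentiate cost with respect to w
if _, err := Grad(cost, w); err != nil {
log.Fatal(err)
}

// BindDualValues ensures the gradients are kept after execution
m := NewTapeMachine(g, BindDualValues(w))
defer m.Close()
if err := m.RunAll(); err != nil {
log.Fatal(err)
}

// one step of vanilla SGD over the (value, grad) pairs
solver := NewVanillaSolver(WithLearnRate(0.1))
if err := solver.Step(NodesToValueGrads(Nodes{w})); err != nil {
log.Fatal(err)
}
```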

### type ZeroValuer¶

```type ZeroValuer interface {
Value
ZeroValue() Value
}```

ZeroValuer is a Value that can provide the zero value of its type

### type Zeroer¶

```type Zeroer interface {
Value
Zero()
}```

Zeroer is a Value that can zero itself

### Package Files ¶

Documentation was rendered with GOOS=linux and GOARCH=amd64.
