Documentation ¶
Index ¶
- func FullyConnectedBackwardData(dx, w, dy *Tensor, alpha, beta float32) error
- func FullyConnectedBackwardFilter(x, dw, db, dy *Tensor, alpha, beta float32) error
- func FullyConnectedForward(x, w, b, y *Tensor, alpha, beta float32) error
- func L1L2Regularization(decay1, decay2 float32, dw, w *Tensor) (l1, l2 float32)
- func SoftMaxBackward(dx, y, target *Tensor, alpha, beta float32) error
- func SoftMaxForward(x, y *Tensor, alpha, beta float32) (err error)
- func SoftMaxLossandPercent(target, output *Tensor) (avgloss float32, avgpercent float32)
- type Adam
- type AdamOptions
- type Convolution
- func (c *Convolution) BackwardData(dx, w, dy *Tensor, alpha, beta float32) (err error)
- func (c *Convolution) BackwardFilter(x, dw, db, dy *Tensor, alpha, beta float32) (err error)
- func (c *Convolution) FindOutputDims(x, w *Tensor) []int
- func (c *Convolution) Forward(x, w, wb, y *Tensor, alpha, beta float32) (err error)
- func (c *Convolution) Get() (padding, dilation, stride []int)
- func (c *Convolution) Set(padding, stride, dilation []int, NHWC bool) error
- type LeakyRelu
- type Relu
- type Tensor
- func (t *Tensor) Add(A, B *Tensor, alpha1, alpha2, beta float32) error
- func (t *Tensor) AddAll(val float32)
- func (t *Tensor) AddAtoT(A *Tensor, alpha1 float32) error
- func (t *Tensor) Average() float32
- func (t *Tensor) Dims() (dims []int)
- func (t *Tensor) Div(A, B *Tensor, alpha1, alpha2, beta float32) error
- func (t *Tensor) Get(dimlocation []int) (value float32)
- func (t *Tensor) LoadFromSlice(values []float32) (err error)
- func (t *Tensor) Mult(A, B *Tensor, alpha1, alpha2, beta float32) error
- func (t *Tensor) MultAll(val float32)
- func (t *Tensor) Set(value float32, dimlocation []int)
- func (t *Tensor) SetAll(value float32)
- func (t *Tensor) Stride() (stride []int)
- func (t *Tensor) Volume() int
- func (t *Tensor) ZeroClone() (*Tensor, error)
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func FullyConnectedBackwardData ¶
FullyConnectedBackwardData does the backward data operation for a fully connected neural network. dx stores the gradients found for the input of this layer (out). w is the weight tensor (in). dy holds the gradients found for the output of this layer (in).
Batches are done in parallel. alpha has no functionality, but beta will multiply the previous values of dx by beta before the new gradients are added.
func FullyConnectedBackwardFilter ¶
FullyConnectedBackwardFilter does the backward filter operation for a fully connected neural network. x is the input for this layer (in). dw stores the gradients for the weight tensor (out). db stores the gradients for the bias (out). dy holds the gradients found for the output of this layer (in).
Batches are done in parallel. alpha has no functionality, but beta will multiply the previous values of dw and db by beta before the new gradients are added.
func FullyConnectedForward ¶
FullyConnectedForward does the forward operation for a fully connected neural network. x is the input tensor (in). w is the weight tensor (in). b is the bias tensor (in). y is the output tensor (out).
Batches are performed in parallel.
alpha and beta have no functionality at this point. Set alpha to 1 and beta to 0 to avoid any future issues. See the sketch below.
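Example ¶
A minimal sketch of one training step through a single fully connected layer. The import path, alias, and function name are hypothetical, and all tensors are assumed to be pre-allocated elsewhere with compatible 4-d dims; only the FullyConnected* calls and the alpha/beta values come from the docs above.
package fcsketch

import nn "example.com/nn" // hypothetical import path; use this package's real path

// fcStep runs forward, then backward-data and backward-filter.
// alpha=1, beta=0 follows the recommendation above: beta=0 overwrites
// any stale values instead of scaling and accumulating into them.
func fcStep(x, w, b, y, dx, dw, db, dy *nn.Tensor) error {
	if err := nn.FullyConnectedForward(x, w, b, y, 1, 0); err != nil {
		return err
	}
	// ... fill dy with the gradients of the loss w.r.t. y ...
	if err := nn.FullyConnectedBackwardData(dx, w, dy, 1, 0); err != nil {
		return err
	}
	return nn.FullyConnectedBackwardFilter(x, dw, db, dy, 1, 0)
}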
func L1L2Regularization ¶
L1L2Regularization performs the regularization on dw. It should be run before a trainer like Adam or momentum.
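Example ¶
A sketch of the intended call order: regularize the weight gradients before handing them to a trainer. The decay values and wrapper name are illustrative; nn is the hypothetical import alias used in the other examples.
package regsketch

import nn "example.com/nn" // hypothetical import path

// regularize applies L1/L2 decay to the weight gradients dw ahead of a
// trainer update and returns the penalty terms, which can be added to a
// reported loss.
func regularize(dw, w *nn.Tensor) (l1, l2 float32) {
	return nn.L1L2Regularization(1e-5, 1e-4, dw, w)
}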
func SoftMaxBackward ¶
SoftMaxBackward does the softmax backward operation. y is the output of SoftMaxForward (in). dx stores the gradients for the inputs of SoftMaxForward (out). target holds the target values that the output is trying to reach (in).
alpha and beta behave like:
dx = dx*beta + alpha*Operation(y, target)
func SoftMaxForward ¶
SoftMaxForward performs the softmax calculation. alpha and beta have no function right now.
func SoftMaxLossandPercent ¶
SoftMaxLossandPercent computes the softmax loss of the layer. The loss and percent are averaged over all the indicators (anything greater than 0).
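Example ¶
A sketch of the softmax output stage wired together: forward, loss/accuracy reporting, then the input gradients. The import alias and function name are hypothetical; tensors are assumed pre-allocated with matching dims, and target is assumed to hold the >0 indicators the loss averages over.
package smsketch

import nn "example.com/nn" // hypothetical import path

func softmaxStep(x, y, dx, target *nn.Tensor) (avgloss, avgpercent float32, err error) {
	if err = nn.SoftMaxForward(x, y, 1, 0); err != nil { // alpha/beta unused today
		return
	}
	avgloss, avgpercent = nn.SoftMaxLossandPercent(target, y)
	// dx = dx*beta + alpha*Operation(y, target); beta=0 overwrites dx.
	err = nn.SoftMaxBackward(dx, y, target, 1, 0)
	return
}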
Types ¶
type Adam ¶
type Adam struct {
// contains filtered or unexported fields
}
Adam is the Adam trainer.
func CreateAdamTrainer ¶
func CreateAdamTrainer(options *AdamOptions) *Adam
CreateAdamTrainer creates an Adam trainer. If options is nil, default values will be used.
type AdamOptions ¶
type AdamOptions struct {
	Rate   float32
	Beta1  float32
	Beta2  float32
	Eps    float32
	Decay1 float32
	Decay2 float32
}
AdamOptions are options that can be passed to CreateAdamTrainer
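Example ¶
A sketch of configuring the trainer explicitly. The values shown are the conventional Adam constants, not necessarily the defaults this package uses when options is nil, and whether Decay1/Decay2 feed L1L2Regularization is an assumption.
package adamsketch

import nn "example.com/nn" // hypothetical import path

func newTrainer() *nn.Adam {
	return nn.CreateAdamTrainer(&nn.AdamOptions{
		Rate:   0.001, // learning rate
		Beta1:  0.9,   // first-moment decay
		Beta2:  0.999, // second-moment decay
		Eps:    1e-8,  // numerical-stability term
		Decay1: 0,     // presumably the L1 decay (see L1L2Regularization)
		Decay2: 1e-4,  // presumably the L2 decay
	})
}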
type Convolution ¶
type Convolution struct {
// contains filtered or unexported fields
}
Convolution contains the parameters that are used to do a convolution
func CreateConvolution ¶
func CreateConvolution() *Convolution
CreateConvolution creates a convolution algo
func (*Convolution) BackwardData ¶
func (c *Convolution) BackwardData(dx, w, dy *Tensor, alpha, beta float32) (err error)
BackwardData does the backward data operation. dx stores the gradients with respect to x from forward propagation (out). w is the weight tensor (in). dy holds the gradients stored from the layer's output (in). alpha is for future work and has no function right now. beta will multiply the previous values of dx by beta before the new gradients are added.
func (*Convolution) BackwardFilter ¶
func (c *Convolution) BackwardFilter(x, dw, db, dy *Tensor, alpha, beta float32) (err error)
BackwardFilter updates the gradients from dy into dw and db. alpha is for future work. beta will multiply the previous values of dw by beta before the gradients are accumulated.
func (*Convolution) FindOutputDims ¶
func (c *Convolution) FindOutputDims(x, w *Tensor) []int
FindOutputDims finds the output dims of the convolution
func (*Convolution) Forward ¶
func (c *Convolution) Forward(x, w, wb, y *Tensor, alpha, beta float32) (err error)
Forward is the forward propagation. Calculations are stored in y. alpha and beta are for future work; they don't have any function right now.
func (*Convolution) Get ¶
func (c *Convolution) Get() (padding, dilation, stride []int)
Get gets the convolution settings
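Example ¶
A sketch of the intended Convolution call order: configure, size-check, forward, then the two backward passes. The import alias and function name are hypothetical; tensors are assumed allocated elsewhere, with y created from the dims FindOutputDims reports, and the 2-element slices assume two spatial dims.
package convsketch

import (
	"fmt"

	nn "example.com/nn" // hypothetical import path
)

func convStep(x, w, wb, y, dx, dw, db, dy *nn.Tensor) error {
	c := nn.CreateConvolution()
	// Zero padding, unit stride, no dilation, NCHW layout.
	if err := c.Set([]int{0, 0}, []int{1, 1}, []int{1, 1}, false); err != nil {
		return err
	}
	fmt.Println(c.FindOutputDims(x, w)) // y must have been created with these dims

	if err := c.Forward(x, w, wb, y, 1, 0); err != nil { // alpha/beta unused today
		return err
	}
	// ... fill dy with the gradients of the loss w.r.t. y ...
	if err := c.BackwardData(dx, w, dy, 1, 0); err != nil { // beta=0 overwrites dx
		return err
	}
	return c.BackwardFilter(x, dw, db, dy, 1, 0)
}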
type LeakyRelu ¶
type LeakyRelu struct {
// contains filtered or unexported fields
}
LeakyRelu is a struct that holds the negative and positive coefficients.
func CreateLeakyRelu ¶
CreateLeakyRelu creates a leaky relu
func (*LeakyRelu) Backward ¶
Backward does the backward leaky relu activation. alpha and beta have no function right now. Set them to the defaults alpha=1, beta=0.
type Relu ¶
type Relu struct {
// contains filtered or unexported fields
}
Relu holds the methods to do Relu activation
func CreateRelu ¶
CreateRelu will create the relu function. If ceiling <= 0, then there won't be a ceiling.
func (*Relu) Backward ¶
Backward does the backward operation. alpha and beta have no function right now.
type Tensor ¶
type Tensor struct {
// contains filtered or unexported fields
}
Tensor is the basic data structure of convolutional neural networks. It can be used for regular neural networks too.
func CreateRandomizedWeightsTensor ¶
CreateRandomizedWeightsTensor creates a tensor with randomized weights.
func CreateTensor ¶
CreateTensor creates a tensor according to the values passed. If len(dims) != 4, an error will be returned. f64 is a placeholder; the only type available is float32.
func CreateTensorEx ¶
CreateTensorEx creates a tensor and copies data into it. If len(data) != the tensor's Volume(), an error will be returned. If len(dims) != 4, an error will be returned.
func (*Tensor) LoadFromSlice ¶
LoadFromSlice loads values into the tensor, starting at index zero and continuing up to the length of values. If values is longer than the volume of the tensor, an error will be returned.
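Example ¶
A sketch of the basic Tensor accessors. The import alias and function name are hypothetical, and t is assumed to have been created with 4-d dims (e.g. via CreateTensor, whose signature is not shown above); the exact blend performed by Add is also an assumption noted in the comments.
package tensorsketch

import (
	"fmt"

	nn "example.com/nn" // hypothetical import path
)

func tensorBasics(t *nn.Tensor) error {
	fmt.Println(t.Dims(), t.Stride(), t.Volume())

	// Fills the leading elements; errors only if the slice is longer
	// than t.Volume().
	if err := t.LoadFromSlice([]float32{1, 2, 3, 4}); err != nil {
		return err
	}

	t.Set(42, []int{0, 0, 0, 0})          // write one element by coordinate
	fmt.Println(t.Get([]int{0, 0, 0, 0})) // read it back
	fmt.Println(t.Average())

	// ZeroClone allocates a same-shaped tensor of zeros, handy for gradients.
	sum, err := t.ZeroClone()
	if err != nil {
		return err
	}
	// Presumably sum = alpha1*A + alpha2*B + beta*sum, elementwise.
	return sum.Add(t, t, 1, 1, 0) // sum = t + t
}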