convnetgo

Published: Apr 25, 2020 License: MIT Imports: 5 Imported by: 0

README

convnetgo

Examples of operations used in CNNs and NNs, parallelized to boot.

These operations are parallelized per batch, so for best results use larger batches.

Single-threaded operations will be added, too.

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func FullyConnectedBackwardData

func FullyConnectedBackwardData(dx, w, dy *Tensor, alpha, beta float32) error

FullyConnectedBackwardData does the backward data operation for a fully connected neural network. dx stores the gradients found for the input of this layer (out). w is the weight tensor (in). dy holds the gradients found for the output of this layer (in).

Batches are done in parallel. alpha has no functionality, but beta will multiply the previous values of dx by beta before the new gradients are added.
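The math of this pass can be sketched in plain Go. This is an illustration of the operation's semantics, not the library's implementation, and `fcBackwardData` is a hypothetical helper name:

```go
package main

import "fmt"

// fcBackwardData sketches the backward data pass for a fully connected
// layer: dx[i] = beta*dx[i] + sum over outputs o of w[o][i]*dy[o].
// Sketch of the semantics only; not convnetgo's code.
func fcBackwardData(dx []float32, w [][]float32, dy []float32, beta float32) {
	for i := range dx {
		sum := dx[i] * beta
		for o := range w {
			sum += w[o][i] * dy[o]
		}
		dx[i] = sum
	}
}

func main() {
	dx := []float32{0, 0}
	w := [][]float32{{0.5, -1}, {2, 0}}
	dy := []float32{1, 1}
	fcBackwardData(dx, w, dy, 0) // beta=0: discard old dx values
	fmt.Println(dx)              // [2.5 -1]
}
```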

func FullyConnectedBackwardFilter

func FullyConnectedBackwardFilter(x, dw, db, dy *Tensor, alpha, beta float32) error

FullyConnectedBackwardFilter does the backward filter operation for a fully connected neural network. x is the input for this layer (in). dw stores the gradients for the weight tensor (out). db stores the gradients for the bias (out). dy holds the gradients found for the output of this layer (in).

Batches are done in parallel.

alpha has no functionality, but beta will multiply the previous values of dw and db by beta before the new gradients are added.

func FullyConnectedForward

func FullyConnectedForward(x, w, b, y *Tensor, alpha, beta float32) error

FullyConnectedForward does the forward operation for a fully connected neural network. x is the input tensor (in). w is the weight tensor (in). b is the bias tensor (in). y is the output tensor (out).

Batches are performed in parallel.

alpha and beta have no functionality at this point. Set alpha to 1 and beta to 0 to avoid any future issues.
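Per batch item, the operation is the usual affine map y = Wx + b. A plain-Go sketch of the semantics (not the library's code; `fcForward` is a hypothetical helper):

```go
package main

import "fmt"

// fcForward sketches what FullyConnectedForward computes for one batch
// item: y[o] = b[o] + sum over inputs i of w[o][i]*x[i].
// Illustration only; convnetgo runs this per batch item in parallel.
func fcForward(x []float32, w [][]float32, b []float32) []float32 {
	y := make([]float32, len(w))
	for o := range w {
		sum := b[o]
		for i, xi := range x {
			sum += w[o][i] * xi
		}
		y[o] = sum
	}
	return y
}

func main() {
	x := []float32{1, 2}
	w := [][]float32{{0.5, -1}, {2, 0}}
	b := []float32{0.5, -0.5}
	fmt.Println(fcForward(x, w, b)) // [-1 1.5]
}
```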

func L1L2Regularization

func L1L2Regularization(decay1, decay2 float32, dw, w *Tensor) (l1, l2 float32)

L1L2Regularization performs the regularization on dw. It should be run before a trainer like Adam or momentum.
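Assuming the usual elastic-net formulation (the exact formula convnetgo uses is not documented here, so this is an assumption), the pass might look like:

```go
package main

import (
	"fmt"
	"math"
)

// l1l2 sketches a typical elastic-net regularization pass: add
// decay1*sign(w) + decay2*w to each gradient and report the two loss
// terms. Whether convnetgo uses exactly these formulas is an assumption.
func l1l2(decay1, decay2 float32, dw, w []float32) (l1, l2 float32) {
	for i := range w {
		sign := float32(1)
		if w[i] < 0 {
			sign = -1
		}
		dw[i] += decay1*sign + decay2*w[i]
		l1 += decay1 * float32(math.Abs(float64(w[i])))
		l2 += decay2 * w[i] * w[i] / 2
	}
	return l1, l2
}

func main() {
	dw := []float32{0, 0}
	w := []float32{2, -2}
	l1, l2 := l1l2(0.1, 0.5, dw, w)
	fmt.Println(dw, l1, l2)
}
```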

func SoftMaxBackward

func SoftMaxBackward(dx, y, target *Tensor, alpha, beta float32) error

SoftMaxBackward does the softmax backward pass. y is the output of SoftMaxForward. dx stores the errors for the inputs of SoftMaxForward. target holds the target values the output is trying to reach.

alpha and beta behave like:

dx = dx*beta + alpha*Operation(y, target)
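`Operation` is not spelled out by the docs; for softmax paired with a cross-entropy loss the gradient is typically `y - target`, which is what this sketch assumes:

```go
package main

import "fmt"

// softmaxBackward sketches dx = dx*beta + alpha*(y-target), assuming
// Operation(y, target) is the usual softmax+cross-entropy gradient
// y - target. That assumption is ours, not stated by the library.
func softmaxBackward(dx, y, target []float32, alpha, beta float32) {
	for i := range dx {
		dx[i] = dx[i]*beta + alpha*(y[i]-target[i])
	}
}

func main() {
	dx := []float32{0, 0, 0}
	y := []float32{0.7, 0.2, 0.1}
	target := []float32{1, 0, 0}
	softmaxBackward(dx, y, target, 1, 0)
	fmt.Println(dx) // gradient pushes class 0 up, the others down
}
```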

func SoftMaxForward

func SoftMaxForward(x, y *Tensor, alpha, beta float32) (err error)

SoftMaxForward performs the softmax calculation. alpha and beta have no function right now.
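Per batch item, the calculation is the standard softmax y[i] = exp(x[i]) / Σ exp(x[j]). A plain-Go sketch, using the usual max-subtraction for numerical stability (an implementation detail we assume, not one the library confirms):

```go
package main

import (
	"fmt"
	"math"
)

// softmax sketches the per-item calculation SoftMaxForward performs.
// Subtracting the max before exponentiating avoids overflow without
// changing the result. Illustration only, not convnetgo's code.
func softmax(x []float32) []float32 {
	m := x[0]
	for _, v := range x {
		if v > m {
			m = v
		}
	}
	y := make([]float32, len(x))
	var sum float32
	for i, v := range x {
		y[i] = float32(math.Exp(float64(v - m)))
		sum += y[i]
	}
	for i := range y {
		y[i] /= sum
	}
	return y
}

func main() {
	fmt.Println(softmax([]float32{1, 1, 1, 1})) // uniform: [0.25 0.25 0.25 0.25]
}
```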

func SoftMaxLossandPercent

func SoftMaxLossandPercent(target, output *Tensor) (avgloss float32, avgpercent float32)

SoftMaxLossandPercent computes the softmax loss of the layer, averaged over all the indicators (anything greater than 0).

Types

type Adam

type Adam struct {
	// contains filtered or unexported fields
}

Adam is the adam trainer

func CreateAdamTrainer

func CreateAdamTrainer(options *AdamOptions) *Adam

CreateAdamTrainer creates an adam trainer. If options is nil then default values will be used.

func (*Adam) UpdateWeights

func (a *Adam) UpdateWeights(gsum, xsum, dw, w *Tensor, multithreaded bool) error

UpdateWeights updates the weights of w

dw is the accumulated gradients for the weights

gsum and xsum are accumulators used to smooth out the training. They should be the same size as w and dw.
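A textbook Adam step matches the roles described above, with gsum as the first-moment accumulator and xsum as the second-moment accumulator. Bias correction is omitted in this sketch, and whether convnetgo applies it is not stated, so treat this as an assumption:

```go
package main

import (
	"fmt"
	"math"
)

// adamStep sketches one textbook Adam update per element:
// first moment gsum, second moment xsum, then a scaled step on w.
// Bias correction is left out; this is not convnetgo's code.
func adamStep(gsum, xsum, dw, w []float32, rate, beta1, beta2, eps float32) {
	for i := range w {
		gsum[i] = beta1*gsum[i] + (1-beta1)*dw[i]
		xsum[i] = beta2*xsum[i] + (1-beta2)*dw[i]*dw[i]
		w[i] -= rate * gsum[i] / (float32(math.Sqrt(float64(xsum[i]))) + eps)
	}
}

func main() {
	gsum := []float32{0}
	xsum := []float32{0}
	dw := []float32{1}
	w := []float32{0.5}
	adamStep(gsum, xsum, dw, w, 0.001, 0.9, 0.999, 1e-8)
	fmt.Println(w[0]) // nudged slightly below 0.5
}
```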

type AdamOptions

type AdamOptions struct {
	Rate   float32
	Beta1  float32
	Beta2  float32
	Eps    float32
	Decay1 float32
	Decay2 float32
}

AdamOptions are options that can be passed to CreateAdamTrainer

type Convolution

type Convolution struct {
	// contains filtered or unexported fields
}

Convolution contains the parameters that are used to do a convolution

func CreateConvolution

func CreateConvolution() *Convolution

CreateConvolution creates a convolution algo

func (*Convolution) BackwardData

func (c *Convolution) BackwardData(dx, w, dy *Tensor, alpha, beta float32) (err error)

BackwardData does the backward data operation. dx stores the gradients for x from forward propagation (out). w is the weights (in). dy holds the gradients stored from the layer's output (in). alpha is for future work and has no function right now. beta will multiply the previous values of dx by beta before the new gradients are added.

func (*Convolution) BackwardFilter

func (c *Convolution) BackwardFilter(x, dw, db, dy *Tensor, alpha, beta float32) (err error)

BackwardFilter accumulates gradients from dy into dw and db. alpha is for future work. beta will multiply the previous values of dw by beta before the gradients are accumulated.

func (*Convolution) FindOutputDims

func (c *Convolution) FindOutputDims(x, w *Tensor) []int

FindOutputDims finds the output dims of the convolution
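Presumably this follows the standard convolution output-size formula; a sketch for one spatial dimension (`convOutDim` is a hypothetical helper, not part of the library):

```go
package main

import "fmt"

// convOutDim sketches the standard output-size formula for one spatial
// dimension of a convolution:
// out = (in + 2*pad - dilation*(k-1) - 1)/stride + 1
// We assume FindOutputDims uses this; the library does not show it.
func convOutDim(in, k, pad, stride, dilation int) int {
	return (in+2*pad-dilation*(k-1)-1)/stride + 1
}

func main() {
	// A 32x32 input with a 3x3 kernel, pad 1, stride 1, dilation 1
	// keeps its spatial size.
	fmt.Println(convOutDim(32, 3, 1, 1, 1)) // 32
}
```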

func (*Convolution) Forward

func (c *Convolution) Forward(x, w, wb, y *Tensor, alpha, beta float32) (err error)

Forward is the forward propagation. Calculations are stored in y. alpha and beta are for future work; they have no function right now.

func (*Convolution) Get

func (c *Convolution) Get() (padding, dilation, stride []int)

Get gets the convolution settings

func (*Convolution) Set

func (c *Convolution) Set(padding, stride, dilation []int, NHWC bool) error

Set sets the convolution settings

type LeakyRelu

type LeakyRelu struct {
	// contains filtered or unexported fields
}

LeakyRelu is a struct that holds the neg and pos coef

func CreateLeakyRelu

func CreateLeakyRelu(negcoef, poscoef float32) (l *LeakyRelu, err error)

CreateLeakyRelu creates a leaky relu

func (*LeakyRelu) Backward

func (l *LeakyRelu) Backward(x, dx, dy *Tensor, alpha, beta float32) (err error)

Backward does the backward leaky relu activation. alpha and beta have no function right now; set them to the defaults alpha=1, beta=0.

func (*LeakyRelu) Forward

func (l *LeakyRelu) Forward(x, y *Tensor, alpha, beta float32) (err error)

Forward does the leaky relu activation. alpha and beta have no function right now; set them to the defaults alpha=1, beta=0.
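The element-wise rule, sketched in plain Go (`leakyRelu` is a hypothetical helper, not the library's API; the assumption is that poscoef scales positive inputs and negcoef scales the rest):

```go
package main

import "fmt"

// leakyRelu sketches the element-wise rule a leaky relu applies:
// y = poscoef*x for x > 0, y = negcoef*x otherwise.
// Illustration of the semantics, not convnetgo's implementation.
func leakyRelu(x []float32, negcoef, poscoef float32) []float32 {
	y := make([]float32, len(x))
	for i, v := range x {
		if v > 0 {
			y[i] = poscoef * v
		} else {
			y[i] = negcoef * v
		}
	}
	return y
}

func main() {
	fmt.Println(leakyRelu([]float32{-2, 0, 2}, 0.01, 1))
}
```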

func (*LeakyRelu) Get

func (l *LeakyRelu) Get() (negcoef, poscoef float32)

Get gets the coefficients.

func (*LeakyRelu) Set

func (l *LeakyRelu) Set(negcoef, poscoef float32) (err error)

Set sets the coefficients.

type Relu

type Relu struct {
	// contains filtered or unexported fields
}

Relu holds the methods to do Relu activation

func CreateRelu

func CreateRelu(ceiling float32) (l *Relu)

CreateRelu will create the relu function. If ceiling <= 0 then there won't be a ceiling.
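The resulting activation, sketched element-wise (our reading of the ceiling behavior described above, not the library's code):

```go
package main

import "fmt"

// relu sketches the element-wise rule: y = max(x, 0), clipped at the
// ceiling when ceiling > 0; ceiling <= 0 means no ceiling.
// Illustration only; `relu` here is a hypothetical helper.
func relu(x, ceiling float32) float32 {
	if x < 0 {
		return 0
	}
	if ceiling > 0 && x > ceiling {
		return ceiling
	}
	return x
}

func main() {
	fmt.Println(relu(-3, 6), relu(4, 6), relu(9, 6)) // 0 4 6
}
```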

func (*Relu) Backward

func (r *Relu) Backward(x, dx, dy *Tensor, alpha, beta float32) (err error)

Backward does the backward operation. alpha and beta have no function right now.

func (*Relu) Forward

func (r *Relu) Forward(x, y *Tensor, alpha, beta float32) (err error)

Forward does the forward operation. alpha and beta have no function right now.

func (*Relu) Get

func (r *Relu) Get() (ceiling float32)

Get gets the ceiling

func (*Relu) Set

func (r *Relu) Set(ceiling float32)

Set sets the ceiling

type Tensor

type Tensor struct {
	// contains filtered or unexported fields
}

Tensor is the basic data structure of convolutional neural networks. It can be used for regular neural networks too.

func CreateRandomizedWeightsTensor

func CreateRandomizedWeightsTensor(wdims, xdims []int, NHWC bool) (*Tensor, error)

CreateRandomizedWeightsTensor creates a tensor with randomized weights.

func CreateTensor

func CreateTensor(dims []int, NHWC bool) (*Tensor, error)

CreateTensor creates a tensor according to the values passed. If len(dims) != 4 an error will be returned. Only float32 is currently available.

func CreateTensorEx

func CreateTensorEx(dims []int, data []float32, NHWC bool) (*Tensor, error)

CreateTensorEx creates a tensor and copies data into it. If len(data) != (*Tensor).Volume() an error will be returned. If len(dims) != 4 an error will be returned.

func (*Tensor) Add

func (t *Tensor) Add(A, B *Tensor, alpha1, alpha2, beta float32) error

Add does t[i] = t[i]*beta + A[i]*alpha1 + B[i]*alpha2

func (*Tensor) AddAll

func (t *Tensor) AddAll(val float32)

AddAll adds val to all elements in t

func (*Tensor) AddAtoT

func (t *Tensor) AddAtoT(A *Tensor, alpha1 float32) error

AddAtoT does t = t + (alpha1 * A)

func (*Tensor) Average

func (t *Tensor) Average() float32

Average returns the average of all the elements in t

func (*Tensor) Dims

func (t *Tensor) Dims() (dims []int)

Dims returns a copy of tensor dims

func (*Tensor) Div

func (t *Tensor) Div(A, B *Tensor, alpha1, alpha2, beta float32) error

Div does t[i] = t[i]*beta + (A[i]*alpha1) / (B[i]*alpha2)

func (*Tensor) Get

func (t *Tensor) Get(dimlocation []int) (value float32)

Get returns the value at dim location

func (*Tensor) LoadFromSlice

func (t *Tensor) LoadFromSlice(values []float32) (err error)

LoadFromSlice loads values into the tensor, starting at zero and continuing for the length of values. If values is longer than the volume of the tensor, an error will be returned.

func (*Tensor) Mult

func (t *Tensor) Mult(A, B *Tensor, alpha1, alpha2, beta float32) error

Mult does t[i] = t[i]*beta + A[i]*alpha1 * B[i]*alpha2

func (*Tensor) MultAll

func (t *Tensor) MultAll(val float32)

MultAll multiplies all elements in t by val

func (*Tensor) Set

func (t *Tensor) Set(value float32, dimlocation []int)

Set sets the value in the tensor at dim location

func (*Tensor) SetAll

func (t *Tensor) SetAll(value float32)

SetAll sets all the elements in t to value.

func (*Tensor) Stride

func (t *Tensor) Stride() (stride []int)

Stride returns a copy of tensor stride

func (*Tensor) Volume

func (t *Tensor) Volume() int

Volume returns volume of tensor (num of elements)

func (*Tensor) ZeroClone

func (t *Tensor) ZeroClone() (*Tensor, error)

ZeroClone returns a zeroed out clone of t

