convnet

package module
v1.1.2 (not the latest version of its module)
Published: May 6, 2021 License: MIT Imports: 7 Imported by: 0

README

ConvNetJS

ConvNetJS is a JavaScript implementation of neural networks, together with nice browser-based demos. It currently supports:

  • Common Neural Network modules (fully connected layers, non-linearities)
  • Classification (SVM/Softmax) and Regression (L2) cost functions
  • Ability to specify and train Convolutional Networks that process images
  • An experimental Reinforcement Learning module, based on Deep Q Learning

For much more information, see the main page at convnetjs.com

Note: I am not actively maintaining ConvNetJS anymore because I simply don't have time. I think the npm repo might not work at this point.

Example Code

Here's a minimal example of defining a 2-layer neural network and training it on a single data point:

// specifies a 2-layer neural network with one hidden layer of 20 neurons
var layer_defs = [];
// input layer declares size of input. here: 2-D data
// ConvNetJS works on 3-Dimensional volumes (sx, sy, depth), but if you're not dealing with images
// then the first two dimensions (sx, sy) will always be kept at size 1
layer_defs.push({type:'input', out_sx:1, out_sy:1, out_depth:2});
// declare 20 neurons, followed by ReLU (rectified linear unit non-linearity)
layer_defs.push({type:'fc', num_neurons:20, activation:'relu'}); 
// declare the linear classifier on top of the previous hidden layer
layer_defs.push({type:'softmax', num_classes:10});

var net = new convnetjs.Net();
net.makeLayers(layer_defs);

// forward a random data point through the network
var x = new convnetjs.Vol([0.3, -0.5]);
var prob = net.forward(x); 

// prob is a Vol. Vols have a field .w that stores the raw data, and .dw that stores gradients
console.log('probability that x is class 0: ' + prob.w[0]); // prints 0.50101

var trainer = new convnetjs.SGDTrainer(net, {learning_rate:0.01, l2_decay:0.001});
trainer.train(x, 0); // train the network, specifying that x is class zero

var prob2 = net.forward(x);
console.log('probability that x is class 0: ' + prob2.w[0]);
// now prints 0.50374, slightly higher than the previous 0.50101: the network's
// weights have been adjusted by the Trainer to give a higher probability to
// the class we trained the network with (zero)

And here is a small convolutional neural network, if you wish to predict on images:

var layer_defs = [];
layer_defs.push({type:'input', out_sx:32, out_sy:32, out_depth:3}); // declare size of input
// output Vol is of size 32x32x3 here
layer_defs.push({type:'conv', sx:5, filters:16, stride:1, pad:2, activation:'relu'});
// the layer will perform convolution with 16 kernels, each of size 5x5.
// the input will be padded with 2 pixels on all sides to make the output Vol of the same size
// output Vol will thus be 32x32x16 at this point
layer_defs.push({type:'pool', sx:2, stride:2});
// output Vol is of size 16x16x16 here
layer_defs.push({type:'conv', sx:5, filters:20, stride:1, pad:2, activation:'relu'});
// output Vol is of size 16x16x20 here
layer_defs.push({type:'pool', sx:2, stride:2});
// output Vol is of size 8x8x20 here
layer_defs.push({type:'conv', sx:5, filters:20, stride:1, pad:2, activation:'relu'});
// output Vol is of size 8x8x20 here
layer_defs.push({type:'pool', sx:2, stride:2});
// output Vol is of size 4x4x20 here
layer_defs.push({type:'softmax', num_classes:10});
// output Vol is of size 1x1x10 here

var net = new convnetjs.Net();
net.makeLayers(layer_defs);

// helpful utility for converting images into Vols is included
var x = convnetjs.img_to_vol(document.getElementById('some_image'));
var output_probabilities_vol = net.forward(x);

Getting Started

A Getting Started tutorial is available on the main page.

The full Documentation can also be found there.

See the releases page for this project to get the minified, compiled library. A direct link is also available below for convenience (but please host your own copy).

Compiling the library from src/ to build/

If you would like to add features to the library, you will have to change the code in src/ and then compile the library into the build/ directory. The compilation script simply concatenates files in src/ and then minifies the result.

The compilation is done using an ant task: it compiles build/convnet.js by concatenating the source files in src/ and then minifies the result into build/convnet-min.js. Make sure you have ant installed (on Ubuntu you can simply run sudo apt-get install ant), then cd into the compile/ directory and run:

$ ant -lib yuicompressor-2.4.8.jar -f build.xml

The output files will be in build/

Use in Node

The library is also available for Node.js:

  1. Install it: $ npm install convnetjs
  2. Use it: var convnetjs = require("convnetjs");

License

MIT

Documentation

Index

Constants

This section is empty.

Variables

var DefaultTrainerOptions = TrainerOptions{
	LearningRate: 0.01,
	L1Decay:      0.0,
	L2Decay:      0.0,
	BatchSize:    1,
	Method:       MethodSGD,

	Momentum: 0.9,
	Ro:       0.95,
	Eps:      1e-8,
	Beta1:    0.9,
	Beta2:    0.999,
}
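
For illustration, here is a hedged sketch of copying these defaults and switching the optimizer; the import path is a placeholder (the real module path is not shown on this page), and only identifiers documented here are used:

package main

import (
	convnet "example.com/convnet" // placeholder import path; substitute the real module path
)

func main() {
	opts := convnet.DefaultTrainerOptions // value copy of the defaults
	opts.Method = convnet.MethodAdam      // switch the update rule from SGD to Adam
	opts.LearningRate = 0.001
	opts.BatchSize = 16

	var net convnet.Net
	// ... populate the net with net.MakeLayers(defs, rng) as shown under LayerDef below ...
	trainer := convnet.NewTrainer(&net, opts)
	_ = trainer
}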

Functions

This section is empty.

Types

type ConvLayer

type ConvLayer struct {
	// contains filtered or unexported fields
}

func (*ConvLayer) Backward

func (l *ConvLayer) Backward()

func (*ConvLayer) Forward

func (l *ConvLayer) Forward(v *Vol, isTraining bool) *Vol

func (*ConvLayer) MarshalJSON

func (l *ConvLayer) MarshalJSON() ([]byte, error)

func (*ConvLayer) OutDepth

func (l *ConvLayer) OutDepth() int

func (*ConvLayer) OutSx

func (l *ConvLayer) OutSx() int

func (*ConvLayer) OutSy

func (l *ConvLayer) OutSy() int

func (*ConvLayer) ParamsAndGrads

func (l *ConvLayer) ParamsAndGrads() []ParamsAndGrads

func (*ConvLayer) UnmarshalJSON

func (l *ConvLayer) UnmarshalJSON(b []byte) error

type DropoutLayer

type DropoutLayer struct {
	// contains filtered or unexported fields
}

An inefficient dropout layer. Note this is not the most efficient implementation, since the layer before computed all these activations and now we're just going to drop them :( The same goes for the backward pass. Also, if we wanted to be efficient at test time we could equivalently be clever and upscale during training and copy pointers during testing.

func (*DropoutLayer) Backward

func (l *DropoutLayer) Backward()

func (*DropoutLayer) Forward

func (l *DropoutLayer) Forward(v *Vol, isTraining bool) *Vol

func (*DropoutLayer) MarshalJSON

func (l *DropoutLayer) MarshalJSON() ([]byte, error)

func (*DropoutLayer) OutDepth

func (l *DropoutLayer) OutDepth() int

func (*DropoutLayer) OutSx

func (l *DropoutLayer) OutSx() int

func (*DropoutLayer) OutSy

func (l *DropoutLayer) OutSy() int

func (*DropoutLayer) ParamsAndGrads

func (l *DropoutLayer) ParamsAndGrads() []ParamsAndGrads

func (*DropoutLayer) UnmarshalJSON

func (l *DropoutLayer) UnmarshalJSON(b []byte) error

type FullyConnLayer

type FullyConnLayer struct {
	// contains filtered or unexported fields
}

func (*FullyConnLayer) Backward

func (l *FullyConnLayer) Backward()

func (*FullyConnLayer) Forward

func (l *FullyConnLayer) Forward(v *Vol, isTraining bool) *Vol

func (*FullyConnLayer) MarshalJSON

func (l *FullyConnLayer) MarshalJSON() ([]byte, error)

func (*FullyConnLayer) OutDepth

func (l *FullyConnLayer) OutDepth() int

func (*FullyConnLayer) OutSx

func (l *FullyConnLayer) OutSx() int

func (*FullyConnLayer) OutSy

func (l *FullyConnLayer) OutSy() int

func (*FullyConnLayer) ParamsAndGrads

func (l *FullyConnLayer) ParamsAndGrads() []ParamsAndGrads

func (*FullyConnLayer) UnmarshalJSON

func (l *FullyConnLayer) UnmarshalJSON(b []byte) error

type InputLayer

type InputLayer struct {
	// contains filtered or unexported fields
}

func (*InputLayer) Backward

func (l *InputLayer) Backward()

func (*InputLayer) Forward

func (l *InputLayer) Forward(v *Vol, isTraining bool) *Vol

func (*InputLayer) MarshalJSON

func (l *InputLayer) MarshalJSON() ([]byte, error)

func (*InputLayer) OutDepth

func (l *InputLayer) OutDepth() int

func (*InputLayer) OutSx

func (l *InputLayer) OutSx() int

func (*InputLayer) OutSy

func (l *InputLayer) OutSy() int

func (*InputLayer) ParamsAndGrads

func (l *InputLayer) ParamsAndGrads() []ParamsAndGrads

func (*InputLayer) UnmarshalJSON

func (l *InputLayer) UnmarshalJSON(b []byte) error

type Layer

type Layer interface {
	OutSx() int
	OutSy() int
	OutDepth() int

	Forward(v *Vol, isTraining bool) *Vol
	Backward()
	ParamsAndGrads() []ParamsAndGrads

	json.Marshaler
	json.Unmarshaler
	// contains filtered or unexported methods
}

type LayerDef

type LayerDef struct {
	Type           LayerType `json:"type"`
	NumNeurons     int       `json:"num_neurons"`
	NumClasses     int       `json:"num_classes"`
	BiasPref       float64   `json:"bias_pref"`
	BiasPrefZero   bool      `json:"-"`
	Activation     LayerType `json:"activation"`
	GroupSize      int       `json:"group_size"`
	GroupSizeZero  bool      `json:"-"`
	DropProb       float64   `json:"drop_prob"`
	DropProbZero   bool      `json:"-"`
	InSx           int       `json:"in_sx"`
	InSy           int       `json:"in_sy"`
	InDepth        int       `json:"in_depth"`
	OutSx          int       `json:"out_sx"`
	OutSy          int       `json:"out_sy"`
	OutDepth       int       `json:"out_depth"`
	L1DecayMul     float64   `json:"l1_decay_mul"`
	L1DecayMulZero bool      `json:"-"`
	L2DecayMul     float64   `json:"l2_decay_mul"`
	L2DecayMulZero bool      `json:"-"`
	Sx             int       `json:"sx"`
	SxZero         bool      `json:"-"`
	Sy             int       `json:"sy"`
	SyZero         bool      `json:"-"`
	Pad            int       `json:"pad"`
	PadZero        bool      `json:"-"`
	Stride         int       `json:"stride"`
	StrideZero     bool      `json:"-"`
	Filters        int       `json:"filters"`
	K              float64   `json:"k"`
	N              int       `json:"n"`
	Alpha          float64   `json:"alpha"`
	Beta           float64   `json:"beta"`
}
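
As an illustration of LayerDef in use, here is a hedged sketch translating the README's two-layer JavaScript example to this API; the import path is a placeholder, and only fields and constants documented on this page are used:

package main

import (
	"fmt"
	"math/rand"

	convnet "example.com/convnet" // placeholder import path; substitute the real module path
)

func main() {
	defs := []convnet.LayerDef{
		{Type: convnet.LayerInput, OutSx: 1, OutSy: 1, OutDepth: 2},
		{Type: convnet.LayerFC, NumNeurons: 20, Activation: convnet.LayerRelu},
		{Type: convnet.LayerSoftmax, NumClasses: 10},
	}
	var net convnet.Net
	net.MakeLayers(defs, rand.New(rand.NewSource(1))) // the rand.Rand seeds weight init

	x := convnet.NewVol1D([]float64{0.3, -0.5})
	prob := net.Forward(x, false) // isTraining=false: prediction mode
	fmt.Println("p(class 0):", prob.W[0])
}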

type LayerType

type LayerType int
const (
	LayerInput      LayerType = iota + 1 // input
	LayerRelu                            // relu
	LayerSigmoid                         // sigmoid
	LayerTanh                            // tanh
	LayerDropout                         // dropout
	LayerConv                            // conv
	LayerPool                            // pool
	LayerLRN                             // lrn
	LayerSoftmax                         // softmax
	LayerRegression                      // regression
	LayerFC                              // fc
	LayerMaxout                          // maxout
	LayerSVM                             // svm
)

func (LayerType) String

func (i LayerType) String() string

type LocalResponseNormalizationLayer

type LocalResponseNormalizationLayer struct {
	// contains filtered or unexported fields
}

Local Response Normalization within a window, along the depth of the volume

func (*LocalResponseNormalizationLayer) Backward

func (l *LocalResponseNormalizationLayer) Backward()

func (*LocalResponseNormalizationLayer) Forward

func (l *LocalResponseNormalizationLayer) Forward(v *Vol, isTraining bool) *Vol

func (*LocalResponseNormalizationLayer) MarshalJSON

func (l *LocalResponseNormalizationLayer) MarshalJSON() ([]byte, error)

func (*LocalResponseNormalizationLayer) OutDepth

func (l *LocalResponseNormalizationLayer) OutDepth() int

func (*LocalResponseNormalizationLayer) OutSx

func (l *LocalResponseNormalizationLayer) OutSx() int

func (*LocalResponseNormalizationLayer) OutSy

func (l *LocalResponseNormalizationLayer) OutSy() int

func (*LocalResponseNormalizationLayer) ParamsAndGrads

func (l *LocalResponseNormalizationLayer) ParamsAndGrads() []ParamsAndGrads

func (*LocalResponseNormalizationLayer) UnmarshalJSON

func (l *LocalResponseNormalizationLayer) UnmarshalJSON(b []byte) error

type LossData

type LossData struct {
	Dim int
	Val float64
}

type LossLayer

type LossLayer interface {
	Layer
	BackwardLoss(y LossData) float64
}

type MagicNet

type MagicNet struct{}

MagicNet takes data (a list of convnet.Vol) and labels, which for now are assumed to be class indices 0..K. MagicNet then:

  • creates data folds for cross-validation
  • samples candidate networks
  • evaluates candidate networks on all data folds
  • produces predictions by model-averaging the best networks

type MaxoutLayer

type MaxoutLayer struct {
	// contains filtered or unexported fields
}

Implements the Maxout nonlinearity that computes x -> max(x), where x is a vector of size group_size. Ideally, of course, the input size should be exactly divisible by group_size.

func (*MaxoutLayer) Backward

func (l *MaxoutLayer) Backward()

func (*MaxoutLayer) Forward

func (l *MaxoutLayer) Forward(v *Vol, isTraining bool) *Vol

func (*MaxoutLayer) MarshalJSON

func (l *MaxoutLayer) MarshalJSON() ([]byte, error)

func (*MaxoutLayer) OutDepth

func (l *MaxoutLayer) OutDepth() int

func (*MaxoutLayer) OutSx

func (l *MaxoutLayer) OutSx() int

func (*MaxoutLayer) OutSy

func (l *MaxoutLayer) OutSy() int

func (*MaxoutLayer) ParamsAndGrads

func (l *MaxoutLayer) ParamsAndGrads() []ParamsAndGrads

func (*MaxoutLayer) UnmarshalJSON

func (l *MaxoutLayer) UnmarshalJSON(b []byte) error

type Net

type Net struct {
	Layers []Layer `json:"layers"`
}

Net manages a set of layers. Constraints for now: a simple linear order of layers, with the first layer an input layer and the last layer a cost layer.

func (*Net) Backward

func (n *Net) Backward(y LossData) float64

backprop: compute gradients wrt all parameters

func (*Net) CostLoss

func (n *Net) CostLoss(v *Vol, y LossData) float64

func (*Net) Forward

func (n *Net) Forward(v *Vol, isTraining bool) *Vol

forward prop the network. The trainer passes isTraining=true; when calling this function from outside (not from the trainer), pass isTraining=false for prediction mode

func (*Net) MakeLayers

func (n *Net) MakeLayers(defs []LayerDef, r *rand.Rand)

takes a list of layer definitions and creates the network layer objects

func (*Net) ParamsAndGrads

func (n *Net) ParamsAndGrads() []ParamsAndGrads

accumulate parameters and gradients for the entire network

func (*Net) Prediction

func (n *Net) Prediction() int

this is a convenience function for returning the argmax prediction, assuming the last layer of the net is a softmax
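
Tying Forward and Prediction together, a hedged fragment (reusing the import convention from the LayerDef sketch above):

// classify runs x through the net in prediction mode and returns the
// argmax class plus its probability; it assumes, as Prediction does,
// that the net's last layer is a softmax.
func classify(net *convnet.Net, x *convnet.Vol) (int, float64) {
	probs := net.Forward(x, false) // isTraining=false: prediction mode
	k := net.Prediction()          // argmax over the softmax output
	return k, probs.W[k]
}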

func (*Net) UnmarshalJSON

func (n *Net) UnmarshalJSON(b []byte) error

type ParamsAndGrads

type ParamsAndGrads struct {
	Params     []float64
	Grads      []float64
	L1DecayMul float64
	L2DecayMul float64
}

type PoolLayer

type PoolLayer struct {
	// contains filtered or unexported fields
}

func (*PoolLayer) Backward

func (l *PoolLayer) Backward()

func (*PoolLayer) Forward

func (l *PoolLayer) Forward(v *Vol, isTraining bool) *Vol

func (*PoolLayer) MarshalJSON

func (l *PoolLayer) MarshalJSON() ([]byte, error)

func (*PoolLayer) OutDepth

func (l *PoolLayer) OutDepth() int

func (*PoolLayer) OutSx

func (l *PoolLayer) OutSx() int

func (*PoolLayer) OutSy

func (l *PoolLayer) OutSy() int

func (*PoolLayer) ParamsAndGrads

func (l *PoolLayer) ParamsAndGrads() []ParamsAndGrads

func (*PoolLayer) UnmarshalJSON

func (l *PoolLayer) UnmarshalJSON(b []byte) error

type RegressionLayer

type RegressionLayer struct {
	// contains filtered or unexported fields
}

implements an L2 regression cost layer, so penalizes \sum_i(||x_i - y_i||^2), where x is its input and y is the user-provided array of "correct" values.

func (*RegressionLayer) Backward

func (l *RegressionLayer) Backward()

func (*RegressionLayer) BackwardLoss

func (l *RegressionLayer) BackwardLoss(y LossData) float64
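
LossData appears to mirror convnetjs's {dim: d, val: v} regression target. Under that assumption (not confirmed on this page), a single regression update with an existing trainer and input x might look like:

// Drive output dimension 0 toward the target value 0.7.
// Assumption: LossData.Dim selects the output dimension and
// LossData.Val is its target, as in convnetjs's {dim, val} form.
res := trainer.Train(x, convnet.LossData{Dim: 0, Val: 0.7})
fmt.Println("regression loss:", res.Loss)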

func (*RegressionLayer) Forward

func (l *RegressionLayer) Forward(v *Vol, isTraining bool) *Vol

func (*RegressionLayer) MarshalJSON

func (l *RegressionLayer) MarshalJSON() ([]byte, error)

func (*RegressionLayer) OutDepth

func (l *RegressionLayer) OutDepth() int

func (*RegressionLayer) OutSx

func (l *RegressionLayer) OutSx() int

func (*RegressionLayer) OutSy

func (l *RegressionLayer) OutSy() int

func (*RegressionLayer) ParamsAndGrads

func (l *RegressionLayer) ParamsAndGrads() []ParamsAndGrads

func (*RegressionLayer) UnmarshalJSON

func (l *RegressionLayer) UnmarshalJSON(b []byte) error

type ReluLayer

type ReluLayer struct {
	// contains filtered or unexported fields
}

Implements the ReLU nonlinearity elementwise, x -> max(0, x); the output is in [0, inf)

func (*ReluLayer) Backward

func (l *ReluLayer) Backward()

func (*ReluLayer) Forward

func (l *ReluLayer) Forward(v *Vol, isTraining bool) *Vol

func (*ReluLayer) MarshalJSON

func (l *ReluLayer) MarshalJSON() ([]byte, error)

func (*ReluLayer) OutDepth

func (l *ReluLayer) OutDepth() int

func (*ReluLayer) OutSx

func (l *ReluLayer) OutSx() int

func (*ReluLayer) OutSy

func (l *ReluLayer) OutSy() int

func (*ReluLayer) ParamsAndGrads

func (l *ReluLayer) ParamsAndGrads() []ParamsAndGrads

func (*ReluLayer) UnmarshalJSON

func (l *ReluLayer) UnmarshalJSON(b []byte) error

type SVMLayer

type SVMLayer struct {
	// contains filtered or unexported fields
}

func (*SVMLayer) Backward

func (l *SVMLayer) Backward()

func (*SVMLayer) BackwardLoss

func (l *SVMLayer) BackwardLoss(y LossData) float64

func (*SVMLayer) Forward

func (l *SVMLayer) Forward(v *Vol, isTraining bool) *Vol

func (*SVMLayer) MarshalJSON

func (l *SVMLayer) MarshalJSON() ([]byte, error)

func (*SVMLayer) OutDepth

func (l *SVMLayer) OutDepth() int

func (*SVMLayer) OutSx

func (l *SVMLayer) OutSx() int

func (*SVMLayer) OutSy

func (l *SVMLayer) OutSy() int

func (*SVMLayer) ParamsAndGrads

func (l *SVMLayer) ParamsAndGrads() []ParamsAndGrads

func (*SVMLayer) UnmarshalJSON

func (l *SVMLayer) UnmarshalJSON(b []byte) error

type SigmoidLayer

type SigmoidLayer struct {
	// contains filtered or unexported fields
}

Implements the sigmoid nonlinearity elementwise, x -> 1/(1+e^(-x)), so the output is between 0 and 1.

func (*SigmoidLayer) Backward

func (l *SigmoidLayer) Backward()

func (*SigmoidLayer) Forward

func (l *SigmoidLayer) Forward(v *Vol, isTraining bool) *Vol

func (*SigmoidLayer) MarshalJSON

func (l *SigmoidLayer) MarshalJSON() ([]byte, error)

func (*SigmoidLayer) OutDepth

func (l *SigmoidLayer) OutDepth() int

func (*SigmoidLayer) OutSx

func (l *SigmoidLayer) OutSx() int

func (*SigmoidLayer) OutSy

func (l *SigmoidLayer) OutSy() int

func (*SigmoidLayer) ParamsAndGrads

func (l *SigmoidLayer) ParamsAndGrads() []ParamsAndGrads

func (*SigmoidLayer) UnmarshalJSON

func (l *SigmoidLayer) UnmarshalJSON(b []byte) error

type SoftmaxLayer

type SoftmaxLayer struct {
	// contains filtered or unexported fields
}

This is a classifier with N discrete classes from 0 to N-1. It takes N incoming numbers and computes the softmax function (exponentiate and normalize so the outputs sum to 1, as probabilities should).

func (*SoftmaxLayer) Backward

func (l *SoftmaxLayer) Backward()

func (*SoftmaxLayer) BackwardLoss

func (l *SoftmaxLayer) BackwardLoss(y LossData) float64

func (*SoftmaxLayer) Forward

func (l *SoftmaxLayer) Forward(v *Vol, isTraining bool) *Vol

func (*SoftmaxLayer) MarshalJSON

func (l *SoftmaxLayer) MarshalJSON() ([]byte, error)

func (*SoftmaxLayer) OutDepth

func (l *SoftmaxLayer) OutDepth() int

func (*SoftmaxLayer) OutSx

func (l *SoftmaxLayer) OutSx() int

func (*SoftmaxLayer) OutSy

func (l *SoftmaxLayer) OutSy() int

func (*SoftmaxLayer) ParamsAndGrads

func (l *SoftmaxLayer) ParamsAndGrads() []ParamsAndGrads

func (*SoftmaxLayer) UnmarshalJSON

func (l *SoftmaxLayer) UnmarshalJSON(b []byte) error

type TanhLayer

type TanhLayer struct {
	// contains filtered or unexported fields
}

Implements the tanh nonlinearity elementwise, x -> tanh(x), so the output is between -1 and 1.

func (*TanhLayer) Backward

func (l *TanhLayer) Backward()

func (*TanhLayer) Forward

func (l *TanhLayer) Forward(v *Vol, isTraining bool) *Vol

func (*TanhLayer) MarshalJSON

func (l *TanhLayer) MarshalJSON() ([]byte, error)

func (*TanhLayer) OutDepth

func (l *TanhLayer) OutDepth() int

func (*TanhLayer) OutSx

func (l *TanhLayer) OutSx() int

func (*TanhLayer) OutSy

func (l *TanhLayer) OutSy() int

func (*TanhLayer) ParamsAndGrads

func (l *TanhLayer) ParamsAndGrads() []ParamsAndGrads

func (*TanhLayer) UnmarshalJSON

func (l *TanhLayer) UnmarshalJSON(b []byte) error

type Trainer

type Trainer struct {
	Net *Net
	TrainerOptions
	// contains filtered or unexported fields
}

func NewTrainer

func NewTrainer(net *Net, opts TrainerOptions) *Trainer

func (*Trainer) Train

func (t *Trainer) Train(x *Vol, y LossData) TrainingResult
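
Putting the pieces together, a hedged end-to-end sketch mirroring the README's trainer.train(x, 0); the import path is a placeholder, and it assumes LossData.Dim carries the class index for a softmax loss:

package main

import (
	"fmt"
	"math/rand"

	convnet "example.com/convnet" // placeholder import path; substitute the real module path
)

func main() {
	defs := []convnet.LayerDef{
		{Type: convnet.LayerInput, OutSx: 1, OutSy: 1, OutDepth: 2},
		{Type: convnet.LayerFC, NumNeurons: 20, Activation: convnet.LayerRelu},
		{Type: convnet.LayerSoftmax, NumClasses: 10},
	}
	var net convnet.Net
	net.MakeLayers(defs, rand.New(rand.NewSource(1)))

	opts := convnet.DefaultTrainerOptions
	opts.L2Decay = 0.001 // match the README's l2_decay
	trainer := convnet.NewTrainer(&net, opts)

	x := convnet.NewVol1D([]float64{0.3, -0.5})
	for i := 0; i < 10; i++ {
		// Assumption: Dim holds the class index for classification losses.
		res := trainer.Train(x, convnet.LossData{Dim: 0})
		fmt.Printf("step %d loss %.4f\n", i, res.Loss)
	}
	fmt.Println("p(class 0):", net.Forward(x, false).W[0])
}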

type TrainerMethod

type TrainerMethod int
const (
	MethodSGD        TrainerMethod = iota // sgd
	MethodAdam                            // adam
	MethodADAGrad                         // adagrad
	MethodADADelta                        // adadelta
	MethodWindowGrad                      // windowgrad
	MethodNetsterov                       // netsterov
)

func (TrainerMethod) String

func (i TrainerMethod) String() string

type TrainerOptions

type TrainerOptions struct {
	LearningRate float64
	L1Decay      float64
	L2Decay      float64
	BatchSize    int
	Method       TrainerMethod

	Momentum float64
	Ro       float64 // used in adadelta
	Eps      float64 // used in adam or adadelta
	Beta1    float64 // used in adam
	Beta2    float64 // used in adam
}

type TrainingResult

type TrainingResult struct {
	Loss        float64
	CostLoss    float64
	L1DecayLoss float64
	L2DecayLoss float64
}

type Vol

type Vol struct {
	Sx    int       `json:"sx"`
	Sy    int       `json:"sy"`
	Depth int       `json:"depth"`
	W     []float64 `json:"w"`
	Dw    []float64 `json:"-"`
}

Vol is the basic building block of all data in a net. It is essentially just a 3D volume of numbers, with a width (sx), height (sy), and depth (depth). It is used to hold data for all filters, all volumes, and all weights, and it also stores all gradients w.r.t. the data. NewVol initializes the volume with the constant c; NewVolRand fills the Vol with random numbers.
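
A short hedged sketch of constructing and indexing Vols with the constructors below (same import convention as earlier; the 1x1xlen shape of NewVol1D is an assumption carried over from the convnetjs constructor):

v := convnet.NewVol(3, 3, 2, 0.0) // 3x3x2 volume initialized to the constant 0
v.Set(1, 2, 0, 5.0)               // (x, y, d) indexing into W
fmt.Println(v.Get(1, 2, 0))       // 5
x := convnet.NewVol1D([]float64{0.3, -0.5}) // presumably a 1x1x2 volume
fmt.Println(x.Sx, x.Sy, x.Depth)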

func ImgToVol

func ImgToVol(img image.Image, convertGrayscale bool) *Vol

returns a Vol of size (W, H, 4), where 4 is for the RGBA channels
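
A hedged sketch of feeding a decoded image into ImgToVol (placeholder import path; the file name is illustrative):

package main

import (
	"fmt"
	"image/png"
	"os"

	convnet "example.com/convnet" // placeholder import path; substitute the real module path
)

func main() {
	f, err := os.Open("some_image.png") // illustrative file name
	if err != nil {
		panic(err)
	}
	defer f.Close()
	img, err := png.Decode(f)
	if err != nil {
		panic(err)
	}
	v := convnet.ImgToVol(img, false) // false: keep color, giving a (W, H, 4) RGBA Vol
	fmt.Println(v.Sx, v.Sy, v.Depth)
}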

func NewVol

func NewVol(sx, sy, depth int, c float64) *Vol

func NewVol1D

func NewVol1D(w []float64) *Vol

func NewVolRand

func NewVolRand(sx, sy, depth int, r *rand.Rand) *Vol

func (*Vol) Add

func (v *Vol) Add(x, y, d int, value float64)

func (*Vol) AddFrom

func (v *Vol) AddFrom(v2 *Vol)

func (*Vol) AddFromScaled

func (v *Vol) AddFromScaled(v2 *Vol, a float64)

func (*Vol) AddGrad

func (v *Vol) AddGrad(x, y, d int, value float64)

func (*Vol) Augment

func (v *Vol) Augment(crop, dx, dy int, fliplr bool) *Vol

Volume utility intended for use with data augmentation. crop is the size of the output; dx and dy are the offsets of the shift w.r.t. the incoming volume; fliplr says whether we also want to flip left<->right.

Note: When converting from convnetjs, dx and dy default to random numbers in [0, v.Sx - crop) and [0, v.Sy - crop), respectively.
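
For example, a hedged fragment random-cropping a 32x32 input down to 24x24 with a horizontal flip, drawing dx and dy ourselves since this port takes them explicitly (v is a 32x32xD *convnet.Vol; rand is math/rand):

rng := rand.New(rand.NewSource(1))
dx := rng.Intn(32 - 24) // shift in [0, 8), as the note above describes
dy := rng.Intn(32 - 24)
cropped := v.Augment(24, dx, dy, true) // 24x24 crop, flipped left<->right
_ = cropped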

func (*Vol) Clone

func (v *Vol) Clone() *Vol

func (*Vol) CloneAndZero

func (v *Vol) CloneAndZero() *Vol

func (*Vol) Get

func (v *Vol) Get(x, y, d int) float64

func (*Vol) GetGrad

func (v *Vol) GetGrad(x, y, d int) float64

func (*Vol) Set

func (v *Vol) Set(x, y, d int, value float64)

func (*Vol) SetConst

func (v *Vol) SetConst(a float64)

func (*Vol) SetGrad

func (v *Vol) SetGrad(x, y, d int, value float64)

func (*Vol) UnmarshalJSON

func (v *Vol) UnmarshalJSON(b []byte) error

Directories

Path Synopsis
cnnutil	Package cnnutil contains various utility functions.
cnnvis	Package cnnvis contains various utility functions.
