neuralnet

package module
v0.0.0-...-927247f

This package is not in the latest version of its module.
Published: Jun 25, 2015 License: MIT Imports: 10 Imported by: 0

README

neuralnet

A Go package for matrix-based neural networks. Includes optional parameters for regularization.

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func ChooseBest

func ChooseBest(values []float64) int
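ChooseBest carries no doc comment; from its use alongside ChooseBestFromEach and Categorize, it presumably returns the index of the highest value (an argmax over confidence scores). A minimal sketch of that presumed behavior, not the package's actual code:

```go
package main

import "fmt"

// chooseBest returns the index of the largest value in the slice —
// a sketch of what ChooseBest presumably does (argmax over scores).
func chooseBest(values []float64) int {
	best := 0
	for i, v := range values {
		if v > values[best] {
			best = i
		}
	}
	return best
}

func main() {
	fmt.Println(chooseBest([]float64{0.1, 0.7, 0.2})) // 1
}
```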

func ChooseBestFromEach

func ChooseBestFromEach(outputs Matrix) []int

func Log

func Log(r, c int, z float64) float64

Calculate the log of a matrix cell

func MatrixToSlice

func MatrixToSlice(m Matrix) [][]float64

TODO: Test this

func PercentCorrect

func PercentCorrect(numLabels int, expected, actual []int) (percentTotal float64, percentCorrectByLabel []float64)
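The return values suggest PercentCorrect computes both an overall accuracy and a per-label breakdown. A sketch of that presumed behavior on plain slices (an assumption from the signature, not the package's code):

```go
package main

import "fmt"

// percentCorrect sketches the documented signature: given expected and
// actual label slices, return the overall fraction correct and the
// fraction correct for each label (assumed behavior).
func percentCorrect(numLabels int, expected, actual []int) (float64, []float64) {
	correct := 0
	correctByLabel := make([]float64, numLabels)
	totalByLabel := make([]float64, numLabels)
	for i, e := range expected {
		totalByLabel[e]++
		if actual[i] == e {
			correct++
			correctByLabel[e]++
		}
	}
	byLabel := make([]float64, numLabels)
	for l := 0; l < numLabels; l++ {
		if totalByLabel[l] > 0 {
			byLabel[l] = correctByLabel[l] / totalByLabel[l]
		}
	}
	return float64(correct) / float64(len(expected)), byLabel
}

func main() {
	total, byLabel := percentCorrect(2, []int{0, 0, 1, 1}, []int{0, 1, 1, 1})
	fmt.Println(total, byLabel) // 0.75 [0.5 1]
}
```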

func Prepend

func Prepend(sl []float64, val float64) []float64
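Prepend inserts a value before the existing elements; in a network like this it is presumably used to add the bias unit (a leading 1.0) to a layer's activations. A one-line sketch:

```go
package main

import "fmt"

// prepend returns a new slice with val inserted before the existing
// elements — presumably used here to add the bias unit (a leading 1.0).
func prepend(sl []float64, val float64) []float64 {
	return append([]float64{val}, sl...)
}

func main() {
	fmt.Println(prepend([]float64{0.3, 0.7}, 1.0)) // [1 0.3 0.7]
}
```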

func Sigmoid

func Sigmoid(r, c int, z float64) float64

Calculate the sigmoid of a matrix cell

func SigmoidGradient

func SigmoidGradient(r, c int, z float64) float64

Calculate the gradient of the sigmoid of a matrix cell
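These cell-wise functions take a row, a column, and a value — the per-element apply style used with mat64 — so r and c are ignored and only z matters. Assuming the standard logistic sigmoid (the usual choice, though the source doesn't state the formula):

```go
package main

import (
	"fmt"
	"math"
)

// sigmoid computes the logistic function for one matrix cell. The
// (r, c, z) signature matches the per-element apply style; the row and
// column indices are unused.
func sigmoid(r, c int, z float64) float64 {
	return 1.0 / (1.0 + math.Exp(-z))
}

// sigmoidGradient computes the derivative: g'(z) = g(z) * (1 - g(z)).
func sigmoidGradient(r, c int, z float64) float64 {
	s := sigmoid(r, c, z)
	return s * (1 - s)
}

func main() {
	fmt.Println(sigmoid(0, 0, 0))         // 0.5
	fmt.Println(sigmoidGradient(0, 0, 0)) // 0.25
}
```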

func Square

func Square(r, c int, z float64) float64

Calculate the square of a matrix cell

Types

type CostFunction

type CostFunction interface {
	CalculateCost(thetas []mat64.Matrix, examples mat64.Matrix, answers mat64.Matrix, lambda float64) (cost float64, gradients []mat64.Matrix)
}

A cost function is a function that can be used by the gradient descent (or similar) algorithm to measure, and thus minimize, the cost (or error) of the network by optimizing its parameters.

A cost function takes: a slice of mat64.Matrix structs, one for each layer of the network (from the first hidden layer to the output layer), each containing the thetas (weights) for that layer; a mat64.Matrix containing the training data inputs; a mat64.Matrix containing the expected outputs for the training data; and a lambda value, the regularization parameter (0.0 means no regularization is performed).

The cost function should return two values: the current cost of the network for the given examples, and a slice of mat64.Matrix structs containing the theta gradients to be used for performing gradient descent.
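The source doesn't spell out how lambda enters the cost, but the conventional choice is an L2 penalty of (lambda / 2m) times the sum of squared weights, skipping each row's bias weight. As a rough illustration of that convention on plain slices (the real interface works on mat64.Matrix values; this is not the package's code):

```go
package main

import "fmt"

// regularizationPenalty sketches the conventional L2 term a lambda
// parameter adds to a cost: (lambda / (2m)) * sum of squared weights,
// skipping each row's bias weight (column 0). An illustration of the
// convention, not the package's actual implementation.
func regularizationPenalty(thetas [][][]float64, lambda float64, m int) float64 {
	sum := 0.0
	for _, layer := range thetas {
		for _, row := range layer {
			for j, w := range row {
				if j == 0 {
					continue // bias weights are conventionally not regularized
				}
				sum += w * w
			}
		}
	}
	return lambda / (2.0 * float64(m)) * sum
}

func main() {
	thetas := [][][]float64{{{1, 2, 3}, {0, 1, 1}}}
	fmt.Println(regularizationPenalty(thetas, 1.0, 1)) // (4+9+1+1)/2 = 7.5
}
```

With lambda set to 0.0 the penalty vanishes, matching the note above that 0.0 means no regularization.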

type DataSet

type DataSet struct {
	NN       *NeuralNet
	Examples Matrix
	Answers  Matrix
}

Implementation of the ml.DataSet interface from github.com/alonsovidales/go_ml

func (DataSet) CostFunction

func (ds DataSet) CostFunction(lambda float64, calcGrad bool) (j float64, grad [][][]float64, err error)

Returns the cost and gradients for the current thetas configuration

func (DataSet) GetTheta

func (ds DataSet) GetTheta() [][][]float64

Returns the thetas as a 3-dimensional slice

func (DataSet) RollThetasGrad

func (ds DataSet) RollThetasGrad(x [][][]float64) [][]float64

Returns the thetas in a 1xn matrix

func (DataSet) SetTheta

func (ds DataSet) SetTheta(t [][][]float64)

Sets the Theta param after converting it to the corresponding internal data structure

func (DataSet) UnrollThetasGrad

func (ds DataSet) UnrollThetasGrad(x [][]float64) [][][]float64

Returns the thetas rolled by the RollThetasGrad method to their original form
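Rolling flattens the 3-dimensional theta structure into a single 1xn matrix so an external optimizer can treat all parameters as one flat vector; unrolling reverses it. A sketch of the presumed round trip (the real UnrollThetasGrad recovers the shapes from the DataSet's network; the explicit shapes argument here is an illustration-only assumption):

```go
package main

import "fmt"

// roll flattens a 3-dimensional theta structure into a 1xn matrix,
// row by row, layer by layer — a sketch of what RollThetasGrad
// presumably does.
func roll(thetas [][][]float64) [][]float64 {
	flat := []float64{}
	for _, layer := range thetas {
		for _, row := range layer {
			flat = append(flat, row...)
		}
	}
	return [][]float64{flat}
}

// unroll restores the layered structure given each layer's
// (rows, cols) shape — a hypothetical stand-in for UnrollThetasGrad.
func unroll(flat [][]float64, shapes [][2]int) [][][]float64 {
	out := make([][][]float64, len(shapes))
	i := 0
	for l, s := range shapes {
		rows, cols := s[0], s[1]
		out[l] = make([][]float64, rows)
		for r := 0; r < rows; r++ {
			out[l][r] = append([]float64{}, flat[0][i:i+cols]...)
			i += cols
		}
	}
	return out
}

func main() {
	thetas := [][][]float64{{{1, 2}, {3, 4}}, {{5, 6}}}
	flat := roll(thetas)
	fmt.Println(flat) // [[1 2 3 4 5 6]]
	fmt.Println(unroll(flat, [][2]int{{2, 2}, {1, 2}}))
}
```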

type DefaultCostFunction

type DefaultCostFunction struct{}

type Matrix

func NewForValue

func NewForValue(rows, cols int, val float64) Matrix

TODO: turn this into a function that returns a function (via closure)

func NewOnes

func NewOnes(rows, cols int) Matrix

func NewRand

func NewRand(rows, cols int) Matrix

func NewZeroes

func NewZeroes(rows, cols int) Matrix

type NeuralNet

type NeuralNet struct {
	Thetas  []Matrix
	Zs      []Matrix // Intermediate calculations, pre-sigmoid (Z(n+1) = A(n) * Theta(n)')
	Outputs []Matrix // Calculated values, post-sigmoid (Output(n) = Sigmoid(Z(n)))
}

TODO: make this more generic? Separate NN implementations for categorization vs. calculation?

func NewNeuralNet

func NewNeuralNet(nodes []int) NeuralNet

Creates a new NeuralNet. The number of layers in the network is defined by the length of the slice passed to the method, and the number of nodes per layer (starting with the input layer) is defined by the values in the slice. The thetas in each node are initialized randomly (the caller should seed the rand package beforehand as needed).
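Based on the Z(n+1) = A(n) * Theta(n)' comment on the struct, each layer transition presumably gets a weight matrix sized (nodes in next layer) x (nodes in previous layer + 1 bias column). A sketch of that initialization on plain slices — the sizing convention and random range are assumptions, not the package's code:

```go
package main

import (
	"fmt"
	"math/rand"
)

// newRandomThetas sketches the initialization NewNeuralNet presumably
// performs: one weight matrix per layer transition, sized
// nodes[l+1] x (nodes[l] + 1 bias column), filled with random values.
func newRandomThetas(nodes []int) [][][]float64 {
	thetas := make([][][]float64, len(nodes)-1)
	for l := 0; l < len(nodes)-1; l++ {
		rows, cols := nodes[l+1], nodes[l]+1
		m := make([][]float64, rows)
		for r := range m {
			m[r] = make([]float64, cols)
			for c := range m[r] {
				m[r][c] = rand.Float64() // caller seeds math/rand as needed
			}
		}
		thetas[l] = m
	}
	return thetas
}

func main() {
	rand.Seed(42) // seed before constructing, as the docs advise
	thetas := newRandomThetas([]int{3, 5, 2})
	fmt.Println(len(thetas), len(thetas[0]), len(thetas[0][0])) // 2 5 4
}
```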

func NewNeuralNetFromFiles

func NewNeuralNetFromFiles(thetaSrc []string) (result *NeuralNet)

NewNeuralNetFromFiles loads the information contained in the specified file paths and returns a NeuralNet instance. Each input file should contain one row per sample, with values separated by a single space. To load the thetas, specify in thetaSrc the file paths that contain each layer's values. The order of these paths represents the order of the layers.
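The file format described above — one row per line, values separated by single spaces — can be parsed as follows (a sketch reading from a string for illustration; the function name is hypothetical):

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// parseThetaRows sketches the file format NewNeuralNetFromFiles expects:
// one row per line, values separated by single spaces.
func parseThetaRows(src string) ([][]float64, error) {
	var rows [][]float64
	sc := bufio.NewScanner(strings.NewReader(src))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue // skip blank lines
		}
		var row []float64
		for _, f := range strings.Fields(line) {
			v, err := strconv.ParseFloat(f, 64)
			if err != nil {
				return nil, err
			}
			row = append(row, v)
		}
		rows = append(rows, row)
	}
	return rows, sc.Err()
}

func main() {
	rows, _ := parseThetaRows("0.1 0.2 0.3\n0.4 0.5 0.6\n")
	fmt.Println(len(rows), rows[1][2]) // 2 0.6
}
```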

func (*NeuralNet) Calculate

func (nn *NeuralNet) Calculate(input Matrix) Matrix

Activates the network for the given inputs. Returns the calculated confidence scores as a matrix

func (*NeuralNet) CalculateCost

func (nn *NeuralNet) CalculateCost(examples Matrix, answers Matrix, lambda float64) (cost float64, gradients []Matrix)

CalculateCost is the cost function for the neural network. Each call to this method represents a single training step in a process of gradient descent (or a related algorithm).

The method returns the cost ("J"), or error, and a slice of matrices corresponding to the gradients for the thetas in each layer of the network (from the first hidden layer through to the output layer), matching the list of Thetas in the network itself.

func (*NeuralNet) CalculatePercentAccuracies

func (nn *NeuralNet) CalculatePercentAccuracies(inputs, expected []Matrix) (percentAccuracies []float64, percentAccurateByLabel []float64, scoreByLabel []float64)

func (*NeuralNet) CalculatedValues

func (nn *NeuralNet) CalculatedValues() Matrix

func (*NeuralNet) Categorize

func (nn *NeuralNet) Categorize(input []float64) int

Activates the network for the given inputs. Based on the calculated results, chooses the best label (the output with the highest level of confidence). Labels are indexed starting from 0.

func (*NeuralNet) NumInputs

func (nn *NeuralNet) NumInputs() int

Returns the number of floats expected as inputs to the calculation (number of input nodes)

func (*NeuralNet) NumOutputs

func (nn *NeuralNet) NumOutputs() int

Returns the number of labels in the output layer of the network

func (*NeuralNet) SaveThetas

func (nn *NeuralNet) SaveThetas(targetDir string) (files []string)

SaveThetas stores all the current theta values of the instance in the targetDir directory. This method creates one file per theta layer, named theta_X.txt, where X is the layer contained in the file.

func (*NeuralNet) Train

func (nn *NeuralNet) Train(inputs []Matrix, expected []Matrix, alpha float64, lambda float64, maxCost float64, minPercentAccuracy float64, maxIterations int) (percentAccuracies []float64, percentAccurateByLabel []float64, scoreByLabel []float64)

Trains the network on the given inputs and expected results. The inputs and expected slices can be any length, but must be the same size. Each input/expected pair corresponds to a training, testing, or validation data set; only the first pair is used for training.

The algorithm continues training until the maximum number of iterations has been reached, or until the cost is below maxCost on the training set and the accuracy is above minPercentAccuracy (from 0 to 1) for ALL the provided sets.

Training changes the Thetas of the neural net. The function returns a slice with the percent accuracy (from 0 to 1) for each corresponding data set.
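The stopping rule described above combines a cost threshold on the training set with an accuracy threshold across every provided set. A sketch of that condition in isolation (function name and exact comparison operators are assumptions; the iteration-count check is omitted):

```go
package main

import "fmt"

// shouldStop sketches Train's documented stopping rule: stop once the
// training-set cost is at or below maxCost AND every provided set's
// accuracy meets minPercentAccuracy.
func shouldStop(cost, maxCost, minPercentAccuracy float64, accuracies []float64) bool {
	if cost > maxCost {
		return false
	}
	for _, a := range accuracies {
		if a < minPercentAccuracy {
			return false // one lagging set keeps training going
		}
	}
	return true
}

func main() {
	fmt.Println(shouldStop(0.05, 0.1, 0.9, []float64{0.95, 0.92})) // true
	fmt.Println(shouldStop(0.05, 0.1, 0.9, []float64{0.95, 0.85})) // false
}
```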
