Documentation ¶
Index ¶
- func ChooseBest(values []float64) int
- func ChooseBestFromEach(outputs Matrix) []int
- func Log(r, c int, z float64) float64
- func MatrixToSlice(m Matrix) [][]float64
- func PercentCorrect(numLabels int, expected, actual []int) (percentTotal float64, percentCorrectByLabel []float64)
- func Prepend(sl []float64, val float64) []float64
- func Sigmoid(r, c int, z float64) float64
- func SigmoidGradient(r, c int, z float64) float64
- func Square(r, c int, z float64) float64
- type CostFunction
- type DataSet
- func (ds DataSet) CostFunction(lambda float64, calcGrad bool) (j float64, grad [][][]float64, err error)
- func (ds DataSet) GetTheta() [][][]float64
- func (ds DataSet) RollThetasGrad(x [][][]float64) [][]float64
- func (ds DataSet) SetTheta(t [][][]float64)
- func (ds DataSet) UnrollThetasGrad(x [][]float64) [][][]float64
- type DefaultCostFunction
- type Matrix
- type NeuralNet
- func (nn *NeuralNet) Calculate(input Matrix) Matrix
- func (nn *NeuralNet) CalculateCost(examples Matrix, answers Matrix, lambda float64) (cost float64, gradients []Matrix)
- func (nn *NeuralNet) CalculatePercentAccuracies(inputs, expected []Matrix) (percentAccuracies []float64, percentAccurateByLabel []float64, ...)
- func (nn *NeuralNet) CalculatedValues() Matrix
- func (nn *NeuralNet) Categorize(input []float64) int
- func (nn *NeuralNet) NumInputs() int
- func (nn *NeuralNet) NumOutputs() int
- func (nn *NeuralNet) SaveThetas(targetDir string) (files []string)
- func (nn *NeuralNet) Train(inputs []Matrix, expected []Matrix, alpha float64, lambda float64, ...) (percentAccuracies []float64, percentAccurateByLabel []float64, ...)
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func ChooseBest ¶
func ChooseBestFromEach ¶
func PercentCorrect ¶
func SigmoidGradient ¶
Calculate the gradient of the sigmoid of a matrix cell
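The gradient follows from the identity g'(z) = g(z)·(1 − g(z)). A minimal self-contained sketch of the pair (mirroring the package's Applyer-style signature, where the r, c cell coordinates are accepted for use with Apply but unused):

```go
package main

import (
	"fmt"
	"math"
)

// Sigmoid computes 1 / (1 + e^-z); r and c are the cell coordinates
// expected by mat64-style Apply callbacks and are ignored here.
func Sigmoid(r, c int, z float64) float64 {
	return 1.0 / (1.0 + math.Exp(-z))
}

// SigmoidGradient uses the identity g'(z) = g(z) * (1 - g(z)).
func SigmoidGradient(r, c int, z float64) float64 {
	s := Sigmoid(r, c, z)
	return s * (1 - s)
}

func main() {
	fmt.Println(Sigmoid(0, 0, 0))         // 0.5
	fmt.Println(SigmoidGradient(0, 0, 0)) // 0.25 (the maximum of the gradient)
}
```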
Types ¶
type CostFunction ¶
type CostFunction interface {
CalculateCost(thetas []mat64.Matrix, examples mat64.Matrix, answers mat64.Matrix, lambda float64) (cost float64, gradients []mat64.Matrix)
}
A cost function is a function used by the gradient descent (or similar) algorithm to measure, and thus minimize, the cost (or error) of the network by optimizing its parameters.

A cost function takes: a slice of mat64.Matrix structs, one for each layer of the network (from the first hidden layer to the output layer), each containing the thetas (weights) for that layer; a mat64.Matrix containing the training data inputs; a mat64.Matrix containing the expected outputs for the training data; and a lambda value, the regularization parameter (0.0 means no regularization is performed).

The cost function should return two values: the current cost of the network for the given examples, and a slice of mat64.Matrix structs with the theta gradients to be used for performing gradient descent.
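To illustrate the shape of that contract, here is a hypothetical, heavily simplified cost function using plain slices in place of mat64.Matrix and a single theta vector in place of per-layer matrices. It computes mean squared error plus an L2 regularization term scaled by lambda, and returns the gradient alongside the cost (this is a sketch of the idea, not the package's implementation):

```go
package main

import "fmt"

// costMSE returns the regularized mean-squared-error cost for a linear
// prediction thetas·x over the examples, plus the gradient with respect
// to each theta. lambda == 0 disables regularization.
func costMSE(thetas []float64, examples [][]float64, answers []float64, lambda float64) (cost float64, grad []float64) {
	m := float64(len(examples))
	grad = make([]float64, len(thetas))
	for i, x := range examples {
		// Prediction is a simple linear combination of the inputs.
		pred := 0.0
		for j, t := range thetas {
			pred += t * x[j]
		}
		diff := pred - answers[i]
		cost += diff * diff / (2 * m)
		for j := range thetas {
			grad[j] += diff * x[j] / m
		}
	}
	// Regularization penalizes large weights.
	for j, t := range thetas {
		cost += lambda * t * t / (2 * m)
		grad[j] += lambda * t / m
	}
	return cost, grad
}

func main() {
	c, g := costMSE([]float64{0, 0}, [][]float64{{1, 1}, {1, 2}}, []float64{1, 2}, 0)
	fmt.Println(c, g) // 1.25 [-1.5 -2.5]
}
```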
type DataSet ¶
Implementation of a github.com/alonsovidales/go_ml ml.DataSet
func (DataSet) CostFunction ¶
func (ds DataSet) CostFunction(lambda float64, calcGrad bool) (j float64, grad [][][]float64, err error)
Returns the cost and gradients for the current theta configuration
func (DataSet) RollThetasGrad ¶
Returns the thetas in a 1xn matrix
func (DataSet) SetTheta ¶
Sets the Theta param after converting it to the corresponding internal data structure
func (DataSet) UnrollThetasGrad ¶
Returns the thetas rolled by the RollThetasGrad method to their original form
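Rolling flattens the per-layer theta matrices into a single row for the optimizer; unrolling reverses it given the original layer shapes. A hypothetical sketch of the round trip with plain slices (not the package's internals):

```go
package main

import "fmt"

// rollThetas flattens per-layer theta matrices into a single row.
func rollThetas(thetas [][][]float64) []float64 {
	var flat []float64
	for _, layer := range thetas {
		for _, row := range layer {
			flat = append(flat, row...)
		}
	}
	return flat
}

// unrollThetas rebuilds the per-layer matrices from the flat row,
// given each layer's (rows, cols) shape.
func unrollThetas(flat []float64, shapes [][2]int) [][][]float64 {
	out := make([][][]float64, len(shapes))
	i := 0
	for l, s := range shapes {
		rows, cols := s[0], s[1]
		out[l] = make([][]float64, rows)
		for r := 0; r < rows; r++ {
			out[l][r] = flat[i : i+cols]
			i += cols
		}
	}
	return out
}

func main() {
	thetas := [][][]float64{
		{{1, 2}, {3, 4}}, // 2x2 layer
		{{5, 6}},         // 1x2 layer
	}
	flat := rollThetas(thetas)
	fmt.Println(flat) // [1 2 3 4 5 6]
	back := unrollThetas(flat, [][2]int{{2, 2}, {1, 2}})
	fmt.Println(back[1][0]) // [5 6]
}
```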
type DefaultCostFunction ¶
type DefaultCostFunction struct{}
type Matrix ¶
type Matrix interface {
	mat64.Matrix
	mat64.Vectorer
	mat64.VectorSetter
	mat64.RowViewer
	mat64.ColViewer
	mat64.Augmenter
	mat64.Muler
	mat64.Sumer
	mat64.Suber
	mat64.Adder
	mat64.ElemMuler
	mat64.ElemDiver
	mat64.Equaler
	mat64.Applyer
}
func NewForValue ¶
TODO: turn this into a function that returns a function (via closure)
type NeuralNet ¶
type NeuralNet struct {
	Thetas  []Matrix
	Zs      []Matrix // Intermediate calculations, pre-sigmoid (Z(n+1) = A(n) * Theta(n)')
	Outputs []Matrix // Calculated values, post-sigmoid (Output(n) = Sigmoid(Z(n)))
}
TODO: make this more generic? Separate NN implementations for categorization vs. calculation?
func NewNeuralNet ¶
Creates a new NeuralNet. The number of layers in the network is defined by the length of the slice passed to the method, and the number of nodes per layer (starting with the input layer) by the values in the slice. The thetas for each node are initialized randomly (the caller must seed the rand package beforehand as needed)
func NewNeuralNetFromFiles ¶
NewNeuralNetFromFiles loads the information contained in the specified file paths and returns a NeuralNet instance. Each input file should contain one row per sample, with values separated by a single space. To load the thetas, specify in thetaSrc the file paths that contain each layer's values. The order of these paths determines the order of the layers.
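A sketch of parsing one layer in that format, one row per line with space-separated values (shown on an in-memory string rather than a file, and hypothetical rather than the package's own parser):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseThetaLayer parses one theta layer: one row per line, values
// separated by spaces, as NewNeuralNetFromFiles expects per file.
func parseThetaLayer(contents string) ([][]float64, error) {
	var layer [][]float64
	for _, line := range strings.Split(strings.TrimSpace(contents), "\n") {
		var row []float64
		for _, field := range strings.Fields(line) {
			v, err := strconv.ParseFloat(field, 64)
			if err != nil {
				return nil, err
			}
			row = append(row, v)
		}
		layer = append(layer, row)
	}
	return layer, nil
}

func main() {
	layer, err := parseThetaLayer("0.1 0.2 0.3\n0.4 0.5 0.6")
	fmt.Println(layer, err) // [[0.1 0.2 0.3] [0.4 0.5 0.6]] <nil>
}
```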
func (*NeuralNet) Calculate ¶
Activates the network for the given inputs. Returns the calculated confidence scores as a matrix
func (*NeuralNet) CalculateCost ¶
func (nn *NeuralNet) CalculateCost(examples Matrix, answers Matrix, lambda float64) (cost float64, gradients []Matrix)
CalculateCost is the cost function for the neural network. Each call to this method represents a single training step in a process of gradient descent (or a related algorithm)

The method returns the cost ("J"), or error, and a slice of matrices corresponding to the gradients for the thetas in each layer of the network (from the first hidden layer through to the output layer), matching the order of the Thetas in the network itself
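Each cost/gradient evaluation typically drives one gradient descent update of the form theta = theta − alpha·gradient. A minimal sketch of that outer loop on a toy one-dimensional cost J(t) = (t − 3)², standing in for the network's cost:

```go
package main

import "fmt"

// descend minimizes J(t) = (t-3)^2 by repeatedly stepping against the
// gradient 2(t-3), scaled by the learning rate alpha, just as a
// training loop applies the gradients returned from CalculateCost.
func descend(start, alpha float64, steps int) float64 {
	t := start
	for i := 0; i < steps; i++ {
		t -= alpha * 2 * (t - 3)
	}
	return t
}

func main() {
	fmt.Printf("%.4f\n", descend(0, 0.1, 100)) // converges toward 3.0000
}
```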
func (*NeuralNet) CalculatePercentAccuracies ¶
func (*NeuralNet) CalculatedValues ¶
func (*NeuralNet) Categorize ¶
Activates the network for the given inputs and, based on the calculated results, chooses the best label (the output with the highest level of confidence). The labels are indexed starting from 0
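The label selection is an argmax over the output layer's confidence scores, which is also what the package-level ChooseBest does for a slice of values. A self-contained sketch:

```go
package main

import "fmt"

// chooseBest returns the index of the highest confidence score; ties
// keep the earliest index. Labels are therefore indexed from 0.
func chooseBest(values []float64) int {
	best := 0
	for i, v := range values {
		if v > values[best] {
			best = i
		}
	}
	return best
}

func main() {
	scores := []float64{0.1, 0.7, 0.2}
	fmt.Println(chooseBest(scores)) // 1
}
```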
func (*NeuralNet) NumInputs ¶
Returns the number of floats expected as inputs to the calculation (number of input nodes)
func (*NeuralNet) NumOutputs ¶
Returns the number of labels in the output layer of the network
func (*NeuralNet) SaveThetas ¶
SaveThetas stores all the current theta values of the instance in the targetDir directory. This method creates a file for each theta layer, named theta_X.txt, where X is the layer contained in the file.
func (*NeuralNet) Train ¶
func (nn *NeuralNet) Train(inputs []Matrix, expected []Matrix, alpha float64, lambda float64, maxCost float64, minPercentAccuracy float64, maxIterations int) (percentAccuracies []float64, percentAccurateByLabel []float64, scoreByLabel []float64)
Trains the network on the given inputs and expected results. The inputs and expected slices can be any length, but must be of the same size. Each input/expected pair corresponds to a training, testing, or validation data set; only the first pair is used for training

The algorithm continues training until the max number of iterations has been reached, or until the cost on the training set is below maxCost and the accuracy is above minPercentAccuracy (from 0 to 1) for ALL the provided sets

Training changes the Thetas of the neural net. The function returns a slice with the percent accuracy (from 0 to 1) of each corresponding data set
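The stopping condition described above combines three checks. A hypothetical sketch of that predicate, named here for illustration only:

```go
package main

import "fmt"

// shouldStop reports whether training should halt: either the
// iteration budget is exhausted, or the training-set cost is within
// maxCost AND every provided set meets the minimum accuracy.
func shouldStop(iter, maxIterations int, trainCost, maxCost float64, accuracies []float64, minAcc float64) bool {
	if iter >= maxIterations {
		return true
	}
	if trainCost > maxCost {
		return false
	}
	for _, a := range accuracies {
		if a < minAcc {
			return false
		}
	}
	return true
}

func main() {
	// Cost and all accuracies satisfy the thresholds, so training stops early.
	fmt.Println(shouldStop(10, 100, 0.01, 0.05, []float64{0.99, 0.95}, 0.9)) // true
}
```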