neural

package
v0.0.0-...-73f864d
Published: Dec 23, 2019 License: MIT Imports: 13 Imported by: 0

Documentation

Overview

Package neural provides a CPU implementation of neural networks. The main goal of this package is to solve the MNIST handwritten digit database.

There are two implementations of the mathematical functions: naive and AVX2-accelerated. The package determines whether AVX2 is available on the machine and uses it by default. No other acceleration profiles are supported at this point.

Index

Constants

This section is empty.

Variables

var (
	ErrIncorrectArgument = errors.New("incorrect argument, probably incompatible with target")
	ErrNotEnoughLayers   = errors.New("the smallest amount of layers permitted is 2")
	ErrInternalError     = errors.New("an internal error occurred")
	ErrFileFormat        = errors.New("incorrect file format")
)

Package errors
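
A minimal sketch of matching these sentinel errors with errors.Is; the import path "neural" is a placeholder for the actual module path:

package main

import (
	"errors"
	"fmt"

	"neural" // placeholder import path
)

func main() {
	// A single layer is below the documented minimum of two,
	// so construction should fail with ErrNotEnoughLayers.
	_, err := neural.NewLinearRegression(neural.LinearRegressionType, 784)
	if errors.Is(err, neural.ErrNotEnoughLayers) {
		fmt.Println("need at least two layers")
	}
}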

Functions

This section is empty.

Types

type Delta

type Delta struct {
	// deltas keeps track of how many Deltas are combined into this struct,
	// this is used for the calculation of an average for application to the network.
	Deltas int

	// values are current mean values of the Delta
	Values [][]float64
}

Delta is the output of Network backpropagation. It consists of the cost of each activation in each layer. Deltas can be combined to enable batch processing.

func (*Delta) Apply

func (d *Delta) Apply(x Delta) error

Apply sums the components of x into the current Delta for later averaging.

func (*Delta) Avg

func (d *Delta) Avg() ([][]float64, error)

Avg calculates and returns the mean values of all the Deltas accumulated so far.

func (*Delta) IsCompatible

func (d *Delta) IsCompatible(x Delta) bool

IsCompatible checks if the given Delta is compatible with the current one.
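
A minimal sketch of combining per-case Deltas into a batch; the helper name accumulate is hypothetical, and the Deltas are assumed to come from the same network's Backpropogate:

// accumulate folds a batch of per-case Deltas into one and returns the
// averaged values. The helper is hypothetical; error handling is abbreviated.
func accumulate(batch []neural.Delta) ([][]float64, error) {
	acc := batch[0]
	for _, d := range batch[1:] {
		if !acc.IsCompatible(d) {
			return nil, neural.ErrIncorrectArgument
		}
		if err := acc.Apply(d); err != nil {
			return nil, err
		}
	}
	return acc.Avg()
}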

type FunctionType

type FunctionType byte

FunctionType is used to describe the available network functions in the package.

const (
	// LinearRegressionType is an id for a simple linear regression neural network.
	// The function for the network is
	LinearRegressionType FunctionType = 0xF0 + iota
)

type IncompatibleError

type IncompatibleError map[string]int

IncompatibleError is returned by a function when matrices or vectors are of incompatible size/length. This error is initialized with a map of field names to their sizes.

func (IncompatibleError) Error

func (ie IncompatibleError) Error() string

Error builds and returns the error string from Fields.
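
Since IncompatibleError is an exported map type, a hedged illustration of constructing one directly; the key names here are hypothetical stand-ins for whatever fields the package records:

err := neural.IncompatibleError{
	"len(weights)":     784,
	"len(activations)": 16,
}
fmt.Println(err.Error()) // the message is built from the map's fields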

type LinearRegression

type LinearRegression struct {
	// contains filtered or unexported fields
}

LinearRegression is a type of network that uses a gradient descent function to train and settle into a stable state.

func NewLinearRegression

func NewLinearRegression(function FunctionType, layers ...int32) (*LinearRegression, error)

NewLinearRegression returns a randomized linear regression network with the specified number of layers.

func (*LinearRegression) Activate

func (n *LinearRegression) Activate(activations []float64) ([]float64, error)

Activate runs the neural network with the given input activations and returns the output layer's resulting activations.
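
A minimal sketch of constructing and running a network with a 784-16-10 topology (a flattened 28x28 MNIST image in, 10 digit classes out); the layer sizes are illustrative:

n, err := neural.NewLinearRegression(neural.LinearRegressionType, 784, 16, 10)
if err != nil {
	log.Fatal(err)
}

input := make([]float64, 784) // one flattened 28x28 image
out, err := n.Activate(input)
if err != nil {
	log.Fatal(err)
}
fmt.Println(len(out)) // 10, one activation per digit class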

func (*LinearRegression) Apply

func (n *LinearRegression) Apply(d Delta) error

Apply is not implemented

func (*LinearRegression) Backpropogate

func (n *LinearRegression) Backpropogate(inputActivations []float64, desiredOutput []float64) (Delta, error)

Backpropogate returns the desired change ratios (deltas) for the given input activations and desired outputs; a usage sketch follows the algorithm below.

Algorithm:

 1. Traverse the network -> returns all activations and bias activations.
 2. Create the ratios array for all nodes -> yields a set of exactly the same size as the activations.
 3. Calculate the output layer ratios -> they are put into the last layer of the ratios set.
 4. Calculate the derivative sigma for the last layer -> yields a set that is updated on every layer calculation.
 5. Calculate sigma for the last layer -> also yields a set that is updated on every layer calculation.
 6. Loop over the layers backwards, starting from the second-to-last layer:

   1. Calculate the derivative sigma for the current layer.
   2. Get a transposed matrix for that layer.
   3. Loop over and calculate the ratios:
      3.1) Calculate the cost-weight ratio.
      3.2) Calculate the cost-activation ratio, with input from the cost-weight ratio.
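
A minimal sketch of a single backpropagation step, reusing the network n from the Activate example above with a one-hot desired output:

input := make([]float64, 784)  // flattened image
desired := make([]float64, 10) // one-hot label
desired[3] = 1                 // e.g. the digit 3

d, err := n.Backpropogate(input, desired)
if err != nil {
	log.Fatal(err)
}
fmt.Println(d.Deltas) // 1: a single case so far; fold in more with d.Apply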

func (*LinearRegression) WriteTo

func (n *LinearRegression) WriteTo(w io.Writer) (int64, error)

WriteTo serializes the network and writes the output into w.

type Network

type Network interface {
	// Network must be able to self serialize, for saving to file.
	io.WriterTo

	// Activate runs the neural network with given activations and returns the
	// output layer's resulting activations.
	Activate(activations []float64) ([]float64, error)

	// Backpropogate activates the network and uses implementation-specific functions
	// to adjust values in the network, making it trained.
	Backpropogate(activations []float64, desiredActivations []float64) (Delta, error)

	// Apply sums two identically shaped networks together.
	// Its purpose is to apply the delta network for training.
	Apply(Delta) error
}

Network is a representation of a neural network.

func NewNetworkFromSource

func NewNetworkFromSource(r io.Reader) (Network, error)

NewNetworkFromSource loads a network from a reader
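
A minimal round-trip sketch: serialize a network through its embedded io.WriterTo side and load it back; n is any previously constructed Network:

var buf bytes.Buffer
if _, err := n.WriteTo(&buf); err != nil {
	log.Fatal(err)
}

loaded, err := neural.NewNetworkFromSource(&buf)
if err != nil {
	log.Fatal(err) // malformed input would presumably surface as ErrFileFormat
}
_ = loaded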

type Trainer

type Trainer interface {

	// Train launches the training routine for a network.
	// A channel is returned which accepts TrainingCases from
	// the training set. Closing the channel completes the training.
	// After closing the channel, the user must wait on the `done` channel before
	// performing any actions on the trained Network to prevent races.
	// Train will attempt to use all cores of the machine by spawning
	// runtime.NumCPU() goroutines.
	Train(Network) (trainingSet chan<- TrainingCase, done <-chan struct{})
}

Trainer describes a training mechanism for Network types.

func NewTrainer

func NewTrainer(batch int, stepScale float64) (Trainer, error)

NewTrainer creates the default trainer for this package.
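
A minimal training-loop sketch; the batch size of 32 and step scale of 0.1 are illustrative, and trainingSet stands in for a prepared []TrainingCase:

trainer, err := neural.NewTrainer(32, 0.1)
if err != nil {
	log.Fatal(err)
}

cases, done := trainer.Train(n) // n is a previously constructed Network
for _, tc := range trainingSet {
	cases <- tc
}
close(cases) // signals that the training set is exhausted

<-done // wait before touching n again to avoid races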

type TrainingCase

type TrainingCase struct {

	// Input contains the input activations for the network.
	Input []float64

	// Desired contains the desired activations.
	Desired []float64
}

TrainingCase is a structure used to feed the training algorithm.

Directories

Path	Synopsis
f32	Package f32 houses the math calculations of various implementations for the float32 type.
f64	Package f64 houses the math calculations of various implementations for the float64 type.
