nn

package
v0.0.0-...-fcc35b9
Published: Apr 25, 2024 License: GPL-3.0 Imports: 2 Imported by: 0

Documentation

Index

Constants

const (
	// SIGMOID is the sigmoid activation function
	SIGMOID = "sigmoid"
	// RELU is the rectified linear unit activation function
	RELU = "relu"
	// LINEAR is the linear activation function
	LINEAR = "linear"
)

Activation function types
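The constants are plain strings that select a neuron's activation function. A minimal standalone sketch of how such a string might be dispatched on (the `apply` helper is hypothetical, not part of this package's API):

```go
package main

import (
	"fmt"
	"math"
)

// Constants mirroring the package's activation type names.
const (
	SIGMOID = "sigmoid"
	RELU    = "relu"
	LINEAR  = "linear"
)

// apply dispatches on the activation type string; SIGMOID is the
// documented default, so it doubles as the fallback case here.
func apply(activationType string, x float64) float64 {
	switch activationType {
	case RELU:
		return math.Max(0, x)
	case LINEAR:
		return x
	default: // SIGMOID
		return 1 / (1 + math.Exp(-x))
	}
}

func main() {
	fmt.Println(apply(RELU, -2))   // 0
	fmt.Println(apply(LINEAR, -2)) // -2
	fmt.Println(apply(SIGMOID, 0)) // 0.5
}
```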

Variables

This section is empty.

Functions

This section is empty.

Types

type Layer

type Layer struct {
	Neurons []*Neuron `json:"neurons"`
	Type    string
}

Layer represents a layer of neurons in a neural network

func NewLayer

func NewLayer(activationType string, numNeurons, numInputs int) *Layer

NewLayer creates a new layer of `numNeurons` neurons with the given activation type and number of inputs
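A standalone sketch of the structure NewLayer presumably builds, based on the initialization described under NewNeuron below: `numNeurons` neurons, each holding `numInputs` random weights between -1 and 1 and a zero bias (the `neuron` type and `newLayer` function here are illustrative, not the package's code):

```go
package main

import (
	"fmt"
	"math/rand"
)

// neuron is a stripped-down stand-in for the package's Neuron type.
type neuron struct {
	Weights []float64
	Bias    float64
}

// newLayer builds numNeurons neurons, each with numInputs weights
// drawn uniformly from [-1, 1) and a bias of 0.
func newLayer(numNeurons, numInputs int) []*neuron {
	layer := make([]*neuron, numNeurons)
	for j := range layer {
		w := make([]float64, numInputs)
		for i := range w {
			w[i] = rand.Float64()*2 - 1
		}
		layer[j] = &neuron{Weights: w, Bias: 0}
	}
	return layer
}

func main() {
	l := newLayer(3, 4)
	fmt.Println(len(l), len(l[0].Weights)) // 3 4
}
```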

func (*Layer) Activation

func (l *Layer) Activation() []float64

Activation returns the activation vector of the layer based on a previous call to Update

func (*Layer) Forward

func (l *Layer) Forward(inputs []float64) []float64

Forward takes a slice of inputs from the previous layer and returns the activation vector of the layer.

This is a convenience method that calls Update followed by Activation in a single call.
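The Update/Activation split can be sketched for a single sigmoid neuron (a standalone reimplementation under those assumptions, not the package's code): Update caches z = w·x + b, Activation squashes the cached z, and Forward simply chains the two.

```go
package main

import (
	"fmt"
	"math"
)

// neuron caches its z-value so Activation can reuse the last Update.
type neuron struct {
	weights []float64
	bias    float64
	z       float64 // set by Update
}

// Update computes and caches z = w·x + b.
func (n *neuron) Update(inputs []float64) float64 {
	z := n.bias
	for i, w := range n.weights {
		z += w * inputs[i]
	}
	n.z = z
	return z
}

// Activation applies the sigmoid to the cached z-value.
func (n *neuron) Activation() float64 {
	return 1 / (1 + math.Exp(-n.z))
}

// Forward is Update followed by Activation in one call.
func (n *neuron) Forward(inputs []float64) float64 {
	n.Update(inputs)
	return n.Activation()
}

func main() {
	n := &neuron{weights: []float64{1, -1}, bias: 0}
	fmt.Println(n.Update([]float64{2, 1}))  // z = 2*1 + 1*(-1) = 1
	fmt.Println(n.Forward([]float64{0, 0})) // sigmoid(0) = 0.5
}
```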

func (*Layer) Update

func (l *Layer) Update(inputs []float64) []float64

Update takes a slice of inputs from the previous layer and returns the z-vector of the layer

type Neuron

type Neuron struct {
	Type    string    `json:"type"`
	Weights []float64 `json:"weights"`
	Bias    float64   `json:"bias"`
	// contains filtered or unexported fields
}

Neuron represents a single neuron in a neural network

func NewNeuron

func NewNeuron(activationType string, numInputs int) *Neuron

NewNeuron creates a new neuron with the given number of inputs, using the given activation function.

The weights are initialized to random values between -1 and 1, and the bias is initialized to 0. The activation function can be one of the following:

- SIGMOID (default): f(x) = 1 / (1 + e^(-x))
- RELU: f(x) = max(0, x)
- LINEAR: f(x) = x

func (*Neuron) Activation

func (n *Neuron) Activation() float64

Activation returns the activation value of the neuron based on a previous call to Update.

func (*Neuron) Update

func (n *Neuron) Update(inputs []float64) float64

Update takes a slice of inputs and returns the output of the neuron.
