Documentation ¶
Index ¶
Constants ¶
const (
	// SIGMOID is the sigmoid activation function
	SIGMOID = "sigmoid"
	// RELU is the rectified linear unit activation function
	RELU = "relu"
	// LINEAR is the linear activation function
	LINEAR = "linear"
)
Activation function types
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Layer ¶
Layer represents a layer of neurons in a neural network
func NewLayer ¶
NewLayer creates a new layer of `numNeurons` neurons with the given activation type and number of inputs
func (*Layer) Activation ¶
Activation returns the layer's activation vector as computed by the most recent call to Update
type Neuron ¶
type Neuron struct {
	Type    string    `json:"type"`
	Weights []float64 `json:"weights"`
	Bias    float64   `json:"bias"`
	// contains filtered or unexported fields
}
Neuron represents a single neuron in a neural network
func NewNeuron ¶
NewNeuron creates a new neuron with the given number of inputs, using the given activation function.
The weights are initialized to random values between -1 and 1, and the bias is initialized to 0. The activation function can be one of the following:

- SIGMOID (default): f(x) = 1 / (1 + e^(-x))
- RELU: f(x) = max(0, x)
- LINEAR: f(x) = x
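The initialization and activation rules above can be sketched in plain Go. This is an assumed sketch rather than the package's actual code: `newWeights` and `activate` are hypothetical helpers, and the uniform draw over [-1, 1) is one reasonable reading of "random values between -1 and 1".

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// Activation function names, matching the package constants.
const (
	SIGMOID = "sigmoid"
	RELU    = "relu"
	LINEAR  = "linear"
)

// newWeights returns n weights drawn uniformly from [-1, 1),
// mirroring the initialization described above (the bias starts at 0).
func newWeights(n int) []float64 {
	w := make([]float64, n)
	for i := range w {
		w[i] = rand.Float64()*2 - 1
	}
	return w
}

// activate applies the named activation function; SIGMOID is the default.
func activate(typ string, x float64) float64 {
	switch typ {
	case RELU:
		return math.Max(0, x)
	case LINEAR:
		return x
	default: // SIGMOID
		return 1.0 / (1.0 + math.Exp(-x))
	}
}

func main() {
	fmt.Println("weights:", newWeights(4))
	fmt.Println("sigmoid(0):", activate(SIGMOID, 0))
	fmt.Println("relu(-2):", activate(RELU, -2))
	fmt.Println("linear(3):", activate(LINEAR, 3))
}
```

Falling through to SIGMOID in the `default` case reflects the "SIGMOID (default)" note above: any unrecognized type string behaves like the sigmoid.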