Documentation ¶
Overview ¶
Package loss provides various loss functions for neural networks.
Index ¶
- type CorrLoss
- func (c *CorrLoss[T]) Attributes() map[string]any
- func (c *CorrLoss[T]) Backward(ctx context.Context, _ types.BackwardMode, dOut *tensor.TensorNumeric[T], ...) ([]*tensor.TensorNumeric[T], error)
- func (c *CorrLoss[T]) Forward(ctx context.Context, inputs ...*tensor.TensorNumeric[T]) (*tensor.TensorNumeric[T], error)
- func (c *CorrLoss[T]) OpType() string
- func (c *CorrLoss[T]) OutputShape() []int
- func (c *CorrLoss[T]) Parameters() []*graph.Parameter[T]
- type CrossEntropyLoss
- func (cel *CrossEntropyLoss[T]) Attributes() map[string]interface{}
- func (cel *CrossEntropyLoss[T]) Backward(ctx context.Context, _ types.BackwardMode, dOut *tensor.TensorNumeric[T], ...) ([]*tensor.TensorNumeric[T], error)
- func (cel *CrossEntropyLoss[T]) Forward(ctx context.Context, inputs ...*tensor.TensorNumeric[T]) (*tensor.TensorNumeric[T], error)
- func (cel *CrossEntropyLoss[T]) OpType() string
- func (cel *CrossEntropyLoss[T]) OutputShape() []int
- func (cel *CrossEntropyLoss[T]) Parameters() []*graph.Parameter[T]
- type Loss
- type MSE
- func (m *MSE[T]) Attributes() map[string]interface{}
- func (m *MSE[T]) Backward(ctx context.Context, _ types.BackwardMode, dOut *tensor.TensorNumeric[T], ...) ([]*tensor.TensorNumeric[T], error)
- func (m *MSE[T]) Forward(ctx context.Context, inputs ...*tensor.TensorNumeric[T]) (*tensor.TensorNumeric[T], error)
- func (m *MSE[T]) OpType() string
- func (m *MSE[T]) OutputShape() []int
- func (m *MSE[T]) Parameters() []*graph.Parameter[T]
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type CorrLoss ¶ added in v0.2.1
CorrLoss computes -PearsonCorrelation(predictions, targets) as a differentiable scalar loss. Minimizing this loss maximizes the Pearson correlation between predictions and targets. Since Numerai targets are rank-normalized, Pearson closely approximates Spearman rank correlation.
Forward: loss = -sum(p_c * t_c) / (sqrt(sum(p_c^2) * sum(t_c^2)) + eps)
where p_c = p - mean(p), t_c = t - mean(t)
Backward: grad_i = -(t_c_i / denom - corr * p_c_i / sum_pp) * dOut
All tensor operations use the engine, keeping data on GPU when available. Only scalar intermediate values (means, sums) are read back to CPU.
func NewCorrLoss ¶ added in v0.2.1
func NewCorrLoss[T tensor.Numeric](engine compute.Engine[T], ops numeric.Arithmetic[T]) *CorrLoss[T]
NewCorrLoss creates a new correlation loss function.
func (*CorrLoss[T]) Attributes ¶ added in v0.2.1
func (c *CorrLoss[T]) Attributes() map[string]any
Attributes returns nil (no configurable attributes).
func (*CorrLoss[T]) Backward ¶ added in v0.2.1
func (c *CorrLoss[T]) Backward(ctx context.Context, _ types.BackwardMode, dOut *tensor.TensorNumeric[T], inputs ...*tensor.TensorNumeric[T]) ([]*tensor.TensorNumeric[T], error)
Backward computes the gradient of -PearsonCorrelation with respect to predictions. Returns [dPredictions, dTargets(zeros)].
func (*CorrLoss[T]) Forward ¶ added in v0.2.1
func (c *CorrLoss[T]) Forward(ctx context.Context, inputs ...*tensor.TensorNumeric[T]) (*tensor.TensorNumeric[T], error)
Forward computes -PearsonCorrelation(predictions, targets).
func (*CorrLoss[T]) OutputShape ¶ added in v0.2.1
func (c *CorrLoss[T]) OutputShape() []int
OutputShape returns [1] (scalar loss).
func (*CorrLoss[T]) Parameters ¶ added in v0.2.1
func (c *CorrLoss[T]) Parameters() []*graph.Parameter[T]
Parameters returns nil (no trainable parameters).
type CrossEntropyLoss ¶
CrossEntropyLoss computes the cross-entropy loss.
func NewCrossEntropyLoss ¶
func NewCrossEntropyLoss[T tensor.Numeric](engine compute.Engine[T]) *CrossEntropyLoss[T]
NewCrossEntropyLoss creates a new CrossEntropyLoss layer.
func (*CrossEntropyLoss[T]) Attributes ¶ added in v0.2.1
func (cel *CrossEntropyLoss[T]) Attributes() map[string]interface{}
Attributes returns the attributes of the CrossEntropyLoss layer.
func (*CrossEntropyLoss[T]) Backward ¶
func (cel *CrossEntropyLoss[T]) Backward(ctx context.Context, _ types.BackwardMode, dOut *tensor.TensorNumeric[T], _ ...*tensor.TensorNumeric[T]) ([]*tensor.TensorNumeric[T], error)
Backward computes the gradients for CrossEntropyLoss.
func (*CrossEntropyLoss[T]) Forward ¶
func (cel *CrossEntropyLoss[T]) Forward(ctx context.Context, inputs ...*tensor.TensorNumeric[T]) (*tensor.TensorNumeric[T], error)
Forward computes the cross-entropy loss. Inputs: predictions (logits as T) and targets (class labels stored as T, converted internally to integer indices).
func (*CrossEntropyLoss[T]) OpType ¶ added in v0.2.1
func (cel *CrossEntropyLoss[T]) OpType() string
OpType returns the operation type of the CrossEntropyLoss layer.
func (*CrossEntropyLoss[T]) OutputShape ¶
func (cel *CrossEntropyLoss[T]) OutputShape() []int
OutputShape returns the output shape of the loss (a scalar).
func (*CrossEntropyLoss[T]) Parameters ¶
func (cel *CrossEntropyLoss[T]) Parameters() []*graph.Parameter[T]
Parameters returns an empty slice as CrossEntropyLoss has no trainable parameters.
type Loss ¶
type Loss[T tensor.Numeric] interface {
	// Forward computes the loss and its gradient.
	Forward(ctx context.Context, predictions, targets *tensor.TensorNumeric[T]) (T, *tensor.TensorNumeric[T], error)
}
Loss defines the interface for loss functions.
type MSE ¶
MSE calculates the mean squared error between predictions and targets.
func (*MSE[T]) Attributes ¶ added in v0.2.1
func (m *MSE[T]) Attributes() map[string]interface{}
Attributes returns the attributes of the MSE loss function.
func (*MSE[T]) Backward ¶
func (m *MSE[T]) Backward(ctx context.Context, _ types.BackwardMode, dOut *tensor.TensorNumeric[T], inputs ...*tensor.TensorNumeric[T]) ([]*tensor.TensorNumeric[T], error)
Backward computes the gradients for MSE with respect to inputs. Returns gradients in the order of inputs: [dPredictions, dTargets(nil)].
func (*MSE[T]) Forward ¶
func (m *MSE[T]) Forward(ctx context.Context, inputs ...*tensor.TensorNumeric[T]) (*tensor.TensorNumeric[T], error)
Forward computes the mean squared error loss for the given predictions and targets.
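A minimal float64 sketch of the forward and backward passes, assuming the standard mean reduction and its usual gradient 2(p_i - t_i)/n. The mse function is an illustrative name, not the package's tensor-based API.

```go
package main

import "fmt"

// mse returns the mean squared error and its gradient with respect to
// the predictions: loss = sum((p_i - t_i)^2)/n, grad_i = 2(p_i - t_i)/n.
func mse(p, t []float64) (loss float64, grad []float64) {
	n := float64(len(p))
	grad = make([]float64, len(p))
	for i := range p {
		d := p[i] - t[i]
		loss += d * d
		grad[i] = 2 * d / n // dLoss/dp_i
	}
	return loss / n, grad
}

func main() {
	loss, grad := mse([]float64{1, 2, 3}, []float64{1, 2, 4})
	fmt.Println(loss, grad)
}
```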
func (*MSE[T]) OutputShape ¶ added in v0.2.1
func (m *MSE[T]) OutputShape() []int
OutputShape returns the output shape of the MSE loss function.
func (*MSE[T]) Parameters ¶ added in v0.2.1
func (m *MSE[T]) Parameters() []*graph.Parameter[T]
Parameters returns the parameters of the MSE loss function.