Documentation ¶
Overview ¶
Package optim provides classical optimization algorithms.
The Optimizer interface accepts a generic ObjectiveFunc and optional GradientFunc, making the package usable for any minimization problem. Gradient-free methods (Nelder-Mead, SPSA) ignore the gradient argument; gradient-based methods (Adam, L-BFGS) require it.
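The docs below do not spell out the signatures of ObjectiveFunc and GradientFunc, so the following is a minimal sketch assuming the conventional shapes — an objective maps a parameter vector to a scalar cost, a gradient maps it to a vector:

```go
package main

import "fmt"

// Assumed signatures (not stated in this documentation): an objective maps
// parameters to a scalar cost; a gradient maps parameters to a vector.
type ObjectiveFunc func(x []float64) float64
type GradientFunc func(x []float64) []float64

// sphere is f(x) = Σ xᵢ², minimized at the origin.
func sphere(x []float64) float64 {
	s := 0.0
	for _, v := range x {
		s += v * v
	}
	return s
}

// sphereGrad is the analytic gradient ∇f(x) = 2x, needed only by the
// gradient-based methods (Adam, L-BFGS).
func sphereGrad(x []float64) []float64 {
	g := make([]float64, len(x))
	for i, v := range x {
		g[i] = 2 * v
	}
	return g
}

func main() {
	var f ObjectiveFunc = sphere
	var grad GradientFunc = sphereGrad
	fmt.Println(f([]float64{3, 4}))    // 25
	fmt.Println(grad([]float64{3, 4})) // [6 8]
}
```

A pair like this is everything a caller needs: gradient-free methods take only f, gradient-based methods take both.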
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Adam ¶
type Adam struct {
// LR is the learning rate. Default: 0.001.
LR float64
// Beta1 is the exponential decay rate for first moment. Default: 0.9.
Beta1 float64
// Beta2 is the exponential decay rate for second moment. Default: 0.999.
Beta2 float64
// Epsilon prevents division by zero. Default: 1e-8.
Epsilon float64
}
Adam implements the Adam optimizer (gradient-based).
type GradientFunc ¶
GradientFunc computes the gradient at a parameter point.
type LBFGS ¶
type LBFGS struct {
// Memory is the number of correction pairs stored. Default: 10.
Memory int
}
LBFGS implements the Limited-memory BFGS quasi-Newton method (gradient-based).
type NelderMead ¶
type NelderMead struct {
// InitialStep controls the initial simplex size. Default: 0.05.
InitialStep float64
}
NelderMead implements the Nelder-Mead simplex algorithm (gradient-free).
func (*NelderMead) Minimize ¶
func (nm *NelderMead) Minimize(ctx context.Context, f ObjectiveFunc, x0 []float64, _ GradientFunc, opts *Options) (Result, error)
func (*NelderMead) Name ¶
func (nm *NelderMead) Name() string
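The InitialStep field sets the size of the starting simplex. One common construction — an assumption here, not something this documentation specifies — keeps x0 as the first vertex and offsets each remaining vertex by InitialStep along one coordinate axis:

```go
package main

import "fmt"

// initialSimplex builds the n+1 vertices Nelder-Mead starts from: the
// start point itself, plus one vertex per coordinate offset by step
// along that axis. (Illustrative; the package's construction may differ.)
func initialSimplex(x0 []float64, step float64) [][]float64 {
	n := len(x0)
	simplex := make([][]float64, n+1)
	simplex[0] = append([]float64(nil), x0...)
	for i := 0; i < n; i++ {
		v := append([]float64(nil), x0...)
		v[i] += step
		simplex[i+1] = v
	}
	return simplex
}

func main() {
	// With the default InitialStep of 0.05 in two dimensions:
	for _, v := range initialSimplex([]float64{1, 2}, 0.05) {
		fmt.Println(v)
	}
	// [1 2]
	// [1.05 2]
	// [1 2.05]
}
```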
type ObjectiveFunc ¶
ObjectiveFunc evaluates cost at a parameter point. Must be goroutine-safe.
type Optimizer ¶
type Optimizer interface {
// Minimize finds the minimum of f starting from x0.
// grad may be nil for gradient-free methods.
// opts may be nil for defaults.
Minimize(ctx context.Context, f ObjectiveFunc, x0 []float64,
grad GradientFunc, opts *Options) (Result, error)
// Name returns the optimizer name.
Name() string
}
Optimizer is a classical optimization algorithm.
type Options ¶
type Options struct {
MaxIter int // Default: 1000
FunTol float64 // Default: 1e-8
XTol float64 // Default: 1e-8
GradTol float64 // Default: 0 (disabled)
Callback Callback // Optional per-iteration callback
}
Options controls optimization behavior. A nil *Options uses defaults.
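The tolerances are easiest to read as stopping tests. This sketch shows the conventional semantics — stop when the objective change drops below FunTol or the largest parameter change drops below XTol — though the exact check the package applies is an assumption:

```go
package main

import (
	"fmt"
	"math"
)

// converged sketches typical FunTol/XTol semantics: the run stops once
// the objective change or the largest per-coordinate step falls below
// its tolerance. (Illustrative; not necessarily the package's exact test.)
func converged(fPrev, fCur float64, xPrev, xCur []float64, funTol, xTol float64) bool {
	if math.Abs(fCur-fPrev) < funTol {
		return true
	}
	maxDX := 0.0
	for i := range xCur {
		if d := math.Abs(xCur[i] - xPrev[i]); d > maxDX {
			maxDX = d
		}
	}
	return maxDX < xTol
}

func main() {
	// With the default tolerances of 1e-8:
	fmt.Println(converged(1.0, 1.0+1e-9, []float64{0}, []float64{0.5}, 1e-8, 1e-8)) // true (FunTol met)
	fmt.Println(converged(1.0, 2.0, []float64{0}, []float64{0.5}, 1e-8, 1e-8))      // false
}
```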
type Result ¶
type Result struct {
X []float64 // Optimal parameters
Fun float64 // Objective value at X
Iterations int // Number of iterations performed
FuncEvals int // Number of objective evaluations
GradEvals int // Number of gradient evaluations
Converged bool // Whether the optimizer converged
Message string // Human-readable status
}
Result holds the outcome of an optimization run.
type SPSA ¶
type SPSA struct {
// Hyperparameters following Spall's notation (LR is a, C is c).
// Defaults: A=100, a=0.16, c=0.1, alpha=0.602, gamma=0.101.
A float64
LR float64 // a
C float64 // c
Alpha float64 // learning rate decay exponent
Gamma float64 // perturbation decay exponent
}
SPSA implements Simultaneous Perturbation Stochastic Approximation (gradient-free). It needs only two objective evaluations per iteration, regardless of dimension.