package optimize

v0.15.1
Published: Aug 16, 2024 License: BSD-3-Clause Imports: 12 Imported by: 91

README

Gonum optimize


Package optimize is an optimization package for the Go language.

Documentation

Overview

Package optimize implements algorithms for finding the optimum value of functions.

Constants

This section is empty.

Variables

var (
	// ErrZeroDimensional signifies an optimization was called with an input of length 0.
	ErrZeroDimensional = errors.New("optimize: zero dimensional input")

	// ErrLinesearcherFailure signifies that a Linesearcher has iterated too
	// many times. This may occur if the gradient tolerance is set too low.
	ErrLinesearcherFailure = errors.New("linesearch: failed to converge")

	// ErrNonDescentDirection signifies that LinesearchMethod has received a
	// search direction from a NextDirectioner in which the function is not
	// decreasing.
	ErrNonDescentDirection = errors.New("linesearch: non-descent search direction")

	// ErrNoProgress signifies that LinesearchMethod cannot make further
	// progress because there is no change in location after Linesearcher step
	// due to floating-point arithmetic.
	ErrNoProgress = errors.New("linesearch: no change in location after Linesearcher step")

	// ErrLinesearcherBound signifies that a Linesearcher reached a step that
	// lies out of allowed bounds.
	ErrLinesearcherBound = errors.New("linesearch: step out of bounds")

	// ErrMissingGrad signifies that a Method requires a Gradient function that
	// is not supplied by Problem.
	ErrMissingGrad = errors.New("optimize: problem does not provide needed Grad function")

	// ErrMissingHess signifies that a Method requires a Hessian function that
	// is not supplied by Problem.
	ErrMissingHess = errors.New("optimize: problem does not provide needed Hess function")
)

Functions

func ArmijoConditionMet

func ArmijoConditionMet(currObj, initObj, initGrad, step, decrease float64) bool

ArmijoConditionMet returns true if the Armijo condition (aka sufficient decrease) has been met. Under normal conditions, the following should be true, though this is not enforced:

  • initGrad < 0
  • step > 0
  • 0 < decrease < 1
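
The following sketch (not part of the package documentation; the numeric values are made up) shows how a trial step could be tested against the condition:

package main

import (
	"fmt"

	"gonum.org/v1/gonum/optimize"
)

func main() {
	// Hypothetical line-search data: the objective and directional derivative
	// at step 0, and the objective after a trial step.
	initObj := 10.0  // φ(0)
	initGrad := -4.0 // φ'(0); negative for a descent direction
	step := 0.5      // trial step length
	currObj := 8.5   // φ(step)
	decrease := 1e-4 // sufficient decrease constant, 0 < decrease < 1

	ok := optimize.ArmijoConditionMet(currObj, initObj, initGrad, step, decrease)
	fmt.Println("Armijo condition met:", ok)
}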

func StrongWolfeConditionsMet

func StrongWolfeConditionsMet(currObj, currGrad, initObj, initGrad, step, decrease, curvature float64) bool

StrongWolfeConditionsMet returns true if the strong Wolfe conditions have been met. The strong Wolfe conditions ensure sufficient decrease in the function value, and sufficient decrease in the magnitude of the projected gradient. Under normal conditions, the following should be true, though this is not enforced:

  • initGrad < 0
  • step > 0
  • 0 <= decrease < curvature < 1

func WeakWolfeConditionsMet

func WeakWolfeConditionsMet(currObj, currGrad, initObj, initGrad, step, decrease, curvature float64) bool

WeakWolfeConditionsMet returns true if the weak Wolfe conditions have been met. The weak Wolfe conditions ensure sufficient decrease in the function value, and sufficient decrease in the value of the projected gradient. Under normal conditions, the following should be true, though this is not enforced:

  • initGrad < 0
  • step > 0
  • 0 <= decrease < curvature < 1
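
As a companion sketch (again with made-up numbers), both Wolfe tests can be evaluated for the same trial step:

package main

import (
	"fmt"

	"gonum.org/v1/gonum/optimize"
)

func main() {
	// Hypothetical values of φ(0), φ'(0) and of φ, φ' after a trial step.
	initObj, initGrad := 10.0, -4.0
	step := 0.5
	currObj, currGrad := 8.5, -0.5
	decrease, curvature := 1e-4, 0.9

	fmt.Println("strong Wolfe:",
		optimize.StrongWolfeConditionsMet(currObj, currGrad, initObj, initGrad, step, decrease, curvature))
	fmt.Println("weak Wolfe:",
		optimize.WeakWolfeConditionsMet(currObj, currGrad, initObj, initGrad, step, decrease, curvature))
}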

Types

type Available

type Available struct {
	Grad bool
	Hess bool
}

Available describes the functions available to call in Problem.

type BFGS

type BFGS struct {
	// Linesearcher selects suitable steps along the descent direction.
	// Accepted steps should satisfy the strong Wolfe conditions.
	// If Linesearcher == nil, an appropriate default is chosen.
	Linesearcher Linesearcher
	// GradStopThreshold sets the threshold for stopping if the gradient norm
	// gets too small. If GradStopThreshold is 0 it is defaulted to 1e-12, and
	// if it is NaN the setting is not used.
	GradStopThreshold float64
	// contains filtered or unexported fields
}

BFGS implements the Broyden–Fletcher–Goldfarb–Shanno optimization method. It is a quasi-Newton method that performs successive rank-one updates to an estimate of the inverse Hessian of the objective function. It exhibits super-linear convergence when in proximity to a local minimum. It has memory cost that is O(n^2) relative to the input dimension.
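
As a usage sketch (the objective is the same test function used in the Minimize example below), BFGS can be passed explicitly as the method:

package main

import (
	"fmt"
	"log"

	"gonum.org/v1/gonum/optimize"
	"gonum.org/v1/gonum/optimize/functions"
)

func main() {
	p := optimize.Problem{
		Func: functions.ExtendedRosenbrock{}.Func,
		Grad: functions.ExtendedRosenbrock{}.Grad,
	}
	x0 := []float64{1.3, 0.7, 0.8, 1.9, 1.2}

	// A zero-valued BFGS uses the default Linesearcher and GradStopThreshold.
	result, err := optimize.Minimize(p, x0, nil, &optimize.BFGS{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result.Status, result.F)
}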

func (*BFGS) Init

func (b *BFGS) Init(dim, tasks int) int

func (*BFGS) InitDirection

func (b *BFGS) InitDirection(loc *Location, dir []float64) (stepSize float64)

func (*BFGS) NextDirection

func (b *BFGS) NextDirection(loc *Location, dir []float64) (stepSize float64)

func (*BFGS) Run

func (b *BFGS) Run(operation chan<- Task, result <-chan Task, tasks []Task)

func (*BFGS) Status

func (b *BFGS) Status() (Status, error)

func (*BFGS) Uses

func (*BFGS) Uses(has Available) (uses Available, err error)

type Backtracking

type Backtracking struct {
	DecreaseFactor    float64 // Constant factor in the sufficient decrease (Armijo) condition.
	ContractionFactor float64 // Step size multiplier at each iteration (step *= ContractionFactor).
	// contains filtered or unexported fields
}

Backtracking is a Linesearcher that uses backtracking to find a point that satisfies the Armijo condition with the given decrease factor. If the Armijo condition has not been met, the step size is decreased by ContractionFactor.

The Armijo condition only requires the gradient at the beginning of each major iteration (not at successive step locations), and so Backtracking may be a good linesearch for functions with expensive gradients. Backtracking is not appropriate for optimizers that require the Wolfe conditions to be met, such as BFGS.

Both DecreaseFactor and ContractionFactor must be between zero and one; Backtracking will panic otherwise. If either DecreaseFactor or ContractionFactor is zero, it will be set to a reasonable default.
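
For illustration (a sketch; the factors are arbitrary), Backtracking can be plugged into a method that only needs the Armijo condition, such as GradientDescent:

package main

import (
	"fmt"
	"log"

	"gonum.org/v1/gonum/optimize"
	"gonum.org/v1/gonum/optimize/functions"
)

func main() {
	p := optimize.Problem{
		Func: functions.ExtendedRosenbrock{}.Func,
		Grad: functions.ExtendedRosenbrock{}.Grad,
	}
	method := &optimize.GradientDescent{
		Linesearcher: &optimize.Backtracking{
			DecreaseFactor:    1e-4, // Armijo constant; zero selects the default.
			ContractionFactor: 0.5,  // halve the step whenever the condition fails.
		},
	}
	result, err := optimize.Minimize(p, []float64{1.3, 0.7, 0.8, 1.9, 1.2}, nil, method)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result.Status, result.F)
}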

func (*Backtracking) Init

func (b *Backtracking) Init(f, g float64, step float64) Operation

func (*Backtracking) Iterate

func (b *Backtracking) Iterate(f, _ float64) (Operation, float64, error)

type Bisection

type Bisection struct {
	// CurvatureFactor is the constant factor in the curvature condition.
	// Smaller values result in a more exact line search.
	// A set value must be in the interval (0, 1), otherwise Init will panic.
	// If it is zero, it will be defaulted to 0.9.
	CurvatureFactor float64
	// contains filtered or unexported fields
}

Bisection is a Linesearcher that uses bisection to find a point that satisfies the strong Wolfe conditions with the given curvature factor and a decrease factor of zero.

func (*Bisection) Init

func (b *Bisection) Init(f, g float64, step float64) Operation

func (*Bisection) Iterate

func (b *Bisection) Iterate(f, g float64) (Operation, float64, error)

type CG

type CG struct {
	// Linesearcher must satisfy the strong Wolfe conditions at every iteration.
	// If Linesearcher == nil, an appropriate default is chosen.
	Linesearcher Linesearcher
	// Variant implements the particular CG formula for computing β_k.
	// If Variant is nil, an appropriate default is chosen.
	Variant CGVariant
	// InitialStep estimates the initial line search step size, because the CG
	// method does not generate well-scaled search directions.
	// If InitialStep is nil, an appropriate default is chosen.
	InitialStep StepSizer

	// IterationRestartFactor determines the frequency of restarts based on the
	// problem dimension. The negative gradient direction is taken whenever
	// ceil(IterationRestartFactor*(problem dimension)) iterations have elapsed
	// without a restart. For medium and large-scale problems
	// IterationRestartFactor should be set to 1; for low-dimensional problems
	// a larger value should be chosen. Note that if the ceil function returns 1,
	// CG will be identical to gradient descent.
	// If IterationRestartFactor is 0, it will be set to 6.
	// CG will panic if IterationRestartFactor is negative.
	IterationRestartFactor float64
	// AngleRestartThreshold sets the threshold angle for restart. The method
	// is restarted if the cosine of the angle between two consecutive
	// gradients is smaller than or equal to AngleRestartThreshold, that is, if
	//  ∇f_k·∇f_{k+1} / (|∇f_k| |∇f_{k+1}|) <= AngleRestartThreshold.
	// A value of AngleRestartThreshold closer to -1 (successive gradients in
	// exact opposite directions) will tend to reduce the number of restarts.
	// If AngleRestartThreshold is 0, it will be set to -0.9.
	// CG will panic if AngleRestartThreshold is not in the interval [-1, 0].
	AngleRestartThreshold float64
	// GradStopThreshold sets the threshold for stopping if the gradient norm
	// gets too small. If GradStopThreshold is 0 it is defaulted to 1e-12, and
	// if it is NaN the setting is not used.
	GradStopThreshold float64
	// contains filtered or unexported fields
}

CG implements the nonlinear conjugate gradient method for solving nonlinear unconstrained optimization problems. It is a line search method that generates the search directions d_k according to the formula

d_{k+1} = -∇f_{k+1} + β_k*d_k,   d_0 = -∇f_0.

Variants of the conjugate gradient method differ in the choice of the parameter β_k. The conjugate gradient method usually requires fewer function evaluations than the gradient descent method and no matrix storage, but L-BFGS is usually more efficient.

CG implements a restart strategy that takes the steepest descent direction (i.e., d_{k+1} = -∇f_{k+1}) whenever any of the following conditions holds:

  • A certain number of iterations has elapsed without a restart. This number is controllable via IterationRestartFactor and if equal to 0, it is set to a reasonable default based on the problem dimension.
  • The angle between the gradients at two consecutive iterations ∇f_k and ∇f_{k+1} is too large.
  • The direction d_{k+1} is not a descent direction.
  • β_k returned from CGVariant.Beta is equal to zero.

The line search for CG must yield step sizes that satisfy the strong Wolfe conditions at every iteration, otherwise the generated search direction might fail to be a descent direction. The line search should be more stringent compared with those for Newton-like methods, which can be achieved by setting the gradient constant in the strong Wolfe conditions to a small value.

See also William Hager, Hongchao Zhang, A survey of nonlinear conjugate gradient methods. Pacific Journal of Optimization, 2 (2006), pp. 35-58, and references therein.
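
A minimal usage sketch, selecting a particular β_k variant (nil fields select defaults; the choice here is only illustrative):

package main

import (
	"fmt"
	"log"

	"gonum.org/v1/gonum/optimize"
	"gonum.org/v1/gonum/optimize/functions"
)

func main() {
	p := optimize.Problem{
		Func: functions.ExtendedRosenbrock{}.Func,
		Grad: functions.ExtendedRosenbrock{}.Grad,
	}
	method := &optimize.CG{
		Variant: &optimize.PolakRibierePolyak{}, // formula used for β_k
	}
	result, err := optimize.Minimize(p, []float64{1.3, 0.7, 0.8, 1.9, 1.2}, nil, method)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result.Status, result.F)
}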

func (*CG) Init

func (cg *CG) Init(dim, tasks int) int

func (*CG) InitDirection

func (cg *CG) InitDirection(loc *Location, dir []float64) (stepSize float64)

func (*CG) NextDirection

func (cg *CG) NextDirection(loc *Location, dir []float64) (stepSize float64)

func (*CG) Run

func (cg *CG) Run(operation chan<- Task, result <-chan Task, tasks []Task)

func (*CG) Status

func (cg *CG) Status() (Status, error)

func (*CG) Uses

func (*CG) Uses(has Available) (uses Available, err error)

type CGVariant

type CGVariant interface {
	// Init is called at the first iteration and provides a way to initialize
	// any internal state.
	Init(loc *Location)
	// Beta returns the value of the scaling parameter that is computed
	// according to the particular variant of the CG method.
	Beta(grad, gradPrev, dirPrev []float64) float64
}

CGVariant calculates the scaling parameter, β, used for updating the conjugate direction in the nonlinear conjugate gradient (CG) method.

type CmaEsChol

type CmaEsChol struct {
	// InitStepSize sets the initial size of the covariance matrix adaptation.
	// If InitStepSize is 0, a default value of 0.5 is used. InitStepSize cannot
	// be negative, or CmaEsChol will panic.
	InitStepSize float64
	// Population sets the population size for the algorithm. If Population is
	// 0, a default value of 4 + math.Floor(3*math.Log(float64(dim))) is used.
	// Population cannot be negative or CmaEsChol will panic.
	Population int
	// InitCholesky specifies the Cholesky decomposition of the covariance
	// matrix for the initial sampling distribution. If InitCholesky is nil,
	// a default value of I is used. If it is non-nil, then InitCholesky.Size()
	// must equal the problem dimension.
	InitCholesky *mat.Cholesky
	// StopLogDet sets the threshold for stopping the optimization if the
	// distribution becomes too peaked. The log determinant is a measure of the
	// (log) "volume" of the normal distribution, and when it is too small
	// the samples are almost the same. If the log determinant of the covariance
	// matrix becomes less than StopLogDet, the optimization run is concluded.
	// If StopLogDet is 0, a default value of dim*log(1e-16) is used.
	// If StopLogDet is NaN, the stopping criterion is not used, though
	// this can cause numeric instabilities in the algorithm.
	StopLogDet float64
	// ForgetBest, when true, does not track the best overall function value found,
	// instead returning the new best sample in each iteration. If ForgetBest
	// is false, then the minimum value returned will be the lowest across all
	// iterations, regardless of when that sample was generated.
	ForgetBest bool
	// Src allows a random number generator to be supplied for generating samples.
	// If Src is nil, the generator in golang.org/x/exp/rand is used.
	Src rand.Source
	// contains filtered or unexported fields
}

CmaEsChol implements the covariance matrix adaptation evolution strategy (CMA-ES) based on the Cholesky decomposition. The full algorithm is described in

Krause, Oswin, Dídac Rodríguez Arbonès, and Christian Igel. "CMA-ES with
optimal covariance update and storage complexity." Advances in Neural
Information Processing Systems. 2016.
https://papers.nips.cc/paper/6457-cma-es-with-optimal-covariance-update-and-storage-complexity.pdf

CMA-ES is a global optimization method that progressively adapts a population of samples. CMA-ES combines techniques from local optimization with global optimization. Specifically, the CMA-ES algorithm uses an initial multivariate normal distribution to generate a population of input locations. The input locations with the lowest function values are used to update the parameters of the normal distribution, a new set of input locations is generated, and this procedure is iterated until convergence. The initial sampling distribution will have a mean specified by the initial x location, and a covariance specified by the InitCholesky field.

As the normal distribution is progressively updated according to the best samples, the mean of the distribution tends to move in a gradient-descent-like fashion, while the covariance shrinks. It is recommended that the algorithm be run multiple times (with different initial locations) to have a better chance of finding the global minimum.

The CMA-ES-Chol algorithm differs from the standard CMA-ES algorithm in that it directly updates the Cholesky decomposition of the normal distribution. This changes the runtime from O(dimension^3) to O(dimension^2*population). The evolution of the multivariate normal will be similar to the baseline CMA-ES algorithm, but the covariance update equation is not identical.

For more information about the CMA-ES algorithm, see

https://en.wikipedia.org/wiki/CMA-ES
https://arxiv.org/pdf/1604.00772.pdf
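
A gradient-free usage sketch (the objective and starting point are arbitrary, and because CMA-ES is stochastic the result varies between runs unless Src is fixed):

package main

import (
	"fmt"
	"log"

	"gonum.org/v1/gonum/optimize"
	"gonum.org/v1/gonum/optimize/functions"
)

func main() {
	// Only Func is required; CmaEsChol does not use gradients.
	p := optimize.Problem{
		Func: functions.ExtendedRosenbrock{}.Func,
	}
	x0 := []float64{5, -3} // mean of the initial sampling distribution

	method := &optimize.CmaEsChol{
		InitStepSize: 1, // initial scale of the sampling distribution; 0 selects the default.
	}
	result, err := optimize.Minimize(p, x0, nil, method)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result.Status, result.F)
}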

func (*CmaEsChol) Init

func (cma *CmaEsChol) Init(dim, tasks int) int

func (*CmaEsChol) Run

func (cma *CmaEsChol) Run(operations chan<- Task, results <-chan Task, tasks []Task)

func (*CmaEsChol) Status

func (cma *CmaEsChol) Status() (Status, error)

Status returns the status of the method.

func (*CmaEsChol) Uses

func (*CmaEsChol) Uses(has Available) (uses Available, err error)

type ConstantStepSize

type ConstantStepSize struct {
	Size float64
}

ConstantStepSize is a StepSizer that returns the same step size for every iteration.

func (ConstantStepSize) Init

func (c ConstantStepSize) Init(_ *Location, _ []float64) float64

func (ConstantStepSize) StepSize

func (c ConstantStepSize) StepSize(_ *Location, _ []float64) float64

type Converger

type Converger interface {
	Init(dim int)
	Converged(loc *Location) Status
}

Converger returns the convergence of the optimization based on locations found during optimization. Converger must not modify the value of the provided Location in any of the methods.
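
As an illustration of the interface, the following hypothetical Converger (not part of the package) reports convergence once the function value drops below a fixed target:

package main

import (
	"fmt"
	"log"

	"gonum.org/v1/gonum/optimize"
)

// targetConverge is a hypothetical Converger that stops the optimization as
// soon as the objective value falls below Target.
type targetConverge struct {
	Target float64
}

func (targetConverge) Init(dim int) {}

func (t targetConverge) Converged(loc *optimize.Location) optimize.Status {
	if loc.F < t.Target {
		return optimize.FunctionThreshold
	}
	return optimize.NotTerminated
}

func main() {
	p := optimize.Problem{
		Func: func(x []float64) float64 { return x[0]*x[0] + x[1]*x[1] },
		Grad: func(grad, x []float64) {
			grad[0] = 2 * x[0]
			grad[1] = 2 * x[1]
		},
	}
	settings := &optimize.Settings{Converger: targetConverge{Target: 1e-6}}
	result, err := optimize.Minimize(p, []float64{3, -4}, settings, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result.Status, result.F)
}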

type DaiYuan

type DaiYuan struct {
	// contains filtered or unexported fields
}

DaiYuan implements the Dai-Yuan variant of the CG method that computes the scaling parameter β_k according to the formula

β_k = |∇f_{k+1}|^2 / d_k·y_k,

where y_k = ∇f_{k+1} - ∇f_k.

func (*DaiYuan) Beta

func (dy *DaiYuan) Beta(grad, gradPrev, dirPrev []float64) (beta float64)

func (*DaiYuan) Init

func (dy *DaiYuan) Init(loc *Location)

type ErrFunc

type ErrFunc float64

ErrFunc is returned when an initial function value is invalid. The error state may be either +Inf or NaN. ErrFunc satisfies the error interface.

func (ErrFunc) Error

func (err ErrFunc) Error() string

type ErrGrad

type ErrGrad struct {
	Grad  float64 // Grad is the invalid gradient value.
	Index int     // Index is the position at which the invalid gradient was found.
}

ErrGrad is returned when an initial gradient is invalid. The error gradient may be either ±Inf or NaN. ErrGrad satisfies the error interface.

func (ErrGrad) Error

func (err ErrGrad) Error() string

type FirstOrderStepSize

type FirstOrderStepSize struct {
	// InitialStepFactor sets the step size for the first iteration to be InitialStepFactor / |g|_∞.
	// If InitialStepFactor is zero, it will be set to one.
	InitialStepFactor float64
	// MinStepSize is the lower bound on the estimated step size.
	// MinStepSize times GradientAbsTol should always be greater than machine epsilon.
	// If MinStepSize is zero, it will be set to 1e-3.
	MinStepSize float64
	// MaxStepSize is the upper bound on the estimated step size.
	// If MaxStepSize is zero, it will be set to 1.
	MaxStepSize float64
	// contains filtered or unexported fields
}

FirstOrderStepSize estimates the initial line search step size based on the assumption that the first-order change in the function will be the same as that obtained at the previous iteration. That is, the initial step size s^0_k is chosen so that

s^0_k ∇f_k⋅p_k = s_{k-1} ∇f_{k-1}⋅p_{k-1}

This is useful for line search methods that do not produce well-scaled descent directions, such as gradient descent or conjugate gradient methods.

func (*FirstOrderStepSize) Init

func (fo *FirstOrderStepSize) Init(loc *Location, dir []float64) (stepSize float64)

func (*FirstOrderStepSize) StepSize

func (fo *FirstOrderStepSize) StepSize(loc *Location, dir []float64) (stepSize float64)

type FletcherReeves

type FletcherReeves struct {
	// contains filtered or unexported fields
}

FletcherReeves implements the Fletcher-Reeves variant of the CG method that computes the scaling parameter β_k according to the formula

β_k = |∇f_{k+1}|^2 / |∇f_k|^2.

func (*FletcherReeves) Beta

func (fr *FletcherReeves) Beta(grad, _, _ []float64) (beta float64)

func (*FletcherReeves) Init

func (fr *FletcherReeves) Init(loc *Location)

type FunctionConverge

type FunctionConverge struct {
	Absolute   float64
	Relative   float64
	Iterations int
	// contains filtered or unexported fields
}

FunctionConverge tests for insufficient improvement in the optimum value over the last iterations. A FunctionConvergence status is returned if there is no significant decrease for FunctionConverge.Iterations. A decrease is considered significant if

f < f_best

and

f_best - f > FunctionConverge.Relative * maxabs(f, f_best) + FunctionConverge.Absolute

If the decrease is significant, then the iteration counter is reset and f_best is updated.

If FunctionConverge.Iterations == 0, it has no effect.
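
For example (a sketch; the tolerances are arbitrary), a FunctionConverge can be supplied through Settings to tighten or loosen the default test:

package main

import (
	"fmt"
	"log"

	"gonum.org/v1/gonum/optimize"
	"gonum.org/v1/gonum/optimize/functions"
)

func main() {
	p := optimize.Problem{
		Func: functions.ExtendedRosenbrock{}.Func,
		Grad: functions.ExtendedRosenbrock{}.Grad,
	}
	settings := &optimize.Settings{
		Converger: &optimize.FunctionConverge{
			Absolute:   1e-8, // smallest absolute decrease considered significant
			Relative:   1e-8, // smallest relative decrease considered significant
			Iterations: 20,   // stop after this many insignificant iterations
		},
	}
	result, err := optimize.Minimize(p, []float64{1.3, 0.7, 0.8}, settings, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result.Status, result.F)
}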

func (*FunctionConverge) Converged

func (fc *FunctionConverge) Converged(l *Location) Status

func (*FunctionConverge) Init

func (fc *FunctionConverge) Init(dim int)

type GradientDescent

type GradientDescent struct {
	// Linesearcher selects suitable steps along the descent direction.
	// If Linesearcher is nil, a reasonable default will be chosen.
	Linesearcher Linesearcher
	// StepSizer determines the initial step size along each direction.
	// If StepSizer is nil, a reasonable default will be chosen.
	StepSizer StepSizer
	// GradStopThreshold sets the threshold for stopping if the gradient norm
	// gets too small. If GradStopThreshold is 0 it is defaulted to 1e-12, and
	// if it is NaN the setting is not used.
	GradStopThreshold float64
	// contains filtered or unexported fields
}

GradientDescent implements the steepest descent optimization method that performs successive steps along the direction of the negative gradient.

func (*GradientDescent) Init

func (g *GradientDescent) Init(dim, tasks int) int

func (*GradientDescent) InitDirection

func (g *GradientDescent) InitDirection(loc *Location, dir []float64) (stepSize float64)

func (*GradientDescent) NextDirection

func (g *GradientDescent) NextDirection(loc *Location, dir []float64) (stepSize float64)

func (*GradientDescent) Run

func (g *GradientDescent) Run(operation chan<- Task, result <-chan Task, tasks []Task)

func (*GradientDescent) Status

func (g *GradientDescent) Status() (Status, error)

func (*GradientDescent) Uses

func (*GradientDescent) Uses(has Available) (uses Available, err error)

type GuessAndCheck

type GuessAndCheck struct {
	Rander distmv.Rander
	// contains filtered or unexported fields
}

GuessAndCheck is a global optimizer that evaluates the function at random locations. Not a good optimizer, but useful for comparison and debugging.

func (*GuessAndCheck) Init

func (g *GuessAndCheck) Init(dim, tasks int) int

func (*GuessAndCheck) Run

func (g *GuessAndCheck) Run(operation chan<- Task, result <-chan Task, tasks []Task)

func (*GuessAndCheck) Uses

func (*GuessAndCheck) Uses(has Available) (uses Available, err error)

type HagerZhang

type HagerZhang struct {
	// contains filtered or unexported fields
}

HagerZhang implements the Hager-Zhang variant of the CG method that computes the scaling parameter β_k according to the formula

β_k = (y_k - 2 d_k |y_k|^2/(d_k·y_k))·∇f_{k+1} / (d_k·y_k),

where y_k = ∇f_{k+1} - ∇f_k.

func (*HagerZhang) Beta

func (hz *HagerZhang) Beta(grad, gradPrev, dirPrev []float64) (beta float64)

func (*HagerZhang) Init

func (hz *HagerZhang) Init(loc *Location)

type HestenesStiefel

type HestenesStiefel struct {
	// contains filtered or unexported fields
}

HestenesStiefel implements the Hestenes-Stiefel variant of the CG method that computes the scaling parameter β_k according to the formula

β_k = max(0, ∇f_{k+1}·y_k / d_k·y_k),

where y_k = ∇f_{k+1} - ∇f_k.

func (*HestenesStiefel) Beta

func (hs *HestenesStiefel) Beta(grad, gradPrev, dirPrev []float64) (beta float64)

func (*HestenesStiefel) Init

func (hs *HestenesStiefel) Init(loc *Location)

type LBFGS

type LBFGS struct {
	// Linesearcher selects suitable steps along the descent direction.
	// Accepted steps should satisfy the strong Wolfe conditions.
	// If Linesearcher is nil, a reasonable default will be chosen.
	Linesearcher Linesearcher
	// Store is the size of the limited-memory storage.
	// If Store is 0, it will be defaulted to 15.
	Store int
	// GradStopThreshold sets the threshold for stopping if the gradient norm
	// gets too small. If GradStopThreshold is 0 it is defaulted to 1e-12, and
	// if it is NaN the setting is not used.
	GradStopThreshold float64
	// contains filtered or unexported fields
}

LBFGS implements the limited-memory BFGS method for gradient-based unconstrained minimization.

It stores a modified version of the inverse Hessian approximation H implicitly from the last Store iterations while the normal BFGS method stores and manipulates H directly as a dense matrix. Therefore LBFGS is more appropriate than BFGS for large problems as the cost of LBFGS scales as O(Store * dim) while BFGS scales as O(dim^2). The "forgetful" nature of LBFGS may also make it perform better than BFGS for functions with Hessians that vary rapidly spatially.
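
Usage mirrors BFGS; a sketch with an explicit (illustrative) history length:

package main

import (
	"fmt"
	"log"

	"gonum.org/v1/gonum/optimize"
	"gonum.org/v1/gonum/optimize/functions"
)

func main() {
	p := optimize.Problem{
		Func: functions.ExtendedRosenbrock{}.Func,
		Grad: functions.ExtendedRosenbrock{}.Grad,
	}
	// Store controls the history length; memory use grows as O(Store * dim).
	method := &optimize.LBFGS{Store: 10}
	result, err := optimize.Minimize(p, []float64{1.3, 0.7, 0.8, 1.9, 1.2}, nil, method)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result.Status, result.F)
}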

func (*LBFGS) Init

func (l *LBFGS) Init(dim, tasks int) int

func (*LBFGS) InitDirection

func (l *LBFGS) InitDirection(loc *Location, dir []float64) (stepSize float64)

func (*LBFGS) NextDirection

func (l *LBFGS) NextDirection(loc *Location, dir []float64) (stepSize float64)

func (*LBFGS) Run

func (l *LBFGS) Run(operation chan<- Task, result <-chan Task, tasks []Task)

func (*LBFGS) Status

func (l *LBFGS) Status() (Status, error)

func (*LBFGS) Uses

func (*LBFGS) Uses(has Available) (uses Available, err error)

type LinesearchMethod

type LinesearchMethod struct {
	// NextDirectioner specifies the search direction of each linesearch.
	NextDirectioner NextDirectioner
	// Linesearcher performs a linesearch along the search direction.
	Linesearcher Linesearcher
	// contains filtered or unexported fields
}

LinesearchMethod represents an abstract optimization method in which a function is optimized through successive line search optimizations.

func (*LinesearchMethod) Init

func (ls *LinesearchMethod) Init(loc *Location) (Operation, error)

func (*LinesearchMethod) Iterate

func (ls *LinesearchMethod) Iterate(loc *Location) (Operation, error)

type Linesearcher

type Linesearcher interface {
	// Init initializes the Linesearcher and a new line search. Value and
	// derivative contain φ(0) and φ'(0), respectively, and step contains the
	// first trial step length. It returns an Operation that must be one of
	// FuncEvaluation, GradEvaluation, FuncEvaluation|GradEvaluation. The
	// caller must evaluate φ(step), φ'(step), or both, respectively, and pass
	// the result to Linesearcher in value and derivative arguments to Iterate.
	Init(value, derivative float64, step float64) Operation

	// Iterate takes in the values of φ and φ' evaluated at the previous step
	// and returns the next operation.
	//
	// If op is one of FuncEvaluation, GradEvaluation,
	// FuncEvaluation|GradEvaluation, the caller must evaluate φ(step),
	// φ'(step), or both, respectively, and pass the result to Linesearcher in
	// value and derivative arguments on the next call to Iterate.
	//
	// If op is MajorIteration, a sufficiently accurate minimum of φ has been
	// found at the previous step and the line search has concluded. Init must
	// be called again to initialize a new line search.
	//
	// If err is nil, op must not specify another operation. If err is not nil,
	// the values of op and step are undefined.
	Iterate(value, derivative float64) (op Operation, step float64, err error)
}

Linesearcher is a type that can perform a line search. It tries to find an (approximate) minimum of the objective function along the search direction dir_k starting at the most recent location x_k, i.e., it tries to minimize the function

φ(step) := f(x_k + step * dir_k) where step > 0.

Typically, a Linesearcher will be used in conjunction with LinesearchMethod for performing gradient-based optimization through sequential line searches.

type ListSearch

type ListSearch struct {
	// Locs is the list of locations to optimize. Each row of Locs is a location
	// to optimize. The number of columns of Locs must match the dimension
	// passed to Init, and Locs must have at least one row.
	Locs mat.Matrix
	// contains filtered or unexported fields
}

ListSearch finds the optimum location from a specified list of possible optimum locations.

func (*ListSearch) Init

func (l *ListSearch) Init(dim, tasks int) int

Init initializes the method for optimization. The input dimension must match the number of columns of Locs.

func (*ListSearch) Run

func (l *ListSearch) Run(operation chan<- Task, result <-chan Task, tasks []Task)

func (*ListSearch) Status

func (l *ListSearch) Status() (Status, error)

func (*ListSearch) Uses

func (*ListSearch) Uses(has Available) (uses Available, err error)

type Location

type Location struct {
	// X is the function input for the location.
	X []float64
	// F is the result of evaluating the function at X.
	F float64
	// Gradient holds the first-order partial derivatives
	// of the function at X.
	// The length of Gradient must match the length of X
	// or be zero. If the capacity of Gradient is less
	// than the length of X, a new slice will be allocated.
	Gradient []float64
	// Hessian holds the second-order partial derivatives
	// of the function at X.
	// The dimensions of Hessian must match the length of X
	// or Hessian must be nil or empty. If Hessian is nil
	// a new mat.SymDense will be allocated, if it is empty
	// it will be resized to match the length of X.
	Hessian *mat.SymDense
}

Location represents a location in the optimization procedure.

type Method

type Method interface {
	// Init initializes the method for optimization. The inputs are
	// the problem dimension and number of available concurrent tasks.
	//
	// Init returns the number of concurrent processes to use, which must be
	// less than or equal to tasks.
	Init(dim, tasks int) (concurrent int)
	// Run runs an optimization. The method sends Tasks on
	// the operation channel (for performing function evaluations, major
	// iterations, etc.). The result of the tasks will be returned on Result.
	// See the documentation for Operation types for the possible operations.
	//
	// The caller of Run will signal the termination of the optimization
	// (i.e. convergence from user settings) by sending a task with a PostIteration
	// Op field on result. More tasks may still be sent on operation after this
	// occurs, but only MajorIteration operations will still be conducted
	// appropriately. Thus, it cannot be guaranteed that all Evaluations sent
	// on operation will be evaluated; however, if an Evaluation is started,
	// the results of that evaluation will be sent on result.
	//
	// The Method must read from the result channel until it is closed.
	// During this, the Method may want to send new MajorIteration(s) on
	// operation. Method then must close operation, and return from Run.
	// These steps must establish a "happens-before" relationship between result
	// being closed (externally) and Run closing operation, for example
	// by using a range loop to read from result even if no results are expected.
	//
	// The last parameter to Run is a slice of tasks with length equal to
	// the return from Init. Task has an ID field which may be
	// set and modified by Method, and must not be modified by the caller.
	// The first element of tasks contains information about the initial location.
	// The Location.X field is always valid. The Operation field specifies which
	// other values of Location are known. If Operation == NoOperation, none of
	// the values should be used, otherwise the Evaluation operations will be
	// composed to specify the valid fields. Methods are free to use or
	// ignore these values.
	//
	// Successful execution of an Operation may require the Method to modify
	// fields of a Location. MajorIteration calls will not modify the values in
	// the Location, but Evaluation operations will. Methods are encouraged to
	// leave Location fields untouched to allow memory re-use. If data needs to
	// be stored, the respective field should be set to nil -- Methods should
	// not allocate Location memory themselves.
	//
	// Method may have its own specific convergence criteria, which can
	// be communicated using a MethodDone operation. This will trigger a
	// PostIteration to be sent on result, and the MethodDone task will not be
	// returned on result. The Method must implement Statuser, and the
	// call to Status must return a Status other than NotTerminated.
	//
	// The operation and result tasks are guaranteed to have a buffer length
	// equal to the return from Init.
	Run(operation chan<- Task, result <-chan Task, tasks []Task)
	// Uses checks if the Method is suited to the optimization problem. The
	// input is the available functions in Problem to call, and the returns are
	// the functions which may be used and an error if there is a mismatch
	// between the Problem and the Method's capabilities.
	Uses(has Available) (uses Available, err error)
}

Method is a type which can search for an optimum of an objective function.

type MoreThuente

type MoreThuente struct {
	// DecreaseFactor is the constant factor in the sufficient decrease
	// (Armijo) condition.
	// It must be in the interval [0, 1). The default value is 0.
	DecreaseFactor float64
	// CurvatureFactor is the constant factor in the Wolfe conditions. Smaller
	// values result in a more exact line search.
	// A set value must be in the interval (0, 1). If it is zero, it will be
	// defaulted to 0.9.
	CurvatureFactor float64
	// StepTolerance sets the minimum acceptable width for the linesearch
	// interval. If the relative interval length is less than this value,
	// ErrLinesearcherFailure is returned.
	// It must be non-negative. If it is zero, it will be defaulted to 1e-10.
	StepTolerance float64

	// MinimumStep is the minimum step that the linesearcher will take.
	// It must be non-negative and less than MaximumStep. Defaults to no
	// minimum (a value of 0).
	MinimumStep float64
	// MaximumStep is the maximum step that the linesearcher will take.
	// It must be greater than MinimumStep. If it is zero, it will be defaulted
	// to 1e20.
	MaximumStep float64
	// contains filtered or unexported fields
}

MoreThuente is a Linesearcher that finds steps that satisfy both the sufficient decrease and curvature conditions (the strong Wolfe conditions).

References:

  • More, J.J. and D.J. Thuente: Line Search Algorithms with Guaranteed Sufficient Decrease. ACM Transactions on Mathematical Software 20(3) (1994), 286-307

func (*MoreThuente) Init

func (mt *MoreThuente) Init(f, g float64, step float64) Operation

func (*MoreThuente) Iterate

func (mt *MoreThuente) Iterate(f, g float64) (Operation, float64, error)

type NelderMead

type NelderMead struct {
	InitialVertices [][]float64
	InitialValues   []float64
	Reflection      float64 // Reflection parameter (>0)
	Expansion       float64 // Expansion parameter (>1)
	Contraction     float64 // Contraction parameter (>0, <1)
	Shrink          float64 // Shrink parameter (>0, <1)
	SimplexSize     float64 // size of auto-constructed initial simplex
	// contains filtered or unexported fields
}

NelderMead is an implementation of the Nelder-Mead simplex algorithm for gradient-free nonlinear optimization (not to be confused with Dantzig's simplex algorithm for linear programming). The implementation follows the algorithm described in

http://epubs.siam.org/doi/pdf/10.1137/S1052623496303470

If an initial simplex is provided, it is used and the initial location is ignored. If InitialVertices and InitialValues are both nil, an initial simplex will be generated automatically using the initial location as one vertex, with each additional vertex offset by SimplexSize along one dimension.

If the simplex update parameters (Reflection, etc.) are zero, they will be set automatically based on the dimension according to the recommendations in

http://www.webpages.uidaho.edu/~fuchang/res/ANMS.pdf
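
A derivative-free usage sketch (the quadratic objective is only an illustration; zero-valued parameters select the dimension-based defaults):

package main

import (
	"fmt"
	"log"

	"gonum.org/v1/gonum/optimize"
)

func main() {
	// A smooth function with minimum 0 at (1, 2); no gradient is supplied.
	p := optimize.Problem{
		Func: func(x []float64) float64 {
			d0, d1 := x[0]-1, x[1]-2
			return d0*d0 + d1*d1
		},
	}
	method := &optimize.NelderMead{
		SimplexSize: 0.1, // spacing of the auto-constructed initial simplex
	}
	result, err := optimize.Minimize(p, []float64{5, 5}, nil, method)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("status: %v, x: %0.3g, f: %0.3g\n", result.Status, result.X, result.F)
}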

func (*NelderMead) Init

func (n *NelderMead) Init(dim, tasks int) int

func (*NelderMead) Run

func (n *NelderMead) Run(operation chan<- Task, result <-chan Task, tasks []Task)

func (*NelderMead) Status

func (n *NelderMead) Status() (Status, error)

func (*NelderMead) Uses

func (*NelderMead) Uses(has Available) (uses Available, err error)

type NeverTerminate

type NeverTerminate struct{}

NeverTerminate implements Converger, always reporting NotTerminated.

func (NeverTerminate) Converged

func (NeverTerminate) Converged(loc *Location) Status

func (NeverTerminate) Init

func (NeverTerminate) Init(dim int)

type Newton

type Newton struct {
	// Linesearcher is used for selecting suitable steps along the descent
	// direction d. Accepted steps should satisfy at least one of the Wolfe,
	// Goldstein or Armijo conditions.
	// If Linesearcher == nil, an appropriate default is chosen.
	Linesearcher Linesearcher
	// Increase is the factor by which a scalar tau is successively increased
	// so that (H + tau*I) is positive definite. Larger values reduce the
	// number of trial Hessian factorizations, but also reduce the second-order
	// information in H.
	// Increase must be greater than 1. If Increase is 0, it is defaulted to 5.
	Increase float64
	// GradStopThreshold sets the threshold for stopping if the gradient norm
	// gets too small. If GradStopThreshold is 0 it is defaulted to 1e-12, and
	// if it is NaN the setting is not used.
	GradStopThreshold float64
	// contains filtered or unexported fields
}

Newton implements a modified Newton's method for Hessian-based unconstrained minimization. It applies regularization when the Hessian is not positive definite, and it can converge to a local minimum from any starting point.

Newton iteratively forms a quadratic model to the objective function f and tries to minimize this approximate model. It generates a sequence of locations x_k by means of

solve H_k d_k = -∇f_k for d_k,
x_{k+1} = x_k + α_k d_k,

where H_k is the Hessian matrix of f at x_k and α_k is a step size found by a line search.

Away from a minimizer H_k may not be positive definite and d_k may not be a descent direction. Newton implements a Hessian modification strategy that adds successively larger multiples of identity to H_k until it becomes positive definite. Note that the repeated trial factorization of the modified Hessian involved in this process can be computationally expensive.

If the Hessian matrix cannot be formed explicitly or if the computational cost of its factorization is prohibitive, the BFGS or L-BFGS quasi-Newton methods can be used instead.
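
A sketch with an explicit Hessian (the quadratic objective is chosen only for illustration):

package main

import (
	"fmt"
	"log"

	"gonum.org/v1/gonum/mat"
	"gonum.org/v1/gonum/optimize"
)

func main() {
	// f(x) = x0^2 + 4*x1^2 with analytic gradient and (constant) Hessian.
	p := optimize.Problem{
		Func: func(x []float64) float64 { return x[0]*x[0] + 4*x[1]*x[1] },
		Grad: func(grad, x []float64) {
			grad[0] = 2 * x[0]
			grad[1] = 8 * x[1]
		},
		Hess: func(hess *mat.SymDense, x []float64) {
			hess.SetSym(0, 0, 2)
			hess.SetSym(0, 1, 0)
			hess.SetSym(1, 1, 8)
		},
	}
	result, err := optimize.Minimize(p, []float64{3, -2}, nil, &optimize.Newton{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result.Status, result.X, result.F)
}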

func (*Newton) Init

func (n *Newton) Init(dim, tasks int) int

func (*Newton) InitDirection

func (n *Newton) InitDirection(loc *Location, dir []float64) (stepSize float64)

func (*Newton) NextDirection

func (n *Newton) NextDirection(loc *Location, dir []float64) (stepSize float64)

func (*Newton) Run

func (n *Newton) Run(operation chan<- Task, result <-chan Task, tasks []Task)

func (*Newton) Status

func (n *Newton) Status() (Status, error)

func (*Newton) Uses

func (*Newton) Uses(has Available) (uses Available, err error)

type NextDirectioner

type NextDirectioner interface {
	// InitDirection initializes the NextDirectioner at the given starting location,
	// putting the initial direction in place into dir, and returning the initial
	// step size. InitDirection must not modify Location.
	InitDirection(loc *Location, dir []float64) (step float64)

	// NextDirection updates the search direction and step size. Location is
	// the location seen at the conclusion of the most recent linesearch. The
	// next search direction is put in place into dir, and the next step size
	// is returned. NextDirection must not modify Location.
	NextDirection(loc *Location, dir []float64) (step float64)
}

NextDirectioner implements a strategy for computing a new line search direction at each major iteration. Typically, a NextDirectioner will be used in conjunction with LinesearchMethod for performing gradient-based optimization through sequential line searches.

type Operation

type Operation uint64

Operation represents the set of operations commanded by Method at each iteration. It is a bitmap of various Iteration and Evaluation constants. Individual constants must NOT be combined together by the binary OR operator except for the Evaluation operations.

const (
	// NoOperation specifies that no evaluation or convergence check should
	// take place.
	NoOperation Operation = 0
	// InitIteration is sent to Recorder to indicate the initial location.
	// All fields of the location to record must be valid.
	// Method must not return it.
	InitIteration Operation = 1 << (iota - 1)
	// PostIteration is sent to Recorder to indicate the final location
	// reached during an optimization run.
	// All fields of the location to record must be valid.
	// Method must not return it.
	PostIteration
	// MajorIteration indicates that the next candidate location for
	// an optimum has been found and convergence should be checked.
	MajorIteration
	// MethodDone declares that the method is done running. A method must
	// be a Statuser in order to use this iteration, and after returning
	// MethodDone, the Status must return other than NotTerminated.
	MethodDone
	// FuncEvaluation specifies that the objective function
	// should be evaluated.
	FuncEvaluation
	// GradEvaluation specifies that the gradient
	// of the objective function should be evaluated.
	GradEvaluation
	// HessEvaluation specifies that the Hessian
	// of the objective function should be evaluated.
	HessEvaluation
)

Supported Operations.

func (Operation) String

func (op Operation) String() string

type PolakRibierePolyak

type PolakRibierePolyak struct {
	// contains filtered or unexported fields
}

PolakRibierePolyak implements the Polak-Ribiere-Polyak variant of the CG method that computes the scaling parameter β_k according to the formula

β_k = max(0, ∇f_{k+1}·y_k / |∇f_k|^2),

where y_k = ∇f_{k+1} - ∇f_k.

func (*PolakRibierePolyak) Beta

func (pr *PolakRibierePolyak) Beta(grad, gradPrev, _ []float64) (beta float64)

func (*PolakRibierePolyak) Init

func (pr *PolakRibierePolyak) Init(loc *Location)

type Printer

type Printer struct {
	Writer          io.Writer
	HeadingInterval int
	ValueInterval   time.Duration
	// contains filtered or unexported fields
}

Printer writes column-format output to the specified writer as the optimization progresses. By default, it writes to os.Stdout.
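
For example (a sketch), a Printer can be attached as the Recorder in Settings to log progress during Minimize:

package main

import (
	"log"
	"os"

	"gonum.org/v1/gonum/optimize"
	"gonum.org/v1/gonum/optimize/functions"
)

func main() {
	p := optimize.Problem{
		Func: functions.ExtendedRosenbrock{}.Func,
		Grad: functions.ExtendedRosenbrock{}.Grad,
	}
	printer := optimize.NewPrinter()
	printer.Writer = os.Stderr // redirect output away from the default os.Stdout

	settings := &optimize.Settings{Recorder: printer}
	if _, err := optimize.Minimize(p, []float64{1.3, 0.7, 0.8}, settings, nil); err != nil {
		log.Fatal(err)
	}
}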

func NewPrinter

func NewPrinter() *Printer

func (*Printer) Init

func (p *Printer) Init() error

func (*Printer) Record

func (p *Printer) Record(loc *Location, op Operation, stats *Stats) error

type Problem

type Problem struct {
	// Func evaluates the objective function at the given location. Func
	// must not modify x.
	Func func(x []float64) float64

	// Grad evaluates the gradient at x and stores the result in grad which will
	// be the same length as x. Grad must not modify x.
	Grad func(grad, x []float64)

	// Hess evaluates the Hessian at x and stores the result in-place in hess which
	// will have dimensions matching the length of x. Hess must not modify x.
	Hess func(hess *mat.SymDense, x []float64)

	// Status reports the status of the objective function being optimized and any
	// error. This can be used to terminate early, for example when the function is
	// not able to evaluate itself. The user can use one of the pre-provided Status
	// constants, or may call NewStatus to create a custom Status value.
	Status func() (Status, error)
}

Problem describes the optimization problem to be solved.
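
A minimal hand-written Problem might look as follows (the quadratic objective is only an illustration). Grad writes into the provided slice and must not modify x:

package main

import (
	"fmt"
	"log"

	"gonum.org/v1/gonum/optimize"
)

func main() {
	p := optimize.Problem{
		// f(x) = (x0-3)^2 + (x1+1)^2
		Func: func(x []float64) float64 {
			d0, d1 := x[0]-3, x[1]+1
			return d0*d0 + d1*d1
		},
		// The gradient is written in place into grad, which has the same length as x.
		Grad: func(grad, x []float64) {
			grad[0] = 2 * (x[0] - 3)
			grad[1] = 2 * (x[1] + 1)
		},
	}
	result, err := optimize.Minimize(p, []float64{0, 0}, nil, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result.X, result.F)
}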

type QuadraticStepSize

type QuadraticStepSize struct {
	// Threshold determines that the initial step size should be estimated by
	// quadratic interpolation when the relative change in the objective
	// function is larger than Threshold.  Otherwise the initial step size is
	// set to 2*previous step size.
	// If Threshold is zero, it will be set to 1e-12.
	Threshold float64
	// InitialStepFactor sets the step size for the first iteration to be InitialStepFactor / |g|_∞.
	// If InitialStepFactor is zero, it will be set to one.
	InitialStepFactor float64
	// MinStepSize is the lower bound on the estimated step size.
	// MinStepSize times GradientAbsTol should always be greater than machine epsilon.
	// If MinStepSize is zero, it will be set to 1e-3.
	MinStepSize float64
	// MaxStepSize is the upper bound on the estimated step size.
	// If MaxStepSize is zero, it will be set to 1.
	MaxStepSize float64
	// contains filtered or unexported fields
}

QuadraticStepSize estimates the initial line search step size as the minimum of a quadratic that interpolates f(x_{k-1}), f(x_k) and ∇f_k⋅p_k. This is useful for line search methods that do not produce well-scaled descent directions, such as gradient descent or conjugate gradient methods. The step size is bounded away from zero.

func (*QuadraticStepSize) Init

func (q *QuadraticStepSize) Init(loc *Location, dir []float64) (stepSize float64)

func (*QuadraticStepSize) StepSize

func (q *QuadraticStepSize) StepSize(loc *Location, dir []float64) (stepSize float64)

type Recorder

type Recorder interface {
	Init() error
	Record(*Location, Operation, *Stats) error
}

A Recorder can record the progress of the optimization, for example to print the progress to StdOut or to a log file. A Recorder must not modify any data.

type Result

type Result struct {
	Location
	Stats
	Status Status
}

Result represents the answer of an optimization run. It contains the optimum function value, X location, and gradient as well as the Status at convergence and Statistics taken during the run.

func Minimize

func Minimize(p Problem, initX []float64, settings *Settings, method Method) (*Result, error)

Minimize uses an optimizer to search for a minimum of a function. A maximization problem can be transformed into a minimization problem by multiplying the function by -1.

The first argument represents the problem to be minimized. Its fields are routines that evaluate the objective function, gradient, and other quantities related to the problem. The objective function, p.Func, must not be nil. The optimization method used may require other fields to be non-nil, as reported by the method's Uses function; Minimize will panic if these requirements are not met. The method can be determined automatically from the supplied problem, as described below.

If p.Status is not nil, it is called before every evaluation. If the returned Status is other than NotTerminated or if the error is not nil, the optimization run is terminated.

The second argument specifies the initial location for the optimization. Some Methods do not require an initial location, but initX must still be specified so that the dimension of the optimization problem is known.

The third argument contains the settings for the minimization. If settings is nil, the zero value will be used; see the documentation of the Settings type for more information, and note the warning below. All settings will be honored for all Methods, even if that setting is counter-productive to the method. Minimize cannot guarantee strict adherence to the specified evaluation bounds when performing concurrent evaluations and updates.

The final argument is the optimization method to use. If method == nil, then an appropriate default is chosen based on the properties of the other arguments (dimension, gradient-free or gradient-based, etc.). If method is not nil, Minimize panics if the Problem is not consistent with the Method (Uses returns an error).

Minimize returns a Result struct and any error that occurred. See the documentation of Result for more information.

See the documentation for Method for the details on implementing a method.

Be aware that the default settings of Minimize are to accurately find the minimum. For certain functions and optimization methods, this can take many function evaluations. The Settings input struct can be used to limit this, for example by modifying the maximum function evaluations or gradient tolerance.

Example
package main

import (
	"fmt"
	"log"

	"gonum.org/v1/gonum/optimize"
	"gonum.org/v1/gonum/optimize/functions"
)

func main() {
	p := optimize.Problem{
		Func: functions.ExtendedRosenbrock{}.Func,
		Grad: functions.ExtendedRosenbrock{}.Grad,
	}

	x := []float64{1.3, 0.7, 0.8, 1.9, 1.2}
	result, err := optimize.Minimize(p, x, nil, nil)
	if err != nil {
		log.Fatal(err)
	}
	if err = result.Status.Err(); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("result.Status: %v\n", result.Status)
	fmt.Printf("result.X: %0.4g\n", result.X)
	fmt.Printf("result.F: %0.4g\n", result.F)
	fmt.Printf("result.Stats.FuncEvaluations: %d\n", result.Stats.FuncEvaluations)
}
Output:

result.Status: GradientThreshold
result.X: [1 1 1 1 1]
result.F: 4.98e-30
result.Stats.FuncEvaluations: 31

type Settings

type Settings struct {
	// InitValues specifies properties (function value, gradient, etc.) known
	// at the initial location passed to Minimize. If InitValues is non-nil, then
	// the function value F must be provided, the location X must not be specified
	// and other fields may be specified. The values in Location may be modified
	// during the call to Minimize.
	InitValues *Location

	// GradientThreshold stops optimization with GradientThreshold status if the
	// infinity norm of the gradient is less than this value. This defaults to
	// a value of 0 (and so gradient convergence is not checked); however, note
	// that many Methods (LBFGS, CG, etc.) will converge with a small value of
	// the gradient, and so to fully disable this setting the Method may need to
	// be modified.
	// This setting has no effect if the gradient is not used by the Method.
	GradientThreshold float64

	// Converger checks if the optimization has converged based on the (history
	// of) locations found during the optimization. Minimize will pass the
	// Location at every MajorIteration to the Converger.
	//
	// If the Converger is nil, a default value of
	//  FunctionConverge {
	//		Absolute: 1e-10,
	//		Iterations: 100,
	//  }
	// will be used. NeverTerminated can be used to always return a
	// NotTerminated status.
	Converger Converger

	// MajorIterations is the maximum number of iterations allowed.
	// IterationLimit status is returned if the number of major iterations
	// equals or exceeds this value.
	// If it equals zero, this setting has no effect.
	// The default value is 0.
	MajorIterations int

	// Runtime is the maximum runtime allowed. RuntimeLimit status is returned
	// if the duration of the run is longer than this value. Runtime is only
	// checked at MajorIterations of the Method.
	// If it equals zero, this setting has no effect.
	// The default value is 0.
	Runtime time.Duration

	// FuncEvaluations is the maximum allowed number of function evaluations.
	// FunctionEvaluationLimit status is returned if the total number of calls
	// to Func equals or exceeds this number.
	// If it equals zero, this setting has no effect.
	// The default value is 0.
	FuncEvaluations int

	// GradEvaluations is the maximum allowed number of gradient evaluations.
	// GradientEvaluationLimit status is returned if the total number of calls
	// to Grad equals or exceeds this number.
	// If it equals zero, this setting has no effect.
	// The default value is 0.
	GradEvaluations int

	// HessEvaluations is the maximum allowed number of Hessian evaluations.
	// HessianEvaluationLimit status is returned if the total number of calls
	// to Hess equals or exceeds this number.
	// If it equals zero, this setting has no effect.
	// The default value is 0.
	HessEvaluations int

	Recorder Recorder

	// Concurrent represents how many concurrent evaluations are possible.
	Concurrent int
}

Settings represents settings of the optimization run. It contains initial settings, convergence information, and Recorder information. Convergence settings are only checked at MajorIterations, while Evaluation thresholds are checked at every Operation. See the field comments for default values.
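
An illustrative Settings value that caps the run (the limits are arbitrary):

package main

import (
	"fmt"
	"log"
	"time"

	"gonum.org/v1/gonum/optimize"
	"gonum.org/v1/gonum/optimize/functions"
)

func main() {
	p := optimize.Problem{
		Func: functions.ExtendedRosenbrock{}.Func,
		Grad: functions.ExtendedRosenbrock{}.Grad,
	}
	settings := &optimize.Settings{
		MajorIterations:   100,             // IterationLimit after 100 major iterations
		FuncEvaluations:   1000,            // FunctionEvaluationLimit after 1000 calls to Func
		Runtime:           5 * time.Second, // RuntimeLimit; checked only at major iterations
		GradientThreshold: 1e-6,            // stop once the infinity norm of the gradient is below 1e-6
	}
	result, err := optimize.Minimize(p, []float64{1.3, 0.7, 0.8}, settings, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result.Status, result.Stats.FuncEvaluations)
}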

type Stats

type Stats struct {
	MajorIterations int           // Total number of major iterations
	FuncEvaluations int           // Number of evaluations of Func
	GradEvaluations int           // Number of evaluations of Grad
	HessEvaluations int           // Number of evaluations of Hess
	Runtime         time.Duration // Total runtime of the optimization
}

Stats contains the statistics of the run.

type Status

type Status int

Status represents the status of the optimization. Programs should not rely on the underlying numeric value of the Status being constant.

const (
	NotTerminated Status = iota
	Success
	FunctionThreshold
	FunctionConvergence
	GradientThreshold
	StepConvergence
	FunctionNegativeInfinity
	MethodConverge
	Failure
	IterationLimit
	RuntimeLimit
	FunctionEvaluationLimit
	GradientEvaluationLimit
	HessianEvaluationLimit
)

func NewStatus

func NewStatus(name string, early bool, err error) Status

NewStatus returns a unique Status variable to represent a custom status. NewStatus is intended to be called only during package initialization, and calls to NewStatus are not thread safe.

NewStatus takes in three arguments, the string that should be output from Status.String, a boolean if the status indicates early optimization conclusion, and the error to return from Err (if any).
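
As an illustration (the status and error here are hypothetical), Problem.Status can return a value created with NewStatus to end a run early:

package main

import (
	"errors"
	"fmt"

	"gonum.org/v1/gonum/optimize"
)

// A hypothetical custom status, created during package initialization.
var (
	errBudget    = errors.New("opt: evaluation budget exhausted")
	budgetStatus = optimize.NewStatus("BudgetExhausted", true, errBudget)
)

func main() {
	calls := 0
	p := optimize.Problem{
		Func: func(x []float64) float64 {
			calls++
			return x[0]*x[0] + x[1]*x[1]
		},
		// Status is consulted before every evaluation.
		Status: func() (optimize.Status, error) {
			if calls > 30 {
				return budgetStatus, errBudget
			}
			return optimize.NotTerminated, nil
		},
	}
	result, err := optimize.Minimize(p, []float64{10, 10}, nil, nil)
	if err != nil {
		fmt.Println("terminated early:", err)
	}
	if result != nil {
		fmt.Println("status:", result.Status)
	}
}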

func (Status) Early

func (s Status) Early() bool

Early returns true if the status indicates the optimization ended before a minimum was found. For example, if the maximum number of iterations was reached, a minimum was not found, whereas if the gradient norm threshold was reached, a minimum was found.

func (Status) Err

func (s Status) Err() error

Err returns the error associated with an early ending to the minimization. If Early returns false, Err will return nil.

func (Status) String

func (s Status) String() string

type Statuser

type Statuser interface {
	Status() (Status, error)
}

Statuser can report the status and any error. It is intended for methods as an additional error reporting mechanism apart from the errors returned from Init and Iterate.

type StepSizer

type StepSizer interface {
	Init(loc *Location, dir []float64) float64
	StepSize(loc *Location, dir []float64) float64
}

StepSizer can set the next step size of the optimization given the last Location. Returned step size must be positive.

type Task

type Task struct {
	ID int
	Op Operation
	*Location
}

Task is a type to communicate between the Method and the outer calling script.

Directories

Path          Synopsis
convex/lp     Package lp implements routines to solve linear programming problems.
functions     Package functions provides objective functions for testing optimization algorithms.
