linsolve

Published: Nov 28, 2023 License: BSD-3-Clause

Documentation

Overview

Package linsolve provides iterative methods for solving linear systems.

Background

A system of linear equations can be written as

A * x = b,

where A is a given n×n non-singular matrix, b is a given n-vector (the right-hand side), and x is an unknown n-vector.

Direct methods such as the LU or QR decomposition compute (in the absence of roundoff errors) the exact solution after a finite number of steps. For a general matrix A they require O(n^3) arithmetic operations, which becomes infeasible for large n due to excessive memory and time cost.

Iterative methods, in contrast, generally do not compute the exact solution x. Starting from an initial estimate x_0, they instead compute a sequence x_i of increasingly accurate approximations to x. This iterative process is stopped when the estimated difference between x_i and the true x becomes smaller than a prescribed threshold. The number of iterations thus depends on the value of the threshold. If the desired threshold is very small, then the iterative methods may take as long as, or longer than, a direct method. However, for many problems a decent approximation can be found in a small number of steps.

The iterative methods implemented in this package do not access the elements of A directly; instead, they ask for the result of matrix-vector products with A. For a general n×n matrix this requires O(n^2) operations, but it can be much cheaper depending on the structure of the matrix (sparse, banded, block-structured, etc.). Such structure often arises in practical applications. An iterative method can thus be significantly cheaper than a direct method, by using a small number of iterations and taking advantage of matrix structure.

Iterative methods are most often useful in the following situations:

  • The system matrix A is sparse, blocked or has other special structure,
  • The problem size is sufficiently large that a dense factorization of A is costly in terms of compute time and/or memory storage,
  • Computing the product of A (or A^T, if necessary) with a vector can be done efficiently,
  • An approximate solution is all that is required.

Using linsolve

The two most important elements of the API are the MulVecToer interface and the Iterative function.

MulVecToer interface

The MulVecToer interface represents the system matrix A. This abstracts the details of any particular matrix storage, and allows the user to exploit the properties of their particular matrix. Matrix types provided by the gonum/mat and github.com/james-bowman/sparse packages implement this interface.

Note that methods in this package have only limited means for checking whether the provided MulVecToer represents a matrix that satisfies all assumptions made by the chosen Method, for example whether the matrix is actually symmetric positive definite.
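
As an illustration, a user-defined matrix type only needs a MulVecTo method. The following sketch (not part of this package, and assuming the gonum.org/v1/gonum/mat import) implements MulVecToer for a hypothetical symmetric tridiagonal matrix stored by its two diagonals; because the matrix is symmetric, the trans argument can be ignored, whereas a non-symmetric type would have to compute Aᵀ*x when trans is true.

// tridiag is a hypothetical symmetric tridiagonal matrix stored by its main
// diagonal d and its first off-diagonal e, with len(e) == len(d)-1.
type tridiag struct {
	d, e []float64
}

// MulVecTo computes A*x and stores the result into dst. The trans argument is
// ignored because A is symmetric.
func (t tridiag) MulVecTo(dst *mat.VecDense, _ bool, x mat.Vector) {
	n := len(t.d)
	for i := 0; i < n; i++ {
		v := t.d[i] * x.AtVec(i)
		if i > 0 {
			v += t.e[i-1] * x.AtVec(i-1)
		}
		if i < n-1 {
			v += t.e[i] * x.AtVec(i+1)
		}
		dst.SetVec(i, v)
	}
}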

Iterative function

The Iterative function is the entry point to the functionality provided by this package. It takes as parameters the matrix A (via the MulVecToer interface as discussed above), the right-hand side vector b, the iterative method, and settings that control the iterative process and provide a way to reuse memory.
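
For example, assuming a value a that implements MulVecToer, a right-hand side b of type *mat.VecDense, and imports of fmt and gonum.org/v1/gonum/mat, a solve with the default method and settings is a short sketch:

result, err := linsolve.Iterative(a, b, nil, nil) // nil method and settings select the defaults.
if err != nil {
	fmt.Println("solve failed:", err)
	return
}
fmt.Println("approximate solution:", mat.Formatted(result.X.T()))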

Choosing an iterative method

The choice of an iterative method is typically guided by the properties of the matrix A including symmetry, definiteness, sparsity, conditioning, and block structure. In general, performance on symmetric matrices is well understood (see the references below), with the conjugate gradient method being a good starting point. Non-symmetric matrices are much more difficult to assess; any suggestion of a 'best' method is usually accompanied by a recommendation to use trial and error.

Preconditioning

Preconditioning is a family of techniques that attempt to transform the linear system into one that has the same solution but a more favorable eigenspectrum. The transformation matrix is called a preconditioner. A good preconditioner will reduce the number of iterations needed to find a good approximate solution (hopefully enough to overcome the cost of applying the preconditioning!), and in some cases preconditioning is necessary to get any kind of convergence. In linsolve a preconditioner is specified by Settings.PreconSolve.
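
As a minimal sketch (not part of this package), a Jacobi (diagonal) preconditioner could be supplied through Settings.PreconSolve, assuming the diagonal of A is available as a slice; jacobiPrecon is an illustrative name.

// jacobiPrecon returns a PreconSolve function for the diagonal preconditioner
// M = diag(A). Solving M*z = r reduces to element-wise division by the
// diagonal, and trans can be ignored because M is symmetric.
func jacobiPrecon(diag []float64) func(dst *mat.VecDense, trans bool, rhs mat.Vector) error {
	return func(dst *mat.VecDense, _ bool, rhs mat.Vector) error {
		for i := 0; i < rhs.Len(); i++ {
			dst.SetVec(i, rhs.AtVec(i)/diag[i])
		}
		return nil
	}
}

It would then be passed to Iterative as &linsolve.Settings{PreconSolve: jacobiPrecon(diagOfA)}, where diagOfA holds the diagonal of A.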

Implementing Method interface

This package allows external implementations of iterative solvers by means of the Method interface. It uses a reverse-communication style of API to "outsource" operations such as matrix-vector multiplication, preconditioner solve or convergence checks to the caller. The caller performs the commanded operation and passes the result back to Method. The matrix A and the right-hand side b are not directly available to Methods, which encourages cleaner implementations. See the documentation for Method, Operation, and Context for more information.
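
The following sketch shows what the caller's side of this loop might look like. It is a simplified version of what Iterative already does for you: solve is a hypothetical helper (not part of this package) that assumes an identity preconditioner, the zero vector as the initial estimate, and the gonum.org/v1/gonum/mat import.

// solve drives a Method directly, handling the Operations it commands.
func solve(a linsolve.MulVecToer, b *mat.VecDense, method linsolve.Method, tol float64) (*mat.VecDense, error) {
	n := b.Len()
	ctx := linsolve.NewContext(n)
	r := mat.NewVecDense(n, nil)
	r.CopyVec(b) // The residual of the zero initial estimate is b.
	method.Init(mat.NewVecDense(n, nil), r)
	bnorm := mat.Norm(b, 2)
	for {
		op, err := method.Iterate(ctx)
		if err != nil {
			return nil, err
		}
		trans := op&linsolve.Trans != 0
		switch op &^ linsolve.Trans {
		case linsolve.MulVec:
			a.MulVecTo(ctx.Dst, trans, ctx.Src) // Dst = A*Src (or Aᵀ*Src).
		case linsolve.PreconSolve:
			ctx.Dst.CopyVec(ctx.Src) // Identity preconditioner: z = r.
		case linsolve.ComputeResidual:
			a.MulVecTo(ctx.Dst, false, ctx.X)
			ctx.Dst.SubVec(b, ctx.Dst) // Dst = b - A*X.
		case linsolve.CheckResidualNorm:
			ctx.Converged = ctx.ResidualNorm < tol*bnorm
		case linsolve.MajorIteration:
			if ctx.Converged {
				return ctx.X, nil
			}
		}
	}
}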

References

Further details about computational practice and mathematical theory of iterative methods can be found in the following references:

Constants

This section is empty.

Variables

var ErrIterationLimit = errors.New("linsolve: iteration limit reached")

ErrIterationLimit is returned when the maximum number of iterations has been reached without converging to a solution.
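
A caller might detect this condition with errors.Is, for example (a sketch; a and b are a MulVecToer and right-hand side as in the examples above):

_, err := linsolve.Iterative(a, b, &linsolve.CG{}, &linsolve.Settings{MaxIterations: 10})
if errors.Is(err, linsolve.ErrIterationLimit) {
	// Not converged within 10 iterations: retry with a larger limit,
	// a looser Tolerance, or a preconditioner.
}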

Functions

func NoPreconditioner

func NoPreconditioner(dst *mat.VecDense, trans bool, rhs mat.Vector) error

NoPreconditioner implements the identity preconditioner.

Types

type BiCG

type BiCG struct {
	// contains filtered or unexported fields
}

BiCG implements the Bi-Conjugate Gradient method with preconditioning for solving systems of linear equations

A * x = b,

where A is a nonsymmetric, nonsingular matrix. It uses limited memory storage but the convergence may be irregular and the method may break down. BiCG requires a multiplication with A and Aᵀ at each iteration. BiCGStab is a related method that does not use Aᵀ.

References:

  • Barrett, R. et al. (1994). Section 2.3.5 BiConjugate Gradient (BiCG). In Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods (2nd ed.) (pp. 19-20). Philadelphia, PA: SIAM. Retrieved from http://www.netlib.org/templates/templates.pdf

func (*BiCG) Init

func (b *BiCG) Init(x, residual *mat.VecDense)

Init initializes the data for a linear solve. See the Method interface for more details.

func (*BiCG) Iterate

func (b *BiCG) Iterate(ctx *Context) (Operation, error)

Iterate performs an iteration of the linear solve. See the Method interface for more details.

BiCG will command the following operations:

MulVec
MulVec|Trans
PreconSolve
PreconSolve|Trans
CheckResidualNorm
MajorIteration
NoOperation

type BiCGStab

type BiCGStab struct {
	// contains filtered or unexported fields
}

BiCGStab implements the BiConjugate Gradient Stabilized method with preconditioning for solving systems of linear equations

A * x = b,

where A is a nonsymmetric, nonsingular matrix. The method is a variant of BiCG but offers a smoother convergence and does not require multiplication with Aᵀ.

References:

  • Barrett, R. et al. (1994). Section 2.3.8 BiConjugate Gradient Stabilized (Bi-CGSTAB). In Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods (2nd ed.) (pp. 24-25). Philadelphia, PA: SIAM. Retrieved from http://www.netlib.org/templates/templates.pdf

func (*BiCGStab) Init

func (b *BiCGStab) Init(x, residual *mat.VecDense)

Init initializes the data for a linear solve. See the Method interface for more details.

func (*BiCGStab) Iterate

func (b *BiCGStab) Iterate(ctx *Context) (Operation, error)

Iterate performs an iteration of the linear solve. See the Method interface for more details.

BiCGStab will command the following operations:

MulVec
PreconSolve
CheckResidualNorm
MajorIteration
NoOperation

type BreakdownError

type BreakdownError struct {
	Value     float64
	Tolerance float64
}

BreakdownError signifies that a breakdown occurred and the method cannot continue.

func (*BreakdownError) Error

func (e *BreakdownError) Error() string

type CG

type CG struct {
	// contains filtered or unexported fields
}

CG implements the Conjugate Gradient iterative method with preconditioning for solving systems of linear equations

A * x = b,

where A is a symmetric positive definite matrix. It requires minimal memory storage and is a good choice for symmetric positive definite problems.

References:

  • Barrett, Richard et al. (1994). Section 2.3.1 Conjugate Gradient Method (CG). In Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods (2nd ed.) (pp. 12-15). Philadelphia, PA: SIAM. Retrieved from http://www.netlib.org/templates/templates.pdf
  • Hestenes, M., and Stiefel, E. (1952). Methods of conjugate gradients for solving linear systems. Journal of Research of the National Bureau of Standards, 49(6), 409. doi:10.6028/jres.049.044
  • Málek, J. and Strakoš, Z. (2015). Preconditioning and the Conjugate Gradient Method in the Context of Solving PDEs. Philadelphia, PA: SIAM.

func (*CG) Init

func (cg *CG) Init(x, residual *mat.VecDense)

Init initializes the data for a linear solve. See the Method interface for more details.

func (*CG) Iterate

func (cg *CG) Iterate(ctx *Context) (Operation, error)

Iterate performs an iteration of the linear solve. See the Method interface for more details.

CG will command the following operations:

MulVec
PreconSolve
CheckResidualNorm
MajorIteration

type Context

type Context struct {
	// X will be set by Method to the current approximate
	// solution when it commands ComputeResidual and MajorIteration.
	X *mat.VecDense

	// ResidualNorm is (an estimate of) a norm of
	// the residual. Method will set it to the current
	// value when it commands CheckResidualNorm.
	ResidualNorm float64

	// Converged indicates to Method whether ResidualNorm
	// satisfies a stopping criterion as a result of
	// CheckResidualNorm operation.
	Converged bool

	// Src and Dst are the source and destination vectors
	// for various Operations. Src will be set by Method
	// and the caller must store the result in Dst. See
	// the Operation documentation for more information.
	Src, Dst *mat.VecDense
}

Context mediates the communication between the Method and the caller. The caller must not modify Context apart from the commanded Operations.

func NewContext

func NewContext(n int) *Context

NewContext returns a new Context for work on problems of dimension n. NewContext will panic if n is not positive.

func (*Context) Reset

func (ctx *Context) Reset(n int)

Reset reinitializes the Context for work on problems of dimension n. Reset will panic if n is not positive.

type GMRES

type GMRES struct {
	// Restart is the restart parameter which limits the computation and
	// storage costs. It must hold that
	//  1 <= Restart <= n
	// where n is the dimension of the problem. If Restart is 0, n will be
	// used instead. This guarantees convergence of GMRES and increases
	// robustness. Many specific problems however, particularly for large
	// n, will benefit in efficiency by setting Restart to
	// a problem-dependent value less than n.
	Restart int
	// contains filtered or unexported fields
}

GMRES implements the Generalized Minimum Residual method with the modified Gram-Schmidt orthogonalization for solving systems of linear equations

A * x = b,

where A is a nonsymmetric, nonsingular matrix. It may find a solution in fewer iterations and with fewer matrix-vector products compared to BiCG or BiCGStab, at the price of much larger memory storage. This implementation uses restarts to limit the memory requirements. GMRES does not need multiplication with Aᵀ.

References:

  • Barrett, R. et al. (1994). Section 2.3.4 Generalized Minimal Residual (GMRES). In Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods (2nd ed.) (pp. 17-19). Philadelphia, PA: SIAM. Retrieved from http://www.netlib.org/templates/templates.pdf
  • Saad, Y., and Schultz, M. (1986). GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput., 7(3), 856. doi:10.1137/0907058 Retrieved from https://web.stanford.edu/class/cme324/saad-schultz.pdf

func (*GMRES) Init

func (g *GMRES) Init(x, residual *mat.VecDense)

Init initializes the data for a linear solve. See the Method interface for more details.

func (*GMRES) Iterate

func (g *GMRES) Iterate(ctx *Context) (Operation, error)

Iterate performs an iteration of the linear solve. See the Method interface for more details.

GMRES will command the following operations:

MulVec
PreconSolve
CheckResidualNorm
MajorIteration
NoOperation

type Method

type Method interface {
	// Init initializes the method for solving an n×n
	// linear system with an initial estimate x and
	// the corresponding residual vector.
	//
	// Method will not retain x or residual.
	Init(x, residual *mat.VecDense)

	// Iterate performs a step in converging to the
	// solution of a linear system.
	//
	// Iterate retrieves data from Context, updates it,
	// and returns the next operation. The caller must
	// perform the Operation using data in Context, and
	// depending on the state call Iterate again.
	Iterate(*Context) (Operation, error)
}

Method is an iterative method that produces a sequence of vectors that converge to the solution of the system of linear equations

A * x = b,

where A is a non-singular n×n matrix, and x and b are vectors of dimension n.

Method uses a reverse-communication interface between the iterative algorithm and the caller. Method acts as a client that commands the caller to perform needed operations via an Operation returned from the Iterate method. This makes Method independent of the representation of the matrix A, and enables automation of common operations like checking for convergence and maintaining statistics.

type MulVecToer

type MulVecToer interface {
	// MulVecTo computes A*x or Aᵀ*x and stores the result into dst.
	MulVecTo(dst *mat.VecDense, trans bool, x mat.Vector)
}

MulVecToer represents a square matrix A by means of a matrix-vector multiplication.

type Operation

type Operation uint

Operation specifies the type of operation.

const (
	NoOperation Operation = 0

	// Compute A*x where x is stored in Context.Src. The
	// result must be placed in Context.Dst.
	MulVec Operation = 1 << (iota - 1)

	// Perform a preconditioner solve
	//  M z = r
	// where r is stored in Context.Src. The solution z
	// must be placed in Context.Dst.
	PreconSolve

	// Trans indicates that MulVec or PreconSolve
	// operation must be performed with the transpose,
	// that is, compute Aᵀ*x or solve Mᵀ*z = r. Method
	// will command Trans only in bitwise OR combination
	// with MulVec and PreconSolve.
	Trans

	// Compute b-A*x where x is stored in Context.X,
	// and store the result in Context.Dst.
	ComputeResidual

	// Check convergence using (an estimate of) a
	// residual norm in Context.ResidualNorm. Context.X
	// does not need to be valid. The caller must set
	// Context.Converged to indicate whether convergence
	// has been determined, and then it must call
	// Method.Iterate again.
	CheckResidualNorm

	// MajorIteration indicates that Method has finished
	// what it considers to be one iteration. Method
	// will make sure that Context.X is updated.
	// If Context.Converged is true, the caller must
	// terminate the iterative process, otherwise it
	// should call Method.Iterate again.
	MajorIteration
)

Operations commanded by Method.Iterate.

type Result

type Result struct {
	// X is the approximate solution.
	X *mat.VecDense

	// ResidualNorm is an approximation to the norm of the final residual.
	ResidualNorm float64

	// Stats holds statistics about the iterative solve.
	Stats Stats
}

Result holds the result of an iterative solve.

func Iterative

func Iterative(a MulVecToer, b *mat.VecDense, m Method, settings *Settings) (*Result, error)

Iterative finds an approximate solution of the system of n linear equations

A*x = b,

where A is a nonsingular square matrix of order n and b is the right-hand side vector, using an iterative method m. If m is nil, default GMRES will be used.

settings provide means for adjusting parameters of the iterative process. See the Settings documentation for more information. Iterative will not modify the fields of Settings. If settings is nil, default settings will be used.

Note that the default choices of Method and Settings were chosen to provide accuracy and robustness, rather than speed. There are many algorithms for iterative linear solutions that have different tradeoffs, and can exploit special structure in the A matrix. Similarly, in many cases the number of iterations can be significantly reduced by using an appropriate preconditioner or increasing the error tolerance. Combined, these choices can significantly reduce computation time. Thus, while Iterative has supplied defaults, users are strongly encouraged to adjust these defaults for their problem.
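
For instance, a caller that can accept a looser tolerance and has a problem-specific preconditioner might write something like the following sketch, where myPreconSolve is a hypothetical function with the Settings.PreconSolve signature:

settings := linsolve.Settings{
	Tolerance:     1e-6,          // Accept a less accurate solution than the default 1e-8.
	MaxIterations: 1000,          // Override the default limit of twice the system dimension.
	PreconSolve:   myPreconSolve, // Hypothetical problem-specific preconditioner solve.
}
result, err := linsolve.Iterative(a, b, &linsolve.BiCGStab{}, &settings)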

Example
package main

import (
	"fmt"
	"math"

	"gonum.org/v1/exp/linsolve"
	"gonum.org/v1/gonum/floats"
	"gonum.org/v1/gonum/mat"
)

// System represents a linear system with a symmetric band matrix
//
//	A*x = b
type System struct {
	A *mat.SymBandDense
	B *mat.VecDense
}

// L2Projection returns a linear system whose solution is the L2 projection of f
// into the space of piecewise linear functions defined on the given grid.
//
// References:
//   - M. Larson, F. Bengzon, The Finite Element Method: Theory,
//     Implementations, and Applications. Springer (2013), Section 1.3, also
//     available at:
//     http://www.springer.com/cda/content/document/cda_downloaddocument/9783642332869-c1.pdf
func L2Projection(grid []float64, f func(float64) float64) System {
	n := len(grid)

	// Assemble the symmetric banded mass matrix by iterating over all elements.
	A := mat.NewSymBandDense(n, 1, nil)
	for i := 0; i < n-1; i++ {
		// h is the length of the i-th element.
		h := grid[i+1] - grid[i]
		// Add contribution from the i-th element.
		A.SetSymBand(i, i, A.At(i, i)+h/3)
		A.SetSymBand(i, i+1, h/6)
		A.SetSymBand(i+1, i+1, A.At(i+1, i+1)+h/3)
	}

	// Assemble the load vector by iterating over all elements.
	b := mat.NewVecDense(n, nil)
	for i := 0; i < n-1; i++ {
		h := grid[i+1] - grid[i]
		b.SetVec(i, b.AtVec(i)+f(grid[i])*h/2)
		b.SetVec(i+1, b.AtVec(i+1)+f(grid[i+1])*h/2)
	}

	return System{A, b}
}

func main() {
	const (
		n  = 10
		x0 = 0.0
		x1 = 1.0
	)
	// Make a uniform grid.
	grid := make([]float64, n+1)
	floats.Span(grid, x0, x1)
	sys := L2Projection(grid, func(x float64) float64 {
		return x * math.Sin(x)
	})

	result, err := linsolve.Iterative(sys.A, sys.B, &linsolve.CG{}, nil)
	if err != nil {
		fmt.Println("Error:", err)
		return
	}

	fmt.Printf("# iterations: %v\n", result.Stats.Iterations)
	fmt.Printf("Final solution: %.6f\n", mat.Formatted(result.X.T()))

}
Output:

# iterations: 11
Final solution: [-0.003339   0.006677   0.036530   0.085606   0.152981   0.237072   0.337006   0.447616   0.578244   0.682719   0.920847]
Example (EvolutionPDE)
package main

import (
	"fmt"
	"log"

	"golang.org/x/exp/rand"

	"gonum.org/v1/exp/linsolve"
	"gonum.org/v1/gonum/mat"
	"gonum.org/v1/plot"
	"gonum.org/v1/plot/plotter"
	"gonum.org/v1/plot/plotutil"
	"gonum.org/v1/plot/vg"
)

// AllenCahnFD implements a semi-implicit finite difference scheme for the
// solution of the one-dimensional Allen-Cahn equation
//
//	 u_t = u_xx - 1/ξ²·f'(u)  in (0,L)×(0,T)
//	 u_x = 0                  on (0,T)
//	u(0) = u0                 on (0,L)
//
// where f is a double-well-shaped function with two minima at ±1
//
//	f(s) = 1/4·(s²-1)²
//
// The equation arises in materials science in the description of phase
// transitions, e.g. solidification in crystal growth, but also in other areas
// like image processing due to its connection to mean-curvature motion.
// Starting the evolution from an initial distribution u0, the solution u
// develops a thin steep layer, an interface between regions of the domain
// (0,L) where u is constant and close to one of the minima of f.
//
// AllenCahnFD approximates derivatives by finite differences and the solution
// is advanced in time by a semi-implicit Euler scheme where the nonlinear term
// is taken from the previous time step. Therefore, at each time step a linear
// system must be solved.
type AllenCahnFD struct {
	// Xi is the ξ parameter that determines the interface width.
	Xi float64

	// InitCond is the initial condition u0.
	InitCond func(x float64) float64

	h   float64 // Spatial step size
	tau float64 // Time step size

	a *mat.SymBandDense
	b *mat.VecDense
	u *mat.VecDense

	ls         linsolve.Method
	lssettings linsolve.Settings
}

// FPrime returns the value of the derivative of the double-well potential f at s.
//
//	f'(s) = s·(s²-1)
func FPrime(s float64) float64 {
	return s * (s*s - 1)
}

// Setup initializes the receiver for solving the Allen-Cahn equation on a
// uniform grid with n+1 nodes on the spatial interval (0,L) and with the time
// step size tau.
func (ac *AllenCahnFD) Setup(n int, L float64, tau float64) {
	ac.h = L / float64(n)
	ac.tau = tau

	// We solve this PDE numerically by replacing the derivatives with finite
	// differences. For the spatial derivative, we use a central difference
	// scheme, and for the time derivative we use semi-implicit Euler
	// integration where the non-linear term with f' is taken from the previous
	// time step.
	//
	// After replacing the derivatives we get
	//  1/tau*(u^{k+1}_i - u^k_i) = 1/h²*(u^{k+1}_{i-1} - 2*u^{k+1}_i + u^{k+1}_{i+1}) - 1/ξ²*f'(u^k_i)
	// where tau is the time step size, h is the spatial step size, and u^k_i
	// denotes the numerical solution that approximates u at time level k and
	// grid node i, that is,
	//  u^k_i ≅ u(k*tau,i*h)
	// Multiplying the equation by tau and collecting the terms from the same
	// time level on each side gives
	//  -tau/h²*u^{k+1}_{i-1} + (1+2*tau/h²)*u^{k+1}_i - tau/h²*u^{k+1}_{i+1} = u^k_i - tau/ξ²*f'(u^k_i)
	// If we denote C:=tau/h² we can simplify this to
	//  -C*u^{k+1}_{i-1} + (1+2*C)*u^{k+1}_i - C*u^{k+1}_{i+1} = u^k_i - tau/ξ²*f'(u^k_i)   (1)
	// which must hold for all i=0,...,n.
	//
	// When i=0 or i=n, the expression (1) refers to values at nodes -1 and n+1
	// which lie outside of the domain. We can eliminate them by using the fact
	// that the first derivative is zero at the boundary. We approximate it by
	// central difference:
	//      -1/h*(u^{k+1}_{-1} - u^{k+1}_1) = 0
	//  1/h*(u^{k+1}_{n+1} - u^{k+1}_{n-1}) = 0
	// which after simplifying gives
	//   u^{k+1}_{-1} = u^{k+1}_1
	//  u^{k+1}_{n+1} = u^{k+1}_{n-1}
	// By substituting these two expressions into (1) at i=0 and i=n values at
	// outside nodes do not appear in the expressions. If we further divide them
	// by 2 (to obtain a symmetric matrix), we finally get
	//  (1/2+C)*u^{k+1}_0 - C*u^{k+1}_1 = 1/2*u^k_0 - 1/2*tau/ξ²*f'(u^k_0)                  (2)
	//  -C*u^{k+1}_{n-1} + (1/2+C)*u^{k+1}_n = 1/2*u^k_n - 1/2*tau/ξ²*f'(u^k_n)             (3)
	// Note that this simply means that we treat values at the boundary nodes the
	// same as the inner nodes.
	//
	// For a given k the equations (1),(2),(3) form a linear system for the
	// unknown vector [u^{k+1}_i], i=0,...,n that must be solved at each step in
	// order to advance the solution in time. The matrix of this system is
	// tridiagonal and symmetric positive-definite:
	//  ⎛1/2+C   -C    0    .    .    .    .     0⎞
	//  ⎜   -C 1+2C   -C                         .⎟
	//  ⎜    0   -C 1+2C   -C                    .⎟
	//  ⎜    .        -C    .    .               .⎟
	//  ⎜    .              .    .    .          .⎟
	//  ⎜    .                   .    .   -C     0⎟
	//  ⎜    .                       -C 1+2C    -C⎟
	//  ⎝    0    .    .    .    .    0   -C 1/2+C⎠
	// The right-hand side is:
	//  ⎛1/2*u^k_0     - 1/2*tau/ξ²*f'(u^k_0)    ⎞
	//  ⎜    u^k_1     -     tau/ξ²*f'(u^k_1)    ⎟
	//  ⎜    u^k_2     -     tau/ξ²*f'(u^k_2)    ⎟
	//  ⎜              .                         ⎟
	//  ⎜              .                         ⎟
	//  ⎜              .                         ⎟
	//  ⎜    u^k_{n-1} -     tau/ξ²*f'(u^k_{n-1})⎟
	//  ⎝1/2*u^k_n     - 1/2*tau/ξ²*f'(u^k_n)    ⎠

	// Assemble the system matrix A based on the discretization scheme above.
	// Since the matrix is symmetric, we only need to set elements in the upper
	// triangle.
	A := mat.NewSymBandDense(n+1, 1, nil)
	c := ac.tau / ac.h / ac.h
	// Boundary condition at the left node
	A.SetSymBand(0, 0, 0.5+c)
	A.SetSymBand(0, 1, -c)
	// Interior nodes
	for i := 1; i < n; i++ {
		A.SetSymBand(i, i, 1+2*c)
		A.SetSymBand(i, i+1, -c)
	}
	// Boundary condition at the right node
	A.SetSymBand(n, n, 0.5+c)
	ac.a = A

	// Allocate the right-hand side b.
	ac.b = mat.NewVecDense(n+1, nil)

	// Allocate and set up the initial condition.
	ac.u = mat.NewVecDense(n+1, nil)
	for i := 0; i < ac.u.Len(); i++ {
		ac.u.SetVec(i, ac.InitCond(float64(i)*ac.h))
	}

	// Allocate the linear solver and the settings.
	ac.ls = &linsolve.CG{}
	ac.lssettings = linsolve.Settings{
		// Solution from the previous time step will be a good initial estimate.
		InitX: ac.u,
		// Store the solution into the existing vector.
		Dst: ac.u,
		// Provide context to reduce memory allocation and GC pressure.
		Work: linsolve.NewContext(n + 1),
	}
}

// Step advances the solution one step in time.
func (ac *AllenCahnFD) Step() error {
	// Assemble the right-hand side vector b.
	tauXi2 := ac.tau / ac.Xi / ac.Xi
	n := ac.u.Len()
	for i := 0; i < ac.u.Len(); i++ {
		ui := ac.u.AtVec(i)
		bi := ui - tauXi2*FPrime(ui)
		if i == 0 || i == n-1 {
			bi *= 0.5
		}
		ac.b.SetVec(i, bi)
	}
	// Solve the system.
	_, err := linsolve.Iterative(ac, ac.b, ac.ls, &ac.lssettings)
	return err
}

// MulVecTo implements the MulVecToer interface.
func (ac *AllenCahnFD) MulVecTo(dst *mat.VecDense, _ bool, x mat.Vector) {
	dst.MulVec(ac.a, x)
}

func (ac *AllenCahnFD) Solution() *mat.VecDense {
	return ac.u
}

func output(u mat.Vector, L float64, step int) error {
	p, err := plot.New()
	if err != nil {
		return err
	}

	p.Title.Text = fmt.Sprintf("Step %d", step)
	p.X.Label.Text = "x"
	p.X.Min = 0
	p.X.Max = L
	p.Y.Min = -1.1
	p.Y.Max = 1.1

	n := u.Len()
	h := L / float64(n-1)
	pts := make(plotter.XYs, n)
	for i := 0; i < u.Len(); i++ {
		pts[i].X = float64(i) * h
		pts[i].Y = u.AtVec(i)
	}
	err = plotutil.AddLines(p, "u", pts)
	if err != nil {
		return err
	}
	err = p.Save(20*vg.Centimeter, 10*vg.Centimeter, fmt.Sprintf("u%04d.png", step))
	if err != nil {
		return err
	}
	return nil
}

func main() {
	const (
		L   = 10.0
		nx  = 1000
		nt  = 200
		tau = 0.1 * L / nx
		xi  = 6.0 * L / nx
	)
	rnd := rand.New(rand.NewSource(1))
	ac := AllenCahnFD{
		Xi: xi,
		InitCond: func(x float64) float64 {
			// Initial condition is a perturbation of the (unstable) zero state
			// (the peak in the double-well function f).
			return 0.01 * rnd.NormFloat64()
		},
	}
	ac.Setup(nx, L, tau)
	for i := 1; i <= nt; i++ {
		err := ac.Step()
		if err != nil {
			log.Fatal(err)
		}
		// Generate plots of u as PNG images that can be converted to video
		// using for example
		//  ffmpeg -i u%04d.png output.webm
		err = output(ac.Solution(), L, i)
		if err != nil {
			log.Fatal(err)
		}
	}
}
Output:

type Settings

type Settings struct {
	// InitX holds the initial guess. If it is nil or empty, the zero vector
	// will be used, otherwise its length must be equal to the dimension of
	// the system.
	InitX *mat.VecDense

	// Dst, if not nil, will be used for storing the approximate solution,
	// otherwise a new vector will be allocated. In both cases the vector will
	// also be returned in Result. If Dst is not empty, its length must be equal
	// to the dimension of the system.
	Dst *mat.VecDense

	// Tolerance specifies error tolerance for the final (approximate)
	// solution produced by the iterative method. The iteration will be
	// stopped when
	//  |r_i| < Tolerance * |b|
	// where r_i is the residual at i-th iteration.
	//
	// If Tolerance is zero, a default value of 1e-8 will be used, otherwise
	// it must be positive and less than 1.
	Tolerance float64

	// MaxIterations is the limit on the number of iterations. If it is
	// zero, a default value of twice the dimension of the system will be
	// used.
	MaxIterations int

	// PreconSolve describes a preconditioner solve that stores into dst the
	// solution of the system
	//  M  * dst = rhs, or
	//  Mᵀ * dst = rhs,
	// where M is the preconditioning matrix. If PreconSolve is nil, no
	// preconditioning will be used (M is the identity).
	PreconSolve func(dst *mat.VecDense, trans bool, rhs mat.Vector) error

	// Work context can be provided to reduce memory allocation when solving
	// multiple linear systems. If Work is not nil, its fields must be
	// either empty or their length must be equal to the dimension of the
	// system.
	Work *Context
}

Settings holds settings for solving a linear system.

type Stats

type Stats struct {
	// Iterations is the number of iterations performed by Method.
	Iterations int

	// MulVec is the number of MulVec operations commanded by Method.
	MulVec int

	// PreconSolve is the number of PreconSolve operations commanded by Method.
	PreconSolve int
}

Stats holds statistics about an iterative solve.

Directories

Path	Synopsis
internal/mmarket	Package mmarket provides a type to read Matrix Market format files.
internal/triplet	Package triplet provides triplet representation for sparse matrices.
