Version: v0.0.0-...-fcde085
Published: Oct 19, 2020 License: LGPL-2.1

README ¶

Introduction

The package setpso is a collection of experimental set-based Particle Swarm Optimizers (SPSO) used for finding a near-optimal binary string, where each binary string is associated with a cost, supplied by an external cost-function, that is to be minimized. As such it is a combinatorial optimizer modelled on the continuous swarm optimizer. The cost value can have several representations, ranging from big integers to noisy floating-point values, depending on the cost to be approximately minimized.

The swarm of Particles iterates towards finding better solutions. Each Particle in the swarm has a candidate exploratory solution in the form of a binary string, together with an array of Velocity components and a record of its best solution so far, its Personal-best. Each Particle is guided, through Velocity updates, by its Personal-best and by the Personal-best solutions of a collection of chosen other Particles called Targets.

Subset Encoding

The binary string is stored as a big positive integer and is also treated as a subset: a bit value of 1 in the ith position corresponds to the ith element being a member of the set, and 0 to it being a non-member. Following the continuous case by analogy, the binary string can be regarded as a vector of parameters that happen to take on the values 0 or 1, and as such it is called the Parameters of the Particle.

Experimental Components

1. Each binary bit has an associated velocity, which loosely corresponds to the probability of the bit flipping at the next update; if the flip occurs during the update, the velocity is set to zero afterwards.

2. Prior to the update, the velocity is changed to increase the chance of mutation through flipping, in proportion to the Hamming distance to the various Targets plus a small bias; this mimics the continuous case, where such a shotgun approach guarantees finding genuine local minima.

3. velocity probabilities are combined using a second order polynomial:

P(p,q)=p+q-pq

for combining probabilities p and q. It is easily checked that, as an infix operation, this is both commutative and associative; the combined probability is never less than either individual probability and always lies in the range 0.0 to 1.0, provided both p and q are in that range.

4. The swarm is partitioned into Groups of particles with the same Targets and the same velocity update parameters, called Heuristics. The Target Particles and Heuristics for a group are not fixed and can be programmed to change prior to an update; how this is done depends on the variant of SPSO in use.

5. The core of the SPSO is the same for all variants which only differ by how the Groups are organized.

6. The process of setting up batch runs and testing involves repeating similar steps over and over, so a toolkit has been produced to simplify this. The toolkit has a list of cost-functions and SPSO variants from which one can build cases of interest. Code for the toolkit is in the subdirectory psokit.

7. The toolkit has various functions to enable new components to be introduced, as well as new SPSOs and cost-functions. It also includes an interface for actions that trigger at specific points of the runs.

8. To support the toolkit, all SPSOs have to meet a Go interface specified in the setpso package; likewise for all cost-functions and monitoring components.
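The pseudo-add used in item 3 for combining velocity probabilities can be sketched in Go as follows; the function name is illustrative, not the package's API:

```go
package main

import "fmt"

// pseudoAdd combines two flip probabilities p and q as p+q-pq.
// For p, q in [0,1] the result is never less than either input and
// stays in [0,1]; the operation is commutative and associative.
func pseudoAdd(p, q float64) float64 {
	return p + q - p*q
}

func main() {
	fmt.Println(pseudoAdd(0.5, 0.5)) // 0.75
	// associativity: both groupings give the same combined probability
	fmt.Println(pseudoAdd(0.2, pseudoAdd(0.3, 0.4)))
	fmt.Println(pseudoAdd(pseudoAdd(0.2, 0.3), 0.4))
}
```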

Hopefully this will produce a rich class of SPSOs that perform well. The main reason for producing these novel changes is to find a better class of SPSO; however, they may not succeed, so there is a need to produce competing SPSOs to compare against. For instance:

• Splitting the Target influence down further, to be a function of individual bit position, may work better, although the mutation described in 2. should remove the need for this.

• Combining probabilities using the maximum of the two probabilities may work better. However, the combining of probabilities as given in 3. above is smooth and enables both probabilities to contribute in more cases; for instance, if p=q=0.5 then the combined probability is 0.75 rather than 0.5.

• Removing the partial targeting of a Particle towards its own Personal-best may give better results.

Any help in producing such variants for comparison and possible adoption is always welcome, but for the moment this is not done here.

Possible Future Development

1. The current SPSOs in this package quickly converge to near-best solutions but take significantly more time to hit the optimal solution, because of the nature of directed random search. They will therefore be used mainly for finding approximate solutions in machine learning, where the process is stopped to avoid over-fitting and there is no need to go further.

2. The SPSO should be a generalist, so lots of test cost-functions need to be coded up and the SPSO evolved to work reasonably well on them. However, the heuristics and targeting should be adaptable to the cost-function at hand by simple tracking of the cost and the size of the sets involved.

3. The test cost-functions should include a large number of functions that try to find algorithms fitting given criteria, so that the SPSO can be used to find its own expressions for heuristic values and target choice via a bootstrap process.

4. Support for a cost-function that changes, but infrequently, so that the Personal-best cost has to be re-evaluated from time to time. This could, for instance, include the cost-function replacing a set item that is never used to give a low cost by another item that was not in the original set of items, which may result in an improved cost by changing the item's properties. As part of this, the number of set elements should be allowed to change from time to time. At the moment this is partially supported through repeated evaluation of costs, with one cost type that supports averaging costs for a given parameter with a forgetting time constant.

5. If a group ends up with redundancy, that is, a lot of Particles converge to a single subset, then some of the Particles could be reassigned to a new exploratory group to search for better alternative solutions. This could also mean storing a good subset before removing it and then using it later on. Perhaps a collection of such good solutions could form a new optimization where these good solutions are elements of a larger set that encodes how to combine them to produce better ones: a crude form of subroutines.
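One simple way to realise the cost averaging with a forgetting time constant mentioned in item 4 is an exponentially forgetting mean. The type name and update rule below are illustrative assumptions, not the package's actual cost type:

```go
package main

import "fmt"

// forgettingAvg maintains a running cost estimate that discounts old
// samples with a forgetting time constant tau (measured in samples).
// Each new cost moves the average a fraction 1/tau of the way there.
type forgettingAvg struct {
	avg float64
	tau float64
}

func (f *forgettingAvg) add(cost float64) {
	f.avg += (cost - f.avg) / f.tau
}

func main() {
	f := &forgettingAvg{avg: 10.0, tau: 4.0}
	for i := 0; i < 20; i++ {
		f.add(2.0) // the cost function has shifted to a new level
	}
	fmt.Printf("%.3f\n", f.avg) // decays toward 2.0
}
```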

Getting Started

Go is a programming language from Google that appears to be well suited to developing SPSOs. Ensure you have a copy by downloading it from https://golang.org.

Software is developed using Visual Studio Code at https://code.visualstudio.com, which is the preferred IDE.

The following instructions are for quickly getting a working example using a command line at a terminal:

Open a terminal and clone this package to your computer by going to an empty directory and typing

git clone https://github.com/mathrgo/setpso .

Now move to the subdirectory

example/runkit1

and type

go run runkit1.go

This will give you a set of command options to use.

To do a single default run, type

go run runkit1.go -nrun 1

This may not mean that much at this point, but you have just found a solution to a subset sum problem of 100 items, with elements randomly chosen from 20-bit positive integers.

Documentation

For background information and some insight into the algorithms read https://github.com/mathrgo/setpso/blob/master/doc/psoForML.pdf .

Code-derived documentation is given at https://godoc.org/github.com/mathrgo/setpso .

This documentation is hierarchical in nature, giving information on each package, and is worth reading to understand the packages, starting with setpso.

Follow the instructions and examples for running GPso and CLPso.

Documentation ¶

Overview ¶

Package setpso is a collection of Set-based Particle Swarm Optimisers (SPSO) designed for cost functions that map binary patterns to *big.Int cost values. The binary pattern, called the Parameters, is also encoded as a *big.Int. The SPSO is a swarm of entities called Particles that together iteratively hunt for better solutions. The update iteration of the swarm mimics the spirit of the continuous case and is based on set operations. It also includes experimental enhancements to improve the discrete case. For a brief introduction, context of use and planned future development read the README file at https://github.com/mathrgo/setpso

Relation to Other Sub Packages ¶

Package setpso lives in a directory that is at the top of a hierarchy of packages.

Package setpso contains two working SPSOs, GPso and CLPso; they depend on Pso for all of the interface needed by package psokit except Update().

The packages in setpso/fun are where cost-functions that interface with Pso are usually placed, including any helper packages for such cost-functions.

Package psokit enables a high-level multiple-run interface where elements for the run are referred to by name, to be used in setting up runs of various SPSO and cost-function combinations and searching for good heuristics.

Particle's Personal-best ¶

While exploring Parameters to find a reduced cost, as returned by the independent cost function, the Particle keeps a record of the best Parameters achieved so far, called its Personal-best, with the corresponding best cost. The Personal-best status is checked after each update.

Particle's Update Velocity ¶

The update Velocity is represented as a vector of weights giving the probability of flipping the corresponding bit at the update iteration. At the beginning of the update the velocity is calculated without flipping bits; then each bit is flipped with the probability given by the computed velocity component. During the update, once a bit has been flipped the corresponding probability is set to zero, thus avoiding flipping back and keeping the velocity as a vector of requested flips, each with a given probability, to move from a given position to a desired one that may improve performance.

During the calculation of the velocity of a particle, probabilities are combined using an operation called pseudo-adding, whereby by default probabilities p and q are pseudo-added to give p+q-pq. Alternatives such as max(p,q) may be considered in the future, if only to show which is best.

Grouping of particles ¶

The particles are split up into groups, with each group containing its own heuristic settings and a list of particles, called Targets, that it tends to move towards. Each Particle in the group also moves towards its own Personal-best. Various strategies for targeting other Particles' Personal-best Parameters or adapting heuristics can be explored. Pso is not used by itself, since it has no targets, but provides most of the common interfaces and a function PUpdate() that does the common velocity update. To create a functioning SPSO, extra code to choose Targets and Heuristics is added before PUpdate() by the derived working SPSOs, generating the total update iteration function, Update(). GPso and CLPso are examples of such derived working SPSOs.

It is important to note that the collection of groups is stored as a mapping from strings to pointers to groups, so groups can be accessed by name if necessary, although each particle knows which group it belongs to without using the name reference. A group may also contain no particles. At start-up there is only one group, called "root", which contains all the particles. Additional groups can be formed during initialisation, or even during iteration, and particles moved between groups as and when required.

setpso can be used for low-level coding; higher-level run management is provided by the psokit toolkit package in

import "github.com/mathrgo/setpso/psokit"

you can quickly get to run an example by going to the setpso/example/runkit1 directory in a terminal then execute

go run runkit1.go -nrun 1

Constants ¶

This section is empty.

Variables ¶

This section is empty.

Functions ¶

func CardinalSize ¶

func CardinalSize(x *big.Int) (card int)

CardinalSize returns the number of 1s in the binary representation of x (assuming it is positive) and so is the cardinal size of the corresponding set.

func Pc0 ¶

func Pc0(i int, n float64) float64

Pc0 calculates the assigned probability of learning from other particles, where i is the Particle id and n is the number of particles. The formula is an empirical one used in the continuous case. It is likely to be replaced by another empirical formula, called Pc1, for which Pc0 is just a stand-in.

Types ¶

type CLPso ¶

type CLPso struct {
*Pso

//target refreshing gap
TryGap int
// contains filtered or unexported fields
}

CLPso is a comprehensive-learning PSO that targets other Particles' Personal-bests rather than just the global best. Each particle has its own group, with the group name set to the particle's id, and has one target. All particles share Heuristics.

func NewCLPso ¶

func NewCLPso(p *Pso) *CLPso

NewCLPso creates a CLPso and sets the heuristics if required. In this case TryGap heuristic is the number of iterations with no improvement needed to trigger the probabilistic search for an alternative target. The ith particle is assigned the probability Pc0(i,n) to look for an alternative target when triggered, where n is the number of particles.

func (*CLPso) SetHeuristics ¶

func (p *CLPso) SetHeuristics(hu PsoHeuristics)

SetHeuristics sets the heuristics for the particle swarm.

func (*CLPso) Update ¶

func (p *CLPso) Update()

Update does the iteration update. For each Particle a check is made to see whether the updates have given no improvement in the cost of the Parameters, compared to the last improvement, for at least TryGap iterations. If so, it looks for an alternative target with probability pc=Pc0(). When looking for an alternative target it randomly selects two Particles and chooses the one with the lower Local-best cost. Note the choice is from all particles and can include itself. After this it does the usual PUpdate().

type CLpart ¶

type CLpart struct {
// contains filtered or unexported fields
}

CLpart includes the additional particle state used by CLPso to manage particle targets.

type Fun ¶

type Fun interface {
// creates a new Try, which is a pointer to a structure, using a
// default parameter that should satisfy constraints; it updates the
// Try's cost and internal decode
NewTry() Try
// sets up t from z, where z is a parameter assumed to satisfy
// constraints; it also computes dependants such as cost
SetTry(t Try, z *big.Int)

// sets the try by copying src to dest
Copy(dest, src Try)

// updates cost evaluation of a try where a lower cost is better
// this is needed when the cost changes
UpdateCost(x Try)

// Cmp checks to see if y is better than x
// for mode = futil.CostMode:
// compare two tries with a spread of uncertainty from -1.0 to 1.0. -1.0 for
// definitely cost(x) < cost(y) and 1.0 for definitely cost(x) > cost(y).
// A deterministic cost should always return -1.0 or 1.0 with a value of
// -1.0 if the costs are equal.
// for mode = futil.TriesMode: (used only when cost is not deterministic)
// compare how successful y has been in being better than x.
// While being compared a value > 1.0 indicates y should replace x; a value
// < -1.0 indicates y should be removed as a candidate.
Cmp(x, y Try, mode futil.CmpMode) float64

// maximum number of bits in the parameter big integer which is the
// maximum number of elements in the subset
// at the moment this is fixed during a run but in the future could vary possibly to incorporate some form of subroutine use.
MaxLen() (maxlen int)
// string giving a description of the cost function
// ToConstraint attempts to give a constraint-satisfying Try in pre
// that matches the hint by modifying hint.
// pre is the previous try to be replaced. On success it returns true;
// otherwise it returns false and should leave pre unchanged.
ToConstraint(pre Try, hint *big.Int) bool

// this hints to the function to replace the ith particle-item
// if it returns true, the function has replaced the item with a new meaning
// thus modifying the decoder.
Delete(i int) bool
}

Fun is the interface to the function that evaluates the cost of tries. The try parameter is a binary pattern stored as a big integer. The integer can be regarded as a subset of the integers from 0 to MaxLen()-1, where i in this range is a member of the subset if the ith bit is 1. MaxLen() returns the maximum number of bits in the pattern. ToConstraint() attempts to modify hint to be constraint-satisfying, possibly using pre to aid in this process; pre should not be changed during this process. It returns true if it succeeds. By convention pre always corresponds to a Parameters (apart from when it is the empty set at the beginning) that was a Personal-best and thus satisfies the constraints required by the function to give a meaningful solution. The function should be able to find a constraint-satisfying version of hint that approximates hint most of the time, so that the SPSO can frequently change the Particle's Parameters during an update.

Providing a Meaningful Interpretation of Parameters ¶

Representing a particle's Parameters as a binary string or subset may not be easily interpreted as a meaningful representation of a solution to a combinatorial optimization, so the cost function must provide an easily understood description of a Parameters as a string, supplied by the Decode() function.

Catering for Changing a Set Item ¶

This is an advanced feature for cost functions. Certain set items will, after a while, not be used to give a Personal-best, at which point the cost function can be hinted to remove such an item through Delete(), which returns true if the cost function does so. If the cost function returns true, it is then up to the cost function either to replace the item by a new one with its own contribution to cost, or to set the corresponding item in a hint to zero during a successful ToConstraint() call. In this way the cost function can try out alternative items that may result in an improved cost. Typically this feature is not used and a call to Delete() returns false.

type GPso ¶

type GPso struct {
*Pso
}

GPso is a PSO that targets the global best element and has only one Group, corresponding to all the particles.

func NewGPso ¶

func NewGPso(p *Pso) *GPso

NewGPso creates a GPso

func (*GPso) SetHeuristics ¶

func (p *GPso) SetHeuristics(hu PsoHeuristics)

SetHeuristics sets the heuristics for the particle swarm.

func (*GPso) Update ¶

func (p *GPso) Update()

Update sets the target to the current global best Particle before updating the particle swarm

type Group ¶

type Group struct {
// contains filtered or unexported fields
}

Group is a collection of particles with same heuristic settings

func (*Group) Heuristics ¶

func (g *Group) Heuristics() (hu *PsoHeuristics)

Heuristics returns the pointer to the group's heuristics

type Particle ¶

type Particle struct {
// contains filtered or unexported fields
}

Particle is the state of a member of the PSO

func (*Particle) BestTry ¶

func (p *Particle) BestTry() Try

BestTry returns the best try

func (*Particle) CurrentTry ¶

func (p *Particle) CurrentTry() Try

CurrentTry returns current try

type Pso ¶

type Pso struct {

// collection of particles
Pt []Particle
// contains filtered or unexported fields
}

Pso is the Particle swarm optimizer

func NewPso ¶

func NewPso(n int, fun Fun,
sd int64) *Pso

NewPso sets up a PSO with a swarm of n particles; fun is the costing-function interface. Each Particle has a big-integer Parameters. To initialize the Parameters, values are uniformly randomly chosen with up to fun.MaxLen() bits; ToConstraint() is applied to these random choices, with repeated random choices until each Particle has initial Parameters that satisfy the cost function's constraints. Default heuristics are applied to the "root" Group and all particles are added to this group. The PSO uses the random generator seed sd.

func (*Pso) BestParticle ¶

func (pso *Pso) BestParticle() int

BestParticle returns the current global best particle.

func (*Pso) BlurTarget ¶

func (pso *Pso) BlurTarget(x *big.Int, id int, l, l0 float64)

BlurTarget blurs the target set displacement based on its CardinalSize. Normally x is the exclusive-or of a Particle's Parameters with a target's Local-best Parameters. The blur is effected indirectly by pseudo-adding a probability

pb = rand*(l*CardinalSize(x)+l0)/MaxLen()

to each component of the Velocity, where l and l0 correspond to the Lfactor and Loffset heuristics, MaxLen() is the maximum number of set items, and rand is a random number between 0 and 1.
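The pb formula can be sketched as a pure function; passing rand in explicitly, and the function name itself, are assumptions made here for testability:

```go
package main

import "fmt"

// blurProb computes the probability that BlurTarget pseudo-adds to
// each velocity component: pb = rand*(l*card+l0)/maxLen, where card
// stands for CardinalSize(x).
func blurProb(rand float64, card int, l, l0 float64, maxLen int) float64 {
	return rand * (l*float64(card) + l0) / float64(maxLen)
}

func main() {
	// with the default heuristics Lfactor=0.15 and Loffset=2.0:
	fmt.Printf("%.2f\n", blurProb(1.0, 4, 0.15, 2.0, 10)) // 0.26
}
```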

func (*Pso) CreateGroup ¶

func (pso *Pso) CreateGroup(name string, ntargets int) *Group

CreateGroup creates a group named 'name' with a slot for 'ntargets' targets. The heuristics for the group are shared from the 'root' group.

func (*Pso) CurrentTry ¶

func (pso *Pso) CurrentTry(i int) Try

CurrentTry returns the current try for the ith particle

func (*Pso) Gr ¶

func (pso *Pso) Gr(name string) *Group

Gr returns the group given by name; it returns nil if the group does not exist.

func (*Pso) Group ¶

func (pso *Pso) Group(id int) *Group

Group returns group for particle id

func (*Pso) GroupBest ¶

func (pso *Pso) GroupBest(grp *Group) (best int)

GroupBest returns the best particle id in the group grp. It returns -1 if there is no member.

func (*Pso) Heuristics ¶

func (pso *Pso) Heuristics() PsoHeuristics

Heuristics returns a copy of the master heuristics

func (*Pso) LocalBestTry ¶

func (pso *Pso) LocalBestTry(i int) Try

LocalBestTry returns Local-best try of the ith Particle

func (*Pso) MoveTo ¶

func (pso *Pso) MoveTo(g *Group, pat int)

MoveTo moves the particle 'pat' to the designated group

func (*Pso) NominalL ¶

func (pso *Pso) NominalL() (l float64)

NominalL returns a nominal Lfactor value for the heuristic HLfactor based on the number of particles and the number of parameters. This was tuned for the continuous case; it needs tuning for the discrete set case and is not yet used.

func (*Pso) Nparticles ¶

func (pso *Pso) Nparticles() int

Nparticles returns the number of particles in use.

func (*Pso) PUpdate ¶

func (pso *Pso) PUpdate()

PUpdate does a single step update assuming that groups have been setup with the appropriate targets and heuristics.

For each Particle, the Parameters difference between its Personal-best and each Target is represented by taking the exclusive-or of the corresponding Parameters as subsets. This is then used to pseudo-add up contributions to the target blur, prior to reducing the velocity by a factor equal to the Omega heuristic.

Then the Phi heuristic is used to generate, for each exclusive-or, randomly chosen parameters r between 0 and Phi; each r is checked to be <= 1.0 and is forced to be so by replacing it with 2-r if it is not. These r are then pseudo-added to the velocity components that have an exclusive-or bit of 1.

In this way the Velocity components are computed to encourage movement towards the Personal-best and Targets, after encouraging more mutation for distant Targets and Personal-best Parameters.

After this, SetParams() and UpdateGlobal() are called to finish the update.
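The reflection of r back into range can be sketched as follows; the function name is an assumption:

```go
package main

import "fmt"

// reflectPhi sketches the Phi step in PUpdate: r is drawn uniformly
// from [0, Phi] and, since Phi may exceed 1, any r > 1.0 is reflected
// back into range as 2-r before being pseudo-added to a velocity
// component.
func reflectPhi(r float64) float64 {
	if r > 1.0 {
		return 2.0 - r
	}
	return r
}

func main() {
	fmt.Println(reflectPhi(0.7))           // 0.7: already in range
	fmt.Printf("%.1f\n", reflectPhi(1.4))  // 0.6: reflected
}
```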

func (*Pso) Part ¶

func (pso *Pso) Part(i int) *Particle

Part returns ith particle

func (*Pso) PrintDebug ¶

func (pso *Pso) PrintDebug(w io.Writer, id string)

PrintDebug outputs debugging diagnostics depending on the command id. It has the following values and output:

Command | Output
===============================
group   | group data for each Particle
group0  | "root" group data's best member and cost
Pt      | Local-best cost and Parameters, plus the
        | current Parameters, for each particle
vel     | velocity for each particle

func (*Pso) SetGroupHeuristics ¶

func (pso *Pso) SetGroupHeuristics(g *Group, h *PsoHeuristics)

SetGroupHeuristics sets the heuristics for the group g. As the Pso undergoes development, additional heuristic parameters can be introduced with compatible defaults, without destroying backwards compatibility with previous versions that do not have the extra heuristics. Note that only a pointer to the heuristics is passed, so the heuristics can be changed outside the group, with many groups referring to the same heuristics.

All group heuristics are derived, normally just by a pointer, from a master heuristics stored in the Pso, so often all heuristics can be changed via the master heuristics; this should be done by calling SetHeuristics() on the derived SPSO.

func (*Pso) SetGroupTarget ¶

func (pso *Pso) SetGroupTarget(grp *Group, targetList ...int)

SetGroupTarget sets the first few Targets of group 'grp' to the particle list targetList.

func (*Pso) SetParams ¶

func (pso *Pso) SetParams(id int)

SetParams sets the Parameters of the id-th particle, updating the resulting cost and Personal-best case. It also re-evaluates the Personal-best cost in case the function has changed.

func (*Pso) UpdateGlobal ¶

func (pso *Pso) UpdateGlobal()

UpdateGlobal updates the global best by calling UpdateGroup() and then finding the best group with its best particle.

It searches for the particle with the minimum best cost.

func (*Pso) UpdateGroup ¶

func (pso *Pso) UpdateGroup(g *Group)

UpdateGroup does housekeeping for the group g: it calculates the best try and the particle in the group that gives it. Note that it looks for the best in the current iteration and disregards historic best costs, even if they were better.

type PsoHeuristics ¶

type PsoHeuristics struct {
//for target shooting probability range (1.0)
Phi float64
//for probability velocity factoring after target blur (0.73)
Omega float64
// for target blur factor (0.15)
Lfactor float64
// for target blur offset (2.0)
Loffset float64
// for a minimum number of tries before doing something different (100)
TryGap int
//threshold for acting on a comparison (0.99)
Threshold float64
// maximum number of tries stored in a particle (250)
NTries int
}

PsoHeuristics is the collection of group heuristics, where some of the heuristics may not be used. The default values are in brackets.
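A local sketch of these defaults, mirroring the bracketed values in the field comments above (the package's own ToDefault may differ):

```go
package main

import "fmt"

// psoHeuristics mirrors the PsoHeuristics struct; toDefault applies
// the bracketed default values listed in the field comments.
type psoHeuristics struct {
	Phi, Omega, Lfactor, Loffset, Threshold float64
	TryGap, NTries                          int
}

func (hu *psoHeuristics) toDefault() {
	hu.Phi = 1.0
	hu.Omega = 0.73
	hu.Lfactor = 0.15
	hu.Loffset = 2.0
	hu.TryGap = 100
	hu.Threshold = 0.99
	hu.NTries = 250
}

func main() {
	var hu psoHeuristics
	hu.toDefault()
	fmt.Printf("%+v\n", hu)
}
```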

Support for future heuristics ¶

Future heuristic parameters may be added to this list; they can be ignored by earlier PSO variants through a suitable choice of defaults. Note that heuristics are often shared between groups, so it is important to know where this is done when updating a group's heuristics.

func (*PsoHeuristics) ToDefault ¶

func (hu *PsoHeuristics) ToDefault()

ToDefault sets the heuristics to their default value. Note this may change in future when better defaults are found.

type PsoInterface ¶

type PsoInterface interface {
// single state update to all particles in the PSO
Update()
// best particle Id
BestParticle() int
// number of particles
Nparticles() int
// array of particles
Part(i int) *Particle
// current try of the ith Particle
CurrentTry(i int) Try
// Local-best try of the ith Particle
LocalBestTry(i int) Try
// interface for requesting debug output based on cmd
PrintDebug(w io.Writer, cmd string)
//Heuristics returns a copy of the master heuristics
Heuristics() PsoHeuristics
//SetHeuristics sets the heuristics for the particle swarm.
SetHeuristics(hu PsoHeuristics)
}

PsoInterface is the interface used with the PSO and its variants. Normally Pso supplies this apart from the Update() function. The interface is used by the monitoring and running program and psokit uses this interface to provide useful plotting functions.

type Try ¶

type Try = futil.Try

Try is the Try interface from futil. It essentially contains all the data that depends on the parameter being tried as a solution of the optimization problem.

Directories ¶

Path Synopsis
example
main gives an example of a multimode function with added noise for testing the SPSO using the futil.SFloatCostvalue CostValue interface
main gives an example of how to find DAG modeling of parity checking using random samples.
main gives an example of how to find DAG modeling of quadratic equation solution.
main gives an example of how to use psokit to run an arbitrary prime factorization problem.
This runs the finder for the sum of 3 cubes equal to 42.
fun
Package circles provides methods for playing with circles
Package cubes3442 is a function for finding the sum of 3 cubes that equals 42.
Package dag provides the base for Directed Acyclic Graph (DAG) functions with two inputs per node.
Package futil provides some useful utilities for cost functions
Package lincircles provides methods for packing circles on a straight line using a packing cost
Package multimode is a multimode function with noise
Package quadratic includes a sampler for finding algebraic solutions to quadratic equation ax^2+bx+c =0.
Package simplefactor is a cost function for finding prime factor pair: used to check that it is too difficult for the PSO to find the primes.
Package subsetsum contains the object Fun and its generator for the cost function of the combinatorial subset sum problem, where a big integer is used to represent subsets as well as integer values, thus potentially being able to represent combinatorially hard problems
This is a utility for generating prime pairs
Package psokit is a toolkit for running a collection of, at the moment, experimental Set-based Particle Swarm Optimisers (SPSOs) in the package setpso.