axon

package v2.0.0-dev0.2.19

Published: Mar 11, 2025 License: BSD-3-Clause Imports: 65 Imported by: 0

Documentation

Overview

Package axon provides the basic reference axon implementation, for spiking activations and error-driven learning. Other packages provide deep axon, Rubicon, PBWM, etc.

The overall design seeks an "optimal" tradeoff between simplicity, transparency, the ability to flexibly recombine and extend elements, and avoiding the need to rewrite existing code.

The *Stru elements handle the core structural components of the network, and hold emer.* interface pointers to elements such as emer.Layer, which provides a very minimal interface for these elements. Interfaces are automatically pointers, so think of these as generic pointers to your specific Layers etc.

This design means the same *Stru infrastructure can be re-used across different variants of the algorithm. Because we're keeping this infrastructure minimal and algorithm-free it should be much less confusing than dealing with the multiple levels of inheritance in C++ emergent. The actual algorithm-specific code is now fully self-contained, and largely orthogonalized from the infrastructure.

One specific cost of this is the need to cast the emer.* interface pointers into the specific types of interest, when accessing via the *Stru infrastructure.
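For example, a minimal sketch of such a cast (the import paths and helper name here are illustrative assumptions, not a definitive API):

	import (
		"github.com/emer/axon/v2/axon"
		"github.com/emer/emergent/v2/emer"
	)

	// asAxonLayer casts a generic emer.Layer interface value to the
	// concrete axon type, reporting whether the cast succeeded.
	func asAxonLayer(ly emer.Layer) (*axon.Layer, bool) {
		aly, ok := ly.(*axon.Layer)
		return aly, ok
	}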

The *Params elements contain all the (meta)parameters and associated methods for computing various functions. They are the equivalent of Specs from the original emergent, but unlike Specs they are local to each place they are used, and styling is used to apply common parameters across multiple layers etc. Params is a more explicit, recognizable name than Specs, and it also helps avoid confusion with the different nature of the old Specs. Pars is shorter but confusable with "Parents", so Params is unambiguous.

Params are organized into four major categories, which are labeled functionally rather than just structurally, to keep things clearer and better organized overall:

  • ActParams -- activation params, at the Neuron level (in act.go)
  • InhibParams -- inhibition params, at the Layer / Pool level (in inhib.go)
  • LearnNeuronParams -- learning parameters at the Neuron level (running averages that drive learning)
  • LearnSynParams -- learning parameters at the Synapse level (both in learn.go)

The levels of structure and state are:

  • Network
  • .Layers
  • .Pools: pooled inhibition state -- 1 for the layer plus 1 for each sub-pool (unit group) with inhibition
  • .RecvPaths: receiving pathways from other sending layers
  • .SendPaths: sending pathways to other receiving layers
  • .Neurons: neuron state variables

There are methods on the Network that perform initialization and overall computation, by iterating over layers and calling methods there. This is typically how most users will run their models.

Parallel computation across multiple CPU cores (threading) is achieved through persistent worker goroutines that listen for functions to run on thread-specific channels. Each layer has a designated thread number, so you can experiment with different ways of dividing up the computation. Timing data is kept for per-thread time use -- see TimeReport() on the network.

The Layer methods directly iterate over Neurons, Pools, and Paths, and there is no finer-grained level of computation (e.g., at the individual Neuron level), except for the *Params methods that directly compute relevant functions. Thus, looking directly at the layer.go code should provide a clear sense of exactly how everything is computed -- you may need to refer to act.go, learn.go, etc. to see the relevant details, but at least the overall organization should be clear in layer.go.

Computational methods are generally named VarFromVar, to specifically name what variable is being computed from what other input variables. For example, SpikeFromG computes spiking activation from conductances (G).

The Pools (type Pool, in pool.go) hold state used for computing pooled inhibition, but also are used to hold overall aggregate pooled state variables -- the first element in Pools applies to the layer itself, and subsequent ones are for each sub-pool (4D layers). These pools play the same role as the AxonUnGpState structures in C++ emergent.

Paths directly support all synapse-level computation, and hold the LearnSynParams and iterate directly over all of their synapses. It is the exact same Path object that lives in the RecvPaths of the receiver-side, and the SendPaths of the sender-side, and it maintains and coordinates both sides of the state. This clarifies and simplifies a lot of code. There is no separate equivalent of AxonConSpec / AxonConState at the level of connection groups per unit per pathway.

The pattern of connectivity between units is specified by the paths.Pattern interface, and all the different standard options are available in that paths package. The Pattern code generates a full tensor bitmap of binary 1's and 0's for connected (1's) and not-connected (0's) units, and can use any method to do so. This full lookup-table approach is not the most memory-efficient, but it is fully general and shouldn't be too bad memory-wise overall (fully bit-packed arrays are used, and these bitmaps don't need to be retained once connections have been established). This approach allows patterns to just focus on patterns, without any concern for how they are used to allocate actual connections.
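As a toy illustration of the bitmap idea (not the actual paths package code, which uses bit-packed tensors):

	// fullPattern builds a [recv][send] connectivity bitmap for a "Full"
	// pattern, in which every receiving unit connects to every sending unit.
	func fullPattern(nRecv, nSend int) [][]bool {
		cons := make([][]bool, nRecv)
		for ri := range cons {
			cons[ri] = make([]bool, nSend)
			for si := range cons[ri] {
				cons[ri][si] = true // 1 = connected; a sparse pattern would leave some false
			}
		}
		return cons
	}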

Index

Constants

const (
	// StartOff is the starting offset.
	StartOff int32 = iota

	// Number of items.
	Nitems

	// Number of StartN elements.
	StartNN
)
const (
	PoolVarsN = poolFloatAvgMaxStart + fsfffb.InhibVars(int32(AvgMaxVarsN)*int32(AvgMaxN)*int32(AvgMaxPhasesN))

	PoolIntVarsTot = PoolIntAvgMaxStart + PoolIntVars(int32(AvgMaxVarsN)*int32(AvgMaxN))
)
const MaxGlobalVecN = 16

MaxGlobalVecN is the maximum number of GlobalVectors values per variable (costs, drives, USs). This could be dynamically set but a large max is much simpler.

Variables

var (
	NeuronVarNames []string
	NeuronVarsMap  map[string]int
)
var (
	NeuronLayerVars  = []string{"DA", "ACh", "NE", "Ser", "Gated"}
	NNeuronLayerVars = len(NeuronLayerVars)
	NNeuronCaBins    = 20 // generic max for display
)

NeuronLayerVars are layer-level variables displayed as neuron layers.

var (
	SynapseVarNames []string
	SynapseVarsMap  map[string]int
)
var (

	// Layers are all the layer parameters.
	//gosl:group Params
	//gosl:read-only
	Layers []LayerParams

	// Paths are all the path parameters.
	//gosl:read-only
	Paths []PathParams

	// NetworkIxs have indexes and sizes for entire network (one only).
	//gosl:read-only
	NetworkIxs []NetworkIndexes

	// PoolIxs have index values for each Pool.
	// [Layer * Pools][PoolIndexVars]
	//gosl:read-only
	//gosl:dims 2
	PoolIxs *tensor.Uint32

	// NeuronIxs have index values for each neuron: index into layer, pools.
	// [Neurons][Indexes]
	//gosl:read-only
	//gosl:dims 2
	NeuronIxs *tensor.Uint32

	// SynapseIxs have index values for each synapse:
	// providing index into recv, send neurons, path.
	// [Indexes][NSyns]; NSyns = [Layer][SendPaths][SendNeurons][Syns]
	//gosl:group Indexes
	//gosl:read-only
	//gosl:dims 2
	SynapseIxs *tensor.Uint32

	// PathSendCon are starting offset and N cons for each sending neuron,
	// for indexing into the Syns synapses, which are organized sender-based.
	// [NSendCon][StartNN]; NSendCon = [Layer][SendPaths][SendNeurons]
	//gosl:read-only
	//gosl:dims 2
	PathSendCon *tensor.Uint32

	// RecvPathIxs indexes into Paths (organized by SendPath) organized
	// by recv pathways. needed for iterating through recv paths efficiently on GPU.
	// [NRecvPaths] = [Layer][RecvPaths]
	//gosl:read-only
	//gosl:dims 1
	RecvPathIxs *tensor.Uint32

	// PathRecvCon are the receiving path starting index and number of connections.
	// [NRecvCon][StartNN]; NRecvCon = [Layer][RecvPaths][RecvNeurons]
	//gosl:read-only
	//gosl:dims 2
	PathRecvCon *tensor.Uint32

	// RecvSynIxs are the indexes into Synapses for each recv neuron, organized
	// into blocks according to PathRecvCon, for receiver-based access.
	// [NSyns] = [Layer][RecvPaths][RecvNeurons][Syns]
	//gosl:read-only
	//gosl:dims 1
	RecvSynIxs *tensor.Uint32

	// Ctx is the current context state (one only). This is read-only except in
	// specific kernels.
	//gosl:group Neurons
	//gosl:read-or-write
	Ctx []Context

	// Neurons are all the neuron state variables.
	// [Neurons][Data][NeuronVarsN+NCaBins]
	//gosl:dims 3
	Neurons *tensor.Float32

	// NeuronAvgs are variables with averages over the
	// Data parallel dimension for each neuron.
	// [Neurons][Vars]
	//gosl:dims 2
	NeuronAvgs *tensor.Float32

	// LayerStates holds layer-level state values, with variables defined in
	// [LayerVars], for each layer and Data parallel index.
	// [Layer][Data][LayerVarsN]
	//gosl:dims 3
	LayerStates *tensor.Float32

	// GlobalScalars are the global scalar state variables.
	// [GlobalScalarsN + 2*NCaBins][Data]
	//gosl:dims 2
	GlobalScalars *tensor.Float32

	// GlobalVectors are the global vector state variables.
	// [GlobalVectorsN][MaxVecN][Data]
	//gosl:dims 3
	GlobalVectors *tensor.Float32

	// Exts are external input values for all Input / Target / Compare layers
	// in the network. The ApplyExt methods write to this per layer,
	// and it is then actually applied in one consistent method.
	// [NExts][Data]; NExts = [In / Out Layers][Neurons]
	//gosl:dims 2
	Exts *tensor.Float32

	// Pools are the [PoolVars] float32 state values for layer and sub-pool inhibition,
	// including the float32 AvgMax values by Phase and variable: use [AvgMaxVarIndex].
	// [Layer * Pools][Data][PoolVars+AvgMax]
	//gosl:group Synapse
	//gosl:dims 3
	Pools *tensor.Float32

	// PoolsInt are the [PoolIntVars] int32 state values for layer and sub-pool
	// inhibition, AvgMax atomic integration, and other vars: use [AvgMaxIntVarIndex]
	// [Layer * Pools][Data][PoolIntVars+AvgMax]
	//gosl:dims 3
	PoolsInt *tensor.Int32

	// PathGBuf is the conductance buffer for accumulating spikes.
	// Subslices are allocated to each pathway.
	// Uses int-encoded values for faster GPU atomic integration.
	// [NPathNeur][Data][MaxDel+1]; NPathNeur = [Layer][RecvPaths][RecvNeurons]
	//gosl:dims 3
	PathGBuf *tensor.Int32

	// PathGSyns are synaptic conductances integrated over time per pathway
	// per recv neuron. Spikes come in via PathGBuf.
	// Subslices are allocated to each pathway.
	// [NPathNeur][Data]
	//gosl:dims 2
	PathGSyns *tensor.Float32

	// Synapses are the synapse level variables (weights etc).
	// These do not depend on the data parallel index, unlike [SynapseTraces].
	// [NSyns][Vars]; NSyns = [Layer][SendPaths][SendNeurons][Syns]
	//gosl:dims 2
	Synapses *tensor.Float32

	// SynapseTraces are synaptic variables that depend on the data
	// parallel index, for accumulating learning traces and weight changes per data.
	// This is the largest data size, so multiple instances are used
	// to handle larger networks.
	// [NSyns][Data][Vars]; NSyns = [Layer][SendPaths][SendNeurons][Syns]
	//gosl:dims 3
	SynapseTraces *tensor.Float32
)

vars are all the global vars for axon GPU / CPU computation.

var ComputeGPU *gpu.GPU

ComputeGPU is the compute GPU device.

var GPUSystem *gpu.ComputeSystem

GPUSystem is a GPU compute System with kernels operating on the same set of data variables.

var NeuronVarProps = map[string]string{

	"Spike":  `cat:"Act"`,
	"Spiked": `cat:"Act"`,
	"Act":    `cat:"Act"`,
	"ActInt": `cat:"Act"`,
	"Ge":     `cat:"Act" range:"2"`,
	"Gi":     `cat:"Act" auto-scale:"+"`,
	"Gk":     `cat:"Act" auto-scale:"+"`,
	"Inet":   `cat:"Act"`,
	"Vm":     `cat:"Act" min:"0" max:"1"`,
	"VmDend": `cat:"Act" min:"0" max:"1"`,
	"ISI":    `cat:"Act" auto-scale:"+"`,
	"ISIAvg": `cat:"Act" auto-scale:"+"`,
	"Ext":    `cat:"Act"`,
	"Target": `cat:"Act"`,

	"CaM":     `cat:"Learn"`,
	"CaP":     `cat:"Learn"`,
	"CaD":     `cat:"Learn"`,
	"CaDPrev": `cat:"Learn"`,

	"CaSyn":    `cat:"Learn"`,
	"LearnCa":  `cat:"Learn"`,
	"LearnCaM": `cat:"Learn"`,
	"LearnCaP": `cat:"Learn"`,
	"LearnCaD": `cat:"Learn"`,
	"CaDiff":   `cat:"Learn"`,
	"RLRate":   `cat:"Learn" auto-scale:"+"`,

	"GnmdaSyn":   `cat:"Excite" auto-scale:"+"`,
	"Gnmda":      `cat:"Excite" auto-scale:"+"`,
	"GnmdaLrn":   `cat:"Excite" auto-scale:"+"`,
	"GnmdaMaint": `cat:"Excite" auto-scale:"+"`,
	"NmdaCa":     `cat:"Excite" auto-scale:"+"`,

	"Gvgcc":     `cat:"Excite" auto-scale:"+"`,
	"VgccM":     `cat:"Excite"`,
	"VgccH":     `cat:"Excite"`,
	"VgccCa":    `cat:"Excite" auto-scale:"+"`,
	"VgccCaInt": `cat:"Excite" auto-scale:"+"`,

	"Burst":      `cat:"Excite"`,
	"BurstPrv":   `cat:"Excite"`,
	"CtxtGe":     `cat:"Excite"`,
	"CtxtGeRaw":  `cat:"Excite"`,
	"CtxtGeOrig": `cat:"Excite"`,

	"GgabaB": `cat:"Inhib" auto-scale:"+"`,
	"GABAB":  `cat:"Inhib" auto-scale:"+"`,
	"GABABx": `cat:"Inhib" auto-scale:"+"`,

	"Gak":      `cat:"Inhib" auto-scale:"+"`,
	"SSGiDend": `cat:"Inhib" auto-scale:"+"`,

	"GknaMed":  `cat:"Inhib" auto-scale:"+"`,
	"GknaSlow": `cat:"Inhib" auto-scale:"+"`,
	"Gkir":     `cat:"Inhib"`,
	"KirM":     `cat:"Inhib"`,

	"Gsk":    `cat:"Inhib"`,
	"SKCaIn": `cat:"Inhib"`,
	"SKCaR":  `cat:"Inhib"`,
	"SKCaM":  `cat:"Inhib"`,

	"Gmahp":  `cat:"Inhib" auto-scale:"+"`,
	"MahpN":  `cat:"Inhib" auto-scale:"+"`,
	"Gsahp":  `cat:"Inhib" auto-scale:"+"`,
	"SahpCa": `cat:"Inhib"`,
	"SahpN":  `cat:"Inhib"`,

	"ActM":     `cat:"Stats"`,
	"ActP":     `cat:"Stats"`,
	"Beta1":    `cat:"Stats"`,
	"Beta2":    `cat:"Stats"`,
	"CaPMax":   `cat:"Stats"`,
	"CaPMaxCa": `cat:"Stats"`,

	"GeNoise":  `cat:"Gmisc"`,
	"GeNoiseP": `cat:"Gmisc"`,
	"GiNoise":  `cat:"Gmisc"`,
	"GiNoiseP": `cat:"Gmisc"`,

	"GeExt":     `cat:"Gmisc"`,
	"GeRaw":     `cat:"Gmisc"`,
	"GeSyn":     `cat:"Gmisc" range:"2"`,
	"GiRaw":     `cat:"Gmisc"`,
	"GiSyn":     `cat:"Gmisc"`,
	"GeInt":     `cat:"Gmisc" range:"2"`,
	"GeIntNorm": `cat:"Gmisc" range:"1"`,
	"GiInt":     `cat:"Gmisc" range:"2"`,
	"GModRaw":   `cat:"Gmisc"`,
	"GModSyn":   `cat:"Gmisc"`,
	"SMaintP":   `cat:"Gmisc"`,
	"GMaintRaw": `cat:"Gmisc"`,
	"GMaintSyn": `cat:"Gmisc"`,

	"NeurFlags": `display:"-"`,

	"CaBins": `cat:"Spikes"`,

	"ActAvg":  `cat:"Avg"`,
	"AvgPct":  `cat:"Avg" range:"2"`,
	"TrgAvg":  `cat:"Avg" range:"2"`,
	"DTrgAvg": `cat:"Avg" auto-scale:"+"`,
	"AvgDif":  `cat:"Avg"`,
	"GeBase":  `cat:"Avg"`,
	"GiBase":  `cat:"Avg"`,

	"DA":    `cat:"Learn" doc:"dopamine neuromodulation (layer-level variable)"`,
	"ACh":   `cat:"Learn" doc:"cholinergic neuromodulation (layer-level variable)"`,
	"NE":    `cat:"Learn" doc:"norepinepherine (noradrenaline) neuromodulation  (layer-level variable)"`,
	"Ser":   `cat:"Learn" doc:"serotonin neuromodulation (layer-level variable)"`,
	"Gated": `cat:"Learn" doc:"signals whether the layer gated"`,
}

NeuronVarProps has display properties for neuron variables.

var PCAStrongThr = 0.01

PCAStrongThr is the threshold for counting PCA eigenvalues as "strong".

var ScriptParams = `` /* 269-byte string literal not displayed */

ScriptParams is a template for yaegi interpreted parameters

var SynapseVarProps = map[string]string{
	"Wt":    `cat:"Wts"`,
	"LWt":   `cat:"Wts"`,
	"SWt":   `cat:"Wts"`,
	"DWt":   `cat:"Wts" auto-scale:"+"`,
	"DSWt":  `cat:"Wts" auto-scale:"+"`,
	"CaM":   `cat:"Wts" auto-scale:"+"`,
	"CaP":   `cat:"Wts" auto-scale:"+"`,
	"CaD":   `cat:"Wts" auto-scale:"+"`,
	"Tr":    `cat:"Wts" auto-scale:"+"`,
	"DTr":   `cat:"Wts" auto-scale:"+"`,
	"DiDWt": `cat:"Wts" auto-scale:"+"`,
}

SynapseVarProps has all of the display properties for synapse variables, including desc tooltips

var TensorStrides tensor.Uint32

Tensor stride variables

var UseGPU bool

UseGPU indicates whether to use GPU vs. CPU.

var VarCategories = []emer.VarCategory{
	{"Act", "basic activation variables, including conductances, current, Vm, spiking"},
	{"Learn", "calcium-based learning variables and other related learning factors"},
	{"Excite", "excitatory channels including NMDA, Vgcc and other excitatory inputs"},
	{"Inhib", "inhibitory channels including GABA inhibition, after hyperpolarization (AHP) and other K channels"},
	{"Stats", "statistics and aggregate values"},
	{"Gmisc", "more detailed conductance (G) variables for integration and other computational values"},
	{"Avg", "longer-term average variables and homeostatic regulation"},
	{"Spikes", "Binned spike counts used for learning"},
	{"Wts", "weights and other synaptic-level variables"},
}
var ViewTimeCycles = []int{1, 10, 25, 50, 100, 150, 200}

ViewTimeCycles are the cycle intervals associated with each ViewTimes level.

Functions

func AdaptGiLayer

func AdaptGiLayer(li uint32)

AdaptGiLayer is the kernel over Layers (not * Data) to run the adapting-inhibition function.

func ApplyExtsNeuron

func ApplyExtsNeuron(i uint32)

ApplyExtsNeuron is the kernel over Neurons * Data to apply Ext external input to the neurons receiving inputs.

func ApplyLayerSheet

func ApplyLayerSheet(net *Network, sheet *params.Sheet[*LayerParams]) bool

ApplyLayerSheet applies Layer parameters from given sheet, returning true if any applied.

func ApplyParamSheets

func ApplyParamSheets(net *Network, layer *params.Sheet[*LayerParams], path *params.Sheet[*PathParams]) bool

ApplyParamSheets applies Layer and Path parameters from given sheets, returning true if any applied.

func ApplyPathSheet

func ApplyPathSheet(net *Network, sheet *params.Sheet[*PathParams]) bool

ApplyPathSheet applies Path parameters from given sheet, returning true if any applied.

func AvgMaxIntVarIndex

func AvgMaxIntVarIndex(vr AvgMaxVars, am AvgMax) uint32

AvgMaxIntVarIndex returns the variable index for accessing Pools AvgMax int32 variables (the Avg value is actually a Sum at this point). There are only values for the Cycle phase level.

func AvgMaxVarIndex

func AvgMaxVarIndex(vr AvgMaxVars, phase AvgMaxPhases, am AvgMax) uint32

AvgMaxVarIndex returns the variable index for accessing Pools AvgMax float32 variables.

func Beta1Neuron

func Beta1Neuron(i uint32)

Beta1Neuron is the kernel over Neurons * Data to do neuron-level updating at Beta1.

func Beta2Neuron

func Beta2Neuron(i uint32)

Beta2Neuron is the kernel over Neurons * Data to do neuron-level updating at Beta2.

func BetweenGi

func BetweenGi(i uint32)

BetweenGi is the kernel over Layers * Data for updating Gi inhibition between layers.

func CloseLogFiles

func CloseLogFiles(ls *looper.Stacks, statsDir *tensorfs.Node, exclude ...enums.Enum)

CloseLogFiles closes all the log files for each mode and level of the looper, excluding the given level(s).

func CycleInc

func CycleInc(i uint32)

CycleInc is the kernel over 1 call to increment the cycle counter.

func CycleNeuron

func CycleNeuron(i uint32)

CycleNeuron is the kernel over Neurons * Data to do one cycle (msec) of updating at the neuron level.

func CyclePost

func CyclePost(i uint32)

CyclePost is the kernel over Layers * Data to update state after each Cycle of updating.

func DWtFromDiSyn

func DWtFromDiSyn(syni uint32)

DWtFromDiSyn is the kernel over Synapses (not * Data) to integrate DWt over Di data parallel values.

func DWtSubMeanNeuron

func DWtSubMeanNeuron(ni uint32)

DWtSubMeanNeuron is the kernel over Paths to compute DWt - mean(DWt) for each recv neuron.

func DWtSyn

func DWtSyn(i uint32)

DWtSyn is the kernel over Synapses * Data to compute weight changes (learning).

func GPUInit

func GPUInit()

GPUInit initializes the GPU compute system, configuring system(s), variables and kernels. It is safe to call multiple times: detects if already run.
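A minimal sketch of typical GPU setup using these functions (the import path is an assumption; device selection is handled internally):

	import "github.com/emer/axon/v2/axon"

	func setupGPU() {
		axon.GPUInit()     // configure the GPU compute system (safe to call multiple times)
		axon.UseGPU = true // Run* kernel wrappers now dispatch to the GPU instead of the CPU
		// call axon.GPURelease() at program exit to free GPU resources
	}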

func GPURelease

func GPURelease()

GPURelease releases the GPU compute system resources. Call this at program exit.

func GPUTestWrite

func GPUTestWrite(i uint32)

GPUTestWrite is the kernel over Neurons * Data for testing the unique writing of data on GPU.

func GatherSpikes

func GatherSpikes(i uint32)

GatherSpikes is the kernel over Neurons * Data for gathering spike inputs sent on the previous cycle.

func GetRandomNumber

func GetRandomNumber(index uint32, counter uint64, funIndex RandFunIndex) float32

GetRandomNumber returns a random number that depends on the index, counter and function index. We increment the counter after each cycle, so that we get new random numbers. This whole scheme exists to ensure equal results under different multithreading settings.
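A minimal sketch of the determinism this provides (axon.RandFunIndex(0) is an arbitrary placeholder for one of the package's function-index values):

	func sameRandom() bool {
		// the same (index, counter, function) triple always yields the same value,
		// regardless of how the work is divided across threads
		r1 := axon.GetRandomNumber(7, 42, axon.RandFunIndex(0))
		r2 := axon.GetRandomNumber(7, 42, axon.RandFunIndex(0))
		return r1 == r2 // true: deterministic for identical inputs
	}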

func GlobalSetRew

func GlobalSetRew(di uint32, rew float32, hasRew bool)

GlobalSetRew is a convenience function for setting the external reward state in Globals variables

func GlobalsReset

func GlobalsReset()

GlobalsReset resets all global values to 0, for all NData

func HashEncodeSlice

func HashEncodeSlice(slice []float32) string

func IndexToAvgMaxIntVar

func IndexToAvgMaxIntVar(vi uint32) (vr AvgMaxVars, am AvgMax)

IndexToAvgMaxIntVar returns the AvgMaxVar indexes from overall PoolInt variable index.

func IndexToAvgMaxVar

func IndexToAvgMaxVar(vi uint32) (vr AvgMaxVars, phase AvgMaxPhases, am AvgMax)

IndexToAvgMaxVar returns the AvgMaxVar indexes from overall Pool variable index.

func InitGBuffsPath

func InitGBuffsPath(pti uint32)

InitGBuffsPath is the kernel over Paths to initialize PathGBuf, PathGSyns.

func IsExtLayerType

func IsExtLayerType(lt LayerTypes) bool

IsExtLayerType returns true if the layer type deals with external input: Input, Target, Compare

func JsonToParams

func JsonToParams(b []byte) string

JsonToParams reformats JSON output into suitable params display output.

func LayerGi

func LayerGi(i uint32)

LayerGi is the kernel over Layers * Data for updating Gi inhibition.

func LogFilename

func LogFilename(netName, runName, logName string) string

LogFilename returns a standard log file name as netName_runName_logName.tsv

func LooperStandard

func LooperStandard(ls *looper.Stacks, net *Network, viewFunc func(mode enums.Enum) *NetViewUpdate, plusStart, plusEnd int, cycle, trial, trainMode enums.Enum)

LooperStandard adds all the standard Axon Trial and Cycle level processing calls to the given Looper Stacks. cycle and trial are the enums for the looper levels, trainMode is the training mode enum value.

  • minus and plus phases of the theta cycle (trial), at plusStart (150) and plusEnd (199) cycles.
  • embedded beta phases within theta, that record Beta1 and Beta2 states.
  • net.Cycle() at every cycle step.
  • net.DWt() and net.WtFromDWt() learning calls in training mode, with netview update between these two calls if it is visible and viewing synapse variables.
  • netview update calls at appropriate levels (no-op if no GUI)

func LooperUpdateNetView

func LooperUpdateNetView(ls *looper.Stacks, cycle, trial enums.Enum, viewFunc func(mode enums.Enum) *NetViewUpdate)

LooperUpdateNetView adds netview update calls to the given trial and cycle levels for given NetViewUpdate associated with the mode, returned by the given viewFunc function. The countersFunc returns the counters and other stats to display at the bottom of the NetView, based on given mode and level.

func MinusPhaseNeuron

func MinusPhaseNeuron(i uint32)

MinusPhaseNeuron is the kernel over Neurons * Data to do neuron-level updating after end of minus phase.

func MinusPhasePool

func MinusPhasePool(pi uint32)

MinusPhasePool is the kernel over Pools to do pool-level updating after end of minus phase.

func MinusPhasePost

func MinusPhasePost(li uint32)

MinusPhasePost does special algorithm post processing.

func NeuronClearFlag

func NeuronClearFlag(flag NeuronFlags, ni, di uint32)

func NeuronHasFlag

func NeuronHasFlag(flag NeuronFlags, ni, di uint32) bool

NeuronHasFlag returns whether the given flag is set for the given neuron and data index.

func NeuronIsOff

func NeuronIsOff(ni uint32) bool

NeuronIsOff returns true if the neuron has been turned off (lesioned). Only checks the first data item -- all should be consistent.

func NeuronSetFlag

func NeuronSetFlag(flag NeuronFlags, ni, di uint32)

func NeuronVarIndexByName

func NeuronVarIndexByName(varNm string) (int, error)

NeuronVarIndexByName returns the index of the variable in the Neuron, or error

func NewStateLayer

func NewStateLayer(li uint32)

NewStateLayer is the kernel over Layers (not Data) which does new state on pools as well.

func NewStateNeuron

func NewStateNeuron(i uint32)

NewStateNeuron is the kernel over Neurons * Data to do new state on neurons (decay).

func OpenLogFile

func OpenLogFile(on bool, dt *table.Table, netName, runName, logName string)

OpenLogFile, if on == true, sets the log file for given table using given netName, runName, and logName in order.

func OpenLogFiles

func OpenLogFiles(ls *looper.Stacks, statsDir *tensorfs.Node, netName, runName string, modeLevels [][]string)

OpenLogFiles opens the log files for modes and levels of the looper, based on the lists of level names, ordered by modes in numerical order. The netName and runName are used for naming the file, along with the mode_level in lower case.

func PlusPhaseNeuron

func PlusPhaseNeuron(i uint32)

PlusPhaseNeuron is the kernel over Neurons * Data to do neuron-level updating after end of plus phase.

func PlusPhasePool

func PlusPhasePool(i uint32)

PlusPhasePool is the kernel over Pools * Data to do pool-level updating after end of plus phase.

func PlusPhasePost

func PlusPhasePost(li uint32)

PlusPhasePost does special algorithm post processing.

func PlusPhaseStartContext

func PlusPhaseStartContext(i uint32)

PlusPhaseStartContext is the kernel over 1 call to call PlusPhaseStart on context.

func PlusPhaseStartNeuron

func PlusPhaseStartNeuron(i uint32)

PlusPhaseStartNeuron is the kernel over Neurons * Data to do neuron-level updating at start of plus phase.

func PoolAvgDifCalc

func PoolAvgDifCalc(pi, di uint32)

PoolAvgDifCalc does Calc on Cycle level, and re-inits

func PoolAvgDifInit

func PoolAvgDifInit(pi, di uint32)

PoolAvgDifInit initializes the AvgMax AvgDif Int accumulators for Cycle vals for update start. Always left init'd so generally unnecessary. pi = global pool index.

func PoolAvgDifUpdate

func PoolAvgDifUpdate(pi, di uint32, avdif float32)

PoolAvgDifUpdate updates the AvgMax values for AvgDif Var. pi = global pool index.

func PoolAvgMax

func PoolAvgMax(vr AvgMaxVars, phase AvgMaxPhases, am AvgMax, pi, di uint32) float32

PoolAvgMax returns an AvgMax value for given variable, phase, and Avg or Max, for given pool index and data index.

func PoolAvgMaxCalc

func PoolAvgMaxCalc(pi, di uint32)

PoolAvgMaxCalc does Calc on Cycle level, and re-inits

func PoolAvgMaxCalcVar

func PoolAvgMaxCalcVar(vr AvgMaxVars, pi, di uint32)

PoolAvgMaxCalcVar does Calc on Cycle level, and re-inits, for given Var

func PoolAvgMaxInit

func PoolAvgMaxInit(pi, di uint32)

PoolAvgMaxInit initializes the AvgMax Int accumulators for Cycle vals for update start. Always left init'd so generally unnecessary. pi = global pool index.

func PoolAvgMaxUpdate

func PoolAvgMaxUpdate(pi, di, ni uint32)

PoolAvgMaxUpdate updates the AvgMax values based on current neuron values. pi = global pool index.

func PoolAvgMaxUpdateVar

func PoolAvgMaxUpdateVar(vr AvgMaxVars, pi, di uint32, val float32)

PoolAvgMaxUpdateVar updates the AvgMax value based on given value. pi = global pool index.

func PoolAvgMaxUpdateVarNonAtomic

func PoolAvgMaxUpdateVarNonAtomic(vr AvgMaxVars, pi, di uint32, val float32)

PoolAvgMaxUpdateVarNonAtomic updates the AvgMax value based on given value. non-atomic version: only when explicitly looping over neurons. pi = global pool index.

func PoolAvgMaxZero

func PoolAvgMaxZero(pi, di uint32)

PoolAvgMaxZero initializes all the AvgMax values to zero. pi = global pool index.

func PoolCycleToMinus

func PoolCycleToMinus(pi, di uint32)

PoolCycleToMinus grabs current Cycle values into the Minus phase values, and Plus values into Prev.

func PoolCycleToPlus

func PoolCycleToPlus(pi, di uint32)

PoolCycleToPlus grabs current Cycle values into the Plus phase values.

func PoolGi

func PoolGi(i uint32)

PoolGi is the kernel over Pools * Data for updating Gi inhibition.

func PoolInhib

func PoolInhib(fb *fsfffb.GiParams, pi, di uint32, gimult float32)

PoolInhib computes FSFFFB inhibition for a pool, based on aggregated FFs and FBs spiking values

func PoolInhibDecay

func PoolInhibDecay(pi, di uint32, decay float32)

PoolInhibDecay reduces inhibition values by the given decay proportion.

func PoolInhibGiFromFSSS

func PoolInhibGiFromFSSS(pi, di uint32) float32

PoolInhibGiFromFSSS returns the sum of FSGi and SSGi as the overall inhibition.

func PoolInhibInit

func PoolInhibInit(pi, di uint32)

func PoolInhibInitRaw

func PoolInhibInitRaw(pi, di uint32)

PoolInhibInitRaw clears raw spike counters -- done every cycle prior to accumulating

func PoolInhibIntToRaw

func PoolInhibIntToRaw(pi, di uint32)

PoolInhibIntToRaw converts the int-accumulated values into float32 raw values.

func PoolInhibLayerMax

func PoolInhibLayerMax(pi, di uint32, liGi float32)

PoolInhibLayerMax updates the given pool-level inhib values from the given layer-level Gi, with the resulting value being the Max of the two.

func PoolInhibPoolMax

func PoolInhibPoolMax(pi, di uint32, piGi float32)

PoolInhibPoolMax updates the given layer-level inhib values from the given pool-level Gi, with the resulting value being the Max of the two.

func PoolInhibRawIncrInt

func PoolInhibRawIncrInt(pi, di uint32, spike, geRaw, geExt float32)

PoolInhibRawIncrInt increments the int-based raw values from the given neuron-based input values (typically atomic InterlockedAdd is used instead).

func PoolInhibSaveOrig

func PoolInhibSaveOrig(pi, di uint32)

PoolInhibSaveOrig saves the current Gi values as the original values.

func PoolInhibSpikesFromRaw

func PoolInhibSpikesFromRaw(pi, di uint32)

PoolInhibSpikesFromRaw updates spike values from the raw values, dividing by the given number of neurons in the pool.

func PoolInhibZero

func PoolInhibZero(pi, di uint32)

PoolInhibZero resets all accumulating inhibition factors to 0

func PoolInit

func PoolInit(pi, di uint32)

PoolInit is called during InitActs.

func PoolIntVarName

func PoolIntVarName(vi uint32) string

func PoolNNeurons

func PoolNNeurons(pi uint32) int32

PoolNNeurons returns the number of neurons in the given pool. pi = global pool index.

func PoolPoolGi

func PoolPoolGi(ctx *Context, pi, di uint32)

PoolPoolGi computes the total inhibitory conductance for the pool.

func PoolTestValues

func PoolTestValues(pi, di uint32, layKey string, vals map[string]float32)

PoolTestValues returns a map of CaD.Avg values, which provides an integrated summary of pool activity, for testing.

func PoolVarName

func PoolVarName(vi uint32) string

func ReadFromGPU

func ReadFromGPU(vars ...GPUVars)

ReadFromGPU starts the process of copying the given vars back from the GPU.

func RubiconNormFun

func RubiconNormFun(raw float32) float32

RubiconNormFun is the normalizing function applied to the sum of all weighted raw values: 1 - (1 / (1 + usRaw.Sum()))

func RubiconUSStimValue

func RubiconUSStimValue(di uint32, usIndex uint32, valence ValenceTypes) float32

RubiconUSStimValue returns stimulus value for US at given index and valence (includes Cost). If US > 0.01, a full 1 US activation is returned.

func RunAdaptGiLayer

func RunAdaptGiLayer(n int)

RunAdaptGiLayer runs the AdaptGiLayer kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneAdaptGiLayer call does Run and Done for a single run-and-sync case.

func RunAdaptGiLayerCPU

func RunAdaptGiLayerCPU(n int)

RunAdaptGiLayerCPU runs the AdaptGiLayer kernel on the CPU.

func RunAdaptGiLayerGPU

func RunAdaptGiLayerGPU(n int)

RunAdaptGiLayerGPU runs the AdaptGiLayer kernel on the GPU. See RunAdaptGiLayer for more info.

func RunApplyExtsNeuron

func RunApplyExtsNeuron(n int)

RunApplyExtsNeuron runs the ApplyExtsNeuron kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneApplyExtsNeuron call does Run and Done for a single run-and-sync case.

func RunApplyExtsNeuronCPU

func RunApplyExtsNeuronCPU(n int)

RunApplyExtsNeuronCPU runs the ApplyExtsNeuron kernel on the CPU.

func RunApplyExtsNeuronGPU

func RunApplyExtsNeuronGPU(n int)

RunApplyExtsNeuronGPU runs the ApplyExtsNeuron kernel on the GPU. See RunApplyExtsNeuron for more info.

func RunBeta1Neuron

func RunBeta1Neuron(n int)

RunBeta1Neuron runs the Beta1Neuron kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneBeta1Neuron call does Run and Done for a single run-and-sync case.

func RunBeta1NeuronCPU

func RunBeta1NeuronCPU(n int)

RunBeta1NeuronCPU runs the Beta1Neuron kernel on the CPU.

func RunBeta1NeuronGPU

func RunBeta1NeuronGPU(n int)

RunBeta1NeuronGPU runs the Beta1Neuron kernel on the GPU. See RunBeta1Neuron for more info.

func RunBeta2Neuron

func RunBeta2Neuron(n int)

RunBeta2Neuron runs the Beta2Neuron kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneBeta2Neuron call does Run and Done for a single run-and-sync case.

func RunBeta2NeuronCPU

func RunBeta2NeuronCPU(n int)

RunBeta2NeuronCPU runs the Beta2Neuron kernel on the CPU.

func RunBeta2NeuronGPU

func RunBeta2NeuronGPU(n int)

RunBeta2NeuronGPU runs the Beta2Neuron kernel on the GPU. See RunBeta2Neuron for more info.

func RunBetweenGi

func RunBetweenGi(n int)

RunBetweenGi runs the BetweenGi kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneBetweenGi call does Run and Done for a single run-and-sync case.

func RunBetweenGiCPU

func RunBetweenGiCPU(n int)

RunBetweenGiCPU runs the BetweenGi kernel on the CPU.

func RunBetweenGiGPU

func RunBetweenGiGPU(n int)

RunBetweenGiGPU runs the BetweenGi kernel on the GPU. See RunBetweenGi for more info.

func RunCycleInc

func RunCycleInc(n int)

RunCycleInc runs the CycleInc kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneCycleInc call does Run and Done for a single run-and-sync case.

func RunCycleIncCPU

func RunCycleIncCPU(n int)

RunCycleIncCPU runs the CycleInc kernel on the CPU.

func RunCycleIncGPU

func RunCycleIncGPU(n int)

RunCycleIncGPU runs the CycleInc kernel on the GPU. See RunCycleInc for more info.

func RunCycleNeuron

func RunCycleNeuron(n int)

RunCycleNeuron runs the CycleNeuron kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneCycleNeuron call does Run and Done for a single run-and-sync case.

func RunCycleNeuronCPU

func RunCycleNeuronCPU(n int)

RunCycleNeuronCPU runs the CycleNeuron kernel on the CPU.

func RunCycleNeuronGPU

func RunCycleNeuronGPU(n int)

RunCycleNeuronGPU runs the CycleNeuron kernel on the GPU. See RunCycleNeuron for more info.

func RunCyclePost

func RunCyclePost(n int)

RunCyclePost runs the CyclePost kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneCyclePost call does Run and Done for a single run-and-sync case.

func RunCyclePostCPU

func RunCyclePostCPU(n int)

RunCyclePostCPU runs the CyclePost kernel on the CPU.

func RunCyclePostGPU

func RunCyclePostGPU(n int)

RunCyclePostGPU runs the CyclePost kernel on the GPU. See RunCyclePost for more info.

func RunDWtFromDiSyn

func RunDWtFromDiSyn(n int)

RunDWtFromDiSyn runs the DWtFromDiSyn kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneDWtFromDiSyn call does Run and Done for a single run-and-sync case.

func RunDWtFromDiSynCPU

func RunDWtFromDiSynCPU(n int)

RunDWtFromDiSynCPU runs the DWtFromDiSyn kernel on the CPU.

func RunDWtFromDiSynGPU

func RunDWtFromDiSynGPU(n int)

RunDWtFromDiSynGPU runs the DWtFromDiSyn kernel on the GPU. See RunDWtFromDiSyn for more info.

func RunDWtSubMeanNeuron

func RunDWtSubMeanNeuron(n int)

RunDWtSubMeanNeuron runs the DWtSubMeanNeuron kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneDWtSubMeanNeuron call does Run and Done for a single run-and-sync case.

func RunDWtSubMeanNeuronCPU

func RunDWtSubMeanNeuronCPU(n int)

RunDWtSubMeanNeuronCPU runs the DWtSubMeanNeuron kernel on the CPU.

func RunDWtSubMeanNeuronGPU

func RunDWtSubMeanNeuronGPU(n int)

RunDWtSubMeanNeuronGPU runs the DWtSubMeanNeuron kernel on the GPU. See RunDWtSubMeanNeuron for more info.

func RunDWtSyn

func RunDWtSyn(n int)

RunDWtSyn runs the DWtSyn kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneDWtSyn call does Run and Done for a single run-and-sync case.

func RunDWtSynCPU

func RunDWtSynCPU(n int)

RunDWtSynCPU runs the DWtSyn kernel on the CPU.

func RunDWtSynGPU

func RunDWtSynGPU(n int)

RunDWtSynGPU runs the DWtSyn kernel on the GPU. See RunDWtSyn for more info.

func RunDone

func RunDone(syncVars ...GPUVars)

RunDone must be called after Run* calls to start compute kernels. This actually submits the kernel jobs to the GPU, and adds commands to synchronize the given variables back from the GPU to the CPU. After this function completes, the GPU results will be available in the specified variables.
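For example, a sketch of the batched pattern over an illustrative subset of the cycle-level kernels (the element counts are placeholders derived from the network's index structures, and this is not the exact kernel sequence used by Cycle()):

	func cycleKernels(nNeurData, nLayData int) {
		axon.RunGatherSpikes(nNeurData) // Neurons * Data
		axon.RunCycleNeuron(nNeurData)  // Neurons * Data
		axon.RunCyclePost(nLayData)     // Layers * Data
		axon.RunCycleInc(1)             // single call: increment the cycle counter
		// all of the above go into one GPU command submission; RunDoneLayers
		// submits it and syncs layer-level state (and Context, Globals) back to the CPU
		axon.RunDoneLayers()
	}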

func RunDoneContext

func RunDoneContext()

func RunDoneLayers

func RunDoneLayers()

RunDoneLayers finishes running and copies all the layer-level state (and Context, Globals) from the GPU, but NOT neurons. This is the minimal case for Cycle().

func RunDoneLayersNeurons

func RunDoneLayersNeurons()

RunDoneLayersNeurons finishes running and copies all the layer-level and neuron state from the GPU, including context and globals.

func RunDoneLayersSynapses

func RunDoneLayersSynapses()

RunDoneLayersSynapses finishes running and copies the Layers and Synapse state back. This is sufficient for saving synaptic weights.

func RunDoneSynapses

func RunDoneSynapses()

RunDoneSynapses finishes running and copies the Synapse state back.

func RunDoneSynapsesTrace

func RunDoneSynapsesTrace()

RunDoneSynapsesTrace finishes running and copies the Synapse state back, including SynapseTraces, for visualization.

func RunGPUSync

func RunGPUSync()

RunGPUSync can be called to synchronize data between CPU and GPU. Any prior ToGPU* calls will execute to send data to the GPU, and any subsequent RunDone* calls will copy data back from the GPU.

func RunGPUTestWrite

func RunGPUTestWrite(n int)

RunGPUTestWrite runs the GPUTestWrite kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneGPUTestWrite call does Run and Done for a single run-and-sync case.

func RunGPUTestWriteCPU

func RunGPUTestWriteCPU(n int)

RunGPUTestWriteCPU runs the GPUTestWrite kernel on the CPU.

func RunGPUTestWriteGPU

func RunGPUTestWriteGPU(n int)

RunGPUTestWriteGPU runs the GPUTestWrite kernel on the GPU. See RunGPUTestWrite for more info.

func RunGatherSpikes

func RunGatherSpikes(n int)

RunGatherSpikes runs the GatherSpikes kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneGatherSpikes call does Run and Done for a single run-and-sync case.

func RunGatherSpikesCPU

func RunGatherSpikesCPU(n int)

RunGatherSpikesCPU runs the GatherSpikes kernel on the CPU.

func RunGatherSpikesGPU

func RunGatherSpikesGPU(n int)

RunGatherSpikesGPU runs the GatherSpikes kernel on the GPU. See RunGatherSpikes for more info.

func RunInitGBuffsPath

func RunInitGBuffsPath(n int)

RunInitGBuffsPath runs the InitGBuffsPath kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneInitGBuffsPath call does Run and Done for a single run-and-sync case.

func RunInitGBuffsPathCPU

func RunInitGBuffsPathCPU(n int)

RunInitGBuffsPathCPU runs the InitGBuffsPath kernel on the CPU.

func RunInitGBuffsPathGPU

func RunInitGBuffsPathGPU(n int)

RunInitGBuffsPathGPU runs the InitGBuffsPath kernel on the GPU. See RunInitGBuffsPath for more info.

func RunLayerGi

func RunLayerGi(n int)

RunLayerGi runs the LayerGi kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneLayerGi call does Run and Done for a single run-and-sync case.

func RunLayerGiCPU

func RunLayerGiCPU(n int)

RunLayerGiCPU runs the LayerGi kernel on the CPU.

func RunLayerGiGPU

func RunLayerGiGPU(n int)

RunLayerGiGPU runs the LayerGi kernel on the GPU. See RunLayerGi for more info.

func RunMinusPhaseNeuron

func RunMinusPhaseNeuron(n int)

RunMinusPhaseNeuron runs the MinusPhaseNeuron kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneMinusPhaseNeuron call does Run and Done for a single run-and-sync case.

func RunMinusPhaseNeuronCPU

func RunMinusPhaseNeuronCPU(n int)

RunMinusPhaseNeuronCPU runs the MinusPhaseNeuron kernel on the CPU.

func RunMinusPhaseNeuronGPU

func RunMinusPhaseNeuronGPU(n int)

RunMinusPhaseNeuronGPU runs the MinusPhaseNeuron kernel on the GPU. See RunMinusPhaseNeuron for more info.

func RunMinusPhasePool

func RunMinusPhasePool(n int)

RunMinusPhasePool runs the MinusPhasePool kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneMinusPhasePool call does Run and Done for a single run-and-sync case.

func RunMinusPhasePoolCPU

func RunMinusPhasePoolCPU(n int)

RunMinusPhasePoolCPU runs the MinusPhasePool kernel on the CPU.

func RunMinusPhasePoolGPU

func RunMinusPhasePoolGPU(n int)

RunMinusPhasePoolGPU runs the MinusPhasePool kernel on the GPU. See RunMinusPhasePool for more info.

func RunMinusPhasePost

func RunMinusPhasePost(n int)

RunMinusPhasePost runs the MinusPhasePost kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneMinusPhasePost call does Run and Done for a single run-and-sync case.

func RunMinusPhasePostCPU

func RunMinusPhasePostCPU(n int)

RunMinusPhasePostCPU runs the MinusPhasePost kernel on the CPU.

func RunMinusPhasePostGPU

func RunMinusPhasePostGPU(n int)

RunMinusPhasePostGPU runs the MinusPhasePost kernel on the GPU. See RunMinusPhasePost for more info.

func RunNewStateLayer

func RunNewStateLayer(n int)

RunNewStateLayer runs the NewStateLayer kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneNewStateLayer call does Run and Done for a single run-and-sync case.

func RunNewStateLayerCPU

func RunNewStateLayerCPU(n int)

RunNewStateLayerCPU runs the NewStateLayer kernel on the CPU.

func RunNewStateLayerGPU

func RunNewStateLayerGPU(n int)

RunNewStateLayerGPU runs the NewStateLayer kernel on the GPU. See RunNewStateLayer for more info.

func RunNewStateNeuron

func RunNewStateNeuron(n int)

RunNewStateNeuron runs the NewStateNeuron kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneNewStateNeuron call does Run and Done for a single run-and-sync case.

func RunNewStateNeuronCPU

func RunNewStateNeuronCPU(n int)

RunNewStateNeuronCPU runs the NewStateNeuron kernel on the CPU.

func RunNewStateNeuronGPU

func RunNewStateNeuronGPU(n int)

RunNewStateNeuronGPU runs the NewStateNeuron kernel on the GPU. See RunNewStateNeuron for more info.

func RunOneAdaptGiLayer

func RunOneAdaptGiLayer(n int, syncVars ...GPUVars)

RunOneAdaptGiLayer runs the AdaptGiLayer kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneApplyExtsNeuron

func RunOneApplyExtsNeuron(n int, syncVars ...GPUVars)

RunOneApplyExtsNeuron runs the ApplyExtsNeuron kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneBeta1Neuron

func RunOneBeta1Neuron(n int, syncVars ...GPUVars)

RunOneBeta1Neuron runs the Beta1Neuron kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneBeta2Neuron

func RunOneBeta2Neuron(n int, syncVars ...GPUVars)

RunOneBeta2Neuron runs the Beta2Neuron kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneBetweenGi

func RunOneBetweenGi(n int, syncVars ...GPUVars)

RunOneBetweenGi runs the BetweenGi kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneCycleInc

func RunOneCycleInc(n int, syncVars ...GPUVars)

RunOneCycleInc runs the CycleInc kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneCycleNeuron

func RunOneCycleNeuron(n int, syncVars ...GPUVars)

RunOneCycleNeuron runs the CycleNeuron kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneCyclePost

func RunOneCyclePost(n int, syncVars ...GPUVars)

RunOneCyclePost runs the CyclePost kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneDWtFromDiSyn

func RunOneDWtFromDiSyn(n int, syncVars ...GPUVars)

RunOneDWtFromDiSyn runs the DWtFromDiSyn kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneDWtSubMeanNeuron

func RunOneDWtSubMeanNeuron(n int, syncVars ...GPUVars)

RunOneDWtSubMeanNeuron runs the DWtSubMeanNeuron kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneDWtSyn

func RunOneDWtSyn(n int, syncVars ...GPUVars)

RunOneDWtSyn runs the DWtSyn kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneGPUTestWrite

func RunOneGPUTestWrite(n int, syncVars ...GPUVars)

RunOneGPUTestWrite runs the GPUTestWrite kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneGatherSpikes

func RunOneGatherSpikes(n int, syncVars ...GPUVars)

RunOneGatherSpikes runs the GatherSpikes kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneInitGBuffsPath

func RunOneInitGBuffsPath(n int, syncVars ...GPUVars)

RunOneInitGBuffsPath runs the InitGBuffsPath kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneLayerGi

func RunOneLayerGi(n int, syncVars ...GPUVars)

RunOneLayerGi runs the LayerGi kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneMinusPhaseNeuron

func RunOneMinusPhaseNeuron(n int, syncVars ...GPUVars)

RunOneMinusPhaseNeuron runs the MinusPhaseNeuron kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneMinusPhasePool

func RunOneMinusPhasePool(n int, syncVars ...GPUVars)

RunOneMinusPhasePool runs the MinusPhasePool kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneMinusPhasePost

func RunOneMinusPhasePost(n int, syncVars ...GPUVars)

RunOneMinusPhasePost runs the MinusPhasePost kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneNewStateLayer

func RunOneNewStateLayer(n int, syncVars ...GPUVars)

RunOneNewStateLayer runs the NewStateLayer kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneNewStateNeuron

func RunOneNewStateNeuron(n int, syncVars ...GPUVars)

RunOneNewStateNeuron runs the NewStateNeuron kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOnePlusPhaseNeuron

func RunOnePlusPhaseNeuron(n int, syncVars ...GPUVars)

RunOnePlusPhaseNeuron runs the PlusPhaseNeuron kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOnePlusPhasePool

func RunOnePlusPhasePool(n int, syncVars ...GPUVars)

RunOnePlusPhasePool runs the PlusPhasePool kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOnePlusPhasePost

func RunOnePlusPhasePost(n int, syncVars ...GPUVars)

RunOnePlusPhasePost runs the PlusPhasePost kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOnePlusPhaseStartContext

func RunOnePlusPhaseStartContext(n int, syncVars ...GPUVars)

RunOnePlusPhaseStartContext runs the PlusPhaseStartContext kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOnePlusPhaseStartNeuron

func RunOnePlusPhaseStartNeuron(n int, syncVars ...GPUVars)

RunOnePlusPhaseStartNeuron runs the PlusPhaseStartNeuron kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOnePoolGi

func RunOnePoolGi(n int, syncVars ...GPUVars)

RunOnePoolGi runs the PoolGi kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneSendSpike

func RunOneSendSpike(n int, syncVars ...GPUVars)

RunOneSendSpike runs the SendSpike kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneSlowAdaptLayer

func RunOneSlowAdaptLayer(n int, syncVars ...GPUVars)

RunOneSlowAdaptLayer runs the SlowAdaptLayer kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneSlowAdaptNeuron

func RunOneSlowAdaptNeuron(n int, syncVars ...GPUVars)

RunOneSlowAdaptNeuron runs the SlowAdaptNeuron kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneWtFromDWtLayer

func RunOneWtFromDWtLayer(n int, syncVars ...GPUVars)

RunOneWtFromDWtLayer runs the WtFromDWtLayer kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunOneWtFromDWtSyn

func RunOneWtFromDWtSyn(n int, syncVars ...GPUVars)

RunOneWtFromDWtSyn runs the WtFromDWtSyn kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. This version then calls RunDone with the given variables to sync after the Run, for a single-shot Run-and-Done call. If multiple kernels can be run in sequence, it is much more efficient to do multiple Run* calls followed by a RunDone call.

func RunPlusPhaseNeuron

func RunPlusPhaseNeuron(n int)

RunPlusPhaseNeuron runs the PlusPhaseNeuron kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOnePlusPhaseNeuron call does Run and Done for a single run-and-sync case.

func RunPlusPhaseNeuronCPU

func RunPlusPhaseNeuronCPU(n int)

RunPlusPhaseNeuronCPU runs the PlusPhaseNeuron kernel on the CPU.

func RunPlusPhaseNeuronGPU

func RunPlusPhaseNeuronGPU(n int)

RunPlusPhaseNeuronGPU runs the PlusPhaseNeuron kernel on the GPU. See RunPlusPhaseNeuron for more info.

func RunPlusPhasePool

func RunPlusPhasePool(n int)

RunPlusPhasePool runs the PlusPhasePool kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOnePlusPhasePool call does Run and Done for a single run-and-sync case.

func RunPlusPhasePoolCPU

func RunPlusPhasePoolCPU(n int)

RunPlusPhasePoolCPU runs the PlusPhasePool kernel on the CPU.

func RunPlusPhasePoolGPU

func RunPlusPhasePoolGPU(n int)

RunPlusPhasePoolGPU runs the PlusPhasePool kernel on the GPU. See RunPlusPhasePool for more info.

func RunPlusPhasePost

func RunPlusPhasePost(n int)

RunPlusPhasePost runs the PlusPhasePost kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOnePlusPhasePost call does Run and Done for a single run-and-sync case.

func RunPlusPhasePostCPU

func RunPlusPhasePostCPU(n int)

RunPlusPhasePostCPU runs the PlusPhasePost kernel on the CPU.

func RunPlusPhasePostGPU

func RunPlusPhasePostGPU(n int)

RunPlusPhasePostGPU runs the PlusPhasePost kernel on the GPU. See RunPlusPhasePost for more info.

func RunPlusPhaseStartContext

func RunPlusPhaseStartContext(n int)

RunPlusPhaseStartContext runs the PlusPhaseStartContext kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOnePlusPhaseStartContext call does Run and Done for a single run-and-sync case.

func RunPlusPhaseStartContextCPU

func RunPlusPhaseStartContextCPU(n int)

RunPlusPhaseStartContextCPU runs the PlusPhaseStartContext kernel on the CPU.

func RunPlusPhaseStartContextGPU

func RunPlusPhaseStartContextGPU(n int)

RunPlusPhaseStartContextGPU runs the PlusPhaseStartContext kernel on the GPU. See RunPlusPhaseStartContext for more info.

func RunPlusPhaseStartNeuron

func RunPlusPhaseStartNeuron(n int)

RunPlusPhaseStartNeuron runs the PlusPhaseStartNeuron kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOnePlusPhaseStartNeuron call does Run and Done for a single run-and-sync case.

func RunPlusPhaseStartNeuronCPU

func RunPlusPhaseStartNeuronCPU(n int)

RunPlusPhaseStartNeuronCPU runs the PlusPhaseStartNeuron kernel on the CPU.

func RunPlusPhaseStartNeuronGPU

func RunPlusPhaseStartNeuronGPU(n int)

RunPlusPhaseStartNeuronGPU runs the PlusPhaseStartNeuron kernel on the GPU. See RunPlusPhaseStartNeuron for more info.

func RunPoolGi

func RunPoolGi(n int)

RunPoolGi runs the PoolGi kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOnePoolGi call does Run and Done for a single run-and-sync case.

func RunPoolGiCPU

func RunPoolGiCPU(n int)

RunPoolGiCPU runs the PoolGi kernel on the CPU.

func RunPoolGiGPU

func RunPoolGiGPU(n int)

RunPoolGiGPU runs the PoolGi kernel on the GPU. See RunPoolGi for more info.

func RunSendSpike

func RunSendSpike(n int)

RunSendSpike runs the SendSpike kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneSendSpike call does Run and Done for a single run-and-sync case.

func RunSendSpikeCPU

func RunSendSpikeCPU(n int)

RunSendSpikeCPU runs the SendSpike kernel on the CPU.

func RunSendSpikeGPU

func RunSendSpikeGPU(n int)

RunSendSpikeGPU runs the SendSpike kernel on the GPU. See RunSendSpike for more info.

func RunSlowAdaptLayer

func RunSlowAdaptLayer(n int)

RunSlowAdaptLayer runs the SlowAdaptLayer kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneSlowAdaptLayer call does Run and Done for a single run-and-sync case.

func RunSlowAdaptLayerCPU

func RunSlowAdaptLayerCPU(n int)

RunSlowAdaptLayerCPU runs the SlowAdaptLayer kernel on the CPU.

func RunSlowAdaptLayerGPU

func RunSlowAdaptLayerGPU(n int)

RunSlowAdaptLayerGPU runs the SlowAdaptLayer kernel on the GPU. See RunSlowAdaptLayer for more info.

func RunSlowAdaptNeuron

func RunSlowAdaptNeuron(n int)

RunSlowAdaptNeuron runs the SlowAdaptNeuron kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneSlowAdaptNeuron call does Run and Done for a single run-and-sync case.

func RunSlowAdaptNeuronCPU

func RunSlowAdaptNeuronCPU(n int)

RunSlowAdaptNeuronCPU runs the SlowAdaptNeuron kernel on the CPU.

func RunSlowAdaptNeuronGPU

func RunSlowAdaptNeuronGPU(n int)

RunSlowAdaptNeuronGPU runs the SlowAdaptNeuron kernel on the GPU. See RunSlowAdaptNeuron for more info.

func RunWtFromDWtLayer

func RunWtFromDWtLayer(n int)

RunWtFromDWtLayer runs the WtFromDWtLayer kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneWtFromDWtLayer call does Run and Done for a single run-and-sync case.

func RunWtFromDWtLayerCPU

func RunWtFromDWtLayerCPU(n int)

RunWtFromDWtLayerCPU runs the WtFromDWtLayer kernel on the CPU.

func RunWtFromDWtLayerGPU

func RunWtFromDWtLayerGPU(n int)

RunWtFromDWtLayerGPU runs the WtFromDWtLayer kernel on the GPU. See RunWtFromDWtLayer for more info.

func RunWtFromDWtSyn

func RunWtFromDWtSyn(n int)

RunWtFromDWtSyn runs the WtFromDWtSyn kernel with given number of elements, on either the CPU or GPU depending on the UseGPU variable. Can call multiple Run* kernels in a row, which are then all launched in the same command submission on the GPU, which is by far the most efficient. MUST call RunDone (with optional vars to sync) after all Run calls. Alternatively, a single-shot RunOneWtFromDWtSyn call does Run and Done for a single run-and-sync case.

func RunWtFromDWtSynCPU

func RunWtFromDWtSynCPU(n int)

RunWtFromDWtSynCPU runs the WtFromDWtSyn kernel on the CPU.

func RunWtFromDWtSynGPU

func RunWtFromDWtSynGPU(n int)

RunWtFromDWtSynGPU runs the WtFromDWtSyn kernel on the GPU. See RunWtFromDWtSyn for more info.

func SaveWeights

func SaveWeights(net *Network, ctrString, runName string) string

SaveWeights saves network weights to a filename constructed using WeightsFilename information to identify the weights. Only rank 0 saves if running under MPI. Returns the name of the file saved to, or empty if not saved.

func SaveWeightsIfConfigSet

func SaveWeightsIfConfigSet(net *Network, cfgWts bool, ctrString, runName string) string

SaveWeightsIfConfigSet saves network weights if the given config bool value has been set to true, using WeightsFilename information to identify the weights. Only rank 0 saves if running under MPI. Returns the name of the file saved to, or empty if not saved.
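A minimal usage sketch (the counter string and run name here are hypothetical placeholders):

	// Save weights on rank 0 only; fname is "" if nothing was saved.
	fname := SaveWeights(net, "run0_epoch100", "base_params")
	if fname == "" {
		// nothing saved (e.g., non-zero MPI rank)
	}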

func SendSpike

func SendSpike(i uint32)

SendSpike is the kernel over Neurons * Data to send spike signal for neurons over threshold.

func SetNeuronExtPosNeg

func SetNeuronExtPosNeg(ctx *Context, ni, di uint32, val float32)

SetNeuronExtPosNeg sets the neuron Ext value based on the neuron index, with positive values going into the first unit and negative values rectified to positive in the second unit.

func SigFun

func SigFun(w, gain, off float32) float32

SigFun is the sigmoid function for value w in 0-1 range, with gain and offset params

func SigFun61

func SigFun61(w float32) float32

SigFun61 is the sigmoid function for value w in 0-1 range, with default gain = 6, offset = 1 params

func SigInvFun

func SigInvFun(w, gain, off float32) float32

SigInvFun is the inverse of the sigmoid function

func SigInvFun61

func SigInvFun61(w float32) float32

SigInvFun61 is the inverse of the sigmoid function, with default gain = 6, offset = 1 params
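Since SigInvFun61 inverts SigFun61, a weight value can be mapped to the sigmoidal form and back; a minimal sketch:

	lwt := float32(0.4)      // weight in the 0-1 range
	swt := SigFun61(lwt)     // sigmoid with default gain = 6, offset = 1
	back := SigInvFun61(swt) // recovers approximately the original 0.4
	_ = back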

func SigmoidFun

func SigmoidFun(cnSum, guSum float32) float32

SigmoidFun is the sigmoid function for computing give up probabilities

func SlowAdaptLayer

func SlowAdaptLayer(li uint32)

SlowAdaptLayer is the kernel over Layers (not * Data) to run slow adaptation functions. Calls AvgDifFromTrgAvg for Synaptic Scaling.

func SlowAdaptNeuron

func SlowAdaptNeuron(ni uint32)

SlowAdaptNeuron is the kernel over receiving Neurons to compute slow adaptation in receiving pathways.

func StatExcludeLevel

func StatExcludeLevel(level enums.Enum, exclude ...enums.Enum) bool

StatExcludeLevel returns true if given level is among the list of levels to exclude.

func StatLayerActGe

func StatLayerActGe(statsDir *tensorfs.Node, net *Network, trainMode, trialLevel enums.Enum, layerNames ...string) func(mode, level enums.Enum, start bool)

StatLayerActGe returns a Stats function that computes layer activity and Ge (excitatory conductance; net input) stats, which are important targets of parameter tuning to ensure everything is in an appropriate dynamic range. It only runs for the given trainMode at the given trialLevel and above, with higher levels computing the Mean of lower levels.
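A minimal usage sketch, assuming hypothetical sim-defined Train and Trial enum values, hypothetical layer names, and an existing tensorfs stats directory; the returned closure is presumably called with start = true at the start of a level (initialization) and false at the end (recording):

	actGe := StatLayerActGe(statsDir, net, Train, Trial, "Hidden1", "Hidden2")
	actGe(Train, Trial, true)  // initialize at the start of the level
	// ... run the trial ...
	actGe(Train, Trial, false) // record the stats at the end of the level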

func StatLayerGiMult

func StatLayerGiMult(statsDir *tensorfs.Node, net *Network, trainMode, epochLevel enums.Enum, layerNames ...string) func(mode, level enums.Enum, start bool)

StatLayerGiMult returns a Stats function that records LayerGiMult stats, for given layer names. This should be computed at the epoch level or above (not the trial level, because this value is not per-ndata and will not sync with other trial level stats).

func StatLayerState

func StatLayerState(statsDir *tensorfs.Node, net *Network, smode, slevel enums.Enum, isTrialLevel bool, variable string, layerNames ...string) func(mode, level enums.Enum, start bool)

StatLayerState returns a Stats function that records layer state. It runs for the given mode and level, recording the given variable for the given layer names. If isTrialLevel is true, the level is a trial level that needs iterating over NData.

func StatLevelAll

func StatLevelAll(statsDir *tensorfs.Node, srcMode, srcLevel enums.Enum, styleFunc func(s *plot.Style, col tensor.Values)) func(mode, level enums.Enum, start bool)

StatLevelAll returns a Stats function that copies stats from given mode and level, without resetting at the start, to accumulate all rows over time until reset manually. The styleFunc, if non-nil, does plot styling based on the current column.

func StatLoopCounters

func StatLoopCounters(statsDir, currentDir *tensorfs.Node, ls *looper.Stacks, net *Network, trialLevel enums.Enum, exclude ...enums.Enum) func(mode, level enums.Enum, start bool)

StatLoopCounters adds the counters from each stack and loop level for the given looper Stacks to the given tensorfs stats. This is typically the first Stat to add, so these counters will be used for X axis values. The stat is run with start = true before returning, so that the stats are initialized before anything else. The first mode's counters (typically Train) are automatically added to all subsequent modes so they automatically track training levels.

  • currentDir is a tensorfs directory to store the current values of each counter.
  • trialLevel is the Trial level enum, which automatically handles the iteration over ndata parallel trials.
  • exclude is a list of loop levels to exclude (e.g., Cycle).

func StatPCA

func StatPCA(statsDir, currentDir *tensorfs.Node, net *Network, interval int, trainMode, trialLevel enums.Enum, layerNames ...string) func(mode, level enums.Enum, start bool, epc int)

StatPCA returns a Stats function that computes PCA NStrong, Top5, Next5, and Rest stats, which are important for tracking hogging dynamics where the representational space is not efficiently distributed. Uses Sample units for layers, and SVD computation is reasonably efficient. It only runs for given trainMode, from given Trial level upward, with higher levels computing the Mean of lower levels. Trial level just records ActM values for layers in a separate PCA subdir, which are input to next level computation where PCA is computed.

func StatPerTrialMSec

func StatPerTrialMSec(statsDir *tensorfs.Node, trainMode enums.Enum, trialLevel enums.Enum) func(mode, level enums.Enum, start bool)

StatPerTrialMSec returns a Stats function that reports the number of milliseconds per trial, for the given trial level and training mode enum values. Stats are recorded at levels above the given trial level.

func StatPrevCorSim

func StatPrevCorSim(statsDir, currentDir *tensorfs.Node, net *Network, trialLevel enums.Enum, layerNames ...string) func(mode, level enums.Enum, start bool)

StatPrevCorSim returns a Stats function that computes correlations between the previous trial's activity state and the current minus phase and plus phase states. This is important for predictive learning.

func StatRunName

func StatRunName(statsDir, currentDir *tensorfs.Node, ls *looper.Stacks, net *Network, trialLevel enums.Enum, exclude ...enums.Enum) func(mode, level enums.Enum, start bool)

StatRunName adds a "RunName" stat to every mode and level of looper, subject to exclusion list, which records the current value of the "RunName" string in ss.Current, which identifies the parameters and tag for this run.

func StatTrialName

func StatTrialName(statsDir, currentDir *tensorfs.Node, ls *looper.Stacks, net *Network, trialLevel enums.Enum) func(mode, level enums.Enum, start bool)

StatTrialName adds a "TrialName" stat to the given Trial level in every mode of looper, which records the current value of the "TrialName" string in ss.Current, which contains a string description of the current trial.

func StatsLayerValues

func StatsLayerValues(net *Network, curDir *tensorfs.Node, mode enums.Enum, di int, layName, varName string) *tensor.Float32

func StatsNode

func StatsNode(statsDir *tensorfs.Node, mode, level enums.Enum) *tensorfs.Node

StatsNode returns tensorfs Dir Node for given mode, level.

func SynapseVarByName

func SynapseVarByName(varNm string) (int, error)

SynapseVarByName returns the index of the variable in the Synapse, or error

func SyncFromGPU

func SyncFromGPU(vars ...GPUVars)

SyncFromGPU synchronizes vars from the GPU to the actual variable.

func ToGPU

func ToGPU(vars ...GPUVars)

ToGPU copies given variables to the GPU for the system.

func ToGPUAll

func ToGPUAll()

ToGPUAll copies all state up to the GPU. Only for InitWeights.

func ToGPUCtxGlobal

func ToGPUCtxGlobal()

ToGPUCtxGlobal copies Context and Global vars to the GPU.

func ToGPUExts

func ToGPUExts()

ToGPUExts copies Exts and Context and Global vars to the GPU. This is done after ApplyInputs typically, and sets the network going at the start of a new trial.

func ToGPUIndexes

func ToGPUIndexes()

ToGPUIndexes copies indexes to the GPU.

func ToGPULayers

func ToGPULayers()

ToGPULayers copies all the layer-level state to the GPU, including context and globals.

func ToGPULayersNeurons

func ToGPULayersNeurons()

ToGPULayersNeurons copies all the layer-level and neuron state to the GPU.

func ToGPULayersSynapses

func ToGPULayersSynapses()

ToGPULayersSynapses copies the Layers and Synapse state to the GPU.

func ToGPUNeurons

func ToGPUNeurons()

ToGPUNeurons copies Neurons, NeuronAvgs to the GPU.

func ToGPUParams

func ToGPUParams()

ToGPUParams copies LayerParams and PathParams to the GPU.

func ToGPUSynapses

func ToGPUSynapses()

ToGPUSynapses copies the Synapse state to the GPU.

func ToGPUTensorStrides

func ToGPUTensorStrides()

ToGPUTensorStrides gets tensor strides and starts copying to the GPU.

func ToggleLayersOff

func ToggleLayersOff(net *Network, layerNames []string, off bool)

ToggleLayersOff can be used to disable layers in a Network, for example if you are doing an ablation study.
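For example, a minimal ablation sketch (the layer name here is hypothetical):

	ToggleLayersOff(net, []string{"Hidden2"}, true)  // lesion the layer
	// ... run test trials without Hidden2 ...
	ToggleLayersOff(net, []string{"Hidden2"}, false) // restore it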

func WalkFields

func WalkFields(parent reflect.Value, should func(parent reflect.Value, field reflect.StructField, value reflect.Value) bool, walk func(parent reflect.Value, field reflect.StructField, value reflect.Value))

func WeightsFilename

func WeightsFilename(net *Network, ctrString, runName string) string

WeightsFilename returns the default current weights file name, using the train run and epoch counters from looper and the RunName string identifying the tag, parameters, and starting run.

func WtFromDWtLayer

func WtFromDWtLayer(li uint32)

WtFromDWtLayer is the kernel over Layers for layer-level Wt update. Does TrgAvg updating.

func WtFromDWtSyn

func WtFromDWtSyn(syni uint32)

WtFromDWtSyn is the kernel over Synapses (not * Data) to compute Wt from DWt weight changes.

Types

type ActAvgParams

type ActAvgParams struct {

	// Nominal is the estimated average activity level in the layer, which is
	// used in computing the scaling factor on sending pathways from this layer.
	// In general it should roughly match the layer ActAvg.ActMAvg value, which
	// can be logged using the axon.LogAddDiagnosticItems function.
	// If layers receiving from this layer are not getting enough Ge excitation,
	// then this Nominal level can be lowered to increase pathway strength
	// (fewer active neurons means each one contributes more, so the scaling
	// factor goes as the inverse of activity level), or vice-versa if Ge is too high.
	// It is also the basis for the target activity level used for the AdaptGi
	// option: see the Offset which is added to this value.
	Nominal float32 `min:"0" step:"0.01"`

	// RTThr is the reaction time (RT) threshold activity level in the layer,
	// in terms of the maximum CaP level of any neuron in the layer. The
	// LayerStates LayerRT value is recorded for the cycle at which this
	// level is exceeded within a theta cycle, after Acts.Dt.MaxCycStart cycles.
	RTThr float32 `default:"0.5"`

	// AdaptGi enables adapting of layer inhibition Gi multiplier factor
	// (stored in layer GiMult value) to maintain a target layer level of
	// ActAvg.Nominal. This generally works well and improves the long-term
	// stability of the models. It is not enabled by default because it depends
	// on having established a reasonable Nominal + Offset target activity level.
	AdaptGi slbool.Bool

	// Offset is added to Nominal for the target average activity that drives
	// adaptation of Gi for this layer.  Typically the Nominal level is good,
	// but sometimes Nominal must be adjusted up or down to achieve desired Ge
	// scaling, so this Offset can compensate accordingly.
	Offset float32 `default:"0" min:"0" step:"0.01"`

	// HiTol is the tolerance for average activation higher than the target,
	// as a proportion of that target value (0 = exactly the target, 0.2 = 20%
	// higher than target). Only once activations move outside this tolerance
	// are inhibitory values adapted.
	HiTol float32 `default:"0"`

	// LoTol is the tolerance for average activation lower than the target,
	// as a proportion of that target value (0 = exactly the target, 0.5 = 50%
	// lower than target). Only once activations move outside this tolerance
	// are inhibitory values adapted.
	LoTol float32 `default:"0.8"`

	// AdaptRate is the rate of Gi adaptation as function of
	// AdaptRate * (Target - ActMAvg) / Target. This occurs at spaced intervals
	// determined by Network.SlowInterval value. Slower values such as 0.05 may
	// be needed for large networks and sparse layers.
	AdaptRate float32 `default:"0.1"`

	// AdaptMax is the maximum adaptation step magnitude to take at any point.
	AdaptMax float32 `default:"0.01"`
}

ActAvgParams represents the nominal average activity levels in the layer and parameters for adapting the computed Gi inhibition levels to maintain average activity within a target range.
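Based on the AdaptRate and AdaptMax field descriptions above, the adaptation step is roughly the following (a hypothetical sketch only; the actual Adapt method also applies the HiTol / LoTol tolerance checks before changing anything):

	// adaptStepSketch computes an adaptation step magnitude from the target
	// (Nominal + Offset) and the actual running-average activity, clipped to
	// AdaptMax. The step is applied to the layer GiMult in the direction that
	// moves activity back toward the target.
	func adaptStepSketch(aa *ActAvgParams, actMAvg float32) float32 {
		target := aa.Nominal + aa.Offset
		del := aa.AdaptRate * (target - actMAvg) / target
		if del > aa.AdaptMax {
			del = aa.AdaptMax
		} else if del < -aa.AdaptMax {
			del = -aa.AdaptMax
		}
		return del
	}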

func (*ActAvgParams) Adapt

func (aa *ActAvgParams) Adapt(gimult *float32, act float32) bool

Adapt adapts the given gi multiplier factor as function of target and actual average activation, given current params.

func (*ActAvgParams) AvgFromAct

func (aa *ActAvgParams) AvgFromAct(avg *float32, act float32, dt float32)

AvgFromAct updates the running-average activation given average activity level in layer

func (*ActAvgParams) Defaults

func (aa *ActAvgParams) Defaults()

func (*ActAvgParams) ShouldDisplay

func (aa *ActAvgParams) ShouldDisplay(field string) bool

func (*ActAvgParams) Update

func (aa *ActAvgParams) Update()

type ActInitParams

type ActInitParams struct {

	// initial membrane potential -- see Erev.L for the resting potential (typically .3)
	Vm float32 `default:"0.3"`

	// initial activation value -- typically 0
	Act float32 `default:"0"`

	// baseline level of excitatory conductance (net input) -- Ge is initialized to this value, and it is added in as a constant background level of excitatory input -- captures all the other inputs not represented in the model, and intrinsic excitability, etc
	GeBase float32 `default:"0"`

	// baseline level of inhibitory conductance (net input) -- Gi is initialized to this value, and it is added in as a constant background level of inhibitory input -- captures all the other inputs not represented in the model
	GiBase float32 `default:"0"`

	// variance (sigma) of gaussian distribution around baseline Ge values, per unit, to establish variability in intrinsic excitability.  value never goes < 0
	GeVar float32 `default:"0"`

	// variance (sigma) of gaussian distribution around baseline Gi values, per unit, to establish variability in intrinsic excitability.  value never goes < 0
	GiVar float32 `default:"0"`
	// contains filtered or unexported fields
}

ActInitParams are initial values for key network state variables. Initialized in InitActs called by InitWeights, and provides target values for DecayState.

func (*ActInitParams) Defaults

func (ai *ActInitParams) Defaults()

func (*ActInitParams) GetGeBase

func (ai *ActInitParams) GetGeBase(rnd randx.Rand) float32

GetGeBase returns the baseline Ge value: GeBase + rand(GeVar), never going below 0.

func (*ActInitParams) GetGiBase

func (ai *ActInitParams) GetGiBase(rnd randx.Rand) float32

GetGiBase returns the baseline Gi value: GiBase + rand(GiVar), never going below 0.

func (*ActInitParams) Update

func (ai *ActInitParams) Update()

type ActParams

type ActParams struct {

	// Spiking function parameters
	Spikes SpikeParams `display:"inline"`

	// dendrite-specific parameters
	Dend DendParams `display:"inline"`

	// initial values for key network state variables -- initialized in InitActs called by InitWeights, and provides target values for DecayState
	Init ActInitParams `display:"inline"`

	// amount to decay between AlphaCycles, simulating passage of time and effects of saccades etc, especially important for environments with random temporal structure (e.g., most standard neural net training corpora)
	Decay DecayParams `display:"inline"`

	// time and rate constants for temporal derivatives / updating of activation state
	Dt DtParams `display:"inline"`

	// maximal conductances levels for channels
	Gbar chans.Chans `display:"inline"`

	// reversal potentials for each channel
	Erev chans.Chans `display:"inline"`

	// how external inputs drive neural activations
	Clamp ClampParams `display:"inline"`

	// how, where, when, and how much noise to add
	Noise SpikeNoiseParams `display:"inline"`

	// range for Vm membrane potential -- important to keep just at extreme range of reversal potentials to prevent numerical instability
	VmRange minmax.F32 `display:"inline"`

	// M-type medium time-scale afterhyperpolarization mAHP current -- this is the primary form of adaptation on the time scale of multiple sequences of spikes
	Mahp chans.MahpParams `display:"inline"`

	// slow time-scale afterhyperpolarization sAHP current -- integrates CaD at theta cycle intervals and produces a hard cutoff on sustained activity for any neuron
	Sahp chans.SahpParams `display:"inline"`

	// sodium-gated potassium channel adaptation parameters -- activates a leak-like current as a function of neural activity (firing = Na influx) at two different time-scales (Slick = medium, Slack = slow)
	KNa chans.KNaMedSlow `display:"inline"`

	// potassium (K) inwardly rectifying (ir) current, which is similar to GABAB
	// (which is a GABA modulated Kir channel).  This channel is off by default
	// but plays a critical role in making medium spiny neurons (MSNs) relatively
	// quiet in the striatum.
	Kir chans.KirParams `display:"inline"`

	// NMDA channel parameters used in computing Gnmda conductance for bistability, and postsynaptic calcium flux used in learning.  Note that Learn.Snmda has distinct parameters used in computing sending NMDA parameters used in learning.
	NMDA chans.NMDAParams `display:"inline"`

	// MaintNMDA are NMDA channel parameters for the separate maintenance NMDA conductance, which is updated from the GModRaw modulatory input via MaintNMDAFromRaw, supporting active maintenance dynamics.
	MaintNMDA chans.NMDAParams `display:"inline"`

	// GABA-B / GIRK channel parameters
	GabaB chans.GABABParams `display:"inline"`

	// voltage gated calcium channels -- provide a key additional source of Ca for learning and positive-feedback loop upstate for active neurons
	VGCC chans.VGCCParams `display:"inline"`

	// A-type potassium (K) channel that is particularly important for limiting the runaway excitation from VGCC channels
	AK chans.AKsParams `display:"inline"`

	// small-conductance calcium-activated potassium channel produces the pausing function as a consequence of rapid bursting.
	SKCa chans.SKCaParams `display:"inline"`

	// for self-maintenance simulating a population of
	// NMDA-interconnected spiking neurons
	SMaint SMaintParams `display:"inline"`

	// provides encoding population codes, used to represent a single continuous (scalar) value, across a population of units / neurons (1 dimensional)
	PopCode PopCodeParams `display:"inline"`
}

axon.ActParams contains all the activation computation params and functions for basic Axon, at the neuron level. This is included in axon.Layer to drive the computation.

func (*ActParams) AddGeNoise

func (ac *ActParams) AddGeNoise(ctx *Context, ni, di uint32)

AddGeNoise updates nrn.GeNoise if active

func (*ActParams) AddGiNoise

func (ac *ActParams) AddGiNoise(ctx *Context, ni, di uint32)

AddGiNoise updates nrn.GiNoise if active

func (*ActParams) DecayAHP

func (ac *ActParams) DecayAHP(ctx *Context, ni, di uint32, decay float32)

DecayAHP decays after-hyperpolarization variables by given factor (typically Decay.AHP)

func (*ActParams) DecayLearnCa

func (ac *ActParams) DecayLearnCa(ctx *Context, ni, di uint32, decay float32)

DecayLearnCa decays neuron-level calcium learning and spiking variables by the given factor. Note: this is generally NOT useful, causing variability in these learning factors as a function of the decay parameter, which then has impacts on learning rates etc. See the Act.Decay.LearnCa param controlling this.

func (*ActParams) DecayState

func (ac *ActParams) DecayState(ctx *Context, ni, di uint32, decay, glong, ahp float32)

DecayState decays the activation state toward initial values in proportion to given decay parameter. Special case values such as Glong and KNa are also decayed with their separately parameterized values. Called with ac.Decay.Act by Layer during NewState

func (*ActParams) Defaults

func (ac *ActParams) Defaults()

func (*ActParams) GSkCaFromCa

func (ac *ActParams) GSkCaFromCa(ctx *Context, ni, di uint32)

GSkCaFromCa updates the SKCa channel if used

func (*ActParams) GeFromSyn

func (ac *ActParams) GeFromSyn(ctx *Context, ni, di uint32, geSyn, geExt float32)

GeFromSyn integrates Ge excitatory conductance from GeSyn. geExt is extra conductance to add to the final Ge value

func (*ActParams) GiFromSyn

func (ac *ActParams) GiFromSyn(ctx *Context, ni, di uint32, giSyn float32) float32

GiFromSyn integrates GiSyn inhibitory synaptic conductance from the GiRaw value (other terms can be added to giRaw prior to calling this).

func (*ActParams) GkFromVm

func (ac *ActParams) GkFromVm(ctx *Context, ni, di uint32)

GkFromVm updates all the Gk-based conductances: Mahp, KNa, Gak

func (*ActParams) GvgccFromVm

func (ac *ActParams) GvgccFromVm(ctx *Context, ni, di uint32)

GvgccFromVm updates all the VGCC voltage-gated calcium channel variables from VmDend

func (*ActParams) InetFromG

func (ac *ActParams) InetFromG(vm, ge, gl, gi, gk float32) float32

InetFromG computes net current from conductances and Vm

func (*ActParams) InitActs

func (ac *ActParams) InitActs(ctx *Context, ni, di uint32)

InitActs initializes activation state in neuron -- called during InitWeights but otherwise not automatically called (DecayState is used instead)

func (*ActParams) InitLongActs

func (ac *ActParams) InitLongActs(ctx *Context, ni, di uint32)

InitLongActs initializes longer time-scale activation states in neuron (CaDPrev, Beta1, Beta2, ActM, ActP) Called from InitActs, which is called from InitWeights, but otherwise not automatically called (DecayState is used instead)

func (*ActParams) KNaNewState

func (ac *ActParams) KNaNewState(ctx *Context, ni, di uint32)

KNaNewState does TrialSlow version of KNa during NewState if option is set

func (*ActParams) MaintNMDAFromRaw

func (ac *ActParams) MaintNMDAFromRaw(ctx *Context, ni, di uint32)

MaintNMDAFromRaw updates all the Maint NMDA variables from GModRaw and current Vm, Spiking

func (*ActParams) NMDAFromRaw

func (ac *ActParams) NMDAFromRaw(ctx *Context, ni, di uint32, geTot float32)

NMDAFromRaw updates all the NMDA variables from total Ge (GeRaw + Ext) and current Vm, Spiking

func (*ActParams) SMaintFromISI

func (ac *ActParams) SMaintFromISI(ctx *Context, ni, di uint32)

SMaintFromISI updates the SMaint self-maintenance current into GMaintRaw

func (*ActParams) SpikeFromVm

func (ac *ActParams) SpikeFromVm(ctx *Context, ni, di uint32)

SpikeFromVm computes Spike from Vm and ISI-based activation

func (*ActParams) SpikeFromVmVars

func (ac *ActParams) SpikeFromVmVars(nrnISI, nrnISIAvg, nrnSpike, nrnSpiked, nrnAct *float32, nrnVm float32)

SpikeFromVmVars computes Spike from Vm and ISI-based activation, using pointers to variables

func (*ActParams) Update

func (ac *ActParams) Update()

Update must be called after any changes to parameters

func (*ActParams) VmFromG

func (ac *ActParams) VmFromG(ctx *Context, ni, di uint32)

VmFromG computes membrane potential Vm from conductances Ge, Gi, and Gk.

func (*ActParams) VmFromInet

func (ac *ActParams) VmFromInet(vm, dt, inet float32) float32

VmFromInet computes new Vm value from inet, clamping range

func (*ActParams) VmInteg

func (ac *ActParams) VmInteg(vm, dt, ge, gl, gi, gk float32, nvm, inet *float32)

VmInteg integrates Vm over VmSteps to obtain a more stable value. Returns the new Vm and inet values.

type AvgMax

type AvgMax int32 //enums:enum

AvgMax are Avg and Max

const (
	Avg AvgMax = iota
	Max
)
const AvgMaxN AvgMax = 2

AvgMaxN is the highest valid value for type AvgMax, plus one.

func AvgMaxValues

func AvgMaxValues() []AvgMax

AvgMaxValues returns all possible values for the type AvgMax.

func (AvgMax) Desc

func (i AvgMax) Desc() string

Desc returns the description of the AvgMax value.

func (AvgMax) Int64

func (i AvgMax) Int64() int64

Int64 returns the AvgMax value as an int64.

func (AvgMax) MarshalText

func (i AvgMax) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*AvgMax) SetInt64

func (i *AvgMax) SetInt64(in int64)

SetInt64 sets the AvgMax value from an int64.

func (*AvgMax) SetString

func (i *AvgMax) SetString(s string) error

SetString sets the AvgMax value from its string representation, and returns an error if the string is invalid.

func (AvgMax) String

func (i AvgMax) String() string

String returns the string representation of this AvgMax value.

func (*AvgMax) UnmarshalText

func (i *AvgMax) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (AvgMax) Values

func (i AvgMax) Values() []enums.Enum

Values returns all possible values for the type AvgMax.

type AvgMaxPhases

type AvgMaxPhases int32 //enums:enum -trim-prefix AM

AvgMaxPhases are the different Phases over which AvgMax values are tracked.

const (
	// Cycle is the current cycle, which is the source for the rest.
	AMCycle AvgMaxPhases = iota

	// Minus is at the end of the minus phase.
	AMMinus

	// Plus is at the end of the plus phase.
	AMPlus

	// Prev is at the end of the previous plus phase.
	AMPrev
)
const AvgMaxPhasesN AvgMaxPhases = 4

AvgMaxPhasesN is the highest valid value for type AvgMaxPhases, plus one.

func AvgMaxPhasesValues

func AvgMaxPhasesValues() []AvgMaxPhases

AvgMaxPhasesValues returns all possible values for the type AvgMaxPhases.

func (AvgMaxPhases) Desc

func (i AvgMaxPhases) Desc() string

Desc returns the description of the AvgMaxPhases value.

func (AvgMaxPhases) Int64

func (i AvgMaxPhases) Int64() int64

Int64 returns the AvgMaxPhases value as an int64.

func (AvgMaxPhases) MarshalText

func (i AvgMaxPhases) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*AvgMaxPhases) SetInt64

func (i *AvgMaxPhases) SetInt64(in int64)

SetInt64 sets the AvgMaxPhases value from an int64.

func (*AvgMaxPhases) SetString

func (i *AvgMaxPhases) SetString(s string) error

SetString sets the AvgMaxPhases value from its string representation, and returns an error if the string is invalid.

func (AvgMaxPhases) String

func (i AvgMaxPhases) String() string

String returns the string representation of this AvgMaxPhases value.

func (*AvgMaxPhases) UnmarshalText

func (i *AvgMaxPhases) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (AvgMaxPhases) Values

func (i AvgMaxPhases) Values() []enums.Enum

Values returns all possible values for the type AvgMaxPhases.

type AvgMaxVars

type AvgMaxVars int32 //enums:enum -trim-prefix AM

AvgMaxVars are the different Neuron variables for which AvgMaxPhases is computed.

const (
	// CaP is the primary variable for tracking overall pool activity
	// over a recent timescale, integrated at roughly 40 msec time constant.
	AMCaP AvgMaxVars = iota

	// CaD is a slower moving activation signal, capable of reflecting
	// activity over the entire trial.
	AMCaD

	// CaPMax is the maximum CaP over the trial of processing.
	AMCaPMax

	// Act is the computed rate-code equivalent of current spike rate.
	AMAct

	// GeInt is the integrated running-average value of excitatory conductance.
	AMGeInt

	// GiInt is the integrated running-average value of inhibitory conductance.
	AMGiInt

	// AvgDif is the integrated AvgDif between ActPct - TrgAvg.
	// Only the Plus phase is used.
	AMAvgDif
)
const AvgMaxVarsN AvgMaxVars = 7

AvgMaxVarsN is the highest valid value for type AvgMaxVars, plus one.

func AvgMaxVarsValues

func AvgMaxVarsValues() []AvgMaxVars

AvgMaxVarsValues returns all possible values for the type AvgMaxVars.

func (AvgMaxVars) Desc

func (i AvgMaxVars) Desc() string

Desc returns the description of the AvgMaxVars value.

func (AvgMaxVars) Int64

func (i AvgMaxVars) Int64() int64

Int64 returns the AvgMaxVars value as an int64.

func (AvgMaxVars) MarshalText

func (i AvgMaxVars) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*AvgMaxVars) SetInt64

func (i *AvgMaxVars) SetInt64(in int64)

SetInt64 sets the AvgMaxVars value from an int64.

func (*AvgMaxVars) SetString

func (i *AvgMaxVars) SetString(s string) error

SetString sets the AvgMaxVars value from its string representation, and returns an error if the string is invalid.

func (AvgMaxVars) String

func (i AvgMaxVars) String() string

String returns the string representation of this AvgMaxVars value.

func (*AvgMaxVars) UnmarshalText

func (i *AvgMaxVars) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (AvgMaxVars) Values

func (i AvgMaxVars) Values() []enums.Enum

Values returns all possible values for the type AvgMaxVars.

type BLANovelPath

type BLANovelPath struct {
}

BLANovelPath connects all other pools to the first, Novelty, pool in a BLA layer. This allows the known US representations to specifically inhibit the novelty pool.

func NewBLANovelPath

func NewBLANovelPath() *BLANovelPath

func (*BLANovelPath) Connect

func (ot *BLANovelPath) Connect(send, recv *tensor.Shape, same bool) (sendn, recvn *tensor.Int32, cons *tensor.Bool)

func (*BLANovelPath) Name

func (ot *BLANovelPath) Name() string

type BLAPathParams

type BLAPathParams struct {

	// use 0.01 for acquisition (don't unlearn) and 1 for extinction -- negative delta learning rate multiplier
	NegDeltaLRate float32 `default:"0.01,1"`

	// threshold on this layer's ACh level for trace learning updates
	AChThr float32 `default:"0.1"`

	// proportion of US-time stimulus activity to use for the trace component.
	USTrace float32 `default:"0,0.5"`
	// contains filtered or unexported fields
}

BLAPathParams has parameters for basolateral amygdala learning. Learning is driven by the Tr trace as function of ACh * Send Act recorded prior to US, and at US, recv unit delta: CaP - CaDPrev times normalized GeIntNorm for recv unit credit assignment. The Learn.DWt.Tau time constant determines trace updating over trials when ACh is above threshold -- this determines strength of second-order conditioning -- default of 1 means none, but can be increased as needed.

func (*BLAPathParams) Defaults

func (bp *BLAPathParams) Defaults()

func (*BLAPathParams) Update

func (bp *BLAPathParams) Update()

type BurstParams

type BurstParams struct {

	// Relative component of threshold on superficial activation value,
	// below which it does not drive Burst (and above which, Burst = CaP).
	// This is the distance between the average and maximum activation values
	// within layer (e.g., 0 = average, 1 = max).  Overall effective threshold
	// is MAX of relative and absolute thresholds.
	ThrRel float32 `max:"1" default:"0.1"`

	// Absolute component of threshold on superficial activation value,
	// below which it does not drive Burst (and above which, Burst = CaP).
	// Overall effective threshold is MAX of relative and absolute thresholds.
	ThrAbs float32 `min:"0" max:"1" default:"0.1"`
	// contains filtered or unexported fields
}

BurstParams determine how the 5IB Burst activation is computed from CaP integrated spiking values in Super layers -- thresholded.

func (*BurstParams) Defaults

func (bp *BurstParams) Defaults()

func (*BurstParams) ThrFromAvgMax

func (bp *BurstParams) ThrFromAvgMax(avg, mx float32) float32

ThrFromAvgMax returns threshold from average and maximum values
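One plausible reading of the ThrRel and ThrAbs descriptions above, as a hypothetical sketch (the actual ThrFromAvgMax implementation may differ in details):

	func thrSketch(bp *BurstParams, avg, mx float32) float32 {
		thr := avg + bp.ThrRel*(mx-avg) // relative threshold: between avg (0) and max (1)
		if bp.ThrAbs > thr {
			thr = bp.ThrAbs // effective threshold is the MAX of relative and absolute
		}
		return thr
	}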

func (*BurstParams) Update

func (bp *BurstParams) Update()

type CTParams

type CTParams struct {

	// gain factor for context excitatory input, which is constant as compared to the spiking input from other pathways, so it must be downscaled accordingly.  This can make a difference and may need to be scaled up or down.
	GeGain float32 `default:"0.05,0.1,1,2"`

	// decay time constant for context Ge input -- if > 0, decays over time so intrinsic circuit dynamics have to take over.  For single-step copy-based cases, set to 0, while longer-time-scale dynamics should use 50 (80 for 280 cycles)
	DecayTau float32 `default:"0,50,70"`

	// OFCposPT is set for the OFCposPT PTMaintLayer, which sets the
	// GvOFCposPTMaint global variable.
	OFCposPT slbool.Bool

	// 1 / tau
	DecayDt float32 `display:"-" json:"-" xml:"-"`
}

CTParams control the CT corticothalamic neuron special behavior

func (*CTParams) DecayForNCycles

func (cp *CTParams) DecayForNCycles(ncycles int)

func (*CTParams) Defaults

func (cp *CTParams) Defaults()

func (*CTParams) Update

func (cp *CTParams) Update()

type ClampParams

type ClampParams struct {

	// amount of Ge driven for clamping -- generally use 0.8 for Target layers, 1.5 for Input layers
	Ge float32 `default:"0.8,1.5"`

	// add external conductance on top of any existing -- generally this is not a good idea for target layers (creates a main effect that learning can never match), but may be ok for input layers
	Add slbool.Bool `default:"false"`

	// threshold on neuron Act activity to count as active for computing error relative to target in PctErr method
	ErrThr float32 `default:"0.5"`
	// contains filtered or unexported fields
}

ClampParams specify how external inputs drive excitatory conductances (like a current clamp) -- either adds or overwrites existing conductances. Noise is added in either case.

func (*ClampParams) Defaults

func (cp *ClampParams) Defaults()

func (*ClampParams) Update

func (cp *ClampParams) Update()

type Context

type Context struct {

	// number of data parallel items to process currently.
	NData uint32 `min:"1"`

	// current running mode, using sim-defined enum, e.g., Train, Test, etc.
	Mode int32

	// Testing is true if the model is being run in a testing mode,
	// so no weight changes or other associated computations should be done.
	// This flag should only affect learning-related behavior.
	Testing slbool.Bool `edit:"-"`

	// Phase counter: typically 0-1 for minus-plus.
	Phase int32

	// PlusPhase is true if this is the plus phase, when the outcome / bursting
	// is occurring, driving positive learning; else minus phase.
	PlusPhase slbool.Bool

	// Cycle within current phase, minus or plus.
	PhaseCycle int32

	// Cycle within Trial: number of iterations of activation updating (settling)
	// on the current state. This is reset at NewState.
	Cycle int32

	// ThetaCycles is the length of the theta cycle (i.e., Trial), in terms of 1 msec Cycles.
	// Some network update steps depend on doing something at the end of the
	// theta cycle (e.g., CTCtxtPath).
	ThetaCycles int32 `default:"200"`

	// PlusCycles is the number of cycles in the plus phase. Typically 50,
	// but may be set longer if ThetaCycles is above default of 200.
	PlusCycles int32 `default:"50"`

	// CaBinCycles is the number of cycles for neuron [CaBins] values used in
	// computing synaptic calcium values. Total number of bins = ThetaCycles / CaBinCycles.
	// This is fixed at 10.
	CaBinCycles int32 `default:"10"`

	// CyclesTotal is the accumulated cycle count, which increments continuously
	// from whenever it was last reset. Typically this is the number of milliseconds
	// in simulation time.
	CyclesTotal int32

	// Time is the accumulated amount of time the network has been running,
	// in simulation-time (not real world time), in seconds.
	Time float32

	// TrialsTotal is the total trial count, which increments continuously in NewState
	// _only in Train mode_ from whenever it was last reset. Can be used for synchronizing
	// weight updates across nodes.
	TrialsTotal int32

	// TimePerCycle is the amount of Time to increment per cycle.
	TimePerCycle float32 `default:"0.001"`

	// SlowInterval is how frequently in Trials to perform slow adaptive processes
	// such as synaptic scaling, associated in the brain with sleep,
	// via the SlowAdapt method.  This should be long enough for meaningful changes
	// to accumulate. 100 is default but could easily be longer in larger models.
	// Because SlowCounter is incremented by NData, high NData cases (e.g. 16) likely need to
	// increase this value, e.g., 400 seems to produce overall consistent results in various models.
	SlowInterval int32 `default:"100"`

	// SlowCounter increments for each training trial, to trigger SlowAdapt at SlowInterval.
	// This is incremented by NData to maintain consistency across different values of this parameter.
	SlowCounter int32 `edit:"-"`

	// AdaptGiInterval is how frequently in Trials to perform inhibition adaptation,
	// which needs to be even slower than the SlowInterval.
	AdaptGiInterval int32 `default:"1000"`

	// AdaptGiCounter increments for each training trial, to trigger AdaptGi at AdaptGiInterval.
	// This is incremented by NData to maintain consistency across different values of this parameter.
	AdaptGiCounter int32 `edit:"-"`

	// RandCounter is the random counter, incremented by maximum number of
	// possible random numbers generated per cycle, regardless of how
	// many are actually used. This is shared across all layers so must
	// encompass all possible param settings.
	RandCounter slrand.Counter
	// contains filtered or unexported fields
}

Context contains all of the global context state info that is shared across every step of the computation. It is passed around to all relevant computational functions, and is updated on the CPU and synced to the GPU after every cycle. It contains timing, Testing vs. Training mode, random number context, etc. There is one canonical instance on the network as Ctx; always get it from the network.Context() method.
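A minimal sketch of accessing and configuring the canonical Context (the settings shown are illustrative):

	ctx := net.Context()                // canonical instance on the Network
	ctx.SetNData(4).SetThetaCycles(200) // chained setters return *Context
	nbins := ctx.NCaBins()              // = ThetaCycles / CaBinCycles
	_ = nbins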

func GetCtx

func GetCtx(idx uint32) *Context

GetCtx returns a pointer to the given global variable: Ctx []Context at the given index. This is processed directly in the GPU code, so this function call is the CPU equivalent.

func NewContext

func NewContext() *Context

NewContext returns a new Context with default parameters.

func (*Context) CycleInc

func (ctx *Context) CycleInc()

CycleInc increments at the cycle level. This is the one time when Context is used on GPU in read-write mode, vs. read-only.

func (*Context) DataIndex

func (ctx *Context) DataIndex(idx uint32) uint32

DataIndex returns the data index from an overall index over NItems * NData.

func (*Context) Defaults

func (ctx *Context) Defaults()

Defaults sets default values

func (*Context) ItemIndex

func (ctx *Context) ItemIndex(idx uint32) uint32

ItemIndex returns the main item index from an overall index over NItems * NData. (items = layers, neurons, synapses)

func (*Context) NCaBins

func (ctx *Context) NCaBins() int32

NCaBins returns ThetaCycles / CaBinCycles

func (*Context) NewState

func (ctx *Context) NewState(mode enums.Enum, testing bool)

NewState resets counters at start of new state (trial) of processing. Pass the evaluation mode associated with this new state and testing bool.

func (*Context) PlusPhaseStart

func (ctx *Context) PlusPhaseStart()

PlusPhaseStart resets PhaseCycle = 0 and sets the plus phase to true.

func (*Context) Reset

func (ctx *Context) Reset()

Reset resets the counters all back to zero

func (*Context) SetAdaptGiCounter

func (t *Context) SetAdaptGiCounter(v int32) *Context

SetAdaptGiCounter sets the [Context.AdaptGiCounter]: AdaptGiCounter increments for each training trial, to trigger AdaptGi at AdaptGiInterval. This is incremented by NData to maintain consistency across different values of this parameter.

func (*Context) SetAdaptGiInterval

func (t *Context) SetAdaptGiInterval(v int32) *Context

SetAdaptGiInterval sets the [Context.AdaptGiInterval]: AdaptGiInterval is how frequently in Trials to perform inhibition adaptation, which needs to be even slower than the SlowInterval.

func (*Context) SetCaBinCycles

func (t *Context) SetCaBinCycles(v int32) *Context

SetCaBinCycles sets the [Context.CaBinCycles]: CaBinCycles is the number of cycles for neuron CaBins values used in computing synaptic calcium values. Total number of bins = ThetaCycles / CaBinCycles. This is fixed at 10.

func (*Context) SetCycle

func (t *Context) SetCycle(v int32) *Context

SetCycle sets the [Context.Cycle]: Cycle within Trial: number of iterations of activation updating (settling) on the current state. This is reset at NewState.

func (*Context) SetCyclesTotal

func (t *Context) SetCyclesTotal(v int32) *Context

SetCyclesTotal sets the [Context.CyclesTotal]: CyclesTotal is the accumulated cycle count, which increments continuously from whenever it was last reset. Typically this is the number of milliseconds in simulation time.

func (*Context) SetMode

func (t *Context) SetMode(v int32) *Context

SetMode sets the [Context.Mode]: current running mode, using sim-defined enum, e.g., Train, Test, etc.

func (*Context) SetNData

func (t *Context) SetNData(v uint32) *Context

SetNData sets the [Context.NData]: number of data parallel items to process currently.

func (*Context) SetPhase

func (t *Context) SetPhase(v int32) *Context

SetPhase sets the [Context.Phase]: Phase counter: typically 0-1 for minus-plus.

func (*Context) SetPhaseCycle

func (t *Context) SetPhaseCycle(v int32) *Context

SetPhaseCycle sets the [Context.PhaseCycle]: Cycle within current phase, minus or plus.

func (*Context) SetPlusCycles

func (t *Context) SetPlusCycles(v int32) *Context

SetPlusCycles sets the [Context.PlusCycles]: PlusCycles is the number of cycles in the plus phase. Typically 50, but may be set longer if ThetaCycles is above default of 200.

func (*Context) SetPlusPhase

func (t *Context) SetPlusPhase(v slbool.Bool) *Context

SetPlusPhase sets the [Context.PlusPhase]: PlusPhase is true if this is the plus phase, when the outcome / bursting is occurring, driving positive learning; else minus phase.

func (*Context) SetRandCounter

func (t *Context) SetRandCounter(v slrand.Counter) *Context

SetRandCounter sets the [Context.RandCounter]: RandCounter is the random counter, incremented by maximum number of possible random numbers generated per cycle, regardless of how many are actually used. This is shared across all layers so must encompass all possible param settings.

func (*Context) SetSlowCounter

func (t *Context) SetSlowCounter(v int32) *Context

SetSlowCounter sets the [Context.SlowCounter]: SlowCounter increments for each training trial, to trigger SlowAdapt at SlowInterval. This is incremented by NData to maintain consistency across different values of this parameter.

func (*Context) SetSlowInterval

func (t *Context) SetSlowInterval(v int32) *Context

SetSlowInterval sets the [Context.SlowInterval]: SlowInterval is how frequently in Trials to perform slow adaptive processes such as synaptic scaling, associated in the brain with sleep, via the SlowAdapt method. This should be long enough for meaningful changes to accumulate. 100 is default but could easily be longer in larger models. Because SlowCounter is incremented by NData, high NData cases (e.g. 16) likely need to increase this value, e.g., 400 seems to produce overall consistent results in various models.
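
Because each Set* method returns the receiver, related counters can be configured in a chain; the values below simply restate the scaling guidance above (NData = 16 with SlowInterval = 400) and are not defaults:

	ctx.SetNData(16).SetSlowInterval(400)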

func (*Context) SetTesting

func (t *Context) SetTesting(v slbool.Bool) *Context

SetTesting sets the [Context.Testing]: Testing is true if the model is being run in a testing mode, so no weight changes or other associated computations should be done. This flag should only affect learning-related behavior.

func (*Context) SetThetaCycles

func (t *Context) SetThetaCycles(v int32) *Context

SetThetaCycles sets the [Context.ThetaCycles]: ThetaCycles is the length of the theta cycle (i.e., Trial), in terms of 1 msec Cycles. Some network update steps depend on doing something at the end of the theta cycle (e.g., CTCtxtPath).

func (*Context) SetTime

func (t *Context) SetTime(v float32) *Context

SetTime sets the [Context.Time]: Time is the accumulated amount of time the network has been running, in simulation-time (not real world time), in seconds.

func (*Context) SetTimePerCycle

func (t *Context) SetTimePerCycle(v float32) *Context

SetTimePerCycle sets the [Context.TimePerCycle]: TimePerCycle is the amount of Time to increment per cycle.

func (*Context) SetTrialsTotal

func (t *Context) SetTrialsTotal(v int32) *Context

SetTrialsTotal sets the [Context.TrialsTotal]: TrialsTotal is the total trial count, which increments continuously in NewState _only in Train mode_ from whenever it was last reset. Can be used for synchronizing weight updates across nodes.

func (*Context) SlowInc

func (ctx *Context) SlowInc() (slow bool, adaptgi bool)

SlowInc increments the Slow and AdaptGi counters and returns whether it is time to perform the SlowAdapt and AdaptGi functions, respectively.
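
A sketch of how SlowInc might be used at the end of each training trial; the adaptation steps named in the comments are based on the descriptions above, not verified Network method signatures:

	if slow, adaptgi := ctx.SlowInc(); slow || adaptgi {
		if slow {
			// run slow adaptive processes (synaptic scaling etc.), every SlowInterval trials
		}
		if adaptgi {
			// run inhibition adaptation, every AdaptGiInterval trials
		}
	}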

type DAModTypes

type DAModTypes int32 //enums:enum

DAModTypes are types of dopamine modulation of neural activity.

const (
	// NoDAMod means there is no effect of dopamine on neural activity
	NoDAMod DAModTypes = iota

	// D1Mod is for neurons that primarily express dopamine D1 receptors,
	// which are excitatory from DA bursts, inhibitory from dips.
	// Cortical neurons can generally use this type, while subcortical
	// populations are more diverse in having both D1 and D2 subtypes.
	D1Mod

	// D2Mod is for neurons that primarily express dopamine D2 receptors,
	// which are excitatory from DA dips, inhibitory from bursts.
	D2Mod

	// D1AbsMod is like D1Mod, except the absolute value of DA is used
	// instead of the signed value.
	// There are a subset of DA neurons that send increased DA for
	// both negative and positive outcomes, targeting frontal neurons.
	D1AbsMod
)
const DAModTypesN DAModTypes = 4

DAModTypesN is the highest valid value for type DAModTypes, plus one.

func DAModTypesValues

func DAModTypesValues() []DAModTypes

DAModTypesValues returns all possible values for the type DAModTypes.

func (DAModTypes) Desc

func (i DAModTypes) Desc() string

Desc returns the description of the DAModTypes value.

func (DAModTypes) Int64

func (i DAModTypes) Int64() int64

Int64 returns the DAModTypes value as an int64.

func (DAModTypes) MarshalText

func (i DAModTypes) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*DAModTypes) SetInt64

func (i *DAModTypes) SetInt64(in int64)

SetInt64 sets the DAModTypes value from an int64.

func (*DAModTypes) SetString

func (i *DAModTypes) SetString(s string) error

SetString sets the DAModTypes value from its string representation, and returns an error if the string is invalid.

func (DAModTypes) String

func (i DAModTypes) String() string

String returns the string representation of this DAModTypes value.

func (*DAModTypes) UnmarshalText

func (i *DAModTypes) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (DAModTypes) Values

func (i DAModTypes) Values() []enums.Enum

Values returns all possible values for the type DAModTypes.

type DWtParams

type DWtParams struct {

	// Trace uses the default trace-based version of the kinase error-driven cortical
	// learning algorithm, where the per-trial error delta is computed from
	// [LearnCaP] - [LearnCaD], and the credit assignment factor is computed from the
	// synaptic product of [CaSyn], integrated over [CaBins] separately on the
	// sender and receiver neurons, which are then multiplied at each synapse and
	// integrated to efficiently compute synaptic CaP and CaD factors.
	// This synaptic CaD is integrated across theta cycle trials with the Tau
	// parameter to produce the final multiplicative credit assignment factor.
	// If Trace = false, then the synaptic CaP - CaD delta is used directly as
	// the error-driven learning signal, precluding the longer-timescale trace
	// integration factor (Trace = false is automatically used for Target layers).
	Trace slbool.Bool `default:"true"`

	// Tau is the time constant for integrating the synaptic trace [Tr]
	// over the theta cycle learning timescale. Larger values (greater than 1)
	// produce longer time windows of integration, and should only be used when
	// there is temporal structure to be learned across these longer timescales.
	Tau float32 `default:"1,2,4"`

	// SynCa20 uses an effective 20msec time window for synaptic calcium computation
	// from the [CaBins] values for send and recv neurons in computing the SynCa
	// synaptic calcium value. The default is 10msec, i.e., 1 bin, which works well
	// for most cases. This uses 2 bins if set.
	SynCa20 slbool.Bool

	// CaPScale is a separate multiplier for the CaP component of synaptic calcium, to
	// allow separate weighting of potentiation (CaP) vs. depression (CaD) factors.
	// An increased CaP level results in an overall potentiation bias, which acts
	// like a hebbian learning factor, whereas a lower value produces more negatively
	// biased synaptic weight changes, which may help with an overall hogging dynamic.
	// The default of 1 works best in most cases.
	CaPScale float32 `default:"1,0.95,1.05"`

	// SubMean is the amount of the mean [dWt] to subtract for updating the online
	// learning [LWt] values, producing a zero-sum effect. 1.0 = full zero-sum dWt.
	// Only applies to non-zero DWts. There is a separate such factor for [SWt].
	// Typically set to 0 for standard trace learning pathways, although some require it
	// for stability over the long haul. Can use [Network.SetSubMean] to set to 1 after
	// significant early learning has occurred with 0.
	// Some special path types (e.g., Hebb) benefit from SubMean = 1 always.
	SubMean float32 `default:"0,1"`

	// LearnThr is the threshold for learning, for specialized learning algorithms.
	// This is not relevant for the standard kinase error-driven cortical learning algorithm.
	// In Matrix and VSPatch it applies to normalized GeIntNorm value: setting this relatively
	// high encourages sparser representations.
	LearnThr float32

	// Dt rate = 1 / tau
	Dt float32 `display:"-" json:"-" xml:"-" edit:"-"`
	// contains filtered or unexported fields
}

DWtParams has misc parameters for computing weight changes (DWt) for the default kinase trace-based error-driven cortical learning rule, and for other specialized learning rules.

func (*DWtParams) Defaults

func (tp *DWtParams) Defaults()

func (*DWtParams) TrFromCa

func (tp *DWtParams) TrFromCa(tr float32, ca float32) float32

TrFromCa returns the updated trace factor as a function of a synaptic calcium update factor and the current trace.
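
A plausible form of this update, consistent with Dt = 1/Tau (so Tau = 1 simply replaces the trace with the new calcium value); this is a sketch, not necessarily the exact implementation:

	func trFromCa(tr, ca, dt float32) float32 {
		return tr + dt*(ca-tr) // exponential approach of the trace toward ca
	}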

func (*DWtParams) Update

func (tp *DWtParams) Update()

type DecayParams

type DecayParams struct {

	// proportion to decay most activation state variables toward initial values at start of every ThetaCycle (except those controlled separately below) -- if 1 it is effectively equivalent to full clear, resetting other derived values.  ISI is reset every AlphaCycle to get a fresh sample of activations (doesn't affect direct computation -- only readout).
	Act float32 `default:"0,0.2,0.5,1" max:"1" min:"0"`

	// proportion to decay long-lasting conductances, NMDA and GABA, and also the dendritic membrane potential -- when using random stimulus order, it is important to decay this significantly to allow a fresh start -- but set Act to 0 to enable ongoing activity to keep neurons in their sensitive regime.
	Glong float32 `default:"0,0.6" max:"1" min:"0"`

	// decay of afterhyperpolarization currents, including mAHP, sAHP, and KNa, Kir -- has a separate decay because often useful to have this not decay at all even if decay is on.
	AHP float32 `default:"0" max:"1" min:"0"`

	// decay of Ca variables driven by spiking activity used in learning: CaSpike* and Ca* variables. These are typically not decayed but may need to be in some situations.
	LearnCa float32 `default:"0" max:"1" min:"0"`

	// decay layer at end of ThetaCycle when there is a global reward -- true by default for PTPred, PTMaint and PFC Super layers
	OnRew slbool.Bool
	// contains filtered or unexported fields
}

DecayParams control the decay of activation state in the DecayState function called in NewState when a new state is to be processed.

func (*DecayParams) Defaults

func (dp *DecayParams) Defaults()

func (*DecayParams) Update

func (dp *DecayParams) Update()

type DendParams

type DendParams struct {

	// dendrite-specific strength multiplier of the exponential spiking drive on Vm -- e.g., .5 makes it half as strong as at the soma (which uses Gbar.L as a strength multiplier per the AdEx standard model)
	GbarExp float32 `default:"0.2,0.5"`

	// dendrite-specific conductance of Kdr delayed rectifier currents, used to reset membrane potential for dendrite -- applied for Tr msec
	GbarR float32 `default:"3,6"`

	// SST+ somatostatin positive slow spiking inhibition level specifically affecting dendritic Vm (VmDend) -- this is important for countering a positive feedback loop from NMDA getting stronger over the course of learning -- also typically requires SubMean = 1 for TrgAvgAct and learning to fully counter this feedback loop.
	SSGi float32 `default:"0,2"`

	// set automatically based on whether this layer has any recv pathways that have a GType conductance type of Modulatory -- if so, then multiply GeSyn etc by GModSyn
	HasMod slbool.Bool `edit:"-"`

	// multiplicative gain factor on the total modulatory input -- this can also be controlled by the PathScale.Abs factor on ModulatoryG inputs, but it is convenient to be able to control on the layer as well.
	ModGain float32

	// if true, modulatory signal also includes ACh multiplicative factor
	ModACh slbool.Bool

	// baseline modulatory level for modulatory effects -- net modulation is ModBase + ModGain * GModSyn
	ModBase float32
	// contains filtered or unexported fields
}

DendParams are the parameters for updating dendrite-specific dynamics

func (*DendParams) Defaults

func (dp *DendParams) Defaults()

func (*DendParams) ShouldDisplay

func (dp *DendParams) ShouldDisplay(field string) bool

func (*DendParams) Update

func (dp *DendParams) Update()

type DriveParams

type DriveParams struct {

	// minimum effective drive value, which is an automatic baseline ensuring
	// that a positive US results in at least some minimal level of reward.
	// Unlike Base values, this is not reflected in the activity of the drive
	// values, and applies at the time of reward calculation as a minimum baseline.
	DriveMin float32

	// baseline levels for each drive, which is what they naturally trend toward
	// in the absence of any input.  Set inactive drives to 0 baseline,
	// active ones typically elevated baseline (0-1 range).
	Base []float32

	// time constants in ThetaCycle (trial) units for natural update toward
	// Base values. 0 values means no natural update (can be updated externally).
	Tau []float32

	// decrement in drive value when US is consumed, thus partially satisfying
	// the drive. Positive values are subtracted from current Drive value.
	Satisfaction []float32

	// 1/Tau
	Dt []float32 `display:"-"`
}

DriveParams manages the drive parameters for computing and updating drive state. Most of the params are for the optional case where drives are automatically updated based on US consumption (which satisfies drives) and time passing (which increases drives).

func (*DriveParams) AddTo

func (dp *DriveParams) AddTo(di uint32, drv uint32, delta float32) float32

AddTo increments drive by given amount, subject to 0-1 range clamping. Returns new val.

func (*DriveParams) Alloc

func (dp *DriveParams) Alloc(nDrives int)

func (*DriveParams) Defaults

func (dp *DriveParams) Defaults()

func (*DriveParams) EffectiveDrive

func (dp *DriveParams) EffectiveDrive(di uint32, i uint32) float32

EffectiveDrive returns the Max of Drives at the given index and DriveMin. Note that index 0 is the novelty / curiosity drive, which doesn't use DriveMin.

func (*DriveParams) ExpStep

func (dp *DriveParams) ExpStep(di uint32, drv uint32, dt, base float32) float32

ExpStep updates drive with an exponential step with given dt value toward given baseline value.

func (*DriveParams) ExpStepAll

func (dp *DriveParams) ExpStepAll(di uint32)

ExpStepAll updates given drives with an exponential step using dt values toward baseline values.

func (*DriveParams) SoftAdd

func (dp *DriveParams) SoftAdd(di uint32, drv uint32, delta float32) float32

SoftAdd increments drive by the given amount, using soft bounding toward the 0-1 extremes: if delta is positive, it is multiplied by 1-val, otherwise by val. Returns the new value.
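
A sketch of the soft-bounding rule as described above (illustrative only):

	func softAdd(val, delta float32) float32 {
		if delta > 0 {
			val += delta * (1 - val) // approaches 1 more slowly near 1
		} else {
			val += delta * val // approaches 0 more slowly near 0
		}
		if val < 0 {
			val = 0
		} else if val > 1 {
			val = 1
		}
		return val
	}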

func (*DriveParams) ToBaseline

func (dp *DriveParams) ToBaseline(di uint32)

ToBaseline sets all drives to their baseline levels

func (*DriveParams) ToZero

func (dp *DriveParams) ToZero(di uint32)

ToZero sets all drives to 0

func (*DriveParams) Update

func (dp *DriveParams) Update()

func (*DriveParams) VarToZero

func (dp *DriveParams) VarToZero(di uint32, gvar GlobalVectorVars)

VarToZero sets all values of given drive-sized variable to 0

type DtParams

type DtParams struct {

	// overall rate constant for numerical integration, for all equations at the unit level -- all time constants are specified in millisecond units, with one cycle = 1 msec -- if you instead want to make one cycle = 2 msec, you can do this globally by setting this integ value to 2 (etc).  However, stability issues will likely arise if you go too high.  For improved numerical stability, you may even need to reduce this value to 0.5 or possibly even lower (typically however this is not necessary).  MUST also coordinate this with network.time_inc variable to ensure that global network.time reflects simulated time accurately
	Integ float32 `default:"1,0.5" min:"0"`

	// membrane potential time constant in cycles, which should be milliseconds typically (tau is roughly how long it takes for value to change significantly -- 1.4x the half-life) -- reflects the capacitance of the neuron in principle -- biological default for AdEx spiking model C = 281 pF = 2.81 normalized
	VmTau float32 `default:"2.81" min:"1"`

	// dendritic membrane potential time constant in cycles, which should be milliseconds typically (tau is roughly how long it takes for value to change significantly -- 1.4x the half-life) -- reflects the capacitance of the neuron in principle -- biological default for AdEx spiking model C = 281 pF = 2.81 normalized
	VmDendTau float32 `default:"5" min:"1"`

	// number of integration steps to take in computing new Vm value -- this is the one computation that can be most numerically unstable so taking multiple steps with proportionally smaller dt is beneficial
	VmSteps int32 `default:"2" min:"1"`

	// time constant for decay of excitatory AMPA receptor conductance.
	GeTau float32 `default:"5" min:"1"`

	// time constant for decay of inhibitory GABAa receptor conductance.
	GiTau float32 `default:"7" min:"1"`

	// time constant for integrating values over timescale of an individual input state (e.g., roughly 200 msec -- theta cycle), used in computing ActInt, GeInt from Ge, and GiInt from GiSyn -- this is used for scoring performance, not for learning, in cycles, which should be milliseconds typically (tau is roughly how long it takes for value to change significantly -- 1.4x the half-life).
	IntTau float32 `default:"40" min:"1"`

	// time constant for integrating slower long-time-scale averages, such as nrn.ActAvg, Pool.ActsMAvg, ActsPAvg -- computed in NewState when a new input state is present (i.e., not msec but in units of a theta cycle) (tau is roughly how long it takes for value to change significantly) -- set lower for smaller models
	LongAvgTau float32 `default:"20" min:"1"`

	// cycle to start updating the CaPMaxCa, CaPMax values within a theta cycle -- early cycles often reflect prior state
	MaxCycStart int32 `default:"10" min:"0"`

	// nominal rate = Integ / tau
	VmDt float32 `display:"-" json:"-" xml:"-"`

	// nominal rate = Integ / tau
	VmDendDt float32 `display:"-" json:"-" xml:"-"`

	// 1 / VmSteps
	DtStep float32 `display:"-" json:"-" xml:"-"`

	// rate = Integ / tau
	GeDt float32 `display:"-" json:"-" xml:"-"`

	// rate = Integ / tau
	GiDt float32 `display:"-" json:"-" xml:"-"`

	// rate = Integ / tau
	IntDt float32 `display:"-" json:"-" xml:"-"`

	// rate = 1 / tau
	LongAvgDt float32 `display:"-" json:"-" xml:"-"`
}

DtParams are time and rate constants for temporal derivatives in Axon (Vm, G)

func (*DtParams) AvgVarUpdate

func (dp *DtParams) AvgVarUpdate(avg, vr *float32, val float32)

AvgVarUpdate updates the average and variance from current value, using LongAvgDt

func (*DtParams) Defaults

func (dp *DtParams) Defaults()

func (*DtParams) GeSynFromRaw

func (dp *DtParams) GeSynFromRaw(geSyn, geRaw float32) float32

GeSynFromRaw integrates a synaptic conductance from raw spiking using GeTau

func (*DtParams) GeSynFromRawSteady

func (dp *DtParams) GeSynFromRawSteady(geRaw float32) float32

GeSynFromRawSteady returns the steady-state GeSyn that would result from receiving a steady increment of GeRaw every time step = raw * GeTau. dSyn = Raw - dt*Syn; solve for dSyn = 0 to get steady state: dt*Syn = Raw; Syn = Raw / dt = Raw * Tau
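
A numeric check of the steady-state relation above, assuming dp is a DtParams with Defaults applied (GeTau = 5), so GeSynFromRawSteady(0.1) should be 0.1 * 5 = 0.5:

	geSyn := float32(0)
	for i := 0; i < 100; i++ {
		geSyn = dp.GeSynFromRaw(geSyn, 0.1) // constant raw input each cycle
	}
	// geSyn has converged to approximately dp.GeSynFromRawSteady(0.1) == 0.5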

func (*DtParams) GiSynFromRaw

func (dp *DtParams) GiSynFromRaw(giSyn, giRaw float32) float32

GiSynFromRaw integrates a synaptic conductance from raw spiking using GiTau

func (*DtParams) GiSynFromRawSteady

func (dp *DtParams) GiSynFromRawSteady(giRaw float32) float32

GiSynFromRawSteady returns the steady-state GiSyn that would result from receiving a steady increment of GiRaw every time step = raw * GiTau. dSyn = Raw - dt*Syn; solve for dSyn = 0 to get steady state: dt*Syn = Raw; Syn = Raw / dt = Raw * Tau

func (*DtParams) Update

func (dp *DtParams) Update()

type FieldValue

type FieldValue struct {
	Path          string
	Field         reflect.StructField
	Value, Parent reflect.Value
}

FieldValue holds the value of a field in a struct.

func StructValues

func StructValues(obj any, should func(parent reflect.Value, field reflect.StructField, value reflect.Value) bool) []*FieldValue

StructValues returns a list of [FieldValue]s for fields of given struct, including any sub-fields, subject to filtering from the given should function which returns true for anything to include and false to exclude. You must pass a pointer to the object, so that the values are addressable.
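
For example, to list all float32 fields of some params struct (the params variable and the filter function here are illustrative; any addressable struct works):

	vals := axon.StructValues(&params, func(parent reflect.Value, field reflect.StructField, value reflect.Value) bool {
		return field.Type.Kind() == reflect.Float32
	})
	for _, v := range vals {
		fmt.Println(v.Path, v.Value.Interface())
	}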

type GPLayerTypes

type GPLayerTypes int32 //enums:enum

GPLayerTypes is a GPLayer axon-specific layer type enum.

const (
	// GPePr is the set of prototypical GPe neurons, mediating classical NoGo
	GPePr GPLayerTypes = iota

	// GPeAk is arkypallidal layer of GPe neurons, receiving inhibition from GPePr
	// and projecting inhibition to Mtx
	GPeAk

	// GPi is the inner globus pallidus, functionally equivalent to SNr,
	// receiving from MtxGo and GPePr, and sending inhibition to VThal
	GPi
)

The GPLayer types

const GPLayerTypesN GPLayerTypes = 3

GPLayerTypesN is the highest valid value for type GPLayerTypes, plus one.

func GPLayerTypesValues

func GPLayerTypesValues() []GPLayerTypes

GPLayerTypesValues returns all possible values for the type GPLayerTypes.

func (GPLayerTypes) Desc

func (i GPLayerTypes) Desc() string

Desc returns the description of the GPLayerTypes value.

func (GPLayerTypes) Int64

func (i GPLayerTypes) Int64() int64

Int64 returns the GPLayerTypes value as an int64.

func (GPLayerTypes) MarshalText

func (i GPLayerTypes) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*GPLayerTypes) SetInt64

func (i *GPLayerTypes) SetInt64(in int64)

SetInt64 sets the GPLayerTypes value from an int64.

func (*GPLayerTypes) SetString

func (i *GPLayerTypes) SetString(s string) error

SetString sets the GPLayerTypes value from its string representation, and returns an error if the string is invalid.

func (GPLayerTypes) String

func (i GPLayerTypes) String() string

String returns the string representation of this GPLayerTypes value.

func (*GPLayerTypes) UnmarshalText

func (i *GPLayerTypes) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (GPLayerTypes) Values

func (i GPLayerTypes) Values() []enums.Enum

Values returns all possible values for the type GPLayerTypes.

type GPParams

type GPParams struct {

	// type of GP Layer -- must set during config using SetBuildConfig of GPType.
	GPType GPLayerTypes
	// contains filtered or unexported fields
}

GPParams defines parameters for a globus pallidus (GP) layer, including: GPePr, GPeAk (arkypallidal), and GPi (see GPType for type). Typically there is just a single unit per Pool, representing a given stripe.

func (*GPParams) Defaults

func (gp *GPParams) Defaults()

func (*GPParams) Update

func (gp *GPParams) Update()

type GPUVars

type GPUVars int32 //enums:enum

GPUVars is an enum for GPU variables, for specifying what to sync.

const (
	LayersVar        GPUVars = 0
	PathsVar         GPUVars = 1
	NetworkIxsVar    GPUVars = 2
	PoolIxsVar       GPUVars = 3
	NeuronIxsVar     GPUVars = 4
	SynapseIxsVar    GPUVars = 5
	PathSendConVar   GPUVars = 6
	RecvPathIxsVar   GPUVars = 7
	PathRecvConVar   GPUVars = 8
	RecvSynIxsVar    GPUVars = 9
	CtxVar           GPUVars = 10
	NeuronsVar       GPUVars = 11
	NeuronAvgsVar    GPUVars = 12
	LayerStatesVar   GPUVars = 13
	GlobalScalarsVar GPUVars = 14
	GlobalVectorsVar GPUVars = 15
	ExtsVar          GPUVars = 16
	PoolsVar         GPUVars = 17
	PoolsIntVar      GPUVars = 18
	PathGBufVar      GPUVars = 19
	PathGSynsVar     GPUVars = 20
	SynapsesVar      GPUVars = 21
	SynapseTracesVar GPUVars = 22
)
const GPUVarsN GPUVars = 23

GPUVarsN is the highest valid value for type GPUVars, plus one.

func GPUVarsValues

func GPUVarsValues() []GPUVars

GPUVarsValues returns all possible values for the type GPUVars.

func (GPUVars) Desc

func (i GPUVars) Desc() string

Desc returns the description of the GPUVars value.

func (GPUVars) Int64

func (i GPUVars) Int64() int64

Int64 returns the GPUVars value as an int64.

func (GPUVars) MarshalText

func (i GPUVars) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*GPUVars) SetInt64

func (i *GPUVars) SetInt64(in int64)

SetInt64 sets the GPUVars value from an int64.

func (*GPUVars) SetString

func (i *GPUVars) SetString(s string) error

SetString sets the GPUVars value from its string representation, and returns an error if the string is invalid.

func (GPUVars) String

func (i GPUVars) String() string

String returns the string representation of this GPUVars value.

func (*GPUVars) UnmarshalText

func (i *GPUVars) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (GPUVars) Values

func (i GPUVars) Values() []enums.Enum

Values returns all possible values for the type GPUVars.

type GScaleValues

type GScaleValues struct {

	// scaling factor for integrating synaptic input conductances (G's), originally computed as a function of sending layer activity and number of connections, and typically adapted from there -- see Path.PathScale adapt params
	Scale float32 `edit:"-"`

	// normalized relative proportion of total receiving conductance for this pathway: PathScale.Rel / sum(PathScale.Rel across relevant paths)
	Rel float32 `edit:"-"`
	// contains filtered or unexported fields
}

GScaleValues holds the conductance scaling values. These are computed once at start and remain constant thereafter, and therefore belong on Params and not on PathValues.

type GiveUpParams

type GiveUpParams struct {

	// threshold on GiveUp probability, below which no give up is triggered
	ProbThr float32 `default:"0.5"`

	// minimum GiveUpSum value, which is the denominator in the sigmoidal function.
	// This minimum prevents division by zero and any other degenerate values.
	MinGiveUpSum float32 `default:"0.1"`

	// the factor multiplying utility values: cost and expected positive outcome
	Utility float32 `default:"1"`

	// the factor multiplying timing values from VSPatch
	Timing float32 `default:"2"`

	// the factor multiplying progress values based on time-integrated progress
	// toward the goal
	Progress float32 `default:"1"`

	// minimum utility cost and reward estimate values -- when they are below
	// these levels (at the start) then utility is effectively neutral,
	// so the other factors take precedence.
	MinUtility float32 `default:"0.2"`

	// maximum VSPatchPosSum for normalizing the value for give-up weighting
	VSPatchSumMax float32 `default:"1"`

	// maximum VSPatchPosVar for normalizing the value for give-up weighting
	VSPatchVarMax float32 `default:"0.5"`

	// time constant for integrating the ProgressRate
	// values over time
	ProgressRateTau float32 `default:"2"`

	// 1/tau
	ProgressRateDt float32 `display:"-"`
}

GiveUpParams are parameters for computing when to give up, based on Utility, Timing and Progress factors.

func (*GiveUpParams) Defaults

func (gp *GiveUpParams) Defaults()

func (*GiveUpParams) Prob

func (gp *GiveUpParams) Prob(cnSum, guSum float32, rnd randx.Rand) (float32, bool)

Prob returns the probability of giving up, and a discrete boolean give-up decision sampled from that probability, based on the given sums of continue and give-up factors.
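
An illustrative version of this computation, consistent with the GvGiveUpProb formula (1 / (1 + ContSum/GiveUpSum)) and the ProbThr field above; it uses math/rand for the boolean sample instead of the randx.Rand the real method takes:

	func giveUpProb(cnSum, guSum, probThr float32) (float32, bool) {
		if guSum <= 0 {
			return 0, false
		}
		p := 1 / (1 + cnSum/guSum)
		if p < probThr {
			return p, false // below threshold: never give up
		}
		return p, rand.Float32() < p
	}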

func (*GiveUpParams) Sums

func (gp *GiveUpParams) Sums(di uint32) (cnSum, guSum float32)

Sums computes the summed weighting factors that drive continue and give up contributions to the probability function.

func (*GiveUpParams) Update

func (gp *GiveUpParams) Update()

type GlobalScalarVars

type GlobalScalarVars int32 //enums:enum

GlobalScalarVars are network-wide scalar variables, such as neuromodulators, reward, etc including the state for the Rubicon phasic dopamine model. These are stored in the Network.GlobalScalars tensor and corresponding global variable.

const (

	// Rew is the external reward value.  Must also set HasRew flag when Rew is set,
	// otherwise it is ignored. This is computed by the Rubicon algorithm from US
	// inputs set by Net.Rubicon methods, and can be directly set in simpler RL cases.
	GvRew GlobalScalarVars = iota

	// HasRew must be set to true (1) when an external reward / US input is present,
	// otherwise Rew is ignored.  This is also set when Rubicon BOA model gives up.
	// This drives ACh release in the Rubicon model.
	GvHasRew

	// RewPred is the reward prediction, computed by a special reward prediction layer,
	// e.g., the VSPatch layer in the Rubicon algorithm.
	GvRewPred

	// PrevPred is previous time step reward prediction, e.g., for TDPredLayer
	GvPrevPred

	// HadRew is HasRew state from the previous trial, copied from HasRew in NewState.
	// Used for updating Effort, Urgency at start of new trial.
	GvHadRew

	// DA is phasic dopamine that drives learning more so than performance,
	// representing reward prediction error, signaled as phasic
	// increases or decreases in activity relative to a tonic baseline, which is
	// represented by a value of 0.  Released by the VTA (ventral tegmental area),
	// or SNc (substantia nigra pars compacta).
	GvDA

	// DAtonic is tonic dopamine, which has modulatory instead of learning effects.
	// Increases can drive greater propensity to engage in activities by biasing Go
	// vs No pathways in the basal ganglia, for example as a function of Urgency.
	GvDAtonic

	// ACh is acetylcholine, activated by salient events, particularly at the onset
	// of a reward / punishment outcome (US), or onset of a conditioned stimulus (CS).
	// Driven by BLA -> PPtg that detects changes in BLA activity, via LDTLayer type.
	GvACh

	// NE is norepinephrine -- not yet in use
	GvNE

	// Ser is serotonin -- not yet in use
	GvSer

	// AChRaw is raw ACh value used in updating global ACh value by LDTLayer.
	GvAChRaw

	// GoalMaint is the normalized (0-1) goal maintenance activity,
	// set in ApplyRubicon function at start of trial.
	// Drives top-down inhibition of LDT layer / ACh activity.
	GvGoalMaint

	// VSMatrixJustGated is VSMatrix just gated (to engage goal maintenance
	// in PFC areas), set at end of plus phase.  This excludes any gating
	// happening at time of US.
	GvVSMatrixJustGated

	// VSMatrixHasGated is VSMatrix has gated since the last time HasRew was set
	// (US outcome received or expected one failed to be received).
	GvVSMatrixHasGated

	// CuriosityPoolGated is true if VSMatrixJustGated and the first pool
	// representing the curiosity / novelty drive gated. This can change the
	// giving up Effort.Max parameter.
	GvCuriosityPoolGated

	// Time is the raw time counter, incrementing upward during goal engaged window.
	// This is also copied directly into NegUS[0] which tracks time, but we maintain
	// a separate time value to make it clearer.
	GvTime

	// Effort is the raw effort counter, incrementing upward for each effort step
	// during goal engaged window.
	// This is also copied directly into NegUS[1] which tracks effort, but we maintain
	// a separate effort value to make it clearer.
	GvEffort

	// UrgencyRaw is the raw effort for urgency, incrementing upward from effort
	// increments per step when _not_ goal engaged.
	GvUrgencyRaw

	// Urgency is the overall urgency activity level (normalized 0-1),
	// computed from logistic function of GvUrgencyRaw.  This drives DAtonic
	// activity to increasingly bias Go firing.
	GvUrgency

	// HasPosUS indicates has positive US on this trial,
	// drives goal accomplishment logic and gating.
	GvHasPosUS

	// HadPosUS is state from the previous trial (copied from HasPosUS in NewState).
	GvHadPosUS

	// NegUSOutcome indicates that a phasic negative US stimulus was experienced,
	// driving phasic ACh, VSMatrix gating to reset current goal engaged plan (if any),
	// and phasic dopamine based on the outcome.
	GvNegUSOutcome

	// HadNegUSOutcome is state from the previous trial (copied from NegUSOutcome
	// in NewState)
	GvHadNegUSOutcome

	// PVposSum is the total weighted positive valence primary value
	// = sum of Weight * USpos * Drive
	GvPVposSum

	// PVpos is the normalized positive valence primary value
	// = (1 - 1/(1+PVposGain * PVposSum))
	GvPVpos

	// PVnegSum is the total weighted negative valence primary values including costs
	// = sum of Weight * Cost + Weight * USneg
	GvPVnegSum

	// PVneg is the normalized negative valence primary value, including costs
	// = (1 - 1/(1+PVnegGain * PVnegSum))
	GvPVneg

	// PVposEst is the estimated PVpos final outcome value
	// decoded from the network PVposFinal layer
	GvPVposEst

	// PVposVar is the estimated variance or uncertainty in the PVpos
	// final outcome value decoded from the network PVposFinal layer.
	GvPVposVar

	// PVnegEst is the estimated PVneg final outcome value
	// decoded from the network PVnegFinal layer.
	GvPVnegEst

	// PVnegVar is the estimated variance or uncertainty in the PVneg
	// final outcome value decoded from the network PVnegFinal layer.
	GvPVnegVar

	// GoalDistEst is the estimate of distance to the goal, in trial step units,
	// decreasing down to 0 as the goal approaches.
	GvGoalDistEst

	// GoalDistPrev is the previous estimate of distance to the goal,
	// in trial step units, decreasing down to 0 as the goal approaches.
	GvGoalDistPrev

	// ProgressRate is the negative time average change in GoalDistEst,
	// i.e., positive values indicate continued approach to the goal,
	// while negative values represent moving away from the goal.
	GvProgressRate

	// GiveUpUtility is total GiveUp weight as a function of Cost.
	GvGiveUpUtility

	// ContUtility is total Continue weight as a function of expected positive outcome PVposEst.
	GvContUtility

	// GiveUpTiming is total GiveUp weight as a function of VSPatchPosSum * (1 - VSPatchPosVar).
	GvGiveUpTiming

	// ContTiming is total Continue weight as a function of (1 - VSPatchPosSum) * VSPatchPosVar.
	GvContTiming

	// GiveUpProgress is total GiveUp weight as a function of ProgressRate.
	GvGiveUpProgress

	// ContProgress is total Continue weight as a function of ProgressRate.
	GvContProgress

	// GiveUpSum is total GiveUp weight: Utility + Timing + Progress.
	GvGiveUpSum

	// ContSum is total Continue weight: Utility + Timing + Progress.
	GvContSum

	// GiveUpProb is the probability of giving up: 1 / (1 + (GvContSum / GvGiveUpSum))
	GvGiveUpProb

	// GiveUp is true if a reset was triggered probabilistically based on GiveUpProb.
	GvGiveUp

	// GaveUp is copy of GiveUp from previous trial.
	GvGaveUp

	// VSPatchPos is the net shunting input from VSPatch (PosD1, named PVi in original Rubicon)
	// computed as the Max of US-specific VSPatch saved values, taking the D1 - D2 difference.
	// This is also stored as GvRewPred.
	GvVSPatchPos

	// VSPatchPosThr is a thresholded version of GvVSPatchPos,
	// applying Rubicon.LHb.VSPatchNonRewThr threshold for non-reward trials.
	// This is the version used for computing DA.
	GvVSPatchPosThr

	// VSPatchPosRPE is the reward prediction error for the VSPatchPos reward prediction
	// without any thresholding applied, and only for PV events.
	// This is used to train the VSPatch, assuming a local feedback circuit that does
	// not have the effective thresholding used for the broadcast critic signal that
	// trains the rest of the network.
	GvVSPatchPosRPE

	// VSPatchPosSum is the sum of VSPatchPos over goal engaged trials,
	// representing the integrated prediction that the US is going to occur
	GvVSPatchPosSum

	// VSPatchPosPrev is the previous trial VSPatchPosSum
	GvVSPatchPosPrev

	// VSPatchPosVar is the integrated temporal variance of VSPatchPos over goal engaged trials,
	// which determines when the VSPatchPosSum has stabilized
	GvVSPatchPosVar

	// LHbDip is the computed LHb activity level that drives dipping / pausing of DA firing,
	// when VSPatch pos prediction > actual PV reward drive
	// or PVneg > PVpos
	GvLHbDip

	// LHbBurst is computed LHb activity level that drives bursts of DA firing,
	// when actual PV reward drive > VSPatch pos prediction
	GvLHbBurst

	// LHbPVDA is GvLHbBurst - GvLHbDip -- the LHb contribution to DA,
	// reflecting PV and VSPatch (PVi), but not the CS (LV) contributions
	GvLHbPVDA

	// CeMpos is positive valence central nucleus of the amygdala (CeM)
	// LV (learned value) activity, reflecting
	// |BLAposAcqD1 - BLAposExtD2|_+ positively rectified.
	// CeM sets Raw directly.  Note that a positive US onset even with no
	// active Drive will be reflected here, enabling learning about unexpected outcomes.
	GvCeMpos

	// CeMneg is negative valence central nucleus of the amygdala (CeM)
	// LV (learned value) activity, reflecting
	// |BLAnegAcqD2 - BLAnegExtD1|_+ positively rectified.  CeM sets Raw directly
	GvCeMneg

	// VtaDA is overall dopamine value reflecting all of the different inputs.
	GvVtaDA

	// CaBinWts are NCaBins starting here, of weights for integrating binned spikes
	// to compute synaptic calcium values that drive the trace factor in learning.
	// These are only stored for the first parallel data index di = 0.
	GvCaBinWts
)
const GlobalScalarVarsN GlobalScalarVars = 58

GlobalScalarVarsN is the highest valid value for type GlobalScalarVars, plus one.

func GlobalScalarVarsValues

func GlobalScalarVarsValues() []GlobalScalarVars

GlobalScalarVarsValues returns all possible values for the type GlobalScalarVars.

func (GlobalScalarVars) Desc

func (i GlobalScalarVars) Desc() string

Desc returns the description of the GlobalScalarVars value.

func (GlobalScalarVars) Int64

func (i GlobalScalarVars) Int64() int64

Int64 returns the GlobalScalarVars value as an int64.

func (GlobalScalarVars) MarshalText

func (i GlobalScalarVars) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*GlobalScalarVars) SetInt64

func (i *GlobalScalarVars) SetInt64(in int64)

SetInt64 sets the GlobalScalarVars value from an int64.

func (*GlobalScalarVars) SetString

func (i *GlobalScalarVars) SetString(s string) error

SetString sets the GlobalScalarVars value from its string representation, and returns an error if the string is invalid.

func (GlobalScalarVars) String

func (i GlobalScalarVars) String() string

String returns the string representation of this GlobalScalarVars value.

func (*GlobalScalarVars) UnmarshalText

func (i *GlobalScalarVars) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (GlobalScalarVars) Values

func (i GlobalScalarVars) Values() []enums.Enum

Values returns all possible values for the type GlobalScalarVars.

type GlobalVectorVars

type GlobalVectorVars int32 //enums:enum

GlobalVectorVars are network-wide vector variables, such as drives, costs, US outcomes, with MaxGlobalVecN values per variable. These are stored in the Network.GlobalVectors tensor and corresponding global variable.

const (

	// Cost are Time, Effort, etc costs, as normalized version of corresponding raw.
	// NCosts of them
	GvCost GlobalVectorVars = iota

	// CostRaw are the raw, linearly incremented cost values; this value is also
	// integrated together with all US vals for PVneg
	GvCostRaw

	// USneg are negative valence US outcomes, normalized version of raw.
	// NNegUSs of them
	GvUSneg

	// USnegRaw are raw, linearly incremented negative valence US outcomes,
	// this value is also integrated together with all US vals for PVneg
	GvUSnegRaw

	// Drives are current drive state, updated with optional homeostatic
	// exponential return to baseline values.
	GvDrives

	// USpos are current positive-valence drive-satisfying input(s)
	// (unconditioned stimuli = US)
	GvUSpos

	// VSPatchD1 is the current reward predicting VSPatch (PosD1) values.
	GvVSPatchD1

	// VSPatchD2 is the current reward predicting VSPatch (PosD2) values.
	GvVSPatchD2

	// OFCposPTMaint is activity level of given OFCposPT maintenance pool
	// used in anticipating potential USpos outcome value.
	GvOFCposPTMaint

	// VSMatrixPoolGated indicates whether a given VSMatrix pool gated.
	// This is reset after the last goal is accomplished, recording gating since then.
	GvVSMatrixPoolGated
)
const GlobalVectorVarsN GlobalVectorVars = 10

GlobalVectorVarsN is the highest valid value for type GlobalVectorVars, plus one.

func GlobalVectorVarsValues

func GlobalVectorVarsValues() []GlobalVectorVars

GlobalVectorVarsValues returns all possible values for the type GlobalVectorVars.

func (GlobalVectorVars) Desc

func (i GlobalVectorVars) Desc() string

Desc returns the description of the GlobalVectorVars value.

func (GlobalVectorVars) Int64

func (i GlobalVectorVars) Int64() int64

Int64 returns the GlobalVectorVars value as an int64.

func (GlobalVectorVars) MarshalText

func (i GlobalVectorVars) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*GlobalVectorVars) SetInt64

func (i *GlobalVectorVars) SetInt64(in int64)

SetInt64 sets the GlobalVectorVars value from an int64.

func (*GlobalVectorVars) SetString

func (i *GlobalVectorVars) SetString(s string) error

SetString sets the GlobalVectorVars value from its string representation, and returns an error if the string is invalid.

func (GlobalVectorVars) String

func (i GlobalVectorVars) String() string

String returns the string representation of this GlobalVectorVars value.

func (*GlobalVectorVars) UnmarshalText

func (i *GlobalVectorVars) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (GlobalVectorVars) Values

func (i GlobalVectorVars) Values() []enums.Enum

Values returns all possible values for the type GlobalVectorVars.

type HebbParams

type HebbParams struct {

	// On turns on the use of the Hebbian learning rule instead of the default.
	On slbool.Bool

	// Up is the strength multiplier for hebbian increases, based on R * S * (1-LWt).
	Up float32 `default:"0.5"`

	// Down is the strength multiplier for hebbian decreases, based on R * (1 - S) * LWt.
	Down float32 `default:"1"`
	// contains filtered or unexported fields
}

HebbParams are parameters for optional Hebbian learning that replaces the default learning rule, based on S = sending activity, R = receiving activity.

func (*HebbParams) Defaults

func (hp *HebbParams) Defaults()

func (*HebbParams) ShouldDisplay

func (hp *HebbParams) ShouldDisplay(field string) bool

func (*HebbParams) Update

func (hp *HebbParams) Update()

type HipConfig

type HipConfig struct {

	// size of EC2
	EC2Size vecint.Vector2i `nest:"+"`

	// number of EC3 pools (outer dimension)
	EC3NPool vecint.Vector2i `nest:"+"`

	// number of neurons in one EC3 pool
	EC3NNrn vecint.Vector2i `nest:"+"`

	// number of neurons in one CA1 pool
	CA1NNrn vecint.Vector2i `nest:"+"`

	// size of CA3
	CA3Size vecint.Vector2i `nest:"+"`

	// ratio of DG size relative to CA3 size
	DGRatio float32 `default:"2.236"`

	// percent connectivity from EC3 to EC2
	EC3ToEC2PCon float32 `default:"0.1"`

	// percent connectivity from EC2 to DG
	EC2ToDGPCon float32 `default:"0.25"`

	// percent connectivity from EC2 to CA3
	EC2ToCA3PCon float32 `default:"0.25"`

	// percent connectivity from CA3 to CA1
	CA3ToCA1PCon float32 `default:"0.25"`

	// percent connectivity into CA3 from DG
	DGToCA3PCon float32 `default:"0.02"`

	// lateral radius of connectivity in EC2
	EC2LatRadius int

	// lateral gaussian sigma in EC2 for how quickly weights fall off with distance
	EC2LatSigma float32

	// proportion of full mossy fiber strength (PathScale.Rel) for CA3 EDL in training, applied at the start of a trial to reduce DG -> CA3 strength.  1 = fully reduce strength, .5 = 50% reduction, etc
	MossyDelta float32 `default:"1"`

	// proportion of full mossy fiber strength (PathScale.Rel) for CA3 EDL in testing, applied during 2nd-3rd quarters to reduce DG -> CA3 strength.  1 = fully reduce strength, .5 = 50% reduction, etc
	MossyDeltaTest float32 `default:"0.75"`

	// low theta modulation value for temporal difference EDL -- sets PathScale.Rel on CA1 <-> EC paths consistent with Theta phase model
	ThetaLow float32 `default:"0.9"`

	// high theta modulation value for temporal difference EDL -- sets PathScale.Rel on CA1 <-> EC paths consistent with Theta phase model
	ThetaHigh float32 `default:"1"`

	// flag for clamping the EC5 from EC5ClampSrc
	EC5Clamp bool `default:"true"`

	// source layer for EC5 clamping activations in the plus phase -- biologically it is EC3 but can use an Input layer if available
	EC5ClampSrc string `default:"EC3"`

	// clamp the EC5 from EC5ClampSrc during testing as well as training -- this will overwrite any target values that might be used in stats (e.g., in the basic hip example), so it must be turned off there
	EC5ClampTest bool `default:"true"`

	// threshold for binarizing EC5 clamp values -- any value above this is clamped to 1, else 0 -- helps produce a cleaner learning signal.  Set to 0 to not perform any binarization.
	EC5ClampThr float32 `default:"0.1"`
}

HipConfig has the hippocampus size and connectivity parameters.

func (*HipConfig) Defaults

func (hip *HipConfig) Defaults()

type HipPathParams

type HipPathParams struct {

	// Hebbian learning proportion
	Hebb float32 `default:"0"`

	// EDL proportion
	Err float32 `default:"1"`

	// proportion of correction to apply to sending average activation for hebbian learning component (0=none, 1=all, .5=half, etc)
	SAvgCor float32 `default:"0.4:0.8" min:"0" max:"1"`

	// threshold of sending average activation below which learning does not occur (prevents learning when there is no input)
	SAvgThr float32 `default:"0.01" min:"0"`

	// sending layer Nominal (need to manually set it to be the same as the sending layer)
	SNominal float32 `default:"0.1" min:"0"`
	// contains filtered or unexported fields
}

HipPathParams defines the behavior of hippocampus paths, which have special learning rules.

func (*HipPathParams) Defaults

func (hp *HipPathParams) Defaults()

func (*HipPathParams) Update

func (hp *HipPathParams) Update()

type InhibParams

type InhibParams struct {

	// ActAvg has layer-level and pool-level average activation initial values
	// and updating / adaptation thereof.
	// Initial values help determine initial scaling factors.
	ActAvg ActAvgParams `display:"inline"`

	// Layer determines inhibition across the entire layer.
	// Input layers generally use Gi = 0.8 or 0.9, 1.3 or higher for sparse layers.
	// If the layer has sub-pools (4D shape) then this is effectively between-pool inhibition.
	Layer fsfffb.GiParams `display:"inline"`

	// Pool determines inhibition within sub-pools of units, for layers with 4D shape.
	// This is almost always necessary if the layer has sub-pools.
	Pool fsfffb.GiParams `display:"inline"`
}

InhibParams contains all the inhibition computation params and functions for basic Axon. This is included in LayerParams to support computation. Also includes the expected average activation in the layer, which is used for G conductance rescaling and potentially for adapting inhibition over time.

func (*InhibParams) Defaults

func (ip *InhibParams) Defaults()

func (*InhibParams) Update

func (ip *InhibParams) Update()

type LDTParams

type LDTParams struct {

	// SrcThr is the threshold per input source, on absolute value (magnitude),
	// to count as a significant reward event, which then drives maximal ACh.
	// Set to 0 to disable this nonlinear behavior.
	SrcThr float32 `default:"0.05"`

	// Rew uses the global Context.NeuroMod.HasRew flag to drive ACh:
	// if there is some kind of external reward being given, then
	// ACh goes to 1, else 0 for this component.
	Rew slbool.Bool `default:"true"`

	// MaintInhib is the extent to which active goal maintenance (via Global GoalMaint)
	// inhibits ACh signals: when goal engaged, distractability is lower.
	MaintInhib float32 `default:"0.8" max:"1" min:"0"`

	// index of Layer to get max activity from; set during Build from BuildConfig
	// SrcLay1Name if present -- -1 if not used.
	SrcLay1Index int32 `edit:"-"`

	// index of Layer to get max activity from; set during Build from BuildConfig
	// SrcLay2Name if present -- -1 if not used.
	SrcLay2Index int32 `edit:"-"`

	// index of Layer to get max activity from; set during Build from BuildConfig
	// SrcLay3Name if present -- -1 if not used.
	SrcLay3Index int32 `edit:"-"`

	// index of Layer to get max activity from; set during Build from BuildConfig
	// SrcLay4Name if present -- -1 if not used.
	SrcLay4Index int32 `edit:"-"`
	// contains filtered or unexported fields
}

LDTParams compute reward salience as ACh global neuromodulatory signal as a function of the MAX activation of its inputs from salience detecting layers (e.g., the superior colliculus: SC), and whenever there is an external US outcome input (signalled by the global GvHasRew flag). ACh from salience inputs is discounted by GoalMaint activity, reducing distraction when pursuing a goal, but US ACh activity is not so reduced. ACh modulates excitability of goal-gating layers.

func (*LDTParams) ACh

func (lp *LDTParams) ACh(ctx *Context, di uint32, srcLay1Act, srcLay2Act, srcLay3Act, srcLay4Act float32) float32

ACh returns the computed ACh salience value based on given source layer activations and key values from the ctx Context.

func (*LDTParams) Defaults

func (lp *LDTParams) Defaults()

func (*LDTParams) MaxSrcAct

func (lp *LDTParams) MaxSrcAct(maxSrcAct, srcLayAct float32) float32

MaxSrcAct returns the updated maxSrcAct value from given source layer activity value.

func (*LDTParams) Thr

func (lp *LDTParams) Thr(val float32) float32

Thr applies SrcThr threshold to given value

func (*LDTParams) Update

func (lp *LDTParams) Update()

type LHbParams

type LHbParams struct {

	// threshold on VSPatch prediction during a non-reward trial
	VSPatchNonRewThr float32 `default:"0.1"`

	// gain on the VSPatchD1 - D2 difference to drive the net VSPatch DA
	// prediction signal, which goes in VSPatchPos and RewPred global variables
	VSPatchGain float32 `default:"4"`

	// decay time constant for computing the temporal variance in VSPatch
	// values over time
	VSPatchVarTau float32 `default:"2"`

	// threshold factor that multiplies integrated pvNeg value
	// to establish a threshold for whether the integrated pvPos value
	// is good enough to drive overall net positive reward.
	// If pvPos wins, it is then multiplicatively discounted by pvNeg;
	// otherwise, pvNeg is discounted by pvPos.
	NegThr float32 `default:"1"`

	// gain multiplier on PVpos for purposes of generating bursts
	// (not for discounting negative dips).
	BurstGain float32 `default:"1"`

	// gain multiplier on PVneg for purposes of generating dips
	// (not for discounting positive bursts).
	DipGain float32 `default:"1"`

	// 1/tau
	VSPatchVarDt float32 `display:"-"`
}

LHbParams has values for computing LHb & RMTg which drives dips / pauses in DA firing. LHb handles all US-related (PV = primary value) processing. Positive net LHb activity drives dips / pauses in VTA DA activity, e.g., when predicted pos > actual or actual neg > predicted. Negative net LHb activity drives bursts in VTA DA activity, e.g., when actual pos > predicted (redundant with LV / Amygdala) or "relief" burst when actual neg < predicted.

func (*LHbParams) DAFromPVs

func (lh *LHbParams) DAFromPVs(pvPos, pvNeg, vsPatchPos, vsPatchPosSum float32) (burst, dip, da, rew float32)

DAFromPVs computes the overall PV DA in terms of LHb burst and dip activity from given pvPos, pvNeg, and vsPatchPos values. Also returns the net "reward" value as the discounted PV value, separate from the vsPatchPos prediction error factor.

func (*LHbParams) DAforNoUS

func (lh *LHbParams) DAforNoUS(di uint32) float32

DAforNoUS computes the LHb response when there is _NOT_ a primary positive reward value or a give-up state. In this case, inhibition of VS via tonic ACh is assumed to prevent activity of PVneg (and there is no PVpos). Because the LHb only responds when it decides to GiveUp, there is no response in this case. DA is instead driven by CS-based computation, in rubicon_layers.go, VTAParams.VTADA

func (*LHbParams) DAforUS

func (lh *LHbParams) DAforUS(di uint32, pvPos, pvNeg, vsPatchPos, vsPatchPosSum float32) float32

DAforUS computes the overall LHb Dip or Burst (one is always 0), and PVDA ~= Burst - Dip, for case when there is a primary positive reward value or a give-up state has triggered. Returns the overall net reward magnitude, prior to VSPatch discounting.

func (*LHbParams) Defaults

func (lh *LHbParams) Defaults()

func (*LHbParams) Reset

func (lh *LHbParams) Reset(di uint32)

Reset resets all LHb vars back to 0

func (*LHbParams) Update

func (lh *LHbParams) Update()

type LRateMod

type LRateMod struct {

	// toggle use of this modulation factor
	On slbool.Bool

	// baseline learning rate -- what you get for correct cases
	Base float32 `min:"0" max:"1"`

	// defines the range over which modulation occurs for the modulator factor -- Min and below get the Base level of learning rate modulation, Max and above get a modulation of 1
	Range minmax.F32
	// contains filtered or unexported fields
}

LRateMod implements global learning rate modulation, based on a performance-based factor, for example error. Increasing levels of the factor = higher learning rate. This can be added to a Sim and called prior to DWt() to dynamically change lrate based on overall network performance. It is not used by default in the standard params.

func (*LRateMod) Defaults

func (lr *LRateMod) Defaults()

func (*LRateMod) LRateMod

func (lr *LRateMod) LRateMod(net *Network, fact float32) float32

LRateMod calls LRateMod on the given network, using the computed Mod factor based on the given normalized modulation factor (0 = no error = Base learning rate, 1 = maximum error). Returns the modulation factor applied.

func (*LRateMod) Mod

func (lr *LRateMod) Mod(fact float32) float32

Mod returns the learning rate modulation factor as a function of any kind of normalized modulation factor, e.g., an error measure. If fact <= Range.Min, returns Base; if fact >= Range.Max, returns 1; otherwise, returns a proportional value between Base and 1.
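
A sketch of this piecewise-linear modulation (rng stands in for the Range field; illustrative only, not the exact implementation):

	func mod(fact, base float32, rng minmax.F32) float32 {
		if fact <= rng.Min {
			return base
		}
		if fact >= rng.Max {
			return 1
		}
		p := (fact - rng.Min) / (rng.Max - rng.Min)
		return base + p*(1-base) // proportional between Base and 1
	}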

func (*LRateMod) ShouldDisplay

func (lr *LRateMod) ShouldDisplay(field string) bool

func (*LRateMod) Update

func (lr *LRateMod) Update()

type LRateParams

type LRateParams struct {

	// Base learning rate for this pathway, which can be modulated
	// by the other factors below. Generally larger networks use slower rates.
	Base float32 `default:"0.04,0.1,0.2"`

	// Sched is a scheduled learning rate multiplier, simulating reduction
	// in plasticity over aging. Use the [Network.LRateSched] method to apply
	// a given value to all pathways in the network.
	Sched float32

	// Mod is a dynamic learning rate modulation factor, typically driven by
	// neuromodulation (e.g., dopamine).
	Mod float32

	// Eff is the net effective actual learning rate multiplier used in
	// computing [DWt]: Eff = Mod * Sched * Base
	Eff float32 `edit:"-"`
}

LRateParams manages learning rate parameters for scaling DWt delta weight values that then update LWt online learned weights. It has two optional modulation factors on top of a Base learning rate.
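
For example, the Sched factor can be applied across the whole network using the Network.LRateSched method referenced in the field docs above; the value 0.5 here is illustrative:

	net.LRateSched(0.5) // Eff becomes Mod * 0.5 * Base for all pathways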

func (*LRateParams) Defaults

func (ls *LRateParams) Defaults()

func (*LRateParams) Init

func (ls *LRateParams) Init()

Init initializes modulation values back to 1 and updates Eff

func (*LRateParams) Update

func (ls *LRateParams) Update()

func (*LRateParams) UpdateEff

func (ls *LRateParams) UpdateEff()

type Layer

type Layer struct {
	emer.LayerBase

	// Params are layer parameters (pointer to item in Network.LayerParams).
	Params *LayerParams

	// our parent network, in case we need to use it to find
	// other layers etc; set when added by network.
	Network *Network `copier:"-" json:"-" xml:"-" display:"-"`

	// Type is the type of layer, which drives specialized computation as needed.
	Type LayerTypes

	// NNeurons is the number of neurons in the layer.
	NNeurons uint32 `display:"-"`

	// NeurStIndex is the starting index of neurons for this layer within
	// the global Network list.
	NeurStIndex uint32 `display:"-" inactive:"-"`

	// NPools is the number of inhibitory pools based on layer shape,
	// with the first one representing the entire set of neurons in the layer,
	// and 4D shaped layers have sub-pools after that.
	NPools uint32 `display:"-"`

	// MaxData is the maximum amount of input data that can be processed in
	// parallel in one pass of the network (copied from [NetworkIndexes]).
	// Neuron, Pool, Values storage is allocated to hold this amount.
	MaxData uint32 `display:"-"`

	// RecvPaths is the list of receiving pathways into this layer from other layers.
	RecvPaths []*Path

	// SendPaths is the list of sending pathways from this layer to other layers.
	SendPaths []*Path

	// BuildConfig has configuration data set when the network is configured,
	// that is used during the network Build() process via PostBuild method,
	// after all the structure of the network has been fully constructed.
	// In particular, the Params is nil until Build, so setting anything
	// specific in there (e.g., an index to another layer) must be done
	// as a second pass.  Note that Params are all applied after Build
	// and can set user-modifiable params, so this is for more special
	// algorithm structural parameters set during ConfigNet() methods.
	BuildConfig map[string]string `table:"-"`

	// DefaultParams are closures that apply default parameters
	// prior to user-set parameters. These are useful for specific layer
	// functionality in specialized brain areas (e.g., Rubicon, BG etc)
	// not associated with a layer type, which otherwise is used to hard-code
	// initial default parameters.
	DefaultParams []func(ly *LayerParams) `display:"-"`
}

Layer implements the basic Axon spiking activation function, and manages learning in the pathways.

func (*Layer) AddClass

func (ly *Layer) AddClass(cls ...string) *Layer

func (*Layer) AddDefaultParams

func (ly *Layer) AddDefaultParams(fun func(ly *LayerParams))

AddDefaultParams adds given default param setting function.

func (*Layer) AllParams

func (ly *Layer) AllParams() string

AllParams returns a listing of all parameters in the Layer

func (*Layer) ApplyExt

func (ly *Layer) ApplyExt(di uint32, ext tensor.Tensor)

ApplyExt applies external input in the form of a tensor.Float32 or Float64. Negative values and NaNs are not valid, and will be interpreted as missing inputs. The given data index di is the data parallel index (0 <= di < MaxData): inputs must be presented separately for each separate data parallel set. If the dimensionality of the tensor matches that of the layer, and is 2D or 4D, then each dimension is iterated separately, so any mismatch preserves dimensional structure. Otherwise, the flat 1D view of the tensor is used. If the layer is a Target or Compare layer type, then the input goes into Target; otherwise it goes into Ext. Also sets the Exts values on the layer, which are used for the GPU version; this requires calling the network ApplyExts() method, which is a no-op for the CPU.
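
As a hedged usage sketch (the layer name and pattern are hypothetical, and LayerByName / ApplyExts are assumed network accessors whose exact signatures may differ), external input might be presented for data parallel index 0 using the simpler ApplyExt1D32 variant:

	in := net.LayerByName("Input") // hypothetical input layer
	in.InitExt()                   // clear any prior external inputs
	pat := []float32{0, 1, 0, 1}   // flat 1D pattern matching the layer size
	in.ApplyExt1D32(0, pat)        // di = 0: first data parallel index
	net.ApplyExts()                // copies Exts for the GPU; no-op on CPU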

func (*Layer) ApplyExt1D

func (ly *Layer) ApplyExt1D(di uint32, ext []float64)

ApplyExt1D applies external input in the form of a flat 1-dimensional slice of floats. If the layer is a Target or Compare layer type, then the input goes into Target; otherwise it goes into Ext.

func (*Layer) ApplyExt1D32

func (ly *Layer) ApplyExt1D32(di uint32, ext []float32)

ApplyExt1D32 applies external input in the form of a flat 1-dimensional slice of float32s. If the layer is a Target or Compare layer type, then the input goes into Target; otherwise it goes into Ext.

func (*Layer) ApplyExt1DTsr

func (ly *Layer) ApplyExt1DTsr(di uint32, ext tensor.Tensor)

ApplyExt1DTsr applies external input using the 1D flat interface into the tensor. If the layer is a Target or Compare layer type, then the input goes into Target; otherwise it goes into Ext.

func (*Layer) ApplyExt2D

func (ly *Layer) ApplyExt2D(di uint32, ext tensor.Tensor)

ApplyExt2D applies 2D tensor external input

func (*Layer) ApplyExt2Dto4D

func (ly *Layer) ApplyExt2Dto4D(di uint32, ext tensor.Tensor)

ApplyExt2Dto4D applies 2D tensor external input to a 4D layer

func (*Layer) ApplyExt4D

func (ly *Layer) ApplyExt4D(di uint32, ext tensor.Tensor)

ApplyExt4D applies 4D tensor external input

func (*Layer) ApplyExtFlags

func (ly *Layer) ApplyExtFlags() (clearMask, setMask NeuronFlags, toTarg bool)

ApplyExtFlags gets the clear mask and set mask for updating neuron flags based on layer type, and whether input should be applied to Target (else Ext)

func (*Layer) ApplyExtValue

func (ly *Layer) ApplyExtValue(lni, di uint32, val float32, clearMask, setMask NeuronFlags, toTarg bool)

ApplyExtValue applies the given external value to the given neuron using clearMask, setMask, and toTarg from ApplyExtFlags. Also saves the value in Exts for potential use by the GPU.

func (*Layer) AvgMaxVarByPool

func (ly *Layer) AvgMaxVarByPool(varNm string, poolIndex, di int) minmax.AvgMax32

AvgMaxVarByPool returns the average and maximum value of given variable for given pool index (0 = entire layer, 1.. are subpools for 4D only). Uses fast index-based variable access.

func (*Layer) BGThalDefaults

func (ly *Layer) BGThalDefaults()

func (*Layer) BLADefaults

func (ly *Layer) BLADefaults()

func (*Layer) Build

func (ly *Layer) Build() error

Build constructs the layer state, including calling Build on the pathways

func (*Layer) BuildConfigByName

func (ly *Layer) BuildConfigByName(nm string) (string, error)

BuildConfigByName looks for given BuildConfig option by name, and reports & returns an error if not found.

func (*Layer) BuildConfigFindLayer

func (ly *Layer) BuildConfigFindLayer(nm string, mustName bool) int32

BuildConfigFindLayer looks for a BuildConfig of the given name and, if found, looks for a layer with the corresponding name. If mustName is true, an error is logged if the BuildConfig name does not exist. An error is always logged if the layer name is not found. Returns -1 in either not-found case.

func (*Layer) BuildPaths

func (ly *Layer) BuildPaths(ctx *Context) error

BuildPaths builds the pathways, send-side

func (*Layer) BuildPools

func (ly *Layer) BuildPools(ctx *Context, nn uint32) error

BuildPools builds the inhibitory pool structures; nn = number of neurons in the layer.

func (*Layer) BuildSubPools

func (ly *Layer) BuildSubPools(ctx *Context)

BuildSubPools initializes neuron start / end indexes for sub-pools

func (*Layer) CTDefaultParamsFast

func (ly *Layer) CTDefaultParamsFast()

CTDefaultParamsFast sets fast time-integration parameters for CTLayer. This is what works best in the deep_move 1-trial history case, compared to Medium and Long.

func (*Layer) CTDefaultParamsLong

func (ly *Layer) CTDefaultParamsLong()

CTDefaultParamsLong sets long time-integration parameters for CTLayer. This is what works best in the deep_music test case integrating over long time windows, compared to Medium and Fast.

func (*Layer) CTDefaultParamsMedium

func (ly *Layer) CTDefaultParamsMedium()

CTDefaultParamsMedium sets medium time-integration parameters for CTLayer. This is what works best in the FSA test case, compared to Fast (deep_move) and Long (deep_music) time integration.

func (*Layer) CeMDefaults

func (ly *Layer) CeMDefaults()

func (*Layer) ClearTargExt

func (ly *Layer) ClearTargExt(ctx *Context)

ClearTargExt clears external inputs Ext that were set from target values Target. This can be called to simulate alpha cycles within theta cycles, for example.

func (*Layer) Defaults

func (ly *Layer) Defaults()

func (*Layer) GPDefaults

func (ly *Layer) GPDefaults()

func (*Layer) GPPostBuild

func (ly *Layer) GPPostBuild()

func (*Layer) GPiDefaults

func (ly *Layer) GPiDefaults()

func (*Layer) InitActAvg

func (ly *Layer) InitActAvg(ctx *Context)

InitActAvg initializes the running-average activation values that drive learning and the longer time averaging values.

func (*Layer) InitActAvgLayer

func (ly *Layer) InitActAvgLayer(ctx *Context)

InitActAvgLayer initializes the running-average activation values that drive learning and the longer time averaging values. version with just overall layer-level inhibition.

func (*Layer) InitActAvgPools

func (ly *Layer) InitActAvgPools(ctx *Context)

InitActAvgPools initializes the running-average activation values that drive learning and the longer time averaging values. version with pooled inhibition.

func (*Layer) InitActs

func (ly *Layer) InitActs(ctx *Context)

InitActs fully initializes activation state -- only called automatically during InitWeights

func (*Layer) InitExt

func (ly *Layer) InitExt()

InitExt initializes external input state. Should be called prior to ApplyExt on all layers receiving Ext input.

func (*Layer) InitGScale

func (ly *Layer) InitGScale(ctx *Context)

InitGScale computes the initial scaling factor for synaptic input conductances G, stored in GScale.Scale, based on sending layer initial activation.

func (*Layer) InitWeights

func (ly *Layer) InitWeights(ctx *Context, nt *Network)

InitWeights initializes the weight values in the network, i.e., resetting learning. Also calls InitActs.

func (*Layer) InitWtSym

func (ly *Layer) InitWtSym(ctx *Context)

InitWtSym initializes weight symmetry: higher layers copy weights from lower layers.

func (*Layer) LDTDefaults

func (ly *Layer) LDTDefaults()

func (*Layer) LDTPostBuild

func (ly *Layer) LDTPostBuild()

func (*Layer) LRateMod

func (ly *Layer) LRateMod(mod float32)

LRateMod sets the LRate modulation parameter for Paths, which is for dynamic modulation of learning rate (see also LRateSched). Updates the effective learning rate factor accordingly.

func (*Layer) LRateSched

func (ly *Layer) LRateSched(sched float32)

LRateSched sets the schedule-based learning rate multiplier. See also LRateMod. Updates the effective learning rate factor accordingly.
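
For example (a hedged sketch; the specific values are arbitrary), a schedule step and a dynamic modulation could be applied per layer, with the effective rate then being Eff = Mod * Sched * Base as described in LRateParams:

	ly.LRateSched(0.5) // halve the scheduled learning rate multiplier
	ly.LRateMod(0.8)   // dynamic modulation, e.g., from a neuromodulatory signal
	// effective learning rate becomes Eff = 0.8 * 0.5 * Base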

func (*Layer) LesionNeurons

func (ly *Layer) LesionNeurons(prop float32) int

LesionNeurons lesions (sets the Off flag on) the given proportion (0-1) of neurons in the layer, and returns the number of neurons lesioned. Emits an error if prop > 1, as an indication that a percent value might have been passed instead of a proportion.

func (*Layer) LocalistErr2D

func (ly *Layer) LocalistErr2D(ctx *Context) (err []bool, minusIndex, plusIndex []int)

LocalistErr2D decodes a 2D layer with Y axis = redundant units, X = localist units, returning the indexes of the maximally activated localist value in the minus and plus phase activities, and whether these are the same or different (err = different). Returns one result per data parallel index ([ctx.NData]).

func (*Layer) LocalistErr4D

func (ly *Layer) LocalistErr4D(ctx *Context) (err []bool, minusIndex, plusIndex []int)

LocalistErr4D decodes a 4D layer with each pool representing a localist value. Returns the flat 1D indexes of the max activated localist value in the minus and plus phase activities, and whether these are the same or different (err = different)

func (*Layer) MakeToolbar

func (ly *Layer) MakeToolbar(p *tree.Plan)

MakeToolbar is the standard core GUI toolbar for the layer when edited.

func (*Layer) MatrixDefaults

func (ly *Layer) MatrixDefaults()

func (*Layer) MatrixPostBuild

func (ly *Layer) MatrixPostBuild()

func (*Layer) NumRecvPaths

func (ly *Layer) NumRecvPaths() int

func (*Layer) NumSendPaths

func (ly *Layer) NumSendPaths() int

func (*Layer) PTMaintDefaults

func (ly *Layer) PTMaintDefaults()

func (*Layer) PctUnitErr

func (ly *Layer) PctUnitErr(ctx *Context) []float64

PctUnitErr returns the proportion of units where the thresholded value of Target (for Target or Compare types) or ActP does not match that of ActM. If Act > ly.Params.Acts.Clamp.ErrThr, effective activity = 1, else 0, which is robust to noisy activations. Returns one result per data parallel index ([ctx.NData]).
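
A standalone sketch of the thresholded comparison for a single unit (illustrative only; errThr stands in for ly.Params.Acts.Clamp.ErrThr):

	trgOn := targ > errThr // thresholded target (or ActP) activity
	actOn := actM > errThr // thresholded minus-phase activity
	if trgOn != actOn {
		nErr++ // this unit counts toward the error proportion
	}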

func (*Layer) PostBuild

func (ly *Layer) PostBuild()

PostBuild performs special post-Build() configuration steps for specific algorithms, using configuration data set in BuildConfig during the ConfigNet process.

func (*Layer) PulvPostBuild

func (ly *Layer) PulvPostBuild()

PulvPostBuild does post-Build config of Pulvinar based on BuildConfig options

func (*Layer) RWDaPostBuild

func (ly *Layer) RWDaPostBuild()

RWDaPostBuild does post-Build config

func (*Layer) RecipToRecvPath

func (ly *Layer) RecipToRecvPath(rpj *Path) (*Path, bool)

RecipToRecvPath finds the reciprocal pathway to the given recv pathway within the ly layer, i.e., where ly is instead the *sending* layer to the same other layer B that is the sender of the rpj pathway we're receiving from.

ly = A, other layer = B:

	rpj: R=A <- S=B
	spj: S=A -> R=B

Returns false if not found.

func (*Layer) RecipToSendPath

func (ly *Layer) RecipToSendPath(spj *Path) (*Path, bool)

RecipToSendPath finds the reciprocal pathway to the given sending pathway within the ly layer, i.e., where ly is instead the *receiving* layer from the same other layer B that is the receiver of the spj pathway we're sending to.

ly = A, other layer = B:

	spj: S=A -> R=B
	rpj: R=A <- S=B

Returns false if not found.

func (*Layer) RecvPath

func (ly *Layer) RecvPath(idx int) emer.Path

func (*Layer) RecvPathValues

func (ly *Layer) RecvPathValues(vals *[]float32, varNm string, sendLay emer.Layer, sendIndex1D int, pathType string) error

RecvPathValues fills in values of the given synapse variable name, for the pathway from the given sending layer and neuron 1D index, for all receiving neurons in this layer, into the given float32 slice (which is only resized if not big enough). pathType is the string representation of the path type; it is used if non-empty, which is useful when there are multiple pathways between two layers. If the receiving neuron is not connected to the given sending layer or neuron, then the value is set to math32.NaN(). Returns an error on an invalid variable name or lack of a recv path (vals are always set to NaN on a path error).
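
A usage sketch (the variable name "Wt" and the sendLay layer are illustrative):

	var vals []float32
	// one entry per receiving neuron in ly; NaN where not connected to sendLay neuron 0
	err := ly.RecvPathValues(&vals, "Wt", sendLay, 0, "")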

func (*Layer) RubiconPostBuild

func (ly *Layer) RubiconPostBuild()

RubiconPostBuild is used for BLA, VSPatch, and PVLayer types to set NeuroMod params

func (*Layer) STNDefaults

func (ly *Layer) STNDefaults()

func (*Layer) SendPath

func (ly *Layer) SendPath(idx int) emer.Path

func (*Layer) SendPathValues

func (ly *Layer) SendPathValues(vals *[]float32, varNm string, recvLay emer.Layer, recvIndex1D int, pathType string) error

SendPathValues fills in values of the given synapse variable name, for the pathway into the given receiving layer and neuron 1D index, for all sending neurons in this layer, into the given float32 slice (which is only resized if not big enough). pathType is the string representation of the path type; it is used if non-empty, which is useful when there are multiple pathways between two layers. If the sending neuron is not connected to the given receiving layer or neuron, then the value is set to math32.NaN(). Returns an error on an invalid variable name or lack of a path (vals are always set to NaN on a path error).

func (*Layer) SetBuildConfig

func (ly *Layer) SetBuildConfig(param, val string)

SetBuildConfig sets the named configuration parameter to the given string value, to be used in the PostBuild stage -- mainly for layer names that need to be looked up and turned into indexes after the entire network is built.
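
As a usage sketch (the config key and layer name are hypothetical), a ConfigNet method might record a layer name that a PostBuild method later resolves into an index:

	// during ConfigNet, before Build:
	ly.SetBuildConfig("ThalLayName", "MDThal")
	// later, in a PostBuild method, after the whole network is built:
	idx := ly.BuildConfigFindLayer("ThalLayName", true) // -1 if not found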

func (*Layer) SetOff

func (ly *Layer) SetOff(off bool)

todo: not standard:

func (*Layer) SetSubMean

func (ly *Layer) SetSubMean(trgAvg, path float32)

SetSubMean sets the SubMean parameters: trgAvg is for Learn.TrgAvgAct.SubMean, and path is for the pathways' Learn.DWt.SubMean. In both cases, it is generally best to have both parameters set to 0 at the start of learning.

func (*Layer) SetWeights

func (ly *Layer) SetWeights(lw *weights.Layer) error

SetWeights sets the weights for this layer from weights.Layer decoded values

func (*Layer) TDDaPostBuild

func (ly *Layer) TDDaPostBuild()

TDDaPostBuild does post-Build config

func (*Layer) TDIntegPostBuild

func (ly *Layer) TDIntegPostBuild()

TDIntegPostBuild does post-Build config

func (*Layer) TargToExt

func (ly *Layer) TargToExt(ctx *Context)

TargToExt sets external input Ext from target values Target. This is done at the end of MinusPhase to allow targets to drive activity in the plus phase. It can be called separately to simulate alpha cycles within theta cycles, for example.

func (*Layer) TestValues

func (ly *Layer) TestValues(ctrKey string, vals map[string]float32)

TestValues returns a map of key values for testing; ctrKey is a key of counters used to contextualize the values.

func (*Layer) TypeName

func (ly *Layer) TypeName() string

func (*Layer) TypeNumber

func (ly *Layer) TypeNumber() int

func (*Layer) UnLesionNeurons

func (ly *Layer) UnLesionNeurons()

UnLesionNeurons unlesions (clears the Off flag) for all neurons in the layer

func (*Layer) UnitValue1D

func (ly *Layer) UnitValue1D(varIndex int, idx, di int) float32

UnitValue1D returns the value of the given variable index on the given unit, using a 1-dimensional index. Returns NaN on an invalid index. This is the core unit variable access method used by other methods.

func (*Layer) UnitVarIndex

func (ly *Layer) UnitVarIndex(varNm string) (int, error)

UnitVarIndex returns the index of given variable within the Neuron, according to *this layer's* UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*Layer) UnitVarNames

func (ly *Layer) UnitVarNames() []string

UnitVarNames returns a list of variable names available on the units in this layer

func (*Layer) UnitVarNum

func (ly *Layer) UnitVarNum() int

UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.

func (*Layer) UnitVarProps

func (ly *Layer) UnitVarProps() map[string]string

UnitVarProps returns properties for variables

func (*Layer) Update

func (ly *Layer) Update()

Update is an interface method for generically updating after edits; it should be used only for the values on the struct itself. UpdateParams is used to update all parameters, including those of the Paths.

func (*Layer) UpdateExtFlags

func (ly *Layer) UpdateExtFlags(ctx *Context)

UpdateExtFlags updates the neuron flags for external input based on current layer Type field -- call this if the Type has changed since the last ApplyExt* method call.

func (*Layer) UpdateParams

func (ly *Layer) UpdateParams()

UpdateParams updates all params given any changes that might have been made to individual values including those in the receiving pathways of this layer. This is not called Update because it is not just about the local values in the struct.

func (*Layer) VarRange

func (ly *Layer) VarRange(varNm string) (min, max float32, err error)

VarRange returns the min / max values for the given variable. An error occurs when the variable name is not found. TODO: support r. / s. pathway values.

func (*Layer) WriteWeightsJSON

func (ly *Layer) WriteWeightsJSON(w io.Writer, depth int)

WriteWeightsJSON writes the weights from this layer from the receiver-side perspective in a JSON text format. We build in the indentation logic to make it much faster and more efficient.

type LayerIndexes

type LayerIndexes struct {
	// NPools is the total number of pools for this layer, including layer-wide.
	NPools uint32 `edit:"-"`

	// start of neurons for this layer in global array (same as Layer.NeurStIndex)
	NeurSt uint32 `edit:"-"`

	// number of neurons in layer
	NNeurons uint32 `edit:"-"`

	// start index into RecvPaths global array
	RecvSt uint32 `edit:"-"`

	// number of recv pathways
	RecvN uint32 `edit:"-"`

	// start index into SendPaths global array
	SendSt uint32 `edit:"-"`

	// number of send pathways
	SendN uint32 `edit:"-"`

	// starting neuron index in global Exts list of external input for this layer.
	// Only for Input / Target / Compare layer types
	ExtsSt uint32 `edit:"-"`

	// layer shape Pools Y dimension -- 1 for 2D
	ShpPlY int32 `edit:"-"`

	// layer shape Pools X dimension -- 1 for 2D
	ShpPlX int32 `edit:"-"`

	// layer shape Units Y dimension
	ShpUnY int32 `edit:"-"`

	// layer shape Units X dimension
	ShpUnX int32 `edit:"-"`
}

LayerIndexes contains index access into network global arrays for GPU.

type LayerInhibIndexes

type LayerInhibIndexes struct {

	// idx of Layer to get layer-level inhibition from -- set during Build from BuildConfig LayInhib1Name if present -- -1 if not used
	Index1 int32 `edit:"-"`

	// idx of Layer to get layer-level inhibition from -- set during Build from BuildConfig LayInhib2Name if present -- -1 if not used
	Index2 int32 `edit:"-"`

	// idx of Layer to get layer-level inhibition from -- set during Build from BuildConfig LayInhib3Name if present -- -1 if not used
	Index3 int32 `edit:"-"`

	// idx of Layer to get layer-level inhibition from -- set during Build from BuildConfig LayInhib4Name if present -- -1 if not used
	Index4 int32 `edit:"-"`
}

LayerInhibIndexes contains indexes of layers for between-layer inhibition.

type LayerParams

type LayerParams struct {

	// Type is the functional type of layer, which determines the code path
	// for specialized layer types, and is synchronized with [Layer.Type].
	Type LayerTypes

	// Index of this layer in [Layers] list.
	Index uint32 `edit:"-"`

	// MaxData is the maximum number of data parallel elements.
	MaxData uint32 `display:"-"`

	// PoolSt is the start of pools for this layer; first one is always the layer-wide pool.
	PoolSt uint32 `display:"-"`

	// Activation parameters and methods for computing activations
	Acts ActParams `display:"add-fields"`

	// Inhibition parameters and methods for computing layer-level inhibition
	Inhib InhibParams `display:"add-fields"`

	// LayInhib has indexes of layers that contribute between-layer inhibition
	//  to this layer. Set these indexes via BuildConfig LayInhibXName (X = 1, 2...).
	LayInhib LayerInhibIndexes `display:"inline"`

	// Learn has learning parameters and methods that operate at the neuron level.
	Learn LearnNeuronParams `display:"add-fields"`

	// Bursts has [BurstParams] that determine how the 5IB Burst activation
	// is computed from CaP integrated spiking values in Super layers.
	Bursts BurstParams `display:"inline"`

	// CT has params for the CT corticothalamic layer and PTPred layer that
	// generates predictions over the Pulvinar using context. Uses the CtxtGe
	// excitatory input plus stronger NMDA channels to maintain context trace.
	CT CTParams `display:"inline"`

	// Pulv has parameters for how the plus-phase (outcome) state of Pulvinar
	// thalamic relay cell neurons is computed from the corresponding driver
	// neuron Burst activation (or CaP if not Super).
	Pulv PulvParams `display:"inline"`

	// Matrix has parameters for BG Striatum Matrix MSN layers, which are
	// the main Go / NoGo gating units in BG. GateThr also used in BGThal.
	Matrix MatrixParams `display:"inline"`

	// GP has params for GP (globus pallidus) of the BG layers.
	GP GPParams `display:"inline"`

	// LDT has parameters for laterodorsal tegmentum ACh salience neuromodulatory
	// signal, driven by superior colliculus stimulus novelty, US input / absence,
	// and OFC / ACC inhibition.
	LDT LDTParams `display:"inline"`

	// VTA has parameters for ventral tegmental area dopamine (DA) based on
	// LHb PVDA (primary value -- at US time, computed at start of each trial
	// and stored in LHbPVDA global value) and Amygdala (CeM) CS / learned
	// value (LV) activations, which update every cycle.
	VTA VTAParams `display:"inline"`

	// RWPred has parameters for reward prediction using a simple Rescorla-Wagner
	// learning rule (i.e., PV learning in the Rubicon framework).
	RWPred RWPredParams `display:"inline"`

	// RWDa has parameters for reward prediction dopamine using a simple
	// Rescorla-Wagner learning rule (i.e., PV learning in the Rubicon framework).
	RWDa RWDaParams `display:"inline"`

	// TDInteg has parameters for temporal differences (TD) reward integration layer.
	TDInteg TDIntegParams `display:"inline"`

	// TDDa has parameters for dopamine (DA) signal as the temporal difference
	// (TD) between the TDIntegLayer activations in the minus and plus phase.
	TDDa TDDaParams `display:"inline"`

	// Indexes has recv and send pathway array access info.
	Indexes LayerIndexes `display:"-"`
}

LayerParams contains all of the layer parameters. These values must remain constant over the course of computation. On the GPU, they are loaded into a uniform.

func GetLayers

func GetLayers(idx uint32) *LayerParams

GetLayers returns a pointer to the global variable Layers []LayerParams at the given index. This is processed directly in the GPU code, so this function call is the CPU equivalent.

func (*LayerParams) AdaptGi

func (ly *LayerParams) AdaptGi(ctx *Context)

AdaptGi adapts inhibition if enabled.

func (*LayerParams) AllParams

func (ly *LayerParams) AllParams() string

AllParams returns a listing of all parameters in the Layer

func (*LayerParams) AnyGated

func (ly *LayerParams) AnyGated(di uint32) bool

AnyGated returns true if the layer-level pool Gated flag is true, which indicates whether any of the layer's pools gated.

func (*LayerParams) ApplyExtFlags

func (ly *LayerParams) ApplyExtFlags(clearMask, setMask *NeuronFlags, toTarg *bool)

ApplyExtFlags gets the clear mask and set mask for updating neuron flags based on layer type, and whether input should be applied to Target (else Ext)

func (*LayerParams) ApplyExtValue

func (ly *LayerParams) ApplyExtValue(ni, di uint32, val float32)

ApplyExtValue applies given external value to given neuron, setting flags based on type of layer. Should only be called on Input, Target, Compare layers. Negative values are not valid, and will be interpreted as missing inputs.

func (*LayerParams) ApplyExtsNeuron

func (ly *LayerParams) ApplyExtsNeuron(ni, di uint32)

func (*LayerParams) AvgDifFromTrgAvg

func (ly *LayerParams) AvgDifFromTrgAvg(ctx *Context)

AvgDifFromTrgAvg updates neuron-level AvgDif values from AvgPct - TrgAvg which is then used for synaptic scaling of LWt values in Path SynScale.

func (*LayerParams) AvgGeM

func (ly *LayerParams) AvgGeM(ctx *Context, di uint32, geIntMinusMax, giIntMinusMax float32)

AvgGeM computes the average and max GeInt, GiInt in the minus phase (the AvgMaxGeM, AvgMaxGiM stats), updated in MinusPhase, using values that have already been maxed across NData.

func (*LayerParams) Beta1Neuron

func (ly *LayerParams) Beta1Neuron(ctx *Context, ni, di uint32)

Beta1Neuron does neuron level Beta1 updating.

func (*LayerParams) Beta2Neuron

func (ly *LayerParams) Beta2Neuron(ctx *Context, ni, di uint32)

Beta2Neuron does neuron level Beta2 updating.

func (*LayerParams) BetweenGi

func (ly *LayerParams) BetweenGi(ctx *Context, di uint32)

BetweenGi computes inhibition Gi between layers.

func (*LayerParams) BetweenLayerGiMax

func (ly *LayerParams) BetweenLayerGiMax(di uint32, maxGi float32, layIndex int32) float32

BetweenLayerGiMax returns max gi value for input maxGi vs the given layIndex layer

func (*LayerParams) CTDefaults

func (ly *LayerParams) CTDefaults()

func (*LayerParams) CycleNeuron

func (ly *LayerParams) CycleNeuron(ctx *Context, ni, di uint32)

CycleNeuron does one cycle (msec) of updating at the neuron level. Called directly by Network; iterates over data.

func (*LayerParams) CyclePost

func (ly *LayerParams) CyclePost(ctx *Context, di uint32)

CyclePost is called after the standard Cycle update, as a separate network layer loop. This is reserved for any kind of special ad-hoc types that need to do something special after Spiking is finally computed and Sent. Typically used for updating global values in the Context state, such as updating a neuromodulatory signal such as dopamine. Any updates here must also be done in gpu_wgsl/gpu_cyclepost.wgsl

func (*LayerParams) CyclePostCeMLayer

func (ly *LayerParams) CyclePostCeMLayer(ctx *Context, lpi, di uint32)

func (*LayerParams) CyclePostLDTLayer

func (ly *LayerParams) CyclePostLDTLayer(ctx *Context, di uint32, srcLay1Act, srcLay2Act, srcLay3Act, srcLay4Act float32)

func (*LayerParams) CyclePostLayer

func (ly *LayerParams) CyclePostLayer(ctx *Context, lpi, di uint32)

CyclePostLayer is called for all layer types

func (*LayerParams) CyclePostRWDaLayer

func (ly *LayerParams) CyclePostRWDaLayer(ctx *Context, di uint32)

func (*LayerParams) CyclePostTDDaLayer

func (ly *LayerParams) CyclePostTDDaLayer(ctx *Context, di uint32)

func (*LayerParams) CyclePostTDIntegLayer

func (ly *LayerParams) CyclePostTDIntegLayer(ctx *Context, di uint32)

func (*LayerParams) CyclePostTDPredLayer

func (ly *LayerParams) CyclePostTDPredLayer(ctx *Context, di uint32)

func (*LayerParams) CyclePostVSPatchLayer

func (ly *LayerParams) CyclePostVSPatchLayer(ctx *Context, pi, di uint32, spi int32)

note: needs to iterate over sub-pools in layer!

func (*LayerParams) CyclePostVTALayer

func (ly *LayerParams) CyclePostVTALayer(ctx *Context, di uint32)

func (*LayerParams) DTrgSubMean

func (ly *LayerParams) DTrgSubMean(ctx *Context)

DTrgSubMean subtracts the mean from DTrgAvg values. Called by TrgAvgFromD

func (*LayerParams) DWtSubMean

func (ly *LayerParams) DWtSubMean(ctx *Context, ri uint32)

DWtSubMean subtracts the mean DWt for each recv neuron.

func (*LayerParams) DecayState

func (ly *LayerParams) DecayState(ctx *Context, di uint32, decay, glong, ahp float32)

DecayState decays activation state by given proportion (default decay values are ly.Params.Acts.Decay.Act, Glong)

func (*LayerParams) DecayStateLayer

func (ly *LayerParams) DecayStateLayer(ctx *Context, di uint32, decay, glong, ahp float32)

DecayStateLayer does layer-level decay, but not neuron level

func (*LayerParams) DecayStateNeuronsAll

func (ly *LayerParams) DecayStateNeuronsAll(ctx *Context, decay, glong, ahp float32)

DecayStateNeuronsAll decays neural activation state by given proportion (default decay values are ly.Params.Acts.Decay.Act, Glong, AHP) for all data parallel indexes. Does not decay pool or layer state. This is used for minus phase of Pulvinar layers to clear state in prep for driver plus phase.

func (*LayerParams) DecayStatePool

func (ly *LayerParams) DecayStatePool(ctx *Context, pool int, decay, glong, ahp float32)

DecayStatePool decays activation state by given proportion in given sub-pool index (0 based)

func (*LayerParams) Defaults

func (ly *LayerParams) Defaults()

func (*LayerParams) DrivesDefaults

func (ly *LayerParams) DrivesDefaults()

func (*LayerParams) GFromRawSyn

func (ly *LayerParams) GFromRawSyn(ctx *Context, ni, di uint32)

GFromRawSyn computes the overall Ge and GiSyn conductances for a neuron from GeRaw and GeSyn values, including NMDA, VGCC, AMPA, and GABA-A channels. For Pulvinar layers, drvAct is the activation of the driving neuron.

func (*LayerParams) GInteg

func (ly *LayerParams) GInteg(ctx *Context, pi, ni, di uint32)

GInteg integrates conductances G over time (Ge, NMDA, etc). Calls SpecialGFromRawSyn and GiInteg.

func (*LayerParams) GNeuroMod

func (ly *LayerParams) GNeuroMod(ctx *Context, ni, di uint32)

GNeuroMod does neuromodulation of conductances

func (*LayerParams) GatedFromCaPMax

func (ly *LayerParams) GatedFromCaPMax(ctx *Context, di uint32)

GatedFromCaPMax updates the Gated state in Pools of given layer, based on Avg CaPMax being above given threshold.

func (*LayerParams) GatherSpikes

func (ly *LayerParams) GatherSpikes(ctx *Context, ni, di uint32)

GatherSpikes integrates G*Raw and G*Syn values for given recv neuron while integrating the Recv Path-level GSyn integrated values.

func (*LayerParams) GatherSpikesInit

func (ly *LayerParams) GatherSpikesInit(ctx *Context, ni, di uint32)

GatherSpikesInit initializes G*Raw and G*Syn values for given neuron prior to integration.

func (*LayerParams) GiFromSpikes

func (ly *LayerParams) GiFromSpikes(ctx *Context, ni, di uint32)

GiFromSpikes gets the Spike, GeRaw, and GeExt values from neurons in the pools, where Spike drives FBsRaw (the raw feedback signal) and GeRaw drives FFsRaw (the aggregate feedforward excitatory spiking input). GeExt represents extra excitatory input from other sources. It then integrates new inhibitory conductances from these, at the layer and pool level. Called separately by Network.CycleImpl on all Layers. Also updates all AvgMax values at the Cycle level.

func (*LayerParams) GiInteg

func (ly *LayerParams) GiInteg(ctx *Context, pi, ni, di uint32)

GiInteg adds Gi values from all sources including SubPool computed inhib and updates GABAB as well

func (*LayerParams) HasPoolInhib

func (ly *LayerParams) HasPoolInhib() bool

HasPoolInhib returns true if the layer is using pool-level inhibition (implies 4D too). This is the proper check for using pool-level target average activations, for example.

func (*LayerParams) InitExt

func (ly *LayerParams) InitExt(ni, di uint32)

InitExt initializes external input state for given neuron

func (*LayerParams) IsInput

func (ly *LayerParams) IsInput() bool

IsInput returns true if this layer is an Input layer. By default, returns true for layers of Type == axon.InputLayer. Used to prevent adapting of inhibition or TrgAvg values.

func (*LayerParams) IsInputOrTarget

func (ly *LayerParams) IsInputOrTarget() bool

IsInputOrTarget returns true if this layer is either an Input or a Target layer.

func (*LayerParams) IsLearnTrgAvg

func (ly *LayerParams) IsLearnTrgAvg() bool

IsLearnTrgAvg returns true if this layer has Learn.TrgAvgAct.RescaleOn set for learning adjustments based on target average activity levels, and the layer is not an input or target layer.

func (*LayerParams) IsTarget

func (ly *LayerParams) IsTarget() bool

IsTarget returns true if this layer is a Target layer. By default, returns true for layers of Type == TargetLayer. Other Target layers include the PulvinarLayer in deep predictive learning. It is used in SynScale to not apply it to target layers. In both cases, Target layers are purely error-driven.

func (*LayerParams) LDTSrcLayAct

func (ly *LayerParams) LDTSrcLayAct(layIndex int32, di uint32) float32

LDTSrcLayAct returns the overall activity level for the given source layer for purposes of computing the ACh salience value. Typically the input is a superior colliculus (SC) layer that rapidly accommodates after the onset of a stimulus. Uses lpl.AvgMax.CaP.Cycle.Max as the layer activity measure.

func (*LayerParams) LayPoolGiFromSpikes

func (ly *LayerParams) LayPoolGiFromSpikes(ctx *Context, lpi, di uint32)

LayPoolGiFromSpikes computes inhibition Gi from Spikes for layer-level pool.

func (*LayerParams) LayerGi

func (ly *LayerParams) LayerGi(ctx *Context, li, di uint32)

LayerGi updates the layer-level Gi inhibition from spikes.

func (*LayerParams) LearnTrgAvgErrLRate

func (ly *LayerParams) LearnTrgAvgErrLRate() float32

LearnTrgAvgErrLRate returns the effective error-driven learning rate for adjusting target average activity levels. This is 0 if !IsLearnTrgAvg() and otherwise is Learn.TrgAvgAct.ErrLRate

func (*LayerParams) MatrixGated

func (ly *LayerParams) MatrixGated(ctx *Context)

MatrixGated is called after the standard PlusPhase, on the CPU, once Pool info has been downloaded from the GPU, to set the Gated flag based on CaPMax activity.

func (*LayerParams) MinusPhaseNeuron

func (ly *LayerParams) MinusPhaseNeuron(ctx *Context, ni, di uint32)

MinusPhaseNeuron does neuron level minus-phase updating

func (*LayerParams) MinusPhasePool

func (ly *LayerParams) MinusPhasePool(ctx *Context, pi uint32)

func (*LayerParams) MinusPhasePost

func (ly *LayerParams) MinusPhasePost(ctx *Context)

MinusPhasePost does special algorithm processing at the end of the minus phase.

func (*LayerParams) NewStateLayer

func (ly *LayerParams) NewStateLayer(ctx *Context)

NewStateLayer does NewState at the layer level.

func (*LayerParams) NewStateLayerActAvg

func (ly *LayerParams) NewStateLayerActAvg(ctx *Context, di uint32, actMinusAvg, actPlusAvg float32)

NewStateLayerActAvg updates ActAvg.ActMAvg and ActPAvg based on current values that have been averaged across NData already.

func (*LayerParams) NewStateNeuron

func (ly *LayerParams) NewStateNeuron(ctx *Context, ni, di uint32)

NewStateNeuron handles all initialization at start of new input pattern. Should already have presented the external input to the network at this point.

func (*LayerParams) NewStatePool

func (ly *LayerParams) NewStatePool(ctx *Context, pi, di uint32)

func (*LayerParams) PTPredDefaults

func (ly *LayerParams) PTPredDefaults()

func (*LayerParams) PVDefaults

func (ly *LayerParams) PVDefaults()

func (*LayerParams) PhaseDiffFromActs

func (ly *LayerParams) PhaseDiffFromActs(ctx *Context)

PhaseDiffFromActs computes the phase-wise difference in the activity state between the minus ActM and plus ActP phases, measured using 1 minus the correlation (centered cosine aka normalized dot product). 0 = no difference, 2 = maximum difference.
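
In equation form, this is the centered cosine difference (a sketch, with a^- and a^+ denoting the ActM and ActP activity vectors and \bar{a} their means):

	\mathrm{PhaseDiff} = 1 - \frac{(a^- - \bar{a}^-) \cdot (a^+ - \bar{a}^+)}{\lVert a^- - \bar{a}^- \rVert \, \lVert a^+ - \bar{a}^+ \rVert}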

func (*LayerParams) PlusPhaseActAvg

func (ly *LayerParams) PlusPhaseActAvg(ctx *Context)

PlusPhaseActAvg updates ActAvg and DTrgAvg at the plus phase. Note: this could be done on the GPU, but it is not worth it at this point.

func (*LayerParams) PlusPhaseNeuron

func (ly *LayerParams) PlusPhaseNeuron(ctx *Context, ni, di uint32)

PlusPhaseNeuron does neuron level plus-phase updating

func (*LayerParams) PlusPhasePool

func (ly *LayerParams) PlusPhasePool(ctx *Context, pi, di uint32)

func (*LayerParams) PlusPhasePost

func (ly *LayerParams) PlusPhasePost(ctx *Context)

PlusPhasePost does special algorithm processing at the end of the plus phase.

func (*LayerParams) PlusPhaseStartNeuron

func (ly *LayerParams) PlusPhaseStartNeuron(ctx *Context, ni, di uint32)

PlusPhaseStartNeuron does neuron level plus-phase start: applies Target inputs as External inputs.

func (*LayerParams) PoolIndex

func (ly *LayerParams) PoolIndex(pi uint32) uint32

PoolIndex returns the global network index for the pool with the given pool index (0 = layer pool, 1+ = subpools): just PoolSt + pi.

func (*LayerParams) PostSpike

func (ly *LayerParams) PostSpike(ctx *Context, lpi, pi, ni, di uint32)

PostSpike does updates at neuron level after spiking has been computed. It calls PostSpikeSpecial. It also updates the CaPCyc stats.

func (*LayerParams) PostSpikeSpecial

func (ly *LayerParams) PostSpikeSpecial(ctx *Context, lpi, pi, ni, di uint32)

PostSpikeSpecial does updates at neuron level after spiking has been computed. This is where special layer types add extra code.

func (*LayerParams) PulvDefaults

func (ly *LayerParams) PulvDefaults()

called in Defaults for Pulvinar layer type

func (*LayerParams) PulvinarDriver

func (ly *LayerParams) PulvinarDriver(ctx *Context, lni, di uint32, drvGe, nonDrivePct *float32)

func (*LayerParams) RWDefaults

func (ly *LayerParams) RWDefaults()

func (*LayerParams) RWPredDefaults

func (ly *LayerParams) RWPredDefaults()

func (*LayerParams) SendSpike

func (ly *LayerParams) SendSpike(ctx *Context, ni, di uint32)

SendSpike sends spikes to receivers for all neurons that spiked in the last Cycle step; these are integrated the next time around. Called directly by Network; iterates over data.

func (*LayerParams) ShouldDisplay

func (ly *LayerParams) ShouldDisplay(field string) bool

func (*LayerParams) SlowAdaptLayer

func (ly *LayerParams) SlowAdaptLayer(ctx *Context)

SlowAdaptLayer is the layer-level slow adaptation functions. Calls AdaptInhib and AvgDifFromTrgAvg for Synaptic Scaling. Does NOT call pathway-level methods.

func (*LayerParams) SlowAdaptNeuron

func (ly *LayerParams) SlowAdaptNeuron(ctx *Context, ri uint32)

SlowAdaptNeuron does path & synapse level slow adaptation on SWt and overall synaptic scaling, per each receiving neuron ri.

func (*LayerParams) SpecialPostGs

func (ly *LayerParams) SpecialPostGs(ctx *Context, ni, di uint32, saveVal float32)

SpecialPostGs is used for special layer types to do things after the standard updates in GFromRawSyn. It is passed the saveVal from SpecialPreGs

func (*LayerParams) SpecialPreGs

func (ly *LayerParams) SpecialPreGs(ctx *Context, pi, ni, di uint32, drvGe float32, nonDrivePct float32) float32

SpecialPreGs is used for special layer types to modify the conductance values prior to the standard updates in GFromRawSyn. For Pulvinar layers, drvGe and nonDrivePct reflect the driving neuron input (see PulvinarDriver).

func (*LayerParams) SpikeFromG

func (ly *LayerParams) SpikeFromG(ctx *Context, lpi, ni, di uint32)

SpikeFromG computes Vm from Ge, Gi, Gl conductances and then Spike from that

func (*LayerParams) StyleClass

func (ly *LayerParams) StyleClass() string

StyleClass implements the params.Styler interface for parameter setting, and must only be called after the network has been built, and is current, because it uses the global CurrentNetwork variable.

func (*LayerParams) StyleName

func (ly *LayerParams) StyleName() string

StyleName implements the params.Styler interface for parameter setting, and must only be called after the network has been built, and is current, because it uses the global CurrentNetwork variable.

func (*LayerParams) SubPoolGiFromSpikes

func (ly *LayerParams) SubPoolGiFromSpikes(ctx *Context, lpi, pi, di uint32, lyInhib bool, giMult float32)

SubPoolGiFromSpikes computes inhibition Gi from Spikes within a sub-pool. The pool is guaranteed not to be the overall layer pool.

func (*LayerParams) TDDefaults

func (ly *LayerParams) TDDefaults()

func (*LayerParams) TDPredDefaults

func (ly *LayerParams) TDPredDefaults()

func (*LayerParams) TrgAvgFromD

func (ly *LayerParams) TrgAvgFromD(ctx *Context)

TrgAvgFromD updates TrgAvg from DTrgAvg, called in PlusPhasePost.

func (*LayerParams) USDefaults

func (ly *LayerParams) USDefaults()

func (*LayerParams) Update

func (ly *LayerParams) Update()

func (*LayerParams) UrgencyDefaults

func (ly *LayerParams) UrgencyDefaults()

func (*LayerParams) VSGatedDefaults

func (ly *LayerParams) VSGatedDefaults()

func (*LayerParams) VSPatchDefaults

func (ly *LayerParams) VSPatchDefaults()

func (*LayerParams) WtFromDWtLayer

func (ly *LayerParams) WtFromDWtLayer(ctx *Context)

WtFromDWtLayer does the weight update at the layer level. It does NOT call the main pathway-level WtFromDWt method. In the base implementation, it only calls TrgAvgFromD.

type LayerSel

type LayerSel = params.Sel[*LayerParams]

LayerSel is one Layer parameter Selector.

type LayerSheet

type LayerSheet = params.Sheet[*LayerParams]

LayerSheet is one Layer parameter Sheet.

type LayerSheets

type LayerSheets = params.Sheets[*LayerParams]

LayerSheets contains Layer parameter Sheets.

type LayerTypes

type LayerTypes int32 //enums:enum

LayerTypes enumerates all the different types of layers, for the different algorithm types supported. Class parameter styles automatically key off of these types.

const (
	// Super is a superficial cortical layer (lamina 2-3-4)
	// which does not receive direct input or targets.
	// In more generic models, it should be used as a Hidden layer,
	// and maps onto the Hidden type in LayerTypes.
	SuperLayer LayerTypes = iota

	// Input is a layer that receives direct external input
	// in its Ext inputs.  Biologically, it can be a primary
	// sensory layer, or a thalamic layer.
	InputLayer

	// Target is a layer that receives direct external target inputs
	// used for driving plus-phase learning.
	// Simple target layers are generally not used in more biological
	// models, which instead use predictive learning via Pulvinar
	// or related mechanisms.
	TargetLayer

	// Compare is a layer that receives external comparison inputs,
	// which drive statistics but do NOT drive activation
	// or learning directly.  It is rarely used in axon.
	CompareLayer

	// CT are layer 6 corticothalamic projecting neurons,
	// which drive "top down" predictions in Pulvinar layers.
	// They maintain information over time via stronger NMDA
	// channels and use maintained prior state information to
	// generate predictions about current states forming on Super
	// layers that then drive PT (5IB) bursting activity, which
	// are the plus-phase drivers of Pulvinar activity.
	CTLayer

	// Pulvinar are thalamic relay cell neurons in the higher-order
	// Pulvinar nucleus of the thalamus, and functionally isomorphic
	// neurons in the MD thalamus, and potentially other areas.
	// These cells alternately reflect predictions driven by CT pathways,
	// and actual outcomes driven by 5IB Burst activity from corresponding
	// PT or Super layer neurons that provide strong driving inputs.
	PulvinarLayer

	// TRNLayer is thalamic reticular nucleus layer for inhibitory competition
	// within the thalamus.
	TRNLayer

	// PTMaintLayer implements the subset of pyramidal tract (PT)
	// layer 5 intrinsic bursting (5IB) deep neurons that exhibit
	// robust, stable maintenance of activity over the duration of a
	// goal engaged window, modulated by basal ganglia (BG) disinhibitory
	// gating, supported by strong MaintNMDA channels and recurrent excitation.
	// The lateral PTSelfMaint pathway uses MaintG to drive GMaintRaw input
	// that feeds into the stronger, longer MaintNMDA channels,
	// and the ThalToPT ModulatoryG pathway from BGThalamus multiplicatively
	// modulates the strength of other inputs, such that only at the time of
	// BG gating are these strong enough to drive sustained active maintenance.
	// Use Act.Dend.ModGain to parameterize.
	PTMaintLayer

	// PTPredLayer implements the subset of pyramidal tract (PT)
	// layer 5 intrinsic bursting (5IB) deep neurons that combine
	// modulatory input from PTMaintLayer sustained maintenance and
	// CTLayer dynamic predictive learning that helps to predict
	// state changes during the period of active goal maintenance.
	// This layer provides the primary input to VSPatch US-timing
	// prediction layers, and other layers that require predictive dynamics.
	PTPredLayer

	// MatrixLayer represents the matrisome medium spiny neurons (MSNs)
	// that are the main Go / NoGo gating units in BG.
	// These are strongly modulated by phasic dopamine: D1 = Go, D2 = NoGo.
	MatrixLayer

	// STNLayer represents subthalamic nucleus neurons, with two subtypes:
	// STNp are more strongly driven and get over bursting threshold, driving strong,
	// rapid activation of the KCa channels, causing a long pause in firing, which
	// creates a window during which GPe dynamics resolve Go vs. No balance.
	// STNs are more weakly driven and thus more slowly activate KCa, resulting in
	// a longer period of activation, during which the GPi is inhibited to prevent
	// premature gating based only on MtxGo inhibition -- gating only occurs when
	// GPePr signal has had a chance to integrate its MtxNo inputs.
	STNLayer

	// GPLayer represents a globus pallidus layer in the BG, including:
	// GPeOut, GPePr, GPeAk (arkypallidal), and GPi.
	// Typically just a single unit per Pool representing a given stripe.
	GPLayer

	// BGThalLayer represents a BG gated thalamic layer,
	// which receives BG gating in the form of an
	// inhibitory pathway from GPi.  Located
	// mainly in the Ventral thalamus: VA / VM / VL,
	// and also parts of MD mediodorsal thalamus.
	BGThalLayer

	// VSGated represents explicit coding of VS gating status:
	// JustGated and HasGated (since last US or failed predicted US),
	// for visualization and / or motor action signaling.
	VSGatedLayer

	// BLALayer represents a basolateral amygdala layer
	// which learns to associate arbitrary stimuli (CSs)
	// with behaviorally salient outcomes (USs)
	BLALayer

	// CeMLayer represents a central nucleus of the amygdala layer.
	CeMLayer

	// VSPatchLayer represents a ventral striatum patch layer,
	// which learns to represent the expected amount of dopamine reward
	// and projects both directly with shunting inhibition to the VTA
	// and indirectly via the LHb / RMTg to cancel phasic dopamine firing
	// to expected rewards (i.e., reward prediction error).
	VSPatchLayer

	// LHbLayer represents the lateral habenula, which drives dipping
	// in the VTA.  It tracks the Global LHb values for
	// visualization purposes -- updated by VTALayer.
	LHbLayer

	// DrivesLayer represents the Drives in the Rubicon framework.
	// It tracks the Global Drives values for
	// visualization and predictive learning purposes.
	DrivesLayer

	// UrgencyLayer represents the Urgency factor in Rubicon framework.
	// It tracks the Global Urgency.Urge value for
	// visualization and predictive learning purposes.
	UrgencyLayer

	// USLayer represents a US unconditioned stimulus layer (USpos or USneg).
	// It tracks the Global USpos or USneg, for visualization
	// and predictive learning purposes. Actual US inputs are set in Rubicon.
	USLayer

	// PVLayer represents a PV primary value layer (PVpos or PVneg) representing
	// the total primary value as a function of US inputs, drives, and effort.
	// It tracks the Global VTA.PVpos, PVneg values for
	// visualization and predictive learning purposes.
	PVLayer

	// LDTLayer represents the laterodorsal tegmentum layer, which
	// is the primary limbic ACh (acetylcholine) driver to other ACh:
	// BG cholinergic interneurons (CIN) and nucleus basalis ACh areas.
	// The phasic ACh release signals reward salient inputs from CS, US
	// and US omission, and it drives widespread disinhibition of BG gating
	// and VTA DA firing.
	// It receives excitation from superior colliculus which computes
	// a temporal derivative (stimulus specific adaptation, SSA)
	// of sensory inputs, and inhibitory input from OFC, ACC driving
	// suppression of distracting inputs during goal-engaged states.
	LDTLayer

	// VTALayer represents the ventral tegmental area, which releases
	// dopamine.  It computes final DA value from Rubicon-computed
	// LHb PVDA (primary value DA), updated at start of each trial from
	// updated US, Effort, etc state, and cycle-by-cycle LV learned value
	// state reflecting CS inputs, in the Amygdala (CeM).
	// Its activity reflects this DA level, which is effectively broadcast
	// via Global state values to all layers.
	VTALayer

	// RewLayer represents positive (first unit) or negative (second unit)
	// reward values, showing spiking rates for each, and Act always represents
	// the signed value.
	RewLayer

	// RWPredLayer computes reward prediction for a simple Rescorla-Wagner
	// learning dynamic (i.e., PV learning in the Rubicon framework).
	// Activity is computed as linear function of excitatory conductance.
	// The first unit in the layer represents positive reward, second negative.
	// Use with RWPath which does simple delta-rule learning on minus-plus.
	RWPredLayer

	// RWDaLayer computes a dopamine (DA) signal based on a simple Rescorla-Wagner
	// learning dynamic (i.e., PV learning in the Rubicon framework).
	// It computes difference between r(t) and RWPred values.
	// r(t) is accessed directly from a Rew layer -- if no external input then no
	// DA is computed -- critical for effective use of RW only for PV cases.
	// RWPred prediction is also accessed directly from Rew layer to avoid any issues.
	RWDaLayer

	// TDPredLayer is the temporal differences reward prediction layer.
	// It represents estimated value V(t) in the minus phase, and computes
	// estimated V(t+1) based on its learned weights in plus phase,
	// using the TDPredPath pathway type for DA modulated learning.
	// The first unit in the layer represents positive reward, second negative.
	TDPredLayer

	// TDIntegLayer is the temporal differences reward integration layer.
	// It represents estimated value V(t) from prior time step in the minus phase,
	// and estimated discount * V(t+1) + r(t) in the plus phase.
	// It gets Rew, PrevPred from Context.NeuroMod, and Special
	// LayerValues from TDPredLayer.
	// The first unit in the layer represents positive reward, second negative.
	TDIntegLayer

	// TDDaLayer computes a dopamine (DA) signal as the temporal difference (TD)
	// between the TDIntegLayer activations in the minus and plus phase.
	// These are retrieved from Special LayerValues.
	TDDaLayer
)

The layer types

const LayerTypesN LayerTypes = 30

LayerTypesN is the highest valid value for type LayerTypes, plus one.

func LayerTypesValues

func LayerTypesValues() []LayerTypes

LayerTypesValues returns all possible values for the type LayerTypes.

func (LayerTypes) Desc

func (i LayerTypes) Desc() string

Desc returns the description of the LayerTypes value.

func (LayerTypes) Int64

func (i LayerTypes) Int64() int64

Int64 returns the LayerTypes value as an int64.

func (LayerTypes) IsExt

func (lt LayerTypes) IsExt() bool

IsExt returns true if the layer type deals with external input: Input, Target, Compare

func (LayerTypes) MarshalText

func (i LayerTypes) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*LayerTypes) SetInt64

func (i *LayerTypes) SetInt64(in int64)

SetInt64 sets the LayerTypes value from an int64.

func (*LayerTypes) SetString

func (i *LayerTypes) SetString(s string) error

SetString sets the LayerTypes value from its string representation, and returns an error if the string is invalid.

func (LayerTypes) String

func (i LayerTypes) String() string

String returns the string representation of this LayerTypes value.

func (*LayerTypes) UnmarshalText

func (i *LayerTypes) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (LayerTypes) Values

func (i LayerTypes) Values() []enums.Enum

Values returns all possible values for the type LayerTypes.

type LayerVars

type LayerVars int32 //enums:enum

LayerVars are layer-level state values.

const (
	// LayerActMAvg is the running-average minus-phase activity integrated
	// at Dt.LongAvgTau, used for adapting inhibition relative to target level.
	LayerActMAvg LayerVars = iota

	// LayerActPAvg is the running-average plus-phase activity integrated at Dt.LongAvgTau.
	LayerActPAvg

	// LayerAvgMaxGeM is the running-average max of minus-phase Ge value across the layer
	// integrated at Dt.LongAvgTau.
	LayerAvgMaxGeM

	// LayerAvgMaxGiM is the running-average max of minus-phase Gi value across the layer
	// integrated at Dt.LongAvgTau.
	LayerAvgMaxGiM

	// LayerGiMult is a multiplier on layer-level inhibition, which can be adapted to
	// maintain target activity level.
	LayerGiMult

	// LayerPhaseDiff is the phase-wise difference in the activity state between the
	// minus [ActM] and plus [ActP] phases, measured using 1 minus the correlation
	// (centered cosine aka normalized dot product).  0 = no difference,
	// 2 = maximum difference. Computed by PhaseDiffFromActs in the PlusPhase.
	LayerPhaseDiff

	// LayerPhaseDiffAvg is the running average of [LayerPhaseDiff] over time,
	// integrated at Dt.LongAvgTau.
	LayerPhaseDiffAvg

	// LayerPhaseDiffVar is the running variance of [LayerPhaseDiff], integrated
	// at Dt.LongAvgTau.
	LayerPhaseDiffVar

	// LayerRT is the reaction time for this layer in cycles, which is -1 until the
	// Max CaP level (after MaxCycStart) exceeds the Inhib.ActAvg.RTThr threshold.
	LayerRT

	// GatedRT is the reaction time for this layer in cycles, which is -1 until the
	// Layer-level [PoolGated] is true.
	GatedRT

	// LayerRewPredPos is the positive-valued Reward Prediction value, for
	// RL specific layers: [RWPredLayer], [TDPredLayer].
	// For [TDIntegLayer], this is the plus phase current integrated reward prediction.
	LayerRewPredPos

	// LayerRewPredNeg is the negative-valued Reward Prediction value, for
	// RL specific layers: [RWPredLayer], [TDPredLayer]
	// For [TDIntegLayer], this is the minus phase previous integrated reward prediction.
	LayerRewPredNeg
)

const LayerVarsN LayerVars = 12

LayerVarsN is the highest valid value for type LayerVars, plus one.

func LayerVarsValues

func LayerVarsValues() []LayerVars

LayerVarsValues returns all possible values for the type LayerVars.

func (LayerVars) Desc

func (i LayerVars) Desc() string

Desc returns the description of the LayerVars value.

func (LayerVars) Int64

func (i LayerVars) Int64() int64

Int64 returns the LayerVars value as an int64.

func (LayerVars) MarshalText

func (i LayerVars) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*LayerVars) SetInt64

func (i *LayerVars) SetInt64(in int64)

SetInt64 sets the LayerVars value from an int64.

func (*LayerVars) SetString

func (i *LayerVars) SetString(s string) error

SetString sets the LayerVars value from its string representation, and returns an error if the string is invalid.

func (LayerVars) String

func (i LayerVars) String() string

String returns the string representation of this LayerVars value.

func (*LayerVars) UnmarshalText

func (i *LayerVars) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (LayerVars) Values

func (i LayerVars) Values() []enums.Enum

Values returns all possible values for the type LayerVars.

type LearnCaParams

type LearnCaParams struct {

	// Norm is the denominator used for normalizing [LearnCa], so the
	// max is roughly 1 - 1.5 or so, which works best in terms of previous
	// standard learning rules, and overall learning performance.
	Norm float32 `default:"80"`

	// SpikeVGCC uses spikes to generate VGCC instead of actual VGCC current.
	// See SpikeVgccCa for the calcium contribution from each spike.
	SpikeVGCC slbool.Bool `default:"true"`

	// SpikeVgccCa is the multiplier on spike for computing Ca contribution
	// to [LearnCa], in SpikeVGCC mode.
	SpikeVgccCa float32 `default:"35"`

	// VgccTau is the time constant of decay for VgccCa calcium.
	// It is highly transient around spikes, so decay and diffusion
	// factors are more important than for long-lasting NMDA factor.
	// VgccCa is integrated separately in [VgccCaInt] prior to adding
	// into NMDA Ca in [LearnCa].
	VgccTau float32 `default:"10"`

	// Dt are time constants for integrating [LearnCa] across
	// M, P and D cascading levels.
	Dt kinase.CaDtParams `display:"inline"`

	// VgccDt rate = 1 / tau
	VgccDt float32 `display:"-" json:"-" xml:"-" edit:"-"`

	// NormInv = 1 / Norm
	NormInv float32 `display:"-" json:"-" xml:"-" edit:"-"`
	// contains filtered or unexported fields
}

LearnCaParams parameterizes the neuron-level calcium signals driving learning: LearnCa = NMDA + VGCC Ca sources, where VGCC can be simulated from spiking or use the more complex and dynamic VGCC channel directly. LearnCa is then integrated in a cascading manner at multiple time scales: CaM (as in calmodulin), CaP (ltP, CaMKII, plus phase), CaD (ltD, DAPK1, minus phase).
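
A minimal standalone sketch of this cascaded integration (illustrative only; the actual rate constants come from Dt, a kinase.CaDtParams, and the real update includes normalization and decay details not shown here):

	// learnCa: normalized NMDA + VGCC calcium for this cycle
	caM += (learnCa - caM) / mTau // fast, calmodulin-level integration
	caP += (caM - caP) / pTau     // slower CaMKII-like, plus-phase level
	caD += (caP - caD) / dTau     // slowest DAPK1-like, minus-phase level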

func (*LearnCaParams) Defaults

func (lc *LearnCaParams) Defaults()

func (*LearnCaParams) LearnCas

func (lc *LearnCaParams) LearnCas(ctx *Context, ni, di uint32)

LearnCas updates the LearnCa value and its cascaded values, based on NMDA and VGCC Ca. It first calls VgccCaFromSpike to update the spike-driven version of that variable and perform its time integration.

func (*LearnCaParams) ShouldDisplay

func (lc *LearnCaParams) ShouldDisplay(field string) bool

func (*LearnCaParams) Update

func (lc *LearnCaParams) Update()
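
A minimal sketch of the usual Defaults / Update pattern for these Params structs: Update recomputes the derived rate fields (NormInv = 1/Norm, VgccDt = 1/VgccTau) after any parameter change (assuming the axon package is imported):

	var lc axon.LearnCaParams
	lc.Defaults() // Norm = 80, SpikeVGCC = true, SpikeVgccCa = 35, VgccTau = 10
	lc.Norm = 60  // hypothetical change to the normalization denominator
	lc.Update()   // recompute NormInv = 1/Norm and VgccDt = 1/VgccTau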

func (*LearnCaParams) VgccCaFromSpike

func (lc *LearnCaParams) VgccCaFromSpike(ctx *Context, ni, di uint32)

VgccCaFromSpike updates the simulated VGCC calcium from spiking, if that option is selected, and performs time integration of VgccCa.

type LearnNeuronParams

type LearnNeuronParams struct {

	// CaLearn parameterizes the neuron-level calcium signals driving learning:
	// LearnCa = NMDA + VGCC Ca sources, where VGCC can be simulated from spiking
	// or use the more complex and dynamic VGCC channel directly.  LearnCa is then
	// integrated in a cascading manner at multiple time scales:
	// LearnCaM (as in calmodulin), LearnCaP (ltP, CaMKII, plus phase),
	// LearnCaD (ltD, DAPK1, minus phase).
	CaLearn LearnCaParams `display:"inline"`

	// CaSpike parameterizes the neuron-level spike-driven calcium signals:
	// CaM (calmodulin), CaP (ltP, CaMKII, plus phase), CaD (ltD, DAPK1, minus phase).
	// These values are used in various cases as a proxy for the activation (spiking)
	// based learning signal.
	CaSpike kinase.CaSpikeParams `display:"inline"`

	// NMDA channel parameters used for learning, vs. the ones driving activation.
	// This allows exploration of learning parameters independent of their effects
	// on active maintenance contributions of NMDA, and may be supported by different
	// receptor subtypes.
	LearnNMDA chans.NMDAParams `display:"inline"`

	// TrgAvgAct has the synaptic scaling parameters for regulating overall average
	// activity compared to neuron's own target level.
	TrgAvgAct TrgAvgActParams `display:"inline"`

	// RLRate has the recv neuron learning rate modulation params: an additional
	// error-based modulation of learning for receiver side:
	// RLRate = |CaP - CaD| / Max(CaP, CaD)
	RLRate RLRateParams `display:"inline"`

	// NeuroMod parameterizes neuromodulation effects on learning rate and activity,
	// as a function of layer-level DA and ACh values, which are updated from global
	// Context values, and computed from reinforcement learning algorithms.
	NeuroMod NeuroModParams `display:"inline"`
}

LearnNeuronParams manages learning-related parameters at the neuron level. This is mainly the running-average activations that drive learning.

func (*LearnNeuronParams) CaFromSpike

func (ln *LearnNeuronParams) CaFromSpike(ctx *Context, ni, di uint32)

CaFromSpike updates all spike-driven calcium variables, including LearnCa and CaSpike. Computed after new activation for current cycle is updated.

func (*LearnNeuronParams) Defaults

func (ln *LearnNeuronParams) Defaults()

func (*LearnNeuronParams) InitNeuronCa

func (ln *LearnNeuronParams) InitNeuronCa(ctx *Context, ni, di uint32)

InitNeuronCa initializes the neuron-level calcium learning and spiking variables. Called by InitWeights (at start of learning).

func (*LearnNeuronParams) LearnNMDAFromRaw

func (ln *LearnNeuronParams) LearnNMDAFromRaw(ctx *Context, ni, di uint32, geTot float32)

LearnNMDAFromRaw updates the separate NMDA conductance and calcium values based on GeTot = GeRaw + external ge conductance. These are the variables that drive learning -- they can be the same as those driving activation, but can also differ, for testing learning Ca effects independently of activation effects.

func (*LearnNeuronParams) Update

func (ln *LearnNeuronParams) Update()

type LearnSynParams

type LearnSynParams struct {

	// Learn enables learning for this pathway.
	Learn slbool.Bool

	// LRateParams manages learning rate parameters for scaling [DWt] delta
	// weight values that then update [LWt] online learned weights.
	// It has two optional modulation factors on top of a Base learning rate.
	LRate LRateParams `display:"inline"`

	// DWtParams has misc parameters for computing weight changes ([DWt]) for the default
	// trace-based cortical learning rule and for other specialized learning rules.
	DWt DWtParams `display:"inline"`

	// hebbian learning option, which overrides the default learning rules
	Hebb HebbParams `display:"inline"`
	// contains filtered or unexported fields
}

LearnSynParams manages learning-related parameters at the synapse-level.

func (*LearnSynParams) CHLdWt

func (ls *LearnSynParams) CHLdWt(suCaP, suCaD, ruCaP, ruCaD float32) float32

CHLdWt returns the error-driven weight change component for a CHL contrastive hebbian learning rule, optionally using the checkmark-shaped temporally eXtended Contrastive Attractor Learning (XCAL) function.

func (*LearnSynParams) Defaults

func (ls *LearnSynParams) Defaults()

func (*LearnSynParams) DeltaDWt

func (ls *LearnSynParams) DeltaDWt(plus, minus float32) float32

DeltaDWt returns the error-driven weight change component for a simple delta between a minus and plus phase factor, optionally using the checkmark-shaped temporally eXtended Contrastive Attractor Learning (XCAL) function.
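
A sketch of how DeltaDWt might be used with plus-phase (CaP) and minus-phase (CaD) values for a receiving neuron; the lrate value is a stand-in for the effective learning rate from LRateParams:

	var ls axon.LearnSynParams
	ls.Defaults()
	lrate := float32(0.02)                     // stand-in for the effective learning rate
	ruCaP, ruCaD := float32(0.8), float32(0.5) // plus-phase vs. minus-phase Ca (example values)
	dwt := lrate * ls.DeltaDWt(ruCaP, ruCaD)   // error-driven weight change component
	_ = dwt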

func (*LearnSynParams) ShouldDisplay

func (ls *LearnSynParams) ShouldDisplay(field string) bool

func (*LearnSynParams) Update

func (ls *LearnSynParams) Update()

type MatrixParams

type MatrixParams struct {

	// GateThr is the threshold on layer Avg CaPMax for Matrix Go and BG Thal
	// layers to count as having gated.
	GateThr float32 `default:"0.05"`

	// IsVS is this a ventral striatum (VS) matrix layer? If true, the gating
	// status of this layer is recorded in the Global state,
	// and used for updating effort and other factors.
	IsVS slbool.Bool

	// index of other matrix (Go if we are NoGo and vice-versa). Set during Build from BuildConfig OtherMatrixName
	OtherMatrixIndex int32 `edit:"-"`

	// index of thalamus layer that we gate.  needed to get gating information.  Set during Build from BuildConfig ThalLay1Name if present -- -1 if not used
	ThalLay1Index int32 `edit:"-"`

	// index of thalamus layer that we gate.  needed to get gating information.  Set during Build from BuildConfig ThalLay2Name if present -- -1 if not used
	ThalLay2Index int32 `edit:"-"`

	// index of thalamus layer that we gate.  needed to get gating information.  Set during Build from BuildConfig ThalLay3Name if present -- -1 if not used
	ThalLay3Index int32 `edit:"-"`

	// index of thalamus layer that we gate.  needed to get gating information.  Set during Build from BuildConfig ThalLay4Name if present -- -1 if not used
	ThalLay4Index int32 `edit:"-"`

	// index of thalamus layer that we gate.  needed to get gating information.  Set during Build from BuildConfig ThalLay5Name if present -- -1 if not used
	ThalLay5Index int32 `edit:"-"`

	// index of thalamus layer that we gate.  needed to get gating information.  Set during Build from BuildConfig ThalLay6Name if present -- -1 if not used
	ThalLay6Index int32 `edit:"-"`
	// contains filtered or unexported fields
}

MatrixParams has parameters for BG Striatum Matrix MSN layers. These are the main Go / NoGo gating units in BG. DA, ACh learning rate modulation is pre-computed on the recv neuron RLRate variable via NeuroMod. Also uses Pool.Gated for InvertNoGate, updated in PlusPhase prior to DWt call. Must set Learn.NeuroMod.DAMod = D1Mod or D2Mod via SetBuildConfig("DAMod").

func (*MatrixParams) Defaults

func (mp *MatrixParams) Defaults()

func (*MatrixParams) Update

func (mp *MatrixParams) Update()

type MatrixPathParams

type MatrixPathParams struct {

	// proportion of trace activity driven by the basic credit assignment factor
	// based on the PF modulatory inputs and activity of the receiving neuron,
	// relative to the delta factor which is generally going to be smaller
	// because it is an activity difference.
	Credit float32 `default:"0.6"`

	// baseline amount of PF activity that modulates credit assignment learning,
	// for neurons with zero PF modulatory activity.
	// These were not part of the actual motor action, but can still get some
	// smaller amount of credit learning.
	BasePF float32 `default:"0.005"`

	// weight for trace activity that is a function of the minus-plus delta
	// activity signal on the receiving MSN neuron, independent of PF modulation.
	// This should always be 1 except for testing disabling: adjust NonDelta
	// relative to it, and the overall learning rate.
	Delta float32 `default:"1"`

	// for ventral striatum, learn based on activity at time of reward,
	// in inverse proportion to the GoalMaint activity: i.e., if there was no
	// goal maintenance, learn at reward to encourage goal engagement next time,
	// but otherwise, do not further reinforce at time of reward, because the
	// actual goal gating learning trace is a better learning signal.
	// Otherwise, only uses accumulated trace but doesn't include rew-time activity,
	// e.g., for testing cases that do not have GoalMaint.
	VSRewLearn slbool.Bool `default:"true"`
}

MatrixPathParams for trace-based learning in the MatrixPath. A trace of synaptic co-activity is formed, and then modulated by dopamine whenever it occurs. This bridges the temporal gap between gating activity and subsequent activity, and is based biologically on synaptic tags. Trace is applied to DWt and reset at the time of reward.

func (*MatrixPathParams) Defaults

func (tp *MatrixPathParams) Defaults()

func (*MatrixPathParams) Update

func (tp *MatrixPathParams) Update()

type NetViewUpdate

type NetViewUpdate struct {

	// On toggles update of display on
	On bool

	// Time scale to update the network view (Cycle to Trial timescales).
	Time ViewTimes

	// CounterFunc returns the counter string showing current counters etc.
	CounterFunc func(mode, level enums.Enum) string `display:"-"`

	// View is the network view.
	View *netview.NetView `display:"-"`
}

NetViewUpdate manages time scales for updating the NetView. Use one of these for each mode you want to control separately.

func (*NetViewUpdate) Config

func (vu *NetViewUpdate) Config(nv *netview.NetView, tm ViewTimes, fun func(mode, level enums.Enum) string)

Config configures for given NetView, time and counter function, which returns a string to show at the bottom of the netview, given the current mode and level.
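
A minimal sketch of configuring and driving a NetViewUpdate from simulation code (nv, tm, mode, and level are assumed to come from the surrounding sim; the counter string is just an example; assumes the netview and enums packages used by axon are imported):

	func configView(vu *axon.NetViewUpdate, nv *netview.NetView, tm axon.ViewTimes, mode, level enums.Enum) {
		vu.Config(nv, tm, func(m, l enums.Enum) string {
			return m.String() + ": " + l.String() // counter text shown at bottom of the view
		})
		vu.On = true
		vu.Update(mode, level) // from the main event loop; use GoUpdate from a separate goroutine
	}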

func (*NetViewUpdate) GoUpdate

func (vu *NetViewUpdate) GoUpdate(mode, level enums.Enum)

GoUpdate does an update if view is On, visible and active, including recording new data and driving update of display. This version is only for calling from a separate goroutine, not the main event loop (see also Update).

func (*NetViewUpdate) IsCycleUpdating

func (vu *NetViewUpdate) IsCycleUpdating() bool

IsCycleUpdating returns true if the view is updating at a cycle level, either from raster or literal cycle level.

func (*NetViewUpdate) IsViewingSynapse

func (vu *NetViewUpdate) IsViewingSynapse() bool

IsViewingSynapse returns true if netview is actively viewing synapses.

func (*NetViewUpdate) RecordSyns

func (vu *NetViewUpdate) RecordSyns()

RecordSyns records synaptic data -- stored separately from unit data and only needs to be called when synaptic values are updated. Should be done when the DWt values have been computed, before updating Wts and zeroing. NetView displays this recorded data when Update is next called.

func (*NetViewUpdate) ShouldUpdate

func (vu *NetViewUpdate) ShouldUpdate() bool

ShouldUpdate returns true if the view is On, View is != nil, and it is visible.

func (*NetViewUpdate) Update

func (vu *NetViewUpdate) Update(mode, level enums.Enum)

Update does an update if view is On, visible and active, including recording new data and driving update of display. This version is only for calling from the main event loop (see also GoUpdate).

func (*NetViewUpdate) UpdateCycle

func (vu *NetViewUpdate) UpdateCycle(cyc int, mode, level enums.Enum)

UpdateCycle triggers an update at the Cycle (millisecond) timescale, for the given cycle, mode, and level.

func (*NetViewUpdate) UpdateWhenStopped

func (vu *NetViewUpdate) UpdateWhenStopped(mode, level enums.Enum)

UpdateWhenStopped does an update when the network updating was stopped either via stepping or hitting the stop button. This has different logic for the raster view vs. regular. This is only for calling from a separate goroutine, not the main event loop.

type Network

type Network struct {
	emer.NetworkBase

	// Rubicon system for goal-driven motivated behavior,
	// including Rubicon phasic dopamine signaling.
	// Manages internal drives, US outcomes. Core LHb (lateral habenula)
	// and VTA (ventral tegmental area) dopamine are computed
	// in equations using inputs from specialized network layers
	// (LDTLayer driven by BLA, CeM layers, VSPatchLayer).
	// Renders USLayer, PVLayer, DrivesLayer representations
	// based on state updated here.
	Rubicon Rubicon

	// Layers is the array of layers, used for CPU initialization, not GPU computation.
	Layers []*Layer

	// Paths has pointers to all pathways in the network, sender-based, for CPU initialization,
	// not GPU computation.
	Paths []*Path `display:"-"`

	// LayerClassMap is a map from class name to layer names.
	LayerClassMap map[string][]string `display:"-"`

	// NThreads is number of threads to use for parallel processing.
	NThreads int

	// record function timer information.
	RecFunTimes bool `display:"-"`

	// timers for each major function (step of processing).
	FunTimes map[string]*timer.Time `display:"-"`

	// LayerParams are all the layer parameters. [NLayers]
	LayerParams []LayerParams `display:"-"`

	// PathParams are all the path parameters, in sending order. [NPaths]
	PathParams []PathParams `display:"-"`

	// NetworkIxs have indexes and sizes for entire network (one only).
	NetworkIxs []NetworkIndexes

	// PoolIxs have index values for each Pool.
	// [Layer * Pools][PoolIndexVars]
	PoolIxs tensor.Uint32 `display:"-"`

	// NeuronIxs have index values for each neuron: index into layer, pools.
	// [Neurons][Indexes]
	NeuronIxs tensor.Uint32 `display:"-"`

	// SynapseIxs have index values for each synapse:
	// providing index into recv, send neurons, path.
	// [Indexes][NSyns]; NSyns = [Layer][SendPaths][SendNeurons][Syns]
	SynapseIxs tensor.Uint32 `display:"-"`

	// PathSendCon are starting offset and N cons for each sending neuron,
	// for indexing into the Syns synapses, which are organized sender-based.
	// [NSendCon][StartNN]; NSendCon = [Layer][SendPaths][SendNeurons]
	PathSendCon tensor.Uint32 `display:"-"`

	// RecvPathIxs indexes into Paths (organized by SendPath) organized
	// by recv pathways. needed for iterating through recv paths efficiently on GPU.
	// [NRecvPaths] = [Layer][RecvPaths]
	RecvPathIxs tensor.Uint32 `display:"-"`

	// PathRecvCon are the receiving path starting index and number of connections.
	// [NRecvCon][StartNN]; NRecvCon = [Layer][RecvPaths][RecvNeurons]
	PathRecvCon tensor.Uint32 `display:"-"`

	// RecvSynIxs are the indexes into Synapses for each recv neuron, organized
	// into blocks according to PathRecvCon, for receiver-based access.
	// [NSyns] = [Layer][RecvPaths][RecvNeurons][Syns]
	RecvSynIxs tensor.Uint32 `display:"-"`

	// Ctx is the context state (one). Other copies of Context can be maintained
	// and [SetContext] to update this one, but this instance is the canonical one.
	Ctx []Context `display:"-"`

	// Neurons are all the neuron state variables.
	// [Neurons][Data][Vars]
	Neurons tensor.Float32 `display:"-"`

	// NeuronAvgs are variables with averages over the
	// Data parallel dimension for each neuron.
	// [Neurons][Vars]
	NeuronAvgs tensor.Float32 `display:"-"`

	// Pools are the [PoolVars] float32 state values for layer and sub-pool inhibition,
	// including the float32 AvgMax values by Phase and variable: use [AvgMaxVarIndex].
	// [Layer * Pools][Data][PoolVars+AvgMax]
	Pools tensor.Float32

	// PoolsInt are the [PoolIntVars] int32 state values for layer and sub-pool
	// inhibition, AvgMax atomic integration, and other vars: use [AvgMaxIntVarIndex]
	// [Layer * Pools][Data][PoolIntVars+AvgMax]
	PoolsInt tensor.Int32

	// LayerStates holds layer-level state values, with variables defined in
	// [LayerVars], for each layer and Data parallel index.
	// [Layer][Data][LayerVarsN]
	LayerStates tensor.Float32 `display:"-"`

	// GlobalScalars are the global scalar state variables.
	// [GlobalScalarsN+2*NCaBins][Data]
	GlobalScalars tensor.Float32 `display:"-"`

	// GlobalVectors are the global vector state variables.
	// [GlobalVectorsN][MaxGlobalVecN][Data]
	GlobalVectors tensor.Float32 `display:"-"`

	// Exts are external input values for all Input / Target / Compare layers
	// in the network. The ApplyExt methods write to this per layer,
	// and it is then actually applied in one consistent method.
	// [NExts][Data]; NExts = [In / Out Layers][Neurons]
	Exts tensor.Float32 `display:"-"`

	// PathGBuf is the conductance buffer for accumulating spikes.
	// Subslices are allocated to each pathway.
	// Uses int-encoded values for faster GPU atomic integration.
	// [NPathNeur][Data][MaxDel+1]; NPathNeur = [Layer][RecvPaths][RecvNeurons]
	PathGBuf tensor.Int32 `display:"-"`

	// PathGSyns are synaptic conductance integrated over time per pathway
	// per recv neurons. Spikes come in via PathGBuf.
	// subslices are allocated to each pathway.
	// [NPathNeur][Data]
	PathGSyns tensor.Float32 `display:"-"`

	//	Synapses are the synapse level variables (weights etc).
	//
	// These do not depend on the data parallel index, unlike [SynapseTraces].
	// [NSyns][Vars]; NSyns = [Layer][SendPaths][SendNeurons][Syns]
	Synapses tensor.Float32 `display:"-"`

	// SynapseTraces are synaptic variables that depend on the data
	// parallel index, for accumulating learning traces and weight changes per data.
	// This is the largest data size, so multiple instances are used
	// to handle larger networks.
	// [NSyns][Data][Vars]; NSyns = [Layer][SendPaths][SendNeurons][Syns]
	SynapseTraces tensor.Float32 `display:"-"`
}

Network implements the Axon spiking model. Most of the fields are copied to the global vars, needed for GPU, via the SetAsCurrent method, and must be slices or tensors so that there is one canonical underlying instance of all such data. There are also Layer and Path lists that are used to scaffold the building and display of the network, but contain no data.

var CurrentNetwork *Network

CurrentNetwork is set in Network.SetAsCurrent method, which sets all global variables to point to the current network to be processed. These global vars are necessary for GPU kernel computation.

func NewNetwork

func NewNetwork(name string) *Network

NewNetwork returns a new axon Network
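
A minimal end-to-end sketch of building a small network. This is a sketch only, assuming the standard InputLayer, SuperLayer, and TargetLayer layer types, the ForwardPath path type, and the emergent paths package for connectivity patterns:

	net := axon.NewNetwork("demo")
	in := net.AddLayer2D("Input", axon.InputLayer, 5, 5)
	hid := net.AddLayer2D("Hidden", axon.SuperLayer, 10, 10)
	out := net.AddLayer2D("Output", axon.TargetLayer, 5, 5)
	full := paths.NewFull()
	net.ConnectLayers(in, hid, full, axon.ForwardPath)
	net.BidirConnectLayers(hid, out, full)
	if err := net.Build(); err != nil {
		log.Fatal(err)
	}
	net.Defaults()
	net.InitWeights()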

func (*Network) AdaptGi

func (nt *Network) AdaptGi()

AdaptGi does adapting inhibition at a slower interval.

func (*Network) AddACCost

func (nt *Network) AddACCost(nCosts, accY, accX int, space float32) (acc, accCT, accPT, accPTp, accMD *Layer)

AddACCost adds anterior cingulate cost coding layers, for given number of cost pools (typically 2: time, effort), with given number of units per pool.

func (*Network) AddAmygdala

func (nt *Network) AddAmygdala(prefix string, neg bool, nNeurY, nNeurX int, space float32) (blaPosAcq, blaPosExt, blaNegAcq, blaNegExt, cemPos, cemNeg, blaNov *Layer)

AddAmygdala adds a full amygdala complex including BLA, CeM, and LDT. Inclusion of negative valence is optional with neg arg -- neg* layers are nil if not included. Uses the network Rubicon.NPosUSs and NNegUSs for number of pools -- must be configured prior to calling this.

func (*Network) AddBGThalLayer2D

func (net *Network) AddBGThalLayer2D(name string, nNeurY, nNeurX int) *Layer

AddBGThalLayer2D adds a BG gated thalamus (e.g., VA/VL/VM, MD) Layer of given size, with given name. This version has a 2D structure

func (*Network) AddBGThalLayer4D

func (net *Network) AddBGThalLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer

AddBGThalLayer4D adds a BG gated thalamus (e.g., VA/VL/VM, MD) Layer of given size, with given name. This version has a 4D structure, with Pools representing separable gating domains.

func (*Network) AddBLALayers

func (nt *Network) AddBLALayers(prefix string, pos bool, nUs, nNeurY, nNeurX int, rel relpos.Relations, space float32) (acq, ext *Layer)

AddBLALayers adds two BLA layers, acquisition / extinction / D1 / D2, for positive or negative valence

func (*Network) AddCTLayer2D

func (net *Network) AddCTLayer2D(name string, nNeurY, nNeurX int) *Layer

AddCTLayer2D adds a CT Layer of given size, with given name.

func (*Network) AddCTLayer4D

func (net *Network) AddCTLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer

AddCTLayer4D adds a CT Layer of given size, with given name.

func (*Network) AddClampDaLayer

func (nt *Network) AddClampDaLayer(name string) *Layer

AddClampDaLayer adds a ClampDaLayer of given name

func (*Network) AddDMatrixLayer

func (net *Network) AddDMatrixLayer(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, da DAModTypes) *Layer

AddDMatrixLayer adds a Dorsal MatrixLayer of given size, with given name. Assumes that a 4D structure will be used, with Pools representing separable gating domains. da gives the DaReceptor type (D1R = Go, D2R = NoGo)

func (*Network) AddDorsalBG

func (net *Network) AddDorsalBG(prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX, gpNeurY, gpNeurX int, space float32) (mtxGo, mtxNo, gpePr, gpeAk, stn, gpi, pf *Layer)

AddDorsalBG adds Dorsal Basal Ganglia layers, using the PCore Pallidal Core framework where GPe plays a central role. Returns DMtxGo, DMtxNo, DGPePr, DGPeAk, DSTN, DGPi, PF layers, with given optional prefix. Makes 4D pools throughout the GP layers, with Pools representing separable gating domains, i.e., action domains. All GP / STN layers have gpNeur neurons. Appropriate PoolOneToOne connections are made between layers, using standard styles. space is the spacing between layers (2 typical)

func (*Network) AddDrivesLayer

func (nt *Network) AddDrivesLayer(nNeurY, nNeurX int) *Layer

AddDrivesLayer adds Rubicon layer representing current drive activity, from Global Drive.Drives. Uses a PopCode representation based on LayerParams.Act.PopCode, distributed over given numbers of neurons in the X and Y dimensions, per drive pool.

func (*Network) AddDrivesPulvLayer

func (nt *Network) AddDrivesPulvLayer(nNeurY, nNeurX int, space float32) (drv, drvP *Layer)

AddDrivesPulvLayer adds Rubicon layer representing current drive activity, from Global Drive.Drives. Uses a PopCode representation based on LayerParams.Act.PopCode, distributed over given numbers of neurons in the X and Y dimensions, per drive pool. Adds Pulvinar predictive layers for Drives.

func (*Network) AddGPeLayer2D

func (net *Network) AddGPeLayer2D(name, class string, nNeurY, nNeurX int) *Layer

AddGPeLayer2D adds a GPLayer of given size, with given name. Must set the GPType BuildConfig setting to appropriate GPLayerType.

func (*Network) AddGPeLayer4D

func (net *Network) AddGPeLayer4D(name, class string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer

AddGPeLayer4D adds a GPLayer of given size, with given name. Makes a 4D structure with Pools representing separable gating domains.

func (*Network) AddGPiLayer2D

func (net *Network) AddGPiLayer2D(name, class string, nNeurY, nNeurX int) *Layer

AddGPiLayer2D adds a GPiLayer of given size, with given name.

func (*Network) AddGPiLayer4D

func (net *Network) AddGPiLayer4D(name, class string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer

AddGPiLayer4D adds a GPiLayer of given size, with given name. Makes a 4D structure with Pools representing separable gating domains.

func (*Network) AddHip

func (net *Network) AddHip(hip *HipConfig, space float32) (ec2, ec3, dg, ca3, ca1, ec5 *Layer)

AddHip adds a new Hippocampal network for episodic memory. Returns layers most likely to be used for remaining connections and positions.

func (*Network) AddInputPulv2D

func (net *Network) AddInputPulv2D(name string, nNeurY, nNeurX int, space float32) (*Layer, *Layer)

AddInputPulv2D adds an Input and a Pulvinar Layer of given size, with given name. The Input layer is set as the Driver of the Pulvinar Layer. Both layers have SetClass(name) called to allow shared params.

func (*Network) AddInputPulv4D

func (net *Network) AddInputPulv4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, space float32) (*Layer, *Layer)

AddInputPulv4D adds an Input and a Pulvinar Layer of given size, with given name. The Input layer is set as the Driver of the Pulvinar Layer. Both layers have SetClass(name) called to allow shared params.

func (*Network) AddLDTLayer

func (nt *Network) AddLDTLayer(prefix string) *Layer

AddLDTLayer adds a LDTLayer

func (*Network) AddLayer

func (nt *Network) AddLayer(name string, typ LayerTypes, shape ...int) *Layer

AddLayer adds a new layer with given name and shape to the network. 2D and 4D layer shapes are generally preferred but not essential -- see AddLayer2D and 4D for convenience methods for those. 4D layers enable pool (unit-group) level inhibition in Axon networks, for example. shape is in row-major format with outer-most dimensions first: e.g., 4D 3, 2, 4, 5 = 3 rows (Y) of 2 cols (X) of pools, with each unit group having 4 rows (Y) of 5 (X) units.
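
For example, given a *Network net, the following two calls create equivalently shaped 4D layers (SuperLayer is an assumed layer type here):

	a := net.AddLayer("V1", axon.SuperLayer, 3, 2, 4, 5)   // 3x2 pools, each 4x5 neurons
	b := net.AddLayer4D("V2", axon.SuperLayer, 3, 2, 4, 5) // same shape via the 4D helper
	_, _ = a, b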

func (*Network) AddLayer2D

func (nt *Network) AddLayer2D(name string, typ LayerTypes, shapeY, shapeX int) *Layer

AddLayer2D adds a new layer with given name and 2D shape to the network. 2D and 4D layer shapes are generally preferred but not essential.

func (*Network) AddLayer4D

func (nt *Network) AddLayer4D(name string, typ LayerTypes, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer

AddLayer4D adds a new layer with given name and 4D shape to the network. 4D layers enable pool (unit-group) level inhibition in Axon networks, for example. shape is in row-major format with outer-most dimensions first: e.g., 4D 3, 2, 4, 5 = 3 rows (Y) of 2 cols (X) of pools, with each pool having 4 rows (Y) of 5 (X) neurons.

func (*Network) AddLayerInit

func (nt *Network) AddLayerInit(ly *Layer, name string, typ LayerTypes, shape ...int)

AddLayerInit adds layer to network with proper initialization.

func (*Network) AddOFCneg

func (nt *Network) AddOFCneg(nUSs, ofcY, ofcX int, space float32) (ofc, ofcCT, ofcPT, ofcPTp, ofcMD *Layer)

AddOFCneg adds orbital frontal cortex negative US-coding layers, for given number of neg US pools with given number of units per pool.

func (*Network) AddOFCpos

func (nt *Network) AddOFCpos(nUSs, nY, ofcY, ofcX int, space float32) (ofc, ofcCT, ofcPT, ofcPTp, ofcMD *Layer)

AddOFCpos adds orbital frontal cortex positive US-coding layers, for given number of pos US pools (first is novelty / curiosity pool), with given number of units per pool.

func (*Network) AddPFC2D

func (net *Network) AddPFC2D(name, thalSuffix string, nNeurY, nNeurX int, decayOnRew, selfMaint bool, space float32) (pfc, pfcCT, pfcPT, pfcPTp, pfcThal *Layer)

AddPFC2D adds a "full stack" of 2D PFC layers:
* AddSuperCT2D (Super and CT)
* AddPTMaintThal (PTMaint, BGThal)
* AddPTPredLayer (PTPred)
with given name prefix, which is also set as the Class for all layers & paths (+"Path"), and suffix for the BGThal layer (e.g., "MD" or "VM" for different thalamic nuclei). Sets PFCLayer as an additional class for all cortical layers. OneToOne, full connectivity is used between layers. decayOnRew determines the Act.Decay.OnRew setting (true for OFC, ACC type layers). If selfMaint is true, the SMaint self-maintenance mechanism is used instead of lateral connections. The CT layer uses the Medium timescale params.

func (*Network) AddPFC4D

func (net *Network) AddPFC4D(name, thalSuffix string, nPoolsY, nPoolsX, nNeurY, nNeurX int, decayOnRew, selfMaint bool, space float32) (pfc, pfcCT, pfcPT, pfcPTp, pfcThal *Layer)

AddPFC4D adds a "full stack" of 4D PFC layers:
* AddSuperCT4D (Super and CT)
* AddPTMaintThal (PTMaint, BGThal)
* AddPTPredLayer (PTPred)
with given name prefix, which is also set as the Class for all layers & paths (+"Path"), and suffix for the BGThal layer (e.g., "MD" or "VM" for different thalamic nuclei). Sets PFCLayer as an additional class for all cortical layers. OneToOne and PoolOneToOne connectivity is used between layers. decayOnRew determines the Act.Decay.OnRew setting (true for OFC, ACC type layers). If selfMaint is true, the SMaint self-maintenance mechanism is used instead of lateral connections. The CT layer uses the Medium timescale params. Use, e.g., pfcCT.AddDefaultParams(func(ly *LayerParams) { ly.Inhib.Layer.Gi = 2.8 }) to change default params.

func (*Network) AddPTMaintLayer2D

func (net *Network) AddPTMaintLayer2D(name string, nNeurY, nNeurX int) *Layer

AddPTMaintLayer2D adds a PTMaintLayer of given size, with given name.

func (*Network) AddPTMaintLayer4D

func (net *Network) AddPTMaintLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer

AddPTMaintLayer4D adds a PTMaintLayer of given size, with given name.

func (*Network) AddPTMaintThalForSuper

func (net *Network) AddPTMaintThalForSuper(super, ct *Layer, thalSuffix, pathClass string, superToPT, ptSelf, ptThal paths.Pattern, selfMaint bool, space float32) (ptMaint, thal *Layer)

AddPTMaintThalForSuper adds a PTMaint pyramidal tract active maintenance layer and a BG gated Thalamus layer for given superficial layer (SuperLayer) and associated CT, with given thal suffix (e.g., MD, VM). PT and Thal have SetClass(super.Name) called to allow shared params. Pathways are made with given classes: SuperToPT, PTSelfMaint, PTtoThal, ThalToPT, with optional extra class. if selfMaint is true, the SMaint self-maintenance mechanism is used instead of lateral connections. The PT and BGThal layers are positioned behind the CT layer.

func (*Network) AddPTPredLayer

func (net *Network) AddPTPredLayer(ptMaint, ct *Layer, ptToPredPath, ctToPredPath paths.Pattern, pathClass string, space float32) (ptPred *Layer)

AddPTPredLayer adds a PTPred pyramidal tract prediction layer for given PTMaint layer and associated CT. Sets SetClass(super.Name) to allow shared params. Pathways are made with given classes: PTtoPred, CTtoPred The PTPred layer is positioned behind the PT layer.

func (*Network) AddPTPredLayer2D

func (net *Network) AddPTPredLayer2D(name string, nNeurY, nNeurX int) *Layer

AddPTPredLayer2D adds a PTPredLayer of given size, with given name.

func (*Network) AddPTPredLayer4D

func (net *Network) AddPTPredLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer

AddPTPredLayer4D adds a PTPredLayer of given size, with given name.

func (*Network) AddPVLayers

func (nt *Network) AddPVLayers(nNeurY, nNeurX int, rel relpos.Relations, space float32) (pvPos, pvNeg *Layer)

AddPVLayers adds PVpos and PVneg layers for positive or negative valence primary value representations, representing the total drive and effort weighted USpos outcome, or total USneg outcome. Uses a PopCode representation based on LayerParams.Act.PopCode, distributed over given numbers of neurons in the X and Y dimensions.

func (*Network) AddPVPulvLayers

func (nt *Network) AddPVPulvLayers(nNeurY, nNeurX int, rel relpos.Relations, space float32) (pvPos, pvNeg, pvPosP, pvNegP *Layer)

AddPVLayers adds PVpos and PVneg layers for positive or negative valence primary value representations, representing the total drive and effort weighted USpos outcomes, or total USneg outcomes. Uses a PopCode representation based on LayerParams.Act.PopCode, distributed over given numbers of neurons in the X and Y dimensions. Adds Pulvinar predictive layers for each.

func (*Network) AddPulvForLayer

func (net *Network) AddPulvForLayer(lay *Layer, space float32) *Layer

AddPulvForLayer adds a Pulvinar for given Layer (typically an Input type layer) with a P suffix. The Pulv.Driver is set to given Layer. The Pulv layer needs other CT connections from higher up to predict this layer. Pulvinar is positioned behind the given Layer.

func (*Network) AddPulvForSuper

func (net *Network) AddPulvForSuper(super *Layer, space float32) *Layer

AddPulvForSuper adds a Pulvinar for given superficial layer (SuperLayer) with a P suffix. The Pulv.Driver is set to Super, as is the Class on Pulv. The Pulv layer needs other CT connections from higher up to predict this layer. Pulvinar is positioned behind the CT layer.

func (*Network) AddPulvLayer2D

func (net *Network) AddPulvLayer2D(name string, nNeurY, nNeurX int) *Layer

AddPulvLayer2D adds a Pulvinar Layer of given size, with given name.

func (*Network) AddPulvLayer4D

func (net *Network) AddPulvLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer

AddPulvLayer4D adds a Pulvinar Layer of given size, with given name.

func (*Network) AddRWLayers

func (nt *Network) AddRWLayers(prefix string, rel relpos.Relations, space float32) (rew, rp, da *Layer)

AddRWLayers adds a simple Rescorla-Wagner (PV only) dopamine system, with a primary Reward layer, a RWPred prediction layer, and a dopamine layer that computes the difference. Only generates DA when the Rew layer has external input; otherwise it is zero.

func (*Network) AddRewLayer

func (nt *Network) AddRewLayer(name string) *Layer

AddRewLayer adds a RewLayer of given name

func (*Network) AddRubicon

func (nt *Network) AddRubicon(nYneur, popY, popX, bgY, bgX, pfcY, pfcX int, space float32) (vSgpi, vSmtxGo, vSmtxNo, urgency, pvPos, blaPosAcq, blaPosExt, blaNegAcq, blaNegExt, blaNov, ofcPos, ofcPosCT, ofcPosPT, ofcPosPTp, ilPos, ilPosCT, ilPosPT, ilPosPTp, ofcNeg, ofcNegCT, ofcNegPT, ofcNegPTp, ilNeg, ilNegCT, ilNegPT, ilNegPTp, accCost, plUtil, sc *Layer)

AddRubicon builds a complete Rubicon model for goal-driven decision making. Uses the network Rubicon.NPosUSs and NNegUSs for number of pools -- must be configured prior to calling this. Calls AddRubiconOFCus (Rubicon and OFC US-coding layers). Makes all appropriate interconnections and sets default parameters. Needs CS -> BLA, OFC connections to be made. Returns layers most likely to be used for remaining connections and positions.

func (*Network) AddRubiconOFCus

func (nt *Network) AddRubiconOFCus(nYneur, popY, popX, bgY, bgX, ofcY, ofcX int, space float32) (vSgpi, vSmtxGo, vSmtxNo, vSpatchD1, vSpatchD2, urgency, usPos, pvPos, usNeg, usNegP, pvNeg, pvNegP, blaPosAcq, blaPosExt, blaNegAcq, blaNegExt, blaNov, ofcPos, ofcPosCT, ofcPosPT, ofcPosPTp, ilPos, ilPosCT, ilPosPT, ilPosPTp, ilPosMD, ofcNeg, ofcNegCT, ofcNegPT, ofcNegPTp, accCost, accCostCT, accCostPT, accCostPTp, accCostMD, ilNeg, ilNegCT, ilNegPT, ilNegPTp, ilNegMD, sc *Layer)

AddRubiconOFCus builds a complete Rubicon network with OFCpos (orbital frontal cortex) US-coding layers, ILpos infralimbic abstract positive value, OFCneg for negative value inputs, and ILneg value layers, and ACCost cost prediction layers. Uses the network Rubicon.NPosUSs, NNegUSs, NCosts for number of pools -- must be configured prior to calling this. Calls:
* AddVTALHbLDTLayers
* AddRubiconPulvLayers
* AddVS
* AddAmygdala
* AddOFCpos
* AddOFCneg
Makes all appropriate interconnections and sets default parameters. Needs CS -> BLA, OFC connections to be made. Returns layers most likely to be used for remaining connections and positions.

func (*Network) AddRubiconPulvLayers

func (nt *Network) AddRubiconPulvLayers(nYneur, popY, popX int, space float32) (drives, drivesP, urgency, usPos, usNeg, cost, costFinal, usPosP, usNegP, costP, pvPos, pvNeg, pvPosP, pvNegP *Layer)

AddRubiconPulvLayers adds Rubicon layers for PV-related information visualizing the internal states of the Global state, with Pulvinar prediction layers for training PFC layers. Uses the network Rubicon.NPosUSs, NNegUSs, NCosts for number of pools -- must be configured prior to calling this.
* drives = popcode representation of drive strength (no activity for 0); the number of active drives comes from Context; popY, popX neurons per pool.
* urgency = popcode representation of urgency Go bias factor, popY, popX neurons.
* us = popcode per US, positive & negative, cost.
* pv = popcode representation of final primary value on positive and negative valences -- this is what the dopamine value ends up coding (pos - neg).
Layers are organized in depth per type: USs in one column, PVs in the next, with Drives in the back; urgency behind that.

func (*Network) AddSCLayer2D

func (nt *Network) AddSCLayer2D(prefix string, nNeurY, nNeurX int) *Layer

AddSCLayer2D adds a superior colliculus 2D layer which computes stimulus onset via trial-delayed inhibition (Inhib.FFPrv) -- connect with fixed random input from sensory input layers. Sets base name and class name to SC. Must set Inhib.FFPrv > 0 and Act.Decay.* = 0.

func (*Network) AddSCLayer4D

func (nt *Network) AddSCLayer4D(prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer

AddSCLayer4D adds a superior colliculus 4D layer which computes stimulus onset via trial-delayed inhibition (Inhib.FFPrv) -- connect with fixed random input from sensory input layers. Sets base name and class name to SC. Must set Inhib.FFPrv > 0 and Act.Decay.* = 0.

func (*Network) AddSTNLayer2D

func (net *Network) AddSTNLayer2D(name, class string, nNeurY, nNeurX int) *Layer

AddSTNLayer2D adds a subthalamic nucleus Layer of given size, with given name.

func (*Network) AddSTNLayer4D

func (net *Network) AddSTNLayer4D(name, class string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer

AddSTNLayer4D adds a subthalamic nucleus Layer of given size, with given name. Makes a 4D structure with Pools representing separable gating domains.

func (*Network) AddSuperCT2D

func (net *Network) AddSuperCT2D(name, pathClass string, shapeY, shapeX int, space float32, pat paths.Pattern) (super, ct *Layer)

AddSuperCT2D adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with CTCtxtPath pathway from Super to CT using given pathway pattern, and NO Pulv Pulvinar. CT is placed Behind Super.

func (*Network) AddSuperCT4D

func (net *Network) AddSuperCT4D(name, pathClass string, nPoolsY, nPoolsX, nNeurY, nNeurX int, space float32, pat paths.Pattern) (super, ct *Layer)

AddSuperCT4D adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with CTCtxtPath pathway from Super to CT using given pathway pattern, and NO Pulv Pulvinar. CT is placed Behind Super.

func (*Network) AddSuperLayer2D

func (net *Network) AddSuperLayer2D(name string, nNeurY, nNeurX int) *Layer

AddSuperLayer2D adds a Super Layer of given size, with given name.

func (*Network) AddSuperLayer4D

func (net *Network) AddSuperLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer

AddSuperLayer4D adds a Super Layer of given size, with given name.

func (*Network) AddTDLayers

func (nt *Network) AddTDLayers(prefix string, rel relpos.Relations, space float32) (rew, rp, ri, td *Layer)

AddTDLayers adds the standard TD temporal differences layers, generating a DA signal. Pathway from Rew to RewInteg is given class TDToInteg -- should have no learning and 1 weight.

func (*Network) AddUSLayers

func (nt *Network) AddUSLayers(popY, popX int, rel relpos.Relations, space float32) (usPos, usNeg, cost, costFinal *Layer)

AddUSLayers adds USpos, USneg, and Cost layers for positive or negative valence unconditioned stimuli (USs), using a pop-code representation of US magnitude. These track the Global USpos, USneg, Cost for visualization and predictive learning. Actual US inputs are set in Rubicon. Uses the network Rubicon.NPosUSs, NNegUSs, and NCosts for number of pools -- must be configured prior to calling this.

func (*Network) AddUSPulvLayers

func (nt *Network) AddUSPulvLayers(popY, popX int, rel relpos.Relations, space float32) (usPos, usNeg, cost, costFinal, usPosP, usNegP, costP *Layer)

AddUSPulvLayers adds USpos, USneg, and Cost layers for positive or negative valence unconditioned stimuli (USs), using a pop-code representation of US magnitude. These track the Global USpos, USneg, Cost, for visualization and predictive learning. Actual US inputs are set in Rubicon. Adds Pulvinar predictive layers for each.

func (*Network) AddUrgencyLayer

func (nt *Network) AddUrgencyLayer(nNeurY, nNeurX int) *Layer

AddUrgencyLayer adds a Rubicon layer representing the current urgency factor, from Global Urgency.Urge. Uses a PopCode representation based on LayerParams.Act.PopCode, distributed over given numbers of neurons in the X and Y dimensions.

func (*Network) AddVMatrixLayer

func (net *Network) AddVMatrixLayer(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, da DAModTypes) *Layer

AddVMatrixLayer adds a Ventral MatrixLayer of given size, with given name. Assumes that a 4D structure will be used, with Pools representing separable gating domains. da gives the DaReceptor type (D1R = Go, D2R = NoGo)

func (*Network) AddVSGatedLayer

func (net *Network) AddVSGatedLayer(prefix string, nYunits int) *Layer

AddVSGatedLayer adds a VSGatedLayer with given number of Y units and 2 pools, first one represents JustGated, second is HasGated.

func (*Network) AddVSPatchLayers

func (nt *Network) AddVSPatchLayers(prefix string, nUs, nNeurY, nNeurX int, space float32) (d1, d2 *Layer)

AddVSPatchLayers adds VSPatch (Pos, D1, D2)

func (*Network) AddVTALHbLDTLayers

func (nt *Network) AddVTALHbLDTLayers(rel relpos.Relations, space float32) (vta, lhb, ldt *Layer)

AddVTALHbLDTLayers adds VTA dopamine, LHb DA dipping, and LDT ACh layers which are driven by corresponding values in Global

func (*Network) AddVentralBG

func (net *Network) AddVentralBG(prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX, gpNeurY, gpNeurX int, space float32) (mtxGo, mtxNo, gpePr, gpeAk, stn, gpi *Layer)

AddVentralBG adds Ventral Basal Ganglia layers, using the PCore Pallidal Core framework where GPe plays a central role. Returns VMtxGo, VMtxNo, VGPePr, VGPeAk, VSTN, VGPi layers, with given optional prefix. Only the Matrix has pool-based 4D shape by default -- use pool for "role" like elements where matches need to be detected. All GP / STN layers have gpNeur neurons. Appropriate connections are made between layers, using standard styles. space is the spacing between layers (2 typical).

func (*Network) AllGlobalValues

func (nt *Network) AllGlobalValues(ctrKey string, vals map[string]float32)

AllGlobalValues adds to map of all Global variables and values. ctrKey is a key of counters to contextualize values.

func (*Network) AllGlobals

func (nt *Network) AllGlobals() string

AllGlobals returns a listing of all Global variables and values.

func (*Network) AllLayerInhibs

func (nt *Network) AllLayerInhibs() string

AllLayerInhibs returns a listing of all Layer Inhibition parameters in the Network

func (*Network) AllPathScales

func (nt *Network) AllPathScales() string

AllPathScales returns a listing of all PathScale parameters in the Network, in all Layers, Recv pathways. These are among the most important and numerous parameters (in larger networks) -- this helps keep track of what they are all set to.

func (*Network) ApplyExts

func (nt *Network) ApplyExts()

ApplyExts applies external inputs to layers, based on values that were set in prior layer-specific ApplyExt calls. This does nothing on the CPU, but is critical for the GPU, and should be added to all sims where GPU will be used.
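
A sketch of the typical per-trial input application sequence this fits into (the per-layer ApplyExt call indicated in the comment belongs to the Layer API, not listed here):

	net.InitExt() // clear any prior external inputs
	// ... for each input layer: apply this trial's pattern via the layer-level ApplyExt methods ...
	net.ApplyExts() // no-op on CPU; copies the accumulated Exts to the GPU when running there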

func (*Network) Beta1

func (nt *Network) Beta1()

Beta1 does updating at Beta1 timescale.

func (*Network) Beta2

func (nt *Network) Beta2()

Beta2 does updating at Beta2 timescale.

func (*Network) BidirConnectLayerNames

func (nt *Network) BidirConnectLayerNames(low, high string, pat paths.Pattern) (lowlay, highlay *Layer, fwdpt, backpt *Path, err error)

BidirConnectLayerNames establishes bidirectional pathways between two layers, referenced by name, with low = the lower layer that sends a Forward pathway to the high layer, and receives a Back pathway in the opposite direction. Returns error if not successful.

func (*Network) BidirConnectLayers

func (nt *Network) BidirConnectLayers(low, high *Layer, pat paths.Pattern) (fwdpt, backpt *Path)

BidirConnectLayers establishes bidirectional pathways between two layers, with low = lower layer that sends a Forward pathway to the high layer, and receives a Back pathway in the opposite direction.

func (*Network) Build

func (nt *Network) Build() error

Build constructs the layer and pathway state based on the layer shapes and patterns of interconnectivity. Everything in the network must have been configured by this point, including key values in Context such as ThetaCycles and CaBinCycles which drive allocation of number of CaBins neuron variables and corresponding GvCaBinWts global scalar variables.

func (*Network) BuildPathGBuf

func (nt *Network) BuildPathGBuf()

BuildPathGBuf builds the PathGBuf, PathGSyns, based on the MaxDelay values in the PathParams, which should have been configured by this point. Called by default in InitWeights()

func (*Network) CheckSameSize

func (nt *Network) CheckSameSize(on *Network) error

CheckSameSize checks if this network is the same size as given other, in terms of NNeurons, MaxData, and NSyns. Returns error message if not.

func (*Network) ClearTargExt

func (nt *Network) ClearTargExt()

ClearTargExt clears external inputs Ext that were set from target values Target. This can be called to simulate alpha cycles within theta cycles, for example.

func (*Network) CollectDWts

func (nt *Network) CollectDWts(dwts *[]float32) bool

CollectDWts writes all of the synaptic DWt values to the given dwts slice, which is allocated to the appropriate size if it is nil, in which case the method returns true so that the actual length of dwts can be passed next time around. Used for MPI sharing of weight changes across processors. This Syncs Layers and Synapses from GPU first (nop if not using).
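
A sketch of the intended call pattern for sharing weight changes across processors (the aggregation step is represented only by a comment):

	var dwts []float32 // reused across calls; nil on the first call
	first := net.CollectDWts(&dwts)
	if first {
		// dwts was just allocated; its actual length can be reused on subsequent calls
	}
	// ... sum dwts across MPI processors and apply the aggregated changes ...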

func (*Network) ConfigLoopsHip

func (net *Network) ConfigLoopsHip(ctx *Context, ls *looper.Stacks, hip *HipConfig, pretrain *bool)

ConfigLoopsHip configures the hippocampal looper, and should be included in the model's ConfigLoops to make sure the hippocampus loops are configured correctly. See hip.go for an example implementation of this function. ec5ClampFrom specifies the layer to clamp EC5 plus-phase values from: EC3 is the biological source, but the Input layer can be used for a simple testing net.

func (*Network) ConnectCSToBLApos

func (nt *Network) ConnectCSToBLApos(cs, blaAcq, blaNov *Layer) (toAcq, toNov, novInhib *Path)

ConnectCSToBLApos connects the CS input to BLAposAcqD1, BLANovelCS layers using fixed, higher-variance weights, full pathway. Sets classes to: CSToBLApos, CSToBLANovel with default params

func (*Network) ConnectCTSelf

func (net *Network) ConnectCTSelf(ly *Layer, pat paths.Pattern, pathClass string) (ctxt, maint *Path)

ConnectCTSelf adds a Self (Lateral) CTCtxtPath pathway within a CT layer, in addition to a regular lateral pathway, which supports active maintenance. The CTCtxtPath has a Class label of CTSelfCtxt, and the regular one is CTSelfMaint with optional class added.

func (*Network) ConnectCtxtToCT

func (net *Network) ConnectCtxtToCT(send, recv *Layer, pat paths.Pattern) *Path

ConnectCtxtToCT adds a CTCtxtPath from given sending layer to a CT layer

func (*Network) ConnectLayerNames

func (nt *Network) ConnectLayerNames(send, recv string, pat paths.Pattern, typ PathTypes) (rlay, slay *Layer, pt *Path, err error)

ConnectLayerNames establishes a pathway between two layers, referenced by name adding to the recv and send pathway lists on each side of the connection. Returns error if not successful.

func (*Network) ConnectLayers

func (nt *Network) ConnectLayers(send, recv *Layer, pat paths.Pattern, typ PathTypes) *Path

ConnectLayers establishes a pathway between two layers, adding to the recv and send pathway lists on each side of the connection.

func (*Network) ConnectPTMaintSelf

func (net *Network) ConnectPTMaintSelf(ly *Layer, pat paths.Pattern, pathClass string) *Path

ConnectPTMaintSelf adds a Self (Lateral) pathway within a PTMaintLayer, which supports active maintenance, with a class of PTSelfMaint

func (*Network) ConnectPTPredSelf

func (net *Network) ConnectPTPredSelf(ly *Layer, pat paths.Pattern) *Path

ConnectPTPredSelf adds a Self (Lateral) pathway within a PTPredLayer, which supports active maintenance, with a class of PTSelfMaint

func (*Network) ConnectPTToPulv

func (net *Network) ConnectPTToPulv(ptMaint, ptPred, pulv *Layer, toPulvPat, fmPulvPat paths.Pattern, pathClass string) (ptToPulv, ptPredToPulv, toPTPred *Path)

ConnectPTToPulv connects PT and PTPred with the given Pulv. PT -> Pulv has class PTToPulv; PT does NOT receive back from Pulv. PTPred -> Pulv has class PTPredToPulv; the path from Pulv is type Back, class FromPulv. toPulvPat is the paths.Pattern for PT -> Pulv, and fmPulvPat is for Pulv -> PTPred. Typically Pulv is a different shape than PTPred, so use Full or an appropriate topological pattern. Adds the optional class name to the pathways.

func (*Network) ConnectPTpToPulv

func (net *Network) ConnectPTpToPulv(ptPred, pulv *Layer, toPulvPat, fmPulvPat paths.Pattern, pathClass string) (ptToPulv, ptPredToPulv, toPTPred *Path)

ConnectPTpToPulv connects PTPred with the given Pulv. PTPred -> Pulv has class PTPredToPulv; the path from Pulv is type Back, class FromPulv. toPulvPat is the paths.Pattern for PTPred -> Pulv, and fmPulvPat is for Pulv -> PTPred. Typically Pulv is a different shape than PTPred, so use Full or an appropriate topological pattern. Adds the optional class name to the pathways.

func (*Network) ConnectSuperToCT

func (net *Network) ConnectSuperToCT(send, recv *Layer, pat paths.Pattern, pathClass string) *Path

ConnectSuperToCT adds a CTCtxtPath from given sending Super layer to a CT layer. This automatically sets the FromSuper flag to engage proper defaults. Uses given pathway pattern -- e.g., Full, OneToOne, or PoolOneToOne.

func (*Network) ConnectToBLAAcq

func (nt *Network) ConnectToBLAAcq(send, recv *Layer, pat paths.Pattern) *Path

ConnectToBLAAcq adds a BLAPath from given sending layer to a BLA layer, and configures it for acquisition parameters. Sets class to BLAAcqPath. This is for any CS or contextual inputs that drive acquisition.

func (*Network) ConnectToBLAExt

func (nt *Network) ConnectToBLAExt(send, recv *Layer, pat paths.Pattern) *Path

ConnectToBLAExt adds a BLAPath from given sending layer to a BLA layer, and configures it for extinction parameters. Sets class to BLAExtPath. This is for any CS or contextual inputs that drive extinction neurons to fire and override the acquisition ones.

func (*Network) ConnectToDSMatrix

func (net *Network) ConnectToDSMatrix(send, recv *Layer, pat paths.Pattern) *Path

ConnectToDSMatrix adds a DSMatrixPath from given sending layer to a matrix layer

func (*Network) ConnectToPFC

func (net *Network) ConnectToPFC(lay, layP, pfc, pfcCT, pfcPT, pfcPTp *Layer, pat paths.Pattern, pathClass string)

ConnectToPFC connects given predictively learned input to all relevant PFC layers: lay -> pfc (skipped if lay == nil); layP -> pfc; layP <-> pfcCT; pfcPTp <-> layP; and, if pfcPT != nil, pfcPT <-> layP. Sets the PFCPath class name for the pathways.

func (*Network) ConnectToPFCBack

func (net *Network) ConnectToPFCBack(lay, layP, pfc, pfcCT, pfcPT, pfcPTp *Layer, pat paths.Pattern, pathClass string)

ConnectToPFCBack connects given predictively learned input to all relevant PFC layers: lay -> pfc using a BackPath (weaker); layP -> pfc; layP <-> pfcCT; pfcPTp <-> layP.

func (*Network) ConnectToPFCBidir

func (net *Network) ConnectToPFCBidir(lay, layP, pfc, pfcCT, pfcPT, pfcPTp *Layer, pat paths.Pattern, pathClass string) (ff, fb *Path)

ConnectToPFCBidir connects given predictively learned input to all relevant PFC layers, using bidirectional connections to super layers: lay <-> pfc (bidirectional); layP -> pfc; layP <-> pfcCT; pfcPTp <-> layP.

func (*Network) ConnectToPulv

func (net *Network) ConnectToPulv(super, ct, pulv *Layer, toPulvPat, fmPulvPat paths.Pattern, pathClass string) (toPulv, toSuper, toCT *Path)

ConnectToPulv adds the following pathways:

	layers      | class      | path type   | path pat
	------------+------------+-------------+----------
	ct -> pulv  | "CTToPulv" | ForwardPath | toPulvPat
	pulv->super | "FromPulv" | BackPath    | fmPulvPat
	pulv->ct    | "FromPulv" | BackPath    | fmPulvPat

Typically pulv is a different shape than super and ct, so use Full or appropriate topological pattern. Adds optional pathClass name as a suffix.

func (*Network) ConnectToRWPath

func (nt *Network) ConnectToRWPath(send, recv *Layer, pat paths.Pattern) *Path

ConnectToRWPath adds a RWPath from given sending layer to a RWPred layer.

func (*Network) ConnectToSC

func (nt *Network) ConnectToSC(send, recv *Layer, pat paths.Pattern) *Path

ConnectToSC adds a ForwardPath from given sending layer to a SC layer, setting class as ToSC -- should set params as fixed random with more variance than usual.

func (*Network) ConnectToSC1to1

func (nt *Network) ConnectToSC1to1(send, recv *Layer) *Path

ConnectToSC1to1 adds a 1to1 ForwardPath from given sending layer to a SC layer, copying the geometry of the sending layer, setting class as ToSC. The connection weights are set to uniform.

func (*Network) ConnectToVSMatrix

func (net *Network) ConnectToVSMatrix(send, recv *Layer, pat paths.Pattern) *Path

ConnectToVSMatrix adds a VSMatrixPath from given sending layer to a matrix layer

func (*Network) ConnectToVSPatch

func (nt *Network) ConnectToVSPatch(send, vspD1, vspD2 *Layer, pat paths.Pattern) (*Path, *Path)

ConnectToVSPatch adds a VSPatchPath from given sending layer to VSPatchD1, D2 layers

func (*Network) ConnectUSToBLA

func (nt *Network) ConnectUSToBLA(us, blaAcq, blaExt *Layer) (toAcq, toExt *Path)

ConnectUSToBLA connects the US input to BLApos(Neg)AcqD1(D2) and BLApos(Neg)ExtD2(D1) layers, using fixed, higher-variance weights, full pathway. Sets classes to: USToBLAAcq and USToBLAExt

func (*Network) Context

func (nt *Network) Context() *Context

Context gets the network context state.

func (*Network) CopyStateFrom

func (nt *Network) CopyStateFrom(on *Network) error

CopyStateFrom copies entire network state from other network. Other network must have identical configuration, as this just does a literal copy of the state values. This is checked and errors are returned (and logged). See also DiffFrom.

func (*Network) Cycle

func (nt *Network) Cycle(getNeurons bool)

Cycle runs one cycle of activation updating, equivalent to 1 msec. If getNeurons is true, then neuron state is synced back from the GPU (for cycle-level display etc). Otherwise, nothing is.

func (*Network) DWt

func (nt *Network) DWt()

DWt computes the weight change (learning) based on current running-average activation values. Copies synapses back from GPU, for case where viewing the synapses.

func (*Network) DWtToWt

func (nt *Network) DWtToWt()

DWtToWt computes the weight change (learning) based on current running-average activation values, and then WtFromDWt, without syncing any synapse-level state. This should be used when not viewing the weights. Also does SlowUpdate.
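
A minimal sketch of one trial when not viewing weights or per-cycle neuron state (the cycle count is an example; real sims typically use the looper framework to manage this):

	for cyc := 0; cyc < 200; cyc++ {
		net.Cycle(false) // false: do not sync neuron state back from the GPU each cycle
	}
	net.DWtToWt() // compute DWt and apply WtFromDWt without syncing synapse state; also does SlowUpdate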

func (*Network) DecayState

func (nt *Network) DecayState(decay, glong, ahp float32)

DecayState decays activation state by given proportion, e.g., 1 = decay completely, and 0 = decay not at all. glong = separate decay factor for long-timescale conductances (g). This is called automatically in NewState, but is available here for ad-hoc decay cases.
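
For example (PFCLayer is the class set by the AddPFC methods; the decay proportions are illustrative):

	net.DecayState(1, 1, 1)                      // fully decay all state, including long conductances and AHP
	net.DecayStateByClass(0.2, 0, 0, "PFCLayer") // partial decay of activation only, for one class of layers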

func (*Network) DecayStateByClass

func (nt *Network) DecayStateByClass(decay, glong, ahp float32, classes ...string)

DecayStateByClass decays activation state for given class name(s) by given proportion e.g., 1 = decay completely, and 0 = decay not at all. glong = separate decay factor for long-timescale conductances (g)

func (*Network) DecayStateByType

func (nt *Network) DecayStateByType(decay, glong, ahp float32, types ...LayerTypes)

DecayStateByType decays activation state for given layer types by given proportion e.g., 1 = decay completely, and 0 = decay not at all. glong = separate decay factor for long-timescale conductances (g)

func (*Network) DecayStateLayers

func (nt *Network) DecayStateLayers(decay, glong, ahp float32, layers ...string)

DecayStateLayers decays activation state for given layers by given proportion, e.g., 1 = decay completely, and 0 = decay not at all. glong = separate decay factor for long-timescale conductances (g). If this is not being called at the start, around the NewState call, then you should also call nt.GPU.SyncGBufToGPU() to zero the GBuf values, which otherwise will persist spikes in flight.

func (*Network) Defaults

func (nt *Network) Defaults()

Defaults sets all the default parameters for all layers and pathways

func (*Network) DeleteAll

func (nt *Network) DeleteAll()

DeleteAll deletes all layers, prepares network for re-configuring and building

func (*Network) DiffFrom

func (nt *Network) DiffFrom(ctx *Context, on *Network, maxDiff int) string

DiffFrom returns a string reporting differences between this network and given other, up to given max number of differences (0 = all), for each state value.

func (*Network) EmerLayer

func (nt *Network) EmerLayer(idx int) emer.Layer

func (*Network) FunTimerStart

func (nt *Network) FunTimerStart(fun string)

FunTimerStart starts function timer for given function name -- ensures creation of timer

func (*Network) FunTimerStop

func (nt *Network) FunTimerStop(fun string)

FunTimerStop stops function timer -- timer must already exist

func (*Network) GPUTestWrite

func (nt *Network) GPUTestWrite()

GPUTestWrite writes values to neuron, for testing

func (*Network) Init

func (nt *Network) Init()

func (*Network) InitActs

func (nt *Network) InitActs()

InitActs fully initializes activation state -- not automatically called

func (*Network) InitExt

func (nt *Network) InitExt()

InitExt initializes external input state. Call prior to applying external inputs to layers.

func (*Network) InitGScale

func (nt *Network) InitGScale()

InitGScale computes the initial scaling factor for synaptic input conductances G, stored in GScale.Scale, based on sending layer initial activation.

func (*Network) InitTopoSWts

func (nt *Network) InitTopoSWts()

InitTopoSWts initializes SWt structural weight parameters from path types that support topographic weight patterns and have the relevant flags set, including paths.PoolTile and paths.Circle. Call before InitWeights if using topographic weights.

func (*Network) InitWeights

func (nt *Network) InitWeights()

InitWeights initializes synaptic weights and all other associated long-term state variables including running-average state values (e.g., layer running average activations etc)

func (*Network) KeyLayerParams

func (nt *Network) KeyLayerParams() string

KeyLayerParams returns a listing for all layers in the network, of the most important layer-level params (specific to each algorithm).

func (*Network) KeyPathParams

func (nt *Network) KeyPathParams() string

KeyPathParams returns a listing for all Recv pathways in the network, of the most important pathway-level params (specific to each algorithm).

func (*Network) LRateMod

func (nt *Network) LRateMod(mod float32)

LRateMod sets the LRate modulation parameter for Paths, which is for dynamic modulation of learning rate (see also LRateSched). Updates the effective learning rate factor accordingly.

func (*Network) LRateSched

func (nt *Network) LRateSched(sched float32)

LRateSched sets the schedule-based learning rate multiplier. See also LRateMod. Updates the effective learning rate factor accordingly.
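
As a sketch of a typical schedule (the epoch thresholds, factors, and the epoch variable itself are illustrative assumptions only):

	switch epoch {
	case 40:
		net.LRateSched(0.5) // halve the effective learning rate
	case 80:
		net.LRateSched(0.2)
	}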

func (*Network) LateralConnectLayer

func (nt *Network) LateralConnectLayer(lay *Layer, pat paths.Pattern) *Path

LateralConnectLayer establishes a self-pathway within given layer.

func (*Network) LateralConnectLayerPath

func (nt *Network) LateralConnectLayerPath(lay *Layer, pat paths.Pattern, pt *Path) *Path

LateralConnectLayerPath makes lateral self-pathway using given pathway.

func (*Network) LayerByName

func (nt *Network) LayerByName(name string) *Layer

LayerByName returns a layer by looking it up by name in the layer map (nil if not found).

func (*Network) LayersByClass

func (nt *Network) LayersByClass(classes ...string) []string

LayersByClass returns a list of layer names for the given class(es). Lists are compiled when the network Build() function is called, or now if not yet present. The layer Type is always included as a Class, along with any other space-separated strings specified in Class for parameter styling, etc. If no classes are passed, all layer names are returned in order.
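
For example, a minimal sketch iterating over the layers in a hypothetical "CTLayer" class (the class name is just for illustration):

	for _, nm := range net.LayersByClass("CTLayer") {
		ly := net.LayerByName(nm)
		_ = ly // inspect or configure each matching layer here
	}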

func (*Network) LayersByType

func (nt *Network) LayersByType(layType ...LayerTypes) []string

LayersByType returns a list of layer names by given layer type(s).

func (*Network) LayersSetOff

func (nt *Network) LayersSetOff(off bool)

LayersSetOff sets the Off flag for all layers to given setting

func (*Network) MakeToolbar

func (nt *Network) MakeToolbar(p *tree.Plan)

func (*Network) MaxParallelData

func (nt *Network) MaxParallelData() int

func (*Network) MinusPhase

func (nt *Network) MinusPhase()

MinusPhase does updating after end of minus phase.

func (*Network) NParallelData

func (nt *Network) NParallelData() int

func (*Network) NetIxs

func (nt *Network) NetIxs() *NetworkIndexes

func (*Network) NeuronsSlice

func (nt *Network) NeuronsSlice(vals *[]float32, nrnVar string, di int)

NeuronsSlice returns a slice of neuron values using given neuron variable, resizing as needed.
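
For example, a sketch reading the Act variable (listed under NeuronVars below) for all neurons at data index 0 into a reusable slice:

	var acts []float32
	net.NeuronsSlice(&acts, "Act", 0) // resized as needed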

func (*Network) NewState

func (nt *Network) NewState(mode enums.Enum, testing bool)

NewState handles all initialization at start of new input pattern. This is called *before* applying external input data and operates across all data parallel values. The current Context.NData should be set properly prior to calling this and subsequent Cycle methods.

func (*Network) NumLayers

func (nt *Network) NumLayers() int

emer.Network interface methods:

func (*Network) PlusPhase

func (nt *Network) PlusPhase()

PlusPhase does updating after end of plus phase. On GPU this is when we finally sync back Layers and Neurons.

func (*Network) PlusPhaseStart

func (nt *Network) PlusPhaseStart()

PlusPhaseStart does updating at the start of the plus phase: applies Target inputs as External inputs.

func (*Network) ReadWeightsJSON

func (nt *Network) ReadWeightsJSON(r io.Reader) error

func (*Network) SaveAllLayerInhibs

func (nt *Network) SaveAllLayerInhibs(filename core.Filename) error

SaveAllLayerInhibs saves list of all layer Inhibition parameters to given file

func (*Network) SaveAllPathScales

func (nt *Network) SaveAllPathScales(filename core.Filename) error

SaveAllPathScales saves a listing of all PathScale parameters in the Network, across all Layers and Recv pathways. These are among the most important and numerous parameters (in larger networks), so this helps keep track of what they are all set to.

func (*Network) SaveParamsSnapshot

func (nt *Network) SaveParamsSnapshot(cfg any, good bool) error

SaveParamsSnapshot saves various views of current parameters to either `params_good` if good = true (for current good reference params) or `params_2006_01_02` (year, month, day) datestamp, providing a snapshot of the simulation params for easy diffs and later reference. Also saves current Config state.

func (*Network) SetAsCurrent

func (nt *Network) SetAsCurrent()

SetAsCurrent sets this network's values as the current global variables, that are then processed in the code.

func (*Network) SetCaBinWts

func (nt *Network) SetCaBinWts()

SetCaBinWts sets the GvCaBinWts global ca bin weights for kinase trace learning rule integration of CaBins neuron-level spike values.

func (*Network) SetContext

func (nt *Network) SetContext(ctx *Context)

SetContext sets the values of the network context, which is the canonical instance.

func (*Network) SetDWts

func (nt *Network) SetDWts(dwts []float32, navg int)

SetDWts sets the DWt weight changes from the given array of floats, which must be the correct size. navg is the number of processors aggregated in these dwts; some variables need to be averaged instead of summed (e.g., ActAvg). This syncs Layers and Synapses to the GPU afterward (a nop if not using the GPU).

func (*Network) SetMaxData

func (nt *Network) SetMaxData(maxData int)

SetMaxData sets the MaxData and current NData to the same value.

func (*Network) SetNData

func (nt *Network) SetNData(nData int)

SetNData sets the NData in Context to given value.

func (*Network) SetNThreads

func (nt *Network) SetNThreads(nthr int)

SetNThreads sets the number of threads to use for CPU parallel processing. Pass 0 to use a default heuristic number based on the current GOMAXPROCS processors and the number of neurons in the network (call after building).
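
A minimal usage sketch, assuming the network has already been built:

	net.SetNThreads(0) // 0 = heuristic based on GOMAXPROCS and network size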

func (*Network) SetSubMean

func (nt *Network) SetSubMean(trgAvg, path float32)

SetSubMean sets the SubMean parameters in all the layers in the network: trgAvg sets Learn.TrgAvgAct.SubMean, and path sets the paths' Learn.DWt.SubMean. In both cases, it is generally best to have both parameters set to 0 at the start of learning.
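
Following that guidance, a sketch of a typical schedule (the epoch threshold and the epoch variable are illustrative assumptions):

	net.SetSubMean(0, 0) // at the start of learning
	if epoch >= 100 {
		net.SetSubMean(1, 1) // enable subtractive normalization later in training
	}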

func (*Network) ShowAllGlobals

func (nt *Network) ShowAllGlobals()

ShowAllGlobals shows a listing of all Global variables and values.

func (*Network) SizeReport

func (nt *Network) SizeReport(detail bool) string

SizeReport returns a string reporting the size of each layer and pathway in the network, and the total memory footprint. If the detail flag is true, per-layer and per-pathway details are included.

func (*Network) SlowAdapt

func (nt *Network) SlowAdapt()

SlowAdapt runs slow adaptation functions associated with sleep, including synaptic scaling associated with overall neural activity.

func (*Network) SlowUpdate

func (nt *Network) SlowUpdate()

SlowUpdate does ctx.SlowInc() and calls SlowAdapt at SlowInterval and AdaptGi at AdaptGiInterval.

func (*Network) SynVarNames

func (nt *Network) SynVarNames() []string

SynVarNames returns the names of all the variables on the synapses in this network. Not all pathways need to support all variables, but must safely return 0's for unsupported ones. The order of this list determines NetView variable display order. This is typically a global list so do not modify!

func (*Network) SynVarProps

func (nt *Network) SynVarProps() map[string]string

SynVarProps returns properties for variables

func (*Network) SynsSlice

func (nt *Network) SynsSlice(vals *[]float32, synvar SynapseVars)

SynsSlice returns a slice of synaptic values, in natural sending order, using given synaptic variable, resizing as needed.

func (*Network) TargToExt

func (nt *Network) TargToExt()

TargToExt sets the external input Ext from the target values (Target). This is done at the end of MinusPhase to allow targets to drive activity in the plus phase. It can be called separately to simulate alpha cycles within theta cycles, for example.

func (*Network) TimerReport

func (nt *Network) TimerReport()

TimerReport reports the amount of time spent in each function, and in each thread

func (*Network) UnLesionNeurons

func (nt *Network) UnLesionNeurons()

UnLesionNeurons unlesions neurons in all layers in the network. Provides a clean starting point for subsequent lesion experiments.

func (*Network) UnitVarNames

func (nt *Network) UnitVarNames() []string

UnitVarNames returns a list of variable names available on the units in this network. Not all layers need to support all variables, but must safely return 0's for unsupported ones. The order of this list determines NetView variable display order. This is typically a global list so do not modify!

func (*Network) UnitVarProps

func (nt *Network) UnitVarProps() map[string]string

UnitVarProps returns properties for variables

func (*Network) UpdateExtFlags

func (nt *Network) UpdateExtFlags()

UpdateExtFlags updates the neuron flags for external input based on current layer Type field -- call this if the Type has changed since the last ApplyExt* method call.

func (*Network) UpdateLayerMaps

func (nt *Network) UpdateLayerMaps()

func (*Network) UpdateParams

func (nt *Network) UpdateParams()

UpdateParams updates all the derived parameters if any have changed, for all layers and pathways

func (*Network) VarCategories

func (nt *Network) VarCategories() []emer.VarCategory

func (*Network) WeightsHash

func (nt *Network) WeightsHash() string

WeightsHash returns a hash code of all weight values

func (*Network) WriteWeightsJSON

func (nt *Network) WriteWeightsJSON(w io.Writer) error

func (*Network) WtFromDWt

func (nt *Network) WtFromDWt()

WtFromDWt updates the weights from delta-weight changes, after having done DWt previously. Also does SlowUpdate.

type NetworkIndexes

type NetworkIndexes struct {

	// MaxData is the maximum number of data inputs that can be processed
	// in parallel in one pass of the network.
	// Neuron storage is allocated to hold this amount during
	// Build process, and this value reflects that.
	MaxData uint32 `edit:"-"`

	// MaxDelay is the maximum synaptic delay across all pathways at the time of
	// [Network.Build]. This determines the size of the spike sending delay buffers.
	MaxDelay uint32 `edit:"-"`

	// NCaBins is the total number of [CaBins] in the neuron state variables.
	// Set to [Context.ThetaCycles] / [Context.CaBinCycles] in Build.
	NCaBins int32 `edit:"-"`

	// NLayers is the number of layers in the network.
	NLayers uint32 `edit:"-"`

	// NNeurons is the total number of neurons.
	NNeurons uint32 `edit:"-"`

	// NPools is the total number of pools.
	NPools uint32 `edit:"-"`

	// NPaths is the total number of paths.
	NPaths uint32 `edit:"-"`

	// NSyns is the total number of synapses.
	NSyns uint32 `edit:"-"`

	// RubiconNPosUSs is the total number of Rubicon Drives / positive USs.
	RubiconNPosUSs uint32 `edit:"-"`

	// RubiconNCosts is the total number of Rubicon Costs.
	RubiconNCosts uint32 `edit:"-"`

	// RubiconNNegUSs is the total number of Rubicon Negative USs.
	RubiconNNegUSs uint32 `edit:"-"`

	// GPUMaxBuffFloats is the maximum size in float32 (4 bytes) of a GPU buffer
	// needed for GPU access.
	GPUMaxBuffFloats uint32 `edit:"-"`

	// GPUSynCaBanks is the total number of SynCa banks of GPUMaxBuffFloats arrays in the GPU.
	GPUSynCaBanks uint32 `edit:"-"`
	// contains filtered or unexported fields
}

NetworkIndexes are indexes and sizes for processing the network.

func GetNetworkIxs

func GetNetworkIxs(idx uint32) *NetworkIndexes

GetNetworkIxs returns a pointer to the global variable NetworkIxs []NetworkIndexes at the given index. This is processed directly in the GPU code, so this function call is the CPU equivalent.

type NeuroModParams

type NeuroModParams struct {

	// dopamine receptor-based effects of dopamine modulation on excitatory and inhibitory conductances: D1 is excitatory, D2 is inhibitory as a function of increasing dopamine
	DAMod DAModTypes

	// valence coding of this layer -- may affect specific layer types but does not directly affect neuromodulators currently
	Valence ValenceTypes

	// dopamine modulation of excitatory and inhibitory conductances (i.e., "performance dopamine" effect -- this does NOT affect learning dopamine modulation in terms of RLrate): g *= 1 + (DAModGain * DA)
	DAModGain float32

	// modulate the sign of the learning rate factor according to the DA sign, taking into account the DAMod sign reversal for D2Mod, also using BurstGain and DipGain to modulate DA value -- otherwise, only the magnitude of the learning rate is modulated as a function of raw DA magnitude according to DALRateMod (without additional gain factors)
	DALRateSign slbool.Bool

	// if not using DALRateSign, this is the proportion of maximum learning rate that Abs(DA) magnitude can modulate -- e.g., if 0.2, then DA = 0 = 80% of std learning rate, 1 = 100%
	DALRateMod float32 `min:"0" max:"1"`

	// proportion of maximum learning rate that ACh can modulate -- e.g., if 0.2, then ACh = 0 = 80% of std learning rate, 1 = 100%
	AChLRateMod float32 `min:"0" max:"1"`

	// amount of extra Gi inhibition added in proportion to 1 - ACh level -- makes ACh disinhibitory
	AChDisInhib float32 `min:"0" default:"0,5"`

	// multiplicative gain factor applied to positive dopamine signals -- this operates on the raw dopamine signal prior to any effect of D2 receptors in reversing its sign!
	BurstGain float32 `min:"0" default:"1"`

	// multiplicative gain factor applied to negative dopamine signals -- this operates on the raw dopamine signal prior to any effect of D2 receptors in reversing its sign! should be small for acq, but roughly equal to burst for ext
	DipGain float32 `min:"0" default:"1"`
	// contains filtered or unexported fields
}

NeuroModParams specifies the effects of neuromodulators on neural activity and learning rate. These can apply to any neuron type, and are applied in the core cycle update equations.

func (*NeuroModParams) DAGain

func (nm *NeuroModParams) DAGain(da float32) float32

DAGain returns DA dopamine value with Burst / Dip Gain factors applied

func (*NeuroModParams) DASign

func (nm *NeuroModParams) DASign() float32

DASign returns the sign of dopamine effects: D2Mod = -1, else 1

func (*NeuroModParams) Defaults

func (nm *NeuroModParams) Defaults()

func (*NeuroModParams) GGain

func (nm *NeuroModParams) GGain(da float32) float32

GGain returns effective Ge and Gi gain factor given total dopamine (DA) value: tonic + phasic. factor is 1 for no modulation, otherwise higher or lower.
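
As an illustrative (not authoritative) reading of the DAModGain description above, the D1 case increases gain with dopamine while the D2 case reverses the sign of the dopamine effect before the gain is applied; the actual GGain implementation may differ in details such as clamping.

	// gGainSketch is a hypothetical stand-in for the computation GGain performs.
	func gGainSketch(daModGain, da float32, d2 bool) float32 {
		if d2 {
			da = -da // D2 is inhibitory as a function of increasing dopamine
		}
		return 1 + daModGain*da
	}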

func (*NeuroModParams) GiFromACh

func (nm *NeuroModParams) GiFromACh(ach float32) float32

GiFromACh returns the amount of extra inhibition to add based on the disinhibitory effects of ACh: no extra inhibition when ACh = 1, extra when ACh < 1.

func (*NeuroModParams) IsBLAExt

func (nm *NeuroModParams) IsBLAExt() bool

IsBLAExt returns true if this is Positive, D2 or Negative D1 -- BLA extinction

func (*NeuroModParams) LRMod

func (nm *NeuroModParams) LRMod(da, ach float32) float32

LRMod returns the overall learning rate modulation factor due to neuromodulation, from the given dopamine (DA) and ACh inputs. If DALRateSign is set and DAMod == D1Mod or D2Mod, then the sign is a function of the DA.

func (*NeuroModParams) LRModFact

func (nm *NeuroModParams) LRModFact(pct, val float32) float32

LRModFact returns learning rate modulation factor for given inputs.
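
Based on the DALRateMod / AChLRateMod descriptions above (e.g., pct = 0.2 gives 80% of the standard learning rate at val = 0 and 100% at val = 1), an illustrative version of this factor would be the following; the actual implementation may additionally clamp val.

	// lrModFactSketch is a hypothetical stand-in for LRModFact.
	func lrModFactSketch(pct, val float32) float32 {
		return 1 - pct*(1-val)
	}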

func (*NeuroModParams) ShouldDisplay

func (nm *NeuroModParams) ShouldDisplay(field string) bool

func (*NeuroModParams) Update

func (nm *NeuroModParams) Update()

type NeuronAvgVars

type NeuronAvgVars int32 //enums:enum

NeuronAvgVars are mostly neuron variables involved in longer-term average activity which is aggregated over time and not specific to each input data state, along with any other state that is not input data specific.

const (
	// ActAvg is average activation (of minus phase activation state)
	// over long time intervals (time constant = Dt.LongAvgTau).
	// Useful for finding hog units and seeing overall distribution of activation.
	ActAvg NeuronAvgVars = iota

	// AvgPct is ActAvg as a proportion of overall layer activation.
	// This is used for synaptic scaling to match TrgAvg activation,
	// updated at SlowInterval intervals.
	AvgPct

	// TrgAvg is neuron's target average activation as a proportion
	// of overall layer activation, assigned during weight initialization,
	// driving synaptic scaling relative to AvgPct.
	TrgAvg

	// DTrgAvg is change in neuron's target average activation as a result
	// of unit-wise error gradient. Acts like a bias weight.
	// MPI needs to share these across processors.
	DTrgAvg

	// AvgDif is AvgPct - TrgAvg, i.e., the error in overall activity level
	// relative to set point for this neuron, which drives synaptic scaling.
	// Updated at SlowInterval intervals.
	AvgDif

	// GeBase is baseline level of Ge, added to GeRaw, for intrinsic excitability.
	GeBase

	// GiBase is baseline level of Gi, added to GiRaw, for intrinsic excitability.
	GiBase
)
const NeuronAvgVarsN NeuronAvgVars = 7

NeuronAvgVarsN is the highest valid value for type NeuronAvgVars, plus one.

func NeuronAvgVarsValues

func NeuronAvgVarsValues() []NeuronAvgVars

NeuronAvgVarsValues returns all possible values for the type NeuronAvgVars.

func (NeuronAvgVars) Desc

func (i NeuronAvgVars) Desc() string

Desc returns the description of the NeuronAvgVars value.

func (NeuronAvgVars) Int64

func (i NeuronAvgVars) Int64() int64

Int64 returns the NeuronAvgVars value as an int64.

func (NeuronAvgVars) MarshalText

func (i NeuronAvgVars) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*NeuronAvgVars) SetInt64

func (i *NeuronAvgVars) SetInt64(in int64)

SetInt64 sets the NeuronAvgVars value from an int64.

func (*NeuronAvgVars) SetString

func (i *NeuronAvgVars) SetString(s string) error

SetString sets the NeuronAvgVars value from its string representation, and returns an error if the string is invalid.

func (NeuronAvgVars) String

func (i NeuronAvgVars) String() string

String returns the string representation of this NeuronAvgVars value.

func (*NeuronAvgVars) UnmarshalText

func (i *NeuronAvgVars) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (NeuronAvgVars) Values

func (i NeuronAvgVars) Values() []enums.Enum

Values returns all possible values for the type NeuronAvgVars.

type NeuronFlags

type NeuronFlags int32 //enums:enum

NeuronFlags are bit-flags encoding relevant binary state for neurons

const (
	// NeuronOff flag indicates that this neuron has been turned off (i.e., lesioned).
	NeuronOff NeuronFlags = 1

	// NeuronHasExt means the neuron has external input in its Ext field.
	NeuronHasExt NeuronFlags = 2

	// NeuronHasTarg means the neuron has external target input in its Target field.
	NeuronHasTarg NeuronFlags = 4

	// NeuronHasCmpr means the neuron has external comparison input in its Target field.
	// Used for computing comparison statistics but does not drive neural activity ever.
	NeuronHasCmpr NeuronFlags = 8
)

The neuron flags

const NeuronFlagsN NeuronFlags = 9

NeuronFlagsN is the highest valid value for type NeuronFlags, plus one.

func NeuronFlagsValues

func NeuronFlagsValues() []NeuronFlags

NeuronFlagsValues returns all possible values for the type NeuronFlags.

func (NeuronFlags) Desc

func (i NeuronFlags) Desc() string

Desc returns the description of the NeuronFlags value.

func (NeuronFlags) Int64

func (i NeuronFlags) Int64() int64

Int64 returns the NeuronFlags value as an int64.

func (NeuronFlags) MarshalText

func (i NeuronFlags) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*NeuronFlags) SetInt64

func (i *NeuronFlags) SetInt64(in int64)

SetInt64 sets the NeuronFlags value from an int64.

func (*NeuronFlags) SetString

func (i *NeuronFlags) SetString(s string) error

SetString sets the NeuronFlags value from its string representation, and returns an error if the string is invalid.

func (NeuronFlags) String

func (i NeuronFlags) String() string

String returns the string representation of this NeuronFlags value.

func (*NeuronFlags) UnmarshalText

func (i *NeuronFlags) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (NeuronFlags) Values

func (i NeuronFlags) Values() []enums.Enum

Values returns all possible values for the type NeuronFlags.

type NeuronIndexVars

type NeuronIndexVars int32 //enums:enum

NeuronIndexVars are neuron-level indexes used to access layers and pools from the individual neuron level.

const (
	// NrnNeurIndex is the index of this neuron within its owning layer.
	NrnNeurIndex NeuronIndexVars = iota

	// NrnLayIndex is the index of the layer that this neuron belongs to,
	// needed for neuron-level parallel code.
	NrnLayIndex

	// NrnSubPool is the index of the sub-level inhibitory pool for this neuron
	// (only for 4D shapes, the pool (unit-group / hypercolumn) structure level).
	// Indices start at 1; 0 is the layer-level pool (0 if there are no sub-pools).
	NrnSubPool
)
const NeuronIndexVarsN NeuronIndexVars = 3

NeuronIndexVarsN is the highest valid value for type NeuronIndexVars, plus one.

func NeuronIndexVarsValues

func NeuronIndexVarsValues() []NeuronIndexVars

NeuronIndexVarsValues returns all possible values for the type NeuronIndexVars.

func (NeuronIndexVars) Desc

func (i NeuronIndexVars) Desc() string

Desc returns the description of the NeuronIndexVars value.

func (NeuronIndexVars) Int64

func (i NeuronIndexVars) Int64() int64

Int64 returns the NeuronIndexVars value as an int64.

func (NeuronIndexVars) MarshalText

func (i NeuronIndexVars) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*NeuronIndexVars) SetInt64

func (i *NeuronIndexVars) SetInt64(in int64)

SetInt64 sets the NeuronIndexVars value from an int64.

func (*NeuronIndexVars) SetString

func (i *NeuronIndexVars) SetString(s string) error

SetString sets the NeuronIndexVars value from its string representation, and returns an error if the string is invalid.

func (NeuronIndexVars) String

func (i NeuronIndexVars) String() string

String returns the string representation of this NeuronIndexVars value.

func (*NeuronIndexVars) UnmarshalText

func (i *NeuronIndexVars) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (NeuronIndexVars) Values

func (i NeuronIndexVars) Values() []enums.Enum

Values returns all possible values for the type NeuronIndexVars.

type NeuronVars

type NeuronVars int32 //enums:enum

NeuronVars are the neuron variables representing current active state, specific to each input data state. See NeuronAvgVars for vars shared across data.

const (

	// Spike is whether neuron has spiked or not on this cycle (0 or 1).
	Spike NeuronVars = iota

	// Spiked is 1 if neuron has spiked within the last 10 cycles (msecs),
	// corresponding to a nominal max spiking rate of 100 Hz, 0 otherwise.
	// Useful for visualization and computing activity levels in terms of
	// average spiked levels.
	Spiked

	// Act is rate-coded activation value reflecting instantaneous estimated rate
	// of spiking, based on 1 / ISIAvg. It is integrated over time for ActInt
	// which is then used for performance statistics and layer average activations, etc.
	// Should not be used for learning or other computations: just for stats / display.
	Act

	// ActInt is integrated running-average activation value computed from Act
	// with time constant Act.Dt.IntTau, to produce a longer-term integrated value
	// reflecting the overall activation state across the ThetaCycle time scale,
	// as the overall response of network to current input state. This is copied
	// to ActM and ActP at the ends of the minus and plus phases, respectively,
	// and used in computing some performance-level statistics (based on ActM).
	// Should not be used for learning or other computations.
	ActInt

	// Ge is total excitatory conductance, including all forms of excitation
	// (e.g., NMDA). Does *not* include the Gbar.E factor.
	Ge

	// Gi is total inhibitory synaptic conductance, i.e., the net inhibitory input
	// to the neuron. Does *not* include the Gbar.I factor.
	Gi

	// Gk is total potassium conductance, typically reflecting sodium-gated potassium
	// currents involved in adaptation effects. Does *not* include the Gbar.K factor.
	Gk

	// Inet is net current produced by all channels, which drives update of Vm.
	Inet

	// Vm is the membrane potential at the cell body, which integrates Inet current
	// over time, and drives spiking at the axon initial segment of the neuron.
	Vm

	// VmDend is the dendritic membrane potential, which has a slower time constant
	// than Vm and is not subject to the VmR reset after spiking.
	VmDend

	// ISI is the current inter-spike-interval, which counts up since last spike.
	// Starts at -1 when initialized.
	ISI

	// ISIAvg is the average inter-spike-interval, i.e., the average time interval
	// between spikes, integrated with ISITau rate constant (relatively fast) to
	// capture something close to an instantaneous spiking rate.  Starts at -1 when
	// initialized, and goes to -2 after first spike, and is only valid after the
	// second spike post-initialization.
	ISIAvg

	// Ext is the external input: drives activation of unit from outside influences
	// (e.g., sensory input).
	Ext

	// Target is the target value: drives learning to produce this activation value.
	Target

	// CaM is the spike-driven calcium trace at the neuron level, which then drives
	// longer time-integrated variables: [CaP] and [CaD]. These variables are used
	// for statistics and display to capture spiking activity at different timescales.
	// They fluctuate more than [Act] and [ActInt], but are closer to the biological
	// variables driving learning. CaM is the exponential integration of SpikeG * Spike
	// using the MTau time constant (typically 5), and simulates a calmodulin (CaM)
	// like signal, at an abstract level.
	CaM

	// CaP is the continuous cascaded integration of [CaM] using the PTau time constant
	// (typically 40), representing a neuron-level, purely spiking version of the plus,
	// LTP direction of weight change in the Kinase learning rule, dependent on CaMKII.
	// This is not used for learning (see [LearnCaP]), but instead for statistics
	// as a representation of recent activity.
	CaP

	// CaD is the continuous cascaded integration [CaP] using the DTau time constant
	// (typically 40), representing a neuron-level, purely spiking version of the minus,
	// LTD direction of weight change in the Kinase learning rule, dependent on DAPK1.
	// This is not used for learning (see [LearnCaD]), but instead for statistics
	// as a representation of trial-level activity.
	CaD

	// CaDPrev is the final [CaD] activation state at the end of previous theta cycle.
	// This is used for specialized learning mechanisms that operate on delayed
	// sending activations.
	CaDPrev

	// CaSyn is the neuron-level integration of spike-driven calcium, used to approximate
	// synaptic calcium influx as a product of sender and receiver neuron CaSyn values,
	// which are integrated separately because it is computationally much more efficient.
	// This value is driven directly by spikes, with an exponential integration time
	// constant of 30 msec (default), which captures the coincidence window for pre*post
	// firing on NMDA receptor opening. The neuron [CaBins] values record the temporal
	// trajectory of CaSyn over the course of the theta cycle window, and then the
	// pre*post product is integrated over these bins at the synaptic level.
	CaSyn

	// LearnCa is the receiving neuron calcium signal, which is integrated up to
	// [LearnCaP] and [LearnCaD], the difference of which is the temporal error
	// component of the standard axon cortical learning rule.
	// LearnCa combines NMDA via [NmdaCa] and spiking-driven VGCC [VgccCaInt] calcium
	// sources (vs. CaM which only reflects a simple spiking component).
	// The NMDA signal reflects both sending and receiving activity, while the
	// VGCC signal is purely receiver spiking, and a balance of both works best.
	LearnCa

	// LearnCaM is the integrated [LearnCa] at the MTau timescale (typically 5),
	// simulating a calmodulin (CaM) like signal, which then drives [LearnCaP],
	// and [LearnCaD] for the delta signal for error-driven learning.
	LearnCaM

	// LearnCaP is the cascaded integration of [LearnCaM] using the PTau time constant
	// (typically 40), representing the plus, LTP direction of weight change,
	// capturing the function of CaMKII in the Kinase learning rule.
	LearnCaP

	// LearnCaD is the cascaded integration of [LearnCaP] using the DTau time constant
	// (typically 40), representing the minus, LTD direction of weight change,
	// capturing the function of DAPK1 in the Kinase learning rule.
	LearnCaD

	// CaDiff is difference between [LearnCaP] - [LearnCaD].  This is the error
	// signal that drives error-driven learning.
	CaDiff

	// RLRate is recv-unit based learning rate multiplier, reflecting the sigmoid
	// derivative computed from [CaD] of recv unit, and the normalized difference
	// (CaP - CaD) / MAX(CaP - CaD).
	RLRate

	// GnmdaSyn is the integrated NMDA synaptic current on the receiving neuron.
	// It adds GeRaw and decays with a time constant.
	GnmdaSyn

	// Gnmda is the net postsynaptic (receiving) NMDA conductance,
	// after Mg V-gating and Gbar. This is added directly to Ge as it has the same
	// reversal potential.
	Gnmda

	// GnmdaLrn is learning version of integrated NMDA recv synaptic current.
	// It adds [GeRaw] and decays with a time constant. This drives [NmdaCa] that
	// then drives [LearnCa] for learning.
	GnmdaLrn

	// GnmdaMaint is net postsynaptic maintenance NMDA conductance, computed from
	// [GMaintSyn] and [GMaintRaw], after Mg V-gating and Gbar. This is added directly
	// to Ge as it has the same reversal potential.
	GnmdaMaint

	// NmdaCa is NMDA calcium computed from GnmdaLrn, drives learning via CaM.
	NmdaCa

	// Gvgcc is conductance (via Ca) for VGCC voltage gated calcium channels.
	Gvgcc

	// VgccM is activation gate of VGCC channels.
	VgccM

	// VgccH inactivation gate of VGCC channels.
	VgccH

	// VgccCa is the instantaneous VGCC calcium flux: can be driven by spiking
	// or directly from Gvgcc.
	VgccCa

	// VgccCaInt is the time-integrated VGCC calcium flux. This is actually
	// what drives learning.
	VgccCaInt

	// Burst is the layer 5 IB intrinsic bursting neural activation value,
	// computed by thresholding the [CaP] value in Super superficial layers.
	Burst

	// BurstPrv is previous Burst bursting activation from prior time step.
	// Used for context-based learning.
	BurstPrv

	// CtxtGe is context (temporally delayed) excitatory conductance,
	// driven by deep bursting at end of the plus phase, for CT layers.
	CtxtGe

	// CtxtGeRaw is raw update of context (temporally delayed) excitatory
	// conductance, driven by deep bursting at end of the plus phase, for CT layers.
	CtxtGeRaw

	// CtxtGeOrig is original CtxtGe value prior to any decay factor.
	// Updates at end of plus phase.
	CtxtGeOrig

	// GgabaB is net GABA-B conductance, after Vm gating and Gbar + Gbase.
	// Applies to Gk, not Gi, for GIRK, with .1 reversal potential.
	GgabaB

	// GABAB is GABA-B / GIRK activation, which is a time-integrated value
	// with rise and decay time constants.
	GABAB

	// GABABx is GABA-B / GIRK internal drive variable. This gets the raw
	// activation and decays.
	GABABx

	// Gak is the conductance of A-type K potassium channels.
	Gak

	// SSGiDend is the amount of SST+ somatostatin positive slow spiking
	// inhibition applied to dendritic Vm (VmDend).
	SSGiDend

	// GknaMed is the conductance of sodium-gated potassium channel (KNa)
	// medium dynamics (Slick), which produces accommodation / adaptation.
	GknaMed

	// GknaSlow is the conductance of sodium-gated potassium channel (KNa)
	// slow dynamics (Slack), which produces accommodation / adaptation.
	GknaSlow

	// Gkir is the conductance of the potassium (K) inwardly rectifying channel,
	// which is strongest at low membrane potentials.  Can be modulated by DA.
	Gkir

	// KirM is the Kir potassium (K) inwardly rectifying gating value.
	KirM

	// Gsk is Calcium-gated potassium channel conductance as a function
	// of Gbar * SKCaM.
	Gsk

	// SKCaIn is intracellular calcium store level, available to be released
	// with spiking as SKCaR, which can bind to SKCa receptors and drive K
	// current. replenishment is a function of spiking activity being below
	// a threshold.
	SKCaIn

	// SKCaR is the released amount of intracellular calcium, from SKCaIn,
	// as a function of spiking events. This can bind to SKCa channels and
	// drive K currents.
	SKCaR

	// SKCaM is the Calcium-gated potassium channel gating factor, driven by
	// SKCaR via a Hill equation as in chans.SKPCaParams.
	SKCaM

	// Gmahp is medium time scale AHP conductance.
	Gmahp

	// MahpN is accumulating voltage-gated gating value for the medium time
	// scale AHP.
	MahpN

	// Gsahp is slow time scale AHP conductance.
	Gsahp

	// SahpCa is slowly accumulating calcium value that drives the slow AHP.
	SahpCa

	// SahpN is the sAHP gating value.
	SahpN

	// ActM is ActInt activation state at end of third quarter, representing
	// the posterior-cortical minus phase activation. This is used for statistics
	// and monitoring network performance.
	// Should not be used for learning or other computations.
	ActM

	// ActP is ActInt activation state at end of fourth quarter, representing
	// the posterior-cortical plus phase activation. This is used for statistics
	// and monitoring network performance.
	// Should not be used for learning or other computations.
	ActP

	// Beta1 is the activation state at the first beta cycle within current
	// state processing window (i.e., at 50 msec), as saved by Beta1() function.
	// Used for example in hippocampus for CA3, CA1 learning.
	Beta1

	// Beta2 is the activation state at the second beta cycle within current
	// state processing window (i.e., at 100 msec), as saved by Beta2() function.
	// Used for example in hippocampus for CA3, CA1 learning.
	Beta2

	// CaPMax is the maximum [CaP] across one theta cycle time window
	// (max of CaPMaxCa). It is used for specialized algorithms that have more
	// phasic behavior within a single trial, e.g., BG Matrix layer gating.
	// Also useful for visualization of peak activity of neurons.
	CaPMax

	// CaPMaxCa is the Ca integrated like [CaP] but only starting at
	// the MaxCycStart cycle, to prevent inclusion of carryover spiking from
	// prior theta cycle trial. The PTau time constant otherwise results in
	// significant carryover. This is the input to CaPMax.
	CaPMaxCa

	// GeNoise is integrated noise excitatory conductance, added into Ge.
	GeNoise

	// GeNoiseP is accumulating poisson probability factor for driving excitatory
	// noise spiking. Multiply times uniform random deviate at each time step,
	// until it gets below the target threshold based on poisson lambda as function
	//  of noise firing rate.
	GeNoiseP

	// GiNoise is integrated noise inhibitory conductance, added into Gi.
	GiNoise

	// GiNoiseP is accumulating poisson probability factor for driving inhibitory
	// noise spiking. Multiply times uniform random deviate at each time step,
	// until it gets below the target threshold based on poisson lambda as a function
	// of noise firing rate.
	GiNoiseP

	// GeExt is extra excitatory conductance added to Ge, from Ext input, GeCtxt etc.
	GeExt

	// GeRaw is the raw excitatory conductance (net input) received from
	// senders = current raw spiking drive.
	GeRaw

	// GeSyn is the time-integrated total excitatory synaptic conductance,
	// with an instantaneous rise time from each spike (in GeRaw) and
	// exponential decay with Dt.GeTau, aggregated over pathways.
	// Does *not* include Gbar.E.
	GeSyn

	// GiRaw is the raw inhibitory conductance (net input) received from senders
	// = current raw spiking drive.
	GiRaw

	// GiSyn is time-integrated total inhibitory synaptic conductance, with an
	// instantaneous rise time from each spike (in GiRaw) and exponential decay
	// with Dt.GiTau, aggregated over pathways -- does *not* include Gbar.I.
	// This is added with computed FFFB inhibition to get the full inhibition in Gi.
	GiSyn

	// GeInt is integrated running-average activation value computed from Ge
	// with time constant Act.Dt.IntTau, to produce a longer-term integrated value
	// reflecting the overall Ge level across the ThetaCycle time scale (Ge itself
	// fluctuates considerably). This is useful for stats to set strength of
	// connections etc to get neurons into right range of overall excitatory drive.
	GeInt

	// GeIntNorm is normalized GeInt value (divided by the layer maximum).
	// This is used for learning in layers that require learning on
	//  subthreshold activity.
	GeIntNorm

	// GiInt is integrated running-average activation value computed from GiSyn
	// with time constant Act.Dt.IntTau, to produce a longer-term integrated
	// value reflecting the overall synaptic Gi level across the ThetaCycle
	// time scale (Gi itself fluctuates considerably). Useful for stats to set
	// strength of connections etc to get neurons into right range of overall
	// inhibitory drive.
	GiInt

	// GModRaw is raw modulatory conductance, received from GType
	// = ModulatoryG pathways.
	GModRaw

	// GModSyn is syn integrated modulatory conductance, received from GType
	// = ModulatoryG pathways.
	GModSyn

	// SMaintP is accumulating poisson probability factor for driving
	// self-maintenance by simulating a population of mutually interconnected neurons.
	// Multiply times uniform random deviate at each time step, until it gets below
	// the target threshold based on poisson lambda based on accumulating self maint
	// factor.
	SMaintP

	// GMaintRaw is raw maintenance conductance, received from GType
	// = MaintG pathways.
	GMaintRaw

	// GMaintSyn is syn integrated maintenance conductance, integrated
	// using MaintNMDA params.
	GMaintSyn

	// NeurFlags are bit flags for binary state variables, which are converted
	// to / from uint32. These need to be in Vars because they can be
	// differential per data (for ext inputs) and are writable (indexes are read only).
	NeurFlags

	// CaBins is a vector of values starting here, with aggregated [CaSyn] values
	// in time bins of [Context.CaBinCycles] across the theta cycle,
	// for computing synaptic calcium efficiently. Each bin = Sum(CaSyn) / CaBinCycles.
	// Total number of bins = [Context.ThetaCycles] / CaBinCycles.
	// Synaptic calcium is integrated from sender * receiver CaBins values,
	// with weights for CaP vs CaD that reflect their faster vs. slower time constants,
	// respectively. CaD is used for the credit assignment factor, while CaP - CaD is
	// used directly for error-driven learning at Target layers.
	CaBins
)
const NeuronVarsN NeuronVars = 83

NeuronVarsN is the highest valid value for type NeuronVars, plus one.

func NeuronVarsValues

func NeuronVarsValues() []NeuronVars

NeuronVarsValues returns all possible values for the type NeuronVars.

func (NeuronVars) Desc

func (i NeuronVars) Desc() string

Desc returns the description of the NeuronVars value.

func (NeuronVars) Int64

func (i NeuronVars) Int64() int64

Int64 returns the NeuronVars value as an int64.

func (NeuronVars) MarshalText

func (i NeuronVars) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*NeuronVars) SetInt64

func (i *NeuronVars) SetInt64(in int64)

SetInt64 sets the NeuronVars value from an int64.

func (*NeuronVars) SetString

func (i *NeuronVars) SetString(s string) error

SetString sets the NeuronVars value from its string representation, and returns an error if the string is invalid.

func (NeuronVars) String

func (i NeuronVars) String() string

String returns the string representation of this NeuronVars value.

func (*NeuronVars) UnmarshalText

func (i *NeuronVars) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (NeuronVars) Values

func (i NeuronVars) Values() []enums.Enum

Values returns all possible values for the type NeuronVars.

type Params

type Params struct {

	// Layer has the parameters to apply to the [LayerParams] for layers.
	Layer LayerSheets `display:"-"`

	// Path has the parameters to apply to the [PathParams] for paths.
	Path PathSheets `display:"-"`

	// ExtraSheets has optional additional sheets of parameters to apply
	// after the default Base sheet. Use "Script" for default Script sheet.
	// Multiple names separated by spaces can be used (don't put spaces in Sheet names!)
	ExtraSheets string

	// Tag is an optional additional tag to add to log file names to identify
	// a specific run of the model (typically set by a config file or args).
	Tag string

	// Script is a parameter setting script, which adds to the Layer and Path sheets
	// typically using the "Script" set name.
	Script string `display:"-"`

	// Interp is the yaegi interpreter for running the script.
	Interp *interp.Interpreter `display:"-"`
}

Params contains the LayerParams and PathParams parameter setting functions provided by the [emergent] params package.

func (*Params) ApplyAll

func (pr *Params) ApplyAll(net *Network)

ApplyAll applies all parameters to given network, using "Base" Sheet then any ExtraSheets, for Layer and Path params (each must have the named sheets, for proper error checking in case of typos).

func (*Params) ApplySheet

func (pr *Params) ApplySheet(net *Network, sheetName string) error

ApplySheet applies parameters for given params.Sheet name for Layer and Path params (each must have the named sheets, for proper error checking in case of typos).
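
A minimal usage sketch of both methods; the "Script" sheet name follows the Params field docs above, and the explicit error check illustrates the typo checking that ApplySheet provides.

	pr.ApplyAll(net) // "Base" sheets, then any ExtraSheets
	if err := pr.ApplySheet(net, "Script"); err != nil {
		// a misnamed or missing sheet is reported here; handle or log it
	}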

func (*Params) Config

func (pr *Params) Config(layer LayerSheets, path PathSheets, extraSheets, tag string, sim reflect.Value)

Config configures the ExtraSheets, Tag, and Network fields, and initializes the yaegi interpreter for dynamic parameter scripts. Pass a reflect.ValueOf(*Sim) to initialize the yaegi interpreter. Sim must have Params in a field called Params.

func (*Params) Name

func (pr *Params) Name() string

Name returns the name of the current set of parameters, including the Tag. If ExtraSheets is empty it returns "Base", otherwise it returns ExtraSheets.

func (*Params) RunName

func (pr *Params) RunName(startRun int) string

RunName returns the name of a simulation run based on params Name() and starting run number.

type Path

type Path struct {
	emer.PathBase

	// path parameters.
	Params *PathParams

	// sending layer for this pathway.
	Send *Layer

	// receiving layer for this pathway.
	Recv *Layer

	// type of pathway.
	Type PathTypes

	// DefaultParams are functions to apply parameters prior to user-set
	// parameters. These are useful for specific functionality in specialized
	// brain areas (e.g., Rubicon, BG etc) not associated with a path type,
	// which otherwise is used to hard-code initial default parameters.
	DefaultParams []func(pt *PathParams) `display:"-"`

	// average and maximum number of recv connections in the receiving layer
	RecvConNAvgMax minmax.AvgMax32 `table:"-" edit:"-" display:"inline"`

	// average and maximum number of sending connections in the sending layer
	SendConNAvgMax minmax.AvgMax32 `table:"-" edit:"-" display:"inline"`

	// start index into global Synapse array:
	SynStIndex uint32 `display:"-"`

	// number of synapses in this pathway
	NSyns uint32 `display:"-"`

	// starting offset and N cons for each recv neuron, for indexing into the RecvSynIndex array of indexes into the Syns synapses, which are organized sender-based.  This is locally managed during build process, but also copied to network global PathRecvCons slice for GPU usage.
	RecvCon []StartN `display:"-"`

	// index into Syns synaptic state for each sending unit and connection within that, for the sending pathway which does not own the synapses, and instead indexes into recv-ordered list
	RecvSynIndex []uint32 `display:"-"`

	// for each recv synapse, this is the index of the *sending* neuron. It is generally preferable to use the Synapse SendIndex where needed, instead of this slice, because then the memory access will be close to other values on the synapse.
	RecvConIndex []uint32 `display:"-"`

	// starting offset and N cons for each sending neuron, for indexing into the Syns synapses, which are organized sender-based.  This is locally managed during build process, but also copied to network global PathSendCons slice for GPU usage.
	SendCon []StartN `display:"-"`

	// index of other neuron that receives the sender's synaptic input, ordered by the sending layer's order of units as the outer loop, and SendCon.N receiving units within that.  It is generally preferable to use the Synapse RecvIndex where needed, instead of this slice, because then the memory access will be close by other values on the synapse.
	SendConIndex []uint32 `display:"-"`
}

Path implements axon spiking communication and learning.

func (*Path) AddClass

func (pt *Path) AddClass(cls ...string) *Path

func (*Path) AddDefaultParams

func (pt *Path) AddDefaultParams(fun func(pt *PathParams))

AddDefaultParams adds given default param setting function.

func (*Path) AllParams

func (pt *Path) AllParams() string

AllParams returns a listing of all parameters in the Path.

func (*Path) Build

func (pt *Path) Build() error

Build constructs the full connectivity among the layers. Calls Validate and returns error if invalid. Pat.Connect is called to get the pattern of the connection. Then the connection indexes are configured according to that pattern. Does NOT allocate synapses -- these are set by Network from global slice.

func (*Path) Connect

func (pt *Path) Connect(slay, rlay *Layer, pat paths.Pattern, typ PathTypes)

Connect sets the connectivity between two layers and the pattern to use in interconnecting them

func (*Path) Defaults

func (pt *Path) Defaults()

func (*Path) InitWeights

func (pt *Path) InitWeights(ctx *Context, nt *Network)

InitWeights initializes weight values according to SWt params, enforcing current constraints.

func (*Path) InitWeightsSyn

func (pt *Path) InitWeightsSyn(ctx *Context, syni uint32, rnd randx.Rand, mean, spct float32)

InitWeightsSyn initializes weight values based on WtInit randomness parameters for an individual synapse. It also updates the linear weight value based on the sigmoidal weight value.

func (*Path) InitWeightsSynTrace

func (pt *Path) InitWeightsSynTrace(ctx *Context, syni, di uint32)

InitWeightsSynTrace initializes SynapseTraces values for an individual synapse.

func (*Path) InitWtSym

func (pt *Path) InitWtSym(ctx *Context, rpj *Path)

InitWtSym initializes weight symmetry. Is given the reciprocal pathway where the Send and Recv layers are reversed (see LayerBase RecipToRecvPath)

func (*Path) LRateMod

func (pt *Path) LRateMod(mod float32)

LRateMod sets the LRate modulation parameter for Paths, which is for dynamic modulation of learning rate (see also LRateSched). Updates the effective learning rate factor accordingly.

func (*Path) LRateSched

func (pt *Path) LRateSched(sched float32)

LRateSched sets the schedule-based learning rate multiplier. See also LRateMod. Updates the effective learning rate factor accordingly.

func (*Path) NumSyns

func (pt *Path) NumSyns() int

NumSyns returns the number of synapses for this path as a 1D array. This is the max idx for SynVal1D and the number of vals set by SynValues.

func (*Path) RecvLayer

func (pt *Path) RecvLayer() emer.Layer

func (*Path) RecvSynIxs

func (pt *Path) RecvSynIxs(ri uint32) []uint32

RecvSynIxs returns the receiving synapse indexes for given recv unit index within the receiving layer, to be iterated over for recv-based processing.

func (*Path) SWtRescale

func (pt *Path) SWtRescale(ctx *Context)

SWtRescale rescales the SWt values to preserve the target overall mean value, using subtractive normalization.

func (*Path) SendLayer

func (pt *Path) SendLayer() emer.Layer

func (*Path) SetConStartN

func (pt *Path) SetConStartN(con *[]StartN, avgmax *minmax.AvgMax32, tn *tensor.Int32) uint32

SetConStartN sets the *Con StartN values given n tensor from Pat. Returns total number of connections for this direction.

func (*Path) SetPattern

func (pt *Path) SetPattern(pat paths.Pattern) *Path

func (*Path) SetSWtsFunc

func (pt *Path) SetSWtsFunc(ctx *Context, swtFun func(si, ri int, send, recv *tensor.Shape) float32)

SetSWtsFunc initializes structural SWt values using given function based on receiving and sending unit indexes.

func (*Path) SetSWtsRPool

func (pt *Path) SetSWtsRPool(ctx *Context, swts tensor.Tensor)

SetSWtsRPool initializes SWt structural weight values using given tensor of values which has unique values for each recv neuron within a given pool.

func (*Path) SetSynValue

func (pt *Path) SetSynValue(varNm string, sidx, ridx int, val float32) error

SetSynValue sets the value of the given variable name on the synapse between the given send, recv unit indexes (1D, flat indexes). Returns an error for access errors.

func (*Path) SetWeights

func (pt *Path) SetWeights(pw *weights.Path) error

SetWeights sets the weights for this pathway from weights.Path decoded values

func (*Path) SetWeightsFunc

func (pt *Path) SetWeightsFunc(ctx *Context, wtFun func(si, ri int, send, recv *tensor.Shape) float32)

SetWeightsFunc initializes synaptic Wt value using given function based on receiving and sending unit indexes. Strongly suggest calling SWtRescale after.
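
As a sketch, a hypothetical topographic initialization where the weight falls off with the distance between normalized 1D sender and receiver positions; the 0.5 / 0.4 constants and the position mapping are purely illustrative.

	pt.SetWeightsFunc(ctx, func(si, ri int, send, recv *tensor.Shape) float32 {
		sp := float32(si) / float32(send.Len()) // normalized sender position
		rp := float32(ri) / float32(recv.Len()) // normalized receiver position
		d := sp - rp
		if d < 0 {
			d = -d
		}
		return 0.5 - 0.4*d // stronger weights for aligned positions
	})
	pt.SWtRescale(ctx) // rescale afterward, as suggested above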

func (*Path) String

func (pt *Path) String() string

String satisfies fmt.Stringer for path

func (*Path) SynIndex

func (pt *Path) SynIndex(sidx, ridx int) int

SynIndex returns the index of the synapse between given send, recv unit indexes (1D, flat indexes, layer relative). Returns -1 if synapse not found between these two neurons. Requires searching within connections for sending unit.
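
For example, a sketch that reads the weight between two units, assuming "Wt" is one of the names returned by this path's SynVarNames:

	if syi := pt.SynIndex(si, ri); syi >= 0 {
		if vi, err := pt.SynVarIndex("Wt"); err == nil {
			wt := pt.SynValue1D(vi, syi)
			_ = wt // use the weight value here
		}
	}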

func (*Path) SynVal1DDi

func (pt *Path) SynVal1DDi(varIndex int, synIndex int, di int) float32

SynVal1DDi returns value of given variable index (from SynVarIndex) on given SynIndex. Returns NaN on invalid index. This is the core synapse var access method used by other methods. Includes Di data parallel index for data-parallel synaptic values.

func (*Path) SynValDi

func (pt *Path) SynValDi(varNm string, sidx, ridx int, di int) float32

SynValDi returns value of given variable name on the synapse between given send, recv unit indexes (1D, flat indexes). Returns math32.NaN() for access errors (see SynValTry for error message) Includes Di data parallel index for data-parallel synaptic values.

func (*Path) SynValue1D

func (pt *Path) SynValue1D(varIndex int, synIndex int) float32

SynValue1D returns value of given variable index (from SynVarIndex) on given SynIndex. Returns NaN on invalid index. This is the core synapse var access method used by other methods.

func (*Path) SynValues

func (pt *Path) SynValues(vals *[]float32, varNm string) error

SynValues sets values of given variable name for each synapse, using the natural ordering of the synapses (sender based for Axon), into given float32 slice (only resized if not big enough). Returns error on invalid var name.

func (*Path) SynVarIndex

func (pt *Path) SynVarIndex(varNm string) (int, error)

SynVarIndex returns the index of given variable within the synapse, according to *this path's* SynVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*Path) SynVarNames

func (pt *Path) SynVarNames() []string

func (*Path) SynVarNum

func (pt *Path) SynVarNum() int

SynVarNum returns the number of synapse-level variables for this path. This is needed for extending indexes in derived types.

func (*Path) SynVarProps

func (pt *Path) SynVarProps() map[string]string

SynVarProps returns properties for variables

func (*Path) TypeName

func (pt *Path) TypeName() string

func (*Path) TypeNumber

func (pt *Path) TypeNumber() int

func (*Path) Update

func (pt *Path) Update()

Update is interface that does local update of struct vals

func (*Path) UpdateParams

func (pt *Path) UpdateParams()

UpdateParams updates all params given any changes that might have been made to individual values

func (*Path) Validate

func (pt *Path) Validate(logmsg bool) error

Validate tests for non-nil settings for the pathway -- returns error message or nil if no problems (and logs them if logmsg = true)

func (*Path) WriteWeightsJSON

func (pt *Path) WriteWeightsJSON(w io.Writer, depth int)

WriteWeightsJSON writes the weights from this pathway from the receiver-side perspective in a JSON text format.

type PathGTypes

type PathGTypes int32 //enums:enum

PathGTypes represents the conductance (G) effects of a given pathway, including excitatory, inhibitory, and modulatory.

const (
	// Excitatory pathways drive Ge conductance on receiving neurons,
	// which send to GeRaw and GeSyn neuron variables.
	ExcitatoryG PathGTypes = iota

	// Inhibitory pathways drive Gi inhibitory conductance,
	// which send to GiRaw and GiSyn neuron variables.
	InhibitoryG

	// Modulatory pathways have a multiplicative effect on other inputs,
	// which send to GModRaw and GModSyn neuron variables.
	ModulatoryG

	// Maintenance pathways drive unique set of NMDA channels that support
	// strong active maintenance abilities.
	// Send to GMaintRaw and GMaintSyn neuron variables.
	MaintG

	// Context pathways are for inputs to CT layers, which update
	// only at the end of the plus phase, and send to CtxtGe.
	ContextG
)

The pathway conductance types

const PathGTypesN PathGTypes = 5

PathGTypesN is the highest valid value for type PathGTypes, plus one.

func PathGTypesValues

func PathGTypesValues() []PathGTypes

PathGTypesValues returns all possible values for the type PathGTypes.

func (PathGTypes) Desc

func (i PathGTypes) Desc() string

Desc returns the description of the PathGTypes value.

func (PathGTypes) Int64

func (i PathGTypes) Int64() int64

Int64 returns the PathGTypes value as an int64.

func (PathGTypes) MarshalText

func (i PathGTypes) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*PathGTypes) SetInt64

func (i *PathGTypes) SetInt64(in int64)

SetInt64 sets the PathGTypes value from an int64.

func (*PathGTypes) SetString

func (i *PathGTypes) SetString(s string) error

SetString sets the PathGTypes value from its string representation, and returns an error if the string is invalid.

func (PathGTypes) String

func (i PathGTypes) String() string

String returns the string representation of this PathGTypes value.

func (*PathGTypes) UnmarshalText

func (i *PathGTypes) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (PathGTypes) Values

func (i PathGTypes) Values() []enums.Enum

Values returns all possible values for the type PathGTypes.

type PathIndexes

type PathIndexes struct {
	// RecvLayer is the index of the receiving layer in global list of layers.
	RecvLayer uint32

	// RecvNeurSt is the starting index of neurons in recv layer,
	// so we don't need layer to get to neurons.
	RecvNeurSt uint32

	// RecvNeurN is the number of neurons in recv layer.
	RecvNeurN uint32

	// SendLayer is the index of the sending layer in global list of layers.
	SendLayer uint32

	// SendNeurSt is the starting index of neurons in sending layer,
	// so we don't need layer to get to neurons.
	SendNeurSt uint32

	// SendNeurN is the number of neurons in send layer
	SendNeurN uint32

	// SynapseSt is the start index into global Synapse array.
	// [Layer][SendPaths][Synapses].
	SynapseSt uint32

	// SendConSt is the start index into global PathSendCon array.
	// [Layer][SendPaths][SendNeurons]
	SendConSt uint32

	// RecvConSt is the start index into global PathRecvCon array.
	// [Layer][RecvPaths][RecvNeurons]
	RecvConSt uint32

	// RecvSynSt is the start index into global sender-based Synapse index array.
	// [Layer][SendPaths][Synapses]
	RecvSynSt uint32

	// NPathNeurSt is the start NPathNeur index into PathGBuf, PathGSyns global arrays.
	// [Layer][RecvPaths][RecvNeurons]
	NPathNeurSt uint32
	// contains filtered or unexported fields
}

PathIndexes contains path-level index information into global memory arrays

func (*PathIndexes) RecvNIndexToLayIndex

func (pi *PathIndexes) RecvNIndexToLayIndex(ni uint32) uint32

RecvNIndexToLayIndex converts a neuron's index in the network-level global list of all neurons to the receiving layer-specific index, e.g., for accessing GBuf and GSyn values. It just subtracts RecvNeurSt, so it mainly serves as documentation.

func (*PathIndexes) SendNIndexToLayIndex

func (pi *PathIndexes) SendNIndexToLayIndex(ni uint32) uint32

SendNIndexToLayIndex converts a neuron's index in the network-level global list of all neurons to the sending layer-specific index. It just subtracts SendNeurSt, so it mainly serves as documentation.
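
Because both conversions just subtract the layer's starting neuron index, a minimal sketch looks like the following; the literal index values are made up for illustration and the import path is an assumption.

package main

import "github.com/emer/axon/v2/axon" // import path is an assumption

func main() {
	pi := axon.PathIndexes{RecvNeurSt: 100, SendNeurSt: 20}
	_ = pi.RecvNIndexToLayIndex(105) // == 5: 105 - RecvNeurSt
	_ = pi.SendNIndexToLayIndex(23)  // == 3: 23 - SendNeurSt
}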

type PathParams

type PathParams struct {

	// Type is the functional type of path, which determines the code path
	// for specialized types, and is synchronized with [Path.Type].
	Type PathTypes

	// Index is the index of the pathway in global path list: [Layer][SendPaths]
	Index uint32 `edit:"-"`

	// recv and send neuron-level pathway index array access info
	Indexes PathIndexes `display:"-"`

	// synaptic communication parameters: delay, probability of failure
	Com SynComParams `display:"inline"`

	// pathway scaling parameters for computing GScale:
	// modulates overall strength of pathway, using both
	// absolute and relative factors, with adaptation option to maintain target max conductances
	PathScale PathScaleParams `display:"inline"`

	// slowly adapting, structural weight value parameters,
	// which control initial weight values and slower outer-loop adjustments
	SWts SWtParams `display:"add-fields"`

	// synaptic-level learning parameters for learning in the fast LWt values.
	Learn LearnSynParams `display:"add-fields"`

	// conductance scaling values
	GScale GScaleValues `display:"inline"`

	// Params for RWPath and TDPredPath for doing dopamine-modulated learning
	// for reward prediction: Da * Send activity.
	// Use in RWPredLayer or TDPredLayer typically to generate reward predictions.
	// If the Da sign is positive, the first recv unit learns fully; for negative,
	// second one learns fully.
	// Lower lrate applies for opposite cases.  Weights are positive-only.
	RLPred RLPredPathParams `display:"inline"`

	// for trace-based learning in the MatrixPath. A trace of synaptic co-activity
	// is formed, and then modulated by dopamine whenever it occurs.
	// This bridges the temporal gap between gating activity and subsequent activity,
	// and is based biologically on synaptic tags.
	// Trace is reset at time of reward based on ACh level from CINs.
	Matrix MatrixPathParams `display:"inline"`

	// Basolateral Amygdala pathway parameters.
	BLA BLAPathParams `display:"inline"`

	// Hip bench parameters.
	Hip HipPathParams `display:"inline"`
	// contains filtered or unexported fields
}

PathParams contains all of the path parameters. These values must remain constant over the course of computation. On the GPU, they are loaded into a uniform.

func GetPaths

func GetPaths(idx uint32) *PathParams

GetPaths returns a pointer to the given global variable: Paths []PathParams at the given index. This is accessed directly in the GPU code, so this function call is the CPU equivalent.

func (*PathParams) AllParams

func (pt *PathParams) AllParams() string

func (*PathParams) BLADefaults

func (pj *PathParams) BLADefaults()

func (*PathParams) CTCtxtPathDefaults

func (pj *PathParams) CTCtxtPathDefaults()

func (*PathParams) DWtFromDi

func (pt *PathParams) DWtFromDi(ctx *Context, syni uint32)

DWtFromDi updates DWt from data parallel DiDWt values

func (*PathParams) DWtSubMean

func (pt *PathParams) DWtSubMean(ctx *Context, pti, ri, lni uint32)

DWtSubMean subtracts the mean for given recv neuron ri, for pathways that have SubMean > 0. This is called on *receiving* pathways, prior to WtFromDwt.

func (*PathParams) DWtSyn

func (pt *PathParams) DWtSyn(ctx *Context, rlay *LayerParams, syni, si, ri, di uint32)

DWtSyn is the overall entry point for weight change (learning) at the given synapse. It selects the appropriate function based on pathway type; rlay is the receiving layer's parameters.

func (*PathParams) DWtSynBLA

func (pt *PathParams) DWtSynBLA(ctx *Context, syni, si, ri, lpi, pi, di uint32)

DWtSynBLA computes the weight change (learning) at given synapse for BLAPath type. Like the BG Matrix learning rule, a synaptic tag "trace" is established at CS onset (ACh) and learning at US / extinction is a function of trace * delta from US activity (temporal difference), which limits learning.

func (*PathParams) DWtSynCortex

func (pt *PathParams) DWtSynCortex(ctx *Context, syni, si, ri, lpi, pi, di uint32, isTarget bool)

DWtSynCortex computes the weight change (learning) at given synapse, using the kinase error-driven learning rule for cortical neurons. The error delta is based on the receiving neuron's LearnCaP - LearnCaD, multiplied by a separate credit assignment trace factor computed from synaptic co-product CaD values that can be integrated across multiple theta cycle learning trials.

func (*PathParams) DWtSynDSMatrix

func (pt *PathParams) DWtSynDSMatrix(ctx *Context, syni, si, ri, lpi, pi, di uint32)

DWtSynDSMatrix computes the weight change (learning) at given synapse, for the DSMatrixPath type.

func (*PathParams) DWtSynHebb

func (pt *PathParams) DWtSynHebb(ctx *Context, syni, si, ri, lpi, pi, di uint32)

DWtSynHebb computes the weight change (learning) at given synapse for cortex. Uses synaptically integrated spiking, computed at the Theta cycle interval. This is the trace version for hidden units, and uses syn CaP - CaD for targets.

func (*PathParams) DWtSynHip

func (pt *PathParams) DWtSynHip(ctx *Context, syni, si, ri, lpi, pi, di uint32, isTarget bool)

DWtSynHip computes the weight change (learning) at given synapse for cortex + Hip (CPCA Hebb learning). Uses synaptically integrated spiking, computed at the Theta cycle interval. This is the trace version for hidden units, and uses syn CaP - CaD for targets. Adds proportional CPCA learning rule for hip-specific paths

func (*PathParams) DWtSynRWPred

func (pt *PathParams) DWtSynRWPred(ctx *Context, syni, si, ri, lpi, pi, di uint32)

DWtSynRWPred computes the weight change (learning) at given synapse, for the RWPredPath type

func (*PathParams) DWtSynTDPred

func (pt *PathParams) DWtSynTDPred(ctx *Context, syni, si, ri, lpi, pi, di uint32)

DWtSynTDPred computes the weight change (learning) at given synapse, for the TDPredPath type

func (*PathParams) DWtSynVSMatrix

func (pt *PathParams) DWtSynVSMatrix(ctx *Context, syni, si, ri, lpi, pi, di uint32)

DWtSynVSMatrix computes the weight change (learning) at given synapse, for the VSMatrixPath type.

func (*PathParams) DWtSynVSPatch

func (pt *PathParams) DWtSynVSPatch(ctx *Context, syni, si, ri, lpi, pi, di uint32)

DWtSynVSPatch computes the weight change (learning) at given synapse, for the VSPatchPath type.

func (*PathParams) Defaults

func (pt *PathParams) Defaults()

func (*PathParams) GatherSpikes

func (pt *PathParams) GatherSpikes(ctx *Context, ly *LayerParams, ni, di, lni uint32)

GatherSpikes integrates G*Raw and G*Syn values for given recv neuron while integrating the Recv Path-level GSyn integrated values.

func (*PathParams) GatherSpikesGSyn

func (pt *PathParams) GatherSpikesGSyn(ctx *Context, ly *LayerParams, ni, di uint32, gRaw float32, gSyn *float32)

GatherSpikesGSyn integrates G*Raw and G*Syn values for given neuron from the given Path-level GRaw value, first integrating pathway-level GSyn value.

func (*PathParams) HipDefaults

func (pj *PathParams) HipDefaults()

func (*PathParams) InitGBuffs

func (pt *PathParams) InitGBuffs(ctx *Context)

InitGBuffs initializes the per-pathway synaptic conductance buffers. This is not typically needed (called during InitWeights, InitActs) but can be called when needed. Must be called to completely initialize prior activity, e.g., full Glong clearing.

func (*PathParams) IsExcitatory

func (pt *PathParams) IsExcitatory() bool

func (*PathParams) IsInhib

func (pt *PathParams) IsInhib() bool

func (*PathParams) MatrixDefaults

func (pj *PathParams) MatrixDefaults()

func (*PathParams) RLPredDefaults

func (pj *PathParams) RLPredDefaults()

func (*PathParams) SWtFromWt

func (pt *PathParams) SWtFromWt(ctx *Context, rlay *LayerParams, pti, ri, lni uint32)

SWtFromWt updates structural, slowly adapting SWt value based on accumulated DSWt values, which are zero-summed with additional soft bounding relative to SWt limits.

func (*PathParams) SendSpike

func (pt *PathParams) SendSpike(ctx *Context, ni, di, lni uint32)

SendSpike sends a spike from the sending neuron at the given index into the GBuf buffer on the receiver side. The receiver-side buffer is a ring buffer, which is used for modelling the time delay between sending and receiving spikes.

func (*PathParams) SetFixedWts

func (pt *PathParams) SetFixedWts()

SetFixedWts sets parameters for fixed, non-learning weights with a default of Mean = 0.8, Var = 0 strength

func (*PathParams) ShouldDisplay

func (pt *PathParams) ShouldDisplay(field string) bool

func (*PathParams) SlowAdapt

func (pt *PathParams) SlowAdapt(ctx *Context, rlay *LayerParams, pti, ri, lni uint32)

SlowAdapt does the slow adaptation: SWt learning and SynScale

func (*PathParams) StyleClass

func (pt *PathParams) StyleClass() string

StyleClass implements the params.Styler interface for parameter setting. It must only be called after the network has been built and is the current network, because it uses the global CurrentNetwork variable.

func (*PathParams) StyleName

func (pt *PathParams) StyleName() string

StyleName implements the params.Styler interface for parameter setting. It must only be called after the network has been built and is the current network, because it uses the global CurrentNetwork variable.

func (*PathParams) SynCa

func (pt *PathParams) SynCa(ctx *Context, si, ri, di uint32, syCaP, syCaD *float32)

SynCa gets the synaptic calcium P (potentiation) and D (depression) values, using an optimized integration of neuron-level CaBins values, and weight factors to capture the different CaP vs. CaD time constants.

func (*PathParams) SynRecvLayerIndex

func (pt *PathParams) SynRecvLayerIndex(syni uint32) uint32

SynRecvLayerIndex converts the Synapse RecvIndex of recv neuron's index in network level global list of all neurons to receiving layer-specific index.

func (*PathParams) SynScale

func (pt *PathParams) SynScale(ctx *Context, rlay *LayerParams, pti, ri, lni uint32)

SynScale performs synaptic scaling based on running average activation vs. targets. Layer-level AvgDifFromTrgAvg function must be called first.

func (*PathParams) SynSendLayerIndex

func (pt *PathParams) SynSendLayerIndex(syni uint32) uint32

SynSendLayerIndex converts the Synapse SendIndex of sending neuron's index in network level global list of all neurons to sending layer-specific index.

func (*PathParams) Update

func (pt *PathParams) Update()

func (*PathParams) VSPatchDefaults

func (pj *PathParams) VSPatchDefaults()

func (*PathParams) WtFromDWtSyn

func (pt *PathParams) WtFromDWtSyn(ctx *Context, syni uint32)

WtFromDWtSyn is the overall entry point for updating weights from weight changes.

func (*PathParams) WtFromDWtSynCortex

func (pt *PathParams) WtFromDWtSynCortex(ctx *Context, syni uint32)

WtFromDWtSynCortex updates weights from dwt changes

func (*PathParams) WtFromDWtSynNoLimits

func (pt *PathParams) WtFromDWtSynNoLimits(ctx *Context, syni uint32)

WtFromDWtSynNoLimits updates weights from dwt changes, without applying any weight limits.

type PathScaleParams

type PathScaleParams struct {

	// relative scaling that shifts balance between different pathways -- this is subject to normalization across all other pathways into receiving neuron, and determines the GScale.Target for adapting scaling
	Rel float32 `min:"0"`

	// absolute multiplier adjustment factor for the path scaling -- can be used to adjust for idiosyncrasies not accommodated by the standard scaling based on initial target activation level and relative scaling factors -- any adaptation operates by directly adjusting scaling factor from the initially computed value
	Abs float32 `default:"1" min:"0"`
	// contains filtered or unexported fields
}

PathScaleParams are pathway scaling parameters: modulates overall strength of pathway, using both absolute and relative factors.

func (*PathScaleParams) Defaults

func (ws *PathScaleParams) Defaults()

func (*PathScaleParams) FullScale

func (ws *PathScaleParams) FullScale(savg, snu, ncon float32) float32

FullScale returns full scaling factor, which is product of Abs * Rel * SLayActScale

func (*PathScaleParams) SLayActScale

func (ws *PathScaleParams) SLayActScale(savg, snu, ncon float32) float32

SLayActScale computes the scaling factor based on the sending layer activity level (savg), number of units in the sending layer (snu), and number of recv connections (ncon). Uses a fixed sem_extra standard-error-of-the-mean (SEM) extra value of 2, added to the average expected number of active connections to receive, for purposes of computing scaling factors with partial connectivity. For 25% layer activity, binomial SEM = sqrt(p(1-p)) = .43, so 3x is about 1.3, so 2 is a reasonable default.
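
A minimal usage sketch of these scaling factors, assuming the standard import path and purely illustrative numbers (25% sending-layer activity, 100 sending units, 20 receiving connections); per the FullScale description above, the full factor is the product of Abs, Rel, and SLayActScale.

package main

import (
	"fmt"

	"github.com/emer/axon/v2/axon" // import path is an assumption
)

func main() {
	var ps axon.PathScaleParams
	ps.Defaults()
	ps.Rel = 1
	ps.Abs = 1

	act := ps.SLayActScale(0.25, 100, 20) // sending-activity based factor
	full := ps.FullScale(0.25, 100, 20)   // == Abs * Rel * SLayActScale
	fmt.Println(act, full)
}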

func (*PathScaleParams) Update

func (ws *PathScaleParams) Update()

type PathSel

type PathSel = params.Sel[*PathParams]

PathSel is one Path parameter Selector.

type PathSheet

type PathSheet = params.Sheet[*PathParams]

PathSheet is one Path parameter Sheet.

type PathSheets

type PathSheets = params.Sheets[*PathParams]

PathSheets contains Path parameter Sheets.

type PathTypes

type PathTypes int32 //enums:enum

PathTypes enumerates all the different types of axon pathways, for the different algorithm types supported. Class parameter styles automatically key off of these types.

const (
	// Forward is a feedforward, bottom-up pathway from sensory inputs to higher layers
	ForwardPath PathTypes = iota

	// Back is a feedback, top-down pathway from higher layers back to lower layers
	BackPath

	// Lateral is a lateral pathway within the same layer / area
	LateralPath

	// Inhib is an inhibitory pathway that drives inhibitory
	// synaptic conductances instead of the default excitatory ones.
	InhibPath

	// CTCtxt are pathways from Superficial layers to CT layers that
	// send Burst activations to drive updating of CtxtGe excitatory conductance,
	// at end of plus (5IB Bursting) phase.  Biologically, this pathway
	// comes from the PT layer 5IB neurons, but it is simpler to use the
	// Super neurons directly, and PT are optional for most network types.
	// These pathways also use a special learning rule that
	// takes into account the temporal delays in the activation states.
	// Can also add self context from CT for deeper temporal context.
	CTCtxtPath

	// RWPath does dopamine-modulated learning for reward prediction:
	// Da * Send.CaP (integrated current spiking activity).
	// Uses RLPredPath parameters.
	// Use in RWPredLayer typically to generate reward predictions.
	// If the Da sign is positive, the first recv unit learns fully;
	// for negative, second one learns fully.  Lower lrate applies for
	// opposite cases.  Weights are positive-only.
	RWPath

	// TDPredPath does dopamine-modulated learning for reward prediction:
	// DWt = Da * Send.CaDPrev (activity on *previous* timestep)
	// Uses RLPredPath parameters.
	// Use in TDPredLayer typically to generate reward predictions.
	// If the Da sign is positive, the first recv unit learns fully;
	// for negative, second one learns fully.  Lower lrate applies for
	// opposite cases.  Weights are positive-only.
	TDPredPath

	// BLAPath implements the Rubicon BLA learning rule:
	// dW = ACh * X_t-1 * (Y_t - Y_t-1)
	// The recv delta is across trials, where the US should activate on trial
	// boundary, to enable sufficient time for gating through to OFC, so
	// BLA initially learns based on US present - US absent.
	// It can also learn based on CS onset if there is a prior CS that predicts that.
	BLAPath

	HipPath

	// VSPatchPath implements the VSPatch learning rule:
	// dW = ACh * DA * X * Y
	// where DA is D1 vs. D2 modulated DA level, X = sending activity factor,
	// Y = receiving activity factor, and ACh provides overall modulation.
	VSPatchPath

	// VSMatrixPath is for ventral striatum matrix (SPN / MSN) neurons
	// supporting trace-based learning, where an initial
	// trace of synaptic co-activity is formed, and then modulated
	// by subsequent phasic dopamine & ACh when an outcome occurs.
	// This bridges the temporal gap between gating activity
	// and subsequent outcomes, and is based biologically on synaptic tags.
	// Trace is reset at time of reward based on ACh level (from CINs in biology).
	VSMatrixPath

	// DSMatrixPath is for dorsal striatum matrix (SPN / MSN) neurons
	// supporting trace-based learning, where an initial
	// trace of synaptic co-activity is formed, and then modulated
	// by subsequent phasic dopamine & ACh when an outcome occurs.
	// This bridges the temporal gap between gating activity
	// and subsequent outcomes, and is based biologically on synaptic tags.
	// Trace is reset at time of reward based on ACh level (from CINs in biology).
	DSMatrixPath
)

The pathway types

const PathTypesN PathTypes = 12

PathTypesN is the highest valid value for type PathTypes, plus one.

func PathTypesValues

func PathTypesValues() []PathTypes

PathTypesValues returns all possible values for the type PathTypes.

func (PathTypes) Desc

func (i PathTypes) Desc() string

Desc returns the description of the PathTypes value.

func (PathTypes) Int64

func (i PathTypes) Int64() int64

Int64 returns the PathTypes value as an int64.

func (PathTypes) MarshalText

func (i PathTypes) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*PathTypes) SetInt64

func (i *PathTypes) SetInt64(in int64)

SetInt64 sets the PathTypes value from an int64.

func (*PathTypes) SetString

func (i *PathTypes) SetString(s string) error

SetString sets the PathTypes value from its string representation, and returns an error if the string is invalid.

func (PathTypes) String

func (i PathTypes) String() string

String returns the string representation of this PathTypes value.

func (*PathTypes) UnmarshalText

func (i *PathTypes) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (PathTypes) Values

func (i PathTypes) Values() []enums.Enum

Values returns all possible values for the type PathTypes.

type PoolIndexVars

type PoolIndexVars int32 //enums:enum
const (
	// PoolNeurSt is the starting layer-wise index within the list
	// of neurons in this pool.
	// Add layer starting neuron index (NeurSt) to get index into global
	// network neurons list.
	PoolNeurSt PoolIndexVars = iota

	// PoolNeurEd is the ending (exclusive) layer-wise index within the list
	// of neurons in this pool.
	// Add layer starting neuron index (NeurSt) to get index into global
	// network neurons list.
	PoolNeurEd

	// PoolLayerIdx is the layer index for this pool.
	PoolLayerIdx

	// PoolIsLayer is true (> 0) if this pool represents the entire layer,
	// which is always the first pool in the list of pools for a layer.
	PoolIsLayer
)
const PoolIndexVarsN PoolIndexVars = 4

PoolIndexVarsN is the highest valid value for type PoolIndexVars, plus one.

func PoolIndexVarsValues

func PoolIndexVarsValues() []PoolIndexVars

PoolIndexVarsValues returns all possible values for the type PoolIndexVars.

func (PoolIndexVars) Desc

func (i PoolIndexVars) Desc() string

Desc returns the description of the PoolIndexVars value.

func (PoolIndexVars) Int64

func (i PoolIndexVars) Int64() int64

Int64 returns the PoolIndexVars value as an int64.

func (PoolIndexVars) MarshalText

func (i PoolIndexVars) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*PoolIndexVars) SetInt64

func (i *PoolIndexVars) SetInt64(in int64)

SetInt64 sets the PoolIndexVars value from an int64.

func (*PoolIndexVars) SetString

func (i *PoolIndexVars) SetString(s string) error

SetString sets the PoolIndexVars value from its string representation, and returns an error if the string is invalid.

func (PoolIndexVars) String

func (i PoolIndexVars) String() string

String returns the string representation of this PoolIndexVars value.

func (*PoolIndexVars) UnmarshalText

func (i *PoolIndexVars) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (PoolIndexVars) Values

func (i PoolIndexVars) Values() []enums.Enum

Values returns all possible values for the type PoolIndexVars.

type PoolIntVars

type PoolIntVars int32 //enums:enum

PoolIntVars are int32 pool variables, for computing fsfffb inhibition etc. Note that we use int32 instead of uint32 so that overflow errors can be detected. See [PoolVars] for float32 variables.

const (
	// Clamped if true (!=0), this layer is hard-clamped and should
	// use GeExts exclusively for PV.
	Clamped PoolIntVars = iota

	// PoolGated is true (> 0) if this pool gated (for [MatrixLayer], [BGThalLayer])
	PoolGated

	// FFsRawInt is the int32 atomic add compatible integration of [fsfffb.FFsRaw].
	FFsRawInt

	// FBsRawInt is the int32 atomic add compatible integration of [fsfffb.FBsRaw].
	FBsRawInt

	// GeExtRawInt is the int32 atomic add compatible integration of [fsfffb.GeExtRaw].
	GeExtRawInt

	// PoolIntAvgMaxStart is the starting point for int32 AvgMax variables.
	// Use AvgMaxIntVarIndex to get the relevant variable index.
	// There are only values for Cycle phase, for the different variables.
	PoolIntAvgMaxStart
)
const PoolIntVarsN PoolIntVars = 6

PoolIntVarsN is the highest valid value for type PoolIntVars, plus one.

func PoolIntVarsValues

func PoolIntVarsValues() []PoolIntVars

PoolIntVarsValues returns all possible values for the type PoolIntVars.

func (PoolIntVars) Desc

func (i PoolIntVars) Desc() string

Desc returns the description of the PoolIntVars value.

func (PoolIntVars) Int64

func (i PoolIntVars) Int64() int64

Int64 returns the PoolIntVars value as an int64.

func (PoolIntVars) MarshalText

func (i PoolIntVars) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*PoolIntVars) SetInt64

func (i *PoolIntVars) SetInt64(in int64)

SetInt64 sets the PoolIntVars value from an int64.

func (*PoolIntVars) SetString

func (i *PoolIntVars) SetString(s string) error

SetString sets the PoolIntVars value from its string representation, and returns an error if the string is invalid.

func (PoolIntVars) String

func (i PoolIntVars) String() string

String returns the string representation of this PoolIntVars value.

func (*PoolIntVars) UnmarshalText

func (i *PoolIntVars) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (PoolIntVars) Values

func (i PoolIntVars) Values() []enums.Enum

Values returns all possible values for the type PoolIntVars.

type PopCodeParams

type PopCodeParams struct {

	// use popcode encoding of variable(s) that this layer represents
	On slbool.Bool

	// Ge multiplier for driving excitatory conductance based on PopCode -- multiplies normalized activation values
	Ge float32 `default:"0.1"`

	// minimum value representable -- for GaussBump, typically include extra to allow mean with activity on either side to represent the lowest value you want to encode
	Min float32 `default:"-0.1"`

	// maximum value representable -- for GaussBump, typically include extra to allow mean with activity on either side to represent the highest value you want to encode
	Max float32 `default:"1.1"`

	// activation multiplier for values at Min end of range, where values at Max end have an activation of 1 -- if this is < 1, then there is a rate code proportional to the value in addition to the popcode pattern -- see also MinSigma, MaxSigma
	MinAct float32 `default:"1,0.5"`

	// sigma parameter of a gaussian specifying the tuning width of the coarse-coded units, in normalized 0-1 range -- for Min value -- if MinSigma < MaxSigma then more units are activated for Max values vs. Min values, proportionally
	MinSigma float32 `default:"0.1,0.08"`

	// sigma parameter of a gaussian specifying the tuning width of the coarse-coded units, in normalized 0-1 range -- for Max value -- if MinSigma < MaxSigma then more units are activated for Max values vs. Min values, proportionally
	MaxSigma float32 `default:"0.1,0.12"`

	// ensure that encoded and decoded value remains within specified range
	Clip slbool.Bool
}

PopCodeParams provides an encoding of scalar value using population code, where a single continuous (scalar) value is encoded as a gaussian bump across a population of neurons (1 dimensional). It can also modulate rate code and number of neurons active according to the value. This is for layers that represent values as in the Rubicon system (from Context.Rubicon). Both normalized activation values (1 max) and Ge conductance values can be generated.

func (*PopCodeParams) ClampValue

func (pc *PopCodeParams) ClampValue(val float32) float32

ClampValue returns the value clipped (clamped) to the Min / Max range.

func (*PopCodeParams) Decode

func (pc *PopCodeParams) Decode(acts []float32) float32

Decode decodes a value from a pattern of activations, as the activation-weighted average of the units' preferred tuning values. Must have 2 or more values in the acts pattern.

func (*PopCodeParams) Defaults

func (pc *PopCodeParams) Defaults()

func (*PopCodeParams) EncodeGe

func (pc *PopCodeParams) EncodeGe(i, n uint32, val float32) float32

EncodeGe returns Ge value for given value, for neuron index i out of n total neurons. n must be 2 or more.

func (*PopCodeParams) EncodeValue

func (pc *PopCodeParams) EncodeValue(i, n uint32, val float32) float32

EncodeValue returns the encoded activation value for the given value, for neuron index i out of n total neurons. n must be 2 or more.
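
A minimal round-trip sketch using EncodeValue and Decode: encode a scalar as a gaussian bump over a small population, then decode it back as the activation-weighted average. The import path is an assumption, and the decoded value is only approximately equal to the input, depending on the parameters.

package main

import (
	"fmt"

	"github.com/emer/axon/v2/axon" // import path is an assumption
)

func main() {
	var pc axon.PopCodeParams
	pc.Defaults()
	pc.SetRange(0, 1, 0.1, 0.1) // Min, Max, MinSigma, MaxSigma

	n := uint32(16)
	val := float32(0.3)

	acts := make([]float32, n)
	for i := uint32(0); i < n; i++ {
		acts[i] = pc.EncodeValue(i, n, val)
	}
	fmt.Println(pc.Decode(acts)) // approximately 0.3
}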

func (*PopCodeParams) ProjectParam

func (pc *PopCodeParams) ProjectParam(minParam, maxParam, clipVal float32) float32

ProjectParam projects given min / max param value onto val within range

func (*PopCodeParams) SetRange

func (pc *PopCodeParams) SetRange(min, max, minSigma, maxSigma float32)

SetRange sets the min, max and sigma values

func (*PopCodeParams) ShouldDisplay

func (pc *PopCodeParams) ShouldDisplay(field string) bool

func (*PopCodeParams) Uncertainty

func (pc *PopCodeParams) Uncertainty(val float32, acts []float32) float32

Uncertainty returns the uncertainty of the given distribution of activities relative to a perfect code for the given value. Uncertainty is the average unit-wise standard deviation between the pop code encoding and the max-normalized activities.

func (*PopCodeParams) Update

func (pc *PopCodeParams) Update()

type PulvParams

type PulvParams struct {

	// multiplier on driver input strength, multiplies CaP from driver layer to produce Ge excitatory input to Pulv unit.
	DriveScale float32 `default:"0.1" min:"0.0"`

	// Level of Max driver layer CaP at which the drivers fully drive the burst phase activation.  If there is weaker driver input, then (Max/FullDriveAct) proportion of the non-driver inputs remain and this critically prevents the network from learning to turn activation off, which is difficult and severely degrades learning.
	FullDriveAct float32 `default:"0.6" min:"0.01"`

	// index of layer that generates the driving activity into this one -- set via SetBuildConfig(DriveLayName) setting
	DriveLayIndex int32 `edit:"-"`
	// contains filtered or unexported fields
}

PulvParams provides parameters for how the plus-phase (outcome) state of Pulvinar thalamic relay cell neurons is computed from the corresponding driver neuron Burst activation (or CaP if not Super)

func (*PulvParams) Defaults

func (tp *PulvParams) Defaults()

func (*PulvParams) DriveGe

func (tp *PulvParams) DriveGe(act float32) float32

DriveGe returns effective excitatory conductance to use for given driver input Burst activation

func (*PulvParams) NonDrivePct

func (tp *PulvParams) NonDrivePct(drvMax float32) float32

NonDrivePct returns the multiplier proportion of the non-driver based Ge to keep around, based on FullDriveAct and the max activity in driver layer.
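
The two functions can be combined along the lines of the following sketch: NonDrivePct gives the proportion of ordinary (non-driver) Ge to retain, and DriveGe gives the driver-based conductance. This is only an illustration of the parameter functions, not the actual layer code; the import path is an assumption and the values are illustrative.

package main

import (
	"fmt"

	"github.com/emer/axon/v2/axon" // import path is an assumption
)

func main() {
	var tp axon.PulvParams
	tp.Defaults()

	drvMax := float32(0.4) // max CaP in the driver layer (illustrative)
	drvAct := float32(0.4) // driver Burst activation for one unit (illustrative)

	nonDrive := tp.NonDrivePct(drvMax) // proportion of non-driver Ge to keep
	geDrive := tp.DriveGe(drvAct)      // driver-based excitatory conductance
	fmt.Println(nonDrive, geDrive)
}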

func (*PulvParams) Update

func (tp *PulvParams) Update()

type RLPredPathParams

type RLPredPathParams struct {

	// how much to learn on opposite DA sign coding neuron (0..1)
	OppSignLRate float32

	// tolerance on DA -- if below this abs value, then DA goes to zero and there is no learning -- prevents prediction from exactly learning to cancel out reward value, retaining a residual valence of signal
	DaTol float32
	// contains filtered or unexported fields
}

RLPredPathParams does dopamine-modulated learning for reward prediction: Da * Send.Act. Used by RWPath and TDPredPath within the corresponding RWPredLayer or TDPredLayer to generate reward predictions based on their incoming weights, using a linear activation function. Has no weight bounds or limits on sign etc.

func (*RLPredPathParams) Defaults

func (pj *RLPredPathParams) Defaults()

func (*RLPredPathParams) Update

func (pj *RLPredPathParams) Update()

type RLRateParams

type RLRateParams struct {

	// On toggles use of learning rate modulation.
	On slbool.Bool `default:"true"`

	// SigmoidLinear uses a linear sigmoid function: if act > .5: 1-act; else act;
	// otherwise uses the actual sigmoid derivative: a(1-a).
	// This can improve learning in some cases but is generally not beneficial.
	SigmoidLinear slbool.Bool `default:"false"`

	// SigmoidMin is the minimum learning rate multiplier for sigmoidal
	// act (1-act) factor, which prevents lrate from going too low for extreme values.
	// Set to 1 to disable Sigmoid derivative factor, which is default for Target layers.
	SigmoidMin float32 `default:"0.05,1"`

	// Diff modulates learning rate as a function of plus - minus differences.
	Diff slbool.Bool

	// SpikeThr is the threshold on Max(CaP, CaD) below which Min lrate applies.
	// Must be > 0 to prevent div by zero.
	SpikeThr float32 `default:"0.1"`

	// DiffThr is the threshold on recv neuron error delta, i.e., |CaP - CaD|,
	// below which lrate is at the Min value.

	// Min is the minimum learning rate value when |CaP - CaD| Diff is below DiffThr.
	Min float32 `default:"0.001"`
	// contains filtered or unexported fields
}

RLRateParams are recv neuron learning rate modulation parameters. Has two factors: the derivative of the sigmoid based on CaD activity levels, and based on the phase-wise differences in activity (Diff).

func (*RLRateParams) Defaults

func (rl *RLRateParams) Defaults()

func (*RLRateParams) RLRateDiff

func (rl *RLRateParams) RLRateDiff(scap, scad float32) float32

RLRateDiff returns the learning rate as a function of difference between CaP and CaD values

func (*RLRateParams) RLRateSigDeriv

func (rl *RLRateParams) RLRateSigDeriv(act float32, laymax float32) float32

RLRateSigDeriv returns the sigmoid derivative learning rate factor as a function of spiking activity, with mid-range values having full learning and extreme values a reduced learning rate: deriv = 4*act*(1-act), or linear: if act > .5: 2*(1-act); else 2*act. The activity should be CaP, and the layer maximum is used to normalize it to a 0-1 range.
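
A minimal sketch of how the two learning-rate factors behave, using default parameters; the exact returned values depend on the defaults, so the comments only indicate the qualitative pattern. The import path is an assumption.

package main

import (
	"fmt"

	"github.com/emer/axon/v2/axon" // import path is an assumption
)

func main() {
	var rl axon.RLRateParams
	rl.Defaults()

	laymax := float32(0.8)
	fmt.Println(rl.RLRateSigDeriv(0.4, laymax)) // mid-range activity: near full lrate
	fmt.Println(rl.RLRateSigDeriv(0.8, laymax)) // extreme activity: reduced lrate

	fmt.Println(rl.RLRateDiff(0.51, 0.50)) // |CaP - CaD| below DiffThr: Min lrate
	fmt.Println(rl.RLRateDiff(0.80, 0.50)) // larger delta: higher lrate factor
}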

func (*RLRateParams) ShouldDisplay

func (rl *RLRateParams) ShouldDisplay(field string) bool

func (*RLRateParams) Update

func (rl *RLRateParams) Update()

type RWDaParams

type RWDaParams struct {

	// tonic baseline Ge level for DA = 0 -- +/- are between 0 and 2*TonicGe -- just for spiking display of computed DA value
	TonicGe float32

	// idx of RWPredLayer to get reward prediction from -- set during Build from BuildConfig RWPredLayName
	RWPredLayIndex int32 `edit:"-"`
	// contains filtered or unexported fields
}

RWDaParams computes a dopamine (DA) signal using simple Rescorla-Wagner learning dynamic (i.e., PV learning in the Rubicon framework).

func (*RWDaParams) Defaults

func (rp *RWDaParams) Defaults()

func (*RWDaParams) GeFromDA

func (rp *RWDaParams) GeFromDA(da float32) float32

GeFromDA returns excitatory conductance from DA dopamine value

func (*RWDaParams) Update

func (rp *RWDaParams) Update()

type RWPredParams

type RWPredParams struct {

	// default 0.1..0.99 range of predictions that can be represented -- having a truncated range preserves some sensitivity in dopamine at the extremes of good or poor performance
	PredRange minmax.F32
}

RWPredParams parameterizes reward prediction for a simple Rescorla-Wagner learning dynamic (i.e., PV learning in the Rubicon framework).

func (*RWPredParams) Defaults

func (rp *RWPredParams) Defaults()

func (*RWPredParams) Update

func (rp *RWPredParams) Update()

type RandFunIndex

type RandFunIndex uint32
const (
	RandFunActPGe RandFunIndex = iota
	RandFunActPGi
	RandFunActSMaintP
	RandFunIndexN
)

We use this enum to store a unique index for each function that requires random number generation. If you add a new function, you need to add a new enum entry here. RandFunIndexN is the total number of random functions. It autoincrements due to iota.

type Rubicon

type Rubicon struct {

	// number of possible positive US states and corresponding drives.
	// The first is always reserved for novelty / curiosity.
	// Must be set programmatically via SetNUSs method,
	// which allocates corresponding parameters.
	NPosUSs uint32 `edit:"-"`

	// number of possible phasic negative US states (e.g., shock, impact etc).
	// Must be set programmatically via SetNUSs method, which allocates corresponding
	// parameters.
	NNegUSs uint32 `edit:"-"`

	// number of possible costs, typically including accumulated time and effort costs.
	// Must be set programmatically via SetNUSs method, which allocates corresponding
	// parameters.
	NCosts uint32 `edit:"-"`

	// parameters and state for built-in drives that form the core motivations
	// of the agent, controlled by lateral hypothalamus and associated
	// body state monitoring such as glucose levels and thirst.
	Drive DriveParams `display:"inline"`

	// urgency (increasing pressure to do something) and parameters for
	// updating it. Raw urgency is incremented by the same units as effort,
	// but is only reset with a positive US.
	Urgency UrgencyParams `display:"inline"`

	// controls how positive and negative USs are weighted and integrated to
	// compute an overall PV primary value.
	USs USParams `display:"add-fields"`

	// lateral habenula (LHb) parameters and state, which drives
	// dipping / pausing in dopamine when the predicted positive
	// outcome > actual, or actual negative outcome > predicted.
	// Can also drive bursting for the converse, and via matrix phasic firing.
	LHb LHbParams `display:"inline"`

	// parameters for giving up based on PV pos - neg difference
	GiveUp GiveUpParams `display:"add-fields"`

	// population code decoding parameters for estimates from layers
	ValDecode popcode.OneD `display:"inline"`
	// contains filtered or unexported fields
}

Rubicon implements core elements of the Rubicon goal-directed motivational model, representing the core brainstem-level (hypothalamus) bodily drives and resulting dopamine from US (unconditioned stimulus) inputs, subsuming the earlier Rubicon model of primary value (PV) and learned value (LV), describing the functions of the Amygdala, Ventral Striatum, VTA and associated midbrain nuclei (LDT, LHb, RMTg). Core LHb (lateral habenula) and VTA (ventral tegmental area) dopamine are computed in equations using inputs from specialized network layers (LDTLayer driven by BLA, CeM layers, VSPatchLayer). The Drives, Effort, US and resulting LHb PV dopamine computation all happens at the start of each trial (NewState, Step). The LV / CS dopamine is computed cycle-by-cycle by the VTA layer, using parameters set on the VTA layer. Renders USLayer, PVLayer, DrivesLayer representations based on state updated here.

func (*Rubicon) AddTimeEffort

func (rp *Rubicon) AddTimeEffort(di uint32, effort float32)

AddTimeEffort adds a unit of time and an increment of effort

func (*Rubicon) DAFromPVs

func (rp *Rubicon) DAFromPVs(pvPos, pvNeg, vsPatchPos, vsPatchPosSum float32) (burst, dip, da, rew float32)

DAFromPVs computes the overall PV DA in terms of LHb burst and dip activity from given pvPos, pvNeg, and vsPatchPos values. Also returns the net "reward" value as the discounted PV value, separate from the vsPatchPos prediction error factor.

func (*Rubicon) DecodeFromLayer

func (rp *Rubicon) DecodeFromLayer(di uint32, net *Network, layName string) (val, vr float32, err error)

DecodeFromLayer decodes value and variance from the average activity (CaD) of the given layer name. Use for decoding PVposEst and Var, and PVnegEst and Var

func (*Rubicon) DecodePVEsts

func (rp *Rubicon) DecodePVEsts(di uint32, net *Network)

DecodePVEsts decodes estimated PV outcome values from PVposP and PVnegP prediction layers, saves in global PVposEst, Var and PVnegEst, Var

func (*Rubicon) Defaults

func (rp *Rubicon) Defaults()

func (*Rubicon) DriveUpdate

func (rp *Rubicon) DriveUpdate(di uint32)

DriveUpdate is used when auto-updating drive levels based on US consumption, which partially satisfies (decrements) corresponding drive, and on time passing, where drives adapt to their overall baseline levels.

func (*Rubicon) EffortUrgencyUpdate

func (rp *Rubicon) EffortUrgencyUpdate(di uint32, effort float32)

EffortUrgencyUpdate updates the Effort or Urgency based on the given effort increment. Effort is incremented when VSMatrixHasGated (i.e., goal engaged) and Urgency updates otherwise (when not goal engaged). Call this at the start of the trial, in the ApplyRubicon method, after NewState.

func (*Rubicon) GiveUpOnGoal

func (rp *Rubicon) GiveUpOnGoal(di uint32, rnd randx.Rand) bool

GiveUpOnGoal determines whether to give up on current goal based on Utility, Timing, and Progress weight factors.

func (*Rubicon) HasPosUS

func (rp *Rubicon) HasPosUS(di uint32) bool

HasPosUS returns true if there is at least one non-zero positive US

func (*Rubicon) InitDrives

func (rp *Rubicon) InitDrives(di uint32)

InitDrives initializes all the Drives to baseline values (default = 0)

func (*Rubicon) InitUS

func (rp *Rubicon) InitUS(di uint32)

InitUS initializes all the USs to zero

func (*Rubicon) NewState

func (rp *Rubicon) NewState(di uint32, rnd randx.Rand)

NewState is called at the very start of a new state (trial) of processing. It sets HadRew = HasRew from the last trial, which is then used to reset various things after reward.

func (*Rubicon) PVDA

func (rp *Rubicon) PVDA(di uint32, rnd randx.Rand)

PVDA computes the PV (primary value) based dopamine based on current state information, at the start of a trial. PV DA is computed by the VS (ventral striatum) and the LHb / RMTg, and the resulting values are stored in global variables. Called after updating USs, Effort, Drives at start of trial step, in Step.

func (*Rubicon) PVcostEstFromCosts

func (rp *Rubicon) PVcostEstFromCosts(costs []float32) (pvCostSum, pvNeg float32)

PVcostEstFromCosts returns the estimated negative PV value based on given externally provided Cost values. This can be used to compute estimates to compare network performance.

func (*Rubicon) PVneg

func (rp *Rubicon) PVneg(di uint32) (pvNegSum, pvNeg float32)

PVneg returns the summed weighted negative value of current costs and negative US state, where each US is multiplied by a weighting factor and summed (usNegSum) and the normalized version of this sum (PVneg = overall negative PV) as 1 / (1 + (PVnegGain * PVnegSum))

func (*Rubicon) PVnegEstFromUSs

func (rp *Rubicon) PVnegEstFromUSs(uss []float32) (pvNegSum, pvNeg float32)

PVnegEstFromUSs returns the estimated negative PV value based on given externally provided US values. This can be used to compute estimates to compare network performance.

func (*Rubicon) PVpos

func (rp *Rubicon) PVpos(di uint32) (pvPosSum, pvPos float32)

PVpos returns the summed weighted positive value of current positive US state, where each US is multiplied by its current drive and weighting factor (pvPosSum), and the normalized version of this sum (PVpos = overall positive PV) as 1 / (1 + (PVposGain * pvPosSum))

func (*Rubicon) PVposEstFromUSs

func (rp *Rubicon) PVposEstFromUSs(di uint32, uss []float32) (pvPosSum, pvPos float32)

PVposEstFromUSs returns the estimated positive PV value based on drives and given US values. This can be used to compute estimates to compare network performance.

func (*Rubicon) PVposEstFromUSsDrives

func (rp *Rubicon) PVposEstFromUSsDrives(uss, drives []float32) (pvPosSum, pvPos float32)

PVposEstFromUSsDrives returns the estimated positive PV value based on given externally provided drives and US values. This can be used to compute estimates to compare network performance.

func (*Rubicon) PVsFromUSs

func (rp *Rubicon) PVsFromUSs(di uint32)

PVsFromUSs updates the current PV summed, weighted, normalized values from the underlying US values.

func (*Rubicon) Reset

func (rp *Rubicon) Reset(di uint32)

Reset resets all Rubicon state

func (*Rubicon) ResetGiveUp

func (rp *Rubicon) ResetGiveUp(di uint32)

ResetGiveUp resets all the give-up related global values.

func (*Rubicon) ResetGoalState

func (rp *Rubicon) ResetGoalState(di uint32)

ResetGoalState resets all the goal-engaged global values. Critically, this is only called after goal accomplishment, not after goal gating -- prevents "shortcutting" by re-gating.

func (*Rubicon) SetDrive

func (rp *Rubicon) SetDrive(di uint32, dr uint32, val float32)

SetDrive sets the given Drive to the given value.

func (*Rubicon) SetDrives

func (rp *Rubicon) SetDrives(di uint32, curiosity float32, drives ...float32)

SetDrives is used when directly controlling drive levels externally. curiosity sets the strength for the curiosity drive, and drives are the strengths of the remaining sim-specified drives, in order. Any drives not so specified remain at the InitDrives baseline level.

func (*Rubicon) SetGoalDistEst

func (rp *Rubicon) SetGoalDistEst(di uint32, dist float32)

SetGoalDistEst sets the current estimated distance to the goal, in trial step units, which should decrease to 0 at the goal. This should be set at the start of every trial. Also computes the ProgressRate.

func (*Rubicon) SetGoalMaintFromLayer

func (rp *Rubicon) SetGoalMaintFromLayer(di uint32, net *Network, layName string, maxAct float32) error

SetGoalMaintFromLayer sets the GoalMaint global state variable from the average activity (CaD) of the given layer name. GoalMaint is normalized 0-1 based on the given max activity level, with anything out of range clamped to 0-1 range. Returns (and logs) an error if layer name not found.

func (*Rubicon) SetNUSs

func (rp *Rubicon) SetNUSs(nPos, nNeg int)

SetNUSs sets the number of _additional_ simulation-specific phasic positive and negative USs (primary value outcomes). This must be called _before_ network Build, which allocates global values that depend on these numbers. Any change must also call network.BuildGlobals. 1 PosUS (curiosity / novelty) is managed automatically by the Rubicon code. Two costs (Time, Effort) are also automatically allocated and managed. The USs specified here need to be managed by the simulation via the SetUS method. Positive USs each have corresponding Drives.

func (*Rubicon) SetUS

func (rp *Rubicon) SetUS(di uint32, valence ValenceTypes, usIndex int, magnitude float32)

SetUS sets the given _simulation specific_ unconditioned stimulus (US) state for Rubicon algorithm. usIndex = 0 is first US, etc. The US then drives activity of relevant Rubicon-rendered inputs, and dopamine, and sets the global HasRew flag, thus triggering a US learning event. Note that costs can be used to track negative USs that are not strong enough to trigger a US learning event.

func (*Rubicon) Step

func (rp *Rubicon) Step(di uint32, rnd randx.Rand)

Step does one step (trial) after applying USs, Drives, and updating Effort. It should be the final call in ApplyRubicon. Calls PVDA which does all US, PV, LHb, GiveUp updating.
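
Putting the per-trial methods together, a sketch of the ApplyRubicon ordering described in the docs above might look like the following. SetNUSs must have been called before network Build (per its doc). The import paths, the axon.Positive valence constant, and the specific drive / US values are assumptions for illustration.

package main

import (
	"cogentcore.org/lab/base/randx" // import path is an assumption

	"github.com/emer/axon/v2/axon" // import path is an assumption
)

// applyRubicon sketches the per-trial call ordering: NewState first,
// then drives, effort/urgency, any USs, and Step as the final call.
func applyRubicon(rp *axon.Rubicon, di uint32, rnd randx.Rand, gotUS bool) {
	rp.NewState(di, rnd)          // very start of the trial
	rp.SetDrives(di, 0.5, 1)      // curiosity strength, then sim-specified drives
	rp.EffortUrgencyUpdate(di, 1) // increments effort (goal engaged) or urgency
	if gotUS {
		rp.SetUS(di, axon.Positive, 0, 1) // first sim-specific positive US (assumed constant name)
	}
	rp.Step(di, rnd) // final call: US, PV, LHb, GiveUp updating via PVDA
}

func main() {} // placeholder: rp, di, rnd come from the simulation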

func (*Rubicon) TimeEffortReset

func (rp *Rubicon) TimeEffortReset(di uint32)

TimeEffortReset resets the raw time and effort back to zero, at start of new gating event

func (*Rubicon) USnegIndex

func (rp *Rubicon) USnegIndex(simUsIndex int) int

USnegIndex allows for the possibility of automatically managed negative USs, by adding those to the given _simulation specific_ negative US index to get the actual US index.

func (*Rubicon) USposIndex

func (rp *Rubicon) USposIndex(simUsIndex int) int

USposIndex adds 1 to the given _simulation specific_ positive US index to get the actual US / Drive index, where the first pool is reserved for curiosity / novelty.

func (*Rubicon) Update

func (rp *Rubicon) Update()

func (*Rubicon) VSPatchNewState

func (rp *Rubicon) VSPatchNewState(di uint32)

VSPatchNewState does VSPatch processing in NewState: updates global VSPatchPos and VSPatchPosSum, and sets these as the RewPred. Uses the max across recorded VSPatch activity levels.

type SMaintParams

type SMaintParams struct {

	// is self maintenance active?
	On slbool.Bool

	// number of neurons within the self-maintenance pool,
	// each of which is assumed to have the same probability of spiking
	NNeurons float32 `default:"10"`

	// conductance multiplier for self maintenance synapses
	Gbar float32 `default:"0.2"`

	// inhib controls how much of the extra maintenance conductance goes to the GeExt, which drives extra proportional inhibition
	Inhib float32 `default:"1"`

	// ISI (inter spike interval) range -- min is used as min ISIAvg for poisson spike rate expected from the population, and above max, no additional maintenance conductance is added
	ISI minmax.F32 `display:"inline"`
}

SMaintParams for self-maintenance simulating a population of NMDA-interconnected spiking neurons

func (*SMaintParams) Defaults

func (sm *SMaintParams) Defaults()

func (*SMaintParams) ExpInt

func (sm *SMaintParams) ExpInt(isi float32) float32

ExpInt returns the exponential interval value for determining when the next excitatory spike will arrive, based on given ISI value for this neuron.

func (*SMaintParams) ShouldDisplay

func (sm *SMaintParams) ShouldDisplay(field string) bool

func (*SMaintParams) Update

func (sm *SMaintParams) Update()

type SWtAdaptParams

type SWtAdaptParams struct {

	// On enables adaptation of [SWt] values at a slower time scale. If false, SWt
	// values are not updated, in which case it is generally good to set Init.SPct=0 too.
	On slbool.Bool

	// LRate is the learning rate multiplier on the accumulated [DWt] values
	// (which already have fast LRate applied), to drive updating of [SWt]
	// during slow outer loop updating. Lower values impose stronger constraints,
	// for larger networks that need more structural support, e.g., 0.001 is better
	// after 1,000 epochs in large models. 0.1 is fine for smaller models.
	LRate float32 `default:"0.1,0.01,0.001,0.0002"`

	// SubMean is the amount of the mean to subtract from [SWt] delta when updating,
	// to impose a zero-sum constraint on overall structural weight strengths.
	// Generally best to set to 1. There is a separate SubMean factor for [LWt].
	SubMean float32 `default:"1"`

	// HiMeanDecay specifies a decay factor applied across all [LWt] weights
	// in proportion to the deviation of the average effective weight value [Wt]
	// above the HiMeanThr threshold. This is applied at the slow learning interval
	// and should be very slow, for counteracting a gradual accumulation in overall
	// weights that can occur even with SubMean factors (which only operate on weights
	// that are actually changing on the current trial).
	HiMeanDecay float32 `default:"0.0008"`

	// HiMeanThr is the threshold on the average effective weight value [Wt],
	// above which the HiMeanDecay decay factor is applied to [LWt] weights.
	HiMeanThr float32 `default:"0.5"`

	// SigGain is the gain of the sigmoidal contrast enhancement function
	// used to transform learned, linear [LWt] values into [Wt] values.
	// This is critical to offset the damping effect of exponential soft bounding,
	// but some special cases with different learning rules may benefit by making
	// this linear (1) instead.
	SigGain float32 `default:"6"`
	// contains filtered or unexported fields
}

SWtAdaptParams manages adaptation of the SWt (slow, structural weight) values.

func (*SWtAdaptParams) Defaults

func (sp *SWtAdaptParams) Defaults()

func (*SWtAdaptParams) ShouldDisplay

func (sp *SWtAdaptParams) ShouldDisplay(field string) bool

func (*SWtAdaptParams) Update

func (sp *SWtAdaptParams) Update()

type SWtInitParams

type SWtInitParams struct {

	// SPct is how much of the initial random weights to capture in the
	// slow, structural SWt values, with the rest going into the online learning
	// LWt values. 1 gives the strongest initial biasing effect, for larger
	// models that need more structural support. 0.5 should work for most models
	// where stronger constraints are not needed.
	SPct float32 `min:"0" max:"1" default:"0,1,0.5"`

	// Mean is the target mean weight value across receiving neuron's pathway.
	// The mean SWt values are constrained to remain at this value.
	// Some pathways may benefit from lower mean of .4.
	Mean float32 `default:"0.5,0.4"`

	// Var is the initial variance in weight values, prior to constraints.
	Var float32 `default:"0.25"`

	// Sym symmetrizes the initial weight values with those in reciprocal pathway.
	// Typically true for bidirectional excitatory connections.
	Sym slbool.Bool `default:"true"`
}

SWtInitParams for initial SWt (slow, structural weight) values.

func (*SWtInitParams) Defaults

func (sp *SWtInitParams) Defaults()

func (*SWtInitParams) RandVar

func (sp *SWtInitParams) RandVar(rnd randx.Rand) float32

RandVar returns the random variance in weight value (zero mean) based on Var param

func (*SWtInitParams) Update

func (sp *SWtInitParams) Update()

type SWtParams

type SWtParams struct {

	// Init controls the initialization of [SWt] values.
	Init SWtInitParams `display:"inline"`

	// Adapt controls adaptation of [SWt] values in response to [LWt] learning.
	Adapt SWtAdaptParams `display:"inline"`

	// Limit limits the range of [SWt] values, so that they do not fully
	// determine the effective overall weight value.
	Limit minmax.F32 `default:"{'Min':0.2,'Max':0.8}" display:"inline"`
}

SWtParams manages structural, slowly adapting weight values SWt, in terms of initialization and updating over course of learning. SWts impose initial and slowly adapting constraints on neuron connectivity to encourage differentiation of neuron representations and overall good behavior in terms of not hogging the representational space. The TrgAvg activity constraint is not enforced through SWt: it needs to be more dynamic and is supported by the regular learned weights LWt.

func (*SWtParams) ClipSWt

func (sp *SWtParams) ClipSWt(swt float32) float32

ClipSWt returns SWt value clipped to valid range

func (*SWtParams) ClipWt

func (sp *SWtParams) ClipWt(wt float32) float32

ClipWt returns Wt value clipped to 0-1 range

func (*SWtParams) Defaults

func (sp *SWtParams) Defaults()

func (*SWtParams) InitWeightsSyn

func (sp *SWtParams) InitWeightsSyn(ctx *Context, syni uint32, rnd randx.Rand, mean, spct float32)

InitWeightsSyn initializes weight values based on WtInit randomness parameters for an individual synapse. It also updates the linear weight value based on the sigmoidal weight value.

func (*SWtParams) InitWeightsSynTrace

func (sp *SWtParams) InitWeightsSynTrace(ctx *Context, syni, di uint32)

InitWeightsSynTrace initializes SynapseTrace values for an individual synapse.

func (*SWtParams) LWtFromWts

func (sp *SWtParams) LWtFromWts(wt, swt float32) float32

LWtFromWts returns linear, learning LWt from wt and swt. LWt is set to reproduce given Wt relative to given SWt base value.

func (*SWtParams) LinFromSigWt

func (sp *SWtParams) LinFromSigWt(wt float32) float32

LinFromSigWt returns the linear weight from the sigmoidal contrast-enhanced weight. wt is centered at 1 and normed in a +/- 1 range around that; the return value is in the 0-1 range, centered at .5.

func (*SWtParams) SigFromLinWt

func (sp *SWtParams) SigFromLinWt(lw float32) float32

SigFromLinWt returns sigmoidal contrast-enhanced weight from linear weight, centered at 1 and normed in range +/- 1 around that in preparation for multiplying times SWt

func (*SWtParams) Update

func (sp *SWtParams) Update()

func (*SWtParams) WtFromDWt

func (sp *SWtParams) WtFromDWt(wt, lwt *float32, dwt, swt float32)

WtFromDWt updates the synaptic weights from accumulated weight changes. wt is the sigmoidal contrast-enhanced weight and lwt is the linear weight value.

func (*SWtParams) WtValue

func (sp *SWtParams) WtValue(swt, lwt float32) float32

WtValue returns the effective Wt value given the SWt and LWt values.
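
A minimal round-trip sketch relating the structural (SWt), linear learned (LWt), and effective (Wt) weight values via the functions documented above; values are illustrative and the import path is an assumption.

package main

import (
	"fmt"

	"github.com/emer/axon/v2/axon" // import path is an assumption
)

func main() {
	var sp axon.SWtParams
	sp.Defaults()

	swt := sp.ClipSWt(0.5) // structural weight, clipped to the Limit range
	lwt := float32(0.6)    // linear learned weight (illustrative)

	wt := sp.WtValue(swt, lwt)     // effective weight from both components
	back := sp.LWtFromWts(wt, swt) // should approximately recover lwt
	fmt.Println(wt, back)
}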

type SpikeNoiseParams

type SpikeNoiseParams struct {

	// add noise simulating background spiking levels
	On slbool.Bool

	// mean frequency of excitatory spikes -- typically 50Hz but multiple inputs increase rate -- poisson lambda parameter, also the variance
	GeHz float32 `default:"100"`

	// excitatory conductance per spike -- .001 has minimal impact, .01 can be strong, and .15 is needed to influence timing of clamped inputs
	Ge float32 `min:"0"`

	// mean frequency of inhibitory spikes -- typically 100Hz fast spiking but multiple inputs increase rate -- poisson lambda parameter, also the variance
	GiHz float32 `default:"200"`

	// inhibitory conductance per spike -- .001 has minimal impact, .01 can be strong, and .15 is needed to influence timing of clamped inputs
	Gi float32 `min:"0"`

	// add Ge noise to GeMaintRaw instead of standard Ge -- used for PTMaintLayer for example
	MaintGe slbool.Bool

	// Exp(-Interval) which is the threshold for GeNoiseP as it is updated
	GeExpInt float32 `display:"-" json:"-" xml:"-"`

	// Exp(-Interval) which is the threshold for GiNoiseP as it is updated
	GiExpInt float32 `display:"-" json:"-" xml:"-"`
}

SpikeNoiseParams parameterizes background spiking activity impinging on the neuron, simulated using a poisson spiking process.

func (*SpikeNoiseParams) Defaults

func (an *SpikeNoiseParams) Defaults()

func (*SpikeNoiseParams) PGe

func (an *SpikeNoiseParams) PGe(ctx *Context, p *float32, ni, di uint32) float32

PGe updates the GeNoiseP probability by multiplying it by a uniform random number in [0-1], and returns the Ge noise conductance if a spike is triggered.
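
The GeHz / GeExpInt / PGe descriptions imply a cumulative-product thresholding scheme for generating poisson background spikes. The following standalone sketch illustrates that scheme conceptually (it is not the library code): each cycle the running probability is multiplied by a uniform random number, and a noise spike occurs when it drops below Exp(-Interval), assuming Interval is the mean inter-spike interval in cycles.

package main

import (
	"fmt"
	"math"
	"math/rand"
)

func main() {
	hz := 100.0                 // analogous to GeHz: mean noise spike rate
	isiCycles := 1000.0 / hz    // expected inter-spike interval at 1 msec per cycle
	thr := math.Exp(-isiCycles) // analogous to GeExpInt = Exp(-Interval)

	p := 1.0
	nspikes := 0
	for cyc := 0; cyc < 1000; cyc++ {
		p *= rand.Float64() // accumulate product of uniforms
		if p <= thr {
			nspikes++ // here the per-spike Ge conductance would be added
			p = 1.0
		}
	}
	fmt.Println("noise spikes in 1000 cycles:", nspikes) // roughly hz on average
}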

func (*SpikeNoiseParams) PGi

func (an *SpikeNoiseParams) PGi(ctx *Context, p *float32, ni, di uint32) float32

PGi updates the GiNoiseP probability by multiplying it by a uniform random number in [0-1], and returns the Gi noise conductance if a spike is triggered.

func (*SpikeNoiseParams) ShouldDisplay

func (an *SpikeNoiseParams) ShouldDisplay(field string) bool

func (*SpikeNoiseParams) Update

func (an *SpikeNoiseParams) Update()

type SpikeParams

type SpikeParams struct {

	// threshold value Theta (Q) for firing output activation (.5 is a more accurate value based on AdEx biological parameters and normalization)
	Thr float32 `default:"0.5"`

	// post-spiking membrane potential to reset to, produces refractory effect if lower than VmInit -- 0.3 is the appropriate biologically based value for AdEx (Brette & Gerstner, 2005) parameters. See also RTau
	VmR float32 `default:"0.3"`

	// post-spiking explicit refractory period, in cycles -- prevents Vm updating for this number of cycles post firing -- Vm is reduced in exponential steps over this period according to RTau, being fixed at Tr to VmR exactly
	Tr int32 `min:"1" default:"3"`

	// time constant for decaying Vm down to VmR -- at end of Tr it is set to VmR exactly -- this provides a more realistic shape of the post-spiking Vm which is only relevant for more realistic channels that key off of Vm -- does not otherwise affect standard computation
	RTau float32 `default:"1.6667"`

	// if true, turn on exponential excitatory current that drives Vm rapidly upward for spiking as it gets past its nominal firing threshold (Thr) -- nicely captures the Hodgkin Huxley dynamics of Na and K channels -- uses Brette & Gerstner 2005 AdEx formulation
	Exp slbool.Bool `default:"true"`

	// slope in Vm (2 mV = .02 in normalized units) for extra exponential excitatory current that drives Vm rapidly upward for spiking as it gets past its nominal firing threshold (Thr) -- nicely captures the Hodgkin Huxley dynamics of Na and K channels -- uses Brette & Gerstner 2005 AdEx formulation
	ExpSlope float32 `default:"0.02"`

	// membrane potential threshold for actually triggering a spike when using the exponential mechanism
	ExpThr float32 `default:"0.9"`

	// for translating spiking interval (rate) into rate-code activation equivalent, what is the maximum firing rate associated with a maximum activation value of 1
	MaxHz float32 `default:"180" min:"1"`

	// constant for integrating the spiking interval in estimating spiking rate
	ISITau float32 `default:"5" min:"1"`

	// rate = 1 / tau
	ISIDt float32 `display:"-"`

	// rate = 1 / tau
	RDt float32 `display:"-"`
	// contains filtered or unexported fields
}

SpikeParams contains spiking activation function params. Implements a basic thresholded Vm model, and optionally the AdEx adaptive exponential function (adapt is KNaAdapt)

func (*SpikeParams) ActFromISI

func (sk *SpikeParams) ActFromISI(isi, timeInc, integ float32) float32

ActFromISI computes rate-code activation from estimated spiking interval

func (*SpikeParams) ActToISI

func (sk *SpikeParams) ActToISI(act, timeInc, integ float32) float32

ActToISI computes the spiking interval from a given rate-coded activation, based on the time increment (.001 = 1 msec default) and Act.Dt.Integ
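
As a hedged sketch of the ISI-to-rate mapping these two methods describe (illustrative only, not the package's exact code): an ISI equal to the interval corresponding to MaxHz maps to an activation of 1, and longer ISIs map to proportionally lower activations.

func actFromISISketch(isi, timeInc, integ, maxHz float32) float32 {
	if isi <= 0 {
		return 0
	}
	maxInt := 1.0 / (timeInc * integ * maxHz) // ISI (in cycles) corresponding to the MaxHz firing rate
	return maxInt / isi                       // 1 at MaxHz; proportionally smaller for longer intervals
}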

func (*SpikeParams) AvgFromISI

func (sk *SpikeParams) AvgFromISI(avg float32, isi float32) float32

AvgFromISI returns the updated running-average spiking ISI, incorporating the current isi interval value

func (*SpikeParams) Defaults

func (sk *SpikeParams) Defaults()

func (*SpikeParams) ShouldDisplay

func (sk *SpikeParams) ShouldDisplay(field string) bool

func (*SpikeParams) Update

func (sk *SpikeParams) Update()

type StartN

type StartN struct {

	// starting offset
	Start uint32

	// number of items
	N uint32
	// contains filtered or unexported fields
}

StartN holds a starting offset index and a number of items arranged from Start to Start+N (exclusive). This is not 16-byte padded and is only for use on the CPU side.

type SynComParams

type SynComParams struct {

	// type of conductance (G) communicated by this pathway
	GType PathGTypes

	// additional synaptic delay in msec for inputs arriving at this pathway.
	// Must be <= MaxDelay which is set during network building based on MaxDelay
	// of any existing Path in the network. Delay = 0 means a spike reaches
	// receivers in the next Cycle, which is the minimum time (1 msec).
	// Biologically, subtract 1 from biological synaptic delay values to set
	// corresponding Delay value.
	Delay uint32 `min:"0" default:"2"`

	// maximum value of Delay, based on MaxDelay values when the BuildGBuf
	// function was called during [Network.Build]. Cannot set it longer than this,
	// except by calling BuildGBuf on network after changing MaxDelay to a larger
	// value in any pathway in the network.
	MaxDelay uint32 `edit:"-"`

	// delay length = actual length of the GBuf buffer per neuron = Delay+1; just for speed
	DelLen uint32 `display:"-"`
}

SynComParams are synaptic communication parameters, used in the Path parameters. They include the synaptic delay and probability of failure, Inhib for inhibitory connections, and modulatory pathways that have multiplicative-like effects.

func (*SynComParams) Defaults

func (sc *SynComParams) Defaults()

func (*SynComParams) FloatFromGBuf

func (sc *SynComParams) FloatFromGBuf(ival int32) float32

FloatFromGBuf converts the given int32 value produced via FloatToGBuf back into a float32 (divides by factor). If the value is negative, a panic is triggered indicating there was numerical overflow in the aggregation. If this occurs, the FloatToIntFactor needs to be decreased.

func (*SynComParams) FloatToGBuf

func (sc *SynComParams) FloatToGBuf(val float32) int32

FloatToGBuf converts the given floating point value to a large int32 for accumulating in GBuf. Note: it is more efficient to bake the factor into the scale factor per path.

func (*SynComParams) FloatToIntFactor

func (sc *SynComParams) FloatToIntFactor() float32

FloatToIntFactor returns the factor used for converting float32 to int32 in GBuf encoding. Because total G is constrained via scaling factors to be around ~1, it is safe to use a factor that uses most of the available bits, leaving enough room to prevent overflow when adding together the different vals. For encoding, bake this into scale factor in SendSpike, and cast the result to int32.
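
A minimal sketch of the round trip these functions describe; the factor shown here is an assumed illustrative value, not necessarily what FloatToIntFactor returns.

const gbufFactor = float32(1 << 24) // assumed illustrative factor: large, but leaves headroom in int32

func floatToGBufSketch(val float32) int32 { return int32(val * gbufFactor) }

func floatFromGBufSketch(ival int32) float32 { return float32(ival) / gbufFactor }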

func (*SynComParams) ReadIndex

func (sc *SynComParams) ReadIndex(rnIndex, di uint32, cycTot int32, nRecvNeurs, maxData uint32) uint32

ReadIndex returns index for reading existing spikes from the GBuf buffer, based on the layer-based recv neuron index, data parallel idx, and the ReadOff offset from the CyclesTotal.

func (*SynComParams) ReadOff

func (sc *SynComParams) ReadOff(cycTot int32) uint32

ReadOff returns offset for reading existing spikes from the GBuf buffer, based on Context CyclesTotal counter which increments each cycle. This is logically the zero position in the ring buffer.

func (*SynComParams) RingIndex

func (sc *SynComParams) RingIndex(i uint32) uint32

RingIndex returns the wrap-around ring index for given raw index, for writing and reading spikes to the GBuf buffer, based on the Context.CyclesTotal counter.

	RN: 0     1     2       <- recv neuron indexes
	DI: 0 1 2 0 1 2 0 1 2   <- delay indexes
	C0: ^ v                 <- cycle 0, ring index: ^ = write, v = read
	C1:   ^ v               <- cycle 1, shift over by 1 -- overwrite last read
	C2: v   ^               <- cycle 2: read out value stored on C0 -- index wraps around

func (*SynComParams) Update

func (sc *SynComParams) Update()

func (*SynComParams) WriteOff

func (sc *SynComParams) WriteOff(cycTot int32) uint32

WriteOff returns offset for writing new spikes into the GBuf buffer, based on Context CyclesTotal counter which increments each cycle. This is logically the last position in the ring buffer.
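
Putting ReadOff, WriteOff, and RingIndex together, a hedged sketch of the per-neuron delay ring buffer they manage (illustrative indexing chosen to match the diagram above, not necessarily the package's exact code):

func delayRingSketch(cycTot, delay uint32) (writeSlot, readSlot uint32) {
	delLen := delay + 1                 // DelLen: Delay+1 slots per receiving neuron
	writeSlot = cycTot % delLen         // slot a newly sent spike is written into this cycle
	readSlot = (writeSlot + 1) % delLen // slot read this cycle; today's write is read Delay cycles later
	return
}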

type SynapseIndexVars

type SynapseIndexVars int32 //enums:enum

SynapseIndexVars are synapse-level indexes used to access neurons and paths from the individual synapse level of processing.

const (
	// SynRecvIndex is receiving neuron index in network's global list of neurons
	SynRecvIndex SynapseIndexVars = iota

	// SynSendIndex is sending neuron index in network's global list of neurons
	SynSendIndex

	// SynPathIndex is pathway index in global list of pathways organized as [Layers][RecvPaths]
	SynPathIndex
)
const SynapseIndexVarsN SynapseIndexVars = 3

SynapseIndexVarsN is the highest valid value for type SynapseIndexVars, plus one.

func SynapseIndexVarsValues

func SynapseIndexVarsValues() []SynapseIndexVars

SynapseIndexVarsValues returns all possible values for the type SynapseIndexVars.

func (SynapseIndexVars) Desc

func (i SynapseIndexVars) Desc() string

Desc returns the description of the SynapseIndexVars value.

func (SynapseIndexVars) Int64

func (i SynapseIndexVars) Int64() int64

Int64 returns the SynapseIndexVars value as an int64.

func (SynapseIndexVars) MarshalText

func (i SynapseIndexVars) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*SynapseIndexVars) SetInt64

func (i *SynapseIndexVars) SetInt64(in int64)

SetInt64 sets the SynapseIndexVars value from an int64.

func (*SynapseIndexVars) SetString

func (i *SynapseIndexVars) SetString(s string) error

SetString sets the SynapseIndexVars value from its string representation, and returns an error if the string is invalid.

func (SynapseIndexVars) String

func (i SynapseIndexVars) String() string

String returns the string representation of this SynapseIndexVars value.

func (*SynapseIndexVars) UnmarshalText

func (i *SynapseIndexVars) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (SynapseIndexVars) Values

func (i SynapseIndexVars) Values() []enums.Enum

Values returns all possible values for the type SynapseIndexVars.

type SynapseTraceVars

type SynapseTraceVars int32 //enums:enum

SynapseTraceVars are synaptic variables that depend on the data parallel index, for accumulating learning traces and weight changes per data.

const (
	// Tr is trace of synaptic activity over time, which is used for
	// credit assignment in learning.
	// In MatrixPath this is a tag that is then updated later when US occurs.
	Tr SynapseTraceVars = iota

	// DTr is delta (change in) Tr trace of synaptic activity over time.
	DTr

	// DiDWt is delta weight for each data parallel index (Di).
	// This is directly computed from the Ca values (in cortical version)
	// and then aggregated into the overall DWt (which may be further
	// integrated across MPI nodes), which then drives changes in Wt values.
	DiDWt
)
const SynapseTraceVarsN SynapseTraceVars = 3

SynapseTraceVarsN is the highest valid value for type SynapseTraceVars, plus one.

func SynapseTraceVarsValues

func SynapseTraceVarsValues() []SynapseTraceVars

SynapseTraceVarsValues returns all possible values for the type SynapseTraceVars.

func (SynapseTraceVars) Desc

func (i SynapseTraceVars) Desc() string

Desc returns the description of the SynapseTraceVars value.

func (SynapseTraceVars) Int64

func (i SynapseTraceVars) Int64() int64

Int64 returns the SynapseTraceVars value as an int64.

func (SynapseTraceVars) MarshalText

func (i SynapseTraceVars) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*SynapseTraceVars) SetInt64

func (i *SynapseTraceVars) SetInt64(in int64)

SetInt64 sets the SynapseTraceVars value from an int64.

func (*SynapseTraceVars) SetString

func (i *SynapseTraceVars) SetString(s string) error

SetString sets the SynapseTraceVars value from its string representation, and returns an error if the string is invalid.

func (SynapseTraceVars) String

func (i SynapseTraceVars) String() string

String returns the string representation of this SynapseTraceVars value.

func (*SynapseTraceVars) UnmarshalText

func (i *SynapseTraceVars) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (SynapseTraceVars) Values

func (i SynapseTraceVars) Values() []enums.Enum

Values returns all possible values for the type SynapseTraceVars.

type SynapseVars

type SynapseVars int32 //enums:enum

SynapseVars are the synapse variables representing synaptic weights, etc. These do not depend on the data parallel index (di). See SynapseTraceVars for variables that do depend on di.

const (
	// Wt is the effective synaptic weight value, determining how much conductance
	// one presynaptic spike drives into the receiving neuron. Biologically it represents
	// the number of effective AMPA receptors in the synapse.
	// Wt = [SWt] * WtSig([LWt]), where WtSig is the sigmoidal contrast enhancement
	// function that produces values between 0-2 based on LWt, centered on 1.
	Wt SynapseVars = iota

	// LWt is the rapid, online learning, linear weight value. It learns on every
	// trial according to the learning rate (LRate) parameter. Biologically,
	// this represents the internal biochemical processes that drive the trafficking
	// of AMPA receptors in the synaptic density.
	LWt

	// SWt is a slowly adapting structural weight value, which acts as a
	// multiplicative scaling factor on net synaptic efficacy [Wt].
	// Biologically it represents the physical size and efficacy of the dendritic spine.
	// SWt values adapt in a slower outer loop along with synaptic scaling,
	// with constraints to prevent runaway positive feedback loops and maintain
	// variance and further capacity to learn. Initial weight variance is partially or
	// fully captured in the SWt values, with LWt capturing the remainder.
	SWt

	// DWt is delta (change in) synaptic weight, from learning. This updates [LWt]
	// on every trial. It is reset to 0 after it is applied, but the network view
	// captures this value just prior to application.
	DWt

	// DSWt is the accumulated change in the [SWt] slow structural weight, computed
	// as the accumulation of [DWt] values over the longer slow weight update window.
	DSWt
)
const SynapseVarsN SynapseVars = 5

SynapseVarsN is the highest valid value for type SynapseVars, plus one.

func SynapseVarsValues

func SynapseVarsValues() []SynapseVars

SynapseVarsValues returns all possible values for the type SynapseVars.

func (SynapseVars) Desc

func (i SynapseVars) Desc() string

Desc returns the description of the SynapseVars value.

func (SynapseVars) Int64

func (i SynapseVars) Int64() int64

Int64 returns the SynapseVars value as an int64.

func (SynapseVars) MarshalText

func (i SynapseVars) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*SynapseVars) SetInt64

func (i *SynapseVars) SetInt64(in int64)

SetInt64 sets the SynapseVars value from an int64.

func (*SynapseVars) SetString

func (i *SynapseVars) SetString(s string) error

SetString sets the SynapseVars value from its string representation, and returns an error if the string is invalid.

func (SynapseVars) String

func (i SynapseVars) String() string

String returns the string representation of this SynapseVars value.

func (*SynapseVars) UnmarshalText

func (i *SynapseVars) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (SynapseVars) Values

func (i SynapseVars) Values() []enums.Enum

Values returns all possible values for the type SynapseVars.

type TDDaParams

type TDDaParams struct {

	// tonic baseline Ge level corresponding to DA = 0 -- positive and negative DA values map into the range 0 to 2*TonicGe -- used only for the spiking display of the computed DA value
	TonicGe float32

	// index of the TDIntegLayer to get the reward prediction from -- set during Build from the BuildConfig TDIntegLayName
	TDIntegLayIndex int32 `edit:"-"`
	// contains filtered or unexported fields
}

TDDaParams are params for dopamine (DA) signal as the temporal difference (TD) between the TDIntegLayer activations in the minus and plus phase.

func (*TDDaParams) Defaults

func (tp *TDDaParams) Defaults()

func (*TDDaParams) GeFromDA

func (tp *TDDaParams) GeFromDA(da float32) float32

GeFromDA returns excitatory conductance from DA dopamine value
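
A hedged sketch consistent with the TonicGe description above (DA = 0 maps to TonicGe, and DA in the +/- 1 range maps onto 0 to 2*TonicGe); not necessarily the package's exact code.

func geFromDASketch(tonicGe, da float32) float32 {
	return tonicGe * (1 + da) // da in [-1, 1] maps onto the [0, 2*TonicGe] range
}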

func (*TDDaParams) Update

func (tp *TDDaParams) Update()

type TDIntegParams

type TDIntegParams struct {

	// discount factor -- how much to discount the future prediction from TDPred
	Discount float32

	// gain factor on TD rew pred activations
	PredGain float32

	// index of the TDPredLayer to get the reward prediction from -- set during Build from the BuildConfig TDPredLayName
	TDPredLayIndex int32 `edit:"-"`
	// contains filtered or unexported fields
}

TDIntegParams are params for the TD reward integrator layer
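
As a hedged sketch of the textbook TD relation that Discount and PredGain parameterize (the layer's exact computation may differ): the minus-phase value carries the previous prediction, the plus-phase value combines the current reward with the discounted new prediction, and their difference is the TD dopamine signal described in TDDaParams.

func tdIntegSketch(discount, predGain, predPrev, predCur, rew float32) (minus, plus, tdDA float32) {
	minus = predGain * predPrev            // prediction carried over from the previous step
	plus = rew + discount*predGain*predCur // current reward plus discounted new prediction
	tdDA = plus - minus                    // temporal-difference signal (see TDDaParams)
	return
}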

func (*TDIntegParams) Defaults

func (tp *TDIntegParams) Defaults()

func (*TDIntegParams) Update

func (tp *TDIntegParams) Update()

type TrgAvgActParams

type TrgAvgActParams struct {

	// GiBaseInit sets an initial [GiBase] value, as a proportion of TrgRange.Max - [TrgAvg].
	// This gives neurons differences in intrinsic inhibition / leak as a starting bias.
	// This is independent of using the target values to scale synaptic weights. Only used if > 0.
	GiBaseInit float32 `default:"0"`

	// RescaleOn is whether to use target average activity mechanism to rescale
	// synaptic weights, so that activity tracks the target values.
	RescaleOn slbool.Bool `default:"true"`

	// ErrLRate is the learning rate for adjustments to [TrgAvg] value based on the
	// neuron-level error signal. Population TrgAvg values are renormalized to
	// a fixed overall average, in TrgRange. Generally, deviating from the default value
	// of this parameter doesn't make much difference.
	ErrLRate float32 `default:"0.02"`

	// SynScaleRate is a rate parameter for how much to scale synaptic weights
	// in proportion to the [AvgDif] between target and actual proportion activity.
	// This determines the effective strength of the constraint, and larger models
	// may need more than the weaker default value.
	SynScaleRate float32 `default:"0.005,0.0002"`

	// SubMean is the amount of the mean [TrgAvg] change to subtract when updating.
	// 1 = full zero sum changes. 1 works best in general, but in some cases it
	// may be better to start with 0 and then increase using network SetSubMean
	// method at a later point.
	SubMean float32 `default:"0,1"`

	// Permute the order of TrgAvg values within layer. Otherwise they are just
	// assigned in order from highest to lowest for easy visualization.
	// Generally must be true if any topographic weights are being used.
	Permute slbool.Bool `default:"true"`

	// Pool means use pool-level target values if pool-level inhibition and
	// 4D pooled layers are present. If pool sizes are relatively small,
	// then may not be useful to distribute targets just within pool.
	Pool slbool.Bool

	// TrgRange is the range of target normalized average activations.
	// Individual neuron [TrgAvg] values are assigned values within this range,
	// and clamped within this range. This is a critical parameter and the default
	// usually works best.
	TrgRange minmax.F32 `default:"{'Min':0.5,'Max':2}"`
	// contains filtered or unexported fields
}

TrgAvgActParams govern the target and actual long-term average activity in neurons. The target value is adapted by neuron-wise error signals and the difference between actual and target activity, which drives synaptic scaling at a slow timescale (Network.SlowInterval).

func (*TrgAvgActParams) Defaults

func (ta *TrgAvgActParams) Defaults()

func (*TrgAvgActParams) ShouldDisplay

func (ta *TrgAvgActParams) ShouldDisplay(field string) bool

func (*TrgAvgActParams) Update

func (ta *TrgAvgActParams) Update()

type USParams

type USParams struct {

	// gain factor applied to sum of weighted, drive-scaled positive USs
	// to compute PVpos primary summary value.
	// This is multiplied prior to 1/(1+x) normalization.
	// Use this to adjust the overall scaling of PVpos reward within 0-1
	// normalized range (see also PVnegGain).
	// Each USpos is assumed to be in 0-1 range, with a default of 1.
	PVposGain float32 `default:"2"`

	// gain factor applied to sum of weighted negative USs and Costs
	// to compute PVneg primary summary value.
	// This is multiplied prior to 1/(1+x) normalization.
	// Use this to adjust overall scaling of PVneg within 0-1
	// normalized range (see also PVposGain).
	PVnegGain float32 `default:"1"`

	// Negative US gain factor for encoding each individual negative US,
	// within their own separate input pools, multiplied prior to 1/(1+x)
	// normalization of each term for activating the USneg pools.
	// These gains are _not_ applied in computing summary PVneg value
	// (see PVnegWts), and generally must be larger than the weights to leverage
	// the dynamic range within each US pool.
	USnegGains []float32

	// Cost gain factor for encoding the individual Time, Effort etc costs
	// within their own separate input pools, multiplied prior to 1/(1+x)
	// normalization of each term for activating the Cost pools.
	// These gains are _not_ applied in computing summary PVneg value
	// (see CostWts), and generally must be larger than the weights to use
	// the full dynamic range within each US pool.
	CostGains []float32

	// weight factor applied to each separate positive US on the way to computing
	// the overall PVpos summary value, to control the weighting of each US
	// relative to the others. Each pos US is also multiplied by its dynamic
	// Drive factor as well.
	// Use PVposGain to control the overall scaling of the PVpos value.
	PVposWts []float32

	// weight factor applied to each separate negative US on the way to computing
	// the overall PVneg summary value, to control the weighting of each US
	// relative to the others, and to the Costs.  These default to 1.
	PVnegWts []float32

	// weight factor applied to each separate Cost (Time, Effort, etc) on the
	// way to computing the overall PVneg summary value, to control the weighting
	// of each Cost relative to the others, and relative to the negative USs.
	// The first pool is Time, second is Effort, and these are typically weighted
	// lower (.02) than salient simulation-specific USs (1).
	PVcostWts []float32

	// computed estimated US values, based on OFCposPT and VSMatrix gating, in PVposEst
	USposEst []float32 `edit:"-"`
}

USParams control how positive and negative USs and Costs are weighted and integrated to compute an overall PV primary value.
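
A hedged sketch of the gain-scaled, weighted, 1/(1+x)-normalized summary that the fields above describe, for the positive (drive-scaled) case; the slices are placeholders and the package's exact code may differ.

func pvSummarySketch(gain float32, wts, drives, uss []float32) float32 {
	sum := float32(0)
	for i, us := range uss {
		sum += wts[i] * drives[i] * us // weighted, drive-scaled US value
	}
	x := gain * sum
	return x / (1 + x) // 1/(1+x) style normalization keeps the summary in the 0-1 range
}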

func (*USParams) Alloc

func (us *USParams) Alloc(nPos, nNeg, nCost int)

func (*USParams) CostToZero

func (us *USParams) CostToZero(di uint32)

CostToZero sets all values of Cost, CostRaw to zero

func (*USParams) Defaults

func (us *USParams) Defaults()

func (*USParams) USnegCostFromRaw

func (us *USParams) USnegCostFromRaw(di uint32)

USnegCostFromRaw sets normalized NegUS, Cost values from Raw values

func (*USParams) USnegToZero

func (us *USParams) USnegToZero(di uint32)

USnegToZero sets all values of USneg, USnegRaw to zero

func (*USParams) USposToZero

func (us *USParams) USposToZero(di uint32)

USposToZero sets all values of USpos to zero

func (*USParams) Update

func (us *USParams) Update()

type UrgencyParams

type UrgencyParams struct {

	// value of raw urgency where the urgency activation level is 50%
	U50 float32

	// exponent on the urge factor -- valid numbers are 1,2,4,6
	Power int32 `default:"4"`

	// threshold for urge -- cuts off small baseline values
	Thr float32 `default:"0.2"`

	// gain factor for driving tonic DA levels as a function of urgency
	DAtonic float32 `default:"50"`
}

UrgencyParams has urgency (increasing pressure to do something) and parameters for updating it. Raw urgency integrates effort when _not_ goal-engaged, while effort (negative US 0) integrates when a goal _is_ engaged.

func (*UrgencyParams) AddEffort

func (ur *UrgencyParams) AddEffort(di uint32, inc float32)

AddEffort adds an effort increment of urgency and updates the Urge factor

func (*UrgencyParams) Defaults

func (ur *UrgencyParams) Defaults()

func (*UrgencyParams) Reset

func (ur *UrgencyParams) Reset(di uint32)

Reset resets the raw urgency back to zero, at the start of a new gating event

func (*UrgencyParams) Update

func (ur *UrgencyParams) Update()

func (*UrgencyParams) Urge

func (ur *UrgencyParams) Urge(di uint32) float32

Urge computes normalized Urge value from Raw, and sets DAtonic from that

func (*UrgencyParams) UrgeFun

func (ur *UrgencyParams) UrgeFun(urgency float32) float32

UrgeFun is the urgency function: urgency / (urgency + 1) where urgency = (Raw / U50)^Power
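
A minimal sketch of the formula stated above (Power is restricted to 1, 2, 4, or 6):

func urgeFunSketch(raw, u50 float32, power int32) float32 {
	u := float32(1)
	for i := int32(0); i < power; i++ {
		u *= raw / u50 // (Raw / U50)^Power
	}
	return u / (u + 1) // urgency / (urgency + 1)
}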

type VTAParams

type VTAParams struct {

	// gain on CeM activity difference (CeMPos - CeMNeg) for generating LV CS-driven dopamine values
	CeMGain float32 `default:"0.75"`

	// gain on computed LHb DA (Burst - Dip) -- for controlling DA levels
	LHbGain float32 `default:"1.25"`

	// threshold on ACh level required to generate LV CS-driven dopamine burst
	AChThr float32 `default:"0.5"`
	// contains filtered or unexported fields
}

VTAParams are for computing overall VTA DA based on LHb PVDA (primary value -- at US time, computed at start of each trial and stored in LHbPVDA global value) and Amygdala (CeM) CS / learned value (LV) activations, which update every cycle.

func (*VTAParams) Defaults

func (vt *VTAParams) Defaults()

func (*VTAParams) Update

func (vt *VTAParams) Update()

func (*VTAParams) VTADA

func (vt *VTAParams) VTADA(ctx *Context, di uint32, ach float32, hasRew bool)

VTADA computes the final DA value from the LHb values. The ACh value from LDT is passed as a parameter.

type ValenceTypes

type ValenceTypes int32 //enums:enum

ValenceTypes are types of valence coding: positive or negative.

const (
	// Positive valence codes for outcomes aligned with drives / goals.
	Positive ValenceTypes = iota

	// Negative valence codes for harmful or aversive outcomes.
	Negative

	// Cost codes for continuous ongoing cost factors such as Time and Effort
	Cost
)
const ValenceTypesN ValenceTypes = 3

ValenceTypesN is the highest valid value for type ValenceTypes, plus one.

func ValenceTypesValues

func ValenceTypesValues() []ValenceTypes

ValenceTypesValues returns all possible values for the type ValenceTypes.

func (ValenceTypes) Desc

func (i ValenceTypes) Desc() string

Desc returns the description of the ValenceTypes value.

func (ValenceTypes) Int64

func (i ValenceTypes) Int64() int64

Int64 returns the ValenceTypes value as an int64.

func (ValenceTypes) MarshalText

func (i ValenceTypes) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*ValenceTypes) SetInt64

func (i *ValenceTypes) SetInt64(in int64)

SetInt64 sets the ValenceTypes value from an int64.

func (*ValenceTypes) SetString

func (i *ValenceTypes) SetString(s string) error

SetString sets the ValenceTypes value from its string representation, and returns an error if the string is invalid.

func (ValenceTypes) String

func (i ValenceTypes) String() string

String returns the string representation of this ValenceTypes value.

func (*ValenceTypes) UnmarshalText

func (i *ValenceTypes) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (ValenceTypes) Values

func (i ValenceTypes) Values() []enums.Enum

Values returns all possible values for the type ValenceTypes.

type ViewTimes

type ViewTimes int32 //enums:enum

ViewTimes are the options for when the NetView can be updated.

const (
	// Cycle is an update of neuron state, equivalent to 1 msec of real time.
	Cycle ViewTimes = iota

	// FastSpike is 10 cycles (msec) or 100 Hz. This is the fastest spiking time
	// generally observed in the neocortex.
	FastSpike

	// Gamma is 25 cycles (msec) or 40 Hz. Neocortical activity often exhibits
	// synchrony peaks in this range.
	Gamma

	// Beta is 50 cycles (msec) or 20 Hz (two Gammas).
	// Gating in the basal ganglia and associated updating in prefrontal
	// cortex occurs at this frequency.
	Beta

	// Alpha is 100 cycles (msec) or 10 Hz (two Betas).
	// Posterior neocortex exhibits synchrony peaks in this range,
	// corresponding to the intrinsic bursting frequency of layer 5
	// IB neurons, and corticothalamic loop resonance.
	Alpha

	// Phase is the Minus or Plus phase, where plus phase is bursting / outcome
	// that drives positive learning relative to prediction in minus phase.
	// Minus phase is at 150 cycles (msec).
	Phase

	// Theta is 200 cycles (msec) or 5 Hz (two Alphas), i.e., a Trial.
	// This is the modal duration of a saccade, the update frequency of
	// medial temporal lobe episodic memory, and the minimal predictive learning cycle
	// (perceive on Alpha 1, predict on 2).
	Theta
)
const ViewTimesN ViewTimes = 7

ViewTimesN is the highest valid value for type ViewTimes, plus one.

func ViewTimesValues

func ViewTimesValues() []ViewTimes

ViewTimesValues returns all possible values for the type ViewTimes.

func (ViewTimes) Cycles

func (vt ViewTimes) Cycles() int

Cycles returns the number of cycles associated with a given view time.

func (ViewTimes) Desc

func (i ViewTimes) Desc() string

Desc returns the description of the ViewTimes value.

func (ViewTimes) Int64

func (i ViewTimes) Int64() int64

Int64 returns the ViewTimes value as an int64.

func (ViewTimes) MarshalText

func (i ViewTimes) MarshalText() ([]byte, error)

MarshalText implements the encoding.TextMarshaler interface.

func (*ViewTimes) SetInt64

func (i *ViewTimes) SetInt64(in int64)

SetInt64 sets the ViewTimes value from an int64.

func (*ViewTimes) SetString

func (i *ViewTimes) SetString(s string) error

SetString sets the ViewTimes value from its string representation, and returns an error if the string is invalid.

func (ViewTimes) String

func (i ViewTimes) String() string

String returns the string representation of this ViewTimes value.

func (*ViewTimes) UnmarshalText

func (i *ViewTimes) UnmarshalText(text []byte) error

UnmarshalText implements the encoding.TextUnmarshaler interface.

func (ViewTimes) Values

func (i ViewTimes) Values() []enums.Enum

Values returns all possible values for the type ViewTimes.
