README

PBWM

See the sir2 example for a working model.

PBWM is the prefrontal-cortex basal-ganglia working memory model (O'Reilly & Frank, 2006), where the basal ganglia (BG) drives gating of PFC working memory maintenance, switching it dynamically between updating of new information vs. maintenance of existing information. It was originally inspired by existing data, biology, and theory about the role of the BG in motor action selection, and by the LSTM (long short-term memory) computational model of Hochreiter & Schmidhuber (1997), which solved limitations in existing recurrent backpropagation networks by adding dynamic input and output gates. These LSTM models have experienced a significant resurgence along with backpropagation neural networks in general.

The simple computational idea is that the BG gating signals fire phasically to disinhibit corticothalamic loops through the PFC, enabling the robust maintenance of new information there. In the absence of such gating signals, the PFC will continue to maintain existing information. The output of the BG through the GPi (globus pallidus internal segment) and homologous SNr (substantia nigra pars reticulata) represents a major bottleneck with a relatively small number of neurons, meaning that each BG gating output affects a relatively large group of PFC neurons. One idea is that these BG gating signals target different PFC hypercolumns or stripes -- these correspond to Pools of neurons within the layers in the current implementation.

In the current version, we integrate with the broader DeepLeabra framework (in the deep directory) that incorporates the separation between superficial and deep layers in cortex and their connections with the thalamus: the thalamocortical loops are principally between the deep layers. Thus, within a given PFC area, you can have the superficial layers being more sensitive to current inputs, while the deep layers are more robustly maintaining information through the thalamocortical loops.

Furthermore, it allows a unification of maintenance and output gating, both of which effectively open up a gate between superficial and deep layers (via the thalamocortical loops) -- deep layers drive the principal output of frontal areas (e.g., in M1, deep layers directly drive motor actions through subcortical projections). In PFC, deep layers are a major source of top-down activation to other cortical areas, in keeping with the idea of executing "cognitive actions" that influence processing elsewhere in the brain. The only real difference is whether the neurons exhibit significant sustained maintenance, or are only phasically activated by gating. Both profiles are widely observed (e.g., Sommer & Wurtz, 2000).

The key, more complex computational challenges are:

  • How to actually sequence the updating of PFC from maintaining prior information to now encoding new information, which requires some mechanism for clearing out the prior information.

  • How maintenance and output gating within a given area are organized and related to each other.

  • Learning when the BG should drive update vs. maintain signals, which is particularly challenging because of the temporally-delayed nature of the consequence of an earlier gating action -- you only know if it was useful to maintain something later when you need it. This is the temporal credit assignment problem.

Updating

For the updating question, we compute a BG gating signal in the middle of the 1st quarter (set by GPiThal.Timing.GateQtr) of the overall AlphaCycle of processing (cycle 18 of 25, per the GPiThal.Timing.Cycle parameter), which has the immediate effect of clearing out the existing PFC activations (see PFCLayer.Maint.Clear), such that by the end of the next quarter (Q2) the new information is sufficiently represented in the superficial PFC neurons. At the end of the 2nd quarter (per PFCLayer.DeepBurst.BurstQtr), the superficial activations drive updating of the deep layers (via the standard deep CtxtGe computation), to maintain the new information. In keeping with the beta frequency cycle of the BG / PFC circuit (20 Hz, 50 msec cycle time), we support a second round of gating in the middle of the 2nd quarter (again per GPiThal.Timing.GateQtr), followed by maintenance activating in deep layers after the 4th quarter.

For PFCout layers (with PFCLayer.Gate.OutGate set), there is an OutQ1Only option (default true) which, with PFCLayer.DeepBurst.BurstQtr set to Q1, causes output gating to update at the end of the 1st quarter, which gives more time for it to drive output responding. And the 2nd beta-frequency gating comes too late in a standard AlphaCycle based update sequence to drive output, so it is not useful. However, supporting two phases of maintenance updating allows for stripes cleared by output gating (see next subsection) to update in the 2nd half of the alpha cycle, which is useful.

In summary, for PFCmnt maintenance gating:

  • Q1, cycle 18: BG gating, PFC clearing of any existing act
  • Q2, end: Super -> Deep (CtxtGe)
  • Q2, cycle 18: BG gating, PFC clearing
  • Q4, end: Super -> Deep (CtxtGe)

And PFCout output gating:

  • Q1, cycle 18: BG gating -- triggers clearing of corresponding Maint stripe
  • Q1, end: Super -> Deep (CtxtGe) so Deep can drive network output layers
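
As a concrete illustration of the timing parameters referenced above, here is a minimal Go sketch (assuming the standard emer import paths and the leabra.Q1/Q2 quarter constants; the ConfigGating helper name is hypothetical) of setting a GPiThalLayer to gate in Q1 and Q2 at cycle 18, with output gating restricted to Q1:

package pbwmexample // hypothetical example package, not part of pbwm

import (
	"github.com/emer/leabra/leabra"
	"github.com/emer/leabra/pbwm"
)

// ConfigGating sketches the timing configuration described above: gating in
// Q1 and Q2 at cycle 18 of each quarter, with output gating restricted to Q1.
func ConfigGating(gpi *pbwm.GPiThalLayer, pfcOut *pbwm.PFCLayer) {
	gpi.Timing.GateQtr = 0           // clear the bitflag field, then set quarters explicitly
	gpi.Timing.SetGateQtr(leabra.Q1) // first gating window
	gpi.Timing.SetGateQtr(leabra.Q2) // second, beta-frequency gating window
	gpi.Timing.Cycle = 18            // gating signal computed at cycle 18 of the quarter

	pfcOut.Gate.OutQ1Only = true // output gating drives deep updating at end of Q1 (default)
}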

Maint & Output Organization

For the organization of Maint and Out gating, we make the simplifying assumption that each hypercolumn ("stripe") of maintenance PFC has a corresponding output stripe, so you can separately decide to maintain something for an arbitrary amount of time, and subsequently use that information via output gating. A key question then becomes: what happens to the maintained information? Empirically, many studies show a sudden termination of active maintenance at the point of an action using maintained information Sommer & Wurtz, 2000, which makes computational sense: "use it and lose it". In addition, it is difficult to come up with a good positive signal to independently drive clearing: it is much easier to know when you do need information, than to know the point at which you no longer need it. Thus, we have output gating clear corresponding maintenance gating (there is an option to turn this off too, if you want to experiment). The availability of "open" stripes for subsequent maintenance after this clearing seems to be computationally beneficial in our tests.

Learning

Finally, for the learning question, we adopt a computationally powerful form of trace-based dopamine-modulated learning (in MatrixTracePrjn), where each BG gating action leaves a synaptic trace, which is finally converted into a weight change as a function of the next phasic dopamine signal, providing a summary "outcome" evaluation of the net value of the recent gating actions. This directly solves the temporal credit assignment problem, by allowing the synapses to bridge the temporal gap between action and outcome, over a reasonable time window, with multiple such gating actions separately encodable.
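
To make the core mechanism concrete, here is a deliberately simplified Go sketch of the trace idea, using assumed variable names -- it is not the actual MatrixTracePrjn DWt code, just the gist: each gating event adds to a synaptic trace, and the next phasic dopamine signal converts the accumulated trace into a weight change and clears it.

package pbwmexample // hypothetical example package, not part of pbwm

// TraceDWt is an assumed, simplified form of trace-based dopamine-modulated learning.
func TraceDWt(tr, gateAct, recvAct, sendAct, da, lrate float32) (dwt, newTr float32) {
	newTr = tr + gateAct*recvAct*sendAct // trace laid down at the time of gating
	if da != 0 {
		dwt = lrate * da * newTr // outcome DA reads out the accumulated trace
		newTr = 0                // trace is cleared once it has been "used"
	}
	return
}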

Biologically, we suggest that widely-studied synaptic tagging mechanisms have the appropriate properties for this trace mechanism. Extensive research has shown that these synaptic tags, based on actin fiber networks in the synapse, can persist for up to 90 minutes, and when a subsequent strong learning event occurs, the tagged synapses are also strongly potentiated (Redondo & Morris, 2011, Rudy, 2015, Bosch & Hayashi, 2012).

This form of trace-based learning is very effective computationally, because it does not require any other mechanisms to enable learning about the reward implications of earlier gating events. In earlier versions of the PBWM model, we relied on CS (conditioned stimulus) based phasic dopamine to reinforce gating, but this scheme requires that the PFC maintained activations function as a kind of internal CS signal, and that the amygdala learn to decode these PFC activation states to determine if a useful item had been gated into memory.

The CS-driven DA under the trace-based framework effectively serves to reinforce sub-goal actions that lead to the activation of a CS, which in turn predicts final reward outcomes. Thus, the CS DA provides an intermediate, bridging kind of reinforcement evaluating the set of actions leading up to that point -- a kind of "checkpoint" of success prior to getting the real thing.

Layers

Here are the details about each different layer type in PBWM:

  • MatrixLayer: this is the dynamic gating system representing the matrix units within the dorsal striatum of the basal ganglia. The MatrixGo layer contains the "Go" (direct pathway) units (DaR = D1R), while the MatrixNoGo layer contains the "NoGo" (indirect pathway) units (DaR = D2R). The Go units, expressing more D1 receptors, increase their weights from dopamine bursts and decrease them from dopamine dips, and vice-versa for the NoGo units with more D2 receptors (see the sketch after this list). As is more consistent with the BG biology than earlier versions of this model, most of the competition to select the final gating action happens in the GPe and GPi (with the hyperdirect pathway to the subthalamic nucleus also playing a critical role, but not included in this more abstracted model), with only a relatively weak level of competition within the Matrix layers. We also combine the maintenance and output gating stripes all in the same Matrix layer, which allows them to all compete with each other here, and more importantly in the subsequent GPi and GPe stripes. This competitive interaction is critical for allowing the system to learn to properly coordinate when it is appropriate to update / store new information for maintenance vs. when it is important to select from currently stored representations via output gating.

  • GPeNoGo: This provides a first round of competition between all the NoGo stripes, which critically prevents the model from driving NoGo to all of the stripes at once. Indeed, there is physiological and anatomical evidence for NoGo unit collateral inhibition onto other NoGo units. Without this NoGo-level competition, models frequently ended up in a state where all stripes were inhibited by NoGo, and when nothing happens, nothing can be learned, so the model essentially fails at that point!

  • GPiThalLayer: Implements a strong competition for selecting which stripe gets to gate, based on projections from the MatrixGo units, and the NoGo influence from GPeNoGo, which can effectively veto a few of the possible stripes to prevent gating. We have combined the functions of the GPi (or SNr) and the Thalamus into a single abstracted layer, which has the excitatory kinds of outputs that we would expect from the thalamus, but also implements the stripe-level competition mediated by the GPi/SNr. If there is more overall Go than NoGo activity, then the GPiThal unit gets activated, which then effectively establishes an excitatory loop through the corresponding deep layers of the PFC, with which the thalamus neurons are bidirectionally interconnected. This layer uses the GateLayer framework to update GateState, which is broadcast to the Matrix and PFC layers so they have current gating state information.

  • PFCLayer: Uses super vs. deep layer dynamics with gating (via GateState values broadcast from GPiThal) determining when super drives deep. Actual maintenance in the deep layer can be shaped using PFCDyn fixed dynamics, which provide a simple way of specifying a temporally-evolving activation pattern over the layer, with a minimal case of just stable fixed maintenance. Gating in the out stripe drives clearing of maintenance in the corresponding mnt stripe.
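
The D1R vs. D2R weight-change asymmetry noted in the MatrixLayer item above corresponds to the MatrixLayer.DALrnFmDA method; the following is an assumed sketch of that logic (the actual gain handling may differ):

package pbwmexample // hypothetical example package, not part of pbwm

import "github.com/emer/leabra/pbwm"

// DaLrn sketches the effective learning dopamine for Matrix units: gain-scale
// bursts and dips, then reverse the sign for D2R (NoGo) units so that dips
// increase their weights.
func DaLrn(da, burstGain, dipGain float32, daR pbwm.DaReceptors) float32 {
	if da > 0 {
		da *= burstGain
	} else {
		da *= dipGain
	}
	if daR == pbwm.D2R {
		da = -da
	}
	return da
}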

Dopamine layers

This package provides core infrastructure for neuromodulation of all types. The base type ModLayer contains layer-level variables recording the DA dopamine, ACh acetylcholine, and SE serotonin neuromodulator values. The corresponding DaSrcLayer, AChSrcLayer, and SeSrcLayer types can broadcast these neuromodulators to a list of layers (note: we are not using MarkerConSpec from the C++ version in this code -- instead, just lists of layer names are used).

The minimal ClampDaLayer can be used to send an arbitrary DA signal. There are TD versions for temporal differences algorithm, and a basic Rescorla-Wagner delta rule version in RWDaLayer and RWPredLayer. The separate pvlv package builds the full biologically-based pvlv model on top of this basic DA infrastructure.

Given that PBWM minimally requires a RW-level "primary value" dopamine signal, basic models can use this as follows:

  • Rew, RWPred, SNc: The Rew layer represents the reward activation driven on the Recall trials based on whether the model gets the problem correct or not, with either a 0 (error, no reward) or 1 (correct, reward) activation. RWPred is the prediction layer that learns based on dopamine signals to predict how much reward will be obtained on this trial. The SNc is the final dopamine unit activation, reflecting reward prediction errors. When outcomes are better (worse) than expected or states are predictive of reward (no reward), this unit will increase (decrease) activity. For convenience, tonic (baseline) states are represented here with zero values, so that phasic deviations above and below this value are observable as positive or negative activations. (In the real system negative activations are not possible, but negative prediction errors are observed as a pause in dopamine unit activity, such that firing rate drops from baseline tonic levels). Biologically the SNc actually projects dopamine to the dorsal striatum, while the VTA projects to the ventral striatum, but there is no functional difference in this level of model.
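
A minimal sketch of wiring these layers into a model, using the AddRWLayers wizard method (the AddRW helper name and placement arguments are illustrative):

package pbwmexample // hypothetical example package, not part of pbwm

import (
	"github.com/emer/emergent/relpos"
	"github.com/emer/leabra/pbwm"
)

// AddRW adds the Rescorla-Wagner dopamine layers (Rew, RWPred, DA/SNc) to a
// pbwm.Network, positioned behind the previously added layers.
func AddRW(net *pbwm.Network) {
	rew, rwPred, da := net.AddRWLayers("", relpos.Behind, 2)
	_, _, _ = rew, rwPred, da // connect Rew to task outcome inputs etc. as needed
}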

Implementation Details

Network

The pbwm.Network provides "wizard" methods for constructing and configuring standard PBWM and RL components.
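
For example, a full PBWM circuit can be laid out with a single call to the AddPBWM wizard -- the sketch below is illustrative (the pool and neuron counts are arbitrary, and the BuildPBWM helper is hypothetical):

package pbwmexample // hypothetical example package, not part of pbwm

import "github.com/emer/leabra/pbwm"

// BuildPBWM sketches use of the wizard methods to create the Matrix, GPe,
// GPiThal, and PFC layers in one call; further connections, Build, and
// weight init would follow in a full model.
func BuildPBWM() *pbwm.Network {
	net := &pbwm.Network{}
	net.InitName(net, "PBWM") // standard emer network naming
	mtxGo, mtxNoGo, gpe, gpi, pfcMnt, pfcMntD, pfcOut, pfcOutD := net.AddPBWM("", 1, 4, 4, 7, 7, 7, 7)
	_, _, _, _ = mtxGo, mtxNoGo, gpe, gpi
	_, _, _, _ = pfcMnt, pfcMntD, pfcOut, pfcOutD
	net.Defaults()
	return net
}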

It extends the core Cycle method called every cycle of updating as follows:

func (nt *Network) Cycle(ltime *leabra.Time) {
	nt.Network.Network.Cycle(ltime) // basic version from leabra.Network (not deep.Network, which calls DeepBurst)
	nt.GateSend(ltime) // GateLayer (GPiThal) computes gating, sends to other layers
	nt.RecGateAct(ltime) // Record activation state at time of gating (in ActG neuron var)
	nt.DeepBurst(ltime) // Act -> Burst (during BurstQtr) (see deep for details)
	nt.SendMods(ltime) // send modulators (DA)
}

which determines the additional steps of computation after the activations have been updated in the current cycle, supporting the extra gating and DA modulation functions.

From deep.Network, there is a key addition to the QuarterFinal method, which calls DeepCtxt, which in turn calls SendCtxtGe and CtxtFmGe -- this is how the deep layers get their "context" inputs from corresponding superficial layers (mediated by layer 5IB neurons in the biology, which burst periodically). This is when the PFC layers update deep from super.
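
The following sketch shows where this fits in the standard leabra quarter loop (the loop structure is the usual leabra pattern, assumed here rather than taken from this package):

package pbwmexample // hypothetical example package, not part of pbwm

import (
	"github.com/emer/leabra/leabra"
	"github.com/emer/leabra/pbwm"
)

// AlphaCyc runs one alpha cycle of quarters, showing where QuarterFinal --
// and thus the Super -> Deep CtxtGe update described above -- occurs
// relative to the per-cycle updating.
func AlphaCyc(net *pbwm.Network, ltime *leabra.Time) {
	for qtr := 0; qtr < 4; qtr++ {
		for cyc := 0; cyc < ltime.CycPerQtr; cyc++ {
			net.Cycle(ltime) // includes GateSend, RecGateAct, DeepBurst, SendMods
			ltime.CycleInc()
		}
		net.QuarterFinal(ltime) // deep: DeepCtxt -> SendCtxtGe -> CtxtFmGe
		ltime.QuarterInc()
	}
}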

GPiThal and GateState

GPiThalLayer is the source of the key GateState:

GateState.Cnt provides the key tracker of gating state. It is updated separately in each layer -- GPiThal only broadcasts the basic Act and Now signals. For PFC, Cnt is:

  • -1 = initialized to this value, not maintaining.
  • 0 = just gated -- any time the GPiThal activity exceeds the gating threshold (at the specified Timing.Cycle) we reset the counter (re-gate).
  • >= 1: maintaining -- first gating goes to 1 in QuarterFinal of the BurstQtr gating quarter, counts up thereafter.
  • <= -1: not maintaining -- when cleared, reset to -1 in Quarter_Init just following the clearing quarter, counts down thereafter.
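
The following small helper (illustrative only, not part of the package) shows how these Cnt conventions can be read:

package pbwmexample // hypothetical example package, not part of pbwm

import "github.com/emer/leabra/pbwm"

// MaintStatus interprets GateState.Cnt per the conventions listed above.
func MaintStatus(gs *pbwm.GateState) string {
	switch {
	case gs.Cnt == 0:
		return "just gated"
	case gs.Cnt >= 1:
		return "maintaining"
	default: // Cnt <= -1
		return "not maintaining"
	}
}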

All gated PBWM layers are of type GateLayer which just has infrastructure to maintain GateState values and synchronize across layers.

PFCLayer

PFCLayer supports mnt (maintenance) and out (output) gating types, and Super vs. Deep PFC layers.

  • Cycle:

    • ActFmG calls Gating, which is only run on Super layers and only does something when GateState.Now = true: it calls DecayStatePool if GateState.Act > 0 (i.e., the stripe has gated) to clear out any existing activation, and resets Cnt = 0, indicating just gated. For out layers, it also clears the corresponding mnt stripe.

    • BurstFmAct for Super layers applies gating from the Cnt state to the Burst activations (which reflect 5IB activity as gated by the BG, and are what is sent to the deep layer during SendCtxtGe).

  • QuarterFinal for Super calls GateStateToDeep to copy updated GateState info computed in Gating over to the corresponding Deep layer.

    • SendCtxtGe (called after QuarterFinal by Network in a separate pass) updates the GateState.Cnt for Super and Deep layers, incrementing Cnt up for maintaining layers, and decrementing for non-maintaining. Super then sends CtxtGe to Deep.

    • CtxtFmGe (only Deep) gets the CtxtGe value from super (always) and calls DeepMaint, which applies the PFCDyn dynamics to the CtxtGe currents if using those (see the sketch below). It saves the initial CtxtGe as a Maint neuron-level value, which is visible as the Cust1 variable in NetView, and is used to multiply the dynamics by the original activation strength.
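
As a sketch of configuring those PFCDyn dynamics on a deep maintenance layer (the ConfigDyns helper and the tau value are illustrative):

package pbwmexample // hypothetical example package, not part of pbwm

import "github.com/emer/leabra/pbwm"

// ConfigDyns sets either simple stable maintenance, or the full set of
// temporally-evolving maintenance profiles, on a deep PFC maintenance layer.
func ConfigDyns(pfcMntD *pbwm.PFCLayer, full bool) {
	if full {
		pfcMntD.Dyns.FullDyn(10) // stable, phasic, rising, decaying, up/down profiles
	} else {
		pfcMntD.Dyns.MaintOnly() // every unit just maintains its gated value
	}
}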

In summary, for PFCmnt maintenance gating:

  • Q1, cycle 18: BG gating, PFC clearing of any existing act: Gating call
  • Q2, end: Super -> Deep (CtxtGe): QuarterFinal based on BurstFmAct gated Burst vals
  • ...

And PFCout output gating:

  • Q1, cycle 18: BG gating -- triggers clearing of corresponding Maint stripe
  • Q1, end: Super -> Deep (CtxtGe) so Deep can drive network output layers

TODO

  • Matrix uses net_gain = 0.5 -- why?? important for SIR2?, but not SIR1

  • patch -- not essential for SIR1, test in SIR2

  • TAN -- not essential for SIR1, test for SIR2

  • del_inhib -- delta inhibition -- SIR1 MUCH faster learning without! test for SIR2

  • slow_wts -- not important for SIR1, test for SIR2

  • GPe, GPi learning too -- allows Matrix to act like a hidden layer!

  • Currently only supporting 1-to-1 Maint and Out prjns -- Out gating automatically clears same pool in maint -- could explore different arrangements

References

Bosch, M., & Hayashi, Y. (2012). Structural plasticity of dendritic spines. Current Opinion in Neurobiology, 22(3), 383–388. https://doi.org/10.1016/j.conb.2011.09.002

Hochreiter, S., & Schmidhuber, J. (1997). Long Short-Term Memory. Neural Computation, 9, 1735–1780.

O'Reilly, R.C. & Frank, M.J. (2006), Making Working Memory Work: A Computational Model of Learning in the Frontal Cortex and Basal Ganglia. Neural Computation, 18, 283-328.

Redondo, R. L., & Morris, R. G. M. (2011). Making memories last: The synaptic tagging and capture hypothesis. Nature Reviews Neuroscience, 12(1), 17–30. https://doi.org/10.1038/nrn2963

Rudy, J. W. (2015). Variation in the persistence of memory: An interplay between actin dynamics and AMPA receptors. Brain Research, 1621, 29–37. https://doi.org/10.1016/j.brainres.2014.12.009

Sommer, M. A., & Wurtz, R. H. (2000). Composition and topographic organization of signals sent from the frontal eye field to the superior colliculus. Journal of Neurophysiology, 83(4), 1979–2001.

Documentation

Overview

Package pbwm provides the prefrontal cortex basal ganglia working memory (PBWM) model of the basal ganglia (BG) and prefrontal cortex (PFC) circuitry that supports dynamic BG gating of PFC robust active maintenance.

This package builds on the deep package for defining thalamocortical circuits involved in predictive learning -- the BG basically acts to gate these circuits.

It provides a basis for dopamine-modulated processing of all types, and is the base package for the PVLV model package built on top of it.

There are multiple levels of functionality to allow for flexibility in exploring new variants.

Each different Layer type defines and manages its own Neuron type, despite some redundancy, so that each layer has exactly the neuron variables it needs. However, a Network must expose a single consistent set of Neuron variables, which is given by ModNeuronVars and the NeuronVars enum. In many cases, those "neuron" variables are actually stored at the layer level rather than per neuron.

Naming rule: DA when a singleton, DaMod (lowercase a) when CamelCased with something else
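
For example (illustrative only), a PBWM-specific "neuron" variable can be read through the shared NeuronVars indexes via the PBWMLayer interface:

package pbwmexample // hypothetical example package, not part of pbwm

import "github.com/emer/leabra/pbwm"

// UnitDA returns the DA value for flat unit index 0 of the given layer,
// using the shared NeuronVars index described above.
func UnitDA(ly pbwm.PBWMLayer) float32 {
	return ly.UnitValByIdx(pbwm.DA, 0)
}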

Basic Level

  • ModLayer has DA, ACh, SE -- can be modulated

  • DaSrcLayer sends DA to a list of layers (does not use Prjns)

  • AChSrcLayer, SeSrcLayer likewise for ACh and SE (serotonin)

  • GateLayer has GateStates in 1-to-1 correspondence with Pools, to keep track of gating state -- source gating layers can send updates to other layers.

PBWM specific

  • MatrixLayer for dorsal striatum gating of DLPFC areas, separate D1R = Go, D2R = NoGo layers. Each layer contains Maint and Out GateTypes, as a function of the outer 4D Pool X dimension (Maint on the left, Out on the right).
  • GPiThalLayer receives from Matrix Go and GPe NoGo to compute final WTA gating, and broadcasts GateState info to its SendTo layers. See Timing params for timing.
  • PFCLayer for active maintenance -- uses DeepLeabra framework, with update timing according to deep.Layer DeepBurst.BurstQtr. Gating is computed in quarter *before* updating in BurstQtr. At *end* of BurstQtr, Super Burst -> Deep Ctxt to drive maintenance via Ctxt in Deep.
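
The GateState broadcast from GPiThalLayer described above is configured by listing the target layers and syncing the shared gate shape; a minimal sketch (the WireGating helper is hypothetical):

package pbwmexample // hypothetical example package, not part of pbwm

import "github.com/emer/leabra/pbwm"

// WireGating tells the GPiThal layer which layers receive GateState updates,
// and checks / shares the GateShape info so all layers agree on stripe layout.
func WireGating(gpi *pbwm.GPiThalLayer) error {
	gpi.SendToMatrixPFC("")    // adds MatrixGo, NoGo, PFCmnt, PFCout (optional name prefix)
	return gpi.SendGateShape() // validates SendTo layers and shares GateShape
}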

Index

Constants

This section is empty.

Variables

var (
	// ModNeuronVars are the modulator neurons plus some custom variables that sub-types use for their
	// algo-specific cases -- need a consistent set of overall network-level vars for display / generic
	// interface.
	ModNeuronVars    = []string{"DA", "DALrn", "ACh", "SE", "GateAct", "GateNow", "GateCnt", "ActG", "Cust1"}
	ModNeuronVarsMap map[string]int
	ModNeuronVarsAll []string
)
var KiT_AChSrcLayer = kit.Types.AddType(&AChSrcLayer{}, deep.LayerProps)
var KiT_ClampDaLayer = kit.Types.AddType(&ClampDaLayer{}, deep.LayerProps)
var KiT_DaHebbPrjn = kit.Types.AddType(&DaHebbPrjn{}, deep.PrjnProps)
var KiT_DaReceptors = kit.Enums.AddEnum(DaReceptorsN, kit.NotBitFlag, nil)
var KiT_DaSrcLayer = kit.Types.AddType(&DaSrcLayer{}, deep.LayerProps)
var KiT_GPiThalLayer = kit.Types.AddType(&GPiThalLayer{}, deep.LayerProps)
var KiT_GPiThalPrjn = kit.Types.AddType(&GPiThalPrjn{}, deep.PrjnProps)
var KiT_GateLayer = kit.Types.AddType(&GateLayer{}, deep.LayerProps)
var KiT_GateTypes = kit.Enums.AddEnum(GateTypesN, kit.NotBitFlag, nil)
var KiT_Layer = kit.Types.AddType(&Layer{}, deep.LayerProps)
var KiT_MatrixLayer = kit.Types.AddType(&MatrixLayer{}, deep.LayerProps)
var KiT_ModLayer = kit.Types.AddType(&ModLayer{}, deep.LayerProps)
var KiT_Network = kit.Types.AddType(&Network{}, NetworkProps)
var KiT_PFCLayer = kit.Types.AddType(&PFCLayer{}, deep.LayerProps)
var KiT_RWDaLayer = kit.Types.AddType(&RWDaLayer{}, deep.LayerProps)
var KiT_RWPredLayer = kit.Types.AddType(&RWPredLayer{}, deep.LayerProps)
var KiT_RWPrjn = kit.Types.AddType(&RWPrjn{}, deep.PrjnProps)
var KiT_SeSrcLayer = kit.Types.AddType(&SeSrcLayer{}, deep.LayerProps)
var KiT_TDDaLayer = kit.Types.AddType(&TDDaLayer{}, deep.LayerProps)
var KiT_TDRewIntegLayer = kit.Types.AddType(&TDRewIntegLayer{}, deep.LayerProps)
var KiT_TDRewPredLayer = kit.Types.AddType(&TDRewPredLayer{}, deep.LayerProps)
var KiT_TDRewPredPrjn = kit.Types.AddType(&TDRewPredPrjn{}, deep.PrjnProps)
var KiT_Valences = kit.Enums.AddEnum(ValencesN, kit.NotBitFlag, nil)
var NetworkProps = deep.NetworkProps
var TraceSynVars = []string{"NTr", "Tr"}

Functions

This section is empty.

Types

type AChSrcLayer

type AChSrcLayer struct {
	ModLayer
	SendTo []string `desc:"list of layers to send ACh to"`
}

AChSrcLayer is the basic type of layer that sends ACh to other layers. It uses a list of layer names to send to -- it does not use the Prjn infrastructure, as this is a global broadcast modulator -- individual neurons can use it in their own special way.

func (*AChSrcLayer) AddSendTo

func (ly *AChSrcLayer) AddSendTo(laynm string)

AddSendTo adds given layer name to list of those to send ACh to

func (*AChSrcLayer) Build

func (ly *AChSrcLayer) Build() error

Build constructs the layer state, including calling Build on the projections.

func (*AChSrcLayer) SendACh

func (ly *AChSrcLayer) SendACh(ach float32)

SendACh sends ACh to SendTo list of layers

func (*AChSrcLayer) SendToCheck

func (ly *AChSrcLayer) SendToCheck() error

SendToCheck is called during Build to ensure that SendTo layers are valid

type ClampDaLayer

type ClampDaLayer struct {
	DaSrcLayer
}

ClampDaLayer is an Input layer that just sends its activity as the dopamine signal

func (*ClampDaLayer) SendMods

func (ly *ClampDaLayer) SendMods(ltime *leabra.Time)

SendMods is called at end of Cycle to send modulator signals (DA, etc) which will then be active for the next cycle of processing

type DaHebbPrjn

type DaHebbPrjn struct {
	deep.Prjn
}

DaHebbPrjn does dopamine-modulated Hebbian learning -- i.e., the 3-factor learning rule: Da * Recv.Act * Send.Act

func (*DaHebbPrjn) DWt

func (pj *DaHebbPrjn) DWt()

DWt computes the weight change (learning) -- on sending projections.

func (*DaHebbPrjn) Defaults

func (pj *DaHebbPrjn) Defaults()

type DaModParams

type DaModParams struct {
	On      bool    `desc:"whether to use dopamine modulation"`
	ModGain bool    `viewif:"On" desc:"modulate gain instead of Ge excitatory synaptic input"`
	Minus   float32 `` /* 145-byte string literal not displayed */
	Plus    float32 `` /* 144-byte string literal not displayed */
	NegGain float32 `` /* 208-byte string literal not displayed */
	PosGain float32 `` /* 208-byte string literal not displayed */
}

Params for effects of dopamine (Da) based modulation, typically adding a Da-based term to the Ge excitatory synaptic input. Plus-phase = learning effects relative to minus-phase "performance" dopamine effects

func (*DaModParams) Defaults

func (dm *DaModParams) Defaults()

func (*DaModParams) Gain

func (dm *DaModParams) Gain(da, gain float32, plusPhase bool) float32

Gain returns da-modulated gain value

func (*DaModParams) GainModOn

func (dm *DaModParams) GainModOn() bool

GainModOn returns true if modulating Gain

func (*DaModParams) Ge

func (dm *DaModParams) Ge(da, ge float32, plusPhase bool) float32

Ge returns da-modulated ge value

func (*DaModParams) GeModOn

func (dm *DaModParams) GeModOn() bool

GeModOn returns true if modulating Ge

type DaReceptors

type DaReceptors int

DaReceptors for D1R and D2R dopamine receptors

const (
	// D1R primarily expresses Dopamine D1 Receptors -- dopamine is excitatory and bursts of dopamine lead to increases in synaptic weight, while dips lead to decreases -- direct pathway in dorsal striatum
	D1R DaReceptors = iota

	// D2R primarily expresses Dopamine D2 Receptors -- dopamine is inhibitory and bursts of dopamine lead to decreases in synaptic weight, while dips lead to increases -- indirect pathway in dorsal striatum
	D2R

	DaReceptorsN
)

func (*DaReceptors) FromString

func (i *DaReceptors) FromString(s string) error

func (DaReceptors) MarshalJSON

func (ev DaReceptors) MarshalJSON() ([]byte, error)

func (DaReceptors) String

func (i DaReceptors) String() string

func (*DaReceptors) UnmarshalJSON

func (ev *DaReceptors) UnmarshalJSON(b []byte) error

type DaSrcLayer

type DaSrcLayer struct {
	ModLayer
	SendTo []string `desc:"list of layers to send dopamine to"`
}

DaSrcLayer is the basic type of layer that sends dopamine to other layers. Uses a list of layer names to send to -- not using Prjn infrastructure as it is global broadcast modulator -- individual neurons can use it in their own special way.

func (*DaSrcLayer) AddSendTo

func (ly *DaSrcLayer) AddSendTo(laynm string)

AddSendTo adds given layer name to list of those to send DA to

func (*DaSrcLayer) Build

func (ly *DaSrcLayer) Build() error

Build constructs the layer state, including calling Build on the projections.

func (*DaSrcLayer) SendDA

func (ly *DaSrcLayer) SendDA(da float32)

SendDA sends dopamine to SendTo list of layers

func (*DaSrcLayer) SendToAllBut

func (ly *DaSrcLayer) SendToAllBut(excl []string)

SendToAllBut adds all layers in network except those in list to the SendTo list of layers to send to -- this layer is automatically excluded as well.

func (*DaSrcLayer) SendToCheck

func (ly *DaSrcLayer) SendToCheck() error

SendToCheck is called during Build to ensure that SendTo layers are valid

type GPiGateParams

type GPiGateParams struct {
	GeGain float32 `` /* 217-byte string literal not displayed */
	NoGo   float32 `` /* 178-byte string literal not displayed */
	Thr    float32 `` /* 242-byte string literal not displayed */
	ThrAct bool    `` /* 159-byte string literal not displayed */
}

GPiGateParams has gating parameters for gating in GPiThal layer, including threshold

func (*GPiGateParams) Defaults

func (gp *GPiGateParams) Defaults()

func (*GPiGateParams) GeRaw

func (gp *GPiGateParams) GeRaw(goRaw, nogoRaw float32) float32

GeRaw returns the net GeRaw from go, nogo specific values

type GPiNeuron

type GPiNeuron struct {
	ActG float32 `desc:"gating activation -- the activity value when gating occurred in this pool"`
}

GPiNeuron contains extra variables for GPiThalLayer neurons -- stored separately

type GPiThalLayer

type GPiThalLayer struct {
	GateLayer
	Timing   GPiTimingParams `view:"inline" desc:"timing parameters determining when gating happens"`
	Gate     GPiGateParams   `view:"inline" desc:"gating parameters determining threshold for gating etc"`
	SendTo   []string        `desc:"list of layers to send GateState to"`
	GPiNeurs []GPiNeuron     `` /* 144-byte string literal not displayed */
}

GPiThalLayer represents the combined Winner-Take-All dynamic of GPi (SNr) and Thalamus. It is the final arbiter of gating in the BG, weighing Go (direct) and NoGo (indirect) inputs from MatrixLayers (indirectly via GPe layer in case of NoGo). Use 4D structure for this so it matches 4D structure in Matrix layers

func (*GPiThalLayer) AddSendTo

func (ly *GPiThalLayer) AddSendTo(laynm string)

AddSendTo adds given layer name to list of those to send GateState to

func (*GPiThalLayer) AlphaCycInit

func (ly *GPiThalLayer) AlphaCycInit()

AlphaCycInit handles all initialization at start of new input pattern, including computing input scaling from running average activation etc. should already have presented the external input to the network at this point. need to clear incrementing GeRaw from prjns

func (*GPiThalLayer) Build

func (ly *GPiThalLayer) Build() error

Build constructs the layer state, including calling Build on the projections.

func (*GPiThalLayer) Defaults

func (ly *GPiThalLayer) Defaults()

func (*GPiThalLayer) GFmInc

func (ly *GPiThalLayer) GFmInc(ltime *leabra.Time)

GFmInc integrates new synaptic conductances from increments sent during last SendGDelta.

func (*GPiThalLayer) GateFmAct

func (ly *GPiThalLayer) GateFmAct(ltime *leabra.Time)

GateFmAct updates GateState from current activations, at time of gating

func (*GPiThalLayer) GateSend

func (ly *GPiThalLayer) GateSend(ltime *leabra.Time)

GateSend updates gating state and sends it along to other layers

func (*GPiThalLayer) GateType

func (ly *GPiThalLayer) GateType() GateTypes

func (*GPiThalLayer) InitActs

func (ly *GPiThalLayer) InitActs()

func (*GPiThalLayer) MatrixPrjns

func (ly *GPiThalLayer) MatrixPrjns() (goPrjn, nogoPrjn *GPiThalPrjn, err error)

MatrixPrjns returns the recv prjns from Go and NoGo MatrixLayer pathways -- error if not found or if prjns are not of the GPiThalPrjn type

func (*GPiThalLayer) RecGateAct

func (ly *GPiThalLayer) RecGateAct(ltime *leabra.Time)

RecGateAct records the gating activation from current activation, when gating occurs based on GateState.Now

func (*GPiThalLayer) SendGateShape

func (ly *GPiThalLayer) SendGateShape() error

SendGateShape sends GateShape info to all SendTo layers -- a convenient config-time way to ensure all are consistent -- also checks validity of SendTo's

func (*GPiThalLayer) SendGateStates

func (ly *GPiThalLayer) SendGateStates()

SendGateStates sends GateStates to other layers

func (*GPiThalLayer) SendToCheck

func (ly *GPiThalLayer) SendToCheck() error

SendToCheck is called during Build to ensure that SendTo layers are valid

func (*GPiThalLayer) SendToMatrixPFC

func (ly *GPiThalLayer) SendToMatrixPFC(prefix string)

SendToMatrixPFC adds standard SendTo layers for PBWM: MatrixGo, NoGo, PFCmnt, PFCout with optional prefix -- excludes mnt, out cases if corresp shape = 0

func (*GPiThalLayer) UnitValByIdx

func (ly *GPiThalLayer) UnitValByIdx(vidx NeuronVars, idx int) float32

UnitValByIdx returns value of given PBWM-specific variable by variable index and flat neuron index (from layer or neuron-specific one).

type GPiThalPrjn

type GPiThalPrjn struct {
	deep.Prjn           // access as .Prjn
	GeRaw     []float32 `desc:"per-recv, per-prjn raw excitatory input"`
}

GPiThalPrjn accumulates per-prjn raw conductance that is needed for separately weighting NoGo vs. Go inputs

func (*GPiThalPrjn) Build

func (pj *GPiThalPrjn) Build() error

func (*GPiThalPrjn) InitGInc

func (pj *GPiThalPrjn) InitGInc()

func (*GPiThalPrjn) RecvGInc

func (pj *GPiThalPrjn) RecvGInc()

RecvGInc increments the receiver's GeInc or GiInc from that of all the projections.

type GPiTimingParams

type GPiTimingParams struct {
	GateQtr leabra.Quarters `` /* 249-byte string literal not displayed */
	Cycle   int             `` /* 139-byte string literal not displayed */
}

GPiTimingParams has timing parameters for gating in the GPiThal layer

func (*GPiTimingParams) Defaults

func (gt *GPiTimingParams) Defaults()

func (*GPiTimingParams) IsGateQtr

func (gt *GPiTimingParams) IsGateQtr(qtr int) bool

IsGateQtr returns true if the given quarter (0-3) is set as a Gating quarter

func (*GPiTimingParams) SetGateQtr

func (gt *GPiTimingParams) SetGateQtr(qtr leabra.Quarters)

SetGateQtr sets given gating quarter (adds to any existing) -- Q1, 3 by default

type GateLayer

type GateLayer struct {
	ModLayer
	GateShp    GateShape   `desc:"shape of overall Maint + Out gating system that this layer is part of"`
	GateStates []GateState `` /* 192-byte string literal not displayed */
}

GateLayer is a layer that cares about thalamic (BG) gating signals, and has slice of GateState fields that a gating layer will update.

func (*GateLayer) AsGate

func (ly *GateLayer) AsGate() *GateLayer

func (*GateLayer) AvgMaxGeRaw

func (ly *GateLayer) AvgMaxGeRaw(ltime *leabra.Time)

AvgMaxGeRaw computes the average and max GeRaw stats

func (*GateLayer) Build

func (ly *GateLayer) Build() error

Build constructs the layer state, including calling Build on the projections.

func (*GateLayer) GateShape

func (ly *GateLayer) GateShape() *GateShape

func (*GateLayer) GateState

func (ly *GateLayer) GateState(poolIdx int) *GateState

GateState returns the GateState for given pool index (0 based) on this layer

func (*GateLayer) InitActs

func (ly *GateLayer) InitActs()

func (*GateLayer) SetGateState

func (ly *GateLayer) SetGateState(poolIdx int, state *GateState)

SetGateState sets the GateState for given pool index (individual pools start at 1) on this layer

func (*GateLayer) SetGateStates

func (ly *GateLayer) SetGateStates(states []GateState, typ GateTypes)

SetGateStates sets the GateStates from given source states, of given gating type

func (*GateLayer) UnitValByIdx

func (ly *GateLayer) UnitValByIdx(vidx NeuronVars, idx int) float32

UnitValByIdx returns value of given PBWM-specific variable by variable index and flat neuron index (from layer or neuron-specific one).

type GateLayerer

type GateLayerer interface {
	// AsGate returns the layer as a GateLayer layer, for direct access to fields
	AsGate() *GateLayer

	// GateType returns the type of gating supported by this layer
	GateType() GateTypes

	// GateShape returns the shape of gating system that this layer is part of
	GateShape() *GateShape

	// GateState returns the GateState for given pool index (0-based) on this layer
	GateState(poolIdx int) *GateState

	// SetGateState sets the GateState for given pool index (0-based) on this layer
	SetGateState(poolIdx int, state *GateState)

	// SetGateStates sets the GateStates from given source states, of given gating type
	SetGateStates(states []GateState, typ GateTypes)
}

GateLayerer is an interface for GateLayer layers

type GateShape

type GateShape struct {
	Y      int `desc:"overall shape dimensions for the full set of gating pools, e.g., as present in the Matrix and GPiThal levels"`
	MaintX int `desc:"how many pools in the X dimension are Maint gating pools -- rest are Out"`
	OutX   int `desc:"how many pools in the X dimension are Out gating pools -- comes after Maint"`
}

GateShape defines the shape of the outer pool dimensions of gating layers, organized into Maint and Out subsets which are arrayed along the X axis with Maint first (to the left) then Out. Individual layers may only represent Maint or Out subsets of this overall shape, but all need to have this coordinated shape information to be able to share gating state information. Each layer represents gate state information in their native geometry -- FullIndex1D provides access from a subset to full set.

func (*GateShape) FullIndex1D

func (gs *GateShape) FullIndex1D(idx int, fmTyp GateTypes) int

FullIndex1D returns the index into full MaintOut GateStates for given 1D pool idx (0-based) *from given GateType*.

func (*GateShape) Index

func (gs *GateShape) Index(pY, pX int, typ GateTypes) int

Index returns the index into GateStates for given 2D pool coords for given GateType. Each type stores gate info in its "native" 2D format.

func (*GateShape) Set

func (gs *GateShape) Set(nY, maintX, outX int)

Set sets the shape parameters: number of Y dimension pools, and numbers of maint and out pools along X axis

func (*GateShape) TotX

func (gs *GateShape) TotX() int

TotX returns the total number of X-axis pools (Maint + Out)

type GateState

type GateState struct {
	Act   float32         `` /* 203-byte string literal not displayed */
	Now   bool            `desc:"gating timing signal -- true if this is the moment when gating takes place"`
	Cnt   int             `` /* 307-byte string literal not displayed */
	GeRaw minmax.AvgMax32 `copy:"-" desc:"not copies: average and max Ge Raw excitatory conductance values -- before being influenced by gating signals"`
}

GateState is gating state values stored in layers that receive thalamic gating signals including MatrixLayer, PFCLayer, GPiThal layer, etc -- use GateLayer as base layer to include.

func (*GateState) CopyFrom

func (gs *GateState) CopyFrom(fm *GateState)

CopyFrom copies from another GateState -- only the Act and Now signals are copied

func (*GateState) Init

func (gs *GateState) Init()

Init initializes the values -- call during InitActs()

type GateTypes

type GateTypes int

GateTypes for region of striatum

const (
	// Maint is maintenance gating -- toggles active maintenance in PFC
	Maint GateTypes = iota

	// Out is output gating -- drives deep layer activation
	Out

	// MaintOut for maint and output gating
	MaintOut

	GateTypesN
)

func (*GateTypes) FromString

func (i *GateTypes) FromString(s string) error

func (GateTypes) MarshalJSON

func (ev GateTypes) MarshalJSON() ([]byte, error)

func (GateTypes) String

func (i GateTypes) String() string

func (*GateTypes) UnmarshalJSON

func (ev *GateTypes) UnmarshalJSON(b []byte) error

type Layer

type Layer struct {
	ModLayer
	DaMod DaModParams `` /* 180-byte string literal not displayed */
}

pbwm.Layer is the default layer type for PBWM framework, based on the ModLayer with dopamine modulation -- can be used for basic DA-modulated learning.

func (*Layer) ActFmG

func (ly *Layer) ActFmG(ltime *leabra.Time)

ActFmG computes rate-code activation from Ge, Gi, Gl conductances and updates learning running-average activations from that Act

func (*Layer) GFmInc

func (ly *Layer) GFmInc(ltime *leabra.Time)

GFmInc integrates new synaptic conductances from increments sent during last SendGDelta.

type MatrixLayer

type MatrixLayer struct {
	GateLayer
	MaintN      int            `desc:"number of Maint Pools in X outer dimension of 4D shape -- Out gating after that"`
	DaR         DaReceptors    `desc:"dominant type of dopamine receptor -- D1R for Go pathway, D2R for NoGo"`
	Matrix      MatrixParams   `view:"inline" desc:"matrix parameters"`
	MatrixNeurs []MatrixNeuron `` /* 147-byte string literal not displayed */
}

MatrixLayer represents the dorsal matrisome MSN's that are the main Go / NoGo gating units in BG driving updating of PFC WM in PBWM. D1R = Go, D2R = NoGo, and outer 4D Pool X dimension determines GateTypes per MaintN (Maint on the left up to MaintN, Out on the right after)

func (*MatrixLayer) ActFmG

func (ly *MatrixLayer) ActFmG(ltime *leabra.Time)

ActFmG computes rate-code activation from Ge, Gi, Gl conductances and updates learning running-average activations from that Act. Matrix extends to call DaAChFmLay

func (*MatrixLayer) Build

func (ly *MatrixLayer) Build() error

Build constructs the layer state, including calling Build on the projections. You MUST have properly configured the Inhib.Pool.On setting by this point to properly allocate Pools for the unit groups if necessary.

func (*MatrixLayer) DALrnFmDA

func (ly *MatrixLayer) DALrnFmDA(da float32) float32

DALrnFmDA returns effective learning dopamine value from given raw DA value applying Burst and Dip Gain factors, and then reversing sign for D2R.

func (*MatrixLayer) DaAChFmLay

func (ly *MatrixLayer) DaAChFmLay(ltime *leabra.Time)

DaAChFmLay computes Da and ACh from layer and Shunt received from PatchLayer units

func (*MatrixLayer) Defaults

func (ly *MatrixLayer) Defaults()

func (*MatrixLayer) DoQuarter2DWt

func (ly *MatrixLayer) DoQuarter2DWt() bool

DoQuarter2DWt indicates whether to do optional Q2 DWt

func (*MatrixLayer) GateType

func (ly *MatrixLayer) GateType() GateTypes

func (*MatrixLayer) InhibFmGeAct

func (ly *MatrixLayer) InhibFmGeAct(ltime *leabra.Time)

InhibFmGeAct computes inhibition Gi from Ge and Act averages within relevant Pools. The Matrix version applies OutAChInhib to bias output gating on reward trials.

func (*MatrixLayer) InitActs

func (ly *MatrixLayer) InitActs()

func (*MatrixLayer) RecGateAct

func (ly *MatrixLayer) RecGateAct(ltime *leabra.Time)

RecGateAct records the gating activation from current activation, when gating occurs based on GateState.Now

func (*MatrixLayer) UnitValByIdx

func (ly *MatrixLayer) UnitValByIdx(vidx NeuronVars, idx int) float32

UnitValByIdx returns value of given PBWM-specific variable by variable index and flat neuron index (from layer or neuron-specific one).

type MatrixNeuron

type MatrixNeuron struct {
	DA    float32 `desc:"per-neuron modulated dopamine level, derived from layer DA and Shunt"`
	DALrn float32 `desc:"per-neuron effective learning dopamine value -- gain modulated and sign reversed for D2R"`
	ACh   float32 `desc:"per-neuron modulated ACh level, derived from layer ACh and Shunt"`
	Shunt float32 `desc:"shunting input received from Patch neurons (in reality flows through SNc DA pathways)"`
	ActG  float32 `desc:"gating activation -- the activity value when gating occurred in this pool"`
}

MatrixNeuron contains extra variables for MatrixLayer neurons -- stored separately

type MatrixParams

type MatrixParams struct {
	PatchShunt  float32 `` /* 173-byte string literal not displayed */
	ShuntACh    bool    `` /* 269-byte string literal not displayed */
	OutAChInhib float32 `` /* 354-byte string literal not displayed */
	BurstGain   float32 `` /* 237-byte string literal not displayed */
	DipGain     float32 `` /* 237-byte string literal not displayed */
}

MatrixParams has parameters for Dorsal Striatum Matrix computation These are the main Go / NoGo gating units in BG driving updating of PFC WM in PBWM

func (*MatrixParams) Defaults

func (mp *MatrixParams) Defaults()

type MatrixTracePrjn

type MatrixTracePrjn struct {
	deep.Prjn
	Trace  TraceParams `view:"inline" desc:"special parameters for matrix trace learning"`
	TrSyns []TraceSyn  `desc:"trace synaptic state values, ordered by the sending layer units which owns them -- one-to-one with SConIdx array"`
}

MatrixTracePrjn does dopamine-modulated, gated trace learning, for Matrix learning in PBWM context

func (*MatrixTracePrjn) Build

func (pj *MatrixTracePrjn) Build() error

func (*MatrixTracePrjn) ClearTrace

func (pj *MatrixTracePrjn) ClearTrace()

func (*MatrixTracePrjn) DWt

func (pj *MatrixTracePrjn) DWt()

DWt computes the weight change (learning) -- on sending projections.

func (*MatrixTracePrjn) Defaults

func (pj *MatrixTracePrjn) Defaults()

func (*MatrixTracePrjn) InitWts

func (pj *MatrixTracePrjn) InitWts()

type ModLayer

type ModLayer struct {
	deep.Layer
	DA  float32 `desc:"current dopamine level for this layer"`
	ACh float32 `desc:"current acetylcholine level for this layer"`
	SE  float32 `desc:"current serotonin level for this layer"`
}

ModLayer is the base layer type for PBWM framework -- has variables for the layer-level neuromodulatory variables: dopamine, ach, serotonin. The pbwm.Layer is a usable generic version of this base ModLayer, and other more specialized types build directly from ModLayer.

func (*ModLayer) AsGate

func (ly *ModLayer) AsGate() *GateLayer

AsGate returns this layer as a pbwm.GateLayer -- nil for ModLayer

func (*ModLayer) AsMod

func (ly *ModLayer) AsMod() *ModLayer

AsMod returns this layer as a pbwm.ModLayer

func (*ModLayer) Defaults

func (ly *ModLayer) Defaults()

func (*ModLayer) DoQuarter2DWt

func (ly *ModLayer) DoQuarter2DWt() bool

DoQuarter2DWt indicates whether to do optional Q2 DWt

func (*ModLayer) GateSend

func (ly *ModLayer) GateSend(ltime *leabra.Time)

GateSend updates gating state and sends it along to other layers. Most layers don't implement this -- only gating layers do.

func (*ModLayer) InitActs

func (ly *ModLayer) InitActs()

func (*ModLayer) Quarter2DWt

func (ly *ModLayer) Quarter2DWt()

Quarter2DWt is optional Q2 DWt -- define where relevant

func (*ModLayer) QuarterFinal

func (ly *ModLayer) QuarterFinal(ltime *leabra.Time)

QuarterFinal does updating after end of a quarter

func (*ModLayer) RecGateAct

func (ly *ModLayer) RecGateAct(ltime *leabra.Time)

RecGateAct records the gating activation from current activation, when gating occurs based on GateState.Now -- only for gating layers

func (*ModLayer) SendMods

func (ly *ModLayer) SendMods(ltime *leabra.Time)

SendMods is called at end of Cycle to send modulator signals (DA, etc) which will then be active for the next cycle of processing

func (*ModLayer) UnitVal1DTry

func (ly *ModLayer) UnitVal1DTry(varNm string, idx int) (float32, error)

func (*ModLayer) UnitValByIdx

func (ly *ModLayer) UnitValByIdx(vidx NeuronVars, idx int) float32

UnitValByIdx returns value of given PBWM-specific variable by variable index and flat neuron index (from layer or neuron-specific one).

func (*ModLayer) UnitValTry

func (ly *ModLayer) UnitValTry(varNm string, idx []int) (float32, error)

UnitValTry returns value of given variable name on given unit, using shape-based dimensional index

func (*ModLayer) UnitVals

func (ly *ModLayer) UnitVals(vals *[]float32, varNm string) error

UnitVals fills in values of given variable name on unit, for each unit in the layer, into given float32 slice (only resized if not big enough). Returns error on invalid var name.

func (*ModLayer) UnitValsTensor

func (ly *ModLayer) UnitValsTensor(tsr etensor.Tensor, varNm string) error

UnitValsTensor returns values of given variable name on unit for each unit in the layer, as a float32 tensor in same shape as layer units.

func (*ModLayer) UnitVarNames

func (ly *ModLayer) UnitVarNames() []string

UnitVarNames returns a list of variable names available on the units in this layer. Mod returns *layer level* vars.

func (*ModLayer) UpdateParams

func (ly *ModLayer) UpdateParams()

UpdateParams updates all params given any changes that might have been made to individual values including those in the receiving projections of this layer

type Network

type Network struct {
	deep.Network
}

pbwm.Network has parameters for running a DeepLeabra network

func (*Network) AddClampDaLayer

func (nt *Network) AddClampDaLayer(name string) *ClampDaLayer

AddClampDaLayer adds a ClampDaLayer of given name

func (*Network) AddDorsalBG

func (nt *Network) AddDorsalBG(prefix string, nY, nMaint, nOut, nNeurY, nNeurX int) (mtxGo, mtxNoGo, gpe, gpi leabra.LeabraLayer)

AddDorsalBG adds MatrixGo, NoGo, GPe, and GPiThal layers, with given optional prefix. nY = number of pools in Y dimension, nMaint + nOut are pools in X dimension, and each pool has nNeurY, nNeurX neurons. Appropriate PoolOneToOne connections are made to drive GPiThal, with BgFixed class name set so they can be styled appropriately (no learning, WtRnd.Mean=0.8, Var=0)

func (*Network) AddGPeLayer

func (nt *Network) AddGPeLayer(name string, nY, nMaint, nOut int) *ModLayer

AddGPeLayer adds a ModLayer to serve as a GPe layer, with given name. nY = number of pools in Y dimension, nMaint + nOut are pools in X dimension, and each pool has 1x1 neurons.

func (*Network) AddGPiThalLayer

func (nt *Network) AddGPiThalLayer(name string, nY, nMaint, nOut int) *GPiThalLayer

AddGPiThalLayer adds a GPiThalLayer of given size, with given name. nY = number of pools in Y dimension, nMaint + nOut are pools in X dimension, and each pool has 1x1 neurons.

func (*Network) AddMatrixLayer

func (nt *Network) AddMatrixLayer(name string, nY, nMaint, nOut, nNeurY, nNeurX int, da DaReceptors) *MatrixLayer

AddMatrixLayer adds a MatrixLayer of given size, with given name. nY = number of pools in Y dimension, nMaint + nOut are pools in X dimension, and each pool has nNeurY, nNeurX neurons. da gives the DaReceptor type (D1R = Go, D2R = NoGo)

func (*Network) AddPBWM

func (nt *Network) AddPBWM(prefix string, nY, nMaint, nOut, nNeurBgY, nNeurBgX, nNeurPfcY, nNeurPfcX int) (mtxGo, mtxNoGo, gpe, gpi, pfcMnt, pfcMntD, pfcOut, pfcOutD leabra.LeabraLayer)

AddPBWM adds a DorsalBG and PFC with given params

func (*Network) AddPFC

func (nt *Network) AddPFC(prefix string, nY, nMaint, nOut, nNeurY, nNeurX int) (pfcMnt, pfcMntD, pfcOut, pfcOutD leabra.LeabraLayer)

AddPFC adds paired PFCmnt, PFCout and associated Deep layers, with given optional prefix. nY = number of pools in Y dimension, nMaint, nOut are pools in X dimension, and each pool has nNeurY, nNeurX neurons. Appropriate PoolOneToOne connections are made within super / deep (see AddPFCLayer) and between PFCmntD -> PFCout.

func (*Network) AddPFCLayer

func (nt *Network) AddPFCLayer(name string, nY, nX, nNeurY, nNeurX int, out bool) (sp, dp *PFCLayer)

AddPFCLayer adds a PFCLayer, super and deep, of given size, with given name. nY, nX = number of pools in Y, X dimensions, and each pool has nNeurY, nNeurX neurons. out is true for output-gating layer. Both have the class "PFC" set. deep receives one-to-one projections of class "PFCToDeep" from super, and sends "PFCFmDeep", and is positioned behind it.

func (*Network) AddRWLayers

func (nt *Network) AddRWLayers(prefix string, rel relpos.Relations, space float32) (rew, rp, da leabra.LeabraLayer)

AddRWLayers adds simple Rescorla-Wagner (PV only) dopamine system, with a primary Reward layer, a RWPred prediction layer, and a dopamine layer that computes the difference. Only generates DA when the Rew layer has external input -- otherwise zero. Projection from RWPred to DA is given class RWPredToDA -- should have no learning and 1 weight.

func (*Network) AddTDLayers

func (nt *Network) AddTDLayers(prefix string, rel relpos.Relations, space float32) (rew, rp, ri, td leabra.LeabraLayer)

AddTDLayers adds the standard TD temporal differences layers, generating a DA signal. Projection from Rew to RewInteg is given class TDRewToInteg -- should have no learning and 1 weight.

func (*Network) Cycle

func (nt *Network) Cycle(ltime *leabra.Time)

Cycle runs one cycle of activation updating. PBWM calls GateSend after Cycle and before DeepBurst. Deep version adds a call to update DeepBurst at the end.

func (*Network) Defaults

func (nt *Network) Defaults()

Defaults sets all the default parameters for all layers and projections

func (*Network) GateSend

func (nt *Network) GateSend(ltime *leabra.Time)

GateSend is called at end of Cycle, computes Gating and sends to other layers

func (*Network) NewLayer

func (nt *Network) NewLayer() emer.Layer

NewLayer returns new layer of default pbwm.Layer type

func (*Network) NewPrjn

func (nt *Network) NewPrjn() emer.Prjn

NewPrjn returns new prjn of default type

func (*Network) RecGateAct

func (nt *Network) RecGateAct(ltime *leabra.Time)

RecGateAct is called after GateSend, to record gating activations at time of gating

func (*Network) SendMods

func (nt *Network) SendMods(ltime *leabra.Time)

SendMods is called at end of Cycle to send modulator signals (DA, etc) which will then be active for the next cycle of processing

func (*Network) UpdateParams

func (nt *Network) UpdateParams()

UpdateParams updates all the derived parameters if any have changed, for all layers and projections

type NeuronVars

type NeuronVars int

NeuronVars are indexes into extra PBWM neuron-level variables

const (
	DA NeuronVars = iota
	DALrn
	ACh
	SE
	GateAct
	GateNow
	GateCnt
	ActG
	Cust1
	NeuronVarsN
)

type PBWMLayer

type PBWMLayer interface {
	deep.DeepLayer

	// AsMod returns this layer as a pbwm.ModLayer (minimum layer in PBWM)
	AsMod() *ModLayer

	// AsGate returns this layer as a pbwm.GateLayer (gated layer type) -- nil if not impl
	AsGate() *GateLayer

	// UnitValByIdx returns value of given PBWM-specific variable by variable index
	// and flat neuron index (from layer or neuron-specific one).
	UnitValByIdx(vidx NeuronVars, idx int) float32

	// GateSend updates gating state and sends it along to other layers.
	// Called after std Cycle methods.
	// Only implemented for gating layers.
	GateSend(ltime *leabra.Time)

	// RecGateAct records the gating activation from current activation, when gating occurs
	// based on GateState.Now
	RecGateAct(ltime *leabra.Time)

	// SendMods is called at end of Cycle to send modulator signals (DA, etc)
	// which will then be active for the next cycle of processing
	SendMods(ltime *leabra.Time)

	// Quarter2DWt is optional Q2 DWt -- PFC and matrix layers can do this as appropriate
	Quarter2DWt()

	// DoQuarter2DWt returns true if this recv layer should have its weights updated
	DoQuarter2DWt() bool
}

PBWMLayer defines the essential algorithmic API for PBWM at the layer level. Builds upon the deep.DeepLayer API

type PBWMPrjn

type PBWMPrjn interface {
	deep.DeepPrjn
}

PBWMPrjn defines the essential algorithmic API for PBWM at the projection level. Builds upon the deep.DeepPrjn API

type PFCDyn

type PFCDyn struct {
	Init     float32 `desc:"initial value at point when gating starts -- MUST be > 0 when used."`
	RiseTau  float32 `` /* 161-byte string literal not displayed */
	DecayTau float32 `` /* 162-byte string literal not displayed */
	Desc     string  `desc:"description of this factor"`
}

PFC dynamic behavior element -- defines the dynamic behavior of deep layer PFC units

func (*PFCDyn) Defaults

func (pd *PFCDyn) Defaults()

func (*PFCDyn) Set

func (pd *PFCDyn) Set(init, rise, decay float32, desc string)

func (*PFCDyn) Value

func (pd *PFCDyn) Value(time float32) float32

Value returns dynamic value at given time point

type PFCDyns

type PFCDyns []*PFCDyn

PFCDyns is a slice of dyns. Provides deterministic control over PFC maintenance dynamics -- the rows of PFC units (along the Y axis) behave according to the corresponding index of Dyns. Ensure the layer Y dim has an even multiple of len(Dyns).

func (*PFCDyns) FullDyn

func (pd *PFCDyns) FullDyn(tau float32)

FullDyn creates full dynamic Dyn configuration, with 5 different dynamic profiles: stable maint, phasic, rising maint, decaying maint, and up / down maint. tau is the rise / decay base time constant.

func (*PFCDyns) MaintOnly

func (pd *PFCDyns) MaintOnly()

MaintOnly creates basic default maintenance dynamic configuration -- every unit just maintains over time. This should be used for Output gating layer.

func (*PFCDyns) SetDyn

func (pd *PFCDyns) SetDyn(dyn int, init, rise, decay float32, desc string) *PFCDyn

SetDyn sets given dynamic maint element to given parameters (must be allocated in list first)

func (*PFCDyns) Value

func (pd *PFCDyns) Value(dyn int, time float32) float32

Value returns value for given dyn item at given time step

type PFCGateParams

type PFCGateParams struct {
	OutGate   bool    `desc:"if true, this PFC layer is an output gate layer, which means that it only has transient activation during gating"`
	OutQ1Only bool    `` /* 345-byte string literal not displayed */
	MntThal   float32 `` /* 277-byte string literal not displayed */
}

PFCGateParams has parameters for PFC gating

func (*PFCGateParams) Defaults

func (gp *PFCGateParams) Defaults()

type PFCLayer

type PFCLayer struct {
	GateLayer
	Gate     PFCGateParams  `view:"inline" desc:"PFC Gating parameters"`
	Maint    PFCMaintParams `view:"inline" desc:"PFC Maintenance parameters"`
	Dyns     PFCDyns        `` /* 257-byte string literal not displayed */
	PFCNeurs []PFCNeuron    `` /* 144-byte string literal not displayed */
}

PFCLayer is a Prefrontal Cortex BG-gated working memory layer

func (*PFCLayer) ActFmG

func (ly *PFCLayer) ActFmG(ltime *leabra.Time)

ActFmG computes rate-code activation from the Ge, Gi, Gl conductances and updates the learning running-average activations from that Act. The PFC version extends this to call Gating.

func (*PFCLayer) AvgMaxGe

func (ly *PFCLayer) AvgMaxGe(ltime *leabra.Time)

AvgMaxGe computes the average and max Ge stats, used in inhibition

func (*PFCLayer) Build

func (ly *PFCLayer) Build() error

Build constructs the layer state, including calling Build on the projections.

func (*PFCLayer) BurstFmAct

func (ly *PFCLayer) BurstFmAct(ltime *leabra.Time)

BurstFmAct updates the Burst (layer 5 IB bursting) value from the current Act (superficial activation), subject to thresholding.

func (*PFCLayer) ClearCtxtPool

func (ly *PFCLayer) ClearCtxtPool(pool int)

ClearCtxtPool clears CtxtGe in given pool index (0 based)

func (*PFCLayer) ClearMaint

func (ly *PFCLayer) ClearMaint(pool int)

ClearMaint resets maintenance in corresponding pool (0 based) in maintenance layer

func (*PFCLayer) CtxtFmGe

func (ly *PFCLayer) CtxtFmGe(ltime *leabra.Time)

CtxtFmGe integrates new CtxtGe excitatory conductance from projections, and computes overall Ctxt value, only on Deep layers. This must be called at the end of the DeepBurst quarter for this layer, after SendCtxtGe.

func (*PFCLayer) DecayStatePool

func (ly *PFCLayer) DecayStatePool(pool int, decay float32)

DecayStatePool decays activation state by given proportion in given pool index (0 based)

func (*PFCLayer) DeepMaint

func (ly *PFCLayer) DeepMaint(ltime *leabra.Time)

DeepMaint updates deep maintenance activations -- called at the end of the bursting quarter via CtxtFmGe, after CtxtGe has been updated and is available. The quarter check has already been done by the caller.

func (*PFCLayer) DeepPFC

func (ly *PFCLayer) DeepPFC() *PFCLayer

DeepPFC returns the corresponding PFC deep layer with the same name + D -- may be nil.

func (*PFCLayer) Defaults

func (ly *PFCLayer) Defaults()

func (*PFCLayer) DoQuarter2DWt

func (ly *PFCLayer) DoQuarter2DWt() bool

DoQuarter2DWt indicates whether to do optional Q2 DWt

func (*PFCLayer) GFmInc

func (ly *PFCLayer) GFmInc(ltime *leabra.Time)

GFmInc integrates new synaptic conductances from increments sent during last SendGDelta.

func (*PFCLayer) GateStateToDeep

func (ly *PFCLayer) GateStateToDeep(ltime *leabra.Time)

GateStateToDeep copies the superficial gate state to the corresponding deep layer. This happens at the end of the BurstQtr (from QuarterFinal), prior to the SendCtxtGe call, which happens at the Network level after QuarterFinal.

func (*PFCLayer) GateType

func (ly *PFCLayer) GateType() GateTypes

func (*PFCLayer) Gating

func (ly *PFCLayer) Gating(ltime *leabra.Time)

Gating computes the PFC gating state

func (*PFCLayer) InitActs

func (ly *PFCLayer) InitActs()

func (*PFCLayer) MaintPFC

func (ly *PFCLayer) MaintPFC() *PFCLayer

MaintPFC returns the corresponding PFC maintenance layer with the same name but out -> mnt -- may be nil.

func (*PFCLayer) QuarterFinal

func (ly *PFCLayer) QuarterFinal(ltime *leabra.Time)

QuarterFinal does updating after end of a quarter

func (*PFCLayer) RecGateAct

func (ly *PFCLayer) RecGateAct(ltime *leabra.Time)

RecGateAct records the gating activation from current activation, when gating occurs based on GateState.Now

func (*PFCLayer) SendCtxtGe

func (ly *PFCLayer) SendCtxtGe(ltime *leabra.Time)

SendCtxtGe sends full Burst activation over BurstCtxt projections to integrate CtxtGe excitatory conductance on deep layers. This must be called at the end of the DeepBurst quarter for this layer.

func (*PFCLayer) UnitValByIdx

func (ly *PFCLayer) UnitValByIdx(vidx NeuronVars, idx int) float32

UnitValByIdx returns value of given PBWM-specific variable by variable index and flat neuron index (from layer or neuron-specific one).
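
A minimal read-out sketch, assuming ly is an already-built *PFCLayer (the helper name is hypothetical):

func readGateVars(ly *pbwm.PFCLayer) (actG, gateCnt float32) {
	actG = ly.UnitValByIdx(pbwm.ActG, 0)       // gating activation of flat neuron index 0
	gateCnt = ly.UnitValByIdx(pbwm.GateCnt, 0) // gate count variable for the same neuron
	return
}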

type PFCMaintParams

type PFCMaintParams struct {
	SMnt     minmax.F32 `` /* 196-byte string literal not displayed */
	MntGeMax float32    `` /* 282-byte string literal not displayed */
	Clear    float32    `` /* 210-byte string literal not displayed */
	UseDyn   bool       `` /* 262-byte string literal not displayed */
	MaxMaint int        `` /* 200-byte string literal not displayed */
}

PFCMaintParams for PFC maintenance functions

func (*PFCMaintParams) Defaults

func (mp *PFCMaintParams) Defaults()

type PFCNeuron

type PFCNeuron struct {
	ActG  float32 `desc:"gating activation -- the activity value when gating occurred in this pool"`
	Maint float32 `desc:"maintenance value for Deep layers"`
}

PFCNeuron contains extra variables for PFCLayer neurons -- stored separately

type RWDaLayer

type RWDaLayer struct {
	DaSrcLayer
	RewLay string `desc:"name of Reward-representing layer from which this computes DA -- if nothing clamped, no dopamine computed"`
}

RWDaLayer computes a dopamine (Da) signal based on a simple Rescorla-Wagner learning dynamic (i.e., PV learning in the PVLV framework). It computes the difference between r(t) and the RWPred input. r(t) is accessed directly from a Rew layer -- if nothing is clamped there, no DA is computed, which is critical for effective use of RW only for PV cases. It receives the RWPred prediction via direct (fixed) weights.

func (*RWDaLayer) ActFmG

func (ly *RWDaLayer) ActFmG(ltime *leabra.Time)

func (*RWDaLayer) Build

func (ly *RWDaLayer) Build() error

Build constructs the layer state, including calling Build on the projections.

func (*RWDaLayer) Defaults

func (ly *RWDaLayer) Defaults()

func (*RWDaLayer) RewLayer

func (ly *RWDaLayer) RewLayer() (*ModLayer, error)

func (*RWDaLayer) SendMods

func (ly *RWDaLayer) SendMods(ltime *leabra.Time)

SendMods is called at end of Cycle to send modulator signals (DA, etc) which will then be active for the next cycle of processing

type RWPredLayer

type RWPredLayer struct {
	ModLayer
	PredRange minmax.F32 `` /* 180-byte string literal not displayed */
}

RWPredLayer computes reward prediction for a simple Rescorla-Wagner learning dynamic (i.e., PV learning in the PVLV framework). Activity is computed as a linear function of the excitatory conductance (which can be negative -- there are no constraints). Use with RWPrjn, which does simple delta-rule learning on the minus-plus difference.

func (*RWPredLayer) ActFmG

func (ly *RWPredLayer) ActFmG(ltime *leabra.Time)

ActFmG computes linear activation for RWPred

func (*RWPredLayer) Defaults

func (ly *RWPredLayer) Defaults()

type RWPrjn

type RWPrjn struct {
	deep.Prjn
	DaTol float32 `` /* 208-byte string literal not displayed */
}

RWPrjn does dopamine-modulated learning for reward prediction: DWt = Da * Send.Act. Use in RWPredLayer typically to generate reward predictions. Has no weight bounds or limits on sign, etc.

func (*RWPrjn) DWt

func (pj *RWPrjn) DWt()

DWt computes the weight change (learning) -- on sending projections.

func (*RWPrjn) Defaults

func (pj *RWPrjn) Defaults()

func (*RWPrjn) WtFmDWt

func (pj *RWPrjn) WtFmDWt()

WtFmDWt updates the synaptic weight values from delta-weight changes -- on sending projections
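
The underlying arithmetic is the Rescorla-Wagner delta rule. A minimal sketch (not the layer or projection code itself; lrate stands in for the projection's learning rate parameter):

// rwDa is the dopamine signal computed by RWDaLayer: externally clamped
// reward minus the RWPred prediction (nothing is computed if no reward is clamped).
func rwDa(rew, pred float32) float32 {
	return rew - pred
}

// rwDWt is the RWPrjn weight change: dopamine times sending activation,
// scaled by the learning rate, with no weight bounds.
func rwDWt(lrate, da, sendAct float32) float32 {
	return lrate * da * sendAct
}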

type SeSrcLayer

type SeSrcLayer struct {
	ModLayer
	SendTo []string `desc:"list of layers to send Se to"`
}

SeSrcLayer is the basic type of layer that sends Se (serotonin) to other layers. It uses a list of layer names to send to -- it does not use the Prjn infrastructure, as it is a global broadcast modulator -- individual neurons can use it in their own special way.

func (*SeSrcLayer) AddSendTo

func (ly *SeSrcLayer) AddSendTo(laynm string)

AddSendTo adds the given layer name to the list of those to send Se to

func (*SeSrcLayer) Build

func (ly *SeSrcLayer) Build() error

Build constructs the layer state, including calling Build on the projections.

func (*SeSrcLayer) SendSe

func (ly *SeSrcLayer) SendSe(se float32)

SendSe sends serotonin to SendTo list of layers

func (*SeSrcLayer) SendToCheck

func (ly *SeSrcLayer) SendToCheck() error

SendToCheck is called during Build to ensure that SendTo layers are valid
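
A minimal configuration sketch, assuming the named layers exist in the network (the layer names and the helper are hypothetical):

func configSe(se *pbwm.SeSrcLayer) {
	se.AddSendTo("PFCmnt") // hypothetical maintenance PFC layer
	se.AddSendTo("PFCout") // hypothetical output PFC layer
	se.SendSe(0.5)         // broadcast the current serotonin level to all SendTo layers
}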

type TDDaLayer

type TDDaLayer struct {
	DaSrcLayer
	RewInteg string `desc:"name of TDRewIntegLayer from which this computes the temporal difference"`
}

TDDaLayer computes a dopamine (Da) signal as the temporal difference (TD) between the TDRewIntegLayer activations in the minus and plus phase.

func (*TDDaLayer) ActFmG

func (ly *TDDaLayer) ActFmG(ltime *leabra.Time)

func (*TDDaLayer) Build

func (ly *TDDaLayer) Build() error

Build constructs the layer state, including calling Build on the projections.

func (*TDDaLayer) Defaults

func (ly *TDDaLayer) Defaults()

func (*TDDaLayer) RewIntegLayer

func (ly *TDDaLayer) RewIntegLayer() (*TDRewIntegLayer, error)

func (*TDDaLayer) SendMods

func (ly *TDDaLayer) SendMods(ltime *leabra.Time)

SendMods is called at end of Cycle to send modulator signals (DA, etc) which will then be active for the next cycle of processing

type TDRewIntegLayer

type TDRewIntegLayer struct {
	ModLayer
	RewInteg TDRewIntegParams `desc:"parameters for reward integration"`
}

TDRewIntegLayer is the temporal differences reward integration layer. It represents estimated value V(t) in the minus phase, and estimated V(t+1) + r(t) in the plus phase. It computes r(t) from (typically fixed) weights from a reward layer, and directly accesses values from RewPred layer.

func (*TDRewIntegLayer) ActFmG

func (ly *TDRewIntegLayer) ActFmG(ltime *leabra.Time)

func (*TDRewIntegLayer) Build

func (ly *TDRewIntegLayer) Build() error

Build constructs the layer state, including calling Build on the projections.

func (*TDRewIntegLayer) Defaults

func (ly *TDRewIntegLayer) Defaults()

func (*TDRewIntegLayer) RewPredLayer

func (ly *TDRewIntegLayer) RewPredLayer() (*TDRewPredLayer, error)

type TDRewIntegParams

type TDRewIntegParams struct {
	Discount float32 `desc:"discount factor -- how much to discount the future prediction from RewPred"`
	RewPred  string  `desc:"name of TDRewPredLayer to get reward prediction from "`
}

TDRewIntegParams are params for reward integrator layer

func (*TDRewIntegParams) Defaults

func (tp *TDRewIntegParams) Defaults()

type TDRewPredLayer

type TDRewPredLayer struct {
	ModLayer
}

TDRewPredLayer is the temporal differences reward prediction layer. It represents estimated value V(t) in the minus phase, and computes estimated V(t+1) based on its learned weights in plus phase. Use TDRewPredPrjn for DA modulated learning.

func (*TDRewPredLayer) ActFmG

func (ly *TDRewPredLayer) ActFmG(ltime *leabra.Time)

ActFmG computes linear activation for TDRewPred

type TDRewPredPrjn

type TDRewPredPrjn struct {
	deep.Prjn
}

TDRewPredPrjn does dopamine-modulated learning for reward prediction: DWt = Da * Send.ActQ0 (activity on the *previous* timestep). Use in TDRewPredLayer typically to generate reward predictions. Has no weight bounds or limits on sign, etc.

func (*TDRewPredPrjn) DWt

func (pj *TDRewPredPrjn) DWt()

DWt computes the weight change (learning) -- on sending projections.

func (*TDRewPredPrjn) Defaults

func (pj *TDRewPredPrjn) Defaults()

func (*TDRewPredPrjn) WtFmDWt

func (pj *TDRewPredPrjn) WtFmDWt()

WtFmDWt updates the synaptic weight values from delta-weight changes -- on sending projections
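
The TD computation reduces to simple arithmetic across the minus and plus phases. A sketch under the definitions above (not the layer or projection code itself; lrate stands in for the projection's learning rate parameter):

// tdDa is the TDDaLayer dopamine signal: the plus-phase integrator value
// (discounted next-step prediction plus current reward) minus the
// minus-phase value estimate.
func tdDa(discount, vNext, rew, vNow float32) float32 {
	return (discount*vNext + rew) - vNow
}

// tdDWt is the TDRewPredPrjn weight change: dopamine times the sending
// activation from the previous time step (ActQ0), with no weight bounds.
func tdDWt(lrate, da, sendActQ0 float32) float32 {
	return lrate * da * sendActQ0
}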

type TraceParams

type TraceParams struct {
	NotGatedLR    float32 `` /* 351-byte string literal not displayed */
	GateNoGoPosLR float32 `` /* 947-byte string literal not displayed */
	AChResetThr   float32 `min:"0" def:"0.5" desc:"threshold on receiving unit ACh value, sent by TAN units, for resetting the trace"`
	Deriv         bool    `` /* 305-byte string literal not displayed */
	Decay         float32 `` /* 294-byte string literal not displayed */
}

TraceParams are parameters for trace-based learning in the MatrixTracePrjn

func (*TraceParams) Defaults

func (tp *TraceParams) Defaults()

func (*TraceParams) LrateMod

func (tp *TraceParams) LrateMod(gated, d2r, posDa bool) float32

LrateMod returns the learning rate modulator based on gating, d2r, and posDa factors

func (*TraceParams) LrnFactor

func (tp *TraceParams) LrnFactor(act float32) float32

LrnFactor returns the multiplicative factor for the level of MSN activation. If Deriv is true, the factor is 2 * act * (1-act) -- the factor of 2 compensates for the reduction in learning these terms would otherwise produce. Otherwise it is just act.
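
In code form, a sketch of this factor (not the package implementation):

func lrnFactor(deriv bool, act float32) float32 {
	if deriv {
		return 2 * act * (1 - act) // maximal at act = 0.5; the 2 compensates for the overall reduction
	}
	return act
}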

type TraceSyn

type TraceSyn struct {
	NTr float32 `` /* 136-byte string literal not displayed */
	Tr  float32 `` /* 183-byte string literal not displayed */
}

TraceSyn holds extra synaptic state for trace projections

type Valences

type Valences int

Valences for Appetitive and Aversive valence coding

const (
	// Appetitive is a positive valence US (food, water, etc)
	Appetitive Valences = iota

	// Aversive is a negative valence US (shock, threat etc)
	Aversive

	ValencesN
)

func (*Valences) FromString

func (i *Valences) FromString(s string) error

func (Valences) MarshalJSON

func (ev Valences) MarshalJSON() ([]byte, error)

func (Valences) String

func (i Valences) String() string

func (*Valences) UnmarshalJSON

func (ev *Valences) UnmarshalJSON(b []byte) error
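
A minimal usage sketch of the enum and its string conversion (the printed name reflects the expected behavior of the generated String method):

v := pbwm.Appetitive
fmt.Println(v.String()) // expected to print "Appetitive"
var v2 pbwm.Valences
if err := v2.FromString("Aversive"); err != nil {
	fmt.Println(err)
}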
