hip

package
v0.0.0-...-da82786
Published: Feb 21, 2022 License: BSD-3-Clause Imports: 3 Imported by: 0

README

Hippocampus


Package hip provides special hippocampus algorithms for implementing the Theta-phase hippocampus model from Ketz, Morkonda, & O'Reilly (2013).

timing of ThetaPhase dynamics -- based on quarter structure:

  • Q1: ECin -> CA1 -> ECout (CA3 -> CA1 off) : ActQ1 = minus phase for auto-encoder
  • Q2,3: CA3 -> CA1 -> ECout (ECin -> CA1 off) : ActM = minus phase for recall
  • Q4: ECin -> CA1, ECin -> ECout (CA3 -> CA1 off, ECin -> CA1 on): ActP = plus phase for everything
[  q1      ][  q2  q3  ][     q4     ]
[ ------ minus ------- ][ -- plus -- ]
[   auto-  ][ recall-  ][ -- plus -- ]

  DG -> CA3 -> CA1
 /    /      /    \
[----ECin---] -> [ ECout ]

minus phase: ECout unclamped, driven by CA1
auto-   CA3 -> CA1 = 0, ECin -> CA1 = 1
recall- CA3 -> CA1 = 1, ECin -> CA1 = 0

plus phase: ECin -> ECout auto clamped
CA3 -> CA1 = 0, ECin -> CA1 = 1
(same as auto- -- training signal for CA3 -> CA1 is what EC would produce!)
  • ActQ1 = auto encoder minus phase state (in both CA1 and ECout; used in EcCa1Prjn as minus phase relative to ActP plus phase in CHL)
  • ActM = recall minus phase (normal minus phase dynamics for CA3 recall learning)
  • ActP = plus (serves as plus phase for both auto and recall)

Learning happens at the end of the trial as usual, but the encoder projections use the ActQ1, ActM, and ActP variables to learn on the appropriate signals.
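
The quarter-level pathway gating above is driven by the simulation rather than by this package. Below is a minimal sketch of that scheduling, assuming the leabra v1 API in which each projection exposes a WtScale.Abs factor that can switch a pathway on (1) or off (0); ca1FmECin and ca1FmCa3 are hypothetical handles to the embedded leabra.Prjn of the corresponding hip projections, and the github.com/emer/leabra import path is an assumption.

package sketch

import "github.com/emer/leabra/leabra"

// gateThetaPhase turns the CA1 input pathways on and off per quarter, following
// the timing table above (Q1: ECin drives CA1; Q2-Q3: CA3 drives CA1;
// Q4: ECin drives CA1 again while ECin -> ECout is clamped for the plus phase).
// After changing WtScale.Abs, the simulation would also need to re-scale
// synaptic conductances before running the next quarter.
func gateThetaPhase(quarter int, ca1FmECin, ca1FmCa3 *leabra.Prjn) {
	switch quarter {
	case 0: // Q1: auto-encoder minus phase -> ActQ1
		ca1FmECin.WtScale.Abs = 1
		ca1FmCa3.WtScale.Abs = 0
	case 1, 2: // Q2, Q3: recall minus phase -> ActM
		ca1FmECin.WtScale.Abs = 0
		ca1FmCa3.WtScale.Abs = 1
	case 3: // Q4: plus phase -> ActP
		ca1FmECin.WtScale.Abs = 1
		ca1FmCa3.WtScale.Abs = 0
	}
}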

TODO

  • try error-driven CA3 learning based on DG -> CA3 plus phase per https://arxiv.org/abs/1909.10340

  • implement a two-trial version of the code to produce a true theta rhythm, integrating over two adjacent alpha trials.

Documentation

Overview

Package hip provides special hippocampus algorithms for implementing the Theta-phase hippocampus model from Ketz, Morkonda, & O'Reilly (2013).


Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type CHLParams

type CHLParams struct {
	On      bool    `desc:"if true, use CHL learning instead of standard XCAL learning -- allows easy exploration of CHL vs. XCAL"`
	Hebb    float32 `def:"0.001" min:"0" max:"1" desc:"amount of hebbian learning (should be relatively small, can be effective at .0001)"`
	Err     float32 `def:"0.999" min:"0" max:"1" inactive:"+" desc:"amount of error driven learning, automatically computed to be 1-Hebb"`
	MinusQ1 bool    `desc:"if true, use ActQ1 as the minus phase -- otherwise ActM"`
	SAvgCor float32 `` /* 161-byte string literal not displayed */
	SAvgThr float32 `` /* 145-byte string literal not displayed */
}

Contrastive Hebbian Learning (CHL) parameters

func (*CHLParams) DWt

func (ch *CHLParams) DWt(hebb, err float32) float32

DWt computes the overall dwt from hebbian and error terms
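
Given the Hebb and Err parameters above, the combination is presumably just a weighted sum; a standalone sketch (assumed form, based on the field documentation):

// dwtCHL sketches the combined CHL weight change: hebbWt and errWt correspond
// to the Hebb and Err parameters (with Err maintained as 1 - Hebb), and hebb
// and err are the per-synapse hebbian and error-driven terms.
func dwtCHL(hebbWt, errWt, hebb, err float32) float32 {
	return hebbWt*hebb + errWt*err
}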

func (*CHLParams) Defaults

func (ch *CHLParams) Defaults()

func (*CHLParams) ErrDWt

func (ch *CHLParams) ErrDWt(sactP, sactM, ractP, ractM, linWt float32) float32

ErrDWt computes the error-driven DWt value from sending, recv acts in both phases, and linear Wt, which is used for soft weight bounding (always applied here, separate from hebbian which has its own soft weight bounding dynamic).
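
A plausible form of this computation, as a hedged sketch: the soft-bounding scheme follows the usual CHL convention of scaling positive changes by 1 - wt and negative changes by wt, and the exact expression is an assumption rather than a copy of the source.

// errDWt sketches the CHL error-driven term: the plus-phase coproduct minus
// the minus-phase coproduct, soft-bounded on the linear weight so that
// weights stay within [0, 1].
func errDWt(sactP, sactM, ractP, ractM, linWt float32) float32 {
	err := sactP*ractP - sactM*ractM
	if err > 0 {
		err *= 1 - linWt
	} else {
		err *= linWt
	}
	return err
}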

func (*CHLParams) HebbDWt

func (ch *CHLParams) HebbDWt(sact, ract, savgCor, linWt float32) float32

HebbDWt computes the hebbian DWt value from sending, recv acts, savgCor, and linear Wt
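
This is presumably a CPCA-style hebbian rule with the sending-average correction folded in; a hedged sketch of one common form (assumed, not copied from the source):

// hebbDWt sketches a CPCA-style hebbian term: when the receiver is active,
// the weight moves toward the (savgCor-corrected) sending activation and
// decays otherwise; linWt is the linear weight used for soft bounding.
func hebbDWt(sact, ract, savgCor, linWt float32) float32 {
	return ract * (sact*(savgCor-linWt) - (1-sact)*linWt)
}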

func (*CHLParams) MinusAct

func (ch *CHLParams) MinusAct(actM, actQ1 float32) float32

MinusAct returns the minus-phase activation to use based on settings (ActM vs. ActQ1)
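
This is just the MinusQ1 switch from CHLParams; a trivial sketch:

// minusAct selects which activation serves as the minus phase:
// ActQ1 (auto-encoder minus phase) when MinusQ1 is set, else ActM (recall).
func minusAct(minusQ1 bool, actM, actQ1 float32) float32 {
	if minusQ1 {
		return actQ1
	}
	return actM
}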

func (*CHLParams) Update

func (ch *CHLParams) Update()
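
Per the field documentation, Err is kept at 1 - Hebb; a minimal sketch of that relationship using a local stand-in struct (not the package's own type):

// chlParams is a local stand-in for the Hebb/Err fields of hip.CHLParams.
type chlParams struct {
	Hebb float32 // amount of hebbian learning
	Err  float32 // amount of error-driven learning, documented as 1 - Hebb
}

// update sketches the documented invariant maintained by CHLParams.Update:
// the error-driven proportion is whatever hebbian learning leaves over.
func (ch *chlParams) update() {
	ch.Err = 1 - ch.Hebb // e.g. the default Hebb = 0.001 gives Err = 0.999
}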

type CHLPrjn

type CHLPrjn struct {
	leabra.Prjn           // access as .Prjn
	CHL         CHLParams `` /* 129-byte string literal not displayed */
}

hip.CHLPrjn is a Contrastive Hebbian Learning (CHL) projection, based on basic rate-coded leabra.Prjn, that implements a pure CHL learning rule, which works better in the hippocampus.
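
A rough usage sketch, assuming the emergent/leabra v1 APIs (AddLayer2D, ConnectLayersPrjn) and the standard emer, prjn, and leabra import paths; the actual module path for this hip package may differ.

package main

import (
	"github.com/emer/emergent/emer"
	"github.com/emer/emergent/prjn"
	"github.com/emer/leabra/hip"
	"github.com/emer/leabra/leabra"
)

func main() {
	// Build a tiny network and wire CA3 -> CA1 with a CHLPrjn so that the
	// CHL learning rule (rather than the default XCAL) is used on that pathway.
	net := &leabra.Network{}
	net.InitName(net, "HipSketch")
	ca3 := net.AddLayer2D("CA3", 10, 10, emer.Hidden)
	ca1 := net.AddLayer2D("CA1", 10, 10, emer.Hidden)
	pj := net.ConnectLayersPrjn(ca3, ca1, prjn.NewFull(), emer.Forward, &hip.CHLPrjn{})
	chl := pj.(*hip.CHLPrjn)
	chl.CHL.On = true // enable CHL instead of standard XCAL learning
}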

func (*CHLPrjn) DWt

func (pj *CHLPrjn) DWt()

DWt computes the weight change (learning) -- on sending projections; the CHL version is used when On is set.
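
Given the On switch in CHLParams, the dispatch presumably looks like the following; this is a sketch of the likely internal structure, written as if inside package hip, not the verified source.

// DWt (sketch): use the pure CHL rule when enabled, otherwise fall back to
// the standard learning rule of the embedded leabra.Prjn.
func (pj *CHLPrjn) DWt() {
	if pj.CHL.On {
		pj.DWtCHL()   // CHL learning on the ThetaPhase activation variables
	} else {
		pj.Prjn.DWt() // default leabra (XCAL) learning
	}
}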

func (*CHLPrjn) DWtCHL

func (pj *CHLPrjn) DWtCHL()

DWtCHL computes the weight change (learning) for CHL
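
How the CHLParams pieces fit together per synapse, in a simplified standalone form using the minusAct, hebbDWt, and errDWt helpers sketched above, with plain float arguments in place of the package's actual synapse and connection structures:

// chlDWt sketches one synapse's CHL weight change: choose the minus-phase
// activations, compute the hebbian and error-driven terms on the linear
// weight, and blend them by the Hebb/Err proportions.
func chlDWt(hebbLrn, errLrn float32, minusQ1 bool, savgCor float32,
	sActP, sActM, sActQ1, rActP, rActM, rActQ1, linWt float32) float32 {
	sM := minusAct(minusQ1, sActM, sActQ1)
	rM := minusAct(minusQ1, rActM, rActQ1)
	h := hebbDWt(sActP, rActP, savgCor, linWt)
	e := errDWt(sActP, sM, rActP, rM, linWt)
	return hebbLrn*h + errLrn*e
}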

func (*CHLPrjn) Defaults

func (pj *CHLPrjn) Defaults()

func (*CHLPrjn) SAvgCor

func (pj *CHLPrjn) SAvgCor(slay *leabra.Layer) float32

SAvgCor computes the sending average activation, corrected according to the SAvgCor correction factor (typically makes layer appear more sparse than it is)

func (*CHLPrjn) SlpDWt

func (pj *CHLPrjn) SlpDWt(lrule string)

DS Added

func (*CHLPrjn) SlpDWtCHL

func (pj *CHLPrjn) SlpDWtCHL(lrule string)

DS Added. SlpDWtCHL computes sleep error-driven learning using average plus-phase and minus-phase activations.

func (*CHLPrjn) UpdateParams

func (pj *CHLPrjn) UpdateParams()

type EcCa1Prjn

type EcCa1Prjn struct {
	leabra.Prjn // access as .Prjn
}

hip.EcCa1Prjn is for EC <-> CA1 projections, to perform error-driven learning of this encoder pathway according to the ThetaPhase algorithm. It uses Contrastive Hebbian Learning (CHL) on ActP - ActQ1:

Q1: ECin -> CA1 -> ECout : ActQ1 = minus phase for auto-encoder
Q2, 3: CA3 -> CA1 -> ECout : ActM = minus phase for recall
Q4: ECin -> CA1, ECin -> ECout : ActP = plus phase for everything

func (*EcCa1Prjn) DWt

func (pj *EcCa1Prjn) DWt()

DWt computes the weight change (learning) -- on sending projections (Delta version).
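
One plausible reading of the "Delta version", as a hedged per-synapse sketch: the receiver's ActP - ActQ1 difference acts as the error signal, gated by the sending activation, with the usual soft weight bounding. The exact expression, including which sending-phase activation is used, is an assumption.

// ecCa1DWt sketches an error-driven delta-rule weight change over the
// ThetaPhase variables: plus phase (ActP) minus the auto-encoder minus phase
// (ActQ1) on the receiver, times the sending activation, soft-bounded so the
// linear weight stays in [0, 1].
func ecCa1DWt(sAct, rActP, rActQ1, linWt float32) float32 {
	err := (rActP - rActQ1) * sAct
	if err > 0 {
		err *= 1 - linWt
	} else {
		err *= linWt
	}
	return err
}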

func (*EcCa1Prjn) Defaults

func (pj *EcCa1Prjn) Defaults()

func (*EcCa1Prjn) UpdateParams

func (pj *EcCa1Prjn) UpdateParams()
