sims

package module
v2.0.0-...-faa894c Latest
Published: Mar 8, 2025 License: BSD-3-Clause Imports: 0 Imported by: 0

README

Computational Cognitive Neuroscience Simulations

This repository contains the neural network simulation models for the CCN Textbook. For more information, see the simulations website.

Status

  • August 2024: The sims are now being updated to run on the web as documented on the website.

  • Feb 15, 2023: Version 1.3.3 release: updated to use the improved Vulkan driver selection in the latest GoGi.

  • Sept 15, 2022: Version 1.3.2 release: updated to new NetView with raster view and separate weight recording.

  • Sept 9, 2021: Version 1.3.1 release: bug fixes, deep leabra version of sg, and Python support on Windows.

  • Nov 23, 2020: Version 1.2.2 release: full set of Python versions and the pvlv model.

  • See https://github.com/CompCogNeuro/sims/releases for full history

Developer notes

This section is not relevant for regular users.

The Makefile contains targets that build all of the sims programs and copy the resulting executables into a consolidated directory, ~/ccnsimpkg/, which can then be used to make the .zip / .tar files for distribution. The targets are: mac, linux, windows.

To build all Windows targets using the Makefiles on Windows (i.e., make windows), you have to use Cygwin with native make installed; recursive invocation of make did not work in PowerShell. You also have to mv /usr/bin/gcc.exe /usr/bin/gcc-cyg.exe so that the TDM-GCC-64 version of gcc is used; otherwise the build fails.
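
As a rough sketch of that workflow (the target names mac, linux, and windows and the gcc rename come from the notes above; the exact shell environment is an assumption):

```sh
# On macOS or Linux: build all sims and copy the executables into ~/ccnsimpkg/
make mac        # or: make linux

# On Windows, from a Cygwin shell with native make installed:
# rename Cygwin's gcc so the TDM-GCC-64 compiler is picked up instead
mv /usr/bin/gcc.exe /usr/bin/gcc-cyg.exe
make windows
```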

Documentation

Overview

Package sims contains the neural network simulation models for the [CCN Textbook](https://github.com/CompCogNeuro/book).

These models are implemented in the new *Go* (golang) version of [emergent](https://github.com/emer/emergent), with Python versions planned but not yet available.

This GitHub repository contains the full source code, and you can build and run the models by cloning the repository and building / running the individual projects, as described in the emergent wiki help page: [Wiki Install](https://github.com/emer/emergent/wiki/Install).
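
For example, a minimal sketch of that workflow (assuming a working Go toolchain set up as described in the Wiki Install page; ch2/neuron is just one of the simulation directories listed below):

```sh
# Clone the simulations repository and pick an individual simulation
git clone https://github.com/CompCogNeuro/sims
cd sims/ch2/neuron

# Build and run this simulation (opens the GUI)
go build
./neuron
```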

Directories

Path Synopsis

ch10
  dyslexia
    dyslexia simulates normal and disordered (dyslexic) reading performance in terms of a distributed representation of word-level knowledge across Orthography, Semantics, and Phonology.
  sem
    sem is trained using Hebbian learning on paragraphs from an early draft of the *Computational Explorations..* textbook, allowing it to learn about the overall statistics of when different words co-occur with other words, and thereby learning a surprisingly capable (though clearly imperfect) level of semantic knowledge about the topics covered in the textbook.
  sg
    sg is the sentence gestalt model, which learns to encode both syntax and semantics of sentences in an integrated "gestalt" hidden layer.
  ss
    ss explores the way that regularities and exceptions are learned in the mapping between spelling (orthography) and sound (phonology), in the context of a "direct pathway" mapping between these two forms of word representations.
ch2
  detector
    detector: This simulation shows how an individual neuron can act like a detector, picking out specific patterns from its inputs and responding with varying degrees of selectivity to the match between its synaptic weights and the input activity pattern.
  neuron
    neuron: This simulation illustrates the basic properties of neural spiking and rate-code activation, reflecting a balance of excitatory and inhibitory influences (including leak and synaptic inhibition).
ch3
  cats_dogs
    cats_dogs: This project explores a simple **semantic network** intended to represent a (very small) set of relationships among different features used to represent a set of entities in the world.
  faces
    faces: This project explores how sensory inputs (in this case simple cartoon faces) can be categorized in multiple different ways, to extract the relevant information and collapse across the irrelevant.
  inhib
    inhib: This simulation explores how inhibitory interneurons can dynamically control overall activity levels within the network, by providing both feedforward and feedback inhibition to excitatory pyramidal neurons.
  necker_cube
    necker_cube: This simulation explores the use of constraint satisfaction in processing ambiguous stimuli, in this case the *Necker cube*, which can be viewed as a cube in one of two orientations, where people flip back and forth.
ch4
  err_driven_hidden
    err_driven_hidden shows how XCal error driven learning can train a hidden layer to solve problems that are otherwise impossible for a simple two layer network (as we saw in the Pattern Associator exploration, which should be completed first before doing this one).
  family_trees
    family_trees shows how learning can recode inputs that have no similarity structure into a hidden layer that captures the *functional* similarity structure of the items.
  hebberr_combo
    hebberr_combo shows how XCal hebbian learning in shallower layers of a network can aid an error driven learning network to generalize to unseen combinations of patterns.
  pat_assoc
    pat_assoc illustrates how error-driven and hebbian learning can operate within a simple task-driven learning context, with no hidden layers.
  self_org
    self_org illustrates how self-organizing learning emerges from the interactions between inhibitory competition, rich-get-richer Hebbian learning, and homeostasis (negative feedback).
ch6
  attn
    attn: This simulation illustrates how object recognition (ventral, what) and spatial (dorsal, where) pathways interact to produce spatial attention effects, and accurately capture the effects of brain damage to the spatial pathway.
  objrec
    objrec explores how a hierarchy of areas in the ventral stream of visual processing (up to inferotemporal (IT) cortex) can produce robust object recognition that is invariant to changes in position, size, etc of retinal input images.
  v1rf
    v1rf illustrates how self-organizing learning in response to natural images produces the oriented edge detector receptive field properties of neurons in primary visual cortex (V1).
ch7
  abac
    abac explores the classic paired associates learning task in a cortical-like network, which exhibits catastrophic levels of interference.
  hip
    hip runs a hippocampus model on the AB-AC paired associate learning task.
  priming
    priming illustrates _weight-based priming_, that is, how small weight changes caused by the standard slow cortical learning rate can produce significant behavioral priming, causing the network to favor one output pattern over another.
ch8
  bg
    bg is a simplified basal ganglia (BG) network showing how dopamine bursts can reinforce *Go* (direct pathway) firing for actions that lead to reward, and dopamine dips reinforce *NoGo* (indirect pathway) firing for actions that do not lead to positive outcomes, producing Thorndike's classic *Law of Effect* for instrumental conditioning, and also providing a mechanism to learn and select among actions with different reward probabilities over multiple experiences.
  rl
    rl explores the temporal differences (TD) reinforcement learning algorithm under some basic Pavlovian conditioning environments.
ch9
  a_not_b
    a_not_b explores how the development of PFC active maintenance abilities can help to make behavior more flexible, in the sense that it can rapidly shift with changes in the environment.
  sir
    sir illustrates the dynamic gating of information into PFC active maintenance, by the basal ganglia (BG).
  stroop
    stroop illustrates how the PFC can produce top-down biasing for executive control, in the context of the widely studied Stroop task.
