package
0.0.0-20240905221516-6df987c31e31
Repository: https://github.com/princetoncompmemlab/neurodiff_leabra.git
Documentation: pkg.go.dev

# README

Installing emergent

[Screenshot: an example simulation window]

The current process for setting up the new emergent is spread across several webpages and can be a bit confusing or even contradictory. This is an attempt to put it all in one place, in simpler terms, and with troubleshooting tips informed by my own experience getting everything running.

File structure:

You need to set the GOPATH and GOROOT.

  • GOROOT is where the Go executable lives.
  • GOPATH is where Go packages are installed.

You'll use GOPATH for the actual project folders for your model. Each project will have a directory, so it'll be something like GOPATH/src/github.com/emer/leabra/examples/PROJECTNAME.

Note that Go forces the separation of GOPATH and GOROOT, so they can't be the same directory. This README will assume you are using a folder named go/ for GOROOT and gocode/ for GOPATH.

So first, decide where you want these folders to be:

  1. By default, the MacOS package installer places the Go distribution at /usr/local/go.
  2. By default, the Windows installer places it at c:\Go.
  3. You can also install the Go distribution somewhere else:
    1. If you don't want to use the default directories, you can put the GOPATH and GOROOT directories wherever you want (this can be in your home directory, in /tigress/username/ for della, or /jukebox/norman/username/ for spock).
    2. If you do this, make sure you set the paths correctly in your ~/.bashrc file, pointing GOROOT at your Go distribution and GOPATH at your packages directory.

We have to update the ~/.bashrc script in your home directory to match wherever you want these directories to be.

nano ~/.bashrc

Add the following lines:

export GOROOT="PATH/TO/CODE/go"
export GOPATH="PATH/TO/CODE/gocode"
export PATH=$GOROOT/bin:$PATH

Make sure to run:

source ~/.bashrc

Or else start a new session so the paths are updated.

Download Go

The first step to setting up emergent is to download Go. Pick your favored binary release here, download it, and run it. The installers for Windows (MSI) and Mac (pkg) do all the work for you.

You should download version 1.13 or later.

Downloading Go on the cluster

To download Go on the cluster, run

wget https://dl.google.com/go/go1.14.1.linux-amd64.tar.gz

In general, to download a file, just run wget followed by the download link. The above command downloads a .tar.gz file into your current directory. You probably want to run it from /PATH/TO/CODE, or else move the .tar.gz file there after you download it.

Install Go

When you ran wget https://dl.google.com/go/go1.14.1.linux-amd64.tar.gz, it created a .tar.gz file wherever you ran it. Move it to /PATH/TO/CODE, if you didn't run the command there already. Then extract the file:

tar -xzf go1.14.1.linux-amd64.tar.gz

That should have created a folder called go/ containing the Go toolchain (the go executable lives in go/bin/). This matches the name we chose for the GOROOT directory, so it can stay as is.

Then make the directory for the GOPATH:

mkdir PATH/TO/CODE/gocode

Now you should have two folders that match GOROOT and GOPATH.

Test Your Go Installation

You might want to make sure you installed Go successfully. To do this:

  1. Create a folder PATH/TO/CODE/gocode/src/hello
    1. Then create a file named hello.go with the following content:
package main

import "fmt"

func main() {
	fmt.Printf("hello, world\n")
}
  2. 'Build' the program by executing the command:
go build

within your hello folder. This creates an executable named hello (hello.exe on Windows) in your directory that you can then run to execute the code.
  3. Run the executable. (You can do this in your Mac terminal with ./hello and in Windows with hello.) If the command prints hello, world, you're golden. If not, try running echo $PATH to see if GOROOT and GOPATH were added correctly.

Next, add the following lines to your ~/.bashrc. This disables Go modules (a newer Go feature that causes problems for us) by default, and defines gomod/nogomod helper functions for toggling the setting:

export GO111MODULE=off

gomod() {
    export GO111MODULE=on
}

nogomod() {
    export GO111MODULE=off
}

Install the GoGi toolkit

The GoGi GUI framework provides the graphical user interface that visualizes our models and lets you interact with them. It can be set up using your Go installation. Depending on your initial success and operating system, you may have to install some additional tools before installing the toolkit:

Windows Pre-Installation Steps

The Windows install requires a "mingw compatible gcc" -- a C compiler, which Go needs (via cgo) to build the toolkit's native dependencies. The install wiki recommends this one.

MacOS Pre-Installation Steps

The MacOS install requires that you first have XCode and the relevant header files installed (e.g. gcc to compile Go programs). You only need to do this once. This can be achieved with two commands:

> xcode-select --install
> open /Library/Developer/CommandLineTools/Packages/macOS_SDK_headers_for_macOS_10.14.pkg

Cluster Pre-Installation steps

If you're on the cluster, make sure that the correct version of gcc is loaded. The default gcc version on spock is too outdated, so run

module load rh/devtoolset/8

Toolkit Installation

No matter your OS, you have to complete these steps to execute the actual installation.

  1. Start with the terminal command:
go get github.com/goki/gi
    1. Note: You may or may not see a warning, for example: "no Go files in..."; you can safely ignore the warning.
  2. Next, we ensure that all dependencies are installed/updated in the relevant examples/widgets directory:
> cd PATH/TO/CODE/gocode/src/github.com/goki/gi/examples/widgets
> go get -u ./...

The location of the relevant directory could depend on your OS and Go path settings.

The goki install page includes some troubleshooting tips if you had trouble here. Or you can reach out to me!

Install leabra

leabra is considered the basic template/starting point for creating your own simulations, with the Go and Python versions closely matched in functionality to support development in either direction. The install process for it is pretty similar to that for the GoGi toolkit!

Either run 1 or 2. Do not run both!!

  1. If you'd like to install the official version of leabra
    1. Execute in your terminal. Again, ignore any warnings about no go files, etc.
go get github.com/emer/leabra
  2. If you'd like to install a different version of leabra (e.g. private-leabra), cd into the correct directory in GOPATH:
    1. Make sure the correct branch is checked out:
cd PATH/TO/CODE/gocode/src/github.com/
mkdir emer
cd emer
git clone https://github.com/PrincetonCompMemLab/neurodiff_leabra.git leabra
git clone https://github.com/PrincetonCompMemLab/neurodiff_emergent.git emergent
cd leabra
git checkout origin/dev
git checkout dev
  3. Ensure all dependencies are installed in the relevant examples/ra25 directory with these steps, again modifying paths to suit your settings and OS (you may want to do this for your particular model, instead of ra25):
> cd PATH/TO/CODE/gocode/src/github.com/emer/leabra/examples/ra25
> go get -u ./...
  4. Finally, to actually run the simulation, build and run the executable associated with the script:
> cd PATH/TO/CODE/gocode/src/github.com/emer/leabra/examples/ra25
> go build
> ./ra25

Your setup has been successful if that last command generates a window like the one at the top of this guide. In general, the expected process for making simulations with emergent is to copy the ra25.go code to your own repository and modify it according to your specifications. When you're done, run go build to turn the modified code into an executable simulation, just like above!

Adding Python Support

One of the most exciting possibilities realized by the new emergent, though, is the option to avoid developing your simulation in Go and instead write your code with Python. The anticipated development process is quite similar (you'll just be editing and executing a Python-based implementation of leabra), but extra installation steps are necessary to support integrating Python into the framework. Unfortunately, neither I nor the emergent developers have figured out how to make this extra functionality work with Windows yet, so at least for now these instructions are MacOS/Unix specific. Indeed, even the instructions presented here are temporary and likely to change after further updates to emergent's components.

To complete the process, you'll need pkg-config. One of the easiest ways to install it is with the homebrew command brew install pkg-config. If you don't have homebrew, you can get it with the command /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)".

  1. First, make sure you have the latest version of Python.
  2. Next, execute these commands in your terminal to install some key dependencies:
> python3 -m pip install --upgrade pybindgen setuptools wheel pandas
> go get golang.org/x/tools/cmd/goimports
> go get github.com/go-python/gopy
> go get github.com/goki/gopy
  3. These packages may need to be built into executables and added to your ~/go/bin directory:
> cd ~/go/src/golang.org/x/tools/cmd/goimports
> go install
> cd ~/go/src/github.com/go-python/gopy
> git fetch origin pull/180/head:pr180
> git checkout pr180
> go install

Check whether the gopy and goimports executables have been successfully added to your ~/go/bin directory. If not, you may have to add them yourself: running go build in the respective directory will generate the relevant executable, which you can then copy over. (open . will open the terminal's current directory in your file browser.)

Similarly, if along the way you get an error message about a missing gopyh module, you may also need to manually copy the gopyh folder from ~/go/src/github.com/goki/gopy into the ~/go/src/github.com/go-python/gopy directory.

  4. Next, we install some more Python-side interface dependencies:
> cd ~/go/src/github.com/goki/gi/python
> sudo make
> sudo make install
  5. The penultimate step builds and places the pyleabra and pyemergent executables into your /usr/local/bin directory:
> cd ~/go/src/github.com/emer/leabra/python
> sudo make
> sudo make install
  6. Finally, we execute the Python version of ra25. If this opens an interface and begins a simulation, then your installation was successful:
> cd ../examples/ra25
> pyleabra -i ra25.py

# Functions

JsonToParams reformats json output to suitable params display output.
NeuronVarByName returns the index of the variable in the Neuron, or error.
NewTime returns a new Time struct with default parameters.
SigFun is the sigmoid function for value w in 0-1 range, with gain and offset params.
SigFun61 is the sigmoid function for value w in 0-1 range, with default gain = 6, offset = 1 params.
SigInvFun is the inverse of the sigmoid function.
SigInvFun61 is the inverse of the sigmoid function, with default gain = 6, offset = 1 params.
SynapseVarByName returns the index of the variable in the Synapse, or error.
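The SigFun family above can be illustrated with a short Go sketch. This assumes one common functional form of a gain/offset weight sigmoid (the package's exact formula may differ), with SigFun61 fixing the default gain = 6, offset = 1; SigInvFun is written to be the exact inverse of this form:

```go
package main

import (
	"fmt"
	"math"
)

// SigFun applies sigmoidal contrast enhancement to a weight w in the 0-1
// range, with the given gain and offset params (assumed functional form).
func SigFun(w, gain, off float64) float64 {
	if w <= 0 {
		return 0
	}
	if w >= 1 {
		return 1
	}
	return 1.0 / (1.0 + math.Pow(off*(1-w)/w, gain))
}

// SigFun61 uses the default gain = 6, offset = 1 params.
func SigFun61(w float64) float64 { return SigFun(w, 6, 1) }

// SigInvFun is the inverse of SigFun: SigInvFun(SigFun(w)) == w.
func SigInvFun(w, gain, off float64) float64 {
	if w <= 0 {
		return 0
	}
	if w >= 1 {
		return 1
	}
	return 1.0 / (1.0 + math.Pow(1/w-1, 1/gain)/off)
}

func main() {
	// contrast enhancement: weights below 0.5 are pushed toward 0,
	// weights above 0.5 are pushed toward 1
	fmt.Printf("SigFun61(0.3) = %.4f\n", SigFun61(0.3))
	fmt.Printf("SigFun61(0.7) = %.4f\n", SigFun61(0.7))
	// round-trip through the inverse recovers the original weight
	fmt.Printf("roundtrip(0.3) = %.4f\n", SigInvFun(SigFun61(0.3), 6, 1))
}
```

With gain = 6 the curve is steeply sigmoidal around w = 0.5, which is what produces the weight-contrast enhancement the descriptions above refer to.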

# Constants

ActNoise means noise is added to the final rate code activation.
The activation noise types.
AlphaCycle is typically 100 cycles = 100 msec (10 hz) = one alpha-frequency cycle, which is the fundamental unit of learning in posterior cortex.
BetaCycle is typically 50 cycles = 50 msec (20 hz) = one beta-frequency cycle.
Block is a collection of Trials, Sequences or Events, often used in experiments when conditions are varied across blocks.
Cycle is the finest time scale -- typically 1 msec -- a single activation update.
Episode is a sequence of scenes that constitutes the next larger-scale unit of naturalistic experience e.g., going to the grocery store or eating at a restaurant, attending a wedding or other "event".
Epoch is used in two different contexts.
Event is the smallest unit of naturalistic experience that coheres unto itself (e.g., something that could be described in a sentence).
Expt is an entire experiment -- multiple Runs through a given protocol / set of parameters.
FastSpike is typically 10 cycles = 10 msec (100hz) = the fastest spiking time generally observed in the brain.
GeMultNoise means that noise is multiplicative on the Ge excitatory conductance values.
GeNoise means noise is added to the excitatory conductance (Ge).
the commit JUST BEFORE the release.
The neuron flags.
NeurHasCmpr means the neuron has external comparison input in its Targ field -- used for computing comparison statistics but does not drive neural activity ever.
NeurHasExt means the neuron has external input in its Ext field.
NeurHasTarg means the neuron has external target input in its Targ field.
NeurOff flag indicates that this neuron has been turned off (i.e., lesioned).
NeuronVarStart is the byte offset of fields in the Neuron structure where the float32 named variables start.
NoNoise means no noise added.
Phase is either Minus or Plus phase -- Minus = first 3 quarters, Plus = last quarter.
Q1 is the first quarter, which, due to 0-based indexing, shows up as Quarter = 0 in timer.
The quarters.
Quarter is typically 25 cycles = 25 msec (40hz) = 1/4 of the 100 msec alpha trial. This is also the GammaCycle (gamma = 40hz), but we use Quarter functionally by virtue of there being 4 per AlphaCycle.
Run is a complete run of a model / subject, from training to testing, etc.
Scene is a sequence of events that constitutes the next larger-scale coherent unit of naturalistic experience corresponding e.g., to a scene in a movie.
Sequence is a sequential group of Trials (not always needed).
ThetaCycle is typically 200 cycles = 200 msec (5 hz) = two alpha-frequency cycles.
Tick is one step in a sequence -- often it is useful to have Trial count up throughout the entire Epoch but also include a Tick to count trials within a Sequence.
The time scales.
Trial is one unit of behavior in an experiment -- it is typically environmentally defined instead of endogenously defined in terms of basic brain rhythms.
UTC.
VmNoise means noise is added to the membrane potential.
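The nesting of the time scales described above (Cycle, FastSpike, Quarter, BetaCycle, AlphaCycle, ThetaCycle) can be sketched with plain Go constants. The names mirror the constants above, but this is only an illustration of the cycle arithmetic, not the package's actual definitions:

```go
package main

import "fmt"

// Illustrative cycle counts for the Leabra time scales, where one
// Cycle is a single 1 msec activation update.
const (
	Cycle      = 1              // 1 msec: a single activation update
	FastSpike  = 10 * Cycle     // 10 msec (100hz): fastest spiking observed
	Quarter    = 25 * Cycle     // 25 msec (40hz): 1/4 alpha, also GammaCycle
	BetaCycle  = 2 * Quarter    // 50 msec (20hz): one beta-frequency cycle
	AlphaCycle = 4 * Quarter    // 100 msec (10hz): fundamental unit of learning
	ThetaCycle = 2 * AlphaCycle // 200 msec (5hz): two alpha-frequency cycles
)

func main() {
	fmt.Println("quarters per alpha:", AlphaCycle/Quarter)
	fmt.Println("cycles per theta:", ThetaCycle)
}
```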

# Variables


# Structs

ActAvg are running-average activation levels used for netinput scaling and adaptive inhibition.
ActAvgParams represents expected average activity levels in the layer.
ActInitParams are initial values for key network state variables.
ActNoiseParams contains parameters for activation-level noise.
leabra.ActParams contains all the activation computation params and functions for basic Leabra, at the neuron level.
AvgLParams are parameters for computing the long-term floating average value, AvgL which is used for driving BCM-style hebbian learning in XCAL -- this form of learning increases contrast of weights and generally decreases overall activity of neuron, to prevent "hog" units -- it is computed as a running average of the (gain multiplied) medium-time-scale average activation at the end of the alpha-cycle.
ClampParams are for specifying how external inputs are clamped onto network activation values.
CosDiffParams specify how to integrate cosine of difference between plus and minus phase activations Used to modulate amount of hebbian learning, and overall learning rate.
CosDiffStats holds cosine-difference statistics at the layer level.
DtParams are time and rate constants for temporal derivatives in Leabra (Vm, net input).
DWtNormParams are weight change (dwt) normalization parameters, using MAX(ABS(dwt)) aggregated over Sending connections in a given projection for a given unit.
FFFBInhib contains values for computed FFFB inhibition.
leabra.InhibParams contains all the inhibition computation params and functions for basic Leabra. This is included in leabra.Layer to support computation.
leabra.Layer has parameters for running a basic rate-coded Leabra layer.
leabra.LayerStru manages the structural elements of the layer, which are common to any Layer type.
leabra.LearnNeurParams manages learning-related parameters at the neuron-level.
leabra.LearnSynParams manages learning-related parameters at the synapse-level.
LrnActAvgParams has rate constants for averaging over activations at different time scales, to produce the running average activation values that then drive learning in the XCAL learning rules.
MomentumParams implements standard simple momentum -- accentuates consistent directions of weight change and cancels out dithering -- biologically captures slower timecourse of longer-term plasticity mechanisms.
leabra.Network has parameters for running a basic rate-coded Leabra network.
leabra.NetworkStru holds the basic structural components of a network (layers).
leabra.Neuron holds all of the neuron (unit) level variables -- this is the most basic version with rate-code only and no optional features at all.
OptThreshParams provides optimization thresholds for faster processing.
Pool contains computed values for FFFB inhibition, and various other state values for layers and pools (unit groups) that can be subject to inhibition, including: * average / max stats on Ge and Act that drive inhibition * average activity overall that is used for normalizing netin (at layer level).
leabra.Prjn is a basic Leabra projection with synaptic learning parameters.
PrjnStru contains the basic structural information for specifying a projection of synaptic connections between two layers, and maintaining all the synaptic connection-level data.
SelfInhibParams defines parameters for Neuron self-inhibition -- activation of the neuron directly feeds back to produce a proportional additional contribution to Gi.
leabra.Synapse holds state for the synaptic connection between neurons.
leabra.Time contains all the timing state and parameter information for running a model.
WtBalParams are weight balance soft renormalization params: maintains overall weight balance by progressively penalizing weight increases as a function of how strong the weights are overall (subject to thresholding) and long time-averaged activation.
WtBalRecvPrjn are state variables used in computing the WtBal weight balance function. There is one of these for each Recv Neuron participating in the projection.
WtInitParams are weight initialization parameters -- basically the random distribution parameters but also Symmetry flag.
WtScaleParams are weight scaling parameters: modulates overall strength of projection, using both absolute and relative factors.
WtSigParams are sigmoidal weight contrast enhancement function parameters.
XCalParams are parameters for the temporally eXtended Contrastive Attractor Learning function (XCAL), which is the standard learning equation for leabra.
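XCalParams parameterizes the XCAL learning function. As a hedged sketch, the typically published form of XCAL is a piecewise-linear "check mark": zero below a small threshold, a steep negative segment up to a reversal point at DRev times the floating threshold, and x - th above it. The parameter names DRev/DThr and their default values here are assumptions drawn from the standard published description, not necessarily this package's exact implementation:

```go
package main

import "fmt"

const (
	// DRev is the reversal point as a proportion of the threshold
	// (assumed default).
	DRev = 0.1
	// DThr is the activity level below which no weight change occurs
	// (assumed default).
	DThr = 0.0001
)

// xcal is the piecewise-linear XCAL "check mark" function: given the
// short-term activity coproduct x and floating threshold th, it returns
// the weight change driven by the contrastive attractor dynamic.
func xcal(x, th float64) float64 {
	switch {
	case x < DThr:
		return 0 // too little activity: no change
	case x > th*DRev:
		return x - th // above reversal point: move toward x
	default:
		return -x * (1 - DRev) / DRev // below reversal: weight decrease
	}
}

func main() {
	th := 0.5
	fmt.Println(xcal(0.6, th))  // above threshold: positive change
	fmt.Println(xcal(0.02, th)) // below reversal: negative change
	fmt.Println(xcal(0.0, th))  // below DThr: no change
}
```

The two linear segments meet at x = DRev*th, giving the characteristic check-mark shape that drives both LTD (below threshold) and LTP (above it).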

# Interfaces

LeabraLayer defines the essential algorithmic API for Leabra, at the layer level.
LeabraNetwork defines the essential algorithmic API for Leabra, at the network level.
LeabraPrjn defines the essential algorithmic API for Leabra, at the projection level.

# Type aliases

ActNoiseType are different types / locations of random noise for activations.
NeurFlags are bit-flags encoding relevant binary state for neurons.
Quarters are the different alpha trial quarters, as a bitflag, for use in relevant timing parameters where quarters need to be specified.
TimeScales are the different time scales associated with overall simulation running, and can be used to parameterize the updating and control flow of simulations at different scales.
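Since Quarters is described above as a bitflag over the four alpha-trial quarters, here is a generic sketch of how such a bitflag type works in Go. The names are modeled on the constants above, but this is an illustration of the bitflag pattern, not the package's actual API:

```go
package main

import "fmt"

// Quarters is an illustrative bitflag over the four alpha-trial quarters.
type Quarters uint32

const (
	Q1 Quarters = 1 << iota // first quarter (Quarter = 0 in the timer)
	Q2
	Q3
	Q4
)

// Set turns on the given quarter bits.
func (q *Quarters) Set(flags Quarters) { *q |= flags }

// Has reports whether all of the given quarter bits are on.
func (q Quarters) Has(flags Quarters) bool { return q&flags == flags }

func main() {
	var plusPhase Quarters
	plusPhase.Set(Q4) // e.g. plus phase = last quarter only
	fmt.Println("Q4 set:", plusPhase.Has(Q4))
	fmt.Println("Q1 set:", plusPhase.Has(Q1))
}
```

Timing parameters can then name any subset of quarters in a single value (e.g. Q1|Q3), which is the point of using a bitflag rather than a plain enum.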