Commit 054619b

Authored by: ago109 (Alexander Ororbia), rxng8, and willgebhardt
Minor nudge to v3.0.1 (#129)
* minor edit to math in hh-lesson doc
* Fix workflow, numpy install, and pytest bug in github action workflows (#117)
* Update pyproject.toml
* Update python-package-conda.yml
* Update python-package-conda.yml
* Update python-package-conda.yml
* Update python-package-conda.yml
* Update python-package-conda.yml
* Update python-package-conda.yml
* Update python-package-conda.yml
* minor nudge/cleanup to minor patched version 2.0.1
* minor nudge/cleanup to minor patched version 2.0.3
* Merged minor doc fix back to main (for syncing purposes) (#119)
* Nudge of release to minor patched version 2.0.3 (#118)
* nudge of doc to 2.0.2 (#115)

Co-authored-by: Alexander Ororbia <[email protected]>

* minor edit to math in hh-lesson doc
* Fix workflow, numpy install, and pytest bug in github action workflows (#117)
* Update pyproject.toml
* Update python-package-conda.yml
* Update python-package-conda.yml
* Update python-package-conda.yml
* Update python-package-conda.yml
* Update python-package-conda.yml
* Update python-package-conda.yml
* Update python-package-conda.yml
* minor nudge/cleanup to minor patched version 2.0.1
* minor nudge/cleanup to minor patched version 2.0.3

---------

Co-authored-by: Alexander Ororbia <[email protected]>
Co-authored-by: Viet Dung Nguyen <[email protected]>

* fixed typo/error in doc evolving_synapses.md

---------

Co-authored-by: Alexander Ororbia <[email protected]>
Co-authored-by: Viet Dung Nguyen <[email protected]>

* minor clean-up in model_basics docs
* minor fixes/cleanup of docs
* fixed typo in integration tutorial doc
* updated papers/talk page for ngclearn
* Merging over v3 to main (for roll-out of v3 upgrade) (#125)
* Working v3
* Undid fixed compartments: undid the fixed compartments to work with new global constant tracking
* Fixed an execution bug
* ported over quad-lif to v3 - needs testing
* ported over IF/quadLIF cells, minor revision to LIF cell
* Start util cleanup
* refactored/ported RAFCell to v3
* ported over/refactored WTASCell for v3
* wrote successful unit-test of WTASCell
* put back in init-structure/pointers
* fixed minor error in LIFCell, got unit-test for LIFCell to run
* quad-lif test sketched
* sketch of ifcell test
* fixed minor bugs and tests locally pass for if, quad-lif, and lif-cells now, with minor patches to helper functions and doc-strings
* refactored raf-cell and test passed
* refactored adex/test passed; minor cleanup in lif, raf, and wtas cells
* refactored fn-cell and test passed
* cleaned up lif, raf, wtas, fn, and quad-lif cells' repr method
* refactored and tests passed for izh and h-h cells
* JaxProcess update
* cleaned up dunder repr method, moved to JaxComponent parent; fixed __init__ pointer to tensorstats
* refactored alpha and exp-synapses, tests passed; minor edit to __init__ for synapses
* refactored short-term syn, tests passed - including stp-dense-syn and minor cleanup/edit to synapse __init__
* refactored bcm-syn and test passed
* refactored exp-stdp-syn and passed tests for exp-stdp-syn and trace-stdp-syn
* refactored event-stdp-syn and test passed
* refactored mstdpet-syn and test passed
* refactored stdp-conv-syn/conv-syn and test passed
* refactored and passed test for deconv/stdp-deconv-syn and other minor cleanup for conv/deconv support
* Refactoring neuronal and synaptic components (#123) - merge from fork to v3
* refactoring graded cells
* update refactored models
* update sLIF cell

---------

Co-authored-by: Alex Ororbia <[email protected]>

* commented out deprecator in hebb-syn and exp-kernel
* update hebbian synapse
* update hebbian synapse
* working reinforce synapse
* minor edits to exp-kernel/wtas-cell
* update requirements
* refactored conv/deconv-hebb-syn and tests passed
* update hebbian synapse reset bug
* update reset methods
* update patched synapse reset
* add `not self.inputs.targeted and ` to required components. Fixing general `__repr__` bug in `jaxcomponent`
* minor edit to lif/modulated-syn init file
* fixed some minor bugs in rate-coded cells/hebb-syn
* update code
* minor patches to components, including hebb-syn/conv/deconv and reward-cell
* minor patches to components, including hebb-syn/conv/deconv and reward-cell
* update testing for graded neurons and input encoders
* update phasor cell
* update test bernoulli cell and poisson cell
* update components and their related test cases
* fixed monitor bugs from v2, tweaked unit-tests for input-encoders/latency-cell
* update test case for test_sLIFCell.py
* some cleanup
* made revisions to components/clean-up; added back in deprecators
* removed lava sub-module, and removed monitor/base-monitor legacy components
* minor cleanup of inits
* refactored regression module to be compliant with v3
* adjusted sphinx-docs w.r.t. new v3 refactoring
* minor revision to double-exp syn pointing, mods to modeling docs
* updated adex tutorial doc to v3
* revised adex and error-cell neurocog tutorials
* fixed minor issues in input-encoders, further revisions to docs for v3
* revised dyn/chem-syn neurocog doc, cleaned up dynamic syn
* revised fn and hh-cell neurocog docs, added some refs to distribution generator
* revised integration and izh-cell neurocog docs
* revised izh-cell, cleaned-up fn-cell, and revised lif neurocog docs
* revised metrics/plotting neurocog docs
* revised mod/reward-stdp neurocog doc
* revised stp-syn neurocog doc and updated stp-syn to use proper initializer
* revised elements of utils to comply with docs
* revised stdp neurocog doc to v3
* revised traces neurocog tutorial to v3
* cleaned up utils.optim and wrote compliant NAG optim
* cleaned up utils.optim and wrote compliant NAG optim
* cleanup of components, added leaky-noise-cell, minor edits
* revised leaky-noise-cell, wrote its unit test, test-passed
* some revisions/updates to toc/pointer/general tutorial docs
* minor revisions to pyproject/req files
* update reinforce synapse
* update test cases
* implemented in-house gmm, in-built to ngclearn; tested on gaussian mode data
* wrote gmm density estimator tutorial
* patched some tests/syn/neuron components, added sketch of bmm density
* fixed test_laplacianErrorCell and laplace-cell bug
* fixed test_laplacianErrorCell and laplace-cell bug
* made patches to bmm
* updated density tutorial/neurocog doc
* minor edit to gmm/bmm docs
* minor edit to gmm/bmm docs
* cleaned up density structure, use parent mixture class to organize model variations
* cleaned up density structure, use parent mixture class to organize model variations
* added basic exp-mixture to utils.density
* minor edits to emm
* cleaned up mixtures and finished debugging EMM/works on example
* removed old weight_distribution.py, other cleanup/revisions throughout
* minor edit to data-loader
* revised tests to no longer use weight_distribution/revisions throughout
* minor edit to emm doc
* added bic calculation to metric_utils
* fix ratecell bug of passing unrelated kwargs to parent class
* added calc_update() co-routine to hebbian-syn component
* fix weight init
* integrated rbm/harmonium model-exhibit
* Update __init__.py: added the config/logging back to the init
* placed pointer to rao-ballard1999 exhibit; updates to docs
* updates to docs/revisions
* removed flag from bernoulli/latency-cells for now; minor edit to doc
* updates to theory doc
* updated history log
* minor clean-up of ngclearn.utils.viz.dim_reduce
* Update jaxComponent.py: added support for turning off autosave
* update hebbian synapse saving
* update saving and loading utils, making hebbian synapse use these utils for custom optimizer params saving and loading
* minor revisions/polish
* modded docs to include v3 foundations
* updates to init for logging
* Updates to lessons
* final cleanup/polish/update to docs for v3 nudge
* updates to museum doc for v3
* nudged citation file
* minor nudge to docs/files to point to v3

---------

Co-authored-by: Will Gebhardt <[email protected]>
Co-authored-by: Alexander Ororbia <[email protected]>
Co-authored-by: Viet Dung Nguyen <[email protected]>
Co-authored-by: Viet Nguyen <[email protected]>
Co-authored-by: Viet Dung Nguyen <[email protected]>

* update to rbm/harmonium doc
* updated leaky-noise-cell to maintain temporal derivative of state
* minor revisions/updates to hebb/dense syn, metric utils
* cleaned-up/revised leaky-noise-cell
* cleaned-up/revised leaky-noise-cell

---------

Co-authored-by: Alexander Ororbia <[email protected]>
Co-authored-by: Viet Dung Nguyen <[email protected]>
Co-authored-by: Will Gebhardt <[email protected]>
Co-authored-by: Viet Nguyen <[email protected]>
Co-authored-by: Viet Dung Nguyen <[email protected]>
1 parent 2d567be commit 054619b

File tree

6 files changed (+72, −75 lines)


docs/tutorials/neurocog/integration.md

Lines changed: 16 additions & 48 deletions
@@ -1,25 +1,14 @@
 # Numerical Integration
 
-In constructing one's own biophysical models, particularly those of phenomena that change with time, ngc-learn offers
-useful flexible tools for numerical integration that facilitate an easier time in constructing your own components that
-play well with the library's simulation backend. Knowing how things work beyond Euler integration -- the base/default
-form of integration often employed by ngc-learn -- might be useful for constructing and simulating dynamics more
-accurately (often at the cost of additional computational time).
+In constructing one's own biophysical models, particularly those of phenomena that change with time, ngc-learn offers useful, flexible tools for numerical integration that make it easier to construct your own components that play well with the library's simulation backend. Knowing how things work beyond Euler integration -- the base/default form of integration often employed by ngc-learn -- can be useful for constructing and simulating dynamics more accurately (often at the cost of additional computational time).
 
 ## Euler Integration
 
-Euler integration is very simple (and fast) way of using the ordinary differential equations you typically define for
-the cellular dynamics of various components in ngc-learn (which typically get called in any component's `advance_state()` command).
+Euler integration is a very simple (and fast) way of using the ordinary differential equations you typically define for the cellular dynamics of various components in ngc-learn (which typically get called in any component's `advance_state()` command).
 
-While utilizing the numerical integrator will depend on your component's design and the (biophysical) elements you wish
-to model, let's observe ngc-learn's base backend utilities (its integration backend `ngclearn.utils.diffeq`) in the
-context of numerically integrating a simple differential equation; specifically the autonomous (linear) ordinary
-differential equation (ODE): $\frac{\partial y(t)}{\partial t} = y(t)$. The analytic solution to this equation is
-also simple -- it is $y(t) = e^{t}$.
+While how you use the numerical integrator will depend on your component's design and the (biophysical) elements you wish to model, let's observe ngc-learn's base backend utilities (its integration backend `ngclearn.utils.diffeq`) in the context of numerically integrating a simple differential equation; specifically, the autonomous (linear) ordinary differential equation (ODE) $\frac{\partial y(t)}{\partial t} = y(t)$. The analytic solution to this equation is also simple -- it is $y(t) = e^{t}$.
 
-If you have defined your differential equation $\frac{\partial y(t)}{\partial t}$ in a rather simple format[^1], you
-can write the following code to examine how Euler integration approximates the analytical solution (in this example,
-we examine just two different step sizes, i.e., `dt = 0.1` and `dt = 0.09`)
+If you have defined your differential equation $\frac{\partial y(t)}{\partial t}$ in a rather simple format[^1], you can write the following code to examine how Euler integration approximates the analytical solution (in this example, we examine just two different step sizes, i.e., `dt = 0.1` and `dt = 0.09`):
 
 ```python
 from jax import numpy as jnp, random, jit, nn
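The diff truncates the tutorial's code block after its first line. As a minimal sketch of the kind of comparison that block describes -- assuming `step_euler` follows the three-argument `diff_eqn(t, y, params)` convention given in the footnote and returns the updated `(t, y)` pair, consistent with its use in `leakyNoiseCell.py` further below -- one could write:

```python
from jax import numpy as jnp
from ngclearn.utils.diffeq.ode_utils import step_euler

def diff_eqn(t, y, params):  ## dy/dt = y(t); analytic solution is y(t) = e^t
    return y

for dt in (0.1, 0.09):  ## the two step sizes examined in the tutorial
    t, y = 0., jnp.asarray([1.])  ## initial condition y(0) = 1
    for _ in range(10):
        t, y = step_euler(t, y, diff_eqn, dt, ())  ## assumed to return the updated (t, y)
    print(f"dt = {dt}: euler y ~ {float(y[0]):.5f} | exact = {float(jnp.exp(t)):.5f}")
```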
@@ -82,30 +71,13 @@ which should yield you a plot like the one below:
 
 <img src="../../images/tutorials/neurocog/euler_integration.jpg" width="500" />
 
-Notice how the integration constant `dt` (or $\Delta t$) chosen affects the approximation of ngc-learn's Euler
-integrator and typically, when constructing your biophysical models, you will need to think about this constant in
-the context of your simulation time-scale and what you intend to model. Note that, in many biophysical component cells,
-you will have an integration time constant of some form, i.e., a $\tau$, that you can control, allowing you to fix
-your `dt` to your simulated time-scale (say to a value like `dt = 1` millisecond) while tuning/altering your time
-constant $\tau$ (since the differential equation will be weighted by $\frac{\Delta t}{\tau}$).
+Notice how the chosen integration constant `dt` (or $\Delta t$) affects the approximation produced by ngc-learn's Euler integrator; typically, when constructing your biophysical models, you will need to think about this constant in the context of your simulation time-scale and what you intend to model. Note that, in many biophysical component cells, you will have an integration time constant of some form, i.e., a $\tau$, that you can control, allowing you to fix your `dt` to your simulated time-scale (say to a value like `dt = 1` millisecond) while tuning/altering your time constant $\tau$ (since the differential equation will be weighted by $\frac{\Delta t}{\tau}$).
 
 ## Higher-Order Forms of (Explicit) Integration
 
-Notably, ngc-learn has built-in several forms of (explicit) numerical integration beyond the Euler method, such as a
-second order Runge-Kutta (RK-2) method (also known as the midpoint method) and 4th-order Runge-Kutta (RK-4) method or
-an error-predictor method such as Heun's method (also known as the trapezoid method). These forms of integration might
-be useful particularly if a cell or plastic synaptic component you might be writing follows dynamics that are more
-nonlinear or biophysically complex (requiring a higher degree of simulation accuracy). For instance, ngc-learn's
-in-built cell components, particularly those of higher biophysical complexity -- like the [Izhikevich cell](ngclearn.components.neurons.spiking.izhikevichCell) or
-the [FitzhughNagumo cell](ngclearn.components.neurons.spiking.fitzhughNagumoCell) -- contain argument flags for switching their simulation steps to use RK-2.
-
-To illustrate the value of higher-order numerical integration methods, let us examine a simple polynomial equation
-(thus nonlinear) that is further non-autonomous, i.e., it is a function of the time variable $t$ itself. A possible set
-of dynamics in this case might be: $\frac{\partial y(t)}{\partial t} = -2 t^3 + 12 t^2 - 20 t + 8.5$ which has the
-analytic solution $y(t) = -(1/2) t^4 + 4 t^3 - 10 t^2 + 8.5 t + C$ (where we will set $C = 1$). You can write code
-like below, importing from `ngclearn.utils.diffeq.ode_utils` the Euler routine (`step_euler`), the RK-2 routine
-(`step_rk2`), the RK-4 routine (`step_rk4`), and Heun's method (`step_heun`), and compare how these methods
-approximate the nonlinear dynamics inherent to our constructed $\frac{\partial y(t)}{\partial t}$ ODE below:
+Notably, ngc-learn has several built-in forms of (explicit) numerical integration beyond the Euler method, such as the second-order Runge-Kutta (RK-2) method (also known as the midpoint method), the fourth-order Runge-Kutta (RK-4) method, and a predictor-corrector method such as Heun's method (also known as the trapezoid method). These forms of integration can be particularly useful if a cell or plastic synaptic component you are writing follows dynamics that are more nonlinear or biophysically complex (requiring a higher degree of simulation accuracy). For instance, ngc-learn's in-built cell components, particularly those of higher biophysical complexity -- like the [Izhikevich cell](ngclearn.components.neurons.spiking.izhikevichCell) or the [FitzhughNagumo cell](ngclearn.components.neurons.spiking.fitzhughNagumoCell) -- contain argument flags for switching their simulation steps to use RK-2.
+
+To illustrate the value of higher-order numerical integration methods, let us examine a simple polynomial equation (thus nonlinear) that is further non-autonomous, i.e., it is a function of the time variable $t$ itself. A possible set of dynamics in this case might be $\frac{\partial y(t)}{\partial t} = -2 t^3 + 12 t^2 - 20 t + 8.5$, which has the analytic solution $y(t) = -\frac{1}{2} t^4 + 4 t^3 - 10 t^2 + 8.5 t + C$ (where we will set $C = 1$). You can write code like the below, importing from `ngclearn.utils.diffeq.ode_utils` the Euler routine (`step_euler`), the RK-2 routine (`step_rk2`), the RK-4 routine (`step_rk4`), and Heun's method (`step_heun`), and compare how these methods approximate the nonlinear dynamics inherent to our constructed $\frac{\partial y(t)}{\partial t}$ ODE:
 
 ```python
 from jax import numpy as jnp, random, jit, nn
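This code block is likewise truncated in the diff. A minimal sketch of the comparison it describes -- again assuming each step routine shares the `(t, y, dfx, dt, params)` calling convention and returns the updated `(t, y)` pair -- might look like:

```python
from jax import numpy as jnp
from ngclearn.utils.diffeq.ode_utils import step_euler, step_rk2, step_rk4, step_heun

def poly_ode(t, y, params):  ## the non-autonomous polynomial ODE from the tutorial
    return -2 * t**3 + 12 * t**2 - 20 * t + 8.5

def exact(t):  ## analytic solution with C = 1
    return -0.5 * t**4 + 4 * t**3 - 10 * t**2 + 8.5 * t + 1.

dt = 0.5
for name, step_fn in [("euler", step_euler), ("heun", step_heun), ("rk2", step_rk2), ("rk4", step_rk4)]:
    t, y = 0., jnp.asarray([1.])  ## y(0) = C = 1
    for _ in range(8):  ## integrate out to t = 4
        t, y = step_fn(t, y, poly_ode, dt, ())
    print(f"{name}: y(4) ~ {float(y[0]):.4f} | exact = {exact(4.):.4f}")
```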
@@ -176,15 +148,11 @@ which should yield you a plot like the one below:
 
 <img src="../../images/tutorials/neurocog/ode_method_comparison.jpg" width="500" />
 
-As you might observe, RK-4 give the best approximation of the solution. In addition, when the integration step size is
-held constant, Euler integration does quite poorly over just a few steps while RK-2 and Heun's method do much better at
-approximating the analytical equation. In the end, the type of numerical integration method employed can matter
-depending on the ODE(s) you use in modeling, particularly if you seek higher accuracy for more nonlinear dynamics like
-in our example above.
-
-[^1]: The format expected by ngc-learn's backend is that the differential equation provides a functional API/form
-like so: for instance `dy/dt = diff_eqn(t, y(t), params)`, representing
-$\frac{\partial \mathbf{y}(t, \text{params})}{\partial t}$, noting that you can name your 3-argument function (and
-its arguments) anything you like. Your function does not need to use all of the arguments (i.e., `t`, `y`, or
-`params`, the last of which is a tuple containing any fixed constants your equation might need) to produce its
-output. Finally, this function should only return the value(s) for `dy/dt` (vectors/matrices of values).
+As you might observe, RK-4 gives the best approximation of the solution. In addition, when the integration step size is held constant, Euler integration does quite poorly over just a few steps while RK-2 and Heun's method do much better at approximating the analytical equation. In the end, the type of numerical integration method employed can matter depending on the ODE(s) you use in modeling, particularly if you seek higher accuracy for more nonlinear dynamics like in our example above.
+
+[^1]: The format expected by ngc-learn's backend is that the differential equation
+provides a functional API/form like so: for instance `dy/dt = diff_eqn(t, y(t), params)`,
+representing $\frac{\partial \mathbf{y}(t, \text{params})}{\partial t}$,
+noting that you can name your 3-argument function (and its arguments) anything you like.
+Your function does not need to use all of the arguments (i.e., `t`, `y`, or `params`, the last of which is a tuple containing any fixed constants your equation might need) to produce its output.
+Finally, this function should only return the value(s) for `dy/dt` (vectors/matrices of values).
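For instance, a function compliant with this footnote's convention (the name `leaky_ode` and its constant are hypothetical) could look like:

```python
def leaky_ode(t, y, params):
    tau, = params  ## `params` is a tuple of fixed constants; here, a (hypothetical) time constant
    return -y / tau  ## return only the value(s) for dy/dt
```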

ngclearn/components/neurons/graded/leakyNoiseCell.py

Lines changed: 30 additions & 14 deletions
@@ -22,18 +22,23 @@ class LeakyNoiseCell(JaxComponent): ## Real-valued, leaky noise cell
 
     The specific differential equation that characterizes this cell (for adjusting x) is:
 
-    | tau_x * dx/dt = -x + j_rec + j_in + sqrt(2 alpha (sigma_rec)^2) * eps
+    | tau_x * dx/dt = -x + j_rec + j_in + sqrt(2 alpha (sigma_pre)^2) * eps; and,
+    | r = f(x) + (eps * sigma_post).
     | where j_in is the set of incoming input signals
     | and j_rec is the set of recurrent input signals
     | and eps is a sample of unit Gaussian noise, i.e., eps ~ N(0, 1)
+    | and f(x) is the rectification function
+    | and sigma_pre is the pre-rectification noise applied to membrane x
+    | and sigma_post is the post-rectification noise applied to rates f(x)
 
     | --- Cell Input Compartments: ---
     | j_input - input (bottom-up) electrical/stimulus current (takes in external signals)
     | j_recurrent - recurrent electrical/stimulus pressure
     | --- Cell State Compartments: ---
     | x - noisy rate activity / current value of state
     | --- Cell Output Compartments: ---
-    | r - post-rectified activity, i.e., fx(x) = relu(x)
+    | r - post-rectified activity, e.g., fx(x) = relu(x)
+    | r_prime - post-rectified temporal derivative, e.g., dfx(x) = d_relu(x)
 
     Args:
         name: the string name of this cell
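Stepping outside the diff for a moment: the docstring's dynamics can be made concrete with a small pure-JAX sketch. Note that the role of `alpha` is not pinned down by the diff; the common convention `alpha = dt / tau_x` for discretized noisy rate dynamics is assumed here, and all names/values below are purely illustrative.

```python
from jax import numpy as jnp, random, nn

def leaky_noise_step(key, x, j_in, j_rec, dt, tau_x=10., sigma_pre=0.1, sigma_post=0.1):
    """One hypothetical Euler step of: tau_x dx/dt = -x + j_rec + j_in + sqrt(2 alpha sigma_pre^2) eps."""
    key, k1, k2 = random.split(key, 3)
    eps_pre = random.normal(k1, x.shape)   ## pre-rectification unit Gaussian noise
    eps_post = random.normal(k2, x.shape)  ## post-rectification unit Gaussian noise
    alpha = dt / tau_x  ## assumed discretization factor (not stated in the diff)
    dx_dt = (-x + j_rec + j_in + jnp.sqrt(2. * alpha * sigma_pre**2) * eps_pre) / tau_x
    x = x + dx_dt * dt  ## Euler integration of the state
    r = nn.relu(x) + eps_post * sigma_post  ## rectified rate with post-rectification noise
    return key, x, r

key = random.PRNGKey(0)
x = jnp.zeros((1, 8))
key, x, r = leaky_noise_step(key, x, j_in=jnp.ones((1, 8)), j_rec=jnp.zeros((1, 8)), dt=1.)
```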
@@ -53,19 +58,23 @@ class LeakyNoiseCell(JaxComponent): ## Real-valued, leaky noise cell
         :Note: setting the integration type to the midpoint method will increase the accuracy of the estimate of
             the cell's evolution at an increase in computational cost (and simulation time)
 
-        sigma_rec: noise scaling factor / standard deviation (Default: 1)
+        sigma_pre: pre-rectification noise scaling factor / standard deviation (Default: 0.1)
+
+        sigma_post: post-rectification noise scaling factor / standard deviation (Default: 0.1)
+
+        leak_scale: degree to which membrane leak should be scaled (Default: 1)
     """
 
-    # Define Functions
     def __init__(
-        self, name, n_units, tau_x, act_fx="relu", integration_type="euler", batch_size=1, sigma_rec=1.,
-        leak_scale=1., shape=None, **kwargs
+        self, name, n_units, tau_x, act_fx="relu", integration_type="euler", batch_size=1, sigma_pre=0.1,
+        sigma_post=0.1, leak_scale=1., shape=None, **kwargs
     ):
         super().__init__(name, **kwargs)
 
 
         self.tau_x = tau_x
-        self.sigma_rec = sigma_rec ## a "resistance" scaling factor
+        self.sigma_pre = sigma_pre ## a pre-rectification scaling factor
+        self.sigma_post = sigma_post ## a post-rectification scaling factor
         self.leak_scale = leak_scale ## the leak scaling factor (most appropriate default is 1)
 
         ## integration properties
@@ -89,13 +98,17 @@ def __init__(
         self.j_input = Compartment(restVals, display_name="Input Stimulus Current", units="mA") # electrical current
         self.j_recurrent = Compartment(restVals, display_name="Recurrent Stimulus Current", units="mA") # electrical current
         self.x = Compartment(restVals, display_name="Rate Activity", units="mA") # rate activity
-        self.r = Compartment(restVals, display_name="Rectified Rate Activity") # rectified output
+        self.r = Compartment(restVals, display_name="(Rectified) Rate Activity") # rectified output
+        self.r_prime = Compartment(restVals, display_name="Derivative of rate activity")
 
     @compilable
     def advance_state(self, t, dt):
-        ### run a step of integration over neuronal dynamics
-        key, skey = random.split(self.key.get(), 2)
-        eps = random.normal(skey, shape=self.x.get().shape) ## sample of unit distributional noise
+        ## run a step of integration over neuronal dynamics
+        ### Note: self.fx is the "rectifier" (rectification function)
+        key, skey = random.split(self.key.get(), 2)
+        eps_pre = random.normal(skey, shape=self.x.get().shape) ## pre-rectifier distributional noise
+        key, skey = random.split(key, 2) ## split from the carried key so eps_pre and eps_post differ
+        eps_post = random.normal(skey, shape=self.x.get().shape) ## post-rectifier distributional noise
 
         #x = _run_cell(dt, self.j_input.get(), self.j_recurrent.get(), self.x.get(), eps, self.tau_x, self.sigma_rec, integType=self.intgFlag)
         _step_fns = {
@@ -104,14 +117,16 @@ def advance_state(self, t, dt):
             2: step_rk4,
         }
         _step_fn = _step_fns[self.intgFlag] #_step_fns.get(self.intgFlag, step_euler)
-        params = (self.j_input.get(), self.j_recurrent.get(), eps, self.tau_x, self.sigma_rec, self.leak_scale)
+        params = (self.j_input.get(), self.j_recurrent.get(), eps_pre, self.tau_x, self.sigma_pre, self.leak_scale)
         _, x = _step_fn(0., self.x.get(), _dfz, dt, params) ## update state activation dynamics
-        r = self.fx(x) ## calculate rectified / post-activation function value(s)
+        r = self.fx(x) + (eps_post * self.sigma_post) ## calculate (rectified) activity rates; f(x)
+        r_prime = self.dfx(x) ## calculate local deriv of activity rates; f'(x)
 
         ## set compartments to next state values in accordance with dynamics
-        self.key.set(key)
+        self.key.set(key) ## carry noise key over transition (to next state of component)
         self.x.set(x)
         self.r.set(r)
+        self.r_prime.set(r_prime)
 
     @compilable
     def reset(self):
@@ -123,6 +138,7 @@ def reset(self):
         self.j_recurrent.set(restVals)
         self.x.set(restVals)
         self.r.set(restVals)
+        self.r_prime.set(restVals)
 
     @classmethod
     def help(cls): ## component help function
@@ -142,7 +158,7 @@ def help(cls): ## component help function
             "n_units": "Number of neuronal cells to model in this layer",
             "batch_size": "Batch size dimension of this component",
             "tau_x": "State time constant",
-            "sigma_rec": "The non-zero degree/scale of noise to inject into this neuron"
+            "sigma_pre": "The non-zero degree/scale of (pre-rectification) noise to inject into this neuron"
         }
         info = {cls.__name__: properties,
                 "compartments": compartment_props,

ngclearn/components/neurons/graded/rateCell.py

Lines changed: 1 addition & 1 deletion
@@ -226,7 +226,7 @@ def advance_state(self, dt):
         ## self.pressure <-- "top-down" expectation / contextual pressure
         ## self.current <-- "bottom-up" data-dependent signal
         dfx_val = self.dfx(z)
-        j = _modulate(j, dfx_val)
+        j = _modulate(j, dfx_val) ## TODO: make this optional (for NGC circuit dynamics)
         j = j * self.resist_scale
         tmp_z = _run_cell(
             dt, j, j_td, z, self.tau_m, leak_gamma=self.priorLeakRate, integType=self.intgFlag,

ngclearn/components/synapses/denseSynapse.py

Lines changed: 10 additions & 2 deletions
@@ -36,14 +36,20 @@ class DenseSynapse(JaxComponent): ## base dense synaptic cable
         p_conn: probability of a connection existing (default: 1.); setting
             this to < 1 and > 0. will result in a sparser synaptic structure
             (lower values yield sparse structure)
+
+        mask: if non-None, a (multiplicative) mask is applied to this synaptic weight matrix
     """
 
     def __init__(
-        self, name, shape, weight_init=None, bias_init=None, resist_scale=1., p_conn=1., batch_size=1, **kwargs
+        self, name, shape, weight_init=None, bias_init=None, resist_scale=1., p_conn=1., mask=None, batch_size=1,
+        **kwargs
     ):
         super().__init__(name, **kwargs)
 
         self.batch_size = batch_size
+        self.mask = 1.
+        if mask is not None:
+            self.mask = mask
 
         ## Synapse meta-parameters
         self.shape = shape
@@ -79,7 +85,9 @@ def __init__(
 
     @compilable
     def advance_state(self):
-        self.outputs.set((jnp.matmul(self.inputs.get(), self.weights.get()) * self.resist_scale) + self.biases.get())
+        weights = self.weights.get()
+        weights = weights * self.mask
+        self.outputs.set((jnp.matmul(self.inputs.get(), weights) * self.resist_scale) + self.biases.get())
 
     @compilable
     def reset(self):
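Given the new `mask` argument, a brief usage sketch (the constructor signature is taken from the diff; the import path mirrors the file's location in the tree and the mask values are illustrative):

```python
from jax import numpy as jnp
from ngclearn.components.synapses.denseSynapse import DenseSynapse

## build a 4x3 dense synapse whose upper-left block is silenced by a fixed
## multiplicative mask; mask values of 0. sever a connection while 1. keeps it
mask = jnp.ones((4, 3)).at[:2, :2].set(0.)
syn = DenseSynapse(name="W1", shape=(4, 3), resist_scale=1., p_conn=1., mask=mask)
```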
