The input/output signals used to train the RNN can be any time-series data. Let $X$ be the input signal and $Y$ be the output signal. $X$ should be of shape `(n_timesteps, batch_size, input_dim)` and $Y$ should be of shape `(n_timesteps, batch_size, output_dim)`. These signals could represent many cognitive tasks, such as working memory, decision making, or motor control. Here we use a simple sine wave as an example.
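As a concrete illustration, here is a minimal sketch that builds such a pair of signals with PyTorch. The shapes follow the convention above; using the one-step-shifted input as the target is just one plausible choice for a demo task, not the library's prescribed setup:

```python
import math
import torch

n_timesteps, batch_size, input_dim, output_dim = 100, 32, 1, 1

# One period of a sine wave sampled at n_timesteps points.
t = torch.linspace(0, 2 * math.pi, n_timesteps)
wave = torch.sin(t)

# X: (n_timesteps, batch_size, input_dim), the same wave for every batch element.
X = wave.reshape(n_timesteps, 1, 1).repeat(1, batch_size, input_dim)

# Y: (n_timesteps, batch_size, output_dim); here the target is the input
# shifted by one step (an arbitrary choice for illustration).
Y = torch.roll(X, shifts=-1, dims=0)

print(X.shape, Y.shape)  # torch.Size([100, 32, 1]) torch.Size([100, 32, 1])
```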
`_collection_2/parameters.md` (17 additions, 19 deletions)

@@ -12,16 +12,12 @@ These parameters primarily determine the structure of the network.
<div class="table-wrapper" markdown="block">

-|Parameter|Default|Type|Description|
+|Parameter|Default|Type|Description|
|:-:|:-:|:-:|:-:|
-| input_dim | 1 |`int`| Input dimension |
-| output_dim | 1 |`int`| Output dimension |
-| hidden_size | 100 |`int`| Number of hidden nodes |
-| scaling | 1.0 |`float`| Scaling factor for the hidden weights; it scales the hidden weights by $\frac{scaling}{\sqrt{N_{hid}}}$. Has no effect when the HiddenLayer distribution is `uniform`. |
-| self_connections | False |`boolean`| Whether a neuron can connect to itself |
-| activation |`relu`|`string`| Activation function; can be `relu`, `tanh`, `sigmoid`, or `retanh` |
-| layer_distributions |`uniform`|`string`/`list`| Layer distributions. Either a `string` or a `list` of three elements; each element must be `uniform`, `normal`, or `zero`. A `string` is broadcast to all layers. A `list` must match the number of layers in the network and contain only valid distribution values. |
-| layer_biases |`True`|`boolean` or `list`| Whether to use bias in each layer. Either a `boolean` or a `list` of three `boolean`s. If a list is given, its length must match the number of layers in the network and it must contain only `boolean` values. |
+| dims | [1, 100, 1] |`list`| Dimensions of the network |
+| activation | 'relu' |`string`| Activation function; can be 'relu', 'tanh', 'sigmoid', or 'retanh' |
+| biases |`None`|`None`, `string`, or `list`| Whether to use a bias in each layer. A single value is broadcast to a list of three values, each of which can be `None` (no bias); 'zero' or 0 (bias initialized to 0 but trainable); 'normal' (bias initialized from a normal distribution); or 'uniform' (bias initialized from a uniform distribution). If a list of three values is passed, each entry can be any of the above or a numpy array/torch tensor specifying the bias directly. |
+| weights | 'uniform' |`string` or `list`| Distribution of the weights for each layer. A single string is broadcast to a list of three strings. Possible values: 'normal' (weights initialized from a normal distribution) and 'uniform' (weights initialized from a uniform distribution). If a list of three values is passed, each entry can be any of the above or a numpy array/torch tensor specifying the weights directly. |

</div>
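For concreteness, a minimal sketch of a constructor call using the new parameters above. Only `dims`, `activation`, `biases`, and `weights` come from the table; the `CTRNN` import path is an assumption:

```python
from nn4n.model import CTRNN  # import path is an assumption

# dims = [input_dim, hidden_size, output_dim]
rnn = CTRNN(
    dims=[1, 100, 1],
    activation='relu',   # one of 'relu', 'tanh', 'sigmoid', 'retanh'
    biases='zero',       # broadcast to all three layers; starts at 0, trainable
    weights='uniform',   # broadcast to all three layers
)
```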
@@ -41,16 +37,17 @@ These parameters primarily determine the training process of the network.

</div>

-# Constraint Parameters
-These parameters primarily determine the constraints of the network. By default, the network is initialized using the most lenient constraints, i.e., no constraints are enforced.
+# Mask Parameters
+When modeling the brain with neural networks, both the connections between neurons (synapses) and the neurons' non-linear activation functions are crucial components. Synapses in particular provide numerous degrees of freedom within the network. The connectivity matrix determines the network's structure, while various properties of synapses, such as their plasticity, whether they are excitatory or inhibitory, their strength, and the potential for new synapses to form, add further layers of complexity and control. Here, we use masks to manage these characteristics; a usage sketch follows the table below.
<div class="table-wrapper" markdown="block">

-| Parameter | Default | Type | Description |
-|:-:|:-:|:-:|:-:|
-| positivity_constraints | False |`boolean`/`list`| Whether to enforce Dale's law. Either a `boolean` or a `list` of three `boolean`s. If a list is given, its elements correspond to the InputLayer, HiddenLayer, and ReadoutLayer, from first to last. |
-| sparsity_constraints | True |`boolean`/`list`| Whether a neuron can grow new connections. See [constraints and masks](#constraints-and-masks). If it's a list, it must have precisely three elements. Note: this must be set even if your mask is sparse; otherwise new connections will still be generated. |
-| layer_masks |`None` or `list`|`list` of `np.ndarray`| Layer masks, used when `sparsity_constraints`/`positivity_constraints` is set to `True`. From first to last, the list elements correspond to the masks for the Input-Hidden, Hidden-Hidden, and Hidden-Readout weights, respectively. Each mask must have the same dimensions as the corresponding weight matrix. See [constraints and masks](#constraints-and-masks) for details. |
+| Parameter | Default | Type | Description |
+|:-:|:-:|:-:|:-:|
+| sparsity_masks | None |`None` or `list`| Whether to use `sparsity_masks`. A single `None` is broadcast to a list of three `None`s. If a list of three values is passed, each value can be either `None` or a numpy array/torch tensor specifying the corresponding sparsity mask. |
+| ei_masks | None |`None` or `list`| Whether to use `ei_masks`. A single `None` is broadcast to a list of three `None`s. If a list of three values is passed, each value can be either `None` or a numpy array/torch tensor specifying the corresponding excitatory/inhibitory mask. |
+| plasticity_masks | None |`None` or `list`| Whether to use `plasticity_masks`. A single `None` is broadcast to a list of three `None`s. If a list of three values is passed, each value can be either `None` or a numpy array/torch tensor specifying the corresponding plasticity mask. |
+| synapse_growth_masks | None |`None` or `list`| Whether to use `synapse_growth_masks`. A single `None` is broadcast to a list of three `None`s. If a list of three values is passed, each value can be either `None` or a numpy array/torch tensor that directly specifies the probability of growing a synapse at the given location if no synapse is present. |

</div>
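A sketch of how such masks might be constructed and passed in, reusing the hedged `CTRNN` call from above. The parameter names come from the table; the mask shapes and the excitatory/inhibitory sign convention are assumptions:

```python
import numpy as np

hidden_size = 100

# One mask per weight matrix: Input-Hidden, Hidden-Hidden, Hidden-Readout.
# Shapes are assumed to mirror the corresponding weight matrices.
sparsity_hidden = (np.random.rand(hidden_size, hidden_size) < 0.2).astype(int)

# Excitatory/inhibitory mask; +1/-1 entries per presynaptic neuron is an
# assumed convention. Here the last 20 neurons are inhibitory.
ei_hidden = np.ones((hidden_size, hidden_size))
ei_hidden[:, 80:] = -1

rnn = CTRNN(
    dims=[1, hidden_size, 1],
    sparsity_masks=[None, sparsity_hidden, None],  # constrain only Hidden-Hidden
    ei_masks=[None, ei_hidden, None],
)
```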
@@ -104,11 +101,12 @@ Whether a neuron can connect to itself.

|[`save()`]({{ site.baseurl }}/rnn/methods/#ctrnnsave-) | Save the network to a given path. |
|[`load()`]({{ site.baseurl }}/rnn/methods/#ctrnnload-) | Load the network from a given path. |
-|[`print_layers()`]({{ site.baseurl }}/rnn/methods/#ctrnnprint_layers-) | Print the network architecture and layer-by-layer specifications |
+|[`print_layers()`]({{ site.baseurl }}/rnn/methods/#ctrnnprint_layers-) | Print the network architecture and layer-by-layer specifications. |
|[`train()`]({{ site.baseurl }}/rnn/methods/#ctrnntrain-) | Set the network to training mode; training will be performed and constraints will be enforced. During training, the recurrent noises (preact_noise and postact_noise) won't be added. |
-|[`eval()`]({{ site.baseurl }}/rnn/methods/#ctrnneval-) | Set the network to evaluation mode, no training will be performed and no constraints will be enforced |
+|[`eval()`]({{ site.baseurl }}/rnn/methods/#ctrnneval-) | Set the network to evaluation mode; no training will be performed and no constraints will be enforced. |
+|[`layers`]({{ site.baseurl }}/rnn/methods/#ctrnnlayers) | Return a list of the network layers. |
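Continuing the hedged `rnn` sketch from above, the mode-switching methods might be used like this; the loop body and the file path are placeholders:

```python
rnn.train()            # constraints enforced; preact/postact noise not added
# ... optimization loop over (X, Y) batches goes here ...

rnn.eval()             # evaluation mode: no training, constraints not enforced
rnn.print_layers()     # inspect architecture and per-layer specifications
rnn.save("ctrnn.pth")  # path is a placeholder
```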
-where $N_{batch}$ is the batch size, $T$ is the number of time steps, $N_{out}$ is the number of output neurons, and $fr_{btn}$ is the firing rate of neuron $n$ at time $t$ in the $b$-th batch.
+- $B$: number of batches.
+- $T$: number of time steps.
+- $N_{hid}$: number of hidden neurons.
+- $r_{b,t,i}$: firing rate of the $i$-th neuron at time $t$ in the $b$-th batch.

# Firing Rate Constraint (SD)

**Lambda Key:** `lambda_fr_sd`

-Regularize the SD of the HiddenLayer firing rates such that all neurons will fire at approximately the same rate. The firing-rate SD loss function is defined as:
+Regularize the standard deviation of the firing rate.

-where $N_{batch}$ is the batch size, $T$ is the number of time steps, $y_{bt}$ is the predicted firing rate of the neuron at time $t$ in the $b$-th batch, and $f_{bt}$ is the ground-truth firing rate of the neuron at time $t$ in the $b$-th batch.
+- $B$: number of batches.
+- $T$: number of time steps.
+- $N_{hid}$: number of hidden neurons.
+- $r_{b,t,i}$: firing rate of the $i$-th neuron at time $t$ in the $b$-th batch.
+- $\mu$: mean firing rate.

# Firing Rate Constraint (CV)

**Lambda Key:** `lambda_fr_cv`

-Regularize the firing rate of a single neuron such that all neurons will fire at approximately the same rate. The firing rate regularization loss function is defined as:
+Regularize the coefficient of variation of the firing rate.
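Read together, one plausible formalization of these two penalties in terms of the symbols above is the following sketch; it is consistent with the listed definitions but is not necessarily the library's exact implementation:

$$\bar{r}_i = \frac{1}{BT}\sum_{b=1}^{B}\sum_{t=1}^{T} r_{b,t,i}, \qquad \mu = \frac{1}{N_{hid}}\sum_{i=1}^{N_{hid}} \bar{r}_i$$

$$\mathcal{L}_{\text{fr\_sd}} = \sqrt{\frac{1}{N_{hid}}\sum_{i=1}^{N_{hid}}\left(\bar{r}_i - \mu\right)^2}, \qquad \mathcal{L}_{\text{fr\_cv}} = \frac{\mathcal{L}_{\text{fr\_sd}}}{\mu}$$

The SD penalty measures the spread of per-neuron mean rates around the population mean; the CV penalty is the same spread normalized by that mean.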
`_collection_5/base_struct.md` (1 addition, 1 deletion)
@@ -34,7 +34,7 @@ Methods that are shared by all structures.
|[`get_readout_idx()`]({{ site.baseurl }}/mask/methods/#maskget_readout_idx-) | Get indices of neurons that are read out from. |
|[`get_non_input_idx()`]({{ site.baseurl }}/mask/methods/#maskget_non_input_idx-) | Get indices of neurons that don't receive input. |
|[`visualize()`]({{ site.baseurl }}/mask/methods/#maskvisualize-) | Visualize the generated masks. |
-|[`masks()`]({{ site.baseurl }}/mask/methods/#maskmasks-) | Return a list of np.ndarray masks. It will be of length 3, where the first element is the input mask, the second is the hidden mask, and the third is the readout mask. For structures that do not specify a certain mask, that element will be an all-ones matrix. |
+|[`get_masks()`]({{ site.baseurl }}/mask/methods/#maskget_masks-) | Return a list of np.ndarray masks. It will be of length 3, where the first element is the input mask, the second is the hidden mask, and the third is the readout mask. For structures that do not specify a certain mask, that element will be an all-ones matrix. |
|[`get_areas()`]({{ site.baseurl }}/mask/methods/#maskget_areas-) | Get a list of area names. |
|[`get_area_idx()`]({{ site.baseurl }}/mask/methods/#maskget_area_idx-) | Get indices of neurons in a specific area. The parameter `area` can be either a string from `get_areas()` or an index of the area. |
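A sketch of how these shared methods might be used. The method names come from the table; the concrete structure class (`MultiArea` here) and its constructor arguments are placeholders:

```python
from nn4n.structure import MultiArea  # class name is a placeholder

struct = MultiArea()  # constructor arguments omitted; library-specific

# Three np.ndarray masks: input, hidden, readout.
input_mask, hidden_mask, readout_mask = struct.get_masks()

print(struct.get_areas())            # e.g. ['area_1', 'area_2']
idx = struct.get_area_idx('area_1')  # accepts an area name or its index
struct.visualize()                   # plot the generated masks
```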
`_collection_6/v1.1.0.md` (6 additions, 5 deletions)

@@ -16,10 +16,11 @@ order: 2

#### Todos:
- [ ] Add custom constraint wrapper.
-- [ ] Change `visualize()` in the `nn4n.structure` to `plot_masks()`.
-- [ ] Change `masks()` in the `nn4n.structure` to `get_masks()`.
-- [ ] Change the loss to functional forms (e.g. function(model) -> loss).
+- [x] Change `visualize()` in the `nn4n.structure` to `plot_masks()`.
+- [x] Change `masks()` in the `nn4n.structure` to `get_masks()`.
+- [ ] Change the loss to functional forms (e.g. function(model) -> loss). It might be better to also have a function that gets the specific model attributes (i.e., an interface, since the model might change).
- [ ] Rename `criterion` to `constraint`.
- [ ] Change quickstart example to a more complex example.
-- [ ] Change `BaseStruct` to `BaseMask`.
-- [ ] Change top icons to actual links.
+- [x] Change `BaseStruct` to `BaseMask`.
+- [ ] Change top icons to actual links.
+- [ ] The EIRNN example needs to be changed to the actual example.