Commit cfdfd09

done migrating previous docs
v1.1.0 almost complete and created workflow

1 parent 7a18cd5 commit cfdfd09

37 files changed: +62308 -137 lines changed

.DS_Store

0 Bytes (binary file not shown)

.github/workflows/jekyll.yml

Lines changed: 64 additions & 0 deletions

@@ -0,0 +1,64 @@
+# This workflow uses actions that are not certified by GitHub.
+# They are provided by a third-party and are governed by
+# separate terms of service, privacy policy, and support
+# documentation.
+
+# Sample workflow for building and deploying a Jekyll site to GitHub Pages
+name: Deploy Jekyll site to Pages
+
+on:
+  # Runs on pushes targeting the default branch
+  push:
+    branches: ["main"]
+
+  # Allows you to run this workflow manually from the Actions tab
+  workflow_dispatch:
+
+# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
+permissions:
+  contents: read
+  pages: write
+  id-token: write
+
+# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
+# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
+concurrency:
+  group: "pages"
+  cancel-in-progress: false
+
+jobs:
+  # Build job
+  build:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v4
+      - name: Setup Ruby
+        uses: ruby/setup-ruby@8575951200e472d5f2d95c625da0c7bec8217c42 # v1.161.0
+        with:
+          ruby-version: '3.2.2' # Not needed with a .ruby-version file
+          bundler-cache: true # runs 'bundle install' and caches installed gems automatically
+          cache-version: 0 # Increment this number if you need to re-download cached gems
+      - name: Setup Pages
+        id: pages
+        uses: actions/configure-pages@v5
+      - name: Build with Jekyll
+        # Outputs to the './_site' directory by default
+        run: bundle exec jekyll build --baseurl "${{ steps.pages.outputs.base_path }}"
+        env:
+          JEKYLL_ENV: production
+      - name: Upload artifact
+        # Automatically uploads an artifact from the './_site' directory by default
+        uses: actions/upload-pages-artifact@v3
+
+  # Deployment job
+  deploy:
+    environment:
+      name: github-pages
+      url: ${{ steps.deployment.outputs.page_url }}
+    runs-on: ubuntu-latest
+    needs: build
+    steps:
+      - name: Deploy to GitHub Pages
+        id: deployment
+        uses: actions/deploy-pages@v4

README.md

Lines changed: 8 additions & 0 deletions

@@ -14,5 +14,13 @@ Artificial Neural Networks (ANNs) trained with backpropagation, despite being bi
 
 This project implements Recurrent Neural Networks (RNNs) and Multilayer Perceptrons (MLPs) designed for parametrized and granular control over network modularity, synaptic plasticity, and other constraints, to enable biologically feasible modeling of brain regions.
 
+[GitHub Repository](https://github.com/NN4Neurosim/nn4n)
+
 #### Acknowledgement
 Immense thanks to [Dr. Christopher J. Cueva](https://www.metaconscious.org/author/chris-cueva/) for his mentorship in developing this project. This project could not have been done without his invaluable help.
+
+#### License
+This project is licensed under the terms of the MIT license. See the [LICENSE](https://opensource.mit.edu/license) file for details.
+
+#### Template
+The project documentation is based on [Jekyll](https://jekyllrb.com/) and uses the [Jekyll Gitbook](https://github.com/sighingnow/jekyll-gitbook) theme developed by [sighingnow](https://github.com/sighingnow).

_collection_1/installation.md

Lines changed: 4 additions & 4 deletions

@@ -7,18 +7,18 @@ layout: post
 order: 1
 ---
 
-### Install using pip
+# Install using pip
 ```
 pip install nn4n
 ```
 
-### Install from GitHub
+# Install from Source
 #### Clone the repository
 ```
-git clone https://github.com/zhaozewang/NN4Neurosci.git
+git clone https://github.com/NN4Neurosim/nn4n.git
 ```
 #### Navigate to the NN4Neurosci directory
 ```
-cd NN4Neurosci/
+cd nn4n/
 pip install -e .
 ```
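
Either install route can be verified with a quick import check. A minimal sketch follows; whether `nn4n` exposes a `__version__` attribute is an assumption, so the snippet guards for it:

```python
# Sanity-check the installation; __version__ is assumed, hence the guard
import nn4n

print(getattr(nn4n, "__version__", "installed (no __version__ attribute exposed)"))
```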

_collection_1/quickstart.md

Lines changed: 58 additions & 0 deletions

@@ -7,4 +7,62 @@
 order: 2
 ---
 
+## Initialize a Continuous-Time RNN
+```python
+import torch
+from nn4n.model import CTRNN
 
+rnn = CTRNN(input_dim=1, hidden_size=10, output_dim=1)
+optimizer = torch.optim.Adam(rnn.parameters(), lr=0.001)
+```
+
+
+## Define a Task
+The input and output signals used to train the RNN can be any time-series data. Let $X$ be the input signal and $Y$ be the output signal. $X$ should be of shape `(n_timesteps, batch_size, input_dim)` and $Y$ should be of shape `(n_timesteps, batch_size, output_dim)`. These signals can represent many cognitive tasks, such as working memory, decision making, or motor control. Here we use a simple sine wave as an example.
+```python
+import numpy as np
+import matplotlib.pyplot as plt
+
+# predict the next timestep of a sine wave
+inputs = np.sin(np.linspace(0, 10, 1000))
+inputs = torch.from_numpy(inputs).float().unsqueeze(1).unsqueeze(1)
+labels = inputs[1:]
+inputs = inputs[:-1]
+
+plt.plot(inputs.squeeze(1).squeeze(1).numpy())
+plt.plot(labels.squeeze(1).squeeze(1).numpy())
+plt.show()
+```
+
+<p align="center">
+  <img src="{{ '/assets/images/results/sin_wave.png' | relative_url }}" width="400" alt="Sin Wave">
+</p>
+
+## Train the RNN
+```python
+losses = []
+for epoch in range(500):
+    outputs, states = rnn(inputs)
+    loss = torch.nn.MSELoss()(outputs, labels)
+    optimizer.zero_grad()
+    loss.backward()
+    optimizer.step()
+    losses.append(loss.item())
+
+    if epoch % 50 == 0:
+        print(f'Epoch {epoch} Loss {loss.item()}')
+```
+
+##### Output:
+```
+Epoch 0 Loss 0.3866065442562103
+Epoch 50 Loss 0.20944912731647491
+Epoch 100 Loss 0.03360378369688988
+Epoch 150 Loss 0.016431370750069618
+Epoch 200 Loss 0.013084247708320618
+Epoch 250 Loss 0.010527823120355606
+Epoch 300 Loss 0.007640092633664608
+Epoch 350 Loss 0.005286946427077055
+Epoch 400 Loss 0.003560091834515333
+Epoch 450 Loss 0.0028597351629287004
+```
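
With training done, a quick way to sanity-check the fit is to run the network once more and overlay its predictions on the targets. This is a minimal sketch continuing from the quickstart code above; it relies only on the `(outputs, states)` return signature already used in the training loop:

```python
import matplotlib.pyplot as plt
import torch

# Run the trained RNN once without tracking gradients
with torch.no_grad():
    predictions, _ = rnn(inputs)

# Overlay predictions on the target sine wave
plt.plot(labels.squeeze(1).squeeze(1).numpy(), label="target")
plt.plot(predictions.squeeze(1).squeeze(1).numpy(), label="prediction")
plt.legend()
plt.show()
```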

_collection_2/eirnn.md

Lines changed: 2 additions & 3 deletions

@@ -1,13 +1,12 @@
 ---
-title: Excitatory-Inhibitory RNN
+title: 'Example: Excitatory-Inhibitory RNN'
 author: Zhaoze Wang
 date: 2024-06-16
 category: docs
 layout: post
-order: 4
+order: 11
 ---
 
-To train a Excitatory-Inhibitory Constrained RNN (EIRNN)
 ```python
 import nn4n
 from nn4n.model import CTRNN
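
The diff preview is truncated here. For orientation, constructing an E/I-constrained CTRNN might look like the sketch below; the `positivity_constraints` keyword is hypothetical (this commit does not show the actual constructor arguments), so consult the library's API reference for the real parameter names:

```python
from nn4n.model import CTRNN

# Hypothetical constructor call -- positivity_constraints is an illustrative
# guess at how sign constraints might be specified, not a documented nn4n parameter
rnn = CTRNN(
    input_dim=1,
    hidden_size=100,
    output_dim=1,
    positivity_constraints=[False, True, False],  # e.g. sign-constrain hidden weights only
)
```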

_collection_2/intro.md renamed to _collection_2/introduction.md

Lines changed: 19 additions & 8 deletions

@@ -1,17 +1,17 @@
 ---
-title: RNN Introduction
+title: Introduction
 author: Zhaoze Wang
 date: 2024-06-14
 category: docs
 layout: post
 order: 1
 ---
 
-## Mathematical Formulation
+# Mathematical Formulation
 
 The terms 'nodes', 'neurons', and 'units' are used interchangeably when referring to nodes in the RNN/MLP models, whereas biological neurons are referred to only as neurons.
 
-### Simplified Network Model
+## Simplified Network Model
 
 The firing rate of a single neuron is described by the following equation:
 
@@ -36,7 +36,7 @@ $$ \mathbf{r}^t = f\left( \mathbf{v}^t \right) $$
 - $ \mathbf{W} $: Weight matrix.
 - $ \mathbf{v}^{t-1} $: Vector of membrane potentials for all neurons at time $ t-1 $.
 
-### RNN Dynamics
+## RNN Dynamics
 At every timestep, we assume that the neurons in the modeled brain region receive external inputs and signals from their neighboring neurons. These signals are then non-linearly integrated to produce an output. The dynamics of the Recurrent Neural Network (RNN) can be described by the following differential equation:
 
 $$\tau \frac{d \mathbf{v}}{dt} = -\mathbf{v}^t + \mathbf{W}_{hid} f(\mathbf{v}^t) + \mathbf{W}_{in} \mathbf{u}^t + \mathbf{b}_{hid} + \epsilon_t$$
@@ -62,7 +62,18 @@
 \Delta \mathbf{v}^t = \gamma (-\mathbf{v}^t + \mathbf{W}_{hid} f(\mathbf{v}^t) + \mathbf{W}_{in} \mathbf{u}^t + \mathbf{b}_{hid} + \epsilon_{t}) + \xi_{t}
 $$
 
-## Vanilla CTRNN
+
+# Model Structure
+```
+├── CTRNN
+│   ├── RecurrentLayer
+│   │   ├── InputLayer (class LinearLayer)
+│   │   ├── HiddenLayer
+│   ├── Readout_areas (class LinearLayer)
+```
+
+
+# Vanilla CTRNN
 A simplistic CTRNN contains three layers: an input layer, a hidden layer, and a readout layer, as depicted below.
 
 <p align="center">
@@ -71,13 +82,13 @@ A simplistic CTRNN contains three layers, an input layer, a hidden layer, and an
 
 The yellow nodes represent neurons that project input signals to the hidden layer, the green neurons are in the hidden layer, and the purple nodes represent neurons that read out from the hidden layer neurons. Both input and readout neurons are 'imagined' to be there, i.e., they only project or receive signals and therefore do not have activations or internal states.
 
-## Excitatory-Inhibitory Constrained CTRNN
-The implementation of CTRNN also supports Excitatory-Inhibitory constrained continuous-time RNN (EIRNN). EIRNN is proposed by H. Francis Song, Guangyu R. Yang, and Xiao-Jing Wang in [Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework](https://doi.org/10.1371/journal.pcbi.1004792)
+# Excitatory-Inhibitory Constrained CTRNN
+The implementation of CTRNN also supports an Excitatory-Inhibitory constrained continuous-time RNN (EIRNN), similar to the one proposed by H. Francis Song, Guangyu R. Yang, and Xiao-Jing Wang in [Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework](https://doi.org/10.1371/journal.pcbi.1004792).
 
 A visual illustration of the EIRNN is shown below.
 
 <p align="center">
   <img src="{{ '/assets/images/basics/EIRNN_structure.png' | relative_url }}" width="400" alt="Description of Excitatory-Inhibitory RNN Structure">
 </p>
 
-The yellow nodes denote nodes in the input layer. The middle circle denotes the hidden layer. There are blue nodes and red nodes, representing inhibitory neurons and excitatory neurons, respectively. The depicted network has an E/I ratio of 4/1. The purple nodes are ReadoutLayer neurons.
+The yellow nodes denote nodes in the input layer. The middle circle denotes the hidden layer. There are blue nodes and red nodes, representing inhibitory neurons and excitatory neurons, respectively. The depicted network has an E/I ratio of 4/1. The purple nodes are ReadoutLayer neurons.
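
The discretized update $\Delta \mathbf{v}^t = \gamma (-\mathbf{v}^t + \mathbf{W}_{hid} f(\mathbf{v}^t) + \mathbf{W}_{in} \mathbf{u}^t + \mathbf{b}_{hid} + \epsilon_t) + \xi_t$ added in the diff above maps directly to code. Below is a minimal sketch of one Euler step; the variable names mirror the math, not nn4n's internals:

```python
import numpy as np

def euler_step(v, u, W_hid, W_in, b_hid, gamma, noise_std=0.0, f=np.tanh):
    """One discretized CTRNN update mirroring the equation above.

    v: membrane potentials (hidden_size,); u: external input (input_dim,);
    gamma = dt / tau is the integration constant from the continuous dynamics.
    """
    eps = noise_std * np.random.randn(*v.shape)  # pre-activation noise epsilon_t
    xi = noise_std * np.random.randn(*v.shape)   # post-integration noise xi_t
    dv = gamma * (-v + W_hid @ f(v) + W_in @ u + b_hid + eps) + xi
    return v + dv
```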
