A 3D brain that sees, learns, defends, and dreams — 35 cognitive layers, zero backprop, browser-native.
Live: brainsnn.com — paste any tweet, see which feeling it installs.
BrainSNN is a 3D neuromorphic brain viewer that runs entirely in your browser. Seven anatomical regions, ten plastic pathways, and 35 layered cognitive features stacked on top — a Cognitive Firewall, a self-evolving rule engine, multimodal RAG, an affective decoder, a neurochemistry sandbox, an idle Dream Mode, an MCP bridge to your AI agents, and more.
Drop a paragraph in. Watch the amygdala glow. Slide cortisol up. Watch the hippocampus drop. Open Brain Evolve. Watch the firewall grow new rules to catch the manipulation it just missed. Open Dream Mode. Walk away. Come back to a brain that's been consolidating its weights while idle.
No backprop. No retraining. No server required for the main demo — TRIBE v2, Gemma 4, and the WebSocket sync are optional upgrades, each behind one env var.
- Frontend: runs entirely in your browser. `npm install && npm run dev` — done. See Run it locally for the production preview path.
- TRIBE v2 backend: optional. Local: `cd brainsnn-r3f-app/server && uvicorn api:app --reload`. Cloud configs (Fly.io / Railway / Docker) are checked in for when you want to host it remotely — see `brainsnn-r3f-app/server/README.md`.
The full feature catalog lives in .ai-memory/MEMORY.md. A curated tour:
```mermaid
flowchart LR
    classDef browser fill:#0b1224,stroke:#5ad4ff,stroke-width:2px,color:#e6f1ff
    classDef optional fill:#1a1f2e,stroke:#7c8aa1,stroke-dasharray:5 5,color:#cbd5e1
    classDef external fill:#13231a,stroke:#5ee69a,color:#dcfce7

    subgraph Browser["Browser (zero-install)"]
        direction TB
        ui[React 18 + Vite UI<br/>46 panels]
        r3f[React Three Fiber<br/>3D brain + neural flow]
        layers["35 cognitive layers<br/>Firewall · Evolve · RAG · Dream · etc"]
        embed[transformers.js<br/>MiniLM-L6 in-browser embeddings]
        mcp[MCP Bridge<br/>14 JSON-RPC tools]
        ui --> r3f
        ui --> layers
        layers --> embed
        layers --> mcp
    end

    tribe["FastAPI + TRIBE v2<br/>(real fMRI predictions)<br/>Fly.io / Railway"]
    gemma["Gemma 4 endpoint<br/>(deep multimodal analysis)<br/>Google AI Studio / Ollama / vLLM"]
    sync["WebSocket relay<br/>(multi-user live sync)"]
    agents["Claude Code / Codex agents<br/>via stdio MCP server"]

    layers -. VITE_TRIBE_API .-> tribe
    layers -. VITE_GEMMA_API_ENDPOINT .-> gemma
    layers -. VITE_SYNC_WS_URL .-> sync
    mcp <-. WebSocket relay .-> agents

    class Browser browser
    class ui,r3f,layers,embed,mcp browser
    class tribe,gemma,sync optional
    class agents external
```
The browser column ships everything in the box. Every external arrow is gated by an env var — leave them blank and the corresponding layer falls back gracefully (TRIBE → STDP simulation, Gemma → regex scoring, sync → solo mode).
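That gating can be pictured as a tiny resolver. This is a hedged sketch only; `resolveBackends` and the mode names are illustrative, not the app's actual API:

```javascript
// Illustrative only: each optional backend is enabled when its env var is
// set; otherwise the layer falls back to its in-browser mode.
function resolveBackends(env) {
  return {
    fmri: env.VITE_TRIBE_API
      ? { mode: "tribe", url: env.VITE_TRIBE_API }
      : { mode: "stdp-simulation" },
    analysis: env.VITE_GEMMA_API_ENDPOINT
      ? { mode: "gemma", url: env.VITE_GEMMA_API_ENDPOINT }
      : { mode: "regex-scoring" },
    sync: env.VITE_SYNC_WS_URL
      ? { mode: "live", url: env.VITE_SYNC_WS_URL }
      : { mode: "solo" },
  };
}
```

With an empty env object every layer resolves to its local fallback, which is why a blank `.env` still yields a fully working demo.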
```bash
git clone https://github.com/slavazeph-coder/the-brain
cd the-brain/brainsnn-r3f-app
npm install    # or: npm ci (uses .npmrc for legacy-peer-deps)
npm run dev    # → http://localhost:5173
```

That's it. The 3D brain renders, the simulation loop ticks, all 35 layers are wired. No keys needed.
All variables are optional. The app runs in pure-frontend mode without any of them set.
| Variable | What it unlocks | Where to get it |
|---|---|---|
| `VITE_TRIBE_API` | TRIBE v2 fMRI predictions instead of STDP simulation | Run `brainsnn-r3f-app/server/` locally or deploy to Fly.io |
| `VITE_GEMMA_API_ENDPOINT` | Gemma 4 deep multimodal analysis (text, images, video, audio) | Google AI Studio, Ollama, or any OpenAI-compatible endpoint |
| `VITE_GEMMA_API_KEY` | Auth for the Gemma endpoint above | Same as above |
| `VITE_SYNC_WS_URL` | Multi-user live sync over WebSocket | Run any WebSocket relay; example schema in `LiveSyncPanel.jsx` |
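Because the Gemma variable accepts any OpenAI-compatible server, requests take the standard chat-completions shape. A hedged sketch (`buildGemmaRequest`, the model name, and the payload are illustrative; the app's real request shape lives in the source):

```javascript
// Illustrative request builder for an OpenAI-compatible endpoint.
function buildGemmaRequest(endpoint, apiKey, text) {
  return {
    url: `${endpoint.replace(/\/$/, "")}/v1/chat/completions`,
    headers: {
      "Content-Type": "application/json",
      // Only attach auth when a key is configured (local Ollama needs none).
      ...(apiKey ? { Authorization: `Bearer ${apiKey}` } : {}),
    },
    body: JSON.stringify({
      model: "gemma",
      messages: [{ role: "user", content: text }],
    }),
  };
}
```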
See brainsnn-r3f-app/.env.example for the copyable template.
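For `VITE_SYNC_WS_URL`, any relay that fans each message out to the other connected clients will do. A minimal sketch of that core behavior (the helper name and client shape are illustrative; the real message schema is in `LiveSyncPanel.jsx`):

```javascript
// Illustrative fan-out: forward a message to every open client except the
// sender. Returning the planned sends (instead of calling socket.send)
// keeps the logic pure; real server wiring would map over the result.
function broadcast(clients, senderId, message) {
  const payload = JSON.stringify(message);
  return clients
    .filter((c) => c.id !== senderId && c.readyState === 1) // 1 = OPEN
    .map((c) => ({ to: c.id, payload }));
}
```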
The 3D brain is a pure static SPA — no Node runtime, no server, no auth. Build it once, serve dist/ from anything.
```bash
cd brainsnn-r3f-app
npm install
npm run dev      # → http://localhost:5173
```

```bash
cd brainsnn-r3f-app
npm run build    # → dist/ (~1.4 MB, three.js chunked separately)
npm run preview  # → http://localhost:4173 — same bundle Vercel/Netlify would serve
```

The build output is just an `index.html` plus a few hashed JS / CSS chunks. Drop it behind any static webserver:
```bash
# Built-in Python — zero install
cd brainsnn-r3f-app/dist && python3 -m http.server 8080

# Caddy — auto-HTTPS for a public hostname
caddy file-server --root brainsnn-r3f-app/dist --listen :8080

# Nginx — drop in `try_files $uri /index.html;` for SPA routing
# root /srv/the-brain/brainsnn-r3f-app/dist;

# Tunnel a local server to a public URL on demand
cloudflared tunnel --url http://localhost:4173
# or: ngrok http 4173
```

SPA routing note: any host you pick should rewrite unknown paths to `/index.html`. Vite's `npm run preview` already does this; the nginx/caddy snippets above show how.
The Python/FastAPI server is fully optional — without it, the app runs in STDP simulation mode and every panel still works. When you want real fMRI predictions:
```bash
cd brainsnn-r3f-app/server
docker build -t brainsnn-tribe .
docker run -p 8642:8642 --rm brainsnn-tribe

# then in brainsnn-r3f-app/.env:
echo "VITE_TRIBE_API=http://localhost:8642" >> ../.env
```

Cloud-host configs (Fly.io, Railway) are checked in for later. Full backend docs: `brainsnn-r3f-app/server/README.md`.
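Without the backend, plasticity comes from the in-browser STDP simulation. For orientation, a textbook pair-based STDP update looks like this; it is a generic sketch with illustrative constants, not necessarily the exact rule the app implements:

```javascript
// Pair-based STDP: dt = tPost - tPre in ms. A pre-spike shortly before a
// post-spike (dt > 0) potentiates the synapse; the reverse order depresses
// it, and both effects decay exponentially with the spike-time gap.
function stdpDeltaW(dt, { aPlus = 0.01, aMinus = 0.012, tau = 20 } = {}) {
  return dt >= 0
    ? aPlus * Math.exp(-dt / tau)
    : -aMinus * Math.exp(dt / tau);
}
```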
```text
the-brain/
├── brainsnn-r3f-app/        ← the deployable: 35-layer 3D brain viewer
│   ├── src/
│   │   ├── components/      ← 46 React components, one per panel + brain scene
│   │   ├── utils/           ← simulation, embeddings, RAG, evolve, firewall, …
│   │   └── data/network.js  ← 7 regions × 10 pathways topology
│   ├── server/              ← FastAPI + TRIBE v2 (optional backend)
│   │   ├── api.py           ← /health · /scenarios · /predict
│   │   ├── Dockerfile       ← Python 3.11-slim + nilearn pre-warm
│   │   ├── fly.toml         ← Fly.io 4GB VM config
│   │   └── railway.toml     ← Railway alternative
│   ├── mcp-server/          ← Node stdio MCP bridge for Claude Code / Codex
│   └── .env.example         ← all 4 optional env vars documented
├── ui/
│   ├── brainsnn-site/       ← marketing landing page (GitHub Pages)
│   └── brainsnn-viewer/     ← alternate product-style viewer
├── agents/                  ← OpenClaw agent library (177 templates + 9-agent system)
├── xio_evolve/              ← XIO-Evolve Learn→Design→Experiment→Analyze pipeline
├── docs/
│   ├── screenshots/         ← 12 panel shots + demo GIF (used by this README)
│   └── architecture.mmd     ← Mermaid source for the diagram above
└── BRAINSNN_START_HERE.md   ← multi-surface explainer (3 apps in one repo)
```
- Frontend: React 18, Vite 5, React Three Fiber 8, Three.js 0.170, postprocessing 6, FFmpeg.wasm
- In-browser ML: transformers.js (`Xenova/all-MiniLM-L6-v2`, ~25 MB quantized), pure-JS Louvain community detection, BM25 + trigram Jaccard hybrid search
- Backend (optional): FastAPI, Uvicorn, Meta TRIBE v2, nilearn, NumPy
- Agent integration: Node stdio MCP server, WebSocket relay, 14 JSON-RPC tools
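The trigram-Jaccard half of that hybrid search is simple enough to sketch. Helper names here are illustrative and the BM25 half is omitted:

```javascript
// Split a string into a set of overlapping 3-character grams, padded so
// short words still yield trigrams.
function trigrams(text) {
  const s = `  ${text.toLowerCase()} `;
  const grams = new Set();
  for (let i = 0; i + 3 <= s.length; i++) grams.add(s.slice(i, i + 3));
  return grams;
}

// Jaccard similarity of two trigram sets: |A ∩ B| / |A ∪ B| in [0, 1].
function trigramJaccard(a, b) {
  const A = trigrams(a);
  const B = trigrams(b);
  let inter = 0;
  for (const g of A) if (B.has(g)) inter++;
  const union = A.size + B.size - inter;
  return union === 0 ? 0 : inter / union;
}
```

Character trigrams tolerate typos and partial matches that exact-token scoring like BM25 misses, which is the usual reason for combining the two.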
This repo is the joint workspace of Claude Code and Codex CLI, coordinated through .ai-memory/. The architecture and conventions live in .ai-memory/architecture.md and .ai-memory/conventions.md.
Issues and PRs welcome. Good first issues:
- A new manipulation category for the Cognitive Firewall + matching red-team corpus entries
- A new affect class for the 12-affect taxonomy with a Russell coordinate
- A new neurotransmitter preset (e.g. ketamine micro-dose, propofol)
- A new pre-computed scenario in brainsnn-r3f-app/server/scenarios/
MIT — see the per-file headers. Cannibalized work credited inline (GAIR-NLP/ASI-Evolve for Brain Evolve, HKUDS/RAG-Anything for Multimodal RAG, Meta facebookresearch/tribev2 for the fMRI backend, Xenova/transformers.js for in-browser embeddings).