Avalanche is an independent research program studying LLM cognition under pressure: how models build, revise, and abandon theories when the task resists immediate solution.
This repository is a working research lab, not a single submission archive.
The current public frontier is:
- V4.7.1 apparatus work on graveyard legibility, ontology traps, and intervention design
- the Generative Closure Fast Bench, a small public substrate for testing exact-posterior vs diagnostic-memory claims
- tokenizer and probe side-lines that remain useful as witnesses, controls, and negative results
Questions driving the current phase:
- What kind of epistemic object is the graveyard?
- Can a model read the shape of its own eliminations?
- Can that read change live search rather than only offline interpretation?
- How do thermodynamic activity, basin escape, and ontology change relate to one another?
Current empirical reads:
- The graveyard is structured and path-dependent, but currently more legible than usable.
- Thermodynamic activity measures search motion, not epistemic ascent.
- Score and ontology dissociate sharply.
- Read-only probes become useful once enough assembly exists.
- In the Fast Bench, exact knowledge is separated from diagnostic memory; graveyard and index surfaces are not the posterior itself.
Key documents:
- docs/PROJECT_STATE.md - compact public state of the research program
- docs/generative_closure/README.md - Fast Bench spec and Campaign 1 manifest bundle
- GENERATIVE_CLOSURE.md - March 2026 conceptual essay with April 2026 author note
- docs/ARCHITECTURE.md - apparatus design and memory surfaces
- docs/EXPERIMENT_INDEX.md - experiment map and witness runs
- Avalanche apparatus - Long-running oracle loops, graveyard formation, calorimetry, probe surfaces, and intervention design.
- Generative Closure Fast Bench - A binary cellular-automaton Drosophila for exact closure, witness-budgeted probing, and manifest-controlled hidden-rule campaigns.
- Gravity tokenizer and token probes - An earlier line of work around vocabulary structure, depth efficiency, and token crystallization. This line remains useful, but its older competition framing and stronger "gravity/lensing" claims should be read as historical artifacts rather than the current center of the project.
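The Fast Bench distinction between an exact posterior and a diagnostic memory can be sketched in miniature. Everything below is illustrative: the hidden rule, the probe states, the function names, and the witness-budget loop are assumptions made for this sketch, not the repository's actual code or API.

```python
def step(rule: int, state: tuple) -> tuple:
    """Apply a Wolfram elementary CA rule to a cyclic binary state."""
    n = len(state)
    return tuple(
        (rule >> (state[(i - 1) % n] * 4 + state[i] * 2 + state[(i + 1) % n])) & 1
        for i in range(n)
    )

hidden_rule = 110  # the bench's hidden rule; an illustrative choice

posterior = set(range(256))  # exact posterior: every rule not yet contradicted
graveyard = []               # diagnostic memory: which probe eliminated which rule

# Witness-budgeted probing: each probe is one paid observation of the hidden rule.
probes = [
    (0, 1, 1, 0, 1, 0, 0, 1),
    (1, 0, 0, 1, 1, 1, 0, 0),
    (0, 0, 1, 0, 1, 1, 1, 0),
]
for probe in probes:
    observed = step(hidden_rule, probe)
    for r in sorted(posterior):
        if step(r, probe) != observed:
            posterior.discard(r)
            graveyard.append((r, probe))

# The graveyard records why rules died; the posterior records what survives.
# A rich graveyard does not by itself pin down the survivor set.
print(f"{len(posterior)} rules remain, {len(graveyard)} eliminations recorded")
```

The point of the toy: the graveyard and the posterior are different surfaces built from the same observations, which is why graveyard legibility does not automatically translate into usable search guidance.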
| Run | Role | Key Observation |
|---|---|---|
| run-12 | Recovery witness | Thick basin lost, then recovered |
| run-39 | Structural lifecycle | Promotion, decay, fossil preservation |
| run-51 | Graveyard legibility | Richest fossils, saturated altitude baseline |
| run-55 | Phase 2b witness | Best live negative-space altitude witness |
The V4.7 apparatus currently runs across three parallel backends:
- hypervisor_v44.py - raw model API
- hypervisor_v44_codex.py - Codex agent shell
- hypervisor_v44_claude.py - Claude Code agent
Observation surface:
- live dashboards
- spectral telemetry
- compression assays
- read-only probe tooling
Public control room: syntropy.city
docs/ Architecture, project state, experiment index
docs/generative_closure/ Public Fast Bench bundle and Campaign 1 manifest
compression_assay.py V4.7 assay runner
hypervisor_v44.py Raw-model hypervisor
hypervisor_v44_codex.py Codex-agent hypervisor
hypervisor_v44_claude.py Claude-agent hypervisor
v44_epistemics.py Basin/family/local state and graveyard logic
v43_metrics.py Shared telemetry and spectral probes
dashboard.py Live dashboard server
haiku_probe.py Read-only probe instrument
gravity-tokenizer/ Earlier tokenizer line, probes, and archival submission materials
local-runs/ Run archives and witness outputs
syntropy-site/ Public site assets
tests/ Regression coverage
- Some documents in gravity-tokenizer/ preserve the original competition-era framing because they are archival witnesses of that phase.
- Some conceptual language in GENERATIVE_CLOSURE.md is stronger than the current empirical read; use the April 2026 author note and the Fast Bench bundle as the current public correction layer.