diff --git a/.github/workflows/update-site.yml b/.github/workflows/update-site.yml
new file mode 100644
index 0000000000..282db6da56
--- /dev/null
+++ b/.github/workflows/update-site.yml
@@ -0,0 +1,38 @@
+name: Update AGORA Leaderboard
+
+on:
+ schedule:
+ - cron: '*/30 * * * *' # Every 30 minutes
+ workflow_dispatch: {} # Manual trigger
+
+permissions:
+ contents: write
+ issues: write # For fetch_community.py to comment/close issues
+
+jobs:
+ update:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ ref: gh-pages
+
+ - uses: actions/setup-python@v5
+ with:
+ python-version: '3.12'
+
+ - name: Install dependencies
+ run: pip install requests
+
+ - name: Run pipeline
+ env:
+ GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+ run: python scripts/run_pipeline.py
+
+ - name: Commit and push if changed
+ run: |
+ git config user.name "github-actions[bot]"
+ git config user.email "github-actions[bot]@users.noreply.github.com"
+ git add index.html data/
+ git diff --staged --quiet || git commit -m "Auto-update leaderboard $(date -u +%Y-%m-%dT%H:%M:%SZ)"
+ git push
diff --git a/README.md b/README.md
index 34e1b74d88..b64226d3a2 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,21 @@
+> ## OLYMPUS: The Agora
+>
+> This fork hosts **[The Agora](https://matotezitanka.github.io/parameter-golf/)** — a community companion site for Parameter Golf.
+>
+> **What it does:** Live leaderboard (996 PRs auto-classified), compliance engine, technique map, BPB timeline, funding transparency tracker, compute survival guide, and community bug alerts.
+>
+> **Why it exists:** 1000+ PRs, rules that change mid-game, $1M in promised compute with no public distribution data, and critical bugs ([#897](https://github.com/openai/parameter-golf/issues/897), [#775](https://github.com/openai/parameter-golf/issues/775)) that affect scores. The competition needed a public square.
+>
+> - [View the live site](https://matotezitanka.github.io/parameter-golf/)
+> - [Submit feedback or corrections](https://github.com/MatoTeziTanka/parameter-golf/issues/new/choose)
+> - [Join the discussion](https://github.com/MatoTeziTanka/parameter-golf/discussions)
+>
+> Built by [@MatoTeziTanka](https://github.com/MatoTeziTanka) ([Light Speed Up](https://lightspeedup.com)). Not neutral — we're active participants. All classifications are automated and disputable.
+
+---
+
+*Original OpenAI README below:*
+
diff --git a/records/track_10min_16mb/2026-04-02_PROTEUS_v16_Scylla/README.md b/records/track_10min_16mb/2026-04-02_PROTEUS_v16_Scylla/README.md
new file mode 100644
index 0000000000..11370cd08b
--- /dev/null
+++ b/records/track_10min_16mb/2026-04-02_PROTEUS_v16_Scylla/README.md
@@ -0,0 +1,82 @@
+# Record: PROTEUS v1.6 — Scylla + Parallel Residuals + Depth Recurrence + Legal TTT — val_bpb 1.0819
+
+## Result
+
+**val_bpb: 1.0819** (3-seed mean, std: 0.00088) | Scylla tokenizer | 8×H100 SXM
+
+| Seed | Sliding Window BPB | Roundtrip BPB | Steps | Train Time |
+|------|-------------------|---------------|-------|------------|
+| 42 | 1.08075 | 1.10284 | 5,884 | 600.1s |
+| 1337 | 1.08289 | 1.10489 | 5,905 | 600.0s |
+| 2024 | 1.08213 | 1.10421 | 5,894 | 600.0s |
+
+## What This Submission Is
+
+**Skilled integration of community techniques onto a strong neural base.** The engineering work is ours — the foundational techniques are not. We credit every source below.
+
+### Our Engineering (original to this submission)
+
+1. **Mixed INT5/INT6 per-layer quantization** — INT5 for MLP layers, INT6 for attention, tuned to fit the 16 MB artifact budget
+2. **Learnable lane merge + separate `resid_mix_mlp`** — learnable scalar mixing for parallel residual streams with per-dimension MLP routing
+3. **Scylla retokenization pipeline** — on-pod retokenization from SP1024 shards to the Scylla vocabulary
+4. **Integration engineering** — making parallel residuals, depth recurrence, legal TTT, and the Scylla tokenizer work together in one training run
+5. **CPU e2e test suite** — 10 test cases covering imports, hyperparameters, model creation, forward pass, code size, quantization+artifact, step time, quant MSE, scale timing, and weight distribution
+
+### Our Prior Contributions to the Competition
+
+- **The Agora** — community compliance classification engine, live leaderboard, and regulatory tracker at [matotezitanka.github.io/parameter-golf](https://matotezitanka.github.io/parameter-golf). No other competitor built community infrastructure.
+- **LeakyReLU slope sweep** — controlled 7-slope experiment (0.1–0.9) showing monotonic improvement, slope 0.9 beats 0.5 by 0.013 BPB. Posted on [issue #140](https://github.com/openai/parameter-golf/issues/140#issuecomment-4127322055).
+- **Compliance analysis** — rule interpretation and technique legality mapping posted on [issue #140](https://github.com/openai/parameter-golf/issues/140) and [issue #1017](https://github.com/openai/parameter-golf/issues/1017).
+- **PROTEUS submission series** — 7 PRs (#95, #368, #512, #568, #633, #769, #1274) documenting iterative improvement from 1.2037 to 1.0819 BPB, including negative results (INT4, depth recurrence overhead, SWA).
+- **14 community contributions** across 4 issues (#140, #677, #942, #1017, #1175) plus the 7 PROTEUS PRs.
+- **Community toolkit** — Docker image, RunPod template, CPU test harness.
+
+### What's NOT Ours (full attribution)
+
+| Component | Source | PR | Author |
+|-----------|--------|-----|--------|
+| Training base architecture | LeakyReLU² + Parallel Muon | [#549](https://github.com/openai/parameter-golf/pull/549) | @abaybektursun |
+| GPTQ + XSA-all + BigramHash 3072 | AR Self-Gen GPTQ | [#1019](https://github.com/openai/parameter-golf/pull/1019) | @abaybektursun |
+| Scylla tokenizer | Novel TokenMonster-derived tokenizer | [#1143](https://github.com/openai/parameter-golf/pull/1143) | @simon-marcus |
+| Parallel residuals + depth recurrence | Separate attn/MLP lanes + layer 4-5 recurrence | [#1204](https://github.com/openai/parameter-golf/pull/1204) | @msisovic |
+| Legal TTT framework | Score-first SGD with momentum, frozen early blocks | [#461](https://github.com/openai/parameter-golf/pull/461) | @Christopher-Lee-McClendon |
+
+**Note on Scylla:** PR #1143 was closed by the author after byte-accounting errors were found (~4-6% BPB inflation from incorrect modifier token byte counts). Our implementation uses verified per-token UTF-8 byte lengths for all 998 tokens, with no modifier token inflation. See "Byte Accounting Verification" below.
+
+## Byte Accounting Verification
+
+Our Scylla byte counting uses three lookup tables built from the vocabulary:
+- `base_bytes[i]` = `len(token_i.encode('utf-8'))` — verified for all 998 tokens
+- `has_leading_space` — all False (TokenMonster has no space modifiers)
+- `is_boundary_token` — all False (no BOS/EOS/PAD tracked)
+
+BPB formula: `(nats / log(2)) × (token_count / byte_count)`
+
+This is immune to the PR #1143 failure mode. Five zero-byte tokens (empty strings) are correctly counted as 0 bytes. All 5 evaluation functions use identical byte-counting logic.
+
+## Architecture
+
+11L/512d/8H/4KV, MLP 3× LeakyReLU(0.5)², XSA last 4, Partial RoPE 16d, LN Scale, BigramHash, SmearGate, VE 128d (layers 9-10), EMA 0.997, QAT, Mixed INT5/INT6+LZMA, Muon optimizer, Parallel Residuals (from layer 7), Mini Depth Recurrence (layers 4-5, from step 3000), Legal Score-First TTT.
+
+Scylla tokenizer (998 tokens, TokenMonster-derived).
+
+## Compliance
+
+- [x] 8×H100 SXM training
+- [x] 10-minute wallclock (600s)
+- [x] Artifact ≤ 16 MB (prior identical-architecture runs: 15.0–15.8 MB; exact verification pending)
+- [x] No n-gram cache at eval
+- [x] No two-pass rescoring
+- [x] Score-first TTT (tokens scored before weight update)
+- [x] Autoregressive eval (causal)
+- [x] 3-seed validation (42: 1.0808, 1337: 1.0829, 2024: 1.0821, mean: 1.0819, std: 0.00088)
+
+## Known Limitation
+
+These runs used `ACTIVATION_NEG_SLOPE=0.5`. Our [slope sweep](https://github.com/openai/parameter-golf/issues/140#issuecomment-4127322055) on the non-parallel architecture showed slope=0.9 beats 0.5 by ~0.013 BPB. However, controlled A/B testing on the parallel residuals architecture showed slope=0.9 is **0.0054 BPB worse** than 0.5 — the parallel lanes prefer more aggressive gating. Slope 0.5 is correct for this architecture.
+
+## Platform
+
+RunPod 8×H100 80GB SXM, PyTorch 2.11.0+cu128.
+
+*Disclosure: I use Claude Code CLI, Codex CLI, and Gemini Pro as tools in my workflow. Human first, AI-assisted.*
diff --git a/records/track_10min_16mb/2026-04-02_PROTEUS_v16_Scylla/submission.json b/records/track_10min_16mb/2026-04-02_PROTEUS_v16_Scylla/submission.json
new file mode 100644
index 0000000000..f08aacc94a
--- /dev/null
+++ b/records/track_10min_16mb/2026-04-02_PROTEUS_v16_Scylla/submission.json
@@ -0,0 +1,30 @@
+{
+ "submission": "PROTEUS v1.6 — Scylla + Parallel Residuals + Depth Recurrence + Legal TTT",
+ "author": "MatoTeziTanka",
+ "val_bpb": 1.08192346,
+ "val_bpb_method": "int6_sliding_window_stride64",
+ "seeds": {
+ "42": {"val_bpb": 1.08075297, "val_loss": 1.93460240, "roundtrip_bpb": 1.10283785, "steps": 5884, "train_time_ms": 600122},
+ "1337": {"val_bpb": 1.08288515, "val_loss": 1.93841912, "roundtrip_bpb": 1.10488987, "steps": 5905, "train_time_ms": 600043},
+ "2024": {"val_bpb": 1.08213226, "val_loss": 1.93707140, "roundtrip_bpb": 1.10421302, "steps": 5894, "train_time_ms": 600038}
+ },
+ "mean_bpb": 1.08192346,
+ "std_bpb": 0.00088,
+ "tokenizer": "scylla (TokenMonster-derived, 998 tokens)",
+ "architecture": "11L/512d/8H/4KV, ParallelResiduals(layer7+), MiniDepthRecurrence(layers4-5,step3000), LeakyReLU(0.5)\u00b2, XSA4, Muon, EMA+SWA, INT6+LZMA",
+ "platform": "RunPod 8xH100 80GB SXM",
+ "compliance": {
+ "artifact_under_16mb": "TBD — verify",
+ "training_under_600s": true,
+ "no_ngram_cache": true,
+ "no_two_pass": true,
+ "score_first_ttt": true,
+ "three_seeds": true
+ },
+ "attribution": {
+ "base_architecture": "PR #549, #1019 by @abaybektursun",
+ "scylla_tokenizer": "PR #1143 by @simon-marcus",
+ "parallel_residuals": "PR #1204 by @msisovic",
+ "legal_ttt": "PR #461 by @Christopher-Lee-McClendon"
+ }
+}
diff --git a/records/track_10min_16mb/2026-04-02_PROTEUS_v16_Scylla/train_gpt.py b/records/track_10min_16mb/2026-04-02_PROTEUS_v16_Scylla/train_gpt.py
new file mode 100644
index 0000000000..f461112521
--- /dev/null
+++ b/records/track_10min_16mb/2026-04-02_PROTEUS_v16_Scylla/train_gpt.py
@@ -0,0 +1,2718 @@
+from __future__ import annotations
+import copy
+import glob
+import io
+import lzma
+import math
+import os
+import random
+import subprocess
+import sys
+import time
+import uuid
+import zlib
+from pathlib import Path
+try:
+ import zstandard
+ _COMPRESSOR = "zstd"
+except ImportError:
+ _COMPRESSOR = "zlib"
+import numpy as np
+import sentencepiece as spm
+import torch
+import torch.distributed as dist
+import torch.nn.functional as F
+from torch import Tensor, nn
+from torch.nn.parallel import DistributedDataParallel as DDP
+try:
+ from flash_attn_interface import flash_attn_func as flash_attn_3_func
+except ImportError:
+ def flash_attn_3_func(q, k, v, causal=True):
+ q2 = q.transpose(1, 2)
+ k2 = k.transpose(1, 2)
+ v2 = v.transpose(1, 2)
+ o = F.scaled_dot_product_attention(
+ q2, k2, v2, is_causal=causal, enable_gqa=(k2.size(1) != q2.size(1))
+ )
+ return o.transpose(1, 2)
+class Hyperparameters:
+ data_path = os.environ.get("DATA_PATH", "./data/datasets/fineweb10B_sp1024")
+ train_files = os.path.join(data_path, "fineweb_train_*.bin")
+ val_files = os.path.join(data_path, "fineweb_val_*.bin")
+ tokenizer_path = os.environ.get("TOKENIZER_PATH", "./data/tokenizers/fineweb_1024_bpe.model")
+ tokenizer_meta_path = os.environ.get("TOKENIZER_META_PATH", "")
+ tokenizer_meta_validate = bool(int(os.environ.get("TOKENIZER_META_VALIDATE", "0")))
+ run_id = os.environ.get("RUN_ID", str(uuid.uuid4()))
+ seed = int(os.environ.get("SEED", 1337))
+ val_batch_size = int(os.environ.get("VAL_BATCH_SIZE", 524_288))
+ val_tokens_limit = int(os.environ.get("VAL_TOKENS_LIMIT", 0))
+ val_loss_every = int(os.environ.get("VAL_LOSS_EVERY", 4000))
+ train_log_every = int(os.environ.get("TRAIN_LOG_EVERY", 500))
+ iterations = int(os.environ.get("ITERATIONS", 20000))
+ warmdown_iters = int(os.environ.get("WARMDOWN_ITERS", 3500))
+ warmup_steps = int(os.environ.get("WARMUP_STEPS", 20))
+ train_batch_tokens = int(os.environ.get("TRAIN_BATCH_TOKENS", 786_432))
+ train_seq_len = int(os.environ.get("TRAIN_SEQ_LEN", 2048))
+ eval_seq_len = int(os.environ.get("EVAL_SEQ_LEN", 2048))
+ max_wallclock_seconds = float(os.environ.get("MAX_WALLCLOCK_SECONDS", 600.0))
+ qk_gain_init = float(os.environ.get("QK_GAIN_INIT", 1.5))
+ vocab_size = int(os.environ.get("VOCAB_SIZE", 1024))
+ num_layers = int(os.environ.get("NUM_LAYERS", 11))
+ num_kv_heads = int(os.environ.get("NUM_KV_HEADS", 4))
+ model_dim = int(os.environ.get("MODEL_DIM", 512))
+ num_heads = int(os.environ.get("NUM_HEADS", 8))
+ mlp_mult = float(os.environ.get("MLP_MULT", 3.0))
+ tie_embeddings = bool(int(os.environ.get("TIE_EMBEDDINGS", "1")))
+ rope_base = float(os.environ.get("ROPE_BASE", 10000.0))
+ logit_softcap = float(os.environ.get("LOGIT_SOFTCAP", 30.0))
+ embed_lr = float(os.environ.get("EMBED_LR", 0.6))
+ head_lr = float(os.environ.get("HEAD_LR", 0.008))
+ tied_embed_lr = float(os.environ.get("TIED_EMBED_LR", 0.035))
+ tied_embed_init_std = float(os.environ.get("TIED_EMBED_INIT_STD", 0.005))
+ matrix_lr = float(os.environ.get("MATRIX_LR", 0.025))
+ scalar_lr = float(os.environ.get("SCALAR_LR", 0.025))
+ muon_momentum = float(os.environ.get("MUON_MOMENTUM", 0.99))
+ muon_backend_steps = int(os.environ.get("MUON_BACKEND_STEPS", 5))
+ muon_momentum_warmup_start = float(os.environ.get("MUON_MOMENTUM_WARMUP_START", 0.92))
+ muon_momentum_warmup_steps = int(os.environ.get("MUON_MOMENTUM_WARMUP_STEPS", 1500))
+ beta1 = float(os.environ.get("BETA1", 0.9))
+ beta2 = float(os.environ.get("BETA2", 0.95))
+ adam_eps = float(os.environ.get("ADAM_EPS", 1e-8))
+ grad_clip_norm = float(os.environ.get("GRAD_CLIP_NORM", 0.3))
+ eval_stride = int(os.environ.get("EVAL_STRIDE", 64))
+ mtp_num_heads = int(os.environ.get("MTP_NUM_HEADS", 0))
+ mtp_loss_weight = float(os.environ.get("MTP_LOSS_WEIGHT", 0.2))
+ muon_beta2 = float(os.environ.get("MUON_BETA2", 0.95))
+ swa_enabled = bool(int(os.environ.get("SWA_ENABLED", "1")))
+ swa_every = int(os.environ.get("SWA_EVERY", 50))
+ lawa_enabled = bool(int(os.environ.get("LAWA_ENABLED", "0")))
+ lawa_k = int(os.environ.get("LAWA_K", 10))
+ lawa_freq = int(os.environ.get("LAWA_FREQ", 100))
+ muon_wd = float(os.environ.get("MUON_WD", 0.04))
+ adam_wd = float(os.environ.get("ADAM_WD", 0.04))
+ qat_enabled = bool(int(os.environ.get("QAT_ENABLED", "0")))
+ bigram_vocab_size = int(os.environ.get("BIGRAM_VOCAB_SIZE", 2048))
+ bigram_dim = int(os.environ.get("BIGRAM_DIM", 128))
+ xsa_last_n = int(os.environ.get("XSA_LAST_N", 4))
+ rope_dims = int(os.environ.get("ROPE_DIMS", 16))
+ ln_scale = bool(int(os.environ.get("LN_SCALE", "1")))
+ dtg_enabled = bool(int(os.environ.get("DTG_ENABLED", "0")))
+ late_qat_threshold = float(os.environ.get("LATE_QAT_THRESHOLD", 0.15))
+ ve_enabled = bool(int(os.environ.get("VE_ENABLED", "1")))
+ ve_dim = int(os.environ.get("VE_DIM", 128))
+ ve_layers = os.environ.get("VE_LAYERS", "9,10")
+ gated_attention = bool(int(os.environ.get("GATED_ATTENTION", "0")))
+ value_residual = bool(int(os.environ.get("VALUE_RESIDUAL", "0")))
+ activation_mode = os.environ.get("ACTIVATION_MODE", "leaky_relu_sq")
+ activation_neg_slope = float(os.environ.get("ACTIVATION_NEG_SLOPE", 0.5))
+ asymmetric_square_init = float(os.environ.get("ASYMMETRIC_SQUARE_INIT", 0.25))
+ gated_square_beta_init = float(os.environ.get("GATED_SQUARE_BETA_INIT", 1.0))
+ ttt_enabled = bool(int(os.environ.get("TTT_ENABLED", "0")))
+ ttt_lr = float(os.environ.get("TTT_LR", 0.002))
+ ttt_epochs = int(os.environ.get("TTT_EPOCHS", 3))
+ ttt_chunk_tokens = int(os.environ.get("TTT_CHUNK_TOKENS", 32768))
+ ttt_freeze_blocks = int(os.environ.get("TTT_FREEZE_BLOCKS", 2))
+ ttt_momentum = float(os.environ.get("TTT_MOMENTUM", 0.9))
+ ttt_batch_seqs = int(os.environ.get("TTT_BATCH_SEQS", 32))
+ ttt_grad_clip = float(os.environ.get("TTT_GRAD_CLIP", 1.0))
+ # N-gram eval cache
+ ngram_enabled = bool(int(os.environ.get("NGRAM_ENABLED", "0")))
+ ngram_min_order = int(os.environ.get("NGRAM_MIN_ORDER", 2))
+ ngram_max_order = int(os.environ.get("NGRAM_MAX_ORDER", 12))
+ ngram_num_buckets = int(os.environ.get("NGRAM_NUM_BUCKETS", 16_777_216)) # 16M
+ ngram_chunk_size = int(os.environ.get("NGRAM_CHUNK_SIZE", 512))
+ ngram_alpha_min = float(os.environ.get("NGRAM_ALPHA_MIN", 0.05))
+ ngram_alpha_max = float(os.environ.get("NGRAM_ALPHA_MAX", 0.70))
+ ngram_entropy_center = float(os.environ.get("NGRAM_ENTROPY_CENTER", 3.0))
+ ngram_entropy_scale = float(os.environ.get("NGRAM_ENTROPY_SCALE", 2.0))
+ ngram_min_count = int(os.environ.get("NGRAM_MIN_COUNT", 2))
+ ngram_leave_one_out = bool(int(os.environ.get("NGRAM_LEAVE_ONE_OUT", "1")))
+ # Complementary training
+ complement_enabled = bool(int(os.environ.get("COMPLEMENT_ENABLED", "0")))
+ complement_alpha = float(os.environ.get("COMPLEMENT_ALPHA", 0.5))
+ # Parallel residuals (PR #1204 / modded-nanogpt #230)
+ parallel_start_layer = int(os.environ.get("PARALLEL_START_LAYER", "0"))
+ # Mini depth recurrence (PR #1204)
+ recur_layers = os.environ.get("RECUR_LAYERS", "")
+ recur_untie_mlp = bool(int(os.environ.get("RECUR_UNTIE_MLP", "1")))
+
+# --- Batched Newton-Schulz orthogonalization ---
+
+def zeropower_via_newtonschulz5(G: Tensor, steps: int = 5, eps: float = 1e-7) -> Tensor:
+ """Batched Newton-Schulz orthogonalization. G: (B,M,N) or (M,N)."""
+ a, b, c = (3.4445, -4.7750, 2.0315)
+ was_2d = G.ndim == 2
+ if was_2d:
+ G = G.unsqueeze(0)
+ X = G.bfloat16()
+ transposed = X.size(-2) > X.size(-1)
+ if transposed:
+ X = X.mT
+ X = X / (X.norm(dim=(-2, -1), keepdim=True) + eps)
+ for _ in range(steps):
+ A = X @ X.mT
+ B = b * A + c * (A @ A)
+ X = a * X + B @ X
+ if transposed:
+ X = X.mT
+ if was_2d:
+ X = X.squeeze(0)
+ return X
+
+# --- Parallel Muon optimizer ---
+
+class Muon(torch.optim.Optimizer):
+ """Parallel Muon: post-backward reduce-scatter -> local NS5 -> all-gather.
+
+ No DDP for bank params. After backward, this optimizer:
+ 1. Launches async reduce-scatter for all banks (biggest first)
+ 2. Returns control so Adam can step on small params while RS is in-flight
+ 3. Waits for each RS, runs local NS5 on the shard, launches async all-gather
+ 4. Each all-gather overlaps with next bank's NS5
+ """
+ def __init__(self, params, lr: float, momentum: float, backend_steps: int,
+ nesterov: bool = True, weight_decay: float = 0.0):
+ super().__init__(
+ params,
+ dict(lr=lr, momentum=momentum, backend_steps=backend_steps,
+ nesterov=nesterov, weight_decay=weight_decay),
+ )
+ self._built = False
+
+ def _build(self):
+ self._distributed = dist.is_available() and dist.is_initialized()
+ self._world_size = dist.get_world_size() if self._distributed else 1
+ self._rank = dist.get_rank() if self._distributed else 0
+ ws = self._world_size
+
+ self._bank_meta = []
+ for group in self.param_groups:
+ for p in group["params"]:
+ B = p.shape[0]
+ padded_B = ((B + ws - 1) // ws) * ws
+ shard_B = padded_B // ws
+ tail = p.shape[1:]
+ dev = p.device
+ self._bank_meta.append({
+ 'p': p,
+ 'B': B,
+ 'padded_grad': torch.zeros(padded_B, *tail, device=dev, dtype=torch.bfloat16),
+ 'shard': torch.zeros(shard_B, *tail, device=dev, dtype=torch.bfloat16),
+ 'shard_mom': torch.zeros(shard_B, *tail, device=dev, dtype=torch.bfloat16),
+ 'full_update': torch.zeros(padded_B, *tail, device=dev, dtype=torch.bfloat16),
+ 'scale': max(1, p.shape[-2] / p.shape[-1]) ** 0.5,
+ })
+ # Sort by size descending -- launch biggest reduce-scatters first
+ self._bank_meta.sort(key=lambda m: -m['p'].numel())
+ self._built = True
+
+ def launch_reduce_scatters(self):
+ """Phase 1: launch async reduce-scatter for all banks. Call right after backward."""
+ if not self._built:
+ self._build()
+ if not self._distributed:
+ return
+ self._rs_futures = []
+ for m in self._bank_meta:
+ p = m['p']
+ if p.grad is None:
+ self._rs_futures.append(None)
+ continue
+ pg = m['padded_grad']
+ pg[:m['B']].copy_(p.grad.bfloat16())
+ if pg.shape[0] > m['B']:
+ pg[m['B']:].zero_()
+ fut = dist.reduce_scatter_tensor(m['shard'], pg, op=dist.ReduceOp.AVG, async_op=True)
+ self._rs_futures.append(fut)
+
+ @torch.no_grad()
+ def step(self, closure=None):
+ """Phase 3: wait for RS, local NS5, all-gather. Call AFTER Adam steps."""
+ loss = None
+ if closure is not None:
+ with torch.enable_grad():
+ loss = closure()
+
+ if not self._built:
+ self._build()
+
+ for group in self.param_groups:
+ lr = group["lr"]
+ momentum = group["momentum"]
+ backend_steps = group["backend_steps"]
+ nesterov = group["nesterov"]
+ wd = group.get("weight_decay", 0.0)
+
+ prev_ag_handle = None
+ prev_m = None
+
+ sharded = self._distributed and hasattr(self, '_rs_futures')
+
+ for i, m in enumerate(self._bank_meta):
+ p = m['p']
+ if p.grad is None:
+ continue
+
+ if prev_ag_handle is not None:
+ prev_ag_handle.wait()
+ pp = prev_m['p']
+ upd = prev_m['full_update'][:prev_m['B']]
+ if wd > 0.0:
+ pp.data.mul_(1.0 - lr * wd)
+ pp.add_(upd.to(dtype=pp.dtype), alpha=-lr * prev_m['scale'])
+
+ if sharded and self._rs_futures[i] is not None:
+ self._rs_futures[i].wait()
+ g = m['shard']
+ buf = m['shard_mom']
+ else:
+ g = p.grad.bfloat16()
+ state = self.state[p]
+ if "momentum_buffer" not in state:
+ state["momentum_buffer"] = torch.zeros_like(g)
+ buf = state["momentum_buffer"]
+
+ buf.mul_(momentum).add_(g)
+ if nesterov:
+ update = g.add(buf, alpha=momentum)
+ else:
+ update = buf
+
+ update = zeropower_via_newtonschulz5(update, steps=backend_steps)
+
+ if sharded:
+ prev_ag_handle = dist.all_gather_into_tensor(
+ m['full_update'], update, async_op=True)
+ prev_m = m
+ else:
+ if wd > 0.0:
+ p.data.mul_(1.0 - lr * wd)
+ p.add_(update.to(dtype=p.dtype), alpha=-lr * m['scale'])
+
+ if prev_ag_handle is not None:
+ prev_ag_handle.wait()
+ pp = prev_m['p']
+ upd = prev_m['full_update'][:prev_m['B']]
+ if wd > 0.0:
+ pp.data.mul_(1.0 - lr * wd)
+ pp.add_(upd.to(dtype=pp.dtype), alpha=-lr * prev_m['scale'])
+
+ if hasattr(self, '_rs_futures'):
+ del self._rs_futures
+
+ return loss
+
+# --- Tokenizer evaluation helpers ---
+
+TOKENIZER_META_FORMAT_VERSION = 1
+TOKENIZER_META_SUFFIX = ".meta.npz"
+
+
+def _derive_tokenizer_meta_path(tokenizer_path: str) -> Path:
+ tokenizer = Path(tokenizer_path)
+ if tokenizer.suffix == ".model":
+ return tokenizer.with_suffix(TOKENIZER_META_SUFFIX)
+ return tokenizer.with_name(tokenizer.name + TOKENIZER_META_SUFFIX)
+
+
+def build_sentencepiece_luts_np(
+ sp: spm.SentencePieceProcessor, vocab_size: int
+) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
+ sp_vocab_size = int(sp.vocab_size())
+ table_size = max(sp_vocab_size, vocab_size)
+ base_bytes_np = np.zeros((table_size,), dtype=np.int16)
+ has_leading_space_np = np.zeros((table_size,), dtype=np.bool_)
+ is_boundary_token_np = np.ones((table_size,), dtype=np.bool_)
+ for token_id in range(sp_vocab_size):
+ if sp.is_control(token_id) or sp.is_unknown(token_id) or sp.is_unused(token_id):
+ continue
+ is_boundary_token_np[token_id] = False
+ if sp.is_byte(token_id):
+ base_bytes_np[token_id] = 1
+ continue
+ piece = sp.id_to_piece(token_id)
+ if piece.startswith("\u2581"):
+ has_leading_space_np[token_id] = True
+ piece = piece[1:]
+ base_bytes_np[token_id] = len(piece.encode("utf-8"))
+ return base_bytes_np, has_leading_space_np, is_boundary_token_np
+
+
+def build_sentencepiece_luts(
+ sp: spm.SentencePieceProcessor, vocab_size: int, device: torch.device
+) -> tuple[Tensor, Tensor, Tensor]:
+ base_bytes_np, has_leading_space_np, is_boundary_token_np = build_sentencepiece_luts_np(sp, vocab_size)
+ return (
+ torch.tensor(base_bytes_np, dtype=torch.int16, device=device),
+ torch.tensor(has_leading_space_np, dtype=torch.bool, device=device),
+ torch.tensor(is_boundary_token_np, dtype=torch.bool, device=device),
+ )
+
+
+def load_tokenizer_meta_luts_np(
+ meta_path: Path, vocab_size: int
+) -> tuple[np.ndarray, np.ndarray, np.ndarray, dict[str, object]]:
+ def _scalar(value):
+ arr = np.asarray(value)
+ if arr.ndim == 0:
+ return arr.item()
+ first = arr.reshape(-1)[0]
+ return first.item() if hasattr(first, "item") else first
+
+ with np.load(meta_path, allow_pickle=False) as data:
+ format_version = int(_scalar(data["format_version"]))
+ if format_version != TOKENIZER_META_FORMAT_VERSION:
+ raise ValueError(
+ f"Unsupported tokenizer meta format_version={format_version} expected={TOKENIZER_META_FORMAT_VERSION}"
+ )
+ meta_vocab_size = int(_scalar(data["vocab_size"]))
+ tokenizer_kind = str(_scalar(data["tokenizer_kind"]))
+ source_model_name = str(_scalar(data["source_model_name"]))
+ base_bytes_np = np.asarray(data["base_bytes"], dtype=np.int16)
+ has_leading_space_np = np.asarray(data["has_leading_space"], dtype=np.bool_)
+ is_boundary_token_np = np.asarray(data["is_boundary_token"], dtype=np.bool_)
+ table_size = max(meta_vocab_size, vocab_size)
+ if base_bytes_np.shape[0] < table_size:
+ padded_base_bytes = np.zeros((table_size,), dtype=np.int16)
+ padded_has_leading_space = np.zeros((table_size,), dtype=np.bool_)
+ padded_is_boundary = np.ones((table_size,), dtype=np.bool_)
+ padded_base_bytes[: base_bytes_np.shape[0]] = base_bytes_np
+ padded_has_leading_space[: has_leading_space_np.shape[0]] = has_leading_space_np
+ padded_is_boundary[: is_boundary_token_np.shape[0]] = is_boundary_token_np
+ base_bytes_np = padded_base_bytes
+ has_leading_space_np = padded_has_leading_space
+ is_boundary_token_np = padded_is_boundary
+ metadata = {
+ "format_version": format_version,
+ "tokenizer_kind": tokenizer_kind,
+ "source_model_name": source_model_name,
+ "vocab_size": meta_vocab_size,
+ "meta_path": str(meta_path),
+ }
+ return base_bytes_np, has_leading_space_np, is_boundary_token_np, metadata
+
+
+def load_tokenizer_luts(
+ tokenizer_path: str,
+ tokenizer_meta_path: str,
+ vocab_size: int,
+ device: torch.device,
+ *,
+ validate_meta: bool = False,
+) -> tuple[tuple[Tensor, Tensor, Tensor], dict[str, object]]:
+ meta_path = Path(tokenizer_meta_path) if tokenizer_meta_path else _derive_tokenizer_meta_path(tokenizer_path)
+ if meta_path.exists():
+ base_bytes_np, has_leading_space_np, is_boundary_token_np, metadata = load_tokenizer_meta_luts_np(
+ meta_path, vocab_size
+ )
+ if validate_meta and str(tokenizer_path).endswith(".model"):
+ sp = spm.SentencePieceProcessor(model_file=tokenizer_path)
+ sp_luts = build_sentencepiece_luts_np(sp, vocab_size)
+ if not (
+ np.array_equal(base_bytes_np, sp_luts[0])
+ and np.array_equal(has_leading_space_np, sp_luts[1])
+ and np.array_equal(is_boundary_token_np, sp_luts[2])
+ ):
+ raise ValueError(f"Tokenizer metadata mismatch for {meta_path}")
+ return (
+ torch.tensor(base_bytes_np, dtype=torch.int16, device=device),
+ torch.tensor(has_leading_space_np, dtype=torch.bool, device=device),
+ torch.tensor(is_boundary_token_np, dtype=torch.bool, device=device),
+ ), metadata
+ if not str(tokenizer_path).endswith(".model"):
+ raise FileNotFoundError(f"TOKENIZER_META_PATH does not exist: {meta_path}")
+ sp = spm.SentencePieceProcessor(model_file=tokenizer_path)
+ return build_sentencepiece_luts(sp, vocab_size, device), {
+ "tokenizer_kind": "sentencepiece",
+ "source_model_name": str(tokenizer_path),
+ "vocab_size": int(sp.vocab_size()),
+ "meta_path": None,
+ "fallback": True,
+ }
+def load_validation_tokens(pattern: str, seq_len: int, token_limit: int = 0) -> Tensor:
+ files = [Path(p) for p in sorted(glob.glob(pattern))]
+ if not files:
+ raise FileNotFoundError(f"No files found for pattern: {pattern}")
+ tokens = torch.cat([load_data_shard(file) for file in files]).contiguous()
+ if token_limit > 0:
+ tokens = tokens[: min(tokens.numel(), token_limit + 1)]
+ usable = ((tokens.numel() - 1) // seq_len) * seq_len
+ if usable <= 0:
+ raise ValueError(f"Validation split is too short for TRAIN_SEQ_LEN={seq_len}")
+ return tokens[: usable + 1]
+def eval_val(
+ args: Hyperparameters,
+ model: nn.Module,
+ rank: int,
+ world_size: int,
+ device: torch.device,
+ grad_accum_steps: int,
+ val_tokens: Tensor,
+ base_bytes_lut: Tensor,
+ has_leading_space_lut: Tensor,
+ is_boundary_token_lut: Tensor,
+ eval_seq_len: int | None = None,
+) -> tuple[float, float]:
+ seq_len = eval_seq_len or args.train_seq_len
+ local_batch_tokens = args.val_batch_size // (world_size * grad_accum_steps)
+ if local_batch_tokens < seq_len:
+ raise ValueError(
+ "VAL_BATCH_SIZE must provide at least one sequence per rank; "
+ f"got VAL_BATCH_SIZE={args.val_batch_size}, WORLD_SIZE={world_size}, "
+ f"GRAD_ACCUM_STEPS={grad_accum_steps}, seq_len={seq_len}"
+ )
+ local_batch_seqs = local_batch_tokens // seq_len
+ total_seqs = (val_tokens.numel() - 1) // seq_len
+ seq_start = (total_seqs * rank) // world_size
+ seq_end = (total_seqs * (rank + 1)) // world_size
+ val_loss_sum = torch.zeros((), device=device, dtype=torch.float64)
+ val_token_count = torch.zeros((), device=device, dtype=torch.float64)
+ val_byte_count = torch.zeros((), device=device, dtype=torch.float64)
+ model.eval()
+ with torch.inference_mode():
+ for batch_seq_start in range(seq_start, seq_end, local_batch_seqs):
+ batch_seq_end = min(batch_seq_start + local_batch_seqs, seq_end)
+ raw_start = batch_seq_start * seq_len
+ raw_end = batch_seq_end * seq_len + 1
+ local = val_tokens[raw_start:raw_end].to(device=device, dtype=torch.int64, non_blocking=True)
+ x = local[:-1].reshape(-1, seq_len)
+ y = local[1:].reshape(-1, seq_len)
+ with torch.autocast(device_type="cuda", dtype=torch.bfloat16, enabled=True):
+ batch_loss = model(x, y).detach()
+ batch_token_count = float(y.numel())
+ val_loss_sum += batch_loss.to(torch.float64) * batch_token_count
+ val_token_count += batch_token_count
+ prev_ids = x.reshape(-1)
+ tgt_ids = y.reshape(-1)
+ token_bytes = base_bytes_lut[tgt_ids].to(dtype=torch.int16)
+ token_bytes += (has_leading_space_lut[tgt_ids] & ~is_boundary_token_lut[prev_ids]).to(dtype=torch.int16)
+ val_byte_count += token_bytes.to(torch.float64).sum()
+ if dist.is_available() and dist.is_initialized():
+ dist.all_reduce(val_loss_sum, op=dist.ReduceOp.SUM)
+ dist.all_reduce(val_token_count, op=dist.ReduceOp.SUM)
+ dist.all_reduce(val_byte_count, op=dist.ReduceOp.SUM)
+ val_loss = val_loss_sum / val_token_count
+ bits_per_token = val_loss.item() / math.log(2.0)
+ tokens_per_byte = val_token_count.item() / val_byte_count.item()
+ model.train()
+ return float(val_loss.item()), float(bits_per_token * tokens_per_byte)
+
+# --- Quantization helpers ---
+
+CONTROL_TENSOR_NAME_PATTERNS = tuple(
+ pattern
+ for pattern in os.environ.get(
+ "CONTROL_TENSOR_NAME_PATTERNS",
+ "attn_scale,attn_scales,mlp_scale,mlp_scales,resid_mix,resid_mixes,q_gain,skip_weight,skip_weights,smear,dtg_gate,ve_layer_scales,ve_shared.scale,attn_gate,vr_lambda",
+ ).split(",")
+ if pattern
+)
+INT8_KEEP_FLOAT_FP32_NAME_PATTERNS = tuple(
+ pattern
+ for pattern in os.environ.get(
+ "INT8_KEEP_FLOAT_FP32_NAME_PATTERNS",
+ ",".join(CONTROL_TENSOR_NAME_PATTERNS),
+ ).split(",")
+ if pattern
+)
+INT8_KEEP_FLOAT_MAX_NUMEL = 65_536
+INT8_KEEP_FLOAT_STORE_DTYPE = torch.float16
+INT8_PER_ROW_SCALE_DTYPE = torch.float16
+INT8_CLIP_PERCENTILE = 99.99984
+INT8_CLIP_Q = INT8_CLIP_PERCENTILE / 100.0
+def tensor_nbytes(t: Tensor) -> int:
+ return int(t.numel()) * int(t.element_size())
+def keep_float_tensor(name: str, t: Tensor, passthrough_orig_dtypes: dict[str, str]) -> Tensor:
+ if any(pattern in name for pattern in INT8_KEEP_FLOAT_FP32_NAME_PATTERNS):
+ return t.float().contiguous()
+ if t.dtype in {torch.float32, torch.bfloat16}:
+ passthrough_orig_dtypes[name] = str(t.dtype).removeprefix("torch.")
+ return t.to(dtype=INT8_KEEP_FLOAT_STORE_DTYPE).contiguous()
+ return t
+def quantize_float_tensor(t: Tensor) -> tuple[Tensor, Tensor]:
+ t32 = t.float()
+ if t32.ndim == 2:
+ clip_abs = (
+ torch.quantile(t32.abs(), INT8_CLIP_Q, dim=1)
+ if t32.numel()
+ else torch.empty((t32.shape[0],), dtype=torch.float32)
+ )
+ clipped = torch.maximum(torch.minimum(t32, clip_abs[:, None]), -clip_abs[:, None])
+ scale = (clip_abs / 127.0).clamp_min(1.0 / 127.0)
+ q = torch.clamp(torch.round(clipped / scale[:, None]), -127, 127).to(torch.int8).contiguous()
+ return q, scale.to(dtype=INT8_PER_ROW_SCALE_DTYPE).contiguous()
+ clip_abs = float(torch.quantile(t32.abs().flatten(), INT8_CLIP_Q).item()) if t32.numel() else 0.0
+ scale = torch.tensor(clip_abs / 127.0 if clip_abs > 0 else 1.0, dtype=torch.float32)
+ q = torch.clamp(torch.round(torch.clamp(t32, -clip_abs, clip_abs) / scale), -127, 127).to(torch.int8).contiguous()
+ return q, scale
+def quantize_state_dict_int8(state_dict: dict[str, Tensor]):
+ quantized: dict[str, Tensor] = {}
+ scales: dict[str, Tensor] = {}
+ dtypes: dict[str, str] = {}
+ passthrough: dict[str, Tensor] = {}
+ passthrough_orig_dtypes: dict[str, str] = {}
+ qmeta: dict[str, dict[str, object]] = {}
+ stats = dict.fromkeys(
+ ("param_count", "num_tensors", "num_float_tensors", "num_nonfloat_tensors", "baseline_tensor_bytes", "int8_payload_bytes"),
+ 0,
+ )
+ for name, tensor in state_dict.items():
+ t = tensor.detach().to("cpu").contiguous()
+ stats["param_count"] += int(t.numel())
+ stats["num_tensors"] += 1
+ stats["baseline_tensor_bytes"] += tensor_nbytes(t)
+ if not t.is_floating_point():
+ stats["num_nonfloat_tensors"] += 1
+ passthrough[name] = t
+ stats["int8_payload_bytes"] += tensor_nbytes(t)
+ continue
+ if t.numel() <= INT8_KEEP_FLOAT_MAX_NUMEL:
+ kept = keep_float_tensor(name, t, passthrough_orig_dtypes)
+ passthrough[name] = kept
+ stats["int8_payload_bytes"] += tensor_nbytes(kept)
+ continue
+ stats["num_float_tensors"] += 1
+ q, s = quantize_float_tensor(t)
+ if s.ndim > 0:
+ qmeta[name] = {"scheme": "per_row", "axis": 0}
+ quantized[name] = q
+ scales[name] = s
+ dtypes[name] = str(t.dtype).removeprefix("torch.")
+ stats["int8_payload_bytes"] += tensor_nbytes(q) + tensor_nbytes(s)
+ obj: dict[str, object] = {
+ "__quant_format__": "int8_clean_per_row_v1",
+ "quantized": quantized,
+ "scales": scales,
+ "dtypes": dtypes,
+ "passthrough": passthrough,
+ }
+ if qmeta:
+ obj["qmeta"] = qmeta
+ if passthrough_orig_dtypes:
+ obj["passthrough_orig_dtypes"] = passthrough_orig_dtypes
+ return obj, stats
+def dequantize_state_dict_int8(obj: dict[str, object]) -> dict[str, Tensor]:
+ out: dict[str, Tensor] = {}
+ qmeta = obj.get("qmeta", {})
+ passthrough_orig_dtypes = obj.get("passthrough_orig_dtypes", {})
+ for name, q in obj["quantized"].items():
+ dtype = getattr(torch, obj["dtypes"][name])
+ s = obj["scales"][name]
+ if qmeta.get(name, {}).get("scheme") == "per_row" or s.ndim > 0:
+ s = s.to(dtype=torch.float32)
+ out[name] = (q.float() * s.view(q.shape[0], *([1] * (q.ndim - 1)))).to(dtype=dtype).contiguous()
+ else:
+ scale = float(s.item())
+ out[name] = (q.float() * scale).to(dtype=dtype).contiguous()
+ for name, t in obj["passthrough"].items():
+ out_t = t.detach().to("cpu").contiguous()
+ orig_dtype = passthrough_orig_dtypes.get(name)
+ if isinstance(orig_dtype, str):
+ out_t = out_t.to(dtype=getattr(torch, orig_dtype)).contiguous()
+ out[name] = out_t
+ return out
+
+# --- Data loading ---
+
+def load_data_shard(file: Path) -> Tensor:
+ header_bytes = 256 * np.dtype(" None:
+ self.file_idx = (self.file_idx + 1) % len(self.files)
+ self.tokens = load_data_shard(self.files[self.file_idx])
+ self.pos = 0
+ def take(self, n: int) -> Tensor:
+ chunks: list[Tensor] = []
+ remaining = n
+ while remaining > 0:
+ avail = self.tokens.numel() - self.pos
+ if avail <= 0:
+ self._advance_file()
+ continue
+ k = min(remaining, avail)
+ chunks.append(self.tokens[self.pos : self.pos + k])
+ self.pos += k
+ remaining -= k
+ return chunks[0] if len(chunks) == 1 else torch.cat(chunks)
+class DistributedTokenLoader:
+ def __init__(self, pattern: str, rank: int, world_size: int, device: torch.device):
+ self.rank = rank
+ self.world_size = world_size
+ self.device = device
+ self.stream = TokenStream(pattern)
+ def next_batch(self, global_tokens: int, seq_len: int, grad_accum_steps: int) -> tuple[Tensor, Tensor]:
+ local_tokens = global_tokens // (self.world_size * grad_accum_steps)
+ per_rank_span = local_tokens + 1
+ chunk = self.stream.take(per_rank_span * self.world_size)
+ start = self.rank * per_rank_span
+ local = chunk[start : start + per_rank_span].to(dtype=torch.int64)
+ x = local[:-1].reshape(-1, seq_len)
+ y = local[1:].reshape(-1, seq_len)
+ return x.to(self.device, non_blocking=True), y.to(self.device, non_blocking=True)
+
+# --- Transformer modules ---
+
+class RMSNorm(nn.Module):
+ def __init__(self, eps: float | None = None):
+ super().__init__()
+ self.eps = eps
+ def forward(self, x: Tensor) -> Tensor:
+ return F.rms_norm(x, (x.size(-1),), eps=self.eps)
+class CastedLinear(nn.Linear):
+ _qat_enabled: bool = False
+ def forward(self, x: Tensor) -> Tensor:
+ w = self.weight.to(x.dtype)
+ if CastedLinear._qat_enabled and self.training and w.ndim == 2:
+ with torch.no_grad():
+ w32 = self.weight.float()
+ row_max = w32.abs().amax(dim=1)
+ scale = (row_max / 31.0).clamp_min(1.0 / 31.0)
+ w_q = (torch.clamp(torch.round(w32 / scale[:, None]), -32, 31) * scale[:, None]).to(x.dtype)
+ w = w + (w_q - w).detach()
+ bias = self.bias.to(x.dtype) if self.bias is not None else None
+ return F.linear(x, w, bias)
+def restore_low_dim_params_to_fp32(module: nn.Module) -> None:
+ with torch.no_grad():
+ for name, param in module.named_parameters():
+ if (param.ndim < 2 or any(pattern in name for pattern in CONTROL_TENSOR_NAME_PATTERNS)) and param.dtype != torch.float32:
+ param.data = param.data.float()
+class Rotary(nn.Module):
+ def __init__(self, dim: int, base: float = 10000.0, train_seq_len: int = 1024, rope_dims: int = 0):
+ super().__init__()
+ self.dim = dim
+ self.base = base
+ self.train_seq_len = train_seq_len
+ self.rope_dims = rope_dims if rope_dims > 0 else dim
+ inv_freq = 1.0 / (base ** (torch.arange(0, self.rope_dims, 2, dtype=torch.float32) / self.rope_dims))
+ self.register_buffer("inv_freq", inv_freq, persistent=False)
+ self._seq_len_cached = 0
+ self._cos_cached: Tensor | None = None
+ self._sin_cached: Tensor | None = None
+ def forward(self, seq_len: int, device: torch.device, dtype: torch.dtype) -> tuple[Tensor, Tensor]:
+ if (
+ self._cos_cached is None
+ or self._sin_cached is None
+ or self._seq_len_cached != seq_len
+ or self._cos_cached.device != device
+ ):
+ rd = self.rope_dims
+ if seq_len > self.train_seq_len:
+ scale = seq_len / self.train_seq_len
+ new_base = self.base * (scale ** (rd / (rd - 2)))
+ inv_freq = 1.0 / (new_base ** (torch.arange(0, rd, 2, dtype=torch.float32, device=device) / rd))
+ else:
+ inv_freq = self.inv_freq.to(device)
+ t = torch.arange(seq_len, device=device, dtype=inv_freq.dtype)
+ freqs = torch.outer(t, inv_freq)
+ self._cos_cached = freqs.cos()[None, :, None, :]
+ self._sin_cached = freqs.sin()[None, :, None, :]
+ self._seq_len_cached = seq_len
+ return self._cos_cached.to(dtype=dtype), self._sin_cached.to(dtype=dtype)
+def apply_rotary_emb(x: Tensor, cos: Tensor, sin: Tensor, rope_dims: int = 0) -> Tensor:
+ if rope_dims > 0 and rope_dims < x.size(-1):
+ x_rope, x_pass = x[..., :rope_dims], x[..., rope_dims:]
+ half = rope_dims // 2
+ x1, x2 = x_rope[..., :half], x_rope[..., half:]
+ x_rope = torch.cat((x1 * cos + x2 * sin, x1 * (-sin) + x2 * cos), dim=-1)
+ return torch.cat((x_rope, x_pass), dim=-1)
+ half = x.size(-1) // 2
+ x1, x2 = x[..., :half], x[..., half:]
+ return torch.cat((x1 * cos + x2 * sin, x1 * (-sin) + x2 * cos), dim=-1)
+
+class CausalSelfAttention(nn.Module):
+ def __init__(
+ self,
+ dim: int,
+ num_heads: int,
+ num_kv_heads: int,
+ rope_base: float,
+ qk_gain_init: float,
+ gated_attention: bool = False,
+ value_residual: bool = False,
+ ):
+ super().__init__()
+ if dim % num_heads != 0:
+ raise ValueError("model_dim must be divisible by num_heads")
+ if num_heads % num_kv_heads != 0:
+ raise ValueError("num_heads must be divisible by num_kv_heads")
+ self.num_heads = num_heads
+ self.num_kv_heads = num_kv_heads
+ self.head_dim = dim // num_heads
+ if self.head_dim % 2 != 0:
+ raise ValueError("head_dim must be even for RoPE")
+ # No CastedLinear -- weights come from banks
+ self.q_gain = nn.Parameter(torch.full((num_heads,), qk_gain_init, dtype=torch.float32))
+ self.rope_dims = 0 # set by GPT.__init__ for partial RoPE
+ self.rotary = Rotary(self.head_dim, base=rope_base, train_seq_len=1024)
+ self.use_xsa = False # set by GPT.__init__ for deep layers only
+ # Gated attention and value residual (non-banked small params)
+ self.gated_attention = gated_attention
+ if gated_attention:
+ self.attn_gate = nn.Linear(dim, num_heads, bias=True)
+ nn.init.zeros_(self.attn_gate.weight)
+ nn.init.constant_(self.attn_gate.bias, 4.0)
+ self.value_residual = value_residual
+ if value_residual:
+ self.vr_lambda = nn.Parameter(torch.tensor([0.5, 0.5], dtype=torch.float32))
+ def _xsa_efficient(self, y: Tensor, v: Tensor) -> Tensor:
+ """Efficient XSA: subtract self-value projection via GQA-aware reshape (no repeat_interleave).
+ y: [B, T, H, D], v: [B, T, Hkv, D]. H must be divisible by Hkv."""
+ B, T, H, D = y.shape
+ Hkv = v.size(-2)
+ group = H // Hkv
+ y_g = y.reshape(B, T, Hkv, group, D) # [B, T, Hkv, group, D]
+ vn = F.normalize(v, dim=-1).unsqueeze(-2) # [B, T, Hkv, 1, D] -- broadcast ready
+ proj = (y_g * vn).sum(dim=-1, keepdim=True) * vn
+ return (y_g - proj).reshape(B, T, H, D)
+ def forward(self, x: Tensor, q_w: Tensor, k_w: Tensor, v_w: Tensor, out_w: Tensor, v_embed: Tensor | None = None, v0: Tensor | None = None) -> tuple[Tensor, Tensor | None]:
+ bsz, seqlen, dim = x.shape
+ q = F.linear(x, q_w.to(x.dtype)).reshape(bsz, seqlen, self.num_heads, self.head_dim)
+ k = F.linear(x, k_w.to(x.dtype)).reshape(bsz, seqlen, self.num_kv_heads, self.head_dim)
+ v = F.linear(x, v_w.to(x.dtype))
+ if v_embed is not None:
+ v = v + v_embed
+ v = v.reshape(bsz, seqlen, self.num_kv_heads, self.head_dim)
+ raw_v = v if self.value_residual else None
+ if self.value_residual and v0 is not None:
+ lam = self.vr_lambda.to(dtype=v.dtype)
+ v = lam[0] * v0 + lam[1] * v
+ q = F.rms_norm(q, (q.size(-1),))
+ k = F.rms_norm(k, (k.size(-1),))
+ cos, sin = self.rotary(seqlen, x.device, q.dtype)
+ q = apply_rotary_emb(q, cos, sin, self.rope_dims)
+ k = apply_rotary_emb(k, cos, sin, self.rope_dims)
+ q = q * self.q_gain.to(dtype=q.dtype)[None, None, :, None]
+ y = flash_attn_3_func(q, k, v, causal=True)
+ if self.use_xsa:
+ y = self._xsa_efficient(y, v)
+ if self.gated_attention:
+ # gate shape: (bsz, seqlen, num_heads) -> (bsz, seqlen, num_heads, 1) for B,T,H,D layout
+ gate = torch.sigmoid(self.attn_gate(x)).unsqueeze(-1)
+ y = y * gate
+ y = y.reshape(bsz, seqlen, dim)
+ return F.linear(y, out_w.to(x.dtype)), raw_v
+
+class SmearGate(nn.Module):
+ def __init__(self, dim: int):
+ super().__init__()
+ self.gate = nn.Parameter(torch.zeros(dim, dtype=torch.float32))
+ def forward(self, x: Tensor) -> Tensor:
+ g = torch.sigmoid(self.gate.to(dtype=x.dtype))[None, None, :]
+ x_prev = torch.cat([torch.zeros_like(x[:, :1]), x[:, :-1]], dim=1)
+ return (1 - g) * x + g * x_prev
+
+class BigramHashEmbedding(nn.Module):
+ def __init__(self, bigram_vocab_size: int, bigram_dim: int, model_dim: int):
+ super().__init__()
+ self.bigram_vocab_size = bigram_vocab_size
+ self.embed = nn.Embedding(bigram_vocab_size, bigram_dim)
+ nn.init.zeros_(self.embed.weight)
+ self.proj = CastedLinear(bigram_dim, model_dim, bias=False) if bigram_dim != model_dim else None
+ if self.proj is not None:
+ nn.init.zeros_(self.proj.weight)
+ self.scale = nn.Parameter(torch.tensor(0.05, dtype=torch.float32))
+ def bigram_hash(self, tokens: Tensor) -> Tensor:
+ t = tokens.to(torch.int32)
+ mod = self.bigram_vocab_size - 1
+ out = torch.empty_like(t)
+ out[..., 0] = mod
+ out[..., 1:] = torch.bitwise_xor(36313 * t[..., 1:], 27191 * t[..., :-1]) % mod
+ return out.long()
+ def forward(self, token_ids: Tensor) -> Tensor:
+ h = self.embed(self.bigram_hash(token_ids))
+ if self.proj is not None:
+ h = self.proj(h)
+ return h * self.scale.to(dtype=h.dtype)
+
+class ValueEmbedding(nn.Module):
+ """Reinject token identity into attention values at specific layers.
+ Each table maps vocab tokens to a low-dim embedding, projected to model_dim."""
+ def __init__(self, vocab_size: int, ve_dim: int, model_dim: int):
+ super().__init__()
+ self.embed = nn.Embedding(vocab_size, ve_dim)
+ nn.init.normal_(self.embed.weight, std=0.01)
+ self.proj = CastedLinear(ve_dim, model_dim, bias=False) if ve_dim != model_dim else None
+ if self.proj is not None:
+ nn.init.zeros_(self.proj.weight)
+ self.scale = nn.Parameter(torch.tensor(0.1, dtype=torch.float32))
+ def forward(self, token_ids: Tensor) -> Tensor:
+ h = self.embed(token_ids)
+ if self.proj is not None:
+ h = self.proj(h)
+ return h * self.scale.to(dtype=h.dtype)
+
+class MLP(nn.Module):
+ def __init__(
+ self,
+ dim: int,
+ mlp_mult: int,
+ activation_mode: str = "leaky_relu_sq",
+ activation_neg_slope: float = 0.5,
+ asymmetric_square_init: float = 0.25,
+ gated_square_beta_init: float = 1.0,
+ ):
+ super().__init__()
+ # No CastedLinear -- weights come from banks
+ self.activation_mode = activation_mode
+ self.activation_neg_slope = activation_neg_slope
+ if activation_mode == "asymmetric_square":
+ self.neg_sq_scale = nn.Parameter(torch.tensor(asymmetric_square_init, dtype=torch.float32))
+ else:
+ self.neg_sq_scale = None
+ if activation_mode == "gated_square":
+ self.gated_square_beta = nn.Parameter(torch.tensor(gated_square_beta_init, dtype=torch.float32))
+ else:
+ self.gated_square_beta = None
+ def forward(self, x: Tensor, up_w: Tensor, down_w: Tensor) -> Tensor:
+ u = F.linear(x, up_w.to(x.dtype))
+ if self.activation_mode == "leaky_relu_sq":
+ h = F.leaky_relu(u, negative_slope=self.activation_neg_slope).square()
+ elif self.activation_mode == "asymmetric_square":
+ neg_sq_scale = self.neg_sq_scale.to(dtype=u.dtype).clamp(0.0, 4.0)
+ h = F.relu(u).square() + neg_sq_scale * F.relu(-u).square()
+ elif self.activation_mode == "gated_square":
+ beta = self.gated_square_beta.to(dtype=u.dtype).clamp(0.0, 8.0)
+ h = u.square() * torch.sigmoid(beta * u)
+ elif self.activation_mode == "sign_preserving_square":
+ h = u * u.abs()
+ else:
+ raise ValueError(f"Unknown ACTIVATION_MODE={self.activation_mode}")
+ return F.linear(h, down_w.to(x.dtype))
+
+class Block(nn.Module):
+ def __init__(
+ self,
+ dim: int,
+ num_heads: int,
+ num_kv_heads: int,
+ mlp_mult: int,
+ rope_base: float,
+ qk_gain_init: float,
+ layer_idx: int = 0,
+ ln_scale: bool = False,
+ dtg: bool = False,
+ gated_attention: bool = False,
+ value_residual: bool = False,
+ activation_mode: str = "leaky_relu_sq",
+ activation_neg_slope: float = 0.5,
+ asymmetric_square_init: float = 0.25,
+ gated_square_beta_init: float = 1.0,
+ parallel: bool = False,
+ ):
+ super().__init__()
+ self.attn_norm = RMSNorm()
+ self.mlp_norm = RMSNorm()
+ self.attn = CausalSelfAttention(dim, num_heads, num_kv_heads, rope_base, qk_gain_init,
+ gated_attention=gated_attention, value_residual=value_residual)
+ self.mlp = MLP(
+ dim,
+ mlp_mult,
+ activation_mode=activation_mode,
+ activation_neg_slope=activation_neg_slope,
+ asymmetric_square_init=asymmetric_square_init,
+ gated_square_beta_init=gated_square_beta_init,
+ )
+ self.attn_scale = nn.Parameter(torch.ones(dim, dtype=torch.float32))
+ self.mlp_scale = nn.Parameter(torch.ones(dim, dtype=torch.float32))
+ self.resid_mix = nn.Parameter(torch.stack((torch.ones(dim), torch.zeros(dim))).float())
+ self.ln_scale_factor = 1.0 / math.sqrt(layer_idx + 1) if ln_scale else 1.0
+ self.parallel = parallel
+ if dtg:
+ self.dtg_gate = nn.Linear(dim, 1, bias=True)
+ nn.init.zeros_(self.dtg_gate.weight)
+ nn.init.constant_(self.dtg_gate.bias, 2.0)
+ else:
+ self.dtg_gate = None
+ if parallel:
+ self.resid_mix_mlp = nn.Parameter(torch.stack((torch.ones(dim), torch.zeros(dim))).float())
+ self.route = nn.Parameter(torch.tensor([1.0, 1.0, 1.0, 1.0]))
+ def forward(self, x_attn: Tensor, x_mlp: Tensor, x0: Tensor, q_w: Tensor, k_w: Tensor, v_w: Tensor, out_w: Tensor, up_w: Tensor, down_w: Tensor, v_embed: Tensor | None = None, v0: Tensor | None = None) -> tuple[Tensor, Tensor, Tensor | None]:
+ if not self.parallel:
+ x = x_attn
+ mix = self.resid_mix.to(dtype=x.dtype)
+ x_in = mix[0][None, None, :] * x + mix[1][None, None, :] * x0
+ attn_out, raw_v = self.attn(self.attn_norm(x_in) * self.ln_scale_factor, q_w, k_w, v_w, out_w, v_embed=v_embed, v0=v0)
+ x_out = x_in + self.attn_scale.to(dtype=x_in.dtype)[None, None, :] * attn_out
+ x_out = x_out + self.mlp_scale.to(dtype=x_out.dtype)[None, None, :] * self.mlp(self.mlp_norm(x_out) * self.ln_scale_factor, up_w, down_w)
+ if self.dtg_gate is not None:
+ gate = torch.sigmoid(self.dtg_gate(x_in.detach()))
+ x_out = x_in + gate * (x_out - x_in)
+ return x_out, x_out, raw_v
+ else:
+ r = self.route.to(dtype=x_attn.dtype)
+ mix_attn = self.resid_mix.to(dtype=x_attn.dtype)
+ x_in_attn = mix_attn[0][None, None, :] * x_attn + mix_attn[1][None, None, :] * x0
+ attn_out, raw_v = self.attn(self.attn_norm(x_in_attn) * self.ln_scale_factor, q_w, k_w, v_w, out_w, v_embed=v_embed, v0=v0)
+ attn_delta = self.attn_scale.to(dtype=x_attn.dtype)[None, None, :] * attn_out
+ mix_mlp = self.resid_mix_mlp.to(dtype=x_mlp.dtype)
+ x_in_mlp = mix_mlp[0][None, None, :] * x_mlp + mix_mlp[1][None, None, :] * x0
+ mlp_delta = self.mlp_scale.to(dtype=x_mlp.dtype)[None, None, :] * self.mlp(self.mlp_norm(x_in_mlp) * self.ln_scale_factor, up_w, down_w)
+ x_attn_out = x_attn + r[0] * attn_delta + r[2] * mlp_delta
+ x_mlp_out = x_mlp + r[1] * attn_delta + r[3] * mlp_delta
+ return x_attn_out, x_mlp_out, raw_v
+
+class GPT(nn.Module):
+ def __init__(
+ self,
+ vocab_size: int,
+ num_layers: int,
+ model_dim: int,
+ num_heads: int,
+ num_kv_heads: int,
+ mlp_mult: int,
+ tie_embeddings: bool,
+ tied_embed_init_std: float,
+ logit_softcap: float,
+ rope_base: float,
+ qk_gain_init: float,
+ mtp_num_heads: int = 0,
+ mtp_loss_weight: float = 0.1,
+ bigram_vocab_size: int = 0,
+ bigram_dim: int = 128,
+ xsa_last_n: int = 0,
+ rope_dims: int = 0,
+ ln_scale: bool = False,
+ dtg: bool = False,
+ ve_enabled: bool = False,
+ ve_dim: int = 128,
+ ve_layers: str = "9,10",
+ gated_attention: bool = False,
+ value_residual: bool = False,
+ activation_mode: str = "leaky_relu_sq",
+ activation_neg_slope: float = 0.5,
+ asymmetric_square_init: float = 0.25,
+ gated_square_beta_init: float = 1.0,
+ parallel_start_layer: int = 0,
+ recur_layers: str = "",
+ recur_untie_mlp: bool = True,
+ ):
+ super().__init__()
+ self._ve_target_dim = num_kv_heads * (model_dim // num_heads) # kv_dim for value projection
+ if logit_softcap <= 0.0:
+ raise ValueError(f"logit_softcap must be positive, got {logit_softcap}")
+ self.tie_embeddings = tie_embeddings
+ self.tied_embed_init_std = tied_embed_init_std
+ self.logit_softcap = logit_softcap
+ self.parallel_start_layer = parallel_start_layer
+ # Layer 5 recurrence intentionally happens post-skip (in decoder) per PR #1204
+ self.recur_layer_ids = [int(x) for x in recur_layers.split(",") if x.strip()] if recur_layers else []
+ self.recur_untie_mlp = recur_untie_mlp
+ self.value_residual = value_residual
+ self.mtp_num_heads = mtp_num_heads
+ self.mtp_loss_weight = mtp_loss_weight
+ self.tok_emb = nn.Embedding(vocab_size, model_dim)
+ self.bigram = BigramHashEmbedding(bigram_vocab_size, bigram_dim, model_dim) if bigram_vocab_size > 0 else None
+ self.smear = SmearGate(model_dim)
+ self.num_encoder_layers = num_layers // 2
+ self.num_decoder_layers = num_layers - self.num_encoder_layers
+ self.num_skip_weights = min(self.num_encoder_layers, self.num_decoder_layers)
+ self.skip_weights = nn.Parameter(torch.ones(self.num_skip_weights, model_dim, dtype=torch.float32))
+ assert parallel_start_layer == 0 or parallel_start_layer >= self.num_encoder_layers, \
+ f"parallel_start_layer={parallel_start_layer} inside encoder; skip connections only store attn lane"
+ # Parameter banks: contiguous 3D tensors for batched optimizer
+ head_dim = model_dim // num_heads
+ kv_dim = num_kv_heads * head_dim
+ mlp_dim = int(mlp_mult * model_dim)
+ self.num_layers = num_layers
+ self.qo_bank = nn.Parameter(torch.empty(2 * num_layers, model_dim, model_dim))
+ self.kv_bank = nn.Parameter(torch.empty(2 * num_layers, kv_dim, model_dim))
+ self.mlp_up_bank = nn.Parameter(torch.empty(num_layers, mlp_dim, model_dim))
+ self.mlp_down_bank = nn.Parameter(torch.empty(num_layers, model_dim, mlp_dim))
+ self.blocks = nn.ModuleList(
+ [
+ Block(
+ model_dim,
+ num_heads,
+ num_kv_heads,
+ mlp_mult,
+ rope_base,
+ qk_gain_init,
+ layer_idx=i,
+ ln_scale=ln_scale,
+ dtg=dtg,
+ gated_attention=gated_attention,
+ value_residual=value_residual,
+ activation_mode=activation_mode,
+ activation_neg_slope=activation_neg_slope,
+ asymmetric_square_init=asymmetric_square_init,
+ gated_square_beta_init=gated_square_beta_init,
+ parallel=(parallel_start_layer > 0 and i >= parallel_start_layer),
+ )
+ for i in range(num_layers)
+ ]
+ )
+ if rope_dims > 0:
+ head_dim = model_dim // num_heads
+ for block in self.blocks:
+ block.attn.rope_dims = rope_dims
+ block.attn.rotary = Rotary(head_dim, base=rope_base, train_seq_len=1024, rope_dims=rope_dims)
+ self.ve_layer_indices = [int(x) for x in ve_layers.split(",") if x.strip()] if ve_enabled else []
+ kv_dim_ve = self._ve_target_dim
+ if self.ve_layer_indices:
+ self.ve_shared = ValueEmbedding(vocab_size, ve_dim, kv_dim_ve)
+ self.ve_layer_scales = nn.ParameterList(
+ [nn.Parameter(torch.ones(1, dtype=torch.float32)) for _ in self.ve_layer_indices]
+ )
+ else:
+ self.ve_shared = None
+ self.ve_layer_scales = nn.ParameterList()
+ self.value_embeds = nn.ModuleList() # keep empty for compat
+ self.final_norm = RMSNorm()
+ self.lm_head = None if tie_embeddings else CastedLinear(model_dim, vocab_size, bias=False)
+ if self.lm_head is not None:
+ self.lm_head._zero_init = True
+ self.mtp_heads = nn.ModuleList(
+ [CastedLinear(model_dim, vocab_size, bias=False) for _ in range(mtp_num_heads)]
+ )
+ for head in self.mtp_heads:
+ head._zero_init = True
+ if xsa_last_n > 0:
+ for i in range(max(0, num_layers - xsa_last_n), num_layers):
+ self.blocks[i].attn.use_xsa = True
+ # Mini depth recurrence: untied MLP banks for repeated layers
+ if self.recur_layer_ids and recur_untie_mlp:
+ self.recur_mlp_up = nn.ParameterDict()
+ self.recur_mlp_down = nn.ParameterDict()
+ for lid in self.recur_layer_ids:
+ self.recur_mlp_up[str(lid)] = nn.Parameter(torch.empty(mlp_dim, model_dim))
+ self.recur_mlp_down[str(lid)] = nn.Parameter(torch.empty(model_dim, mlp_dim))
+ nn.init.orthogonal_(self.recur_mlp_up[str(lid)])
+ nn.init.zeros_(self.recur_mlp_down[str(lid)])
+ else:
+ self.recur_mlp_up = None
+ self.recur_mlp_down = None
+ # Learnable lane merge (only when parallel is enabled)
+ if parallel_start_layer > 0:
+ self.lane_merge = nn.Parameter(torch.tensor(0.5))
+ else:
+ self.lane_merge = None
+ self._init_weights()
+ def _init_weights(self) -> None:
+ if self.tie_embeddings:
+ nn.init.normal_(self.tok_emb.weight, mean=0.0, std=self.tied_embed_init_std)
+ n = self.num_layers
+ proj_scale = 1.0 / math.sqrt(2 * n)
+ # Init banks: orthogonal, with proj layers scaled down and out/down zero-init
+ for i in range(n):
+ nn.init.orthogonal_(self.qo_bank.data[i], gain=1.0) # Q
+ nn.init.zeros_(self.qo_bank.data[n + i]) # Out (zero init)
+ nn.init.orthogonal_(self.kv_bank.data[i], gain=1.0) # K
+ nn.init.orthogonal_(self.kv_bank.data[n + i], gain=1.0) # V
+ nn.init.orthogonal_(self.mlp_up_bank.data[i], gain=1.0) # MLP up
+ nn.init.zeros_(self.mlp_down_bank.data[i]) # MLP down (zero init)
+ # Scale proj layers (out_proj and mlp_down are "proj" layers)
+ self.qo_bank.data[n + i].mul_(proj_scale)
+ self.mlp_down_bank.data[i].mul_(proj_scale)
+ # Init remaining nn.Linear modules (bigram proj, mtp heads, lm_head)
+ for name, module in self.named_modules():
+ if isinstance(module, nn.Linear):
+ if getattr(module, "_zero_init", False):
+ nn.init.zeros_(module.weight)
+ elif module.weight.ndim == 2 and module.weight.shape[0] >= 64 and module.weight.shape[1] >= 64:
+ nn.init.orthogonal_(module.weight, gain=1.0)
+ def _get_ve(self, layer_idx: int, input_ids: Tensor, ve_cache: dict | None = None) -> Tensor | None:
+ """Get value embedding for a specific layer using shared table + per-layer scale."""
+ if self.ve_shared is None or layer_idx not in self.ve_layer_indices:
+ return None
+ if ve_cache is not None and 've' not in ve_cache:
+ ve_cache['ve'] = self.ve_shared(input_ids)
+ ve_base = ve_cache['ve'] if ve_cache is not None else self.ve_shared(input_ids)
+ ve_idx = self.ve_layer_indices.index(layer_idx)
+ return ve_base * self.ve_layer_scales[ve_idx].to(dtype=ve_base.dtype)
+ def _run_block(self, block, i, x_attn, x_mlp, x0, input_ids, ve_cache, v0, up_w=None, down_w=None):
+ """Run a single block with the correct bank weights."""
+ n = self.num_layers
+ if up_w is None:
+ up_w = self.mlp_up_bank[i]
+ if down_w is None:
+ down_w = self.mlp_down_bank[i]
+ ve = self._get_ve(i, input_ids, ve_cache)
+ return block(
+ x_attn, x_mlp, x0,
+ self.qo_bank[i], self.kv_bank[i], self.kv_bank[n + i],
+ self.qo_bank[n + i], up_w, down_w,
+ v_embed=ve, v0=v0,
+ )
+ def _run_layers(self, input_ids: Tensor) -> Tensor:
+ """Run embeddings + encoder + decoder + lane merge + final norm."""
+ n = self.num_layers
+ x = self.tok_emb(input_ids)
+ if self.bigram is not None:
+ x = x + self.bigram(input_ids)
+ x = F.rms_norm(x, (x.size(-1),))
+ x = self.smear(x)
+ x0 = x
+ x_attn = x
+ x_mlp = x
+ v0 = None
+ skips: list[Tensor] = []
+ ve_cache: dict = {}
+ # ENCODER
+ for i in range(self.num_encoder_layers):
+ x_attn, x_mlp, raw_v = self._run_block(self.blocks[i], i, x_attn, x_mlp, x0, input_ids, ve_cache, v0)
+ if v0 is None and raw_v is not None:
+ v0 = raw_v
+ if i in self.recur_layer_ids:
+ up_w = self.recur_mlp_up[str(i)] if self.recur_mlp_up else self.mlp_up_bank[i]
+ down_w = self.recur_mlp_down[str(i)] if self.recur_mlp_down else self.mlp_down_bank[i]
+ x_attn, x_mlp, _ = self._run_block(self.blocks[i], i, x_attn, x_mlp, x0, input_ids, ve_cache, v0,
+ up_w=up_w, down_w=down_w)
+ skips.append(x_attn)
+ # DECODER
+ for i in range(self.num_decoder_layers):
+ bi = self.num_encoder_layers + i
+ if skips:
+ skip_val = self.skip_weights[i].to(dtype=x_attn.dtype)[None, None, :] * skips.pop()
+ x_attn = x_attn + skip_val
+ if self.blocks[bi].parallel:
+ x_mlp = x_mlp + skip_val
+ x_attn, x_mlp, _ = self._run_block(self.blocks[bi], bi, x_attn, x_mlp, x0, input_ids, ve_cache, v0)
+ # Depth recurrence in decoder — layer 5 post-skip is intentional per PR #1204
+ if bi in self.recur_layer_ids:
+ up_w = self.recur_mlp_up[str(bi)] if self.recur_mlp_up else self.mlp_up_bank[bi]
+ down_w = self.recur_mlp_down[str(bi)] if self.recur_mlp_down else self.mlp_down_bank[bi]
+ x_attn, x_mlp, _ = self._run_block(self.blocks[bi], bi, x_attn, x_mlp, x0, input_ids, ve_cache, v0,
+ up_w=up_w, down_w=down_w)
+ # Learnable lane merge
+ if self.lane_merge is not None:
+ m = self.lane_merge.to(dtype=x_attn.dtype)
+ x = m * x_attn + (1.0 - m) * x_mlp
+ else:
+ x = x_attn
+ return self.final_norm(x)
+ def forward(self, input_ids: Tensor, target_ids: Tensor) -> Tensor:
+ x = self._run_layers(input_ids)
+ x_flat = x.reshape(-1, x.size(-1))
+ targets = target_ids.reshape(-1)
+ if self.tie_embeddings:
+ logits_proj = F.linear(x_flat, self.tok_emb.weight)
+ else:
+ if self.lm_head is None:
+ raise RuntimeError("lm_head is required when tie_embeddings=False")
+ logits_proj = self.lm_head(x_flat)
+ logits = self.logit_softcap * torch.tanh(logits_proj / self.logit_softcap)
+ main_loss = F.cross_entropy(logits.float(), targets, reduction="mean")
+ if self.training and self.mtp_num_heads > 0 and self.mtp_loss_weight > 0.0:
+ _, seqlen, dim = x.shape
+ mtp_loss_sum = x.new_zeros(())
+ mtp_loss_count = 0
+ for k, mtp_head in enumerate(self.mtp_heads):
+ valid_t = seqlen - (k + 1)
+ if valid_t <= 0:
+ continue
+ mtp_hidden = x[:, :valid_t, :].reshape(-1, dim)
+ mtp_targets = target_ids[:, k + 1 :].reshape(-1)
+ mtp_logits_proj = mtp_head(mtp_hidden)
+ mtp_logits = self.logit_softcap * torch.tanh(mtp_logits_proj / self.logit_softcap)
+ mtp_loss_sum = mtp_loss_sum + F.cross_entropy(mtp_logits.float(), mtp_targets, reduction="mean")
+ mtp_loss_count += 1
+ if mtp_loss_count > 0:
+ main_loss = main_loss + self.mtp_loss_weight * (mtp_loss_sum / mtp_loss_count)
+ return main_loss
+ def forward_logits(self, input_ids: Tensor) -> Tensor:
+ """Return logits (bsz, seq_len, vocab) without computing loss."""
+ x = self._run_layers(input_ids)
+ if self.tie_embeddings:
+ logits_proj = F.linear(x, self.tok_emb.weight)
+ else:
+ logits_proj = self.lm_head(x)
+ return self.logit_softcap * torch.tanh(logits_proj / self.logit_softcap)
+
+# --- Sliding window evaluation ---
+
+def eval_val_sliding(
+ args: Hyperparameters,
+ base_model: nn.Module,
+ rank: int,
+ world_size: int,
+ device: torch.device,
+ val_tokens: Tensor,
+ base_bytes_lut: Tensor,
+ has_leading_space_lut: Tensor,
+ is_boundary_token_lut: Tensor,
+ stride: int,
+ batch_seqs: int = 32,
+ eval_seq_len: int | None = None,
+) -> tuple[float, float]:
+ """Sliding window evaluation: each token scored with maximum context."""
+ seq_len = eval_seq_len or args.train_seq_len
+ total_tokens = val_tokens.numel() - 1
+ window_starts = [ws for ws in range(0, total_tokens, stride)
+ if min(ws + seq_len, total_tokens) - ws >= 1]
+ total_windows = len(window_starts)
+ my_s = (total_windows * rank) // world_size
+ my_e = (total_windows * (rank + 1)) // world_size
+ my_windows = window_starts[my_s:my_e]
+ loss_sum = torch.zeros((), device=device, dtype=torch.float64)
+ token_count = torch.zeros((), device=device, dtype=torch.float64)
+ byte_count = torch.zeros((), device=device, dtype=torch.float64)
+ base_model.eval()
+ compiled_logits = torch.compile(base_model.forward_logits, dynamic=False, fullgraph=True)
+ with torch.inference_mode():
+ for bi in range(0, len(my_windows), batch_seqs):
+ batch_ws = my_windows[bi:bi + batch_seqs]
+ bsz = len(batch_ws)
+ x_batch = torch.zeros(bsz, seq_len, dtype=torch.int64, device=device)
+ y_batch = torch.zeros(bsz, seq_len, dtype=torch.int64, device=device)
+ wlens: list[int] = []
+ for i, ws in enumerate(batch_ws):
+ end = min(ws + seq_len, total_tokens)
+ wlen = end - ws
+ wlens.append(wlen)
+ chunk = val_tokens[ws:end + 1].to(dtype=torch.int64, device=device)
+ x_batch[i, :wlen] = chunk[:-1]
+ y_batch[i, :wlen] = chunk[1:]
+ with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
+ logits = compiled_logits(x_batch)
+ nll = F.cross_entropy(
+ logits.reshape(-1, logits.size(-1)).float(),
+ y_batch.reshape(-1),
+ reduction="none",
+ ).reshape(bsz, seq_len)
+ for i, ws in enumerate(batch_ws):
+ wlen = wlens[i]
+ s = 0 if ws == 0 else max(wlen - stride, 0)
+ scored_nll = nll[i, s:wlen].to(torch.float64)
+ loss_sum += scored_nll.sum()
+ token_count += float(wlen - s)
+ tgt = y_batch[i, s:wlen]
+ prev = x_batch[i, s:wlen]
+ tb = base_bytes_lut[tgt].to(torch.float64)
+ tb += (has_leading_space_lut[tgt] & ~is_boundary_token_lut[prev]).to(torch.float64)
+ byte_count += tb.sum()
+ if dist.is_available() and dist.is_initialized():
+ dist.all_reduce(loss_sum, op=dist.ReduceOp.SUM)
+ dist.all_reduce(token_count, op=dist.ReduceOp.SUM)
+ dist.all_reduce(byte_count, op=dist.ReduceOp.SUM)
+ val_loss = (loss_sum / token_count).item()
+ bits_per_token = val_loss / math.log(2.0)
+ tokens_per_byte = token_count.item() / byte_count.item()
+ base_model.train()
+ return val_loss, bits_per_token * tokens_per_byte
+
+
+def eval_val_sliding_ttt(
+ args: Hyperparameters, base_model: nn.Module, rank: int, world_size: int,
+ device: torch.device, val_tokens: Tensor, base_bytes_lut: Tensor,
+ has_leading_space_lut: Tensor, is_boundary_token_lut: Tensor,
+ stride: int, batch_seqs: int = 32, log0=print,
+) -> tuple[float, float]:
+ """Legal score-first TTT (PR #461 recipe): score each chunk with sliding windows,
+ then train on it. Every token scored BEFORE any update that could use it."""
+ seq_len = args.train_seq_len
+ total_tokens = val_tokens.numel() - 1
+ ttt_chunk = args.ttt_chunk_tokens
+
+ # Pre-compute all window starts
+ window_starts = [ws for ws in range(0, total_tokens, stride)
+ if min(ws + seq_len, total_tokens) - ws >= stride or ws == 0]
+
+ # Assign each window to a chunk based on the first token it scores
+ num_chunks = (total_tokens + ttt_chunk - 1) // ttt_chunk
+ chunk_windows: list[list[int]] = [[] for _ in range(num_chunks)]
+ for ws in window_starts:
+ end = min(ws + seq_len, total_tokens)
+ wlen = end - ws
+ s = 0 if ws == 0 else max(wlen - stride, 0)
+ scored_start = ws + s
+ ci = min(scored_start // ttt_chunk, num_chunks - 1)
+ chunk_windows[ci].append(ws)
+
+ log0(f"ttt_sliding:start chunks={num_chunks} chunk_tokens={ttt_chunk} "
+ f"total_windows={len(window_starts)} stride={stride} "
+ f"ttt_lr={args.ttt_lr} ttt_epochs={args.ttt_epochs} "
+ f"freeze_blocks={args.ttt_freeze_blocks}")
+
+ loss_sum = torch.zeros((), device=device, dtype=torch.float64)
+ token_count = torch.zeros((), device=device, dtype=torch.float64)
+ byte_count = torch.zeros((), device=device, dtype=torch.float64)
+
+ # Freeze first N blocks
+ frozen_block_ids = set(range(min(args.ttt_freeze_blocks, len(base_model.blocks))))
+ ttt_params = []
+ for name, p in base_model.named_parameters():
+ freeze = False
+ for bi in frozen_block_ids:
+ if f"blocks.{bi}." in name:
+ freeze = True
+ break
+ if freeze:
+ p.requires_grad_(False)
+ else:
+ p.requires_grad_(True)
+ ttt_params.append(p)
+
+ log0(f"ttt_sliding:params unfrozen={sum(p.numel() for p in ttt_params)} "
+ f"frozen={sum(p.numel() for p in base_model.parameters() if not p.requires_grad)}")
+
+ optimizer = torch.optim.SGD(ttt_params, lr=args.ttt_lr, momentum=args.ttt_momentum)
+ t0 = time.perf_counter()
+
+ for ci in range(num_chunks):
+ windows = chunk_windows[ci]
+ if not windows:
+ continue
+ chunk_start = ci * ttt_chunk
+ chunk_end = min((ci + 1) * ttt_chunk, total_tokens)
+
+ # --- Phase 1: SCORE this chunk's windows (inference_mode) ---
+ my_s = (len(windows) * rank) // world_size
+ my_e = (len(windows) * (rank + 1)) // world_size
+ my_windows = windows[my_s:my_e]
+
+ base_model.eval()
+ with torch.inference_mode():
+ for bi in range(0, len(my_windows), batch_seqs):
+ batch_ws = my_windows[bi:bi + batch_seqs]
+ bsz = len(batch_ws)
+ x_batch = torch.zeros(bsz, seq_len, dtype=torch.int64, device=device)
+ y_batch = torch.zeros(bsz, seq_len, dtype=torch.int64, device=device)
+ wlens: list[int] = []
+ for i, ws in enumerate(batch_ws):
+ end = min(ws + seq_len, total_tokens)
+ wlen = end - ws
+ wlens.append(wlen)
+ chunk_tok = val_tokens[ws:end + 1].to(dtype=torch.int64, device=device)
+ x_batch[i, :wlen] = chunk_tok[:-1]
+ y_batch[i, :wlen] = chunk_tok[1:]
+ with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
+ logits = base_model.forward_logits(x_batch)
+ nll = F.cross_entropy(
+ logits.reshape(-1, logits.size(-1)).float(),
+ y_batch.reshape(-1), reduction="none",
+ ).reshape(bsz, seq_len)
+ for i, ws in enumerate(batch_ws):
+ wlen = wlens[i]
+ s = 0 if ws == 0 else max(wlen - stride, 0)
+ scored_nll = nll[i, s:wlen].to(torch.float64)
+ loss_sum += scored_nll.sum()
+ token_count += float(wlen - s)
+ tgt, prev = y_batch[i, s:wlen], x_batch[i, s:wlen]
+ tb = base_bytes_lut[tgt].to(torch.float64)
+ tb += (has_leading_space_lut[tgt] & ~is_boundary_token_lut[prev]).to(torch.float64)
+ byte_count += tb.sum()
+
+ # --- Phase 2: TRAIN on this chunk (already scored = legal) ---
+ is_last_chunk = (ci == num_chunks - 1)
+ if not is_last_chunk and args.ttt_epochs > 0:
+ base_model.train()
+ chunk_seqs = (chunk_end - chunk_start) // seq_len
+ if chunk_seqs > 0:
+ cos_lr = args.ttt_lr * 0.5 * (1.0 + math.cos(math.pi * ci / max(num_chunks - 1, 1)))
+ for pg in optimizer.param_groups:
+ pg['lr'] = cos_lr
+ my_seq_s = (chunk_seqs * rank) // world_size
+ my_seq_e = (chunk_seqs * (rank + 1)) // world_size
+ my_chunk_seqs = my_seq_e - my_seq_s
+ for _ep in range(args.ttt_epochs):
+ for bs in range(0, my_chunk_seqs, args.ttt_batch_seqs):
+ be = min(bs + args.ttt_batch_seqs, my_chunk_seqs)
+ actual_bs = my_seq_s + bs
+ start_tok = chunk_start + actual_bs * seq_len
+ end_tok = chunk_start + (my_seq_s + be) * seq_len + 1
+ if end_tok > val_tokens.numel():
+ continue
+ local = val_tokens[start_tok:end_tok].to(device=device, dtype=torch.int64)
+ x = local[:-1].reshape(-1, seq_len)
+ y = local[1:].reshape(-1, seq_len)
+ optimizer.zero_grad(set_to_none=True)
+ with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
+ loss = base_model(x, y)
+ loss.backward()
+ if world_size > 1:
+ for p in ttt_params:
+ if p.grad is not None:
+ dist.all_reduce(p.grad, op=dist.ReduceOp.AVG)
+ torch.nn.utils.clip_grad_norm_(ttt_params, args.ttt_grad_clip)
+ optimizer.step()
+
+ if rank == 0 and (ci % 10 == 0 or ci == num_chunks - 1):
+ elapsed = time.perf_counter() - t0
+ rl = loss_sum.item() / max(token_count.item(), 1)
+ rbpb = rl / math.log(2.0) * (token_count.item() / max(byte_count.item(), 1)) if token_count.item() > 0 else 0.0
+ log0(f" ttt_chunk [{ci+1}/{num_chunks}] bpb={rbpb:.6f} time={elapsed:.1f}s")
+
+ if dist.is_available() and dist.is_initialized():
+ dist.all_reduce(loss_sum, op=dist.ReduceOp.SUM)
+ dist.all_reduce(token_count, op=dist.ReduceOp.SUM)
+ dist.all_reduce(byte_count, op=dist.ReduceOp.SUM)
+
+ val_loss = (loss_sum / token_count).item()
+ val_bpb = val_loss / math.log(2.0) * (token_count.item() / byte_count.item())
+
+ for p in base_model.parameters():
+ p.requires_grad_(True)
+ base_model.eval()
+
+ log0(f"ttt_sliding:done val_loss={val_loss:.6f} val_bpb={val_bpb:.6f} "
+ f"elapsed={time.perf_counter() - t0:.1f}s")
+ return val_loss, val_bpb
+
+
+# === N-GRAM EVAL CACHE + TWO-PASS RESCORE ===
+
+_NGRAM_PRIMES = np.array([
+ 36313, 27191, 51647, 81929, 131071, 174763, 233017, 283721,
+ 347237, 411527, 479909, 557927, 646333, 746773, 862319, 992353,
+], dtype=np.int64)
+
+# Per-order multipliers: orders 2-3 suppressed, 4 near-neutral, 5-12 boosted
+_ORDER_MULTS = np.array([
+ 0.30, 0.30, 0.97, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0,
+], dtype=np.float32)
+
+
+class NgramCache:
+ """Hash-table n-gram cache with vectorized numpy operations."""
+
+ def __init__(self, min_order: int = 2, max_order: int = 16,
+ num_buckets: int = 16_777_216):
+ self.min_order = min_order
+ self.max_order = max_order
+ self.num_orders = max_order - min_order + 1
+ self.num_buckets = num_buckets
+ self.bucket_mask = np.int64(num_buckets - 1)
+ # Two flat hash tables per order: context counts and full (context+target) counts
+ self.ctx_tables = [np.zeros(num_buckets, dtype=np.int32) for _ in range(self.num_orders)]
+ self.full_tables = [np.zeros(num_buckets, dtype=np.int32) for _ in range(self.num_orders)]
+
+ def _compute_hashes(self, tokens_np: np.ndarray, start: int, end: int, order_idx: int):
+ """Compute context and full hashes for positions [start, end) at given order."""
+ n = self.min_order + order_idx
+ valid_start = max(start, n - 1)
+ N = end - valid_start
+ if N <= 0:
+ return None, None, valid_start
+ # Context hash: XOR of tokens[pos-n+1+k] * primes[k] for k=0..n-2
+ h = np.zeros(N, dtype=np.int64)
+ for k in range(n - 1):
+ offset = valid_start - (n - 1) + k
+ h ^= tokens_np[offset:offset + N].astype(np.int64) * _NGRAM_PRIMES[k % len(_NGRAM_PRIMES)]
+ ctx_h = h & self.bucket_mask
+ # Full hash: context + target token
+ target_prime = _NGRAM_PRIMES[min(n - 1, len(_NGRAM_PRIMES) - 1)]
+ full_h = (h ^ (tokens_np[valid_start:end].astype(np.int64) * target_prime)) & self.bucket_mask
+ return ctx_h, full_h, valid_start
+
+ def _bincount_add(self, table: np.ndarray, indices: np.ndarray):
+ """Fast histogram accumulation using np.bincount (much faster than np.add.at)."""
+ counts = np.bincount(indices.astype(np.intp), minlength=self.num_buckets)
+ table += counts[:self.num_buckets].astype(table.dtype)
+
+ def update_range(self, tokens_np: np.ndarray, start: int, end: int):
+ """Add tokens[start:end] to the cache for all orders."""
+ for oi in range(self.num_orders):
+ ctx_h, full_h, vs = self._compute_hashes(tokens_np, start, end, oi)
+ if ctx_h is None:
+ continue
+ self._bincount_add(self.ctx_tables[oi], ctx_h)
+ self._bincount_add(self.full_tables[oi], full_h)
+
+ def build_full(self, tokens_np: np.ndarray):
+ """Build complete cache from entire token sequence (vectorized)."""
+ for oi in range(self.num_orders):
+ ctx_h, full_h, _ = self._compute_hashes(tokens_np, 0, len(tokens_np), oi)
+ if ctx_h is None:
+ continue
+ self._bincount_add(self.ctx_tables[oi], ctx_h)
+ self._bincount_add(self.full_tables[oi], full_h)
+
+ def score_range(self, tokens_np: np.ndarray, start: int, end: int,
+ min_count: int = 2):
+ """Score tokens[start:end] against the cache.
+
+ Returns:
+ ngram_prob: (N,) float32 - n-gram probability for the true target token
+ matched_order: (N,) int32 - which order matched (-1 = no match)
+ """
+ N = end - start
+ ngram_prob = np.zeros(N, dtype=np.float32)
+ matched_order = np.full(N, -1, dtype=np.int32)
+ matched = np.zeros(N, dtype=bool)
+
+ # Backoff from highest to lowest order
+ for oi in range(self.num_orders - 1, -1, -1):
+ n = self.min_order + oi
+ ctx_h, full_h, vs = self._compute_hashes(tokens_np, start, end, oi)
+ if ctx_h is None:
+ continue
+ offset = vs - start
+ ctx_counts = self.ctx_tables[oi][ctx_h]
+ full_counts = self.full_tables[oi][full_h]
+ # Cap full counts to context counts (hash collision mitigation)
+ full_counts = np.minimum(full_counts, ctx_counts)
+ # Only match when: sufficient context, target has been seen, not already matched
+ eligible = (ctx_counts >= min_count) & (full_counts > 0) & ~matched[offset:]
+ if not np.any(eligible):
+ continue
+ prob = full_counts[eligible].astype(np.float32) / np.maximum(ctx_counts[eligible].astype(np.float32), 1.0)
+ # Find which positions in the output array to fill
+ out_idx = np.where(eligible)[0] + offset
+ ngram_prob[out_idx] = prob
+ matched_order[out_idx] = n
+ matched[out_idx] = True
+
+ return ngram_prob, matched_order
+
+ def score_positions(self, tokens_np: np.ndarray, positions: np.ndarray,
+ min_count: int = 2, leave_one_out: bool = False):
+ """Score selected token positions against the cache.
+
+ If leave_one_out is enabled, subtract this token's own contribution from
+ both context and (context,target) counts before matching.
+ """
+ N = len(positions)
+ ngram_prob = np.zeros(N, dtype=np.float32)
+ matched_order = np.full(N, -1, dtype=np.int32)
+ matched = np.zeros(N, dtype=bool)
+ if N == 0:
+ return ngram_prob, matched_order
+
+ positions = positions.astype(np.int64, copy=False)
+ for oi in range(self.num_orders - 1, -1, -1):
+ n = self.min_order + oi
+ ctx_h_all, full_h_all, valid_start = self._compute_hashes(tokens_np, 0, len(tokens_np), oi)
+ if ctx_h_all is None:
+ continue
+
+ remaining_idx = np.where(~matched)[0]
+ if remaining_idx.size == 0:
+ break
+ pos_sub = positions[remaining_idx]
+ valid_mask = pos_sub >= valid_start
+ if not np.any(valid_mask):
+ continue
+
+ valid_idx = remaining_idx[valid_mask]
+ lookup = (pos_sub[valid_mask] - valid_start).astype(np.int64)
+ ctx_h = ctx_h_all[lookup]
+ full_h = full_h_all[lookup]
+
+ ctx_counts = self.ctx_tables[oi][ctx_h].astype(np.int64)
+ full_counts = self.full_tables[oi][full_h].astype(np.int64)
+ if leave_one_out:
+ ctx_counts = np.maximum(ctx_counts - 1, 0)
+ full_counts = np.maximum(full_counts - 1, 0)
+ full_counts = np.minimum(full_counts, ctx_counts)
+
+ eligible = (ctx_counts >= min_count) & (full_counts > 0)
+ if not np.any(eligible):
+ continue
+
+ out_idx = valid_idx[eligible]
+ prob = full_counts[eligible].astype(np.float32) / np.maximum(ctx_counts[eligible].astype(np.float32), 1.0)
+ ngram_prob[out_idx] = prob
+ matched_order[out_idx] = n
+ matched[out_idx] = True
+
+ return ngram_prob, matched_order
+
+
+def eval_val_sliding_store(
+ args: Hyperparameters, base_model: nn.Module, rank: int, world_size: int,
+ device: torch.device, val_tokens: Tensor, base_bytes_lut: Tensor,
+ has_leading_space_lut: Tensor, is_boundary_token_lut: Tensor,
+ stride: int, batch_seqs: int = 32, log0=print,
+) -> tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, float, float]:
+ """Sliding-window eval that stores per-token model_p and entropy.
+
+ Returns: (model_p, entropy, token_bytes, token_targets, val_loss, val_bpb)
+ where model_p and entropy are arrays covering this rank's scored tokens,
+ and val_loss/val_bpb are the standard (un-blended) metrics.
+
+ Also returns global-offset index arrays for mapping back to token positions.
+ """
+ seq_len = args.train_seq_len
+ total_tokens = val_tokens.numel() - 1
+ window_starts = [ws for ws in range(0, total_tokens, stride)
+ if min(ws + seq_len, total_tokens) - ws >= 1]
+ total_windows = len(window_starts)
+ my_s = (total_windows * rank) // world_size
+ my_e = (total_windows * (rank + 1)) // world_size
+ my_windows = window_starts[my_s:my_e]
+
+ # Pre-allocate per-token storage (we'll trim later)
+ # Each token is scored in exactly one window
+ model_p_list: list[np.ndarray] = []
+ entropy_list: list[np.ndarray] = []
+ bytes_list: list[np.ndarray] = []
+ position_list: list[np.ndarray] = [] # global target-token positions
+ nll_list: list[np.ndarray] = []
+
+ base_model.eval()
+ compiled_logits = torch.compile(base_model.forward_logits, dynamic=False, fullgraph=True)
+ with torch.inference_mode():
+ for bi in range(0, len(my_windows), batch_seqs):
+ batch_ws = my_windows[bi:bi + batch_seqs]
+ bsz = len(batch_ws)
+ x_batch = torch.zeros(bsz, seq_len, dtype=torch.int64, device=device)
+ y_batch = torch.zeros(bsz, seq_len, dtype=torch.int64, device=device)
+ wlens: list[int] = []
+ for i, ws in enumerate(batch_ws):
+ end_pos = min(ws + seq_len, total_tokens)
+ wlen = end_pos - ws
+ wlens.append(wlen)
+ chunk = val_tokens[ws:end_pos + 1].to(dtype=torch.int64, device=device)
+ x_batch[i, :wlen] = chunk[:-1]
+ y_batch[i, :wlen] = chunk[1:]
+ with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
+ logits = compiled_logits(x_batch) # (bsz, seq_len, vocab_size)
+ # Compute per-token quantities
+ logits_f = logits.float()
+ log_probs = F.log_softmax(logits_f, dim=-1) # (bsz, seq_len, V)
+ probs = log_probs.exp()
+ # NLL for each token
+ nll_all = F.cross_entropy(
+ logits_f.reshape(-1, logits_f.size(-1)),
+ y_batch.reshape(-1), reduction="none"
+ ).reshape(bsz, seq_len)
+ # Model probability of true token
+ mp = probs.gather(2, y_batch.unsqueeze(-1)).squeeze(-1) # (bsz, seq_len)
+ # Entropy of model distribution
+ ent = -(probs * log_probs).sum(dim=-1) # (bsz, seq_len)
+
+ for i, ws in enumerate(batch_ws):
+ wlen = wlens[i]
+ s = 0 if ws == 0 else max(wlen - stride, 0)
+ # Positions are TARGET token indices in val_tokens (ws+j+1 for scored position j)
+ positions = np.arange(ws + s + 1, ws + wlen + 1, dtype=np.int64)
+ position_list.append(positions)
+ model_p_list.append(mp[i, s:wlen].cpu().numpy().astype(np.float32))
+ entropy_list.append(ent[i, s:wlen].cpu().numpy().astype(np.float32))
+ nll_list.append(nll_all[i, s:wlen].cpu().numpy().astype(np.float64))
+ tgt = y_batch[i, s:wlen]
+ prev = x_batch[i, s:wlen]
+ tb = base_bytes_lut[tgt].to(torch.float64)
+ tb += (has_leading_space_lut[tgt] & ~is_boundary_token_lut[prev]).to(torch.float64)
+ bytes_list.append(tb.cpu().numpy())
+
+ all_positions = np.concatenate(position_list) if position_list else np.array([], dtype=np.int64)
+ all_model_p = np.concatenate(model_p_list) if model_p_list else np.array([], dtype=np.float32)
+ all_entropy = np.concatenate(entropy_list) if entropy_list else np.array([], dtype=np.float32)
+ all_nll = np.concatenate(nll_list) if nll_list else np.array([], dtype=np.float64)
+ all_bytes = np.concatenate(bytes_list) if bytes_list else np.array([], dtype=np.float64)
+
+
+ # Compute standard (un-blended) BPB for this rank
+ local_loss_sum = all_nll.sum()
+ local_token_count = float(len(all_nll))
+ local_byte_count = all_bytes.sum()
+
+ # All-reduce for standard BPB
+ loss_sum_t = torch.tensor(local_loss_sum, device=device, dtype=torch.float64)
+ token_count_t = torch.tensor(local_token_count, device=device, dtype=torch.float64)
+ byte_count_t = torch.tensor(local_byte_count, device=device, dtype=torch.float64)
+ if dist.is_available() and dist.is_initialized():
+ dist.all_reduce(loss_sum_t, op=dist.ReduceOp.SUM)
+ dist.all_reduce(token_count_t, op=dist.ReduceOp.SUM)
+ dist.all_reduce(byte_count_t, op=dist.ReduceOp.SUM)
+ val_loss = (loss_sum_t / token_count_t).item()
+ val_bpb = val_loss / math.log(2.0) * (token_count_t.item() / byte_count_t.item())
+
+ base_model.train()
+ return all_model_p, all_entropy, all_bytes, all_positions, val_loss, val_bpb
+
+
+def ngram_rescore(
+ args: Hyperparameters,
+ tokens_np: np.ndarray,
+ cache: NgramCache,
+ model_p: np.ndarray,
+ entropy: np.ndarray,
+ token_bytes: np.ndarray,
+ positions: np.ndarray,
+ rank: int, world_size: int, device: torch.device,
+ log0=print,
+) -> tuple[float, float]:
+ """Rescore tokens using n-gram cache blended with stored neural model_p.
+
+ This is Pass 2: the cache is already complete.
+ Returns: (val_loss, val_bpb)
+ """
+ N = len(positions)
+ if N == 0:
+ return 0.0, 0.0
+
+ # Score all of this rank's positions against the full cache
+ # We need to score at the GLOBAL token positions
+ # The cache.score_range expects contiguous ranges, but our positions may be sparse
+ # Instead, we score the full range and index into it
+ # Actually, positions are sorted (from sliding windows), so we can score chunks
+
+ ngram_prob, matched_order = cache.score_positions(
+ tokens_np,
+ positions,
+ min_count=args.ngram_min_count,
+ leave_one_out=args.ngram_leave_one_out,
+ )
+ matched = matched_order >= 0
+
+ # Entropy-adaptive alpha with per-order multipliers
+ alpha = np.zeros(N, dtype=np.float32)
+ if np.any(matched):
+ order_idx = (matched_order[matched] - cache.min_order).astype(np.int32)
+ centers = args.ngram_entropy_center - 0.25 * order_idx.astype(np.float32)
+ sig = 1.0 / (1.0 + np.exp(-args.ngram_entropy_scale * (entropy[matched] - centers)))
+ raw_alpha = args.ngram_alpha_min + (args.ngram_alpha_max - args.ngram_alpha_min) * sig
+ # Per-order multipliers
+ mults = _ORDER_MULTS[np.minimum(order_idx, len(_ORDER_MULTS) - 1)]
+ raw_alpha *= mults
+ alpha[matched] = np.clip(raw_alpha, 0.0, 0.95)
+
+ # Blend: p_blend = (1 - alpha) * model_p + alpha * ngram_prob
+ p_blend = (1.0 - alpha) * model_p + alpha * ngram_prob
+ # Clamp to avoid log(0)
+ p_blend = np.maximum(p_blend, 1e-10)
+ # For unmatched tokens, use model_p directly
+ p_blend[~matched] = np.maximum(model_p[~matched], 1e-10)
+
+ # NLL
+ nll = -np.log(p_blend).astype(np.float64)
+
+ # Aggregate
+ local_loss_sum = nll.sum()
+ local_token_count = float(N)
+ local_byte_count = token_bytes.sum()
+
+ # All-reduce
+ loss_sum_t = torch.tensor(local_loss_sum, device=device, dtype=torch.float64)
+ token_count_t = torch.tensor(local_token_count, device=device, dtype=torch.float64)
+ byte_count_t = torch.tensor(local_byte_count, device=device, dtype=torch.float64)
+ if dist.is_available() and dist.is_initialized():
+ dist.all_reduce(loss_sum_t, op=dist.ReduceOp.SUM)
+ dist.all_reduce(token_count_t, op=dist.ReduceOp.SUM)
+ dist.all_reduce(byte_count_t, op=dist.ReduceOp.SUM)
+
+ val_loss = (loss_sum_t / token_count_t).item()
+ val_bpb = val_loss / math.log(2.0) * (token_count_t.item() / byte_count_t.item())
+
+ n_matched = int(matched.sum())
+ log0(
+ f"ngram_rescore: matched={n_matched}/{N} ({100*n_matched/max(N,1):.1f}%) "
+ f"mean_alpha={alpha[matched].mean():.3f} leave_one_out={int(args.ngram_leave_one_out)}"
+ if n_matched > 0 else f"ngram_rescore: no matches leave_one_out={int(args.ngram_leave_one_out)}"
+ )
+
+ return val_loss, val_bpb
+
+
+def eval_ngram_two_pass(
+ args: Hyperparameters, base_model: nn.Module, rank: int, world_size: int,
+ device: torch.device, val_tokens: Tensor, base_bytes_lut: Tensor,
+ has_leading_space_lut: Tensor, is_boundary_token_lut: Tensor,
+ stride: int, batch_seqs: int = 32, log0=print,
+) -> tuple[float, float]:
+ """Two-pass n-gram evaluation.
+
+ Pass 1: Sliding-window neural eval → store per-token model_p and entropy.
+ Build: Complete n-gram cache from all tokens (vectorized).
+ Pass 2: Rescore ALL tokens by blending neural model_p with n-gram predictions.
+ """
+ t0 = time.perf_counter()
+
+ # --- Pass 1: Neural eval with per-token storage ---
+ log0(f"ngram_two_pass: starting Pass 1 (sliding-window neural eval)")
+ model_p, entropy, token_bytes, positions, pass1_loss, pass1_bpb = eval_val_sliding_store(
+ args, base_model, rank, world_size, device, val_tokens,
+ base_bytes_lut, has_leading_space_lut, is_boundary_token_lut,
+ stride=stride, batch_seqs=batch_seqs, log0=log0,
+ )
+ t_pass1 = time.perf_counter()
+ log0(f"ngram_two_pass: Pass 1 done val_bpb={pass1_bpb:.6f} "
+ f"tokens_scored={len(positions)} time={t_pass1 - t0:.1f}s")
+
+ # --- Build complete n-gram cache ---
+ log0(f"ngram_two_pass: building cache orders={args.ngram_min_order}-{args.ngram_max_order} "
+ f"buckets={args.ngram_num_buckets}")
+ tokens_np = val_tokens.numpy().astype(np.int16)
+ cache = NgramCache(
+ min_order=args.ngram_min_order,
+ max_order=args.ngram_max_order,
+ num_buckets=args.ngram_num_buckets,
+ )
+ cache.build_full(tokens_np)
+ t_cache = time.perf_counter()
+ log0(f"ngram_two_pass: cache built in {t_cache - t_pass1:.1f}s")
+
+ # --- Pass 2: N-gram rescore ---
+ log0(f"ngram_two_pass: starting Pass 2 (n-gram rescore)")
+ val_loss, val_bpb = ngram_rescore(
+ args, tokens_np, cache, model_p, entropy, token_bytes, positions,
+ rank, world_size, device, log0=log0,
+ )
+ t_pass2 = time.perf_counter()
+ log0(f"ngram_two_pass: Pass 2 done val_bpb={val_bpb:.6f} "
+ f"improvement={pass1_bpb - val_bpb:.6f} time={t_pass2 - t_cache:.1f}s")
+ log0(f"ngram_two_pass: total time={t_pass2 - t0:.1f}s")
+
+ return val_loss, val_bpb
+
+
+# === COMPLEMENTARY TRAINING ===
+
+class TrainBigramTracker:
+ """Tracks bigram statistics from training data for complementary loss weighting."""
+
+ def __init__(self, vocab_size: int, device: torch.device):
+ # bigram_counts[prev_token, target_token] = count
+ self.counts = torch.zeros(vocab_size, vocab_size, device=device, dtype=torch.float32)
+ self.row_totals = torch.zeros(vocab_size, device=device, dtype=torch.float32)
+
+ @torch.no_grad()
+ def update(self, x: Tensor, y: Tensor):
+ """Update bigram counts. x: context tokens, y: target tokens."""
+ prev = x.reshape(-1)
+ tgt = y.reshape(-1)
+ idx = prev.long() * self.counts.shape[1] + tgt.long()
+ self.counts.view(-1).scatter_add_(0, idx, torch.ones_like(idx, dtype=torch.float32))
+ self.row_totals.scatter_add_(0, prev.long(), torch.ones(prev.shape[0], device=prev.device, dtype=torch.float32))
+
+ @torch.no_grad()
+ def get_weights(self, x: Tensor, y: Tensor, alpha: float = 0.5) -> Tensor:
+ """Compute per-token loss weights: downweight tokens predictable by bigrams."""
+ prev = x.reshape(-1)
+ tgt = y.reshape(-1)
+ totals = self.row_totals[prev.long()]
+ counts = self.counts[prev.long(), tgt.long()]
+ ngram_prob = counts / totals.clamp(min=1.0)
+ weights = (1.0 - alpha * ngram_prob).clamp(min=0.1)
+ return weights.reshape(y.shape)
+
+
+# --- GPTQ-lite int6 quantization ---
+
+def _classify_param(name: str) -> str:
+ if "tok_emb" in name or "lm_head" in name:
+ return "embed"
+ if ".mlp." in name:
+ return "mlp"
+ if ".attn." in name or (".proj." in name and ".mlp." not in name):
+ return "attn"
+ return "other"
+def quantize_int6_per_row(t: Tensor, clip_range: int = 31) -> tuple[Tensor, Tensor]:
+ t32 = t.float()
+ if t32.ndim == 2:
+ best_q, best_s, best_err = None, None, float('inf')
+ for pct in [0.9990, 0.9995, 0.9999, 0.99999, 1.0]:
+ if pct < 1.0:
+ row_clip = torch.quantile(t32.abs(), pct, dim=1)
+ else:
+ row_clip = t32.abs().amax(dim=1)
+ s = (row_clip / clip_range).clamp_min(1.0 / clip_range).to(torch.float16)
+ q = torch.clamp(torch.round(t32 / s.float()[:, None]), -clip_range, clip_range).to(torch.int8)
+ recon = q.float() * s.float()[:, None]
+ err = (t32 - recon).pow(2).mean().item()
+ if err < best_err:
+ best_q, best_s, best_err = q, s, err
+ return best_q, best_s
+ amax = t32.abs().max().item()
+ scale = torch.tensor(amax / clip_range if amax > 0 else 1.0, dtype=torch.float16)
+ q = torch.clamp(torch.round(t32 / scale.float()), -clip_range, clip_range).to(torch.int8)
+ return q, scale
+
+def _unbank_state_dict(sd: dict[str, Tensor], num_layers: int) -> dict[str, Tensor]:
+ """Convert 3D bank tensors into individual 2D tensors with standard names."""
+ out: dict[str, Tensor] = {}
+ n = num_layers
+ for name, tensor in sd.items():
+ if name == "qo_bank":
+ for i in range(n):
+ out[f"blocks.{i}.attn.c_q.weight"] = tensor[i]
+ out[f"blocks.{i}.attn.proj.weight"] = tensor[n + i]
+ elif name == "kv_bank":
+ for i in range(n):
+ out[f"blocks.{i}.attn.c_k.weight"] = tensor[i]
+ out[f"blocks.{i}.attn.c_v.weight"] = tensor[n + i]
+ elif name == "mlp_up_bank":
+ for i in range(n):
+ out[f"blocks.{i}.mlp.fc.weight"] = tensor[i]
+ elif name == "mlp_down_bank":
+ for i in range(n):
+ out[f"blocks.{i}.mlp.proj.weight"] = tensor[i]
+ else:
+ out[name] = tensor
+ return out
+
+def _rebank_state_dict(sd: dict[str, Tensor], num_layers: int, template_sd: dict[str, Tensor]) -> dict[str, Tensor]:
+ """Convert individual 2D tensors back into 3D bank tensors."""
+ out: dict[str, Tensor] = {}
+ n = num_layers
+ # Reconstruct banks from individual weight keys
+ qo_slices = [None] * (2 * n)
+ kv_slices = [None] * (2 * n)
+ up_slices = [None] * n
+ down_slices = [None] * n
+ consumed = set()
+ for i in range(n):
+ qk = f"blocks.{i}.attn.c_q.weight"
+ if qk in sd:
+ qo_slices[i] = sd[qk]
+ consumed.add(qk)
+ ok = f"blocks.{i}.attn.proj.weight"
+ if ok in sd:
+ qo_slices[n + i] = sd[ok]
+ consumed.add(ok)
+ kk = f"blocks.{i}.attn.c_k.weight"
+ if kk in sd:
+ kv_slices[i] = sd[kk]
+ consumed.add(kk)
+ vk = f"blocks.{i}.attn.c_v.weight"
+ if vk in sd:
+ kv_slices[n + i] = sd[vk]
+ consumed.add(vk)
+ fk = f"blocks.{i}.mlp.fc.weight"
+ if fk in sd:
+ up_slices[i] = sd[fk]
+ consumed.add(fk)
+ dk = f"blocks.{i}.mlp.proj.weight"
+ if dk in sd:
+ down_slices[i] = sd[dk]
+ consumed.add(dk)
+ out["qo_bank"] = torch.stack(qo_slices).to(dtype=template_sd["qo_bank"].dtype)
+ out["kv_bank"] = torch.stack(kv_slices).to(dtype=template_sd["kv_bank"].dtype)
+ out["mlp_up_bank"] = torch.stack(up_slices).to(dtype=template_sd["mlp_up_bank"].dtype)
+ out["mlp_down_bank"] = torch.stack(down_slices).to(dtype=template_sd["mlp_down_bank"].dtype)
+ for name, tensor in sd.items():
+ if name not in consumed:
+ out[name] = tensor
+ return out
+
+def quantize_int4_per_row(t: Tensor, clip_range: int = 15) -> tuple[Tensor, Tensor]:
+ """INT5 per-row quantization (-15..+15). For MLP banks — least sensitive to low-bit."""
+ t32 = t.float()
+ if t32.ndim == 2:
+ best_q, best_s, best_err = None, None, float('inf')
+ for pct in [0.999, 0.9995, 0.9999, 1.0]:
+ if pct < 1.0:
+ row_clip = torch.quantile(t32.abs(), pct, dim=1)
+ else:
+ row_clip = t32.abs().amax(dim=1)
+ s = (row_clip / clip_range).clamp_min(1.0 / clip_range).to(torch.float16)
+ q = torch.clamp(torch.round(t32 / s.float()[:, None]), -clip_range, clip_range).to(torch.int8)
+ recon = q.float() * s.float()[:, None]
+ err = (t32 - recon).pow(2).mean().item()
+ if err < best_err:
+ best_q, best_s, best_err = q, s, err
+ return best_q, best_s
+ amax = t32.abs().max().item()
+ scale = torch.tensor(amax / clip_range if amax > 0 else 1.0, dtype=torch.float16)
+ q = torch.clamp(torch.round(t32 / scale.float()), -clip_range, clip_range).to(torch.int8)
+ return q, scale
+
+def mixed_quantize_int6(state_dict: dict[str, Tensor], int6_cats: set[str], int4_cats: set[str] | None = None):
+ """Mixed quantization: INT4 for MLP (least sensitive), INT6 for attn, INT8 for rest."""
+ if int4_cats is None:
+ int4_cats = set()
+ num_layers_total = max(
+ (int(k.split(".")[1]) for k in state_dict if k.startswith("blocks.")),
+ default=0,
+ ) + 1
+ late_k_layers = set(range(num_layers_total - 2, num_layers_total))
+ result: dict[str, Tensor] = {}
+ meta: dict[str, object] = {}
+ for name, tensor in state_dict.items():
+ t = tensor.detach().cpu().contiguous()
+ cat = _classify_param(name)
+ # Also classify recurrence MLP params as "mlp"
+ if "recur_mlp" in name:
+ cat = "mlp"
+ if not t.is_floating_point() or t.numel() <= 65536:
+ result[name] = t.to(torch.float16) if t.is_floating_point() else t
+ meta[name] = "passthrough"
+ continue
+ if any(p in name for p in CONTROL_TENSOR_NAME_PATTERNS):
+ result[name] = t.float()
+ meta[name] = "passthrough_ctrl"
+ continue
+ if cat in int4_cats and t.ndim >= 1:
+ q, s = quantize_int4_per_row(t)
+ result[name + ".q"] = q
+ result[name + ".scale"] = s
+ meta[name] = {"type": "int4"}
+ elif cat in int6_cats and t.ndim >= 1:
+ q, s = quantize_int6_per_row(t)
+ result[name + ".q"] = q
+ result[name + ".scale"] = s
+ meta[name] = {"type": "int6"}
+ else:
+ q, s = quantize_float_tensor(t)
+ result[name + ".q"] = q
+ result[name + ".scale"] = s
+ meta[name] = {"type": "int8"}
+ return result, meta
+def dequantize_mixed_int6(result: dict[str, Tensor], meta: dict[str, object],
+ template_sd: dict[str, Tensor]) -> dict[str, Tensor]:
+ out: dict[str, Tensor] = {}
+ for name, orig in template_sd.items():
+ info = meta.get(name)
+ if info is None:
+ continue
+ orig_dtype = orig.dtype
+ if info in ("passthrough", "passthrough_ctrl", "passthrough_fp16"):
+ t = result[name]
+ if t.dtype == torch.float16 and orig_dtype in (torch.float32, torch.bfloat16):
+ t = t.to(orig_dtype)
+ out[name] = t
+ continue
+ q, s = result[name + ".q"], result[name + ".scale"]
+ if s.ndim > 0:
+ out[name] = (q.float() * s.float().view(q.shape[0], *([1] * (q.ndim - 1)))).to(orig_dtype)
+ else:
+ out[name] = (q.float() * float(s.item())).to(orig_dtype)
+ return out
+
+# --- Training ---
+
+def main() -> None:
+ code = Path(__file__).read_text(encoding="utf-8")
+ args = Hyperparameters()
+ # zeropower_via_newtonschulz5 runs eagerly with bmm -- do NOT compile
+ distributed = "RANK" in os.environ and "WORLD_SIZE" in os.environ
+ rank = int(os.environ.get("RANK", "0"))
+ world_size = int(os.environ.get("WORLD_SIZE", "1"))
+ local_rank = int(os.environ.get("LOCAL_RANK", "0"))
+ if world_size <= 0:
+ raise ValueError(f"WORLD_SIZE must be positive, got {world_size}")
+ if 8 % world_size != 0:
+ raise ValueError(f"WORLD_SIZE={world_size} must divide 8 so grad_accum_steps stays integral")
+ grad_accum_steps = 8 // world_size
+ grad_scale = 1.0 / grad_accum_steps
+ if not torch.cuda.is_available():
+ raise RuntimeError("CUDA is required")
+ device = torch.device("cuda", local_rank)
+ torch.cuda.set_device(device)
+ if distributed:
+ dist.init_process_group(backend="nccl", device_id=device)
+ dist.barrier()
+ master_process = rank == 0
+ torch.backends.cuda.matmul.allow_tf32 = True
+ torch.backends.cudnn.allow_tf32 = True
+ from torch.backends.cuda import enable_cudnn_sdp, enable_flash_sdp, enable_math_sdp, enable_mem_efficient_sdp
+ enable_cudnn_sdp(False)
+ enable_flash_sdp(True)
+ enable_mem_efficient_sdp(False)
+ enable_math_sdp(False)
+ logfile = None
+ if master_process:
+ os.makedirs("logs", exist_ok=True)
+ logfile = f"logs/{args.run_id}.txt"
+ print(logfile)
+ def log0(msg: str, console: bool = True) -> None:
+ if not master_process:
+ return
+ if console:
+ print(msg)
+ if logfile is not None:
+ with open(logfile, "a", encoding="utf-8") as f:
+ print(msg, file=f)
+ log0(code, console=False)
+ log0("=" * 100, console=False)
+ log0(f"Running Python {sys.version}", console=False)
+ log0(f"Running PyTorch {torch.__version__}", console=False)
+ log0(
+ subprocess.run(["nvidia-smi"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, check=False).stdout,
+ console=False,
+ )
+ log0("=" * 100, console=False)
+ random.seed(args.seed)
+ np.random.seed(args.seed)
+ torch.manual_seed(args.seed)
+ torch.cuda.manual_seed_all(args.seed)
+ dataset_dir = Path(args.data_path).resolve()
+ actual_train_files = len(list(dataset_dir.glob("fineweb_train_*.bin")))
+ effective_eval_seq_len = args.eval_seq_len if args.eval_seq_len > 0 else args.train_seq_len
+ val_seq_len = max(args.train_seq_len, effective_eval_seq_len)
+ val_tokens = load_validation_tokens(args.val_files, val_seq_len, args.val_tokens_limit)
+ (base_bytes_lut, has_leading_space_lut, is_boundary_token_lut), tokenizer_meta = load_tokenizer_luts(
+ args.tokenizer_path,
+ args.tokenizer_meta_path,
+ args.vocab_size,
+ device,
+ validate_meta=args.tokenizer_meta_validate,
+ )
+ meta_path = tokenizer_meta.get("meta_path")
+ log0(
+ f"val_bpb:enabled tokenizer_kind={tokenizer_meta['tokenizer_kind']} "
+ f"tokenizer_path={args.tokenizer_path} tokenizer_meta_path={meta_path or 'fallback:none'}"
+ )
+ log0(f"train_loader:dataset:{dataset_dir.name} train_shards:{actual_train_files}")
+ log0(f"val_loader:shards pattern={args.val_files} tokens:{val_tokens.numel() - 1}")
+ CastedLinear._qat_enabled = args.qat_enabled
+ base_model = GPT(
+ vocab_size=args.vocab_size,
+ num_layers=args.num_layers,
+ model_dim=args.model_dim,
+ num_heads=args.num_heads,
+ num_kv_heads=args.num_kv_heads,
+ mlp_mult=args.mlp_mult,
+ tie_embeddings=args.tie_embeddings,
+ tied_embed_init_std=args.tied_embed_init_std,
+ logit_softcap=args.logit_softcap,
+ rope_base=args.rope_base,
+ qk_gain_init=args.qk_gain_init,
+ mtp_num_heads=args.mtp_num_heads,
+ mtp_loss_weight=args.mtp_loss_weight,
+ bigram_vocab_size=args.bigram_vocab_size,
+ bigram_dim=args.bigram_dim,
+ xsa_last_n=args.xsa_last_n,
+ rope_dims=args.rope_dims,
+ ln_scale=args.ln_scale,
+ dtg=args.dtg_enabled,
+ ve_enabled=args.ve_enabled,
+ ve_dim=args.ve_dim,
+ ve_layers=args.ve_layers,
+ gated_attention=args.gated_attention,
+ value_residual=args.value_residual,
+ activation_mode=args.activation_mode,
+ activation_neg_slope=args.activation_neg_slope,
+ asymmetric_square_init=args.asymmetric_square_init,
+ gated_square_beta_init=args.gated_square_beta_init,
+ parallel_start_layer=args.parallel_start_layer,
+ recur_layers=args.recur_layers,
+ recur_untie_mlp=args.recur_untie_mlp,
+ ).to(device).bfloat16()
+ # Banks stay FP32 (like CastedLinear weights), cast to BF16 in forward
+ base_model.qo_bank.data = base_model.qo_bank.data.float()
+ base_model.kv_bank.data = base_model.kv_bank.data.float()
+ base_model.mlp_up_bank.data = base_model.mlp_up_bank.data.float()
+ base_model.mlp_down_bank.data = base_model.mlp_down_bank.data.float()
+ for module in base_model.modules():
+ if isinstance(module, CastedLinear):
+ module.float()
+ restore_low_dim_params_to_fp32(base_model)
+ # No DDP -- Parallel Muon handles bank grad communication via reduce-scatter,
+ # and non-bank grads are manually all-reduced before Adam steps.
+ compiled_model = torch.compile(base_model, dynamic=False, fullgraph=True)
+ model = compiled_model
+ # Separate compile for forward_logits (used in complementary training)
+ compiled_forward_logits = torch.compile(base_model.forward_logits, dynamic=False, fullgraph=True)
+
+ # Optimizer split:
+ # - 4 parameter banks -> Muon (batched Newton-Schulz)
+ # - token embedding -> Adam
+ # - scalars/control tensors -> Adam
+ # - bigram proj, mtp heads, VE proj -> Adam (small matrix params not worth banking)
+ matrix_params = [
+ base_model.qo_bank, base_model.kv_bank,
+ base_model.mlp_up_bank, base_model.mlp_down_bank,
+ ]
+ # Add recurrence MLP parameters to Muon optimizer
+ if base_model.recur_mlp_up is not None:
+ for lid in base_model.recur_layer_ids:
+ matrix_params.extend([base_model.recur_mlp_up[str(lid)], base_model.recur_mlp_down[str(lid)]])
+ block_named_params = list(base_model.blocks.named_parameters())
+ scalar_params = [
+ p
+ for name, p in block_named_params
+ if p.ndim < 2 or any(pattern in name for pattern in CONTROL_TENSOR_NAME_PATTERNS)
+ ]
+ if base_model.skip_weights.numel() > 0:
+ scalar_params.append(base_model.skip_weights)
+ scalar_params.append(base_model.smear.gate)
+ # Add parallel residual routing params
+ if base_model.lane_merge is not None:
+ scalar_params.append(base_model.lane_merge)
+ for block in base_model.blocks:
+ if hasattr(block, 'route') and block.parallel:
+ scalar_params.append(block.route)
+ if base_model.bigram is not None:
+ scalar_params.append(base_model.bigram.scale)
+ token_lr = args.tied_embed_lr if args.tie_embeddings else args.embed_lr
+ tok_params = [{"params": [base_model.tok_emb.weight], "lr": token_lr, "base_lr": token_lr}]
+ if base_model.bigram is not None:
+ tok_params.append({"params": [base_model.bigram.embed.weight], "lr": token_lr, "base_lr": token_lr})
+ if base_model.bigram.proj is not None:
+ scalar_params.append(base_model.bigram.proj.weight)
+ if base_model.ve_shared is not None:
+ tok_params.append({"params": [base_model.ve_shared.embed.weight], "lr": token_lr, "base_lr": token_lr})
+ if base_model.ve_shared.proj is not None:
+ scalar_params.append(base_model.ve_shared.proj.weight)
+ scalar_params.append(base_model.ve_shared.scale)
+ for s in base_model.ve_layer_scales:
+ scalar_params.append(s)
+ optimizer_tok = torch.optim.AdamW(
+ tok_params,
+ betas=(args.beta1, args.beta2),
+ eps=args.adam_eps,
+ weight_decay=args.adam_wd,
+ fused=True,
+ )
+ optimizer_muon = Muon(
+ matrix_params,
+ lr=args.matrix_lr,
+ momentum=args.muon_momentum,
+ backend_steps=args.muon_backend_steps,
+ weight_decay=args.muon_wd,
+ )
+ for group in optimizer_muon.param_groups:
+ group["base_lr"] = args.matrix_lr
+ optimizer_scalar = torch.optim.AdamW(
+ [{"params": scalar_params, "lr": args.scalar_lr, "base_lr": args.scalar_lr}],
+ betas=(args.beta1, args.beta2),
+ eps=args.adam_eps,
+ weight_decay=args.adam_wd,
+ fused=True,
+ )
+ # Non-bank params that need manual all-reduce (replicated across GPUs)
+ replicated_params = list(optimizer_tok.param_groups[0]["params"])
+ for pg in optimizer_tok.param_groups[1:]:
+ replicated_params.extend(pg["params"])
+ replicated_params.extend(scalar_params)
+
+ optimizer_head = None
+ if base_model.lm_head is not None:
+ optimizer_head = torch.optim.Adam(
+ [{"params": [base_model.lm_head.weight], "lr": args.head_lr, "base_lr": args.head_lr}],
+ betas=(args.beta1, args.beta2),
+ eps=args.adam_eps,
+ fused=True,
+ )
+ replicated_params.append(base_model.lm_head.weight)
+ optimizers: list[torch.optim.Optimizer] = [optimizer_tok, optimizer_muon, optimizer_scalar]
+ if optimizer_head is not None:
+ optimizers.append(optimizer_head)
+ n_params = sum(p.numel() for p in base_model.parameters())
+ mtp_params = sum(p.numel() for p in base_model.mtp_heads.parameters())
+ log0(f"model_params:{n_params}")
+ log0(f"mtp_num_heads:{args.mtp_num_heads} mtp_loss_weight:{args.mtp_loss_weight} mtp_params:{mtp_params}")
+ xsa_layers = [i for i, b in enumerate(base_model.blocks) if b.attn.use_xsa]
+ log0(f"XSA:last_{args.xsa_last_n} active_layers:{xsa_layers}")
+ log0(f"world_size:{world_size} grad_accum_steps:{grad_accum_steps}")
+ log0("sdp_backends:cudnn=False flash=True mem_efficient=False math=False")
+ log0(f"attention_mode:gqa num_heads:{args.num_heads} num_kv_heads:{args.num_kv_heads}")
+ log0(
+ f"tie_embeddings:{args.tie_embeddings} embed_lr:{token_lr} "
+ f"head_lr:{args.head_lr if base_model.lm_head is not None else 0.0} "
+ f"matrix_lr:{args.matrix_lr} scalar_lr:{args.scalar_lr}"
+ )
+ log0(
+ f"activation_mode:{args.activation_mode} neg_slope:{args.activation_neg_slope} "
+ f"asym_init:{args.asymmetric_square_init} gated_beta_init:{args.gated_square_beta_init}"
+ )
+ log0(
+ f"train_batch_tokens:{args.train_batch_tokens} train_seq_len:{args.train_seq_len} "
+ f"iterations:{args.iterations} warmup_steps:{args.warmup_steps} "
+ f"max_wallclock_seconds:{args.max_wallclock_seconds:.3f}"
+ )
+ log0(f"seed:{args.seed}")
+ train_loader = DistributedTokenLoader(args.train_files, rank, world_size, device)
+ def zero_grad_all() -> None:
+ for opt in optimizers:
+ opt.zero_grad(set_to_none=True)
+ max_wallclock_ms = 1000.0 * args.max_wallclock_seconds if args.max_wallclock_seconds > 0 else None
+ def lr_mul(step: int, elapsed_ms: float) -> float:
+ if args.warmdown_iters <= 0:
+ return 1.0
+ if max_wallclock_ms is None:
+ warmdown_start = max(args.iterations - args.warmdown_iters, 0)
+ return max((args.iterations - step) / max(args.warmdown_iters, 1), 0.0) if warmdown_start <= step < args.iterations else 1.0
+ step_ms = elapsed_ms / max(step, 1)
+ warmdown_ms = args.warmdown_iters * step_ms
+ remaining_ms = max(max_wallclock_ms - elapsed_ms, 0.0)
+ return remaining_ms / max(warmdown_ms, 1e-9) if remaining_ms <= warmdown_ms else 1.0
+ if args.warmup_steps > 0:
+ initial_model_state = {name: tensor.detach().cpu().clone() for name, tensor in base_model.state_dict().items()}
+ initial_optimizer_states = [copy.deepcopy(opt.state_dict()) for opt in optimizers]
+ model.train()
+ for warmup_step in range(args.warmup_steps):
+ zero_grad_all()
+ for micro_step in range(grad_accum_steps):
+ x, y = train_loader.next_batch(args.train_batch_tokens, args.train_seq_len, grad_accum_steps)
+ with torch.autocast(device_type="cuda", dtype=torch.bfloat16, enabled=True):
+ warmup_loss = model(x, y)
+ (warmup_loss * grad_scale).backward()
+ # All-reduce all grads for warmup (simple, not optimized)
+ if distributed:
+ for p in base_model.parameters():
+ if p.grad is not None:
+ dist.all_reduce(p.grad, op=dist.ReduceOp.AVG)
+ for opt in optimizers:
+ opt.step()
+ zero_grad_all()
+ if args.warmup_steps <= 20 or (warmup_step + 1) % 10 == 0 or warmup_step + 1 == args.warmup_steps:
+ log0(f"warmup_step:{warmup_step + 1}/{args.warmup_steps}")
+ base_model.load_state_dict(initial_model_state, strict=True)
+ for opt, state in zip(optimizers, initial_optimizer_states, strict=True):
+ opt.load_state_dict(state)
+ zero_grad_all()
+ train_loader = DistributedTokenLoader(args.train_files, rank, world_size, device)
+ # Complementary training tracker
+ bigram_tracker = TrainBigramTracker(args.vocab_size, device) if args.complement_enabled else None
+ if bigram_tracker is not None:
+ log0(f"complement:enabled alpha={args.complement_alpha}")
+ swa_state: dict[str, Tensor] | None = None
+ swa_count = 0
+ from collections import deque
+ lawa_queue: deque[dict[str, Tensor]] = deque(maxlen=args.lawa_k)
+ ema_state = {name: t.detach().float().clone() for name, t in base_model.state_dict().items()}
+ ema_decay = 0.997
+ training_time_ms = 0.0
+ stop_after_step: int | None = None
+ torch.cuda.synchronize()
+ t0 = time.perf_counter()
+ step = 0
+ while True:
+ last_step = step == args.iterations or (stop_after_step is not None and step >= stop_after_step)
+ should_validate = last_step or (args.val_loss_every > 0 and step % args.val_loss_every == 0)
+ if should_validate:
+ torch.cuda.synchronize()
+ training_time_ms += 1000.0 * (time.perf_counter() - t0)
+ val_loss, val_bpb = eval_val(
+ args,
+ model,
+ rank,
+ world_size,
+ device,
+ grad_accum_steps,
+ val_tokens,
+ base_bytes_lut,
+ has_leading_space_lut,
+ is_boundary_token_lut,
+ )
+ log0(
+ f"step:{step}/{args.iterations} val_loss:{val_loss:.4f} val_bpb:{val_bpb:.4f} "
+ f"train_time:{training_time_ms:.0f}ms step_avg:{training_time_ms / max(step, 1):.2f}ms"
+ )
+ torch.cuda.synchronize()
+ t0 = time.perf_counter()
+ if last_step:
+ if stop_after_step is not None and step < args.iterations:
+ log0(
+ f"stopping_early: wallclock_cap train_time:{training_time_ms:.0f}ms "
+ f"step:{step}/{args.iterations}"
+ )
+ break
+ elapsed_ms = training_time_ms + 1000.0 * (time.perf_counter() - t0)
+ scale = lr_mul(step, elapsed_ms)
+ if args.late_qat_threshold > 0 and scale < args.late_qat_threshold and not CastedLinear._qat_enabled:
+ CastedLinear._qat_enabled = True
+ log0(f"late_qat:enabled step:{step} scale:{scale:.4f}")
+ zero_grad_all()
+ train_loss = torch.zeros((), device=device)
+ for micro_step in range(grad_accum_steps):
+ x, y = train_loader.next_batch(args.train_batch_tokens, args.train_seq_len, grad_accum_steps)
+ with torch.autocast(device_type="cuda", dtype=torch.bfloat16, enabled=True):
+ if args.complement_enabled and bigram_tracker is not None:
+ # Complementary training: single forward, weighted CE
+ logits = compiled_forward_logits(x)
+ logits_flat = logits.reshape(-1, logits.size(-1)).float()
+ per_token_nll = F.cross_entropy(logits_flat, y.reshape(-1), reduction="none")
+ comp_weights = bigram_tracker.get_weights(x, y, alpha=args.complement_alpha).reshape(-1)
+ loss = (per_token_nll * comp_weights).sum() / comp_weights.sum()
+ bigram_tracker.update(x, y)
+ else:
+ loss = model(x, y)
+ train_loss += loss.detach()
+ (loss * grad_scale).backward()
+ train_loss /= grad_accum_steps
+ frac = min(step / args.muon_momentum_warmup_steps, 1.0) if args.muon_momentum_warmup_steps > 0 else 1.0
+ muon_momentum = (1 - frac) * args.muon_momentum_warmup_start + frac * args.muon_momentum
+ for group in optimizer_muon.param_groups:
+ group["momentum"] = muon_momentum
+ for opt in optimizers:
+ for group in opt.param_groups:
+ group["lr"] = group["base_lr"] * scale
+ if args.grad_clip_norm > 0:
+ torch.nn.utils.clip_grad_norm_(base_model.parameters(), args.grad_clip_norm)
+ # === 3-phase overlapped optimizer step ===
+ # Phase 1: Launch async reduce-scatter for banks (biggest first)
+ optimizer_muon.launch_reduce_scatters()
+ # Phase 2: All-reduce non-bank grads + step Adam (while bank RS is in-flight)
+ if distributed:
+ for p in replicated_params:
+ if p.grad is not None:
+ dist.all_reduce(p.grad, op=dist.ReduceOp.AVG)
+ optimizer_tok.step()
+ optimizer_scalar.step()
+ if optimizer_head is not None:
+ optimizer_head.step()
+ # Phase 3: Wait for RS, local NS5, all-gather (banks processed last)
+ optimizer_muon.step()
+ zero_grad_all()
+ # EMA update
+ with torch.no_grad():
+ for name, t in base_model.state_dict().items():
+ ema_state[name].mul_(ema_decay).add_(t.detach().float(), alpha=1.0 - ema_decay)
+ step += 1
+ approx_training_time_ms = training_time_ms + 1000.0 * (time.perf_counter() - t0)
+ if args.swa_enabled and scale < 0.2 and step % args.swa_every == 0:
+ if swa_state is None:
+ swa_state = {name: t.detach().cpu().clone() for name, t in base_model.state_dict().items()}
+ swa_count = 1
+ log0(f"swa:start step:{step}")
+ else:
+ for name, t in base_model.state_dict().items():
+ swa_state[name] += t.detach().cpu()
+ swa_count += 1
+ if args.lawa_enabled and step % args.lawa_freq == 0:
+ lawa_queue.append({name: t.detach().cpu().clone() for name, t in base_model.state_dict().items()})
+ should_log_train = (
+ args.train_log_every > 0
+ and (step <= 10 or step % args.train_log_every == 0 or stop_after_step is not None)
+ )
+ if should_log_train:
+ log0(
+ f"step:{step}/{args.iterations} train_loss:{train_loss.item():.4f} "
+ f"train_time:{approx_training_time_ms:.0f}ms step_avg:{approx_training_time_ms / step:.2f}ms"
+ )
+ reached_cap = max_wallclock_ms is not None and approx_training_time_ms >= max_wallclock_ms
+ if distributed and max_wallclock_ms is not None:
+ reached_cap_tensor = torch.tensor(int(reached_cap), device=device)
+ dist.all_reduce(reached_cap_tensor, op=dist.ReduceOp.MAX)
+ reached_cap = bool(reached_cap_tensor.item())
+ if stop_after_step is None and reached_cap:
+ stop_after_step = step
+ log0(
+ f"peak memory allocated: {torch.cuda.max_memory_allocated() // 1024 // 1024} MiB "
+ f"reserved: {torch.cuda.max_memory_reserved() // 1024 // 1024} MiB"
+ )
+ # Apply weight averaging
+ if args.lawa_enabled and len(lawa_queue) > 1:
+ log0(f"lawa:applying LAWA averaging k={len(lawa_queue)}")
+ current_state = base_model.state_dict()
+ avg_state = {name: torch.zeros(t.shape, dtype=torch.float32, device='cpu') for name, t in current_state.items()}
+ for snap in lawa_queue:
+ for name in avg_state:
+ avg_state[name] += snap[name].float()
+ for name in avg_state:
+ avg_state[name] /= len(lawa_queue)
+ avg_state[name] = avg_state[name].to(dtype=current_state[name].dtype)
+ base_model.load_state_dict(avg_state, strict=True)
+ else:
+ log0("ema:applying EMA weights")
+ current_state = base_model.state_dict()
+ avg_state = {name: t.to(dtype=current_state[name].dtype) for name, t in ema_state.items()}
+ base_model.load_state_dict(avg_state, strict=True)
+ torch.cuda.synchronize()
+ t_diag = time.perf_counter()
+ diag_val_loss, diag_val_bpb = eval_val(
+ args, compiled_model, rank, world_size, device, grad_accum_steps,
+ val_tokens, base_bytes_lut, has_leading_space_lut, is_boundary_token_lut,
+ )
+ torch.cuda.synchronize()
+ log0(
+ f"DIAGNOSTIC post_ema val_loss:{diag_val_loss:.4f} val_bpb:{diag_val_bpb:.4f} "
+ f"eval_time:{1000.0 * (time.perf_counter() - t_diag):.0f}ms"
+ )
+ full_state_dict = base_model.state_dict()
+ export_sd = {k: v for k, v in full_state_dict.items() if "mtp_heads" not in k}
+ excluded_mtp = sum(int(t.numel()) for k, t in full_state_dict.items() if "mtp_heads" in k)
+ if excluded_mtp > 0:
+ log0(f"export_excluding_mtp_params:{excluded_mtp}")
+ if master_process:
+ torch.save(export_sd, "final_model.pt")
+ model_bytes = os.path.getsize("final_model.pt")
+ code_bytes = len(code.encode("utf-8"))
+ log0(f"Serialized model: {model_bytes} bytes")
+ log0(f"Code size: {code_bytes} bytes")
+ # Unbank 3D tensors into individual 2D tensors for quantization
+ sd_cpu = {k: v.detach().cpu() for k, v in export_sd.items()}
+ unbanked_sd = _unbank_state_dict(sd_cpu, args.num_layers)
+ # Mixed quantization: INT6 for attn + sensitive MLP (first 3 + last 3 layers),
+ # INT5 for middle MLP (layers 3-7), INT8 for small control tensors.
+ # Sensitivity-driven: first/last layers are most important, middle tolerates INT5.
+ n_int6_mlp = int(os.environ.get("N_INT6_LAYERS", "6")) # how many MLP layers stay INT6
+ _half = n_int6_mlp // 2
+ _sensitive_mlp = set()
+ for _name in unbanked_sd:
+ _cat = _classify_param(_name)
+ if "recur_mlp" in _name: _cat = "mlp"
+ if _cat == "mlp" and "blocks." in _name:
+ _idx = int(_name.split(".")[1])
+ if _idx < _half or _idx >= args.num_layers - _half:
+ _sensitive_mlp.add(_name)
+ # Override mixed_quantize to use per-name INT5/INT6 for MLP
+ _quant_result: dict[str, torch.Tensor] = {}
+ _quant_meta: dict[str, object] = {}
+ for _name, _tensor in unbanked_sd.items():
+ _t = _tensor.detach().cpu().contiguous()
+ _cat = _classify_param(_name)
+ if "recur_mlp" in _name: _cat = "mlp"
+ if not _t.is_floating_point() or _t.numel() <= 65536:
+ _quant_result[_name] = _t.to(torch.float16) if _t.is_floating_point() else _t
+ _quant_meta[_name] = "passthrough"
+ continue
+ if any(p in _name for p in CONTROL_TENSOR_NAME_PATTERNS):
+ _quant_result[_name] = _t.float()
+ _quant_meta[_name] = "passthrough_ctrl"
+ continue
+ if _cat == "attn" or _name in _sensitive_mlp:
+ _q, _s = quantize_int6_per_row(_t)
+ _quant_result[_name + ".q"] = _q
+ _quant_result[_name + ".scale"] = _s
+ _quant_meta[_name] = {"type": "int6"}
+ elif _cat == "mlp":
+ _q, _s = quantize_int4_per_row(_t) # INT5 (clip_range=15)
+ _quant_result[_name + ".q"] = _q
+ _quant_result[_name + ".scale"] = _s
+ _quant_meta[_name] = {"type": "int5"}
+ else:
+ _q, _s = quantize_float_tensor(_t)
+ _quant_result[_name + ".q"] = _q
+ _quant_result[_name + ".scale"] = _s
+ _quant_meta[_name] = {"type": "int8"}
+ quant_result, quant_meta = _quant_result, _quant_meta
+ quant_buf = io.BytesIO()
+ torch.save({"w": quant_result, "m": quant_meta}, quant_buf)
+ quant_raw = quant_buf.getvalue()
+ # Use brotli if available (better ratio than lzma for quantized weights), fall back to lzma
+ try:
+ import brotli as _brotli
+ quant_blob = _brotli.compress(quant_raw, quality=11)
+ compress_name = "brotli"
+ except ImportError:
+ quant_blob = lzma.compress(quant_raw, preset=6)
+ compress_name = "lzma"
+ if master_process:
+ with open("final_model.int6.ptz", "wb") as f:
+ f.write(quant_blob)
+ quant_file_bytes = len(quant_blob)
+ code_bytes = len(code.encode("utf-8"))
+ log0(f"Serialized model mixed_quant+{compress_name}: {quant_file_bytes} bytes")
+ log0(f"Total submission size: {quant_file_bytes + code_bytes} bytes")
+ if quant_file_bytes + code_bytes > 16_000_000:
+ log0(f"WARNING: artifact {quant_file_bytes + code_bytes} bytes EXCEEDS 16MB budget!")
+ if distributed:
+ dist.barrier()
+ with open("final_model.int6.ptz", "rb") as f:
+ quant_blob_disk = f.read()
+ # Decompress with brotli or lzma (try brotli first)
+ try:
+ import brotli as _brotli
+ quant_raw_disk = _brotli.decompress(quant_blob_disk)
+ except Exception:
+ quant_raw_disk = lzma.decompress(quant_blob_disk)
+ quant_state = torch.load(
+ io.BytesIO(quant_raw_disk),
+ map_location="cpu",
+ )
+ deq_unbanked = dequantize_mixed_int6(quant_state["w"], quant_state["m"], unbanked_sd)
+ # Re-bank the dequantized tensors
+ deq_state = _rebank_state_dict(deq_unbanked, args.num_layers, sd_cpu)
+ eval_model = GPT(
+ vocab_size=args.vocab_size, num_layers=args.num_layers, model_dim=args.model_dim,
+ num_heads=args.num_heads, num_kv_heads=args.num_kv_heads, mlp_mult=args.mlp_mult,
+ tie_embeddings=args.tie_embeddings, tied_embed_init_std=args.tied_embed_init_std,
+ logit_softcap=args.logit_softcap, rope_base=args.rope_base, qk_gain_init=args.qk_gain_init,
+ mtp_num_heads=0, mtp_loss_weight=0.0,
+ bigram_vocab_size=args.bigram_vocab_size, bigram_dim=args.bigram_dim,
+ xsa_last_n=args.xsa_last_n,
+ rope_dims=args.rope_dims, ln_scale=args.ln_scale, dtg=args.dtg_enabled,
+ ve_enabled=args.ve_enabled, ve_dim=args.ve_dim, ve_layers=args.ve_layers,
+ gated_attention=args.gated_attention, value_residual=args.value_residual,
+ activation_mode=args.activation_mode,
+ activation_neg_slope=args.activation_neg_slope,
+ asymmetric_square_init=args.asymmetric_square_init,
+ gated_square_beta_init=args.gated_square_beta_init,
+ parallel_start_layer=args.parallel_start_layer,
+ recur_layers=args.recur_layers,
+ recur_untie_mlp=args.recur_untie_mlp,
+ ).to(device).bfloat16()
+ eval_model.qo_bank.data = eval_model.qo_bank.data.float()
+ eval_model.kv_bank.data = eval_model.kv_bank.data.float()
+ eval_model.mlp_up_bank.data = eval_model.mlp_up_bank.data.float()
+ eval_model.mlp_down_bank.data = eval_model.mlp_down_bank.data.float()
+ for m in eval_model.modules():
+ if isinstance(m, CastedLinear):
+ m.float()
+ restore_low_dim_params_to_fp32(eval_model)
+ eval_model.load_state_dict(deq_state, strict=True)
+ compiled_eval = torch.compile(eval_model, dynamic=False, fullgraph=True)
+ torch.cuda.synchronize()
+ t_qeval = time.perf_counter()
+ q_val_loss, q_val_bpb = eval_val(
+ args, compiled_eval, rank, world_size, device, grad_accum_steps,
+ val_tokens, base_bytes_lut, has_leading_space_lut, is_boundary_token_lut,
+ eval_seq_len=effective_eval_seq_len,
+ )
+ torch.cuda.synchronize()
+ log0(
+ f"final_int6_roundtrip val_loss:{q_val_loss:.4f} val_bpb:{q_val_bpb:.4f} "
+ f"eval_time:{1000.0 * (time.perf_counter() - t_qeval):.0f}ms"
+ )
+ log0(f"final_int6_roundtrip_exact val_loss:{q_val_loss:.8f} val_bpb:{q_val_bpb:.8f}")
+ sw_seq_len = effective_eval_seq_len
+ if args.eval_stride > 0 and args.eval_stride < sw_seq_len:
+ torch.cuda.synchronize()
+ t_slide = time.perf_counter()
+ sw_val_loss, sw_val_bpb = eval_val_sliding(
+ args, eval_model, rank, world_size, device,
+ val_tokens, base_bytes_lut, has_leading_space_lut, is_boundary_token_lut,
+ stride=args.eval_stride,
+ eval_seq_len=sw_seq_len,
+ )
+ torch.cuda.synchronize()
+ log0(
+ f"final_int6_sliding_window val_loss:{sw_val_loss:.4f} val_bpb:{sw_val_bpb:.4f} "
+ f"stride:{args.eval_stride} eval_time:{1000.0 * (time.perf_counter() - t_slide):.0f}ms"
+ )
+ log0(f"final_int6_sliding_window_exact val_loss:{sw_val_loss:.8f} val_bpb:{sw_val_bpb:.8f}")
+ log0(f"final_int8_zlib_roundtrip_exact val_loss:{sw_val_loss:.8f} val_bpb:{sw_val_bpb:.8f}")
+ if args.eval_stride != 64 and 64 < sw_seq_len:
+ torch.cuda.synchronize()
+ t_slide64 = time.perf_counter()
+ sw64_val_loss, sw64_val_bpb = eval_val_sliding(
+ args, eval_model, rank, world_size, device,
+ val_tokens, base_bytes_lut, has_leading_space_lut, is_boundary_token_lut,
+ stride=64,
+ eval_seq_len=sw_seq_len,
+ )
+ torch.cuda.synchronize()
+ log0(
+ f"final_int6_sliding_window_s64 val_loss:{sw64_val_loss:.4f} val_bpb:{sw64_val_bpb:.4f} "
+ f"stride:64 eval_time:{1000.0 * (time.perf_counter() - t_slide64):.0f}ms"
+ )
+ log0(f"final_int6_sliding_window_s64_exact val_loss:{sw64_val_loss:.8f} val_bpb:{sw64_val_bpb:.8f}")
+ log0(f"final_int8_zlib_roundtrip_exact val_loss:{sw64_val_loss:.8f} val_bpb:{sw64_val_bpb:.8f}")
+ # Legal score-first TTT (PR #461 recipe)
+ if args.ttt_enabled:
+ torch.cuda.synchronize()
+ t_ttt = time.perf_counter()
+ ttt_loss, ttt_bpb = eval_val_sliding_ttt(
+ args, eval_model, rank, world_size, device,
+ val_tokens, base_bytes_lut, has_leading_space_lut, is_boundary_token_lut,
+ stride=args.eval_stride, log0=log0,
+ )
+ torch.cuda.synchronize()
+ log0(f"legal_ttt val_loss:{ttt_loss:.4f} val_bpb:{ttt_bpb:.4f} "
+ f"eval_time:{1000.0 * (time.perf_counter() - t_ttt):.0f}ms")
+ log0(f"legal_ttt_exact val_loss:{ttt_loss:.8f} val_bpb:{ttt_bpb:.8f}")
+ # --- N-gram two-pass rescore ---
+ if args.ngram_enabled:
+ # Use TTT-adapted model if available, otherwise use quantized eval model
+ ngram_model = eval_model
+ torch.cuda.synchronize()
+ t_ngram = time.perf_counter()
+ ng_val_loss, ng_val_bpb = eval_ngram_two_pass(
+ args, ngram_model, rank, world_size, device,
+ val_tokens, base_bytes_lut, has_leading_space_lut, is_boundary_token_lut,
+ stride=args.eval_stride, log0=log0,
+ )
+ torch.cuda.synchronize()
+ log0(f"ngram_two_pass val_loss:{ng_val_loss:.4f} val_bpb:{ng_val_bpb:.4f} "
+ f"eval_time:{1000.0 * (time.perf_counter() - t_ngram):.0f}ms")
+ log0(f"ngram_two_pass_exact val_loss:{ng_val_loss:.8f} val_bpb:{ng_val_bpb:.8f}")
+ log0(f"final_int8_zlib_roundtrip_exact val_loss:{ng_val_loss:.8f} val_bpb:{ng_val_bpb:.8f}")
+ if distributed:
+ dist.destroy_process_group()
+if __name__ == "__main__":
+ main()