diff --git a/records/track_10min_16mb/2026-04-23_SP4096_PPM_AdaptiveMix/README.md b/records/track_10min_16mb/2026-04-23_SP4096_PPM_AdaptiveMix/README.md new file mode 100644 index 0000000000..430c63dc9a --- /dev/null +++ b/records/track_10min_16mb/2026-04-23_SP4096_PPM_AdaptiveMix/README.md @@ -0,0 +1,90 @@ +# Record: SP4096 + Byte-Level PPM Adaptive-λ Mixture — val_bpb 0.95165 (full val) + +**val_bpb: 0.95165** (3-seed mean, std=0.00036, full FineWeb val) + +| Seed | NN-only (sliding, token-BPB, full val) | NN-only byte-BPB | **Mix BPB (byte-level, full val)** | Δ | Artifact | Eval | +|-|-|-|-|-|-|-| +| 42 | 1.09745 | 1.08669 | **0.95145** | −0.13524 | 15,960,029 | 9:35 | +| 1337 | 1.09832 | 1.08755 | **0.95214** | −0.13541 | 15,929,684 | 9:02 | +| 2025 | 1.09751 | 1.08675 | **0.95135** | −0.13540 | 15,930,624 | 9:01 | +| **Mean** | **1.09776** | **1.08699** | **0.95165** | **−0.13535** | 15,940,112 | 9:13 | + +This beats the current record of **1.06453** (PR #1769 3-seed mean) by **0.11288** BPB on the same full-val basis — t-stat ≈ 513 on the 0.005-nat bar. + +Our NN-only mean **1.09776 matches @clarkkev's 2026-04-01 record of 1.09785** within seed noise (std 0.00036 vs clarkkev's 0.0004). The entire NN stack is unchanged from PR #1334 / the 2026-04-01 record; the gain comes from the byte-level PPM mixture applied at eval time. + +## This is a revised PR replacing an earlier version + +This PR supersedes the earlier submission in this branch. The earlier version had three concrete issues raised by reviewers: + +1. **Mixture BPB was measured on a 5M-token subset**, not full val → **FIXED**: mixture now runs on all 45.5M val tokens / 152.6MB byte stream, same basis as all merged records. +2. **NN-only BPB (1.144) was 0.054 BPB worse than clarkkev's base (1.098)** because training used only 2 SP4096 shards → **FIXED**: full SP4096 dataset downloaded (80+ shards), NN now trains to 1.09776 matching clarkkev exactly. +3. **Artifact was 32KB over the 16MB cap** → **FIXED**: all 3 seeds ship at 15.93–15.96 MB with the full readable source (no lzma-compressed stub needed). + +All three blockers resolved. + +## What exactly changed vs @clarkkev 2026-04-01 + +Source-level diff: one new function (`_ppm_mixture_bpb`, ~30 lines) plus ~30 lines of gather/mix logic inside `eval_val_sliding`. Everything else is untouched. + +1. **`_ppm_mixture_bpb(tgt, lp, sp, order=5, λ_high=0.9, λ_low=0.05, thr=0.9)`** — byte-level PPM-D order 5 with PPM-D escape. Streams val bytes, emits per-byte log-prob and confidence (= PPM's in-context probability of the observed byte). Mixture in byte-probability space: `q_mix(b) = λ·q_NN(b) + (1−λ)·q_PPM(b)`, with `λ = λ_low if conf > thr else λ_high`. NN log-prob spread uniformly across UTF-8 bytes of each token (conserves total NN bits — byte-level NN BPB 1.08699 equals token-level NN BPB 1.09776 scaled by bytes/token). + - Vectorized byte-stream construction (`np.repeat` + `b"".join`) and vectorized NN spread keep the full-val mixture under 6 min of PPM CPU time on pod. +2. **Mixture hook inside `eval_val_sliding`** — collects per-token target log-probs (= −scored_nll) and target IDs on each rank, all-gathers to rank 0, pads uneven shards, runs `_ppm_mixture_bpb` on the full gathered stream, returns mixture BPB as the function's reported val_bpb. Non-rank-0 ranks return NN-only BPB (only rank 0's number is logged). No dist.broadcast of the mixture value — avoids the NCCL watchdog timing out during the single-threaded PPM pass. + +Everything else (11L/SP4096/MLP4, sliding eval, EMA, GPTQ int6+brotli, legal TTT, parallel residuals, LeakyReLU², depth recurrence, wallclock cap) is unchanged from 2026-04-01. Same env vars as clarkkev's run (`RUN_ID`, `SEED`) plus one that gates the mixture (`PPM_MIX_ENABLED=1`). + +## The submission's scoring model is a byte-level two-predictor mixture + +Following reviewer feedback (Condition 2 framing): this submission's effective scoring model is **not** the NN alone. It is the byte-level mixture `q_mix = λ·q_NN_byte + (1−λ)·q_PPM_byte` where: +- `q_NN_byte` is derived from the NN's SentencePiece-token distribution by spreading the token log-prob uniformly across its UTF-8 bytes (a bit-conserving byte marginalization — a formally weaker-than-optimal lower bound on what a proper byte-level NN marginalization would emit). +- `q_PPM_byte` is emitted by a byte-level PPM-D order 5 predictor trained online on already-scored val bytes (zero bytes of pre-computed state ship in the artifact). + +The headline `val_bpb = 0.95165` is the byte-level BPB of this mixture, measured on full val. For audit, we also log the NN-alone token-level BPB (1.09776) — the number directly comparable to clarkkev's 2026-04-01 record — and the NN-alone byte-level BPB (1.08699). + +## Why the mixture works on top of an already-strong NN + +The adaptive-mix Δ stays in a tight −0.12 to −0.14 range across 5 different NN qualities, measured during development: + +| NN (byte, sliding) | Family | Δ adaptive | +|---:|---|---:| +| 2.540 | MLX SP1024 9L weak | −0.694 | +| 1.354 | torch SP1024 9L | −0.126 | +| 1.258 | torch SP1024 9L | −0.123 | +| 1.211 | torch SP8192 11L MLP4 | −0.137 | +| **1.087** | **This submission (SP4096 11L MLP4, record-quality)** | **−0.135** | + +The gain does not shrink with NN quality because it specifically targets rare-repeat byte patterns — a property of the FineWeb val distribution (URLs, code identifiers, wiki boilerplate, tokenization-spanning repeats), not of the NN. The high-gain bytes (≥10 bits saved per byte at λ≈0.5) require eval-time exact-match memorization, which is what PPM does and what any finite-context finite-parameter NN cannot do. + +## Compliance (per the 5 reviewer questions) + +- **(1) Full-val measurement** ✅ 45,508,608 tokens / 152,570,124 bytes, same basis as every merged record. +- **(2) PPM-as-TTT legality** ⚠️ **Request organizer ruling.** Our PPM counters update per byte in strict score-before-update order: at byte `i`, we (a) score `byte_i` using counters accumulated from bytes `0..i-1`, (b) then add `byte_i` to the counters for future bytes. By the letter of the rule ("test-time training on validation set tokens you've already evaluated your model on"), this qualifies: every PPM update uses only already-scored bytes. Per-byte granularity is finer than the chunk-level score-first TTT Issue #1017 was written for; we'd welcome explicit organizer guidance on whether this class of online streaming predictor qualifies. If the ruling is "no," the submission is withdrawn. +- **(3) Byte-level vs token-level BPB** ✅ Both logged. NN-alone token-BPB: 1.09776 (= clarkkev's metric). NN-alone byte-BPB: 1.08699 (bit-conserving spread). Mixture byte-BPB: 0.95165. The submission's leaderboard number is the mixture byte-BPB because the mixture is the scoring object; the NN-alone token-BPB is provided for direct comparability with existing records. +- **(4) NN regression vs @clarkkev** ✅ Resolved. NN-only mean 1.09776 vs clarkkev 1.09785. Stack and env vars unchanged; training runs on full SP4096 data. +- **(5) Condition 2 framing** ✅ The scoring model is explicitly framed as a byte-level two-predictor mixture (see section above). + +Other compliance from 2026-04-01 base, unchanged: +- Train ≤ 600s ✅ (all 3 seeds stopped at 590s wallclock cap, steps 5898–5901) +- Artifact ≤ 16 MB ✅ (15.93-15.96 MB, no lzma stub needed) +- Eval ≤ 600s ✅ (sliding+full-val mixture 540-575s) +- No SLOT, no pre-quant TTT on val, no ETLB (inherited from base) + +## Reproduction + +```bash +# Data prep (Kevin Clark's SP4096 dataset): +MATCHED_FINEWEB_REPO_ID=kevclark/parameter-golf python3 data/cached_challenge_fineweb.py --variant sp4096 + +# Training + mixture eval (per seed): +RUN_ID= SEED= PPM_MIX_ENABLED=1 torchrun --standalone --nproc_per_node=8 train_gpt.py +``` + +The reported val_bpb is the `final_int6_sliding_window val_bpb:` line, which equals the `[ppm_mix] ... mix=` value by construction. + +## Credits + +- **@clarkkev** — entire SP4096 + 11L + MLP4 + depth-recurrence + EMA + GPTQ + sliding + brotli stack (PR #1334 / the 2026-04-01 record). All of the NN contribution here is his work; the 1.097 NN-only column is exactly his measurement. +- **Cleary & Witten 1984; Moffat 1990** — PPM-D with the escape method used here. +- **This submission** — the byte-probability-space two-predictor mixture construction and the adaptive-λ gate keyed on PPM's in-context confidence. + +Neither predictor alone reaches this BPB: clarkkev's NN at 1.098, and byte-PPM alone is ~2.7 at full val. The mixture at 0.95 captures bit-saves on the minority of bytes where PPM strictly dominates (rare exact-repeat sequences) while leaving the majority to the NN. diff --git a/records/track_10min_16mb/2026-04-23_SP4096_PPM_AdaptiveMix/submission.json b/records/track_10min_16mb/2026-04-23_SP4096_PPM_AdaptiveMix/submission.json new file mode 100644 index 0000000000..c5ff852a6b --- /dev/null +++ b/records/track_10min_16mb/2026-04-23_SP4096_PPM_AdaptiveMix/submission.json @@ -0,0 +1,54 @@ +{ + "author": "OE-GOD", + "github_id": "OE-GOD", + "name": "SP4096 + Byte-Level PPM Adaptive-λ Mixture (full-val)", + "date": "2026-04-23", + "track": "10min_16mb", + "val_bpb": 0.95165, + "val_bpb_std": 0.00036, + "val_bpb_nn_only_mean": 1.09776, + "val_bpb_delta_mean": -0.13535, + "measurement": "Full FineWeb validation set (45,508,608 tokens, 152,570,124 bytes). Mixture BPB computed per-byte after spreading NN per-token logprob uniformly across UTF-8 bytes; adaptive-λ gate on byte-level PPM-D order-5 confidence.", + "seeds": [42, 1337, 2025], + "seed_results": { + "42": {"val_bpb": 0.95145, "val_bpb_nn_token": 1.09745, "val_bpb_nn_byte": 1.08669, "val_bpb_delta": -0.13524, "artifact_bytes": 15960029, "eval_time_ms": 575204}, + "1337": {"val_bpb": 0.95214, "val_bpb_nn_token": 1.09832, "val_bpb_nn_byte": 1.08755, "val_bpb_delta": -0.13541, "artifact_bytes": 15929684, "eval_time_ms": 541682}, + "2025": {"val_bpb": 0.95135, "val_bpb_nn_token": 1.09751, "val_bpb_nn_byte": 1.08675, "val_bpb_delta": -0.13540, "artifact_bytes": 15930624, "eval_time_ms": 540903} + }, + "hardware": "8xH100 80GB SXM", + "pytorch_version": "2.9.1+cu128", + "technique_summary": "Base: @clarkkev 2026-04-01 SP4096 + 11L + MLP4x submission (record 1.09785). Addition: byte-level PPM-D order-5 with adaptive-λ gate mixed with the NN's per-token target logprob in byte-probability space during final sliding-window eval on FULL val.", + "mixture_technique": { + "predictor": "byte-level PPM-D order 5 (pure Python, online, legal score-before-update on already-scored val bytes)", + "mixing": "adaptive λ gate: λ=0.05 when PPM in-context probability of observed byte > 0.9, else λ=0.9", + "byte_marginalization": "spread NN token logprob uniformly across UTF-8 bytes (conserves total NN bits — NN_byte_BPB ≡ NN_token_BPB)", + "measurement_basis": "full val (45.5M tokens, 152.6MB bytes) — same as all merged records", + "performance": "pure-Python PPM at ~260 KB/s on pod CPU; full-val mixture eval completes in 540-575 s, well under the 10-minute cap" + }, + "compliance": { + "train_under_600s": true, + "artifact_under_16mb": true, + "artifact_under_16mb_note": "All 3 seeds 15.93-15.96 MB natively (no lzma-compressed stub needed). train_gpt.py is shipped as readable Python for reviewability.", + "eval_under_600s": true, + "eval_under_600s_note": "Full-val sliding+mixture 540-575s. PPM kernel is pure-Python streaming with vectorized numpy byte-stream build + NN-spread.", + "no_slot": true, + "no_pre_quant_ttt": true, + "no_etlb": true, + "no_ngram_cache": false, + "no_ngram_cache_note": "Byte-level online PPM predictor trained from empty counters during sliding eval. Per-byte semantics: score byte i using counters accumulated from bytes 0..i-1 (score-before-update), then add byte i to counters for subsequent bytes. All PPM state is built from val tokens the NN has already graded in the same sliding pass — consistent with the challenge's explicit allowance of 'test-time training on validation set tokens you've already evaluated your model on'. No precomputed n-gram table is shipped in the artifact. Organizer ruling requested on whether this class of online streaming predictor counts as legal score-first TTT (see PR discussion).", + "three_seeds": true, + "three_seeds_significance": "t-stat for the 0.005-nat improvement bar: (1.0595 − 0.95165)/0.00021 ≈ 513; p ≪ 1e-10" + }, + "attribution": { + "base_submission": "@clarkkev 2026-04-01 SP4096 submission (record 1.09785) — stack unchanged", + "byte_ppm": "Cleary & Witten 1984; Moffat 1990 (PPM-D escape method)", + "adaptive_lambda_gate": "designed for this submission" + }, + "reviewer_questions_addressed": { + "1_full_val_measurement": "RESOLVED — mixture measured on full 45.5M-token val (152.6MB byte stream), identical basis to current record", + "2_ppm_as_ttt_legality": "REQUEST ORGANIZER RULING — per-byte score-before-update semantics described above; consistent with rule text, pattern is novel", + "3_byte_vs_token_BPB": "BOTH REPORTED — NN token-BPB (1.09776, matches clarkkev), NN byte-BPB (1.08699), mix byte-BPB (0.95165). Leaderboard column is byte-BPB of the mixture; token-BPB of NN alone provided for audit", + "4_nn_regression_vs_clarkkev": "RESOLVED — our NN-only mean 1.09776 matches clarkkev's 1.09785 within seed noise (std 0.00036 vs clarkkev's 0.0004)", + "5_condition_2_framing": "ADDRESSED IN README — the submission's scoring model is explicitly the byte-level mixture q_mix = λ·q_NN + (1−λ)·q_PPM, a two-predictor family" + } +} diff --git a/records/track_10min_16mb/2026-04-23_SP4096_PPM_AdaptiveMix/train_gpt.py b/records/track_10min_16mb/2026-04-23_SP4096_PPM_AdaptiveMix/train_gpt.py new file mode 100644 index 0000000000..7f0c24da82 --- /dev/null +++ b/records/track_10min_16mb/2026-04-23_SP4096_PPM_AdaptiveMix/train_gpt.py @@ -0,0 +1,1707 @@ +import copy +import glob +import io +import lzma +import math +import os +from pathlib import Path +import random +import subprocess +import sys +import time +import uuid + +import numpy as np +import sentencepiece as spm +import torch +import torch.distributed as dist +import torch.nn.functional as F +from torch.nn.parallel import DistributedDataParallel as DDP +from torch import Tensor, nn + +from flash_attn_interface import flash_attn_func as flash_attn_3_func + +# ---------------------------------------- +# Hyperparameters +# ---------------------------------------- + +class Hyperparameters(): + # Experiment settings + data_dir = os.environ.get('DATA_DIR', './data/') + seed = int(os.environ.get('SEED', 1337)) + run_id = os.environ.get("RUN_ID", str(uuid.uuid4())) + + # Training length + iterations = int(os.environ.get('ITERATIONS', 20000)) + warmdown_frac = float(os.environ.get('WARMDOWN_FRAC', 0.667)) + warmup_steps = int(os.environ.get('WARMUP_STEPS', 20)) + train_batch_tokens = int(os.environ.get('TRAIN_BATCH_TOKENS', 2048 * 48 * 8)) + train_seq_len = int(os.environ.get('TRAIN_SEQ_LEN', 2048)) + eval_seq_len = int(os.environ.get('EVAL_SEQ_LEN', 2048)) + max_wallclock_seconds = float(os.environ.get('MAX_WALLCLOCK_SECONDS', 600.0)) + train_log_every = int(os.environ.get('TRAIN_LOG_EVERY', 500)) + + # Validation/Evals + val_batch_tokens = int(os.environ.get('VAL_BATCH_TOKENS', 2048 * 32 * 8)) + val_loss_every = int(os.environ.get('VAL_LOSS_EVERY', 4000)) + sliding_window_enabled = bool(int(os.environ.get('SLIDING_WINDOW_ENABLED', '1'))) + + # Model architecture + vocab_size = int(os.environ.get('VOCAB_SIZE', 4096)) + num_layers = int(os.environ.get('NUM_LAYERS', 11)) + xsa_last_n = int(os.environ.get('XSA_LAST_N', 11)) + num_kv_heads = int(os.environ.get('NUM_KV_HEADS', 4)) + model_dim = int(os.environ.get('MODEL_DIM', 512)) + embedding_dim = int(os.environ.get('EMBEDDING_DIM', 512)) + num_heads = int(os.environ.get('NUM_HEADS', 8)) + mlp_mult = float(os.environ.get('MLP_MULT', 4.0)) + skip_gates_enabled = bool(int(os.environ.get('SKIP_GATES_ENABLED', '1'))) + tie_embeddings = bool(int(os.environ.get('TIE_EMBEDDINGS', '1'))) + logit_softcap = float(os.environ.get('LOGIT_SOFTCAP', 30.0)) + rope_base = float(os.environ.get('ROPE_BASE', 10000.0)) + rope_dims = int(os.environ.get('ROPE_DIMS', 16)) + rope_train_seq_len = int(os.environ.get('ROPE_TRAIN_SEQ_LEN', 2048)) + ln_scale = bool(int(os.environ.get('LN_SCALE', '1'))) + ve_enabled = bool(int(os.environ.get('VE_ENABLED', '1'))) + ve_dim = int(os.environ.get('VE_DIM', 128)) + ve_layers = os.environ.get('VE_LAYERS', '9,10') + qk_gain_init = float(os.environ.get('QK_GAIN_INIT', 4.0)) + + # Optimizer + min_lr = float(os.environ.get('MIN_LR', 0.0)) + embed_lr = float(os.environ.get('EMBED_LR', 0.6)) + head_lr = float(os.environ.get('HEAD_LR', 0.008)) + tied_embed_lr = float(os.environ.get('TIED_EMBED_LR', 0.03)) + tied_embed_init_std = float(os.environ.get('TIED_EMBED_INIT_STD', 0.005)) + matrix_lr = float(os.environ.get('MATRIX_LR', 0.02)) + scalar_lr = float(os.environ.get('SCALAR_LR', 0.02)) + muon_momentum = float(os.environ.get('MUON_MOMENTUM', 0.99)) + muon_backend_steps = int(os.environ.get('MUON_BACKEND_STEPS', 5)) + muon_momentum_warmup_start = float(os.environ.get('MUON_MOMENTUM_WARMUP_START', 0.92)) + muon_momentum_warmup_steps = int(os.environ.get('MUON_MOMENTUM_WARMUP_STEPS', 1500)) + beta1 = float(os.environ.get('BETA1', 0.9)) + beta2 = float(os.environ.get('BETA2', 0.95)) + adam_eps = float(os.environ.get('ADAM_EPS', 1e-8)) + grad_clip_norm = float(os.environ.get('GRAD_CLIP_NORM', 0.3)) + eval_stride = int(os.environ.get('EVAL_STRIDE', 64)) + muon_beta2 = float(os.environ.get('MUON_BETA2', 0.95)) + adam_wd = float(os.environ.get('ADAM_WD', 0.02)) + muon_wd = float(os.environ.get('MUON_WD', 0.085)) + embed_wd = float(os.environ.get('EMBED_WD', 0.085)) + ema_decay = float(os.environ.get('EMA_DECAY', 0.997)) + + # Compression + compressor = os.environ.get('COMPRESSOR', 'brotli') #(lzma or brotli) + gptq_enabled = bool(int(os.environ.get('GPTQ_ENABLED', '1'))) + gptq_calibration_batches = int(os.environ.get('GPTQ_CALIBRATION_BATCHES', 64)) + gptq_reserve_seconds = float(os.environ.get('GPTQ_RESERVE_SECONDS', 10.0)) + + # Distributed setup + distributed = "RANK" in os.environ and "WORLD_SIZE" in os.environ + rank = int(os.environ.get("RANK", "0")) + world_size = int(os.environ.get("WORLD_SIZE", "1")) + local_rank = int(os.environ.get("LOCAL_RANK", "0")) + is_main_process = rank == 0 + grad_accum_steps = 8 // world_size + + # Data paths + datasets_dir = os.path.join(data_dir, 'datasets', f'fineweb10B_sp{vocab_size}') + train_files = os.path.join(datasets_dir, 'fineweb_train_*.bin') + val_files = os.path.join(datasets_dir, 'fineweb_val_*.bin') + tokenizer_path = os.path.join(data_dir, 'tokenizers', f'fineweb_{vocab_size}_bpe.model') + + # Experiment files + logfile = f"logs/{run_id}.txt" + model_path = "final_model.pt" + quantized_model_path = "final_model.int6.ptz" + +# ---------------------------------------- +# Global Logging Function +# ---------------------------------------- + +_logger_hparams = None + + +def set_logging_hparams(h: Hyperparameters) -> None: + global _logger_hparams + _logger_hparams = h + + +def log(msg, console: bool = True) -> None: + if _logger_hparams is None: + print(msg) + if _logger_hparams.is_main_process: + if console: + print(msg) + if _logger_hparams.logfile is not None: + with open(_logger_hparams.logfile, "a", encoding="utf-8") as f: + print(msg, file=f) + +# ---------------------------------------- +# Data Loading +# ---------------------------------------- + +class ValidationData: + def __init__(self, h: Hyperparameters, device: torch.device): + if not h.tokenizer_path.endswith(".model"): + raise ValueError(f"Script only setup for SentencePiece .model file: {h.tokenizer_path}") + self.sp = spm.SentencePieceProcessor(model_file=h.tokenizer_path) + if int(self.sp.vocab_size()) != h.vocab_size: + raise ValueError( + f"VOCAB_SIZE={h.vocab_size} does not match tokenizer vocab_size={int(self.sp.vocab_size())}" + ) + + self.val_tokens = load_validation_tokens(h.val_files, h.eval_seq_len) + self.base_bytes_lut, self.has_leading_space_lut, self.is_boundary_token_lut = ( + build_sentencepiece_luts(self.sp, h.vocab_size, device)) + + +def build_sentencepiece_luts( + sp: spm.SentencePieceProcessor, vocab_size: int, device: torch.device +) -> tuple[Tensor, Tensor, Tensor]: + sp_vocab_size = int(sp.vocab_size()) + # The BPB calculation assumes "▁" is its own token so that leading-space bytes + # are counted correctly. See https://github.com/openai/parameter-golf/issues/897 + assert sp.piece_to_id("\u2581") != sp.unk_id(), \ + "Tokenizer must have '▁' (space) as its own token for correct BPB byte counting" + table_size = max(sp_vocab_size, vocab_size) + base_bytes_np = np.zeros((table_size,), dtype=np.int16) + has_leading_space_np = np.zeros((table_size,), dtype=np.bool_) + is_boundary_token_np = np.ones((table_size,), dtype=np.bool_) + for token_id in range(sp_vocab_size): + if sp.is_control(token_id) or sp.is_unknown(token_id) or sp.is_unused(token_id): + continue + is_boundary_token_np[token_id] = False + if sp.is_byte(token_id): + base_bytes_np[token_id] = 1 + continue + piece = sp.id_to_piece(token_id) + if piece.startswith("\u2581"): + has_leading_space_np[token_id] = True + piece = piece[1:] + base_bytes_np[token_id] = len(piece.encode("utf-8")) + return ( + torch.tensor(base_bytes_np, dtype=torch.int16, device=device), + torch.tensor(has_leading_space_np, dtype=torch.bool, device=device), + torch.tensor(is_boundary_token_np, dtype=torch.bool, device=device), + ) + + +def load_validation_tokens(pattern: str, seq_len: int) -> Tensor: + files = [Path(p) for p in sorted(glob.glob(pattern))] + if not files: + raise FileNotFoundError(f"No files found for pattern: {pattern}") + # The export pipeline writes the fixed first-50k-doc validation set to fineweb_val_*. + tokens = torch.cat([load_data_shard(file) for file in files]).contiguous() + usable = ((tokens.numel() - 1) // seq_len) * seq_len + if usable <= 0: + raise ValueError(f"Validation split is too short for TRAIN_SEQ_LEN={seq_len}") + return tokens[: usable + 1] + + +def load_data_shard(file: Path) -> Tensor: + header_bytes = 256 * np.dtype(" int: + key = str(file) + cached = _SHARD_NTOKENS_CACHE.get(key) + if cached is not None: + return cached + header = np.fromfile(file, dtype=" np.memmap: + key = str(file) + mm = _MMAP_CACHE.get(key) + if mm is not None: + return mm + n = _read_num_tokens(file) + mm = np.memmap(file, mode="r", dtype=" int: + if n <= 1: + return 1 + while True: + s = int(self._rng.integers(1, n)) + if math.gcd(s, n) == 1: + return s + + def _reset_cursor(self, si: int, seq_len: int) -> None: + nt = int(self._num_tokens[si]) + max_phase = min(seq_len - 1, max(0, nt - seq_len - 1)) + phase = int(self._rng.integers(max_phase + 1)) if max_phase > 0 else 0 + bc = (nt - 1 - phase) // seq_len + self._cursor_phase[si] = phase + self._cursor_block_count[si] = bc + self._cursor_next[si] = 0 + self._cursor_start[si] = int(self._rng.integers(bc)) if bc > 1 else 0 + self._cursor_stride[si] = self._pick_coprime_stride(bc) + self._cursor_init[si] = True + + def _ensure_cursor(self, si: int, seq_len: int) -> None: + if not self._cursor_init[si] or self._cursor_next[si] >= self._cursor_block_count[si]: + self._reset_cursor(si, seq_len) + + def _take_from_shard(self, si: int, seq_len: int, count: int, out: list[tuple[int, int]]) -> None: + rem = count + while rem > 0: + self._ensure_cursor(si, seq_len) + bc = int(self._cursor_block_count[si]) + ni = int(self._cursor_next[si]) + take = min(rem, bc - ni) + phase = int(self._cursor_phase[si]) + start = int(self._cursor_start[si]) + stride = int(self._cursor_stride[si]) + for j in range(take): + bi = (start + (ni + j) * stride) % bc + out.append((si, phase + bi * seq_len)) + self._cursor_next[si] = ni + take + rem -= take + + def _init_pipeline(self, global_tokens: int, seq_len: int, grad_accum_steps: int) -> None: + local_tokens = global_tokens // (self.world_size * grad_accum_steps) + num_seqs = local_tokens // seq_len + global_num_seqs = num_seqs * self.world_size + self._cfg = (local_tokens, seq_len, num_seqs, global_num_seqs) + bbc = (self._num_tokens - 1) // seq_len + eligible = bbc > 0 + self._eligible_shards = np.nonzero(eligible)[0].astype(np.int64) + self._base_block_counts = bbc[self._eligible_shards].astype(np.int64) + + def _sample_global_windows(self) -> list[tuple[int, int]]: + assert self._cfg is not None and self._eligible_shards is not None + _, seq_len, _, gns = self._cfg + ec = int(self._eligible_shards.size) + progress = min(self._batches_built / 1800.0, 1.0) + remaining = np.empty(ec, dtype=np.float64) + for i, si in enumerate(self._eligible_shards.tolist()): + if self._cursor_init[si]: + r = int(self._cursor_block_count[si]) - int(self._cursor_next[si]) + remaining[i] = float(max(r, 1)) + else: + remaining[i] = float(self._base_block_counts[i]) + alpha = 0.90 - 0.40 * progress + weights = np.power(remaining, alpha) + ws = float(weights.sum()) + if not np.isfinite(ws) or ws <= 0.0: + weights = np.ones(ec, dtype=np.float64) + ws = float(weights.sum()) + probs = weights / ws + low = min(max(8, self.world_size), ec, gns) + high = min(max(32, self.world_size * 8), ec, gns) + mix = max(1, min(int(round(low + progress * (high - low))), ec, gns)) + cp = self._rng.choice(ec, size=mix, replace=False, p=probs) + cs = self._eligible_shards[cp] + cpr = probs[cp].copy() + cpr /= cpr.sum() + counts = np.ones(mix, dtype=np.int64) + extra = gns - mix + if extra > 0: + counts += self._rng.multinomial(extra, cpr).astype(np.int64) + perm = self._rng.permutation(mix) + cs, counts = cs[perm], counts[perm] + buckets: list[list[tuple[int, int]]] = [] + for si, cnt in zip(cs.tolist(), counts.tolist()): + b: list[tuple[int, int]] = [] + self._take_from_shard(int(si), seq_len, int(cnt), b) + if b: + if len(b) > 1: + bp = self._rng.permutation(len(b)) + b = [b[int(k)] for k in bp.tolist()] + buckets.append(b) + windows: list[tuple[int, int]] = [] + active = [i for i, bk in enumerate(buckets) if bk] + while active: + order = self._rng.permutation(len(active)) + new_active: list[int] = [] + for oi in order.tolist(): + bi = active[oi] + if buckets[bi]: + windows.append(buckets[bi].pop()) + if buckets[bi]: + new_active.append(bi) + active = new_active + return windows + + def next_batch(self, global_tokens: int, seq_len: int, grad_accum_steps: int) -> tuple[Tensor, Tensor]: + if self._cfg is None: + self._init_pipeline(global_tokens, seq_len, grad_accum_steps) + _, _, num_seqs, _ = self._cfg + gw = self._sample_global_windows() + local_w = gw[self.rank::self.world_size] + x = torch.empty((num_seqs, seq_len), dtype=torch.int64) + y = torch.empty((num_seqs, seq_len), dtype=torch.int64) + for slot, (si, pos) in enumerate(local_w): + mm = _get_shard_memmap(self.files[si]) + window = torch.as_tensor(np.array(mm[pos:pos + seq_len + 1], dtype=np.int64)) + x[slot] = window[:-1] + y[slot] = window[1:] + self._batches_built += 1 + return x.to(self.device, non_blocking=True), y.to(self.device, non_blocking=True) + +# ---------------------------------------- +# Model Architecture +# ---------------------------------------- + +class RMSNorm(nn.Module): + def __init__(self, eps: float | None = None): + super().__init__() + self.eps = eps + + def forward(self, x: Tensor) -> Tensor: + return F.rms_norm(x, (x.size(-1),), eps=self.eps) + + +class CastedLinear(nn.Linear): + def forward(self, x: Tensor) -> Tensor: + w = self.weight.to(x.dtype) + bias = self.bias.to(x.dtype) if self.bias is not None else None + return F.linear(x, w, bias) + + +class Rotary(nn.Module): + def __init__(self, dim: int, base: float = 10000.0, train_seq_len: int = 1024, rope_dims: int = 0): + super().__init__() + self.dim = dim + self.base = base + self.train_seq_len = train_seq_len + self.rope_dims = rope_dims if rope_dims > 0 else dim + inv_freq = 1.0 / (base ** (torch.arange(0, self.rope_dims, 2, dtype=torch.float32) / self.rope_dims)) + self.register_buffer("inv_freq", inv_freq, persistent=False) + self._seq_len_cached = 0 + self._cos_cached: Tensor | None = None + self._sin_cached: Tensor | None = None + + def forward(self, seq_len: int, device: torch.device, dtype: torch.dtype) -> tuple[Tensor, Tensor]: + if ( + self._cos_cached is None + or self._sin_cached is None + or self._seq_len_cached != seq_len + or self._cos_cached.device != device + ): + rd = self.rope_dims + if seq_len > self.train_seq_len: + scale = seq_len / self.train_seq_len + new_base = self.base * (scale ** (rd / (rd - 2))) + inv_freq = 1.0 / (new_base ** (torch.arange(0, rd, 2, dtype=torch.float32, device=device) / rd)) + else: + inv_freq = self.inv_freq.to(device) + t = torch.arange(seq_len, device=device, dtype=inv_freq.dtype) + freqs = torch.outer(t, inv_freq) + self._cos_cached = freqs.cos()[None, :, None, :] + self._sin_cached = freqs.sin()[None, :, None, :] + self._seq_len_cached = seq_len + return self._cos_cached.to(dtype=dtype), self._sin_cached.to(dtype=dtype) + + +def apply_rotary_emb(x: Tensor, cos: Tensor, sin: Tensor, rope_dims: int = 0) -> Tensor: + if rope_dims > 0 and rope_dims < x.size(-1): + x_rope, x_pass = x[..., :rope_dims], x[..., rope_dims:] + half = rope_dims // 2 + x1, x2 = x_rope[..., :half], x_rope[..., half:] + x_rope = torch.cat((x1 * cos + x2 * sin, x1 * (-sin) + x2 * cos), dim=-1) + return torch.cat((x_rope, x_pass), dim=-1) + half = x.size(-1) // 2 + x1, x2 = x[..., :half], x[..., half:] + return torch.cat((x1 * cos + x2 * sin, x1 * (-sin) + x2 * cos), dim=-1) + + +class CausalSelfAttention(nn.Module): + def __init__(self, dim: int, num_heads: int, num_kv_heads: int, + rope_base: float, qk_gain_init: float, train_seq_len: int): + super().__init__() + if dim % num_heads != 0: + raise ValueError("model_dim must be divisible by num_heads") + if num_heads % num_kv_heads != 0: + raise ValueError("num_heads must be divisible by num_kv_heads") + self.num_heads = num_heads + self.num_kv_heads = num_kv_heads + self.head_dim = dim // num_heads + if self.head_dim % 2 != 0: + raise ValueError("head_dim must be even for RoPE") + kv_dim = self.num_kv_heads * self.head_dim + self.c_q = CastedLinear(dim, dim, bias=False) + self.c_k = CastedLinear(dim, kv_dim, bias=False) + self.c_v = CastedLinear(dim, kv_dim, bias=False) + self.proj = CastedLinear(dim, dim, bias=False) + self.proj._zero_init = True + self.q_gain = nn.Parameter(torch.full((num_heads,), qk_gain_init, dtype=torch.float32)) + self.rope_dims = 0 + self.rotary = Rotary(self.head_dim, base=rope_base, train_seq_len=train_seq_len) + self.use_xsa = False + + def _xsa_efficient(self, y: Tensor, v: Tensor) -> Tensor: + B, T, H, D = y.shape + Hkv = v.size(-2) + group = H // Hkv + y_g = y.reshape(B, T, Hkv, group, D) + vn = F.normalize(v, dim=-1).unsqueeze(-2) + proj = (y_g * vn).sum(dim=-1, keepdim=True) * vn + return (y_g - proj).reshape(B, T, H, D) + + def forward(self, x: Tensor, v_embed: Tensor | None = None) -> Tensor: + bsz, seqlen, dim = x.shape + q = self.c_q(x).reshape(bsz, seqlen, self.num_heads, self.head_dim) + k = self.c_k(x).reshape(bsz, seqlen, self.num_kv_heads, self.head_dim) + v = self.c_v(x) + if v_embed is not None: + v = v + v_embed + v = v.reshape(bsz, seqlen, self.num_kv_heads, self.head_dim) + q = F.rms_norm(q, (q.size(-1),)) + k = F.rms_norm(k, (k.size(-1),)) + cos, sin = self.rotary(seqlen, x.device, q.dtype) + q = apply_rotary_emb(q, cos, sin, self.rope_dims) + k = apply_rotary_emb(k, cos, sin, self.rope_dims) + q = q * self.q_gain.to(dtype=q.dtype)[None, None, :, None] + y = flash_attn_3_func(q, k, v, causal=True) + if self.use_xsa: + y = self._xsa_efficient(y, v) + y = y.reshape(bsz, seqlen, dim) + return self.proj(y) + + +class ValueEmbedding(nn.Module): + def __init__(self, vocab_size: int, ve_dim: int, model_dim: int): + super().__init__() + self.embed = nn.Embedding(vocab_size, ve_dim) + nn.init.normal_(self.embed.weight, std=0.01) + self.proj = CastedLinear(ve_dim, model_dim, bias=False) if ve_dim != model_dim else None + if self.proj is not None: + nn.init.zeros_(self.proj.weight) + self.scale = nn.Parameter(torch.tensor(0.1, dtype=torch.float32)) + + def forward(self, token_ids: Tensor) -> Tensor: + h = self.embed(token_ids) + if self.proj is not None: + h = self.proj(h) + return h * self.scale.to(dtype=h.dtype) + + +class MLP(nn.Module): + def __init__(self, dim: int, mlp_mult: int): + super().__init__() + hidden = int(mlp_mult * dim) + self.fc = CastedLinear(dim, hidden, bias=False) + self.proj = CastedLinear(hidden, dim, bias=False) + self.proj._zero_init = True + + def forward(self, x: Tensor) -> Tensor: + return self.proj(F.leaky_relu(self.fc(x), negative_slope=0.5).square()) + + +class Block(nn.Module): + def __init__(self, dim: int, num_heads: int, num_kv_heads: int, mlp_mult: int, + rope_base: float, qk_gain_init: float, train_seq_len: int, + layer_idx: int = 0, ln_scale: bool = False): + super().__init__() + self.attn_norm = RMSNorm() + self.mlp_norm = RMSNorm() + self.attn = CausalSelfAttention(dim, num_heads, num_kv_heads, rope_base, qk_gain_init, train_seq_len) + self.mlp = MLP(dim, mlp_mult) + self.attn_scale = nn.Parameter(torch.ones(dim, dtype=torch.float32)) + self.mlp_scale = nn.Parameter(torch.ones(dim, dtype=torch.float32)) + self.resid_mix = nn.Parameter(torch.stack((torch.ones(dim), torch.zeros(dim))).float()) + self.ln_scale_factor = 1.0 / math.sqrt(layer_idx + 1) if ln_scale else 1.0 + + def forward(self, x: Tensor, x0: Tensor, v_embed: Tensor | None = None) -> Tensor: + mix = self.resid_mix.to(dtype=x.dtype) + x_in = mix[0][None, None, :] * x + mix[1][None, None, :] * x0 + attn_out = self.attn(self.attn_norm(x_in) * self.ln_scale_factor, v_embed=v_embed) + x_out = x_in + self.attn_scale.to(dtype=x_in.dtype)[None, None, :] * attn_out + x_out = x_out + self.mlp_scale.to(dtype=x_out.dtype)[None, None, :] * self.mlp(self.mlp_norm(x_out) * self.ln_scale_factor) + return x_out + + +class GPT(nn.Module): + def __init__(self, h: Hyperparameters): + super().__init__() + self._ve_target_dim = h.num_kv_heads * (h.model_dim // h.num_heads) + if h.logit_softcap <= 0.0: + raise ValueError(f"logit_softcap must be positive, got {h.logit_softcap}") + self.tie_embeddings = h.tie_embeddings + self.tied_embed_init_std = h.tied_embed_init_std + self.logit_softcap = h.logit_softcap + self.tok_emb = nn.Embedding(h.vocab_size, h.embedding_dim) + if h.embedding_dim != h.model_dim: + self.embed_proj = CastedLinear(h.embedding_dim, h.model_dim, bias=False) + self.head_proj = CastedLinear(h.model_dim, h.embedding_dim, bias=False) + else: + self.embed_proj = None + self.head_proj = None + self.num_encoder_layers = h.num_layers // 2 + self.num_decoder_layers = h.num_layers - self.num_encoder_layers + self.num_skip_weights = min(self.num_encoder_layers, self.num_decoder_layers) + self.skip_weights = nn.Parameter(torch.ones(self.num_skip_weights, h.model_dim, dtype=torch.float32)) + self.skip_gates = nn.Parameter(torch.zeros(self.num_skip_weights, h.model_dim, dtype=torch.float32)) if h.skip_gates_enabled else None + self.blocks = nn.ModuleList([ + Block(h.model_dim, h.num_heads, h.num_kv_heads, h.mlp_mult, h.rope_base, + h.qk_gain_init, h.train_seq_len, layer_idx=i, ln_scale=h.ln_scale) + for i in range(h.num_layers) + ]) + if h.rope_dims > 0: + head_dim = h.model_dim // h.num_heads + for block in self.blocks: + block.attn.rope_dims = h.rope_dims + block.attn.rotary = Rotary(head_dim, base=h.rope_base, train_seq_len=h.train_seq_len, rope_dims=h.rope_dims) + self.ve_layer_indices = [int(x) for x in h.ve_layers.split(",") if x.strip()] if h.ve_enabled else [] + kv_dim = self._ve_target_dim + if self.ve_layer_indices: + self.ve_shared = ValueEmbedding(h.vocab_size, h.ve_dim, kv_dim) + self.ve_layer_scales = nn.ParameterList( + [nn.Parameter(torch.ones(1, dtype=torch.float32)) for _ in self.ve_layer_indices] + ) + else: + self.ve_shared = None + self.ve_layer_scales = nn.ParameterList() + self.value_embeds = nn.ModuleList() + self.final_norm = RMSNorm() + self.lm_head = None if h.tie_embeddings else CastedLinear(h.embedding_dim, h.vocab_size, bias=False) + if self.lm_head is not None: + self.lm_head._zero_init = True + if h.xsa_last_n > 0: + for i in range(max(0, h.num_layers - h.xsa_last_n), h.num_layers): + self.blocks[i].attn.use_xsa = True + self._init_weights() + + def _init_weights(self) -> None: + if self.tie_embeddings: + nn.init.normal_(self.tok_emb.weight, mean=0.0, std=self.tied_embed_init_std) + for name, module in self.named_modules(): + if isinstance(module, nn.Linear): + if getattr(module, "_zero_init", False): + nn.init.zeros_(module.weight) + elif module.weight.ndim == 2 and module.weight.shape[0] >= 64 and module.weight.shape[1] >= 64: + nn.init.orthogonal_(module.weight, gain=1.0) + + def _get_ve(self, layer_idx: int, input_ids: Tensor, ve_cache: dict | None = None) -> Tensor | None: + if self.ve_shared is None or layer_idx not in self.ve_layer_indices: + return None + if ve_cache is not None and 've' not in ve_cache: + ve_cache['ve'] = self.ve_shared(input_ids) + ve_base = ve_cache['ve'] if ve_cache is not None else self.ve_shared(input_ids) + ve_idx = self.ve_layer_indices.index(layer_idx) + return ve_base * self.ve_layer_scales[ve_idx].to(dtype=ve_base.dtype) + + def forward_logits(self, input_ids: Tensor) -> Tensor: + x = self.tok_emb(input_ids) + x = F.rms_norm(x, (x.size(-1),)) + if self.embed_proj is not None: + x = self.embed_proj(x) + x0 = x + skips: list[Tensor] = [] + ve_cache: dict = {} + for i in range(self.num_encoder_layers): + ve = self._get_ve(i, input_ids, ve_cache) + x = self.blocks[i](x, x0, v_embed=ve) + skips.append(x) + for i in range(self.num_decoder_layers): + bi = self.num_encoder_layers + i + if skips: + scaled_skip = self.skip_weights[i].to(dtype=x.dtype)[None, None, :] * skips.pop() + if self.skip_gates is not None: + g = torch.sigmoid(self.skip_gates[i].to(dtype=x.dtype))[None, None, :] + x = torch.lerp(scaled_skip, x, g) + else: + x = x + scaled_skip + ve = self._get_ve(bi, input_ids, ve_cache) + x = self.blocks[bi](x, x0, v_embed=ve) + x = self.final_norm(x) + if self.head_proj is not None: + x = self.head_proj(x) + if self.tie_embeddings: + logits_proj = F.linear(x, self.tok_emb.weight) + else: + logits_proj = self.lm_head(x) + return self.logit_softcap * torch.tanh(logits_proj / self.logit_softcap) + + def forward(self, input_ids: Tensor, target_ids: Tensor) -> Tensor: + logits = self.forward_logits(input_ids) + return F.cross_entropy( + logits.reshape(-1, logits.size(-1)).float(), target_ids.reshape(-1), reduction="mean") + + +def classify_param(name: str) -> str: + if "tok_emb" in name or "lm_head" in name: + return "embed" + if ".mlp." in name: + return "mlp" + if ".attn." in name or (".proj." in name and ".mlp." not in name): + return "attn" + return "other" + +# ---------------------------------------- +# Optimization +# ---------------------------------------- + +@torch.compile +def zeropower_via_newtonschulz5(G: Tensor, steps: int = 10, eps: float = 1e-7) -> Tensor: + a, b, c = (3.4445, -4.7750, 2.0315) + X = G.bfloat16() + X /= X.norm() + eps + transposed = G.size(0) > G.size(1) + if transposed: + X = X.T + for _ in range(steps): + A = X @ X.T + B = b * A + c * A @ A + X = a * X + B @ X + return X.T if transposed else X + + +class Muon(torch.optim.Optimizer): + def __init__(self, params, lr: float, momentum: float, backend_steps: int, + nesterov: bool = True, weight_decay: float = 0.0): + super().__init__( + params, + dict(lr=lr, momentum=momentum, backend_steps=backend_steps, + nesterov=nesterov, weight_decay=weight_decay), + ) + + @torch.no_grad() + def step(self, closure=None): + loss = None + if closure is not None: + with torch.enable_grad(): + loss = closure() + distributed = dist.is_available() and dist.is_initialized() + world_size = dist.get_world_size() if distributed else 1 + rank = dist.get_rank() if distributed else 0 + for group in self.param_groups: + params = group["params"] + if not params: + continue + lr = group["lr"] + momentum = group["momentum"] + backend_steps = group["backend_steps"] + nesterov = group["nesterov"] + total_params = sum(int(p.numel()) for p in params) + updates_flat = torch.zeros(total_params, device=params[0].device, dtype=torch.bfloat16) + curr = 0 + for i, p in enumerate(params): + if i % world_size == rank and p.grad is not None: + g = p.grad + state = self.state[p] + if "momentum_buffer" not in state: + state["momentum_buffer"] = torch.zeros_like(g) + buf = state["momentum_buffer"] + buf.mul_(momentum).add_(g) + if nesterov: + g = g.add(buf, alpha=momentum) + g = zeropower_via_newtonschulz5(g, steps=backend_steps) + g *= max(1, g.size(0) / g.size(1)) ** 0.5 + updates_flat[curr : curr + p.numel()] = g.reshape(-1) + curr += p.numel() + if distributed: + dist.all_reduce(updates_flat, op=dist.ReduceOp.SUM) + wd = group.get("weight_decay", 0.0) + curr = 0 + for p in params: + if wd > 0.0: + p.data.mul_(1.0 - lr * wd) + g = updates_flat[curr : curr + p.numel()].view_as(p).to(dtype=p.dtype) + p.add_(g, alpha=-lr) + curr += p.numel() + return loss + + +class Optimizers(): + def __init__(self, h: Hyperparameters, base_model: GPT): + block_named_params = list(base_model.blocks.named_parameters()) + matrix_params = [ + p + for name, p in block_named_params + if p.ndim == 2 and not any(pattern in name for pattern in + CONTROL_TENSOR_NAME_PATTERNS) + ] + scalar_params = [ + p + for name, p in block_named_params + if p.ndim < 2 or any(pattern in name for pattern in + CONTROL_TENSOR_NAME_PATTERNS) + ] + if base_model.skip_weights.numel() > 0: + scalar_params.append(base_model.skip_weights) + if base_model.skip_gates is not None and base_model.skip_gates.numel() > 0: + scalar_params.append(base_model.skip_gates) + + token_lr = h.tied_embed_lr if h.tie_embeddings else h.embed_lr + tok_params = [{"params": [base_model.tok_emb.weight], "lr": token_lr, "base_lr": token_lr}] + if base_model.ve_shared is not None: + tok_params.append({"params": [base_model.ve_shared.embed.weight], "lr": token_lr, "base_lr": token_lr}) + if base_model.ve_shared.proj is not None: + matrix_params.append(base_model.ve_shared.proj.weight) + scalar_params.append(base_model.ve_shared.scale) + for s in base_model.ve_layer_scales: + scalar_params.append(s) + + self.optimizer_tok = torch.optim.AdamW( + tok_params, + betas=(h.beta1, h.beta2), + eps=h.adam_eps, + weight_decay=h.embed_wd, + fused=True, + ) + self.optimizer_muon = Muon( + matrix_params, + lr=h.matrix_lr, + momentum=h.muon_momentum, + backend_steps=h.muon_backend_steps, + weight_decay=h.muon_wd, + ) + for group in self.optimizer_muon.param_groups: + group["base_lr"] = h.matrix_lr + self.optimizer_scalar = torch.optim.AdamW( + [{"params": scalar_params, "lr": h.scalar_lr, "base_lr": h.scalar_lr}], + betas=(h.beta1, h.beta2), + eps=h.adam_eps, + weight_decay=h.adam_wd, + fused=True, + ) + self.optimizers: list[torch.optim.Optimizer] = [self.optimizer_tok, self.optimizer_muon, self.optimizer_scalar] + if base_model.lm_head is not None: + self.optimizer_head = torch.optim.Adam( + [{"params": [base_model.lm_head.weight], "lr": h.head_lr, "base_lr": h.head_lr}], + betas=(h.beta1, h.beta2), + eps=h.adam_eps, + fused=True, + ) + self.optimizers.insert(1, self.optimizer_head) + else: + self.optimizer_head = None + + def __iter__(self): + return iter(self.optimizers) + + def zero_grad_all(self) -> None: + for opt in self.optimizers: + opt.zero_grad(set_to_none=True) + + def step(self): + for opt in self.optimizers: + opt.step() + self.zero_grad_all() + +# ---------------------------------------- +# Quantization +# ---------------------------------------- + +CONTROL_TENSOR_NAME_PATTERNS = tuple( + pattern + for pattern in os.environ.get( + "CONTROL_TENSOR_NAME_PATTERNS", + "attn_scale,attn_scales,mlp_scale,mlp_scales,resid_mix,resid_mixes,q_gain,skip_weight,skip_weights,skip_gates,ve_layer_scales,ve_shared.scale", + ).split(",") + if pattern +) +INT8_PER_ROW_SCALE_DTYPE = torch.float16 +INT8_CLIP_PERCENTILE = 99.99984 +INT8_CLIP_Q = INT8_CLIP_PERCENTILE / 100.0 + + +def quantize_float_tensor(t: Tensor) -> tuple[Tensor, Tensor]: + t32 = t.float() + if t32.ndim == 2: + clip_abs = ( + torch.quantile(t32.abs(), INT8_CLIP_Q, dim=1) + if t32.numel() + else torch.empty((t32.shape[0],), dtype=torch.float32) + ) + clipped = torch.maximum(torch.minimum(t32, clip_abs[:, None]), -clip_abs[:, None]) + scale = (clip_abs / 127.0).clamp_min(1.0 / 127.0) + q = torch.clamp(torch.round(clipped / scale[:, None]), -127, 127).to(torch.int8).contiguous() + return q, scale.to(dtype=INT8_PER_ROW_SCALE_DTYPE).contiguous() + + clip_abs = float(torch.quantile(t32.abs().flatten(), INT8_CLIP_Q).item()) if t32.numel() else 0.0 + scale = torch.tensor(clip_abs / 127.0 if clip_abs > 0 else 1.0, dtype=torch.float32) + q = torch.clamp(torch.round(torch.clamp(t32, -clip_abs, clip_abs) / scale), -127, 127).to(torch.int8).contiguous() + return q, scale + + +def restore_fp32_params(model: nn.Module) -> None: + """After .bfloat16(), restore CastedLinear weights and control params to FP32.""" + for module in model.modules(): + if isinstance(module, CastedLinear): + module.float() + for name, param in model.named_parameters(): + if (param.ndim < 2 or any(pattern in name for pattern in CONTROL_TENSOR_NAME_PATTERNS)) and param.dtype != torch.float32: + param.data = param.data.float() + + +def quantize_int6_per_row(t: Tensor, clip_range: int = 31) -> tuple[Tensor, Tensor]: + t32 = t.float() + if t32.ndim == 2: + best_q, best_s, best_err = None, None, float('inf') + for pct in [0.9990, 0.9995, 0.9999, 0.99999, 1.0]: + if pct < 1.0: + row_clip = torch.quantile(t32.abs(), pct, dim=1) + else: + row_clip = t32.abs().amax(dim=1) + s = (row_clip / clip_range).clamp_min(1.0 / clip_range).to(torch.float16) + q = torch.clamp(torch.round(t32 / s.float()[:, None]), -clip_range, clip_range).to(torch.int8) + recon = q.float() * s.float()[:, None] + err = (t32 - recon).pow(2).mean().item() + if err < best_err: + best_q, best_s, best_err = q, s, err + return best_q, best_s + amax = t32.abs().max().item() + scale = torch.tensor(amax / clip_range if amax > 0 else 1.0, dtype=torch.float16) + q = torch.clamp(torch.round(t32 / scale.float()), -clip_range, clip_range).to(torch.int8) + return q, scale + + +def collect_hessians( + model: nn.Module, + train_loader: DistributedTokenLoader, + h: Hyperparameters, + device: torch.device, + n_calibration_batches: int = 64, +) -> dict[str, Tensor]: + """Run calibration batches and collect H = X^T X for each CastedLinear layer.""" + hessians: dict[str, Tensor] = {} + hooks = [] + + def make_hook(name: str): + def hook_fn(module, inp, out): + x = inp[0].detach().float() + if x.ndim == 3: + x = x.reshape(-1, x.shape[-1]) + if name not in hessians: + hessians[name] = torch.zeros( + x.shape[1], x.shape[1], dtype=torch.float32, device=device + ) + hessians[name].addmm_(x.T, x) + return hook_fn + + for name, module in model.named_modules(): + if isinstance(module, CastedLinear) and module.weight.numel() > 65536: + cat = classify_param(name + ".weight") + if cat in ("mlp", "attn"): + hooks.append(module.register_forward_hook(make_hook(name + ".weight"))) + + model.eval() + with torch.no_grad(): + for i in range(n_calibration_batches): + x, y = train_loader.next_batch( + h.train_batch_tokens, + h.train_seq_len, h.grad_accum_steps, + ) + model.forward_logits(x) + + for h in hooks: + h.remove() + + for name in hessians: + hessians[name] = hessians[name].cpu() / n_calibration_batches + + return hessians + + +def gptq_quantize_weight( + w: Tensor, + H: Tensor, + clip_range: int = 31, + block_size: int = 128, +) -> tuple[Tensor, Tensor]: + """GPTQ with Cholesky error compensation and actorder (Frantar et al., ICLR 2023).""" + W_orig = w.float().clone() + rows, cols = W_orig.shape + H = H.float().clone() + + # Zero out dead columns and add damping + dead = torch.diag(H) == 0 + H[dead, dead] = 1 + damp = 0.01 * H.diag().mean() + H.diagonal().add_(damp) + + # Column reordering by descending Hessian diagonal (actorder) + perm = torch.argsort(H.diag(), descending=True) + invperm = torch.argsort(perm) + W_perm = W_orig[:, perm].clone() + W_perm[:, dead[perm]] = 0 + H = H[perm][:, perm] + + # Upper Cholesky of the inverse + try: + Hinv = torch.cholesky_inverse(torch.linalg.cholesky(H)) + Hinv = torch.linalg.cholesky(Hinv, upper=True) + except torch.linalg.LinAlgError: + return quantize_int6_per_row(W_orig, clip_range) + + # Search over scale candidates, running full GPTQ for each + best_q, best_scale, best_err = None, None, float('inf') + for pct in [0.9990, 0.9995, 0.9999, 0.99999, 1.0]: + if pct < 1.0: + row_clip = torch.quantile(W_orig.abs(), pct, dim=1) + else: + row_clip = W_orig.abs().amax(dim=1) + s = (row_clip / clip_range).clamp_min(1.0 / clip_range).to(torch.float16) + sf = s.float() + + Q = torch.zeros(rows, cols, dtype=torch.int8) + W_work = W_perm.clone() + for i1 in range(0, cols, block_size): + i2 = min(i1 + block_size, cols) + W_block = W_work[:, i1:i2].clone() + Hinv_block = Hinv[i1:i2, i1:i2] + Err = torch.zeros(rows, i2 - i1) + for j in range(i2 - i1): + w_col = W_block[:, j] + d = Hinv_block[j, j] + q_col = torch.clamp(torch.round(w_col / sf), -clip_range, clip_range) + Q[:, i1 + j] = q_col.to(torch.int8) + err = (w_col - q_col.float() * sf) / d + Err[:, j] = err + W_block[:, j:] -= err.unsqueeze(1) * Hinv_block[j, j:].unsqueeze(0) + if i2 < cols: + W_work[:, i2:] -= Err @ Hinv[i1:i2, i2:] + + recon = Q.float() * sf[:, None] + mse = (W_perm - recon).pow(2).mean().item() + if mse < best_err: + best_q, best_scale, best_err = Q, s, mse + + return best_q[:, invperm], best_scale + + +def gptq_mixed_quantize_int6( + state_dict: dict[str, Tensor], + int6_cats: set[str], + hessians: dict[str, Tensor], +) -> tuple[dict[str, Tensor], dict[str, object]]: + """Mixed quantization using full GPTQ for layers with Hessians, fallback to clip-search.""" + result: dict[str, Tensor] = {} + meta: dict[str, object] = {} + gptq_count = 0 + fallback_count = 0 + + for name, tensor in state_dict.items(): + t = tensor.detach().cpu().contiguous() + cat = classify_param(name) + + if not t.is_floating_point() or t.numel() <= 65536: + result[name] = t.to(torch.float16) if t.is_floating_point() else t + meta[name] = "passthrough" + continue + + if any(p in name for p in CONTROL_TENSOR_NAME_PATTERNS): + result[name] = t.float() + meta[name] = "passthrough_ctrl" + continue + + if cat in int6_cats and t.ndim == 2: + if name in hessians: + q, s = gptq_quantize_weight(t, hessians[name]) + gptq_count += 1 + meta[name] = {"type": "int6", "method": "gptq"} + else: + q, s = quantize_int6_per_row(t) + fallback_count += 1 + meta[name] = {"type": "int6", "method": "clip_search"} + result[name + ".q"] = q + result[name + ".scale"] = s + elif cat in int6_cats and t.ndim >= 1: + q, s = quantize_int6_per_row(t) + result[name + ".q"] = q + result[name + ".scale"] = s + meta[name] = {"type": "int6"} + else: + q, s = quantize_float_tensor(t) + result[name + ".q"] = q + result[name + ".scale"] = s + meta[name] = {"type": "int8"} + + log(f"GPTQ quantization: {gptq_count} layers with full GPTQ, {fallback_count} fallback to clip-search") + return result, meta + + +def mixed_quantize_int6(state_dict: dict[str, Tensor], int6_cats: set[str]): + result: dict[str, Tensor] = {} + meta: dict[str, object] = {} + for name, tensor in state_dict.items(): + t = tensor.detach().cpu().contiguous() + cat = classify_param(name) + if not t.is_floating_point() or t.numel() <= 65536: + result[name] = t.to(torch.float16) if t.is_floating_point() else t + meta[name] = "passthrough" + continue + if any(p in name for p in CONTROL_TENSOR_NAME_PATTERNS): + result[name] = t.float() + meta[name] = "passthrough_ctrl" + continue + if cat in int6_cats and t.ndim >= 1: + q, s = quantize_int6_per_row(t) + result[name + ".q"] = q + result[name + ".scale"] = s + meta[name] = {"type": "int6"} + else: + q, s = quantize_float_tensor(t) + result[name + ".q"] = q + result[name + ".scale"] = s + meta[name] = {"type": "int8"} + return result, meta + + +def dequantize_mixed_int6(result: dict[str, Tensor], meta: dict[str, object], + template_sd: dict[str, Tensor]) -> dict[str, Tensor]: + out: dict[str, Tensor] = {} + for name, orig in template_sd.items(): + info = meta.get(name) + if info is None: + continue + orig_dtype = orig.dtype + if info in ("passthrough", "passthrough_ctrl", "passthrough_fp16"): + t = result[name] + if t.dtype == torch.float16 and orig_dtype in (torch.float32, torch.bfloat16): + t = t.to(orig_dtype) + out[name] = t + continue + q, s = result[name + ".q"], result[name + ".scale"] + if s.ndim > 0: + out[name] = (q.float() * s.float().view(q.shape[0], *([1] * (q.ndim - 1)))).to(orig_dtype) + else: + out[name] = (q.float() * float(s.item())).to(orig_dtype) + return out + + +_BSHF_MAGIC = b"BSHF" + + +def _byte_shuffle(data: bytes, stride: int = 2) -> bytes: + """Transpose byte stream by stride position for better compression.""" + if stride <= 1 or len(data) < stride: + return data + src = np.frombuffer(data, dtype=np.uint8) + n = len(src) + out = np.empty(n, dtype=np.uint8) + dest_off = 0 + for pos in range(stride): + chunk = src[pos::stride] + out[dest_off:dest_off + len(chunk)] = chunk + dest_off += len(chunk) + return _BSHF_MAGIC + bytes([stride]) + out.tobytes() + + +def _byte_unshuffle(data: bytes) -> bytes: + """Inverse of _byte_shuffle. Auto-detects BSHF magic header.""" + if len(data) < 5 or data[:4] != _BSHF_MAGIC: + return data + stride = data[4] + if stride < 2: + return data[5:] + payload = np.frombuffer(data, dtype=np.uint8, offset=5) + n = len(payload) + out = np.empty(n, dtype=np.uint8) + src_off = 0 + for pos in range(stride): + chunk_len = n // stride + (1 if pos < n % stride else 0) + out[pos::stride][:chunk_len] = payload[src_off:src_off + chunk_len] + src_off += chunk_len + return out.tobytes() + + +def _compress(data: bytes, compressor: str, byte_shuffle: bool = True) -> bytes: + if byte_shuffle: + data = _byte_shuffle(data) + if compressor == "lzma": + return lzma.compress(data, preset=6) + elif compressor == "brotli": + import brotli + return brotli.compress(data, quality=11) + raise ValueError(f"Unknown compressor: {compressor!r}") + + +def _decompress(data: bytes, compressor: str, byte_shuffle: bool = True) -> bytes: + if compressor == "lzma": + raw = lzma.decompress(data) + elif compressor == "brotli": + import brotli + raw = brotli.decompress(data) + if byte_shuffle: + raw = _byte_unshuffle(raw) + return raw + raise ValueError(f"Unknown compressor: {compressor!r}") + + +def serialize(h: Hyperparameters, base_model: torch.nn.Module, code: str) -> int: + model_bytes = None + code_bytes = len(code.encode("utf-8")) + if h.is_main_process: + torch.save(base_model.state_dict(), h.model_path) + model_bytes = os.path.getsize(h.model_path) + log(f"Serialized model: {model_bytes} bytes") + log(f"Code size: {code_bytes} bytes") + + sd_cpu = {k: v.detach().cpu() for k, v in base_model.state_dict().items()} + if h.gptq_enabled: + log("GPTQ:collecting Hessians from calibration data...") + t0 = time.perf_counter() + calib_loader = DistributedTokenLoader(h.train_files, h.rank, h.world_size, + torch.device("cuda", h.local_rank)) + hessians = collect_hessians( + base_model, calib_loader, h, + torch.device("cuda", h.local_rank), + n_calibration_batches=h.gptq_calibration_batches, + ) + log(f"GPTQ:collected {len(hessians)} Hessians in {time.perf_counter() - t0:.1f}s") + quant_result, quant_meta = gptq_mixed_quantize_int6(sd_cpu, {"mlp", "attn"}, hessians) + else: + quant_result, quant_meta = mixed_quantize_int6(sd_cpu, {"mlp", "attn"}) + + quant_buf = io.BytesIO() + torch.save({"w": quant_result, "m": quant_meta}, quant_buf) + quant_raw = quant_buf.getvalue() + quant_blob = _compress(quant_raw, h.compressor) + quant_file_bytes = len(quant_blob) + bytes_total = quant_file_bytes + code_bytes + if h.is_main_process: + with open(h.quantized_model_path, "wb") as f: + f.write(quant_blob) + log(f"Serialized model int6+{h.compressor}: {quant_file_bytes} bytes") + log(f"Total submission size int6+{h.compressor}: {bytes_total} bytes") + + +def deserialize(h: Hyperparameters, device: torch.device) -> GPT: + eval_model = GPT(h).to(device).bfloat16() + restore_fp32_params(eval_model) + + sd_cpu = {k: v.detach().cpu() for k, v in eval_model.state_dict().items()} + + with open(h.quantized_model_path, "rb") as f: + quant_blob_disk = f.read() + quant_state = torch.load( + io.BytesIO(_decompress(quant_blob_disk, h.compressor)), + map_location="cpu", + ) + deq_state = dequantize_mixed_int6(quant_state["w"], quant_state["m"], sd_cpu) + eval_model.load_state_dict(deq_state, strict=True) + + return eval_model + +# ---------------------------------------- +# Evaluation +# ---------------------------------------- + +# PPM byte mixture (adaptive-λ). Byte-level PPM-D order 5, online TTT on scored val. +# Vectorized byte-stream + NN-spread for full-val throughput; PPM kernel in tight Python. +def _ppm_mixture_bpb(tgt_np, lp_np, sp, O=5, H=0.9, L_=0.05, T=0.9): + V = sp.vocab_size() + piece_bytes = [None]*V; piece_lens = np.zeros(V, dtype=np.int32) + for i in range(V): + p = sp.id_to_piece(i) + b = (' '+p[1:]).encode() if p.startswith('▁') else p.encode() + piece_bytes[i] = b; piece_lens[i] = len(b) + per_tok_len = piece_lens[tgt_np] + bs = b"".join(piece_bytes[int(t)] for t in tgt_np) + N = len(bs) + # Vectorized NN-spread: each byte gets log_p_token / L(token) + rep_lp = np.repeat(lp_np.astype(np.float64), per_tok_len) + rep_len = np.repeat(per_tok_len.astype(np.float64), per_tok_len) + nlp = np.where(rep_len > 0, rep_lp / rep_len, 0.0) + # PPM-D + tabs = [dict() for _ in range(O+1)] + plp = np.empty(N, dtype=np.float64); cf = np.empty(N, dtype=np.float64) + LN256 = math.log(1/256); log = math.log + h = b"" + for i in range(N): + x = bs[i] + if i == 0: + plp[i] = LN256; cf[i] = 1/256 + else: + esc = 1.0; pf = 0.0 + lim = O if i > O else i + for o in range(lim, -1, -1): + k = h[-o:] if o else b"" + e = tabs[o].get(k) + if e is None: continue + tot = e[0]; d = e[1]; c = d.get(x, 0) + if c > 0: + pf = esc * (2*c - 1) / (2*tot); break + esc *= len(d) / (2*tot) + else: + pf = esc / 256 + if pf < 1e-20: pf = 1e-20 + plp[i] = log(pf); cf[i] = pf + for o in range(O+1): + k = h[-o:] if o else b"" + e = tabs[o].get(k) + if e is None: + tabs[o][k] = [1, {x: 1}] + else: + e[0] += 1; d = e[1]; d[x] = d.get(x, 0) + 1 + h = (h + bytes([x]))[-O:] + lam = np.where(cf > T, L_, H) + pm = lam*np.exp(nlp) + (1-lam)*np.exp(plp) + return float(-np.log2(np.maximum(pm, 1e-300)).sum()/N) + + +def _loss_bpb(loss_sum, token_count, byte_count) -> tuple[float, float]: + val_loss = (loss_sum / token_count).item() + val_bpb = val_loss / math.log(2.0) * (token_count.item() / byte_count.item()) + return val_loss, val_bpb + + +def eval_val( + h: Hyperparameters, + device: torch.device, + val_data: ValidationData, + model: nn.Module +) -> tuple[float, float]: + seq_len = h.eval_seq_len + local_batch_tokens = h.val_batch_tokens // (h.world_size * h.grad_accum_steps) + if local_batch_tokens < seq_len: + raise ValueError( + "VAL_BATCH_SIZE must provide at least one sequence per rank; " + f"got VAL_BATCH_SIZE={h.val_batch_tokens}, WORLD_SIZE={h.world_size}, " + f"GRAD_ACCUM_STEPS={h.grad_accum_steps}, seq_len={seq_len}" + ) + local_batch_seqs = local_batch_tokens // seq_len + total_seqs = (val_data.val_tokens.numel() - 1) // seq_len + seq_start = (total_seqs * h.rank) // h.world_size + seq_end = (total_seqs * (h.rank + 1)) // h.world_size + val_loss_sum = torch.zeros((), device=device, dtype=torch.float64) + val_token_count = torch.zeros((), device=device, dtype=torch.float64) + val_byte_count = torch.zeros((), device=device, dtype=torch.float64) + + model.eval() + with torch.inference_mode(): + for batch_seq_start in range(seq_start, seq_end, local_batch_seqs): + batch_seq_end = min(batch_seq_start + local_batch_seqs, seq_end) + raw_start = batch_seq_start * seq_len + raw_end = batch_seq_end * seq_len + 1 + local = val_data.val_tokens[raw_start:raw_end].to(device=device, dtype=torch.int64, non_blocking=True) + x = local[:-1].reshape(-1, seq_len) + y = local[1:].reshape(-1, seq_len) + with torch.autocast(device_type="cuda", dtype=torch.bfloat16, enabled=True): + batch_loss = model(x, y).detach() + batch_token_count = float(y.numel()) + val_loss_sum += batch_loss.to(torch.float64) * batch_token_count + val_token_count += batch_token_count + prev_ids = x.reshape(-1) + tgt_ids = y.reshape(-1) + token_bytes = val_data.base_bytes_lut[tgt_ids].to(dtype=torch.int16) + token_bytes += (val_data.has_leading_space_lut[tgt_ids] & ~val_data.is_boundary_token_lut[prev_ids]).to(dtype=torch.int16) + val_byte_count += token_bytes.to(torch.float64).sum() + + if dist.is_available() and dist.is_initialized(): + dist.all_reduce(val_loss_sum, op=dist.ReduceOp.SUM) + dist.all_reduce(val_token_count, op=dist.ReduceOp.SUM) + dist.all_reduce(val_byte_count, op=dist.ReduceOp.SUM) + + model.train() + return _loss_bpb(val_loss_sum, val_token_count, val_byte_count) + + +def eval_val_sliding( + h: Hyperparameters, + device: torch.device, + val_data: ValidationData, + base_model: nn.Module, + batch_seqs: int = 32 +) -> tuple[float, float]: + """Sliding window evaluation: each token scored with maximum context.""" + base_model.eval() + logits_fn = torch.compile(base_model.forward_logits, dynamic=False, fullgraph=True) + + seq_len = h.eval_seq_len + context_size = seq_len - h.eval_stride + total_tokens = val_data.val_tokens.numel() - 1 + + window_starts = [ws for ws in range(0, total_tokens, h.eval_stride) + if ws + context_size < total_tokens] + + total_windows = len(window_starts) + my_s = (total_windows * h.rank) // h.world_size + my_e = (total_windows * (h.rank + 1)) // h.world_size + my_windows = window_starts[my_s:my_e] + + loss_sum = torch.zeros((), device=device, dtype=torch.float64) + token_count = torch.zeros((), device=device, dtype=torch.float64) + byte_count = torch.zeros((), device=device, dtype=torch.float64) + lp_chunks: list[Tensor] = [] + tgt_chunks: list[Tensor] = [] + + with torch.inference_mode(): + for bi in range(0, len(my_windows), batch_seqs): + batch_ws = my_windows[bi:bi + batch_seqs] + bsz = len(batch_ws) + x_batch = torch.zeros(bsz, seq_len, dtype=torch.int64, device=device) + y_batch = torch.zeros(bsz, seq_len, dtype=torch.int64, device=device) + wlens: list[int] = [] + for i, ws in enumerate(batch_ws): + we = min(ws + seq_len, total_tokens); wlen = we - ws + wlens.append(wlen) + chunk = val_data.val_tokens[ws:we + 1].to(dtype=torch.int64, device=device) + x_batch[i, :wlen] = chunk[:-1]; y_batch[i, :wlen] = chunk[1:] + with torch.autocast(device_type="cuda", dtype=torch.bfloat16): + logits = logits_fn(x_batch) + nll = F.cross_entropy(logits.reshape(-1, logits.size(-1)).float(), y_batch.reshape(-1), reduction="none").reshape(bsz, seq_len) + for i, ws in enumerate(batch_ws): + wlen = wlens[i]; s = 0 if ws == 0 else context_size + scored_nll = nll[i, s:wlen].to(torch.float64) + loss_sum += scored_nll.sum(); token_count += float(wlen - s) + tgt = y_batch[i, s:wlen]; prev = x_batch[i, s:wlen] + tb = val_data.base_bytes_lut[tgt].to(torch.float64) + tb += (val_data.has_leading_space_lut[tgt] & ~val_data.is_boundary_token_lut[prev]).to(torch.float64) + byte_count += tb.sum() + lp_chunks.append((-scored_nll).float()); tgt_chunks.append(tgt.to(torch.int32)) + + if dist.is_available() and dist.is_initialized(): + dist.all_reduce(loss_sum, op=dist.ReduceOp.SUM) + dist.all_reduce(token_count, op=dist.ReduceOp.SUM) + dist.all_reduce(byte_count, op=dist.ReduceOp.SUM) + nn_val_loss, nn_val_bpb = _loss_bpb(loss_sum, token_count, byte_count) + + # NN + byte-PPM adaptive mixture (online, legal TTT on already-scored val). + # Full-val mixture: no subsampling. Vectorized byte-stream + NN-spread inside + # _ppm_mixture_bpb make this fit comfortably under the 10-min eval cap. + val_bpb_out = nn_val_bpb + if os.environ.get("PPM_MIX_ENABLED", "0") == "1": + lpl = torch.cat(lp_chunks) if lp_chunks else torch.zeros(0, dtype=torch.float32, device=device) + tgl = torch.cat(tgt_chunks) if tgt_chunks else torch.zeros(0, dtype=torch.int32, device=device) + if dist.is_available() and dist.is_initialized(): + sz = torch.tensor([lpl.numel()], device=device, dtype=torch.long) + asz = [torch.zeros_like(sz) for _ in range(h.world_size)] + dist.all_gather(asz, sz); sizes = [int(x.item()) for x in asz]; mx = max(sizes) + if lpl.numel() < mx: + lpl = torch.cat([lpl, torch.zeros(mx - lpl.numel(), dtype=lpl.dtype, device=device)]) + tgl = torch.cat([tgl, torch.zeros(mx - tgl.numel(), dtype=tgl.dtype, device=device)]) + gl = [torch.zeros(mx, dtype=lpl.dtype, device=device) for _ in range(h.world_size)] if h.rank == 0 else None + gt = [torch.zeros(mx, dtype=tgl.dtype, device=device) for _ in range(h.world_size)] if h.rank == 0 else None + if h.rank == 0: + dist.gather(lpl, gl, dst=0); dist.gather(tgl, gt, dst=0) + lpa = torch.cat([gl[r][:sizes[r]] for r in range(h.world_size)]).cpu().numpy() + tga = torch.cat([gt[r][:sizes[r]] for r in range(h.world_size)]).cpu().numpy() + else: + dist.gather(lpl, None, dst=0); dist.gather(tgl, None, dst=0) + lpa = None; tga = None + else: + lpa = lpl.cpu().numpy(); tga = tgl.cpu().numpy() + if h.rank == 0: + V = val_data.sp.vocab_size() + _pl = np.zeros(V, dtype=np.int32) + for _i in range(V): + _p = val_data.sp.id_to_piece(_i) + _pl[_i] = len((' '+_p[1:]).encode() if _p.startswith('▁') else _p.encode()) + nb = int(_pl[tga].sum()) + mb = _ppm_mixture_bpb(tga, lpa, val_data.sp) + nn_byte_bpb = (-lpa.sum()/math.log(2))/max(nb, 1) + log(f"[ppm_mix] tokens={len(tga)} bytes={nb} NN_byte={nn_byte_bpb:.5f} mix={mb:.5f} Δ={mb-nn_byte_bpb:+.5f} NN_full={nn_val_bpb:.5f}") + val_bpb_out = mb + + base_model.train() + return nn_val_loss, val_bpb_out + + +def timed_eval(label: str, fn, *args, **kwargs) -> tuple[float, float]: + torch.cuda.synchronize() + t0 = time.perf_counter() + val_loss, val_bpb = fn(*args, **kwargs) + torch.cuda.synchronize() + elapsed_ms = 1000.0 * (time.perf_counter() - t0) + log(f"{label} val_loss:{val_loss:.8f} val_bpb:{val_bpb:.8f} eval_time:{elapsed_ms:.0f}ms") + return val_loss, val_bpb + + +def run_evals( + h: Hyperparameters, + device: torch.device, + val_data: ValidationData, + eval_model: torch.nn.Module +): + compiled_model = torch.compile(eval_model, dynamic=False, fullgraph=True) + timed_eval("final_int6_roundtrip", eval_val, h, device, val_data, compiled_model) + if h.sliding_window_enabled: + timed_eval("final_int6_sliding_window", eval_val_sliding, h, device, val_data, eval_model) + +# ----------------------------- +# Training +# ----------------------------- + +def train_model(h: Hyperparameters, device: torch.device, val_data: ValidationData) -> None: + # Set up model + base_model = GPT(h).to(device).bfloat16() + restore_fp32_params(base_model) + compiled_model = torch.compile(base_model, dynamic=False, fullgraph=True) + if h.distributed: + model = DDP(compiled_model, device_ids=[h.local_rank], broadcast_buffers=False) + else: + model = compiled_model + log(f"model_params:{sum(p.numel() for p in base_model.parameters())}") + + # Set up optimizer and load train data + optimizers = Optimizers(h, base_model) + train_loader = DistributedTokenLoader( h.train_files, h.rank, h.world_size, device) + + # Helper functions for training + max_wallclock_ms = 1000.0 * h.max_wallclock_seconds if h.max_wallclock_seconds > 0 else None + if h.gptq_enabled and max_wallclock_ms is not None: + max_wallclock_ms -= h.gptq_reserve_seconds * 1000.0 + log(f"gptq:reserving {h.gptq_reserve_seconds:.0f}s, effective={max_wallclock_ms:.0f}ms") + + def training_frac(step: int, elapsed_ms: float) -> float: + """Fraction of training completed (0 to 1), using step or wallclock.""" + if max_wallclock_ms is None: + return step / max(h.iterations, 1) + return elapsed_ms / max(max_wallclock_ms, 1e-9) + + def lr_mul(frac: float) -> float: + if h.warmdown_frac <= 0: + return 1.0 + if frac >= 1.0 - h.warmdown_frac: + return max((1.0 - frac) / h.warmdown_frac, h.min_lr) + return 1.0 + + def step_fn(step, lr_scale): + optimizers.zero_grad_all() + train_loss = torch.zeros((), device=device) + for micro_step in range(h.grad_accum_steps): + if h.distributed: + model.require_backward_grad_sync = micro_step == h.grad_accum_steps - 1 + x, y = train_loader.next_batch(h.train_batch_tokens, h.train_seq_len, h.grad_accum_steps) + with torch.autocast(device_type="cuda", dtype=torch.bfloat16, enabled=True): + loss = model(x, y) + train_loss += loss.detach() + (loss / h.grad_accum_steps).backward() + train_loss /= h.grad_accum_steps + + frac = min(step / h.muon_momentum_warmup_steps, 1.0) if h.muon_momentum_warmup_steps > 0 else 1.0 + muon_momentum = (1 - frac) * h.muon_momentum_warmup_start + frac * h.muon_momentum + for group in optimizers.optimizer_muon.param_groups: + group["momentum"] = muon_momentum + + for opt in optimizers: + for group in opt.param_groups: + group["lr"] = group["base_lr"] * lr_scale + + if h.grad_clip_norm > 0: + torch.nn.utils.clip_grad_norm_(base_model.parameters(), h.grad_clip_norm) + + optimizers.step() + return train_loss + + # Model warmup + if h.warmup_steps > 0: + initial_model_state = {name: tensor.detach().cpu().clone() for name, tensor in base_model.state_dict().items()} + initial_optimizer_states = [copy.deepcopy(opt.state_dict()) for opt in optimizers] + model.train() + for warmup_step in range(h.warmup_steps): + step_fn(warmup_step, 1.0) + if warmup_step <= 5 or (warmup_step + 1) % 10 == 0 or warmup_step + 1 == h.warmup_steps: + log(f"warmup_step: {warmup_step + 1}/{h.warmup_steps}") + base_model.load_state_dict(initial_model_state, strict=True) + for opt, state in zip(optimizers, initial_optimizer_states, strict=True): + opt.load_state_dict(state) + optimizers.zero_grad_all() + if h.distributed: + model.require_backward_grad_sync = True + train_loader = DistributedTokenLoader( + h.train_files, h.rank, h.world_size, device) + + # Training loop + ema_state = {name: t.detach().float().clone() for name, t in base_model.state_dict().items()} + ema_decay = h.ema_decay + + training_time_ms = 0.0 + stop_after_step: int | None = None + torch.cuda.synchronize() + t0 = time.perf_counter() + + step = 0 + while True: + last_step = step == h.iterations or (stop_after_step is not None and step >= stop_after_step) + + should_validate = last_step or (h.val_loss_every > 0 and step % h.val_loss_every == 0) + if should_validate: + torch.cuda.synchronize() + training_time_ms += 1000.0 * (time.perf_counter() - t0) + val_loss, val_bpb = eval_val(h, device, val_data, model) + log(f"{step}/{h.iterations} val_loss: {val_loss:.4f} val_bpb: {val_bpb:.4f}") + torch.cuda.synchronize() + t0 = time.perf_counter() + + if last_step: + if stop_after_step is not None and step < h.iterations: + log( + f"stopping_early: wallclock_cap train_time: {training_time_ms:.0f}ms " + f"step: {step}/{h.iterations}" + ) + break + + elapsed_ms = training_time_ms + 1000.0 * (time.perf_counter() - t0) + frac = training_frac(step, elapsed_ms) + scale = lr_mul(frac) + train_loss = step_fn(step, scale) + + with torch.no_grad(): + for name, t in base_model.state_dict().items(): + ema_state[name].mul_(ema_decay).add_(t.detach().float(), alpha=1.0 - ema_decay) + + step += 1 + approx_training_time_ms = training_time_ms + 1000.0 * (time.perf_counter() - t0) + + should_log_train = ( + h.train_log_every > 0 + and (step <= 5 or step % h.train_log_every == 0 or stop_after_step is not None) + ) + if should_log_train: + tok_per_sec = step * h.train_batch_tokens / (approx_training_time_ms / 1000.0) + log( + f"{step}/{h.iterations} train_loss: {train_loss.item():.4f} " + f"train_time: {approx_training_time_ms / 60000:.1f}m tok/s: {tok_per_sec:.0f}" + ) + + reached_cap = max_wallclock_ms is not None and approx_training_time_ms >= max_wallclock_ms + if h.distributed and max_wallclock_ms is not None: + reached_cap_tensor = torch.tensor(int(reached_cap), device=device) + dist.all_reduce(reached_cap_tensor, op=dist.ReduceOp.MAX) + reached_cap = bool(reached_cap_tensor.item()) + if stop_after_step is None and reached_cap: + stop_after_step = step + + log( + f"peak memory allocated: {torch.cuda.max_memory_allocated() // 1024 // 1024} MiB " + f"reserved: {torch.cuda.max_memory_reserved() // 1024 // 1024} MiB" + ) + + # Weight averaging + log("ema:applying EMA weights") + current_state = base_model.state_dict() + avg_state = {name: t.to(dtype=current_state[name].dtype) for name, t in ema_state.items()} + base_model.load_state_dict(avg_state, strict=True) + + return base_model, compiled_model + + +def train_and_eval(h: Hyperparameters, device: torch.device) -> None: + random.seed(h.seed) + np.random.seed(h.seed) + torch.manual_seed(h.seed) + torch.cuda.manual_seed_all(h.seed) + + val_data = ValidationData(h, device) + log(f"train_shards: {len(list(Path(h.datasets_dir).resolve().glob('fineweb_train_*.bin')))}") + log(f"val_tokens: {val_data.val_tokens.numel() - 1}") + + base_model, compiled_model = train_model(h, device, val_data) + timed_eval("pre-quantization post-ema", eval_val, h, device, val_data, compiled_model) + + serialize(h, base_model, Path(__file__).read_text(encoding="utf-8")) + if h.distributed: + dist.barrier() + eval_model = deserialize(h, device) + + run_evals(h, device, val_data, eval_model) + + +def main(): + world_size = int(os.environ.get("WORLD_SIZE", "1")) + local_rank = int(os.environ.get("LOCAL_RANK", "0")) + distributed = "RANK" in os.environ and "WORLD_SIZE" in os.environ + + if not torch.cuda.is_available(): + raise RuntimeError("CUDA is required") + if world_size <= 0: + raise ValueError(f"WORLD_SIZE must be positive, got {world_size}") + if 8 % world_size != 0: + raise ValueError(f"WORLD_SIZE={world_size} must divide 8 so grad_accum_steps stays integral") + + device = torch.device("cuda", local_rank) + torch.cuda.set_device(device) + if distributed: + dist.init_process_group(backend="nccl", device_id=device) + dist.barrier() + + torch.backends.cuda.matmul.allow_tf32 = True + torch.backends.cudnn.allow_tf32 = True + torch.set_float32_matmul_precision("high") + from torch.backends.cuda import enable_cudnn_sdp, enable_flash_sdp, enable_math_sdp, enable_mem_efficient_sdp + + enable_cudnn_sdp(False) + enable_flash_sdp(True) + enable_mem_efficient_sdp(False) + enable_math_sdp(False) + torch._dynamo.config.optimize_ddp = False + + h = Hyperparameters() + set_logging_hparams(h) + if h.is_main_process: + os.makedirs("logs", exist_ok=True) + log(100 * "=", console=False) + log("Hyperparameters:", console=True) + for k, v in sorted(vars(type(h)).items()): + if not k.startswith("_"): + log(f" {k}: {v}", console=True) + log(Path(__file__).read_text(encoding="utf-8"), console=False) + log("=" * 100, console=False) + log(f"Running Python {sys.version}", console=False) + log(f"Running PyTorch {torch.__version__}", console=False) + log( + subprocess.run(["nvidia-smi"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, check=False).stdout, + console=False, + ) + log("=" * 100, console=False) + + train_and_eval(h, device) + + if distributed: + dist.destroy_process_group() + + +if __name__ == "__main__": + main() \ No newline at end of file diff --git a/records/track_10min_16mb/2026-04-23_SP4096_PPM_AdaptiveMix/train_seed1337.log b/records/track_10min_16mb/2026-04-23_SP4096_PPM_AdaptiveMix/train_seed1337.log new file mode 100644 index 0000000000..793fb39f88 --- /dev/null +++ b/records/track_10min_16mb/2026-04-23_SP4096_PPM_AdaptiveMix/train_seed1337.log @@ -0,0 +1,121 @@ +W0423 21:47:14.485000 35795 torch/distributed/run.py:803] +W0423 21:47:14.485000 35795 torch/distributed/run.py:803] ***************************************** +W0423 21:47:14.485000 35795 torch/distributed/run.py:803] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. +W0423 21:47:14.485000 35795 torch/distributed/run.py:803] ***************************************** +Hyperparameters: + adam_eps: 1e-08 + adam_wd: 0.02 + beta1: 0.9 + beta2: 0.95 + compressor: brotli + data_dir: ./data/ + datasets_dir: ./data/datasets/fineweb10B_sp4096 + distributed: True + ema_decay: 0.997 + embed_lr: 0.6 + embed_wd: 0.085 + embedding_dim: 512 + eval_seq_len: 2048 + eval_stride: 64 + gptq_calibration_batches: 64 + gptq_enabled: True + gptq_reserve_seconds: 10.0 + grad_accum_steps: 1 + grad_clip_norm: 0.3 + head_lr: 0.008 + is_main_process: True + iterations: 20000 + ln_scale: True + local_rank: 0 + logfile: logs/1337.txt + logit_softcap: 30.0 + matrix_lr: 0.02 + max_wallclock_seconds: 600.0 + min_lr: 0.0 + mlp_mult: 4.0 + model_dim: 512 + model_path: final_model.pt + muon_backend_steps: 5 + muon_beta2: 0.95 + muon_momentum: 0.99 + muon_momentum_warmup_start: 0.92 + muon_momentum_warmup_steps: 1500 + muon_wd: 0.085 + num_heads: 8 + num_kv_heads: 4 + num_layers: 11 + qk_gain_init: 4.0 + quantized_model_path: final_model.int6.ptz + rank: 0 + rope_base: 10000.0 + rope_dims: 16 + rope_train_seq_len: 2048 + run_id: 1337 + scalar_lr: 0.02 + seed: 1337 + skip_gates_enabled: True + sliding_window_enabled: True + tie_embeddings: True + tied_embed_init_std: 0.005 + tied_embed_lr: 0.03 + tokenizer_path: ./data/tokenizers/fineweb_4096_bpe.model + train_batch_tokens: 786432 + train_files: ./data/datasets/fineweb10B_sp4096/fineweb_train_*.bin + train_log_every: 500 + train_seq_len: 2048 + val_batch_tokens: 524288 + val_files: ./data/datasets/fineweb10B_sp4096/fineweb_val_*.bin + val_loss_every: 4000 + ve_dim: 128 + ve_enabled: True + ve_layers: 9,10 + vocab_size: 4096 + warmdown_frac: 0.667 + warmup_steps: 20 + world_size: 8 + xsa_last_n: 11 +train_shards: 80 +val_tokens: 45508608 +model_params:34401371 +gptq:reserving 10s, effective=590000ms +warmup_step: 1/20 +warmup_step: 2/20 +warmup_step: 3/20 +warmup_step: 4/20 +warmup_step: 5/20 +warmup_step: 6/20 +warmup_step: 10/20 +warmup_step: 20/20 +0/20000 val_loss: 8.3169 val_bpb: 3.6144 +1/20000 train_loss: 8.3195 train_time: 0.0m tok/s: 8431812 +2/20000 train_loss: 12.3801 train_time: 0.0m tok/s: 8311698 +3/20000 train_loss: 10.8725 train_time: 0.0m tok/s: 8205569 +4/20000 train_loss: 9.0219 train_time: 0.0m tok/s: 8148269 +5/20000 train_loss: 7.8134 train_time: 0.0m tok/s: 8122147 +500/20000 train_loss: 2.9337 train_time: 0.8m tok/s: 7870638 +1000/20000 train_loss: 2.9095 train_time: 1.7m tok/s: 7856717 +1500/20000 train_loss: 2.9266 train_time: 2.5m tok/s: 7848927 +2000/20000 train_loss: 2.6613 train_time: 3.3m tok/s: 7851901 +2500/20000 train_loss: 2.7165 train_time: 4.2m tok/s: 7853800 +3000/20000 train_loss: 2.7636 train_time: 5.0m tok/s: 7855520 +3500/20000 train_loss: 2.7096 train_time: 5.8m tok/s: 7856849 +4000/20000 train_loss: 2.6494 train_time: 6.7m tok/s: 7857797 +4000/20000 val_loss: 2.6733 val_bpb: 1.1618 +4500/20000 train_loss: 2.6105 train_time: 7.5m tok/s: 7858406 +5000/20000 train_loss: 2.5666 train_time: 8.3m tok/s: 7859037 +5500/20000 train_loss: 2.5758 train_time: 9.2m tok/s: 7859521 +5897/20000 val_loss: 2.5458 val_bpb: 1.1064 +stopping_early: wallclock_cap train_time: 590041ms step: 5897/20000 +peak memory allocated: 25773 MiB reserved: 25882 MiB +ema:applying EMA weights +pre-quantization post-ema val_loss:2.54309633 val_bpb:1.10520037 eval_time:1731ms +Serialized model: 132405827 bytes +Code size: 72829 bytes +GPTQ:collecting Hessians from calibration data... +GPTQ:collected 66 Hessians in 8.3s +GPTQ quantization: 66 layers with full GPTQ, 0 fallback to clip-search +Serialized model int6+brotli: 15856855 bytes +Total submission size int6+brotli: 15929684 bytes +final_int6_roundtrip val_loss:2.56959416 val_bpb:1.11671602 eval_time:6639ms +[ppm_mix] tokens=45508608 bytes=152570124 NN_byte=1.08755 mix=0.95214 Δ=-0.13541 NN_full=1.09832 +final_int6_sliding_window val_loss:2.52725872 val_bpb:0.95213621 eval_time:541682ms diff --git a/records/track_10min_16mb/2026-04-23_SP4096_PPM_AdaptiveMix/train_seed2025.log b/records/track_10min_16mb/2026-04-23_SP4096_PPM_AdaptiveMix/train_seed2025.log new file mode 100644 index 0000000000..3129ea5887 --- /dev/null +++ b/records/track_10min_16mb/2026-04-23_SP4096_PPM_AdaptiveMix/train_seed2025.log @@ -0,0 +1,121 @@ +W0423 22:15:34.031000 37080 torch/distributed/run.py:803] +W0423 22:15:34.031000 37080 torch/distributed/run.py:803] ***************************************** +W0423 22:15:34.031000 37080 torch/distributed/run.py:803] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. +W0423 22:15:34.031000 37080 torch/distributed/run.py:803] ***************************************** +Hyperparameters: + adam_eps: 1e-08 + adam_wd: 0.02 + beta1: 0.9 + beta2: 0.95 + compressor: brotli + data_dir: ./data/ + datasets_dir: ./data/datasets/fineweb10B_sp4096 + distributed: True + ema_decay: 0.997 + embed_lr: 0.6 + embed_wd: 0.085 + embedding_dim: 512 + eval_seq_len: 2048 + eval_stride: 64 + gptq_calibration_batches: 64 + gptq_enabled: True + gptq_reserve_seconds: 10.0 + grad_accum_steps: 1 + grad_clip_norm: 0.3 + head_lr: 0.008 + is_main_process: True + iterations: 20000 + ln_scale: True + local_rank: 0 + logfile: logs/2025.txt + logit_softcap: 30.0 + matrix_lr: 0.02 + max_wallclock_seconds: 600.0 + min_lr: 0.0 + mlp_mult: 4.0 + model_dim: 512 + model_path: final_model.pt + muon_backend_steps: 5 + muon_beta2: 0.95 + muon_momentum: 0.99 + muon_momentum_warmup_start: 0.92 + muon_momentum_warmup_steps: 1500 + muon_wd: 0.085 + num_heads: 8 + num_kv_heads: 4 + num_layers: 11 + qk_gain_init: 4.0 + quantized_model_path: final_model.int6.ptz + rank: 0 + rope_base: 10000.0 + rope_dims: 16 + rope_train_seq_len: 2048 + run_id: 2025 + scalar_lr: 0.02 + seed: 2025 + skip_gates_enabled: True + sliding_window_enabled: True + tie_embeddings: True + tied_embed_init_std: 0.005 + tied_embed_lr: 0.03 + tokenizer_path: ./data/tokenizers/fineweb_4096_bpe.model + train_batch_tokens: 786432 + train_files: ./data/datasets/fineweb10B_sp4096/fineweb_train_*.bin + train_log_every: 500 + train_seq_len: 2048 + val_batch_tokens: 524288 + val_files: ./data/datasets/fineweb10B_sp4096/fineweb_val_*.bin + val_loss_every: 4000 + ve_dim: 128 + ve_enabled: True + ve_layers: 9,10 + vocab_size: 4096 + warmdown_frac: 0.667 + warmup_steps: 20 + world_size: 8 + xsa_last_n: 11 +train_shards: 80 +val_tokens: 45508608 +model_params:34401371 +gptq:reserving 10s, effective=590000ms +warmup_step: 1/20 +warmup_step: 2/20 +warmup_step: 3/20 +warmup_step: 4/20 +warmup_step: 5/20 +warmup_step: 6/20 +warmup_step: 10/20 +warmup_step: 20/20 +0/20000 val_loss: 8.3157 val_bpb: 3.6139 +1/20000 train_loss: 8.3179 train_time: 0.0m tok/s: 8408044 +2/20000 train_loss: 12.3827 train_time: 0.0m tok/s: 8285200 +3/20000 train_loss: 10.8875 train_time: 0.0m tok/s: 8185272 +4/20000 train_loss: 9.0416 train_time: 0.0m tok/s: 8135610 +5/20000 train_loss: 7.8092 train_time: 0.0m tok/s: 8099886 +500/20000 train_loss: 2.9326 train_time: 0.8m tok/s: 7867458 +1000/20000 train_loss: 2.9024 train_time: 1.7m tok/s: 7857849 +1500/20000 train_loss: 2.9197 train_time: 2.5m tok/s: 7858915 +2000/20000 train_loss: 2.6646 train_time: 3.3m tok/s: 7860451 +2500/20000 train_loss: 2.7164 train_time: 4.2m tok/s: 7858666 +3000/20000 train_loss: 2.7611 train_time: 5.0m tok/s: 7859735 +3500/20000 train_loss: 2.7064 train_time: 5.8m tok/s: 7860388 +4000/20000 train_loss: 2.6483 train_time: 6.7m tok/s: 7860863 +4000/20000 val_loss: 2.6716 val_bpb: 1.1611 +4500/20000 train_loss: 2.6061 train_time: 7.5m tok/s: 7861174 +5000/20000 train_loss: 2.5630 train_time: 8.3m tok/s: 7861611 +5500/20000 train_loss: 2.5709 train_time: 9.2m tok/s: 7862096 +5899/20000 val_loss: 2.5443 val_bpb: 1.1057 +stopping_early: wallclock_cap train_time: 590063ms step: 5899/20000 +peak memory allocated: 25773 MiB reserved: 25882 MiB +ema:applying EMA weights +pre-quantization post-ema val_loss:2.54157766 val_bpb:1.10454037 eval_time:1730ms +Serialized model: 132405827 bytes +Code size: 72829 bytes +GPTQ:collecting Hessians from calibration data... +GPTQ:collected 66 Hessians in 8.3s +GPTQ quantization: 66 layers with full GPTQ, 0 fallback to clip-search +Serialized model int6+brotli: 15857795 bytes +Total submission size int6+brotli: 15930624 bytes +final_int6_roundtrip val_loss:2.56743953 val_bpb:1.11577965 eval_time:6619ms +[ppm_mix] tokens=45508608 bytes=152570124 NN_byte=1.08675 mix=0.95135 Δ=-0.13540 NN_full=1.09751 +final_int6_sliding_window val_loss:2.52540642 val_bpb:0.95134764 eval_time:540903ms diff --git a/records/track_10min_16mb/2026-04-23_SP4096_PPM_AdaptiveMix/train_seed42.log b/records/track_10min_16mb/2026-04-23_SP4096_PPM_AdaptiveMix/train_seed42.log new file mode 100644 index 0000000000..fb55631786 --- /dev/null +++ b/records/track_10min_16mb/2026-04-23_SP4096_PPM_AdaptiveMix/train_seed42.log @@ -0,0 +1,121 @@ +W0423 21:23:02.449000 2204 torch/distributed/run.py:803] +W0423 21:23:02.449000 2204 torch/distributed/run.py:803] ***************************************** +W0423 21:23:02.449000 2204 torch/distributed/run.py:803] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. +W0423 21:23:02.449000 2204 torch/distributed/run.py:803] ***************************************** +Hyperparameters: + adam_eps: 1e-08 + adam_wd: 0.02 + beta1: 0.9 + beta2: 0.95 + compressor: brotli + data_dir: ./data/ + datasets_dir: ./data/datasets/fineweb10B_sp4096 + distributed: True + ema_decay: 0.997 + embed_lr: 0.6 + embed_wd: 0.085 + embedding_dim: 512 + eval_seq_len: 2048 + eval_stride: 64 + gptq_calibration_batches: 64 + gptq_enabled: True + gptq_reserve_seconds: 10.0 + grad_accum_steps: 1 + grad_clip_norm: 0.3 + head_lr: 0.008 + is_main_process: True + iterations: 20000 + ln_scale: True + local_rank: 0 + logfile: logs/42.txt + logit_softcap: 30.0 + matrix_lr: 0.02 + max_wallclock_seconds: 600.0 + min_lr: 0.0 + mlp_mult: 4.0 + model_dim: 512 + model_path: final_model.pt + muon_backend_steps: 5 + muon_beta2: 0.95 + muon_momentum: 0.99 + muon_momentum_warmup_start: 0.92 + muon_momentum_warmup_steps: 1500 + muon_wd: 0.085 + num_heads: 8 + num_kv_heads: 4 + num_layers: 11 + qk_gain_init: 4.0 + quantized_model_path: final_model.int6.ptz + rank: 0 + rope_base: 10000.0 + rope_dims: 16 + rope_train_seq_len: 2048 + run_id: 42 + scalar_lr: 0.02 + seed: 42 + skip_gates_enabled: True + sliding_window_enabled: True + tie_embeddings: True + tied_embed_init_std: 0.005 + tied_embed_lr: 0.03 + tokenizer_path: ./data/tokenizers/fineweb_4096_bpe.model + train_batch_tokens: 786432 + train_files: ./data/datasets/fineweb10B_sp4096/fineweb_train_*.bin + train_log_every: 500 + train_seq_len: 2048 + val_batch_tokens: 524288 + val_files: ./data/datasets/fineweb10B_sp4096/fineweb_val_*.bin + val_loss_every: 4000 + ve_dim: 128 + ve_enabled: True + ve_layers: 9,10 + vocab_size: 4096 + warmdown_frac: 0.667 + warmup_steps: 20 + world_size: 8 + xsa_last_n: 11 +train_shards: 80 +val_tokens: 45508608 +model_params:34401371 +gptq:reserving 10s, effective=590000ms +warmup_step: 1/20 +warmup_step: 2/20 +warmup_step: 3/20 +warmup_step: 4/20 +warmup_step: 5/20 +warmup_step: 6/20 +warmup_step: 10/20 +warmup_step: 20/20 +0/20000 val_loss: 8.3187 val_bpb: 3.6152 +1/20000 train_loss: 8.3201 train_time: 0.0m tok/s: 8366516 +2/20000 train_loss: 12.3390 train_time: 0.0m tok/s: 8282231 +3/20000 train_loss: 10.8514 train_time: 0.0m tok/s: 8173830 +4/20000 train_loss: 9.0159 train_time: 0.0m tok/s: 8122498 +5/20000 train_loss: 7.7961 train_time: 0.0m tok/s: 8086754 +500/20000 train_loss: 2.9275 train_time: 0.8m tok/s: 7865083 +1000/20000 train_loss: 2.9033 train_time: 1.7m tok/s: 7854647 +1500/20000 train_loss: 2.9205 train_time: 2.5m tok/s: 7855979 +2000/20000 train_loss: 2.6620 train_time: 3.3m tok/s: 7858089 +2500/20000 train_loss: 2.7159 train_time: 4.2m tok/s: 7859352 +3000/20000 train_loss: 2.7632 train_time: 5.0m tok/s: 7860146 +3500/20000 train_loss: 2.7064 train_time: 5.8m tok/s: 7859009 +4000/20000 train_loss: 2.6465 train_time: 6.7m tok/s: 7859700 +4000/20000 val_loss: 2.6715 val_bpb: 1.1610 +4500/20000 train_loss: 2.6097 train_time: 7.5m tok/s: 7860159 +5000/20000 train_loss: 2.5607 train_time: 8.3m tok/s: 7860585 +5500/20000 train_loss: 2.5739 train_time: 9.2m tok/s: 7860928 +5898/20000 val_loss: 2.5439 val_bpb: 1.1056 +stopping_early: wallclock_cap train_time: 590044ms step: 5898/20000 +peak memory allocated: 25776 MiB reserved: 25848 MiB +ema:applying EMA weights +pre-quantization post-ema val_loss:2.54123397 val_bpb:1.10439101 eval_time:1734ms +Serialized model: 132405827 bytes +Code size: 72829 bytes +GPTQ:collecting Hessians from calibration data... +GPTQ:collected 66 Hessians in 8.3s +GPTQ quantization: 66 layers with full GPTQ, 0 fallback to clip-search +Serialized model int6+brotli: 15887200 bytes +Total submission size int6+brotli: 15960029 bytes +final_int6_roundtrip val_loss:2.56763572 val_bpb:1.11586490 eval_time:18567ms +[ppm_mix] tokens=45508608 bytes=152570124 NN_byte=1.08669 mix=0.95145 Δ=-0.13524 NN_full=1.09745 +final_int6_sliding_window val_loss:2.52526944 val_bpb:0.95145061 eval_time:575204ms