diff --git a/records/track_10min_16mb/2026-04-30_SP8192_HeadwiseGate_EMA_LegalTTT/README.md b/records/track_10min_16mb/2026-04-30_SP8192_HeadwiseGate_EMA_LegalTTT/README.md
new file mode 100644
index 0000000000..c6202cd50e
--- /dev/null
+++ b/records/track_10min_16mb/2026-04-30_SP8192_HeadwiseGate_EMA_LegalTTT/README.md
@@ -0,0 +1,184 @@
+# Record: SP8192 + Full Stack + Headwise Gated Attention + Legal TTT
+
+**val_bpb = 1.0805** (3-seed mean, std 0.0012) | **~15.74 MB** | 8xH100 SXM
+
+**Non-record submission** documenting a novel architecture modification (headwise gated attention) and a systematic ablation study across 40+ experiments.
+
+## 3-Seed Results
+
+| Seed | Sliding BPB | **TTT BPB** | Artifact (bytes) |
+|------|-------------|-------------|------------------|
+| 42 | 1.0834 | **1.0818** | 15,697,552 |
+| 1337 | 1.0810 | **1.0794** | 15,694,065 |
+| 2025 | 1.0820 | **1.0804** | 15,693,855 |
+| **Mean** | **1.0821** | **1.0805** | **15,695,157** |
+| **Std** | **0.0012** | **0.0012** | |
+
+## Author & Research Approach
+
+**An Thien Vo** (James Emerson Vo) — Georgia Tech, CS 7643 Deep Learning.
+
+This submission is the result of a systematic research effort to identify which language model training techniques transfer to the extreme compression regime of Parameter Golf (36M params, 16 MB artifact, 10-minute wall clock on 8xH100).
+
+I surveyed **29+ papers** from NeurIPS 2024-2025, ICML 2025, ICLR 2025, and ACL 2025 — covering attention modifications, normalization strategies, optimizer scheduling, data selection, structured layers, and compression techniques. Each candidate technique was:
+
+1. **Assessed for PG feasibility** — does it fit within the 16 MB / 10-min constraints?
+2. **Tested individually on 2xH100** — isolated A/B against the rank 1 baseline
+3. **Validated for stacking** — confirmed no interference with other techniques before combining
+4. **Scaled to 8xH100** — final verification at competition scale with 3-seed reproducibility
+
+Over **40+ experiments** across 2xH100 and 8xH100, I found that most techniques published for 125M+ parameter models **do not transfer** to the 36M regime — 5 of 10 tested papers produced negative results.
+
+## Novel Contributions
+
+1. **Headwise Gated Attention** — Original architecture modification: post-attention sigmoid gate applied per-head after FA3+XSA. Q projection widened by `gate_dim`; the gate modulates each head's contribution before the output projection. Consistent -0.0005 BPB across scales. Inspired by NeurIPS 2025 Best Paper ([arxiv:2505.06708](https://arxiv.org/abs/2505.06708)).
+
+2. **29-Paper Systematic Survey** — Surveyed NeurIPS 2024-2025, ICML 2025, ICLR 2025, and ACL 2025 papers to identify which techniques are applicable to the 16 MB / 10-min / 36M-param regime. Mapped each paper to PG leaderboard presence and feasibility. Found that most techniques published for 125M+ models **do not transfer** — 5 of 10 tested papers produced negative results.
+
+3. **EMA Decay Scaling Law at Short Training Durations** — Discovered that the optimal EMA decay shifts dramatically lower when training steps are limited (~1,000-3,000 steps). Default 0.9965 -> optimal **0.990**, with gains monotonically increasing as decay decreases: 0.995 (-0.006), 0.993 (-0.0096), 0.990 (-0.0117 BPB). This submission uses the default 0.9965; the EMA finding is documented as a research contribution (a sketch of the underlying update appears after this list).
+
+4. **Negative Results at 36M Scale** — Systematic ablation showing 5 papers fail to transfer: SLM/Rho-1 (NeurIPS 2024), ResFormer (ACL 2025), LR Warmup (NeurIPS 2024), Structured FFN (NeurIPS 2024), and Peri-LN (ICML 2025). Documents **why** each fails — providing guidance for future small-model compression research.
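+
+For context, this EMA is a standard exponential moving average over the model weights, updated once per optimizer step (the update loop is visible in `train_gpt.py`). Below is a minimal sketch of that update; the function name and signature are illustrative, not the submission's actual API.
+
+```python
+import torch
+
+@torch.no_grad()
+def ema_update(ema_state: dict[str, torch.Tensor],
+               model: torch.nn.Module,
+               decay: float = 0.9965) -> None:
+    """In-place weight EMA: ema <- decay * ema + (1 - decay) * w."""
+    for name, t in model.state_dict().items():
+        ema_state[name].mul_(decay).add_(t.detach().float(), alpha=1.0 - decay)
+```
+
+The effective averaging horizon is roughly 1/(1 - decay) steps: about 286 steps at 0.9965 versus about 100 at 0.990. In a run of only a few thousand steps, the longer horizon keeps averaging in early, under-trained weights, which is a plausible reading of why lower decay wins at short durations.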
+
+## Key Technique
+
+| Technique | Source | Phase | Impact (2xH100) |
+|-----------|--------|-------|-----------------|
+| **Headwise Gated Attention** | Inspired by [NeurIPS 2025 Best Paper](https://arxiv.org/abs/2505.06708) | Architecture | -0.0005 BPB |
+
+### Headwise Gated Attention (Novel Contribution)
+
+A post-attention sigmoid gate is applied per head after FlashAttention-3 + XSA compute the attention output. A learned gate modulates each head's contribution before the output projection:
+
+- Q projection widened by `gate_dim` extra dimensions
+- Gate signal extracted from the extra Q dims and passed through a sigmoid
+- Applied per head via broadcast: `attn_out *= gate.unsqueeze(-1)`
+- ~50K extra parameters, zero inference latency cost
+- Consistent -0.0005 BPB improvement across 2xH100 and 8xH100 scales
+
+Inspired by NeurIPS 2025 Best Paper ([arxiv:2505.06708](https://arxiv.org/abs/2505.06708)). A minimal sketch of the gating path appears below.
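+
+The mechanism is small enough to show in full. The sketch below assumes plain `scaled_dot_product_attention` in place of FA3+XSA and omits GQA, RoPE, QK norm, and QK-gain, so the class and its names are illustrative rather than the submission's actual `CausalSelfAttention`.
+
+```python
+import torch
+import torch.nn.functional as F
+from torch import nn
+
+class HeadwiseGatedAttention(nn.Module):
+    def __init__(self, dim: int, num_heads: int):
+        super().__init__()
+        assert dim % num_heads == 0
+        self.num_heads = num_heads
+        self.head_dim = dim // num_heads
+        self.gate_dim = num_heads  # one gate logit per head
+        # Q projection widened by gate_dim extra output dimensions
+        self.c_q = nn.Linear(dim, dim + self.gate_dim, bias=False)
+        self.c_k = nn.Linear(dim, dim, bias=False)
+        self.c_v = nn.Linear(dim, dim, bias=False)
+        self.proj = nn.Linear(dim, dim, bias=False)
+
+    def forward(self, x: torch.Tensor) -> torch.Tensor:
+        b, s, d = x.shape
+        # Split the widened Q output into queries and per-head gate logits
+        q_raw, gate_logits = self.c_q(x).split([d, self.gate_dim], dim=-1)
+        q = q_raw.view(b, s, self.num_heads, self.head_dim).transpose(1, 2)
+        k = self.c_k(x).view(b, s, self.num_heads, self.head_dim).transpose(1, 2)
+        v = self.c_v(x).view(b, s, self.num_heads, self.head_dim).transpose(1, 2)
+        y = F.scaled_dot_product_attention(q, k, v, is_causal=True)
+        y = y.transpose(1, 2)  # (b, s, heads, head_dim)
+        # Post-attention sigmoid gate: one scalar per head, broadcast over head_dim
+        gate = torch.sigmoid(gate_logits).unsqueeze(-1)  # (b, s, heads, 1)
+        y = y * gate
+        return self.proj(y.reshape(b, s, d))
+```
+
+The only change relative to ungated attention is the widened `c_q` plus one elementwise multiply, so the cost is roughly `dim * num_heads` extra weights per gated layer (4,096 at 512d/8H; about 45K across 11 layers, in line with the ~50K figure above) and no extra inference latency.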
+
+## Base Stack (from @bigbag, PR #1493)
+
+Our submission builds on @bigbag's base stack:
+
+1. **SP8192 vocabulary** — 8192-token SentencePiece BPE ([PR #1394](https://github.com/openai/parameter-golf/pull/1394) @clarkkev)
+2. **11L x 512d x 8H/4KV** — 11 encoder layers, 512 model dim, GQA (8 heads, 4 KV heads)
+3. **4x MLP** with LeakyReLU(0.5)^2 activation
+4. **3-Layer Depth Recurrence** — layers 3,4,5 looped 2x, 17 virtual layers from 11 physical ([PR #1331](https://github.com/openai/parameter-golf/pull/1331), [#1437](https://github.com/openai/parameter-golf/pull/1437) @dexhunter)
+5. **Parallel Residuals** (layers 7+) — GPT-J style ([PR #1412](https://github.com/openai/parameter-golf/pull/1412) @Robby955, [PR #1204](https://github.com/openai/parameter-golf/pull/1204) @msisovic)
+6. **Sigmoid Skip Gates** — learned encoder-decoder bridging
+7. **Partial RoPE** (16/64 dims) with layerwise LN scale 1/sqrt(layer+1)
+8. **XSA (Exclusive Self-Attention)** on all 11 layers — attention orthogonal to the self-value vector
+9. **QK-Gain 5.25** — learnable per-head query scaling
+10. **Logit softcap 30.0** — soft capping on output logits
+
+## Techniques That Failed
+
+Tested on the V2 rank-1 stack. All produced negative results at the 36M-parameter scale.
+
+| # | Technique | Paper | Result | Why It Failed |
+|---|-----------|-------|--------|---------------|
+| 1 | SLM / Rho-1 | [NeurIPS 2024](https://arxiv.org/abs/2404.07965) | ALL ratios worse (+0.002 to +0.155 BPB) | 17M model needs every gradient signal; paper tested at 1B+ |
+| 2 | ResFormer (Value Residual) | [ACL 2025](https://arxiv.org/abs/2410.17897) | +0.0022 BPB on 8xH100 | Parallel residuals already provide the gradient highway ResFormer tries to create |
+| 3 | LR Warmup | [NeurIPS 2024](https://neurips.cc/virtual/2024/poster/95431) | +0.0024 to +0.0066 (monotonically worse) | MuonEq-R has its own momentum warmup; an extra LR ramp wastes steps |
+| 4 | Structured FFN | [NeurIPS 2024](https://arxiv.org/abs/2406.16450) | +0.04 to +0.05 BPB | Low-rank + block-diagonal too lossy at 36M; paper tested at 125M+ |
+| 5 | Peri-LN | [ICML 2025](https://arxiv.org/abs/2502.02732) | Immediate NaN | Output norms conflict with existing attn_scale/mlp_scale + ln_scale_factor |
+
+**Takeaway:** Most techniques from large-scale papers (125M+) do not transfer to the extreme compression regime. The 36M-parameter constraint changes which optimizations matter.
+
+## Architecture
+
+11L x 512d x 8H / 4KV, MLP 4x, LeakyReLU(0.5)^2, partial RoPE (16/64 dims), layerwise LN scale, tied embeddings, logit softcap=30.0. Depth recurrence: encoder [0,1,2,3,4,5,3,4], decoder [5,3,4,5,6,7,8,9,10] (loops layers 3-5, activated at frac=0.35). Parallel residuals from layer 7. Skip gates (sigmoid-gated U-Net connections). Headwise gated attention: Q widened by gate_dim, sigmoid gate per head after FA3+XSA.
+
+Total parameters: ~35.99M.
+
+## Training
+
+MuonEq-R optimizer (row-normalized Muon, Newton-Schulz 5 steps) for matrix params, AdamW for embeddings and scalars. `GRAD_ACCUM_STEPS=1` (8 GPUs), `TRAIN_BATCH_TOKENS=786432` — ~4,469 steps in ~588s on 8xH100 SXM (PyTorch 2.11, CUDA 13.0). Linear warmdown to LR=0 over the final 72% of training. **EMA decay 0.9965** (rank 1 default). Weight decay: Muon WD=0.095, Embed WD=0.085, Adam WD=0.02.
+
+## Quantization
+
+Full-Hessian GPTQ with SDClip, where `clip = k * std(row)` sets each row's quantization range for a principled rate-distortion trade-off:
+- int6 for attention/MLP matrices (`MATRIX_CLIP_SIGMAS=12.85`)
+- int7 for token embeddings (`EMBED_BITS=7`, `EMBED_CLIP_SIGMAS=15.0`)
+- Byte-shuffle + Brotli-11 compression
+- 64 calibration batches from training data
+
+## Evaluation
+
+**Sliding-window causal eval** with stride 64 across the full validation set.
+
+**Score-first TTT** (test-time training) — chunk-based SGD adaptation at eval time (a minimal sketch of the loop follows this list):
+- Chunk validation tokens into 32K-token segments
+- For each chunk: (1) score all sliding windows under `torch.no_grad()`, (2) train the model on the just-scored tokens with SGD
+- 3 epochs per chunk, lr=0.005, momentum=0.9, cosine LR decay across chunks
+- Gradient clipping at 1.0, distributed all-reduce for multi-GPU
+- Total eval time: ~395s (within the 600s budget)
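+
+The legality hinges on ordering, which is easiest to see in code. Below is a minimal sketch paraphrasing the TTT loop in `train_gpt.py`; it assumes a model whose forward returns logits (the submission's model returns the loss directly) and scores whole windows instead of the stride-64 sliding windows, so `nll_of_chunk` and the loop structure are illustrative simplifications.
+
+```python
+import math
+import torch
+import torch.nn.functional as F
+
+@torch.no_grad()
+def nll_of_chunk(model: torch.nn.Module, chunk: torch.Tensor) -> torch.Tensor:
+    """Score a (B, T+1) token chunk with frozen weights."""
+    logits = model(chunk[:, :-1])                        # (B, T, vocab)
+    return F.cross_entropy(logits.transpose(1, 2), chunk[:, 1:], reduction="sum")
+
+def score_first_ttt(model, chunks, lr=0.005, epochs=3, momentum=0.9):
+    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)
+    total_nll = 0.0
+    for ci, chunk in enumerate(chunks):
+        model.eval()
+        total_nll += nll_of_chunk(model, chunk).item()   # (1) score first
+        # Cosine LR decay across chunks, as in the submission's TTT loop.
+        cos_lr = lr * 0.5 * (1.0 + math.cos(math.pi * ci / max(len(chunks) - 1, 1)))
+        for g in opt.param_groups:
+            g["lr"] = cos_lr
+        model.train()
+        for _ in range(epochs):                          # (2) then adapt
+            opt.zero_grad(set_to_none=True)
+            logits = model(chunk[:, :-1])
+            loss = F.cross_entropy(logits.transpose(1, 2), chunk[:, 1:])
+            loss.backward()
+            torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
+            opt.step()
+    model.eval()
+    return total_nll
+```
+
+In this form, Conditions 3 and 4 of the compliance list below map directly onto the frozen scoring pass and the single enumeration of `chunks`.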
+
+## Compliance
+
+Per [Issue #1017](https://github.com/openai/parameter-golf/issues/1017) (Track B — legal eval-time adaptation):
+
+- **Condition 1 (Causality):** Sliding-window eval is strictly causal. Each position is scored from prefix tokens only.
+- **Condition 2 (Normalized distribution):** Standard softmax over the full vocab. No n-gram cache, no logit biasing.
+- **Condition 3 (Score before update):** Each chunk is fully scored under `torch.no_grad()` BEFORE any SGD update.
+- **Condition 4 (Single pass):** Each token is scored exactly once. No rescoring, no multi-pass.
+
+Additional:
+- No SLOT (standard or causal)
+- **No Pre-Quantization TTT** — fully legal, no val-data fine-tuning before GPTQ
+- No ETLB (eval-time logit bias)
+- No n-gram cache or tilt
+- All artifacts under 16,000,000 bytes on all 3 seeds
+- Training under 600s on all 3 seeds
+- Eval (sliding + TTT) under 600s on all 3 seeds
+
+## Reproduction
+
+```bash
+pip install --upgrade torch
+pip install brotli sentencepiece numpy
+pip install --no-cache-dir \
+  "https://download.pytorch.org/whl/cu130/flash_attn_3-3.0.0-cp39-abi3-manylinux_2_28_x86_64.whl"
+MATCHED_FINEWEB_REPO_ID=kevclark/parameter-golf python3 data/cached_challenge_fineweb.py --variant sp8192
+
+SEED=42 GATED_ATTN=headwise EMBED_BITS=7 EMBED_CLIP_SIGMAS=15.0 \
+  TTT_ENABLED=1 \
+  torchrun --standalone --nproc_per_node=8 train_gpt.py
+```
+
+## Credits
+
+This submission builds on the work of many contributors to the Parameter Golf community:
+
+- **@bigbag** — Base stack: 3-layer depth recurrence, parallel residuals, sigmoid skip gates, QK-Gain 5.25, LeakyReLU^2, LN scale, legal TTT ([PR #1493](https://github.com/openai/parameter-golf/pull/1493))
+- **@clarkkev** (Kevin Clark) — SP8192 vocabulary, GPTQ with SDClip, MuonEq-R optimizer, embedding GPTQ ([PR #1394](https://github.com/openai/parameter-golf/pull/1394))
+- **@dexhunter** — Depth recurrence on SP8192 ([PR #1331](https://github.com/openai/parameter-golf/pull/1331), [#1437](https://github.com/openai/parameter-golf/pull/1437)), legal score-first TTT on SP8192 ([PR #1413](https://github.com/openai/parameter-golf/pull/1413))
+- **@abaybektursun** — Score-first TTT framework and legality analysis ([PR #549](https://github.com/openai/parameter-golf/pull/549))
+- **@Robby955** — Parallel residuals on SP8192 ([PR #1412](https://github.com/openai/parameter-golf/pull/1412))
+- **@msisovic** — Parallel residuals concept ([PR #1204](https://github.com/openai/parameter-golf/pull/1204))
+- **@X-Abhishek-X** — Hyperparameter tuning and optimizer experiments ([PR #1445](https://github.com/openai/parameter-golf/pull/1445), [#1471](https://github.com/openai/parameter-golf/pull/1471))
+- **@aryanbhosale** — Parallel residuals + score-first TTT stack ([PR #1517](https://github.com/openai/parameter-golf/pull/1517))
+- **An Thien Vo** (James Emerson Vo) — Headwise gated attention (novel contribution), 29-paper literature survey, 40+ experiment ablation study
+
+## Acknowledgements
+
+- **OpenAI** — for hosting the Parameter Golf challenge and the development grant
+- **RunPod** — for compute credits supporting our 2xH100 and 8xH100 experiments
+- **Georgia Tech PACE** — for supplementary compute resources
+- **@sranganath02** (Sid Ranganathan) — for collaborating on nanochat research and tokenizer investigation as part of our CS 7643 Deep Learning team project
+- **CS 7643 Deep Learning** at Georgia Tech, taught by Dr. Zsolt Kira — course context for this research
+
+Total compute cost: ~$280 across 40+ experiments on RunPod (2xH100 and 8xH100).
+
+In memory of Moomoo, my cat.
+ +## Included Files + +- `README.md` (this file) +- `submission.json` +- `train_gpt.py` +- `requirements.txt` +- `train_seed42.log` +- `train_seed1337.log` +- `train_seed2025.log` diff --git a/records/track_10min_16mb/2026-04-30_SP8192_HeadwiseGate_EMA_LegalTTT/requirements.txt b/records/track_10min_16mb/2026-04-30_SP8192_HeadwiseGate_EMA_LegalTTT/requirements.txt new file mode 100644 index 0000000000..7b23c52e05 --- /dev/null +++ b/records/track_10min_16mb/2026-04-30_SP8192_HeadwiseGate_EMA_LegalTTT/requirements.txt @@ -0,0 +1,5 @@ +sentencepiece +brotli +numpy +# flash-attn 3 (FA3) for H100 + CUDA 13.0 — install separately: +# pip install --no-cache-dir "https://download.pytorch.org/whl/cu130/flash_attn_3-3.0.0-cp39-abi3-manylinux_2_28_x86_64.whl" diff --git a/records/track_10min_16mb/2026-04-30_SP8192_HeadwiseGate_EMA_LegalTTT/submission.json b/records/track_10min_16mb/2026-04-30_SP8192_HeadwiseGate_EMA_LegalTTT/submission.json new file mode 100644 index 0000000000..460868b865 --- /dev/null +++ b/records/track_10min_16mb/2026-04-30_SP8192_HeadwiseGate_EMA_LegalTTT/submission.json @@ -0,0 +1,37 @@ +{ + "author": "James Vo", + "github_id": "jamesEmerson112", + "name": "SP8192 + Full Stack + Headwise Gated Attention + Legal TTT", + "date": "2026-04-30", + "track": "10min_16mb", + "val_bpb": 1.0805, + "val_bpb_std": 0.0012, + "seeds": [42, 1337, 2025], + "seed_results": { + "42": {"val_bpb": 1.0818, "artifact_bytes": 15697552}, + "1337": {"val_bpb": 1.0794, "artifact_bytes": 15694065}, + "2025": {"val_bpb": 1.0804, "artifact_bytes": 15693855} + }, + "hardware": "8xH100 80GB SXM", + "pytorch_version": "2.11.0+cu130", + "technique_summary": "SP8192 + Headwise Gated Attention (novel) + full rank 1 stack (FA3, 11Lx512d, depth recurrence, parallel residuals, XSA, MuonEq-R, EMA 0.9965, GPTQ int6+brotli, score-first TTT, emb7+eclip15)", + "compliance": { + "train_under_600s": true, + "artifact_under_16mb": true, + "eval_under_600s": true, + "no_slot": true, + "no_pre_quant_ttt": true, + "no_etlb": true, + "no_ngram_cache": true, + "score_first_ttt": true, + "three_seeds": true + }, + "attribution": { + "base_stack": "@bigbag (PR #1493)", + "sp8192_gptq_sdclip": "@clarkkev (PR #1394)", + "depth_recurrence": "@dexhunter (PR #1331, #1437)", + "parallel_residuals": "@Robby955 (PR #1412), @msisovic (PR #1204)", + "legal_ttt_framework": "@abaybektursun (PR #549), @dexhunter (PR #1413)", + "headwise_gated_attention": "James Vo (novel contribution)" + } +} diff --git a/records/track_10min_16mb/2026-04-30_SP8192_HeadwiseGate_EMA_LegalTTT/train_gpt.py b/records/track_10min_16mb/2026-04-30_SP8192_HeadwiseGate_EMA_LegalTTT/train_gpt.py new file mode 100644 index 0000000000..1650047b34 --- /dev/null +++ b/records/track_10min_16mb/2026-04-30_SP8192_HeadwiseGate_EMA_LegalTTT/train_gpt.py @@ -0,0 +1,526 @@ +import collections,copy,glob,io,lzma,math,os +from pathlib import Path +import random,re,subprocess,sys,time,uuid,numpy as np,sentencepiece as spm,torch,torch.distributed as dist,torch.nn.functional as F +from torch.nn.parallel import DistributedDataParallel as DDP +from torch import Tensor,nn +from flash_attn_interface import flash_attn_func as flash_attn_3_func +class 
Hyperparameters:data_dir=os.environ.get('DATA_DIR','./data/');seed=int(os.environ.get('SEED',1337));run_id=os.environ.get('RUN_ID',str(uuid.uuid4()));iterations=int(os.environ.get('ITERATIONS',20000));warmdown_frac=float(os.environ.get('WARMDOWN_FRAC',.72));warmup_steps=int(os.environ.get('WARMUP_STEPS',20));train_batch_tokens=int(os.environ.get('TRAIN_BATCH_TOKENS',786432));train_seq_len=int(os.environ.get('TRAIN_SEQ_LEN',2048));train_log_every=int(os.environ.get('TRAIN_LOG_EVERY',500));max_wallclock_seconds=float(os.environ.get('MAX_WALLCLOCK_SECONDS',6e2));val_batch_tokens=int(os.environ.get('VAL_BATCH_TOKENS',524288));eval_seq_len=int(os.environ.get('EVAL_SEQ_LEN',2048));val_loss_every=int(os.environ.get('VAL_LOSS_EVERY',4000));sliding_window_enabled=bool(int(os.environ.get('SLIDING_WINDOW_ENABLED','1')));vocab_size=int(os.environ.get('VOCAB_SIZE',8192));num_layers=int(os.environ.get('NUM_LAYERS',11));xsa_last_n=int(os.environ.get('XSA_LAST_N',11));model_dim=int(os.environ.get('MODEL_DIM',512));embedding_dim=int(os.environ.get('EMBEDDING_DIM',512));num_kv_heads=int(os.environ.get('NUM_KV_HEADS',4));num_heads=int(os.environ.get('NUM_HEADS',8));mlp_mult=float(os.environ.get('MLP_MULT',4.));skip_gates_enabled=bool(int(os.environ.get('SKIP_GATES_ENABLED','1')));tie_embeddings=bool(int(os.environ.get('TIE_EMBEDDINGS','1')));logit_softcap=float(os.environ.get('LOGIT_SOFTCAP',3e1));rope_base=float(os.environ.get('ROPE_BASE',1e4));rope_dims=int(os.environ.get('ROPE_DIMS',16));rope_train_seq_len=int(os.environ.get('ROPE_TRAIN_SEQ_LEN',2048));ln_scale=bool(int(os.environ.get('LN_SCALE','1')));qk_gain_init=float(os.environ.get('QK_GAIN_INIT',5.));num_loops=int(os.environ.get('NUM_LOOPS',2));loop_start=int(os.environ.get('LOOP_START',3));loop_end=int(os.environ.get('LOOP_END',5));enable_looping_at=float(os.environ.get('ENABLE_LOOPING_AT',.35));parallel_residual_start=int(os.environ.get('PARALLEL_RESIDUAL_START',7));min_lr=float(os.environ.get('MIN_LR',.0));embed_lr=float(os.environ.get('EMBED_LR',.6));head_lr=float(os.environ.get('HEAD_LR',.008));tied_embed_lr=float(os.environ.get('TIED_EMBED_LR',.03));tied_embed_init_std=float(os.environ.get('TIED_EMBED_INIT_STD',.005));matrix_lr=float(os.environ.get('MATRIX_LR',.022));scalar_lr=float(os.environ.get('SCALAR_LR',.02));muon_momentum=float(os.environ.get('MUON_MOMENTUM',.99));muon_backend_steps=int(os.environ.get('MUON_BACKEND_STEPS',5));muon_momentum_warmup_start=float(os.environ.get('MUON_MOMENTUM_WARMUP_START',.92));muon_momentum_warmup_steps=int(os.environ.get('MUON_MOMENTUM_WARMUP_STEPS',1500));muon_row_normalize=bool(int(os.environ.get('MUON_ROW_NORMALIZE','1')));beta1=float(os.environ.get('BETA1',.9));beta2=float(os.environ.get('BETA2',.95));adam_eps=float(os.environ.get('ADAM_EPS',1e-08));grad_clip_norm=float(os.environ.get('GRAD_CLIP_NORM',.3));eval_stride=int(os.environ.get('EVAL_STRIDE',64));muon_beta2=float(os.environ.get('MUON_BETA2',.95));adam_wd=float(os.environ.get('ADAM_WD',.02));muon_wd=float(os.environ.get('MUON_WD',.095));embed_wd=float(os.environ.get('EMBED_WD',.085));ema_decay=float(os.environ.get('EMA_DECAY',.9965));ttt_enabled=bool(int(os.environ.get('TTT_ENABLED','0')));ttt_lr=float(os.environ.get('TTT_LR',.005));ttt_epochs=int(os.environ.get('TTT_EPOCHS',3));ttt_momentum=float(os.environ.get('TTT_MOMENTUM',.9));ttt_chunk_tokens=int(os.environ.get('TTT_CHUNK_TOKENS',32768));etlb_enabled=bool(int(os.environ.get('ETLB_ENABLED','0')));etlb_lr=float(os.environ.get('ETLB_LR',.05));etlb_steps=int(os.environ.get('ETLB_STEPS',5));etl
b_clip=float(os.environ.get('ETLB_CLIP',3.));compressor=os.environ.get('COMPRESSOR','brotli');gated_attn=os.environ.get('GATED_ATTN','none');value_residual_alpha=float(os.environ.get('VALUE_RESIDUAL_ALPHA','0.0'));gptq_calibration_batches=int(os.environ.get('GPTQ_CALIBRATION_BATCHES',64));gptq_reserve_seconds=float(os.environ.get('GPTQ_RESERVE_SECONDS',12.));matrix_bits=int(os.environ.get('MATRIX_BITS',6));embed_bits=int(os.environ.get('EMBED_BITS',8));matrix_clip_sigmas=float(os.environ.get('MATRIX_CLIP_SIGMAS',12.85));embed_clip_sigmas=float(os.environ.get('EMBED_CLIP_SIGMAS',2e1));distributed='RANK'in os.environ and'WORLD_SIZE'in os.environ;rank=int(os.environ.get('RANK','0'));world_size=int(os.environ.get('WORLD_SIZE','1'));local_rank=int(os.environ.get('LOCAL_RANK','0'));is_main_process=rank==0;grad_accum_steps=int(os.environ.get('GRAD_ACCUM_STEPS',str(8//world_size)));datasets_dir=os.path.join(data_dir,'datasets',f"fineweb10B_sp{vocab_size}");train_files=os.path.join(datasets_dir,'fineweb_train_*.bin');val_files=os.path.join(datasets_dir,'fineweb_val_*.bin');tokenizer_path=os.path.join(data_dir,'tokenizers',f"fineweb_{vocab_size}_bpe.model");logfile=f"logs/{run_id}.txt";model_path='final_model.pt';quantized_model_path='final_model.int6.ptz' +_logger_hparams=None +def set_logging_hparams(h):global _logger_hparams;_logger_hparams=h +def log(msg,console=True): + if _logger_hparams is None:print(msg);return + if _logger_hparams.is_main_process: + if console:print(msg) + if _logger_hparams.logfile is not None: + with open(_logger_hparams.logfile,'a',encoding='utf-8')as f:print(msg,file=f) +class ValidationData: + def __init__(self,h,device): + self.sp=spm.SentencePieceProcessor(model_file=h.tokenizer_path) + if int(self.sp.vocab_size())!=h.vocab_size:raise ValueError(f"VOCAB_SIZE={h.vocab_size} does not match tokenizer vocab_size={int(self.sp.vocab_size())}") + self.val_tokens=load_validation_tokens(h.val_files,h.eval_seq_len);self.base_bytes_lut,self.has_leading_space_lut,self.is_boundary_token_lut=build_sentencepiece_luts(self.sp,h.vocab_size,device) +def build_sentencepiece_luts(sp,vocab_size,device): + sp_vocab_size=int(sp.vocab_size());assert sp.piece_to_id('▁')!=sp.unk_id(),"Tokenizer must have '▁' (space) as its own token for correct BPB byte counting";table_size=max(sp_vocab_size,vocab_size);base_bytes_np=np.zeros((table_size,),dtype=np.int16);has_leading_space_np=np.zeros((table_size,),dtype=np.bool_);is_boundary_token_np=np.ones((table_size,),dtype=np.bool_) + for token_id in range(sp_vocab_size): + if sp.is_control(token_id)or sp.is_unknown(token_id)or sp.is_unused(token_id):continue + is_boundary_token_np[token_id]=False + if sp.is_byte(token_id):base_bytes_np[token_id]=1;continue + piece=sp.id_to_piece(token_id) + if piece.startswith('▁'):has_leading_space_np[token_id]=True;piece=piece[1:] + base_bytes_np[token_id]=len(piece.encode('utf-8')) + return torch.tensor(base_bytes_np,dtype=torch.int16,device=device),torch.tensor(has_leading_space_np,dtype=torch.bool,device=device),torch.tensor(is_boundary_token_np,dtype=torch.bool,device=device) +def load_validation_tokens(pattern,seq_len): + files=[Path(p)for p in sorted(glob.glob(pattern))] + if not files:raise FileNotFoundError(f"No files found for pattern: {pattern}") + tokens=torch.cat([load_data_shard(file)for file in files]).contiguous();usable=(tokens.numel()-1)//seq_len*seq_len + if usable<=0:raise ValueError(f"Validation split is too short for TRAIN_SEQ_LEN={seq_len}") + return tokens[:usable+1] +def load_data_shard(file): 
+ header_bytes=256*np.dtype('0 else 0;num_sequences=(self.num_tokens[si]-1-phase)//self.seq_len;sequence_order=self.rng.permutation(num_sequences);self.start_inds[si]=(phase+sequence_order*self.seq_len).tolist() + def next_batch(self,global_tokens,grad_accum_steps): + device_tokens=global_tokens//(self.world_size*grad_accum_steps);device_batch_size=device_tokens//self.seq_len;remaining=np.array([len(s)for s in self.start_inds],dtype=np.float64);x=torch.empty((device_batch_size,self.seq_len),dtype=torch.int64);y=torch.empty((device_batch_size,self.seq_len),dtype=torch.int64) + for bi in range(device_batch_size): + total=remaining.sum() + if total<=0: + for si in range(len(self.files)):self._reset_shard(si) + remaining=np.array([len(s)for s in self.start_inds],dtype=np.float64);total=remaining.sum() + probs=remaining/total;si=int(self.rng.choice(len(self.files),p=probs));start_ind=self.start_inds[si].pop();remaining[si]-=1;mm=_get_shard_memmap(self.files[si]);window=torch.as_tensor(np.array(mm[start_ind:start_ind+self.seq_len+1],dtype=np.int64));x[bi]=window[:-1];y[bi]=window[1:] + return x.to(self.device,non_blocking=True),y.to(self.device,non_blocking=True) +class RMSNorm(nn.Module): + def __init__(self,eps=None):super().__init__();self.eps=eps + def forward(self,x):return F.rms_norm(x,(x.size(-1),),eps=self.eps) +class CastedLinear(nn.Linear): + def forward(self,x):w=self.weight.to(x.dtype);bias=self.bias.to(x.dtype)if self.bias is not None else None;return F.linear(x,w,bias) +class Rotary(nn.Module): + def __init__(self,dim,base=1e4,train_seq_len=1024,rope_dims=0):super().__init__();self.dim=dim;self.base=base;self.train_seq_len=train_seq_len;self.rope_dims=rope_dims if rope_dims>0 else dim;inv_freq=1./base**(torch.arange(0,self.rope_dims,2,dtype=torch.float32)/self.rope_dims);self.register_buffer('inv_freq',inv_freq,persistent=False);self._seq_len_cached=0;self._cos_cached=None;self._sin_cached=None + def forward(self,seq_len,device,dtype): + if self._cos_cached is None or self._sin_cached is None or self._seq_len_cached!=seq_len or self._cos_cached.device!=device: + rd=self.rope_dims + if seq_len>self.train_seq_len:scale=seq_len/self.train_seq_len;new_base=self.base*scale**(rd/(rd-2));inv_freq=1./new_base**(torch.arange(0,rd,2,dtype=torch.float32,device=device)/rd) + else:inv_freq=self.inv_freq.to(device) + t=torch.arange(seq_len,device=device,dtype=inv_freq.dtype);freqs=torch.outer(t,inv_freq);self._cos_cached=freqs.cos()[None,:,None,:];self._sin_cached=freqs.sin()[None,:,None,:];self._seq_len_cached=seq_len + return self._cos_cached.to(dtype=dtype),self._sin_cached.to(dtype=dtype) +def apply_rotary_emb(x,cos,sin,rope_dims=0): + if rope_dims>0 and rope_dims0:q_raw,gate_logits=q_out.split([dim,self.gate_dim],dim=-1) + else:q_raw=q_out;gate_logits=None + q=q_raw.reshape(bsz,seqlen,self.num_heads,self.head_dim);k=self.c_k(x).reshape(bsz,seqlen,self.num_kv_heads,self.head_dim);v=self.c_v(x).reshape(bsz,seqlen,self.num_kv_heads,self.head_dim) + cache_v=(vr_alpha>0 and v0 is None);v_before_repeat=v + if v0 is not None and vr_alpha>0:v=(1-vr_alpha)*v+vr_alpha*v0 + q=F.rms_norm(q,(q.size(-1),));k=F.rms_norm(k,(k.size(-1),));cos,sin=self.rotary(seqlen,x.device,q.dtype);q=apply_rotary_emb(q,cos,sin,self.rope_dims);k=apply_rotary_emb(k,cos,sin,self.rope_dims);q=q*self.q_gain.to(dtype=q.dtype)[None,None,:,None];y=flash_attn_3_func(q,k,v,causal=True) + if self.use_xsa:y=self._xsa_efficient(y,v) + if gate_logits is not None: + if 
self.gated_attn=='headwise':gate=torch.sigmoid(gate_logits).unsqueeze(-1);y=y*gate + elif self.gated_attn=='elementwise':gate=torch.sigmoid(gate_logits).reshape(bsz,seqlen,self.num_heads,self.head_dim);y=y*gate + y=y.reshape(bsz,seqlen,dim);out=self.proj(y) + if cache_v:return out,v_before_repeat + return out +class MLP(nn.Module): + def __init__(self,dim,mlp_mult):super().__init__();hidden=int(mlp_mult*dim);self.fc=CastedLinear(dim,hidden,bias=False);self.proj=CastedLinear(hidden,dim,bias=False);self.proj._zero_init=True + def forward(self,x):return self.proj(F.leaky_relu(self.fc(x),negative_slope=.5).square()) +class Block(nn.Module): + def __init__(self,dim,num_heads,num_kv_heads,mlp_mult,rope_base,qk_gain_init,train_seq_len,layer_idx=0,ln_scale=False,gated_attn='none'):super().__init__();self.attn_norm=RMSNorm();self.mlp_norm=RMSNorm();self.attn=CausalSelfAttention(dim,num_heads,num_kv_heads,rope_base,qk_gain_init,train_seq_len,gated_attn=gated_attn);self.mlp=MLP(dim,mlp_mult);self.attn_scale=nn.Parameter(torch.ones(dim,dtype=torch.float32));self.mlp_scale=nn.Parameter(torch.ones(dim,dtype=torch.float32));self.resid_mix=nn.Parameter(torch.stack((torch.ones(dim),torch.zeros(dim))).float());self.ln_scale_factor=1./math.sqrt(layer_idx+1)if ln_scale else 1.;self.parallel=False + def forward(self,x,x0,v0=None,vr_alpha=0.0): + mix=self.resid_mix.to(dtype=x.dtype);x_in=mix[0][None,None,:]*x+mix[1][None,None,:]*x0;attn_result=self.attn(self.attn_norm(x_in)*self.ln_scale_factor,v0=v0,vr_alpha=vr_alpha) + if isinstance(attn_result,tuple):attn_out,v_cached=attn_result + else:attn_out=attn_result;v_cached=None + if self.parallel:mlp_out=self.mlp(self.mlp_norm(x_in)*self.ln_scale_factor);x_out=x_in+self.attn_scale.to(dtype=x_in.dtype)[None,None,:]*attn_out+self.mlp_scale.to(dtype=x_in.dtype)[None,None,:]*mlp_out + else:x_out=x_in+self.attn_scale.to(dtype=x_in.dtype)[None,None,:]*attn_out;x_out=x_out+self.mlp_scale.to(dtype=x_out.dtype)[None,None,:]*self.mlp(self.mlp_norm(x_out)*self.ln_scale_factor) + if v_cached is not None:return x_out,v_cached + return x_out +class GPT(nn.Module): + def __init__(self,h): + super().__init__() + if h.logit_softcap<=.0:raise ValueError(f"logit_softcap must be positive, got {h.logit_softcap}") + self.tie_embeddings=h.tie_embeddings;self.tied_embed_init_std=h.tied_embed_init_std;self.logit_softcap=h.logit_softcap;self.tok_emb=nn.Embedding(h.vocab_size,h.embedding_dim) + if h.embedding_dim!=h.model_dim:self.embed_proj=CastedLinear(h.embedding_dim,h.model_dim,bias=False);self.head_proj=CastedLinear(h.model_dim,h.embedding_dim,bias=False) + else:self.embed_proj=None;self.head_proj=None + self.num_encoder_layers=h.num_layers//2;self.num_decoder_layers=h.num_layers-self.num_encoder_layers;self.value_residual_alpha=h.value_residual_alpha;self.blocks=nn.ModuleList([Block(h.model_dim,h.num_heads,h.num_kv_heads,h.mlp_mult,h.rope_base,h.qk_gain_init,h.train_seq_len,layer_idx=i,ln_scale=h.ln_scale,gated_attn=h.gated_attn)for i in range(h.num_layers)]) + if h.rope_dims>0: + head_dim=h.model_dim//h.num_heads + for block in self.blocks:block.attn.rope_dims=h.rope_dims;block.attn.rotary=Rotary(head_dim,base=h.rope_base,train_seq_len=h.train_seq_len,rope_dims=h.rope_dims) + self.final_norm=RMSNorm();self.lm_head=None if h.tie_embeddings else CastedLinear(h.embedding_dim,h.vocab_size,bias=False) + if self.lm_head is not None:self.lm_head._zero_init=True + if h.xsa_last_n>0: + for i in range(max(0,h.num_layers-h.xsa_last_n),h.num_layers):self.blocks[i].attn.use_xsa=True + if 
h.parallel_residual_start>=0: + for i in range(h.parallel_residual_start,h.num_layers):self.blocks[i].parallel=True + self.looping_active=False + if h.num_loops>0: + loop_seg=list(range(h.loop_start,h.loop_end+1));all_indices=list(range(h.loop_start)) + for _ in range(h.num_loops+1):all_indices.extend(loop_seg) + all_indices.extend(range(h.loop_end+1,h.num_layers));num_enc=len(all_indices)//2;self.encoder_indices=all_indices[:num_enc];self.decoder_indices=all_indices[num_enc:] + else:self.encoder_indices=list(range(self.num_encoder_layers));self.decoder_indices=list(range(self.num_encoder_layers,h.num_layers)) + self.num_skip_weights=min(len(self.encoder_indices),len(self.decoder_indices));self.skip_weights=nn.Parameter(torch.ones(self.num_skip_weights,h.model_dim,dtype=torch.float32));self.skip_gates=nn.Parameter(torch.zeros(self.num_skip_weights,h.model_dim,dtype=torch.float32))if h.skip_gates_enabled else None;self._init_weights() + def _init_weights(self): + if self.tie_embeddings:nn.init.normal_(self.tok_emb.weight,mean=.0,std=self.tied_embed_init_std) + for(name,module)in self.named_modules(): + if isinstance(module,nn.Linear): + if getattr(module,'_zero_init',False):nn.init.zeros_(module.weight) + elif module.weight.ndim==2 and module.weight.shape[0]>=64 and module.weight.shape[1]>=64:nn.init.orthogonal_(module.weight,gain=1.) + def forward_logits(self,input_ids): + x=self.tok_emb(input_ids);x=F.rms_norm(x,(x.size(-1),)) + if self.embed_proj is not None:x=self.embed_proj(x) + x0=x;skips=[];vr_alpha=self.value_residual_alpha;v0=None;enc_iter=self.encoder_indices if self.looping_active else range(self.num_encoder_layers);dec_iter=self.decoder_indices if self.looping_active else range(self.num_encoder_layers,self.num_encoder_layers+self.num_decoder_layers) + for i in enc_iter: + result=self.blocks[i](x,x0,v0=v0,vr_alpha=vr_alpha) + if isinstance(result,tuple):x,v0=result + else:x=result + skips.append(x) + for(skip_idx,i)in enumerate(dec_iter): + if skip_idxG.size(1) + if transposed:X=X.T + for _ in range(steps):A=X@X.T;B=b*A+c*A@A;X=a*X+B@X + return X.T if transposed else X +class Muon(torch.optim.Optimizer): + def __init__(self,params,lr,momentum,backend_steps,nesterov=True,weight_decay=.0,row_normalize=False):super().__init__(params,dict(lr=lr,momentum=momentum,backend_steps=backend_steps,nesterov=nesterov,weight_decay=weight_decay,row_normalize=row_normalize)) + @torch.no_grad() + def step(self,closure=None): + loss=None + if closure is not None: + with torch.enable_grad():loss=closure() + distributed=dist.is_available()and dist.is_initialized();world_size=dist.get_world_size()if distributed else 1;rank=dist.get_rank()if distributed else 0 + for group in self.param_groups: + params=group['params'] + if not params:continue + lr=group['lr'];momentum=group['momentum'];backend_steps=group['backend_steps'];nesterov=group['nesterov'];total_params=sum(int(p.numel())for p in params);updates_flat=torch.zeros(total_params,device=params[0].device,dtype=torch.bfloat16);curr=0 + for(i,p)in enumerate(params): + if i%world_size==rank and p.grad is not None: + g=p.grad;state=self.state[p] + if'momentum_buffer'not in state:state['momentum_buffer']=torch.zeros_like(g) + buf=state['momentum_buffer'];buf.mul_(momentum).add_(g) + if nesterov:g=g.add(buf,alpha=momentum) + if group.get('row_normalize',False):row_norms=g.float().norm(dim=-1,keepdim=True).clamp_min(1e-07);g=g/row_norms.to(g.dtype) + 
g=zeropower_via_newtonschulz5(g,steps=backend_steps);g*=max(1,g.size(0)/g.size(1))**.5;updates_flat[curr:curr+p.numel()]=g.reshape(-1) + curr+=p.numel() + if distributed:dist.all_reduce(updates_flat,op=dist.ReduceOp.SUM) + wd=group.get('weight_decay',.0);curr=0 + for p in params: + if wd>.0:p.data.mul_(1.-lr*wd) + g=updates_flat[curr:curr+p.numel()].view_as(p).to(dtype=p.dtype);p.add_(g,alpha=-lr);curr+=p.numel() + return loss +CONTROL_TENSOR_NAME_PATTERNS=tuple(pattern for pattern in os.environ.get('CONTROL_TENSOR_NAME_PATTERNS','attn_scale,attn_scales,mlp_scale,mlp_scales,resid_mix,resid_mixes,q_gain,skip_weight,skip_weights,skip_gates').split(',')if pattern) +class Optimizers: + def __init__(self,h,base_model): + block_named_params=list(base_model.blocks.named_parameters());matrix_params=[p for(name,p)in block_named_params if p.ndim==2 and not any(pattern in name for pattern in CONTROL_TENSOR_NAME_PATTERNS)];scalar_params=[p for(name,p)in block_named_params if p.ndim<2 or any(pattern in name for pattern in CONTROL_TENSOR_NAME_PATTERNS)] + if base_model.skip_weights.numel()>0:scalar_params.append(base_model.skip_weights) + if base_model.skip_gates is not None and base_model.skip_gates.numel()>0:scalar_params.append(base_model.skip_gates) + token_lr=h.tied_embed_lr if h.tie_embeddings else h.embed_lr;tok_params=[{'params':[base_model.tok_emb.weight],'lr':token_lr,'base_lr':token_lr}];self.optimizer_tok=torch.optim.AdamW(tok_params,betas=(h.beta1,h.beta2),eps=h.adam_eps,weight_decay=h.embed_wd,fused=True);self.optimizer_muon=Muon(matrix_params,lr=h.matrix_lr,momentum=h.muon_momentum,backend_steps=h.muon_backend_steps,weight_decay=h.muon_wd,row_normalize=h.muon_row_normalize) + for group in self.optimizer_muon.param_groups:group['base_lr']=h.matrix_lr + self.optimizer_scalar=torch.optim.AdamW([{'params':scalar_params,'lr':h.scalar_lr,'base_lr':h.scalar_lr}],betas=(h.beta1,h.beta2),eps=h.adam_eps,weight_decay=h.adam_wd,fused=True);self.optimizers=[self.optimizer_tok,self.optimizer_muon,self.optimizer_scalar] + if base_model.lm_head is not None:self.optimizer_head=torch.optim.Adam([{'params':[base_model.lm_head.weight],'lr':h.head_lr,'base_lr':h.head_lr}],betas=(h.beta1,h.beta2),eps=h.adam_eps,fused=True);self.optimizers.insert(1,self.optimizer_head) + else:self.optimizer_head=None + def __iter__(self):return iter(self.optimizers) + def zero_grad_all(self): + for opt in self.optimizers:opt.zero_grad(set_to_none=True) + def step(self): + for opt in self.optimizers:opt.step() + self.zero_grad_all() +def restore_fp32_params(model): + for module in model.modules(): + if isinstance(module,CastedLinear):module.float() + for(name,param)in model.named_parameters(): + if(param.ndim<2 or any(pattern in name for pattern in CONTROL_TENSOR_NAME_PATTERNS))and param.dtype!=torch.float32:param.data=param.data.float() +def collect_hessians(model,train_loader,h,device,n_calibration_batches=64): + hessians={};hooks=[] + def make_hook(name): + def hook_fn(module,inp,out): + x=inp[0].detach().float() + if x.ndim==3:x=x.reshape(-1,x.shape[-1]) + if name not in hessians:hessians[name]=torch.zeros(x.shape[1],x.shape[1],dtype=torch.float32,device=device) + hessians[name].addmm_(x.T,x) + return hook_fn + for(name,module)in model.named_modules(): + if isinstance(module,CastedLinear)and module.weight.numel()>65536: + cat=classify_param(name+'.weight') + if cat in('mlp','attn'):hooks.append(module.register_forward_hook(make_hook(name+'.weight'))) + if model.tie_embeddings: + hook_module=model.head_proj if model.head_proj 
is not None else model.final_norm + def make_output_hook(name): + def hook_fn(module,inp,out): + x=out.detach().float() + if x.ndim==3:x=x.reshape(-1,x.shape[-1]) + if name not in hessians:hessians[name]=torch.zeros(x.shape[1],x.shape[1],dtype=torch.float32,device=device) + hessians[name].addmm_(x.T,x) + return hook_fn + hooks.append(hook_module.register_forward_hook(make_output_hook('tok_emb.weight'))) + model.eval() + with torch.no_grad(): + for _ in range(n_calibration_batches):x,_=train_loader.next_batch(h.train_batch_tokens,h.grad_accum_steps);model.forward_logits(x) + for hook in hooks:hook.remove() + for name in hessians:hessians[name]=hessians[name].cpu()/n_calibration_batches + return hessians +def gptq_quantize_weight(w,H,clip_sigmas=3.,clip_range=63,block_size=128): + W_orig=w.float().clone();rows,cols=W_orig.shape;H=H.float().clone();dead=torch.diag(H)==0;H[dead,dead]=1;damp=.01*H.diag().mean();H.diagonal().add_(damp);perm=torch.argsort(H.diag(),descending=True);invperm=torch.argsort(perm);W_perm=W_orig[:,perm].clone();W_perm[:,dead[perm]]=0;H=H[perm][:,perm];Hinv=torch.cholesky_inverse(torch.linalg.cholesky(H));Hinv=torch.linalg.cholesky(Hinv,upper=True);row_std=W_orig.std(dim=1);s=(clip_sigmas*row_std/clip_range).clamp_min(1e-10).to(torch.float16);sf=s.float();Q=torch.zeros(rows,cols,dtype=torch.int8);W_work=W_perm.clone() + for i1 in range(0,cols,block_size): + i2=min(i1+block_size,cols);W_block=W_work[:,i1:i2].clone();Hinv_block=Hinv[i1:i2,i1:i2];Err=torch.zeros(rows,i2-i1) + for j in range(i2-i1):w_col=W_block[:,j];d=Hinv_block[j,j];q_col=torch.clamp(torch.round(w_col/sf),-clip_range,clip_range);Q[:,i1+j]=q_col.to(torch.int8);err=(w_col-q_col.float()*sf)/d;Err[:,j]=err;W_block[:,j:]-=err.unsqueeze(1)*Hinv_block[j,j:].unsqueeze(0) + if i20:out[name]=(q.float()*s.float().view(q.shape[0],*[1]*(q.ndim-1))).to(orig_dtype) + else:out[name]=(q.float()*float(s.item())).to(orig_dtype) + return out +_BSHF_MAGIC=b'BSHF' +def _byte_shuffle(data,stride=2): + if stride<=1 or len(data)0: + base_model.train();chunk_seqs=(chunk_end-chunk_start)//seq_len + if chunk_seqs>0: + cos_lr=h.ttt_lr*.5*(1.+math.cos(math.pi*ci/max(num_chunks-1,1))) + for pg in optimizer.param_groups:pg['lr']=cos_lr + my_seq_s=chunk_seqs*rank//world_size;my_seq_e=chunk_seqs*(rank+1)//world_size;my_chunk_seqs=my_seq_e-my_seq_s + for _ep in range(h.ttt_epochs): + for bs in range(0,my_chunk_seqs,batch_seqs): + be=min(bs+batch_seqs,my_chunk_seqs);actual_bs=my_seq_s+bs;start_tok=chunk_start+actual_bs*seq_len;end_tok=chunk_start+(my_seq_s+be)*seq_len+1 + if end_tok>val_data.val_tokens.numel():continue + local=val_data.val_tokens[start_tok:end_tok].to(device=device,dtype=torch.int64);x=local[:-1].reshape(-1,seq_len);y=local[1:].reshape(-1,seq_len);optimizer.zero_grad(set_to_none=True) + with torch.autocast(device_type='cuda',dtype=torch.bfloat16):loss=base_model(x,y) + loss.backward() + if world_size>1: + for p in ttt_params: + if p.grad is not None:dist.all_reduce(p.grad,op=dist.ReduceOp.AVG) + torch.nn.utils.clip_grad_norm_(ttt_params,1.);optimizer.step() + if dist.is_available()and dist.is_initialized():dist.all_reduce(loss_sum,op=dist.ReduceOp.SUM);dist.all_reduce(token_count,op=dist.ReduceOp.SUM);dist.all_reduce(byte_count,op=dist.ReduceOp.SUM) + for p in base_model.parameters():p.requires_grad_(True) + base_model.eval();return _loss_bpb(loss_sum,token_count,byte_count) +def 
timed_eval(label,fn,*args,**kwargs):torch.cuda.synchronize();t0=time.perf_counter();val_loss,val_bpb=fn(*args,**kwargs);torch.cuda.synchronize();elapsed_ms=1e3*(time.perf_counter()-t0);log(f"{label} val_loss:{val_loss:.8f} val_bpb:{val_bpb:.8f} eval_time:{elapsed_ms:.0f}ms");return val_loss,val_bpb +def train_model(h,device,val_data): + base_model=GPT(h).to(device).bfloat16();restore_fp32_params(base_model);compiled_model=torch.compile(base_model,dynamic=False,fullgraph=True) + if h.distributed:model=DDP(compiled_model,device_ids=[h.local_rank],broadcast_buffers=False) + else:model=compiled_model + log(f"model_params:{sum(p.numel()for p in base_model.parameters())}") + if h.grad_accum_steps!=8//h.world_size:log(f"grad_accum:overridden steps={h.grad_accum_steps}") + optimizers=Optimizers(h,base_model);train_loader=ShuffledSequenceLoader(h,device);max_wallclock_ms=1e3*h.max_wallclock_seconds if h.max_wallclock_seconds>0 else None + if max_wallclock_ms is not None:max_wallclock_ms-=h.gptq_reserve_seconds*1e3;log(f"gptq:reserving {h.gptq_reserve_seconds:.0f}s, effective={max_wallclock_ms:.0f}ms") + def training_frac(step,elapsed_ms): + if max_wallclock_ms is None:return step/max(h.iterations,1) + return elapsed_ms/max(max_wallclock_ms,1e-09) + def lr_mul(frac): + if h.warmdown_frac<=0:return 1. + if frac>=1.-h.warmdown_frac:return max((1.-frac)/h.warmdown_frac,h.min_lr) + return 1. + def step_fn(step,lr_scale): + optimizers.zero_grad_all();train_loss=torch.zeros((),device=device) + for micro_step in range(h.grad_accum_steps): + if h.distributed:model.require_backward_grad_sync=micro_step==h.grad_accum_steps-1 + x,y=train_loader.next_batch(h.train_batch_tokens,h.grad_accum_steps) + with torch.autocast(device_type='cuda',dtype=torch.bfloat16,enabled=True):loss=model(x,y) + train_loss+=loss.detach();(loss/h.grad_accum_steps).backward() + train_loss/=h.grad_accum_steps;frac=min(step/h.muon_momentum_warmup_steps,1.)if h.muon_momentum_warmup_steps>0 else 1.;muon_momentum=(1-frac)*h.muon_momentum_warmup_start+frac*h.muon_momentum + for group in optimizers.optimizer_muon.param_groups:group['momentum']=muon_momentum + for opt in optimizers: + for group in opt.param_groups:group['lr']=group['base_lr']*lr_scale + if h.grad_clip_norm>0:torch.nn.utils.clip_grad_norm_(base_model.parameters(),h.grad_clip_norm) + optimizers.step();return train_loss + if h.warmup_steps>0: + initial_model_state={name:tensor.detach().cpu().clone()for(name,tensor)in base_model.state_dict().items()};initial_optimizer_states=[copy.deepcopy(opt.state_dict())for opt in optimizers];model.train() + for warmup_step in range(h.warmup_steps): + step_fn(warmup_step,1.) + if warmup_step<=5 or(warmup_step+1)%10==0 or warmup_step+1==h.warmup_steps:log(f"warmup_step: {warmup_step+1}/{h.warmup_steps}") + if h.num_loops>0: + base_model.looping_active=True;log(f"loop_warmup:enabled encoder:{base_model.encoder_indices} decoder:{base_model.decoder_indices}") + for warmup_step in range(h.warmup_steps): + step_fn(warmup_step,1.) 
+ if warmup_step<=5 or(warmup_step+1)%10==0 or warmup_step+1==h.warmup_steps:log(f"loop_warmup_step: {warmup_step+1}/{h.warmup_steps}") + base_model.looping_active=False + base_model.load_state_dict(initial_model_state,strict=True) + for(opt,state)in zip(optimizers,initial_optimizer_states,strict=True):opt.load_state_dict(state) + optimizers.zero_grad_all() + if h.distributed:model.require_backward_grad_sync=True + train_loader=ShuffledSequenceLoader(h,device) + ema_state={name:t.detach().float().clone()for(name,t)in base_model.state_dict().items()};ema_decay=h.ema_decay;training_time_ms=.0;stop_after_step=None;torch.cuda.synchronize();t0=time.perf_counter();step=0 + while True: + last_step=step==h.iterations or stop_after_step is not None and step>=stop_after_step;should_validate=last_step or h.val_loss_every>0 and step%h.val_loss_every==0 + if should_validate:torch.cuda.synchronize();training_time_ms+=1e3*(time.perf_counter()-t0);val_loss,val_bpb=eval_val(h,device,val_data,model);log(f"{step}/{h.iterations} val_loss: {val_loss:.4f} val_bpb: {val_bpb:.4f}");torch.cuda.synchronize();t0=time.perf_counter() + if last_step: + if stop_after_step is not None and step0 and not base_model.looping_active and frac>=h.enable_looping_at:base_model.looping_active=True;log(f"layer_loop:enabled step:{step} frac:{frac:.3f} encoder:{base_model.encoder_indices} decoder:{base_model.decoder_indices}") + train_loss=step_fn(step,scale) + with torch.no_grad(): + for(name,t)in base_model.state_dict().items():ema_state[name].mul_(ema_decay).add_(t.detach().float(),alpha=1.-ema_decay) + step+=1;approx_training_time_ms=training_time_ms+1e3*(time.perf_counter()-t0);should_log_train=h.train_log_every>0 and(step<=5 or step%h.train_log_every==0 or stop_after_step is not None) + if should_log_train:tok_per_sec=step*h.train_batch_tokens/(approx_training_time_ms/1e3);log(f"{step}/{h.iterations} train_loss: {train_loss.item():.4f} train_time: {approx_training_time_ms/60000:.1f}m tok/s: {tok_per_sec:.0f}") + reached_cap=max_wallclock_ms is not None and approx_training_time_ms>=max_wallclock_ms + if h.distributed and max_wallclock_ms is not None:reached_cap_tensor=torch.tensor(int(reached_cap),device=device);dist.all_reduce(reached_cap_tensor,op=dist.ReduceOp.MAX);reached_cap=bool(reached_cap_tensor.item()) + if stop_after_step is None and reached_cap:stop_after_step=step + log(f"peak memory allocated: {torch.cuda.max_memory_allocated()//1024//1024} MiB reserved: {torch.cuda.max_memory_reserved()//1024//1024} MiB");log('ema:applying EMA weights');current_state=base_model.state_dict();avg_state={name:t.to(dtype=current_state[name].dtype)for(name,t)in ema_state.items()};base_model.load_state_dict(avg_state,strict=True);return base_model,compiled_model +def train_and_eval(h,device): + random.seed(h.seed);np.random.seed(h.seed);torch.manual_seed(h.seed);torch.cuda.manual_seed_all(h.seed);val_data=ValidationData(h,device);_n_shards=len(list(Path(h.datasets_dir).resolve().glob('fineweb_train_*.bin')));log(f"train_shards: {_n_shards}");log(f"val_tokens: {val_data.val_tokens.numel()-1}");base_model,compiled_model=train_model(h,device,val_data);torch._dynamo.reset();timed_eval('pre-quantization post-ema',eval_val,h,device,val_data,compiled_model) + serialize(h,base_model,Path(__file__).read_text(encoding='utf-8')) + if h.distributed:dist.barrier() + eval_model=deserialize(h,device) + if h.num_loops>0:eval_model.looping_active=True + 
compiled_model=torch.compile(eval_model,dynamic=False,fullgraph=True);timed_eval('quantized',eval_val,h,device,val_data,compiled_model) + if h.sliding_window_enabled:timed_eval('quantized_sliding_window',eval_val_sliding,h,device,val_data,eval_model) + if h.ttt_enabled and h.sliding_window_enabled: + del eval_model,compiled_model;torch._dynamo.reset();torch.cuda.empty_cache();ttt_model=deserialize(h,device) + if h.num_loops>0:ttt_model.looping_active=True + timed_eval('quantized_ttt',eval_val_ttt,h,device,val_data,ttt_model);del ttt_model + if h.etlb_enabled and h.sliding_window_enabled: + if'eval_model'not in dir(): + eval_model=deserialize(h,device) + if h.num_loops>0:eval_model.looping_active=True + timed_eval('quantized_sliding_etlb',eval_val_sliding_etlb,h,device,val_data,eval_model) +def main(): + world_size=int(os.environ.get('WORLD_SIZE','1'));local_rank=int(os.environ.get('LOCAL_RANK','0'));distributed='RANK'in os.environ and'WORLD_SIZE'in os.environ + if not torch.cuda.is_available():raise RuntimeError('CUDA is required') + if world_size<=0:raise ValueError(f"WORLD_SIZE must be positive, got {world_size}") + if 8%world_size!=0:raise ValueError(f"WORLD_SIZE={world_size} must divide 8 so grad_accum_steps stays integral") + device=torch.device('cuda',local_rank);torch.cuda.set_device(device) + if distributed:dist.init_process_group(backend='nccl',device_id=device);dist.barrier() + torch.backends.cuda.matmul.allow_tf32=True;torch.backends.cudnn.allow_tf32=True;torch.set_float32_matmul_precision('high');from torch.backends.cuda import enable_cudnn_sdp,enable_flash_sdp,enable_math_sdp,enable_mem_efficient_sdp;enable_cudnn_sdp(False);enable_flash_sdp(True);enable_mem_efficient_sdp(False);enable_math_sdp(False);torch._dynamo.config.optimize_ddp=False;h=Hyperparameters();set_logging_hparams(h) + if h.is_main_process: + os.makedirs('logs',exist_ok=True);log(100*'=',console=False);log('Hyperparameters:',console=True) + for(k,v)in sorted(vars(type(h)).items()): + if not k.startswith('_'):log(f" {k}: {v}",console=True) + log('='*100,console=False);log(f"Running Python {sys.version}",console=False);log(f"Running PyTorch {torch.__version__}",console=False);log(subprocess.run(['nvidia-smi'],stdout=subprocess.PIPE,stderr=subprocess.PIPE,text=True,check=False).stdout,console=False);log('='*100,console=False) + train_and_eval(h,device) + if distributed:dist.destroy_process_group() +if __name__=='__main__':main() \ No newline at end of file diff --git a/records/track_10min_16mb/2026-04-30_SP8192_HeadwiseGate_EMA_LegalTTT/train_seed1337.log b/records/track_10min_16mb/2026-04-30_SP8192_HeadwiseGate_EMA_LegalTTT/train_seed1337.log new file mode 100644 index 0000000000..ba03b3c4c6 --- /dev/null +++ b/records/track_10min_16mb/2026-04-30_SP8192_HeadwiseGate_EMA_LegalTTT/train_seed1337.log @@ -0,0 +1,242 @@ +==================================================================================================== +Hyperparameters: + adam_eps: 1e-08 + adam_wd: 0.02 + beta1: 0.9 + beta2: 0.95 + compressor: brotli + data_dir: ./data/ + datasets_dir: ./data/datasets/fineweb10B_sp8192 + distributed: True + ema_decay: 0.9965 + embed_bits: 7 + embed_clip_sigmas: 15.0 + embed_lr: 0.6 + embed_wd: 0.085 + embedding_dim: 512 + enable_looping_at: 0.35 + etlb_clip: 3.0 + etlb_enabled: False + etlb_lr: 0.05 + etlb_steps: 5 + eval_seq_len: 2048 + eval_stride: 64 + gated_attn: headwise + gptq_calibration_batches: 64 + gptq_reserve_seconds: 12.0 + grad_accum_steps: 1 + grad_clip_norm: 0.3 + head_lr: 0.008 + is_main_process: True 
+ iterations: 20000 + ln_scale: True + local_rank: 0 + logfile: logs/v2_c6_seed1337.txt + logit_softcap: 30.0 + loop_end: 5 + loop_start: 3 + matrix_bits: 6 + matrix_clip_sigmas: 12.85 + matrix_lr: 0.022 + max_wallclock_seconds: 600.0 + min_lr: 0.0 + mlp_mult: 4.0 + model_dim: 512 + model_path: final_model.pt + muon_backend_steps: 5 + muon_beta2: 0.95 + muon_momentum: 0.99 + muon_momentum_warmup_start: 0.92 + muon_momentum_warmup_steps: 1500 + muon_row_normalize: True + muon_wd: 0.095 + num_heads: 8 + num_kv_heads: 4 + num_layers: 11 + num_loops: 2 + parallel_residual_start: 7 + qk_gain_init: 5.25 + quantized_model_path: final_model.int6.ptz + rank: 0 + rope_base: 10000.0 + rope_dims: 16 + rope_train_seq_len: 2048 + run_id: v2_c6_seed1337 + scalar_lr: 0.02 + seed: 1337 + skip_gates_enabled: True + sliding_window_enabled: True + tie_embeddings: True + tied_embed_init_std: 0.005 + tied_embed_lr: 0.03 + tokenizer_path: ./data/tokenizers/fineweb_8192_bpe.model + train_batch_tokens: 786432 + train_files: ./data/datasets/fineweb10B_sp8192/fineweb_train_*.bin + train_log_every: 100 + train_seq_len: 2048 + ttt_chunk_tokens: 32768 + ttt_enabled: True + ttt_epochs: 3 + ttt_lr: 0.005 + ttt_momentum: 0.9 + val_batch_tokens: 524288 + val_files: ./data/datasets/fineweb10B_sp8192/fineweb_val_*.bin + val_loss_every: 500 + value_residual_alpha: 0.0 + vocab_size: 8192 + warmdown_frac: 0.72 + warmup_steps: 20 + world_size: 8 + xsa_last_n: 11 +==================================================================================================== +Running Python 3.11.10 (main, Sep 7 2024, 18:35:41) [GCC 11.4.0] +Running PyTorch 2.11.0+cu130 +Wed Apr 29 12:31:14 2026 ++-----------------------------------------------------------------------------------------+ +| NVIDIA-SMI 580.126.09 Driver Version: 580.126.09 CUDA Version: 13.0 | ++-----------------------------------------+------------------------+----------------------+ +| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | +| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | +| | | MIG M. 
| +|=========================================+========================+======================| +| 0 NVIDIA H100 80GB HBM3 On | 00000000:19:00.0 Off | 0 | +| N/A 48C P0 128W / 700W | 1505MiB / 81559MiB | 0% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ +| 1 NVIDIA H100 80GB HBM3 On | 00000000:3B:00.0 Off | 0 | +| N/A 38C P0 116W / 700W | 1505MiB / 81559MiB | 0% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ +| 2 NVIDIA H100 80GB HBM3 On | 00000000:4C:00.0 Off | 0 | +| N/A 37C P0 116W / 700W | 1505MiB / 81559MiB | 0% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ +| 3 NVIDIA H100 80GB HBM3 On | 00000000:5D:00.0 Off | 0 | +| N/A 46C P0 126W / 700W | 1505MiB / 81559MiB | 0% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ +| 4 NVIDIA H100 80GB HBM3 On | 00000000:9B:00.0 Off | 0 | +| N/A 49C P0 130W / 700W | 1505MiB / 81559MiB | 3% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ +| 5 NVIDIA H100 80GB HBM3 On | 00000000:BB:00.0 Off | 0 | +| N/A 39C P0 120W / 700W | 1505MiB / 81559MiB | 2% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ +| 6 NVIDIA H100 80GB HBM3 On | 00000000:CB:00.0 Off | 0 | +| N/A 48C P0 128W / 700W | 1505MiB / 81559MiB | 2% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ +| 7 NVIDIA H100 80GB HBM3 On | 00000000:DB:00.0 Off | 0 | +| N/A 38C P0 119W / 700W | 1505MiB / 81559MiB | 2% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ + ++-----------------------------------------------------------------------------------------+ +| Processes: | +| GPU GI CI PID Type Process name GPU Memory | +| ID ID Usage | +|=========================================================================================| +| No running processes found | ++-----------------------------------------------------------------------------------------+ + +==================================================================================================== +train_shards: 80 +val_tokens: 40540160 +model_params:35989592 +gptq:reserving 12s, effective=588000ms +warmup_step: 1/20 +warmup_step: 2/20 +warmup_step: 3/20 +warmup_step: 4/20 +warmup_step: 5/20 +warmup_step: 6/20 +warmup_step: 10/20 +warmup_step: 20/20 +loop_warmup:enabled encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +loop_warmup_step: 1/20 +loop_warmup_step: 2/20 +loop_warmup_step: 3/20 +loop_warmup_step: 4/20 +loop_warmup_step: 5/20 +loop_warmup_step: 6/20 +loop_warmup_step: 10/20 +loop_warmup_step: 20/20 +0/20000 val_loss: 9.0068 val_bpb: 3.4868 +1/20000 train_loss: 9.0088 train_time: 0.0m tok/s: 8152392 +2/20000 train_loss: 12.2787 train_time: 0.0m tok/s: 7983132 +3/20000 train_loss: 10.9339 train_time: 0.0m tok/s: 7881069 +4/20000 train_loss: 9.4166 train_time: 0.0m tok/s: 7816024 +5/20000 train_loss: 8.3102 train_time: 0.0m tok/s: 7783888 +100/20000 train_loss: 4.5645 train_time: 0.2m tok/s: 7590748 +200/20000 train_loss: 3.7342 train_time: 0.3m tok/s: 7582352 +300/20000 train_loss: 3.4871 train_time: 0.5m tok/s: 7576320 +400/20000 train_loss: 3.3571 
train_time: 0.7m tok/s: 7572042 +500/20000 train_loss: 3.3662 train_time: 0.9m tok/s: 7568371 +500/20000 val_loss: 3.3133 val_bpb: 1.2827 +600/20000 train_loss: 3.2899 train_time: 1.0m tok/s: 7565238 +700/20000 train_loss: 3.3220 train_time: 1.2m tok/s: 7562824 +800/20000 train_loss: 3.2130 train_time: 1.4m tok/s: 7561290 +900/20000 train_loss: 3.1658 train_time: 1.6m tok/s: 7561203 +1000/20000 train_loss: 3.2657 train_time: 1.7m tok/s: 7560948 +1000/20000 val_loss: 3.1960 val_bpb: 1.2373 +1100/20000 train_loss: 3.0734 train_time: 1.9m tok/s: 7561322 +1200/20000 train_loss: 3.1693 train_time: 2.1m tok/s: 7562658 +1300/20000 train_loss: 3.1835 train_time: 2.3m tok/s: 7563490 +1400/20000 train_loss: 3.1695 train_time: 2.4m tok/s: 7564196 +1500/20000 train_loss: 3.1733 train_time: 2.6m tok/s: 7564921 +1500/20000 val_loss: 3.1484 val_bpb: 1.2188 +1600/20000 train_loss: 3.1777 train_time: 2.8m tok/s: 7565726 +1700/20000 train_loss: 3.1173 train_time: 2.9m tok/s: 7566589 +1800/20000 train_loss: 3.1076 train_time: 3.1m tok/s: 7567120 +1900/20000 train_loss: 3.1316 train_time: 3.3m tok/s: 7567926 +layer_loop:enabled step:1981 frac:0.350 encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +2000/20000 train_loss: 3.1238 train_time: 3.5m tok/s: 7534164 +2000/20000 val_loss: 3.1405 val_bpb: 1.2158 +2100/20000 train_loss: 3.1535 train_time: 3.7m tok/s: 7369858 +2200/20000 train_loss: 3.0303 train_time: 4.0m tok/s: 7226284 +2300/20000 train_loss: 3.1113 train_time: 4.2m tok/s: 7100077 +2400/20000 train_loss: 2.9770 train_time: 4.5m tok/s: 6979056 +2500/20000 train_loss: 3.1097 train_time: 4.8m tok/s: 6870929 +2500/20000 val_loss: 3.0071 val_bpb: 1.1641 +2600/20000 train_loss: 3.0104 train_time: 5.0m tok/s: 6782235 +2700/20000 train_loss: 2.9826 train_time: 5.3m tok/s: 6702212 +2800/20000 train_loss: 2.9001 train_time: 5.5m tok/s: 6629643 +2900/20000 train_loss: 2.9302 train_time: 5.8m tok/s: 6549147 +3000/20000 train_loss: 2.8857 train_time: 6.1m tok/s: 6489373 +3000/20000 val_loss: 2.9609 val_bpb: 1.1463 +3100/20000 train_loss: 2.9701 train_time: 6.3m tok/s: 6434348 +3200/20000 train_loss: 2.8887 train_time: 6.6m tok/s: 6383803 +3300/20000 train_loss: 3.0209 train_time: 6.8m tok/s: 6336969 +3400/20000 train_loss: 2.9710 train_time: 7.1m tok/s: 6293534 +3500/20000 train_loss: 2.9246 train_time: 7.3m tok/s: 6253073 +3500/20000 val_loss: 2.9136 val_bpb: 1.1280 +3600/20000 train_loss: 2.9852 train_time: 7.6m tok/s: 6215470 +3700/20000 train_loss: 2.8358 train_time: 7.8m tok/s: 6179379 +3800/20000 train_loss: 2.9140 train_time: 8.1m tok/s: 6146353 +3900/20000 train_loss: 2.9994 train_time: 8.4m tok/s: 6115495 +4000/20000 train_loss: 2.8022 train_time: 8.6m tok/s: 6086484 +4000/20000 val_loss: 2.8571 val_bpb: 1.1061 +4100/20000 train_loss: 2.8320 train_time: 8.9m tok/s: 6059161 +4200/20000 train_loss: 2.8535 train_time: 9.1m tok/s: 6033335 +4300/20000 train_loss: 2.9284 train_time: 9.4m tok/s: 6008920 +4400/20000 train_loss: 2.7724 train_time: 9.6m tok/s: 5985838 +4465/20000 val_loss: 2.8064 val_bpb: 1.0864 +stopping_early: wallclock_cap train_time: 588034ms step: 4465/20000 +peak memory allocated: 39116 MiB reserved: 39228 MiB +ema:applying EMA weights +pre-quantization post-ema val_loss:2.80337728 val_bpb:1.08527471 eval_time:6806ms +Serialized model: 135611257 bytes +Code size: 50027 bytes +GPTQ:collecting Hessians from calibration data... 
+GPTQ:collected 67 Hessians in 13.1s +Quantized weights: + gptq (int6): blocks.attn.c_k.weight, blocks.attn.c_q.weight, blocks.attn.c_v.weight, blocks.attn.proj.weight, blocks.mlp.fc.weight, blocks.mlp.proj.weight + gptq (int7): tok_emb.weight + passthrough (float16): blocks.attn.q_gain, blocks.attn_scale, blocks.mlp_scale, blocks.resid_mix, skip_gates, skip_weights +Serialized model quantized+brotli: 15694065 bytes +Total submission size quantized+brotli: 15744092 bytes +quantized val_loss:2.83547250 val_bpb:1.09769977 eval_time:9179ms +quantized_sliding_window val_loss:2.79222167 val_bpb:1.08095602 eval_time:93255ms +ttt:start chunks=1238 ttt_lr=0.005 ttt_epochs=3 +quantized_ttt val_loss:2.78825707 val_bpb:1.07942120 eval_time:335110ms diff --git a/records/track_10min_16mb/2026-04-30_SP8192_HeadwiseGate_EMA_LegalTTT/train_seed2025.log b/records/track_10min_16mb/2026-04-30_SP8192_HeadwiseGate_EMA_LegalTTT/train_seed2025.log new file mode 100644 index 0000000000..456dfe696d --- /dev/null +++ b/records/track_10min_16mb/2026-04-30_SP8192_HeadwiseGate_EMA_LegalTTT/train_seed2025.log @@ -0,0 +1,242 @@ +==================================================================================================== +Hyperparameters: + adam_eps: 1e-08 + adam_wd: 0.02 + beta1: 0.9 + beta2: 0.95 + compressor: brotli + data_dir: ./data/ + datasets_dir: ./data/datasets/fineweb10B_sp8192 + distributed: True + ema_decay: 0.9965 + embed_bits: 7 + embed_clip_sigmas: 15.0 + embed_lr: 0.6 + embed_wd: 0.085 + embedding_dim: 512 + enable_looping_at: 0.35 + etlb_clip: 3.0 + etlb_enabled: False + etlb_lr: 0.05 + etlb_steps: 5 + eval_seq_len: 2048 + eval_stride: 64 + gated_attn: headwise + gptq_calibration_batches: 64 + gptq_reserve_seconds: 12.0 + grad_accum_steps: 1 + grad_clip_norm: 0.3 + head_lr: 0.008 + is_main_process: True + iterations: 20000 + ln_scale: True + local_rank: 0 + logfile: logs/v2_c6_seed2025.txt + logit_softcap: 30.0 + loop_end: 5 + loop_start: 3 + matrix_bits: 6 + matrix_clip_sigmas: 12.85 + matrix_lr: 0.022 + max_wallclock_seconds: 600.0 + min_lr: 0.0 + mlp_mult: 4.0 + model_dim: 512 + model_path: final_model.pt + muon_backend_steps: 5 + muon_beta2: 0.95 + muon_momentum: 0.99 + muon_momentum_warmup_start: 0.92 + muon_momentum_warmup_steps: 1500 + muon_row_normalize: True + muon_wd: 0.095 + num_heads: 8 + num_kv_heads: 4 + num_layers: 11 + num_loops: 2 + parallel_residual_start: 7 + qk_gain_init: 5.25 + quantized_model_path: final_model.int6.ptz + rank: 0 + rope_base: 10000.0 + rope_dims: 16 + rope_train_seq_len: 2048 + run_id: v2_c6_seed2025 + scalar_lr: 0.02 + seed: 2025 + skip_gates_enabled: True + sliding_window_enabled: True + tie_embeddings: True + tied_embed_init_std: 0.005 + tied_embed_lr: 0.03 + tokenizer_path: ./data/tokenizers/fineweb_8192_bpe.model + train_batch_tokens: 786432 + train_files: ./data/datasets/fineweb10B_sp8192/fineweb_train_*.bin + train_log_every: 100 + train_seq_len: 2048 + ttt_chunk_tokens: 32768 + ttt_enabled: True + ttt_epochs: 3 + ttt_lr: 0.005 + ttt_momentum: 0.9 + val_batch_tokens: 524288 + val_files: ./data/datasets/fineweb10B_sp8192/fineweb_val_*.bin + val_loss_every: 500 + value_residual_alpha: 0.0 + vocab_size: 8192 + warmdown_frac: 0.72 + warmup_steps: 20 + world_size: 8 + xsa_last_n: 11 +==================================================================================================== +Running Python 3.11.10 (main, Sep 7 2024, 18:35:41) [GCC 11.4.0] +Running PyTorch 2.11.0+cu130 +Wed Apr 29 12:51:28 2026 
++-----------------------------------------------------------------------------------------+ +| NVIDIA-SMI 580.126.09 Driver Version: 580.126.09 CUDA Version: 13.0 | ++-----------------------------------------+------------------------+----------------------+ +| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | +| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | +| | | MIG M. | +|=========================================+========================+======================| +| 0 NVIDIA H100 80GB HBM3 On | 00000000:19:00.0 Off | 0 | +| N/A 52C P0 131W / 700W | 1505MiB / 81559MiB | 0% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ +| 1 NVIDIA H100 80GB HBM3 On | 00000000:3B:00.0 Off | 0 | +| N/A 40C P0 117W / 700W | 1505MiB / 81559MiB | 0% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ +| 2 NVIDIA H100 80GB HBM3 On | 00000000:4C:00.0 Off | 0 | +| N/A 39C P0 118W / 700W | 1505MiB / 81559MiB | 2% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ +| 3 NVIDIA H100 80GB HBM3 On | 00000000:5D:00.0 Off | 0 | +| N/A 50C P0 130W / 700W | 1505MiB / 81559MiB | 3% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ +| 4 NVIDIA H100 80GB HBM3 On | 00000000:9B:00.0 Off | 0 | +| N/A 52C P0 133W / 700W | 1505MiB / 81559MiB | 3% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ +| 5 NVIDIA H100 80GB HBM3 On | 00000000:BB:00.0 Off | 0 | +| N/A 41C P0 122W / 700W | 1505MiB / 81559MiB | 3% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ +| 6 NVIDIA H100 80GB HBM3 On | 00000000:CB:00.0 Off | 0 | +| N/A 52C P0 133W / 700W | 1505MiB / 81559MiB | 2% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ +| 7 NVIDIA H100 80GB HBM3 On | 00000000:DB:00.0 Off | 0 | +| N/A 40C P0 120W / 700W | 1505MiB / 81559MiB | 3% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ + ++-----------------------------------------------------------------------------------------+ +| Processes: | +| GPU GI CI PID Type Process name GPU Memory | +| ID ID Usage | +|=========================================================================================| +| No running processes found | ++-----------------------------------------------------------------------------------------+ + +==================================================================================================== +train_shards: 80 +val_tokens: 40540160 +model_params:35989592 +gptq:reserving 12s, effective=588000ms +warmup_step: 1/20 +warmup_step: 2/20 +warmup_step: 3/20 +warmup_step: 4/20 +warmup_step: 5/20 +warmup_step: 6/20 +warmup_step: 10/20 +warmup_step: 20/20 +loop_warmup:enabled encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +loop_warmup_step: 1/20 +loop_warmup_step: 2/20 +loop_warmup_step: 3/20 +loop_warmup_step: 4/20 +loop_warmup_step: 5/20 +loop_warmup_step: 6/20 +loop_warmup_step: 10/20 +loop_warmup_step: 20/20 +0/20000 val_loss: 9.0100 val_bpb: 3.4880 +1/20000 train_loss: 9.0113 train_time: 0.0m tok/s: 8136987 +2/20000 train_loss: 12.2871 train_time: 0.0m 
tok/s: 7973934 +3/20000 train_loss: 10.9455 train_time: 0.0m tok/s: 7868032 +4/20000 train_loss: 9.4306 train_time: 0.0m tok/s: 7812482 +5/20000 train_loss: 8.3383 train_time: 0.0m tok/s: 7780567 +100/20000 train_loss: 4.5317 train_time: 0.2m tok/s: 7599495 +200/20000 train_loss: 3.7031 train_time: 0.3m tok/s: 7595751 +300/20000 train_loss: 3.4961 train_time: 0.5m tok/s: 7591104 +400/20000 train_loss: 3.3587 train_time: 0.7m tok/s: 7587028 +500/20000 train_loss: 3.3750 train_time: 0.9m tok/s: 7583681 +500/20000 val_loss: 3.3217 val_bpb: 1.2859 +600/20000 train_loss: 3.2996 train_time: 1.0m tok/s: 7580783 +700/20000 train_loss: 3.3299 train_time: 1.2m tok/s: 7578309 +800/20000 train_loss: 3.2183 train_time: 1.4m tok/s: 7576945 +900/20000 train_loss: 3.1755 train_time: 1.6m tok/s: 7576378 +1000/20000 train_loss: 3.2762 train_time: 1.7m tok/s: 7576194 +1000/20000 val_loss: 3.2028 val_bpb: 1.2399 +1100/20000 train_loss: 3.0803 train_time: 1.9m tok/s: 7575691 +1200/20000 train_loss: 3.1721 train_time: 2.1m tok/s: 7576522 +1300/20000 train_loss: 3.1941 train_time: 2.2m tok/s: 7576654 +1400/20000 train_loss: 3.1761 train_time: 2.4m tok/s: 7577146 +1500/20000 train_loss: 3.1801 train_time: 2.6m tok/s: 7577975 +1500/20000 val_loss: 3.1553 val_bpb: 1.2215 +1600/20000 train_loss: 3.1853 train_time: 2.8m tok/s: 7578490 +1700/20000 train_loss: 3.1251 train_time: 2.9m tok/s: 7578538 +1800/20000 train_loss: 3.1163 train_time: 3.1m tok/s: 7578683 +1900/20000 train_loss: 3.1374 train_time: 3.3m tok/s: 7579243 +layer_loop:enabled step:1984 frac:0.350 encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +2000/20000 train_loss: 3.1622 train_time: 3.5m tok/s: 7550122 +2000/20000 val_loss: 3.1755 val_bpb: 1.2293 +2100/20000 train_loss: 3.1634 train_time: 3.7m tok/s: 7384564 +2200/20000 train_loss: 3.0355 train_time: 4.0m tok/s: 7229413 +2300/20000 train_loss: 3.1132 train_time: 4.3m tok/s: 7093224 +2400/20000 train_loss: 2.9776 train_time: 4.5m tok/s: 6981816 +2500/20000 train_loss: 3.1128 train_time: 4.8m tok/s: 6882075 +2500/20000 val_loss: 3.0115 val_bpb: 1.1658 +2600/20000 train_loss: 3.0165 train_time: 5.0m tok/s: 6792901 +2700/20000 train_loss: 2.9889 train_time: 5.3m tok/s: 6712424 +2800/20000 train_loss: 2.9033 train_time: 5.5m tok/s: 6639347 +2900/20000 train_loss: 2.9284 train_time: 5.8m tok/s: 6553098 +3000/20000 train_loss: 2.8865 train_time: 6.1m tok/s: 6493052 +3000/20000 val_loss: 2.9638 val_bpb: 1.1474 +3100/20000 train_loss: 2.9726 train_time: 6.3m tok/s: 6437965 +3200/20000 train_loss: 2.8972 train_time: 6.6m tok/s: 6387121 +3300/20000 train_loss: 3.0272 train_time: 6.8m tok/s: 6340155 +3400/20000 train_loss: 2.9736 train_time: 7.1m tok/s: 6296661 +3500/20000 train_loss: 2.9283 train_time: 7.3m tok/s: 6256240 +3500/20000 val_loss: 2.9162 val_bpb: 1.1289 +3600/20000 train_loss: 2.9896 train_time: 7.6m tok/s: 6217568 +3700/20000 train_loss: 2.8387 train_time: 7.8m tok/s: 6182477 +3800/20000 train_loss: 2.9144 train_time: 8.1m tok/s: 6149527 +3900/20000 train_loss: 3.0018 train_time: 8.4m tok/s: 6118694 +4000/20000 train_loss: 2.8041 train_time: 8.6m tok/s: 6089650 +4000/20000 val_loss: 2.8597 val_bpb: 1.1071 +4100/20000 train_loss: 2.8328 train_time: 8.9m tok/s: 6062257 +4200/20000 train_loss: 2.8606 train_time: 9.1m tok/s: 6036324 +4300/20000 train_loss: 2.9303 train_time: 9.4m tok/s: 6011767 +4400/20000 train_loss: 2.7797 train_time: 9.6m tok/s: 5988653 +4467/20000 val_loss: 2.8087 val_bpb: 1.0873 +stopping_early: wallclock_cap train_time: 588064ms step: 4467/20000 +peak 
memory allocated: 39116 MiB reserved: 39228 MiB +ema:applying EMA weights +pre-quantization post-ema val_loss:2.80567668 val_bpb:1.08616487 eval_time:6853ms +Serialized model: 135611257 bytes +Code size: 50027 bytes +GPTQ:collecting Hessians from calibration data... +GPTQ:collected 67 Hessians in 13.1s +Quantized weights: + gptq (int6): blocks.attn.c_k.weight, blocks.attn.c_q.weight, blocks.attn.c_v.weight, blocks.attn.proj.weight, blocks.mlp.fc.weight, blocks.mlp.proj.weight + gptq (int7): tok_emb.weight + passthrough (float16): blocks.attn.q_gain, blocks.attn_scale, blocks.mlp_scale, blocks.resid_mix, skip_gates, skip_weights +Serialized model quantized+brotli: 15693855 bytes +Total submission size quantized+brotli: 15743882 bytes +quantized val_loss:2.83798603 val_bpb:1.09867283 eval_time:9178ms +quantized_sliding_window val_loss:2.79483308 val_bpb:1.08196698 eval_time:93119ms +ttt:start chunks=1238 ttt_lr=0.005 ttt_epochs=3 +quantized_ttt val_loss:2.79075426 val_bpb:1.08038794 eval_time:333981ms diff --git a/records/track_10min_16mb/2026-04-30_SP8192_HeadwiseGate_EMA_LegalTTT/train_seed42.log b/records/track_10min_16mb/2026-04-30_SP8192_HeadwiseGate_EMA_LegalTTT/train_seed42.log new file mode 100644 index 0000000000..6f2bde1631 --- /dev/null +++ b/records/track_10min_16mb/2026-04-30_SP8192_HeadwiseGate_EMA_LegalTTT/train_seed42.log @@ -0,0 +1,242 @@ +==================================================================================================== +Hyperparameters: + adam_eps: 1e-08 + adam_wd: 0.02 + beta1: 0.9 + beta2: 0.95 + compressor: brotli + data_dir: ./data/ + datasets_dir: ./data/datasets/fineweb10B_sp8192 + distributed: True + ema_decay: 0.9965 + embed_bits: 7 + embed_clip_sigmas: 15.0 + embed_lr: 0.6 + embed_wd: 0.085 + embedding_dim: 512 + enable_looping_at: 0.35 + etlb_clip: 3.0 + etlb_enabled: False + etlb_lr: 0.05 + etlb_steps: 5 + eval_seq_len: 2048 + eval_stride: 64 + gated_attn: headwise + gptq_calibration_batches: 64 + gptq_reserve_seconds: 12.0 + grad_accum_steps: 1 + grad_clip_norm: 0.3 + head_lr: 0.008 + is_main_process: True + iterations: 20000 + ln_scale: True + local_rank: 0 + logfile: logs/v2_c6_seed42.txt + logit_softcap: 30.0 + loop_end: 5 + loop_start: 3 + matrix_bits: 6 + matrix_clip_sigmas: 12.85 + matrix_lr: 0.022 + max_wallclock_seconds: 600.0 + min_lr: 0.0 + mlp_mult: 4.0 + model_dim: 512 + model_path: final_model.pt + muon_backend_steps: 5 + muon_beta2: 0.95 + muon_momentum: 0.99 + muon_momentum_warmup_start: 0.92 + muon_momentum_warmup_steps: 1500 + muon_row_normalize: True + muon_wd: 0.095 + num_heads: 8 + num_kv_heads: 4 + num_layers: 11 + num_loops: 2 + parallel_residual_start: 7 + qk_gain_init: 5.25 + quantized_model_path: final_model.int6.ptz + rank: 0 + rope_base: 10000.0 + rope_dims: 16 + rope_train_seq_len: 2048 + run_id: v2_c6_seed42 + scalar_lr: 0.02 + seed: 42 + skip_gates_enabled: True + sliding_window_enabled: True + tie_embeddings: True + tied_embed_init_std: 0.005 + tied_embed_lr: 0.03 + tokenizer_path: ./data/tokenizers/fineweb_8192_bpe.model + train_batch_tokens: 786432 + train_files: ./data/datasets/fineweb10B_sp8192/fineweb_train_*.bin + train_log_every: 100 + train_seq_len: 2048 + ttt_chunk_tokens: 32768 + ttt_enabled: True + ttt_epochs: 3 + ttt_lr: 0.005 + ttt_momentum: 0.9 + val_batch_tokens: 524288 + val_files: ./data/datasets/fineweb10B_sp8192/fineweb_val_*.bin + val_loss_every: 500 + value_residual_alpha: 0.0 + vocab_size: 8192 + warmdown_frac: 0.72 + warmup_steps: 20 + world_size: 8 + xsa_last_n: 11 
+==================================================================================================== +Running Python 3.11.10 (main, Sep 7 2024, 18:35:41) [GCC 11.4.0] +Running PyTorch 2.11.0+cu130 +Wed Apr 29 12:07:07 2026 ++-----------------------------------------------------------------------------------------+ +| NVIDIA-SMI 580.126.09 Driver Version: 580.126.09 CUDA Version: 13.0 | ++-----------------------------------------+------------------------+----------------------+ +| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | +| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | +| | | MIG M. | +|=========================================+========================+======================| +| 0 NVIDIA H100 80GB HBM3 On | 00000000:19:00.0 Off | 0 | +| N/A 36C P0 120W / 700W | 1505MiB / 81559MiB | 0% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ +| 1 NVIDIA H100 80GB HBM3 On | 00000000:3B:00.0 Off | 0 | +| N/A 32C P0 113W / 700W | 1505MiB / 81559MiB | 0% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ +| 2 NVIDIA H100 80GB HBM3 On | 00000000:4C:00.0 Off | 0 | +| N/A 32C P0 113W / 700W | 1505MiB / 81559MiB | 3% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ +| 3 NVIDIA H100 80GB HBM3 On | 00000000:5D:00.0 Off | 0 | +| N/A 35C P0 121W / 700W | 1505MiB / 81559MiB | 0% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ +| 4 NVIDIA H100 80GB HBM3 On | 00000000:9B:00.0 Off | 0 | +| N/A 37C P0 120W / 700W | 1505MiB / 81559MiB | 0% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ +| 5 NVIDIA H100 80GB HBM3 On | 00000000:BB:00.0 Off | 0 | +| N/A 34C P0 118W / 700W | 1505MiB / 81559MiB | 0% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ +| 6 NVIDIA H100 80GB HBM3 On | 00000000:CB:00.0 Off | 0 | +| N/A 36C P0 121W / 700W | 1505MiB / 81559MiB | 0% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ +| 7 NVIDIA H100 80GB HBM3 On | 00000000:DB:00.0 Off | 0 | +| N/A 33C P0 114W / 700W | 1505MiB / 81559MiB | 0% Default | +| | | Disabled | ++-----------------------------------------+------------------------+----------------------+ + ++-----------------------------------------------------------------------------------------+ +| Processes: | +| GPU GI CI PID Type Process name GPU Memory | +| ID ID Usage | +|=========================================================================================| +| No running processes found | ++-----------------------------------------------------------------------------------------+ + +==================================================================================================== +train_shards: 80 +val_tokens: 40540160 +model_params:35989592 +gptq:reserving 12s, effective=588000ms +warmup_step: 1/20 +warmup_step: 2/20 +warmup_step: 3/20 +warmup_step: 4/20 +warmup_step: 5/20 +warmup_step: 6/20 +warmup_step: 10/20 +warmup_step: 20/20 +loop_warmup:enabled encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +loop_warmup_step: 1/20 +loop_warmup_step: 2/20 +loop_warmup_step: 3/20 +loop_warmup_step: 4/20 +loop_warmup_step: 5/20 
+loop_warmup_step: 6/20 +loop_warmup_step: 10/20 +loop_warmup_step: 20/20 +0/20000 val_loss: 9.0045 val_bpb: 3.4859 +1/20000 train_loss: 9.0064 train_time: 0.0m tok/s: 8012323 +2/20000 train_loss: 12.3019 train_time: 0.0m tok/s: 7924952 +3/20000 train_loss: 10.9537 train_time: 0.0m tok/s: 7854995 +4/20000 train_loss: 9.4041 train_time: 0.0m tok/s: 7816103 +5/20000 train_loss: 8.2936 train_time: 0.0m tok/s: 7784731 +100/20000 train_loss: 4.5489 train_time: 0.2m tok/s: 7603114 +200/20000 train_loss: 3.7193 train_time: 0.3m tok/s: 7602987 +300/20000 train_loss: 3.4907 train_time: 0.5m tok/s: 7600366 +400/20000 train_loss: 3.3617 train_time: 0.7m tok/s: 7582159 +500/20000 train_loss: 3.3727 train_time: 0.9m tok/s: 7578959 +500/20000 val_loss: 3.3234 val_bpb: 1.2866 +600/20000 train_loss: 3.3028 train_time: 1.0m tok/s: 7576696 +700/20000 train_loss: 3.3330 train_time: 1.2m tok/s: 7575419 +800/20000 train_loss: 3.2133 train_time: 1.4m tok/s: 7574082 +900/20000 train_loss: 3.1722 train_time: 1.6m tok/s: 7574208 +1000/20000 train_loss: 3.2775 train_time: 1.7m tok/s: 7574614 +1000/20000 val_loss: 3.2036 val_bpb: 1.2402 +1100/20000 train_loss: 3.0824 train_time: 1.9m tok/s: 7575270 +1200/20000 train_loss: 3.1757 train_time: 2.1m tok/s: 7576344 +1300/20000 train_loss: 3.1877 train_time: 2.2m tok/s: 7577501 +1400/20000 train_loss: 3.1795 train_time: 2.4m tok/s: 7578179 +1500/20000 train_loss: 3.1783 train_time: 2.6m tok/s: 7578636 +1500/20000 val_loss: 3.1560 val_bpb: 1.2218 +1600/20000 train_loss: 3.1846 train_time: 2.8m tok/s: 7579127 +1700/20000 train_loss: 3.1286 train_time: 2.9m tok/s: 7579600 +1800/20000 train_loss: 3.1151 train_time: 3.1m tok/s: 7579945 +1900/20000 train_loss: 3.1394 train_time: 3.3m tok/s: 7580141 +layer_loop:enabled step:1984 frac:0.350 encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +2000/20000 train_loss: 3.1663 train_time: 3.5m tok/s: 7551603 +2000/20000 val_loss: 3.1795 val_bpb: 1.2309 +2100/20000 train_loss: 3.1656 train_time: 3.7m tok/s: 7385791 +2200/20000 train_loss: 3.0397 train_time: 4.0m tok/s: 7241043 +2300/20000 train_loss: 3.1158 train_time: 4.2m tok/s: 7113972 +2400/20000 train_loss: 2.9803 train_time: 4.5m tok/s: 7001149 +2500/20000 train_loss: 3.1147 train_time: 4.7m tok/s: 6900238 +2500/20000 val_loss: 3.0151 val_bpb: 1.1672 +2600/20000 train_loss: 3.0190 train_time: 5.0m tok/s: 6809739 +2700/20000 train_loss: 2.9911 train_time: 5.3m tok/s: 6728079 +2800/20000 train_loss: 2.9091 train_time: 5.5m tok/s: 6654002 +2900/20000 train_loss: 2.9346 train_time: 5.8m tok/s: 6586443 +3000/20000 train_loss: 2.8931 train_time: 6.0m tok/s: 6524753 +3000/20000 val_loss: 2.9681 val_bpb: 1.1490 +3100/20000 train_loss: 2.9785 train_time: 6.3m tok/s: 6468147 +3200/20000 train_loss: 2.8975 train_time: 6.5m tok/s: 6416041 +3300/20000 train_loss: 3.0301 train_time: 6.8m tok/s: 6367913 +3400/20000 train_loss: 2.9782 train_time: 7.0m tok/s: 6323280 +3500/20000 train_loss: 2.9307 train_time: 7.3m tok/s: 6281915 +3500/20000 val_loss: 2.9204 val_bpb: 1.1306 +3600/20000 train_loss: 2.9910 train_time: 7.6m tok/s: 6232875 +3700/20000 train_loss: 2.8466 train_time: 7.8m tok/s: 6197189 +3800/20000 train_loss: 2.9162 train_time: 8.1m tok/s: 6163813 +3900/20000 train_loss: 3.0050 train_time: 8.3m tok/s: 6132356 +4000/20000 train_loss: 2.8088 train_time: 8.6m tok/s: 6093864 +4000/20000 val_loss: 2.8635 val_bpb: 1.1085 +4100/20000 train_loss: 2.8387 train_time: 8.9m tok/s: 6066365 +4200/20000 train_loss: 2.8609 train_time: 9.1m tok/s: 6040307 +4300/20000 train_loss: 
2.9323 train_time: 9.4m tok/s: 6015781 +4400/20000 train_loss: 2.7779 train_time: 9.6m tok/s: 5992488 +4469/20000 val_loss: 2.8122 val_bpb: 1.0887 +stopping_early: wallclock_cap train_time: 588009ms step: 4469/20000 +peak memory allocated: 39124 MiB reserved: 39208 MiB +ema:applying EMA weights +pre-quantization post-ema val_loss:2.80919078 val_bpb:1.08752529 eval_time:7510ms +Serialized model: 135611257 bytes +Code size: 50027 bytes +GPTQ:collecting Hessians from calibration data... +GPTQ:collected 67 Hessians in 13.1s +Quantized weights: + gptq (int6): blocks.attn.c_k.weight, blocks.attn.c_q.weight, blocks.attn.c_v.weight, blocks.attn.proj.weight, blocks.mlp.fc.weight, blocks.mlp.proj.weight + gptq (int7): tok_emb.weight + passthrough (float16): blocks.attn.q_gain, blocks.attn_scale, blocks.mlp_scale, blocks.resid_mix, skip_gates, skip_weights +Serialized model quantized+brotli: 15697552 bytes +Total submission size quantized+brotli: 15747579 bytes +quantized val_loss:2.84172890 val_bpb:1.10012182 eval_time:26778ms +quantized_sliding_window val_loss:2.79862574 val_bpb:1.08343524 eval_time:125387ms +ttt:start chunks=1238 ttt_lr=0.005 ttt_epochs=3 +quantized_ttt val_loss:2.79445055 val_bpb:1.08181889 eval_time:394489ms