Review: Rerun of #972 with actual full-vocab normalization (#978)
AnirudhRahul wants to merge 2 commits into openai:main
Conversation
Correct the eval-time n-gram posterior to normalize by the summed hashed-vocab mass and update the recorded metrics. The honest rerun lands at 1.5134 BPB, showing the earlier 0.3922 result came from the flawed normalization path. Made-with: Cursor
Thanks for the review. I'd already identified the denominator issue and reran with `pair_c.sum(dim=1) + beta`; it confirms the degradation to ~1.23-1.51 BPB. The entire n-gram gain is a collision artifact. Closing this PR.
Community Review of "Review: Rerun of #972 with actual full-vocab normalization"

BPB: 1.1686 | Compliance: LOOKS CLEAN (pure-neural submission, no TTT/SLOT/n-gram-cache)

What I found in the code (head SHA): static code review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.04s, dim=512, layers=10, vocab=1024, code=65550 B, SMOKE_TEST_PASS.

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the classification pass; this looks like a clean pure-neural iteration on the standard baseline.

Auto-classification caveat: this review was drafted by the deterministic AST-based classifier. If there is a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it is factored into a helper file or a non-standard function name, please flag it and I'll re-run the audit manually.

Reviewed by @MatoTeziTanka (The Agora).
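For readers unfamiliar with the eval protocol the review refers to, here is a minimal sketch of a stride-64 sliding-window bits-per-byte evaluation. The function name `sliding_window_bpb`, the 1024-token window, and the assumption that the model returns per-position logits are all illustrative, not the repo's actual API:

```python
import math
import torch
import torch.nn.functional as F

# Hedged sketch of a stride-64 sliding-window eval: each window re-reads a long
# context but only the positions it sees for the first time contribute to the score.
@torch.no_grad()
def sliding_window_bpb(model, tokens, total_bytes, window=1024, stride=64, device="cuda"):
    model.eval()
    nll_bits = 0.0
    scored_upto = 1  # first target index not yet scored
    start = 0
    while True:
        end = min(start + window, len(tokens))
        chunk = torch.tensor(tokens[start:end], dtype=torch.long, device=device).unsqueeze(0)
        logits = model(chunk[:, :-1])                                   # (1, T-1, vocab)
        nll = F.cross_entropy(logits.squeeze(0), chunk[0, 1:], reduction="none")
        first_target = start + 1
        fresh = nll[scored_upto - first_target:]                        # skip already-scored prefix
        nll_bits += fresh.sum().item() / math.log(2.0)                  # nats -> bits
        scored_upto = end
        if end == len(tokens):
            break
        start += stride
    return nll_bits / total_bytes
```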
Summary
- Fix `eval_val_sliding()` so the n-gram score is normalized by the summed hashed-vocab mass `pair_counts.sum(...) + beta` instead of `ctx_count + beta`
- Update the #972 record README and submission metadata to reflect the honest rerun rather than the earlier `0.3922` claim
- The corrected rerun lands at `1.51343368` BPB and loses to the neural sliding baseline

Issue in #972
#972 claimed to compute a full-vocab normalized n-gram distribution, but the evaluation code did not actually use that normalization. What the code effectively did was normalize the gold-token count by `ctx_count + beta`.
The problem is that `ctx_count + beta` is not the normalization constant for the 1024-way hashed candidate distribution. Once `pair_c` has been materialized, the correct denominator for a full-vocab posterior is the total mass over candidate tokens, `pair_c.sum(dim=1) + beta`. So the previous PR was still scoring a gold-token scalar against a non-normalized denominator, even though it had already gathered counts for all 1024 tokens.
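A minimal sketch of the two denominators, assuming `pair_c` holds the `(B, 1024)` hashed-vocab counts and `ctx_count` the `(B,)` context totals as described above; the count gathering and the exact smoothing of the numerator are assumptions, not the repo's code:

```python
import torch

def ngram_gold_logprob(pair_c, ctx_count, gold_idx, beta, full_vocab_norm=True):
    # Count of the gold token under the hashed context (one scalar per position).
    gold_count = pair_c.gather(1, gold_idx.unsqueeze(1)).squeeze(1)
    if full_vocab_norm:
        # This PR: normalize by the total mass over the 1024 hashed candidates.
        denom = pair_c.sum(dim=1) + beta
    else:
        # #972's effective path: ctx_count + beta, which is not the normalizer
        # of the materialized 1024-way candidate distribution.
        denom = ctx_count + beta
    return torch.log((gold_count + beta) / denom)
```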
Test plan
Run `torchrun --standalone --nproc_per_node=8 train_gpt.py` on 8xH100 with:

- `DATA_PATH=/root/parameter-golf/data/datasets/fineweb10B_sp1024`
- `TOKENIZER_PATH=/root/parameter-golf/data/tokenizers/fineweb_1024_bpe.model`
- `CTW_BETA=2.0`
- `CTW_BLEND=0.5`

Results:

- `final_int8_zlib_roundtrip_exact`: val_loss 1.97320202, val_bpb 1.16864138
- `final_sliding_window_exact`: sliding_bpb 1.14740867, val_bpb 1.51343368
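As a sanity check on the relation between the logged val_loss (nats/token) and val_bpb, here is a small helper using `bpb = loss / ln 2 × tokens_per_byte`. The tokens-per-byte ratio of the val shard is not stated anywhere in this PR; the value below is only back-solved from the logged pair and is an assumption, not a repo constant:

```python
import math

def nats_to_bpb(mean_nll_nats: float, tokens_per_byte: float) -> float:
    # bits/byte = (nats/token) / ln(2) * (tokens/byte)
    return mean_nll_nats / math.log(2.0) * tokens_per_byte

# Back-solving from the logged metrics implies roughly
# 1.16864138 / (1.97320202 / ln 2) ~= 0.41 tokens per byte (~2.4 bytes/token)
# for the SP1024 val shard, which is what the conversion assumes here.
implied_ratio = 1.16864138 / (1.97320202 / math.log(2.0))
print(round(nats_to_bpb(1.97320202, implied_ratio), 8))  # ~1.16864138
```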