Record: PR #1797 base + PPM-D byte mixture — val_bpb 0.90236 (3-seed mean) #1854
Open
ndokutovich wants to merge 1 commit into openai:main from
Conversation
…-seed mean, std 0.000816)

sunnypatneedi pushed a commit to sunnypatneedi/parameter-golf that referenced this pull request on Apr 27, 2026

… required; PR openai#1848 BPB risk; Day 18 plateau; Session 23

- Merged SOTA still 1.0810 (Day 18, no change since Apr 9)
- PPM-D byte mixture confirmed by dexhunter at 1.0322 (PR openai#1857, self-closed)
- SmearGate BOS bug documented: prev-token leaks at document boundaries; fix required
- PR openai#1848 (newjordan, 0.87980) flagged BPB risk: sibling PR openai#1846 closed same day
- PR openai#1858 (0.9946) only covers 8M/40.5M tokens — not leaderboard-comparable
- PR openai#1855 (codemath3000, 1.06108) and openai#1851 (aquariouseworkman, 1.06128) both clean
- PPM-D wave: PRs openai#1850, openai#1854, openai#1835 await organizer ruling
- Added Session 23 lessons to CLAUDE.md
- 3 days to deadline (Apr 30) — final GPU run window

https://claude.ai/code/session_01RmJtLYUmKNzDgDVTnWoKzU
robbiebusinessacc added a commit to robbiebusinessacc/parameter-golf that referenced this pull request on Apr 28, 2026

…tion — val_bpb 1.06777 (3-seed mean)

3-seed validated reproduction of PR openai#1854's neural stack with PHASED_TTT_PREFIX_DOCS=1500 to fit the 600s eval budget. Beats merged SOTA PR openai#1493 (bigbag, 1.0810) by 0.01323 BPB at ~13σ statistical significance. Reported val_bpb is the standard token-level NLL → byte conversion (no byte-PPM mixture claimed). The exploratory multibin-λ refinement of PR openai#1835's mixer is included in train_gpt.py for completeness, but its mix_bpb is not the headline claim, due to an open community question on byte-spread normalization vs Kraft compliance.
Record: PR #1797 base + PPM-D byte mixture — val_bpb 0.90236
val_bpb: 0.90236 (3-seed mean, std 0.00082) | 15.95 MB | 8×H100 SXM, ≤600s train / ≤600s eval | PPM-D mixture
This submission ports the PPM-D byte-level mixture from PR #1835 (anmarhindi) onto the PR #1797 base stack from dexhunter (val_bpb 1.06157), then reports the mixture BPB (mix_bpb) as the headline submission score, per the same protocol used in PR #1835.
Result (3 seeds, 8×H100 80GB SXM)
All seeds stopped at the wall-clock cap (stopping_early: wallclock_cap).

Headline result
Stack
The submission is a clean two-component stack:
1. PR #1797 (dexhunter) base — "Record: PR #1787 base + Smear Gate + LQER Asym — val_bpb 1.06157" — used verbatim.
2. PR #1835 (anmarhindi) PPM-D byte mixture — "Record: SP8192 + PPM-D byte mixture — 1.00136 BPB (3-seed mean)" — direct port:
λ = λ_lo = 0.05 if PPM confidence ≥ 0.9, else λ = λ_hi = 0.9
p_mix = λ * p_NN + (1 - λ) * p_PPM

Reproduction
Data preparation (CPU-bound, runs once outside the 600s training cap)
The --max-docs 5000000 cap keeps disk usage at ~10 GB while still producing far more training tokens than 600s of 8×H100 training can consume.

Training + evaluation (3-seed, ≤600s training each)
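Conceptually, each seeded run trains until the 600 s wall-clock budget is spent and then evaluates, which is how every seed ends up with stopping_early: wallclock_cap. The sketch below illustrates only that budgeting pattern; train_step and evaluate are hypothetical stand-ins, not the actual entry points of train_gpt.py.

```python
import time

TRAIN_BUDGET_S = 600           # per-run training wall-clock cap
SEEDS = [42, 1337, 314]        # matches the per-seed logs shipped with the submission

def run_one_seed(seed, train_step, evaluate):
    """Train until the wall-clock budget is spent, then evaluate.

    train_step/evaluate are placeholders for the real entry points;
    only the time-budgeting bookkeeping is illustrated here.
    """
    start = time.monotonic()
    step = 0
    while time.monotonic() - start < TRAIN_BUDGET_S:
        train_step(seed, step)
        step += 1
    return {"seed": seed, "stopping_early": "wallclock_cap", "val_bpb": evaluate(seed)}
```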
The headline mix_bpb value is logged at the end of each run. Pre-quant, quantized, and TTT-only diagnostic BPBs are also logged for completeness.
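For reference, a bits-per-byte value is the negative log-likelihood of the scored byte stream, converted from nats to bits and divided by the byte count. A minimal sketch of that conversion (illustrative only, not the logging code in train_gpt.py):

```python
import math

def bits_per_byte(per_byte_logprobs):
    """Convert per-byte natural-log probabilities ln p(byte_t) into bits-per-byte."""
    total_bits = -sum(per_byte_logprobs) / math.log(2)   # nats -> bits
    return total_bits / len(per_byte_logprobs)
```

The same conversion applies whether the per-byte probabilities come from the neural path alone or from the PPM-D mixture that produces mix_bpb.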
Compliance — Issue #1017 four conditions
The PPM-D byte mixture is a transparent extension of the score-first protocol already used by the PR #1797 base stack. Every byte's mixture probability is computed using only:
- p_mix = λ * p_NN + (1 − λ) * p_PPM is a convex combination of two normalized distributions over the 256-symbol byte alphabet, so it is also normalized over the same alphabet.
- PPM counts for byte t are updated AFTER log_mix(t) is recorded. The neural NLL itself comes from the score-first phased-TTT path of PR #1797. No byte ever influences its own probability mass before being scored.
- The PPM model is a dict[bytes, dict[int, int]] consumed exactly once in increasing byte position. No rescoring, no oracle selection across passes.

Additional compliance:
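For illustration only, a minimal sketch of a gated, score-then-update mixture satisfying the ordering above. Every name here is an assumption for the sketch (mix_byte_stream, the order-4 context, max probability as the confidence measure, and add-one smoothing standing in for the real PPM-D escape mechanism); it is not the submission's implementation.

```python
import math
from collections import defaultdict

LAMBDA_LO, LAMBDA_HI = 0.05, 0.9   # weight on p_NN: low when the PPM model is confident
CONF_THRESHOLD = 0.9
ORDER = 4                          # illustrative PPM context length (not from the submission)

def mix_byte_stream(nn_probs, data):
    """Score a byte stream with the gated NN/PPM mixture in a single causal pass.

    nn_probs[t] is the neural model's 256-way distribution for byte t from the
    score-first path (it never sees byte t). PPM counts live in a
    dict[bytes, dict[int, int]] and are updated only AFTER log_mix(t) is recorded.
    """
    counts = defaultdict(lambda: defaultdict(int))   # context -> next-byte counts
    logs = []
    for t, byte in enumerate(data):
        ctx = bytes(data[max(0, t - ORDER):t])
        c = counts[ctx]
        total = sum(c.values()) + 256                # add-one smoothing over the alphabet
        p_ppm = [(c.get(s, 0) + 1) / total for s in range(256)]
        conf = max(p_ppm)
        lam = LAMBDA_LO if conf >= CONF_THRESHOLD else LAMBDA_HI
        p_mix = lam * nn_probs[t][byte] + (1 - lam) * p_ppm[byte]
        logs.append(math.log(p_mix))                 # record log_mix(t) first ...
        counts[ctx][byte] += 1                       # ... then update the PPM counts
    return logs                                      # per-byte natural-log probs, ready for a BPB conversion
```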
Architecture (inherits PR #1797 shape)
Lineage and credits
… piece.encode).

Author contribution
… willdepue oai/parameter-golf docs_selected.jsonl corpus.
… (eval_val_ttt_phased, score-first protocol preserved).
… (prepare_caseops_data.py, multiprocessing, ~16× wall-clock improvement vs the single-thread original) for tractable submission turnaround.

Included files
- README.md — this file.
- submission.json — metadata.
- train_gpt.py — training + eval script (PR #1797 verbatim + 6 small PPM-D hunks from PR #1835).
- lossless_caps.py — CaseOps transform module (from the lineage of PR #1729, "CaseOps Tokenizer + Tapered WD - val_bpb 1.0678 (3-seed mean)").
- prepare_caseops_data.py — parallel CaseOps re-tokenizer (see the multiprocessing sketch after this list).
- tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model — CaseOps SP8192 tokenizer.
- train_seed42.log, train_seed1337.log, train_seed314.log — per-seed train+eval logs.
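As referenced in the file list, a minimal sketch of the multiprocessing pattern used by a parallel re-tokenizer such as prepare_caseops_data.py. The encode_doc body is a hypothetical placeholder; the real script applies the CaseOps transform plus SP8192 SentencePiece encoding, and its structure may differ.

```python
from multiprocessing import Pool

def encode_doc(doc: str) -> list[int]:
    # Hypothetical stand-in for the real per-document work:
    # CaseOps lossless-caps transform followed by SentencePiece (SP8192) encoding.
    return [ord(ch) % 8192 for ch in doc]

def retokenize_corpus(docs: list[str], workers: int = 16) -> list[list[int]]:
    """Fan per-document re-tokenization out across worker processes."""
    with Pool(processes=workers) as pool:
        return pool.map(encode_doc, docs, chunksize=256)

if __name__ == "__main__":
    print(retokenize_corpus(["Hello CaseOps", "parameter golf"]))
```

Chunked Pool.map amortizes inter-process overhead across many short documents, which is the usual way a CPU-bound per-document loop gains a large wall-clock speedup over a single-threaded pass.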