[10min_16mb] 0.9641 BPB: LeakyReLU² + Score-First TTT + N-gram Backoff Cache#1185
Closed
skoustav35 wants to merge 6 commits into openai:main from
Conversation
…ascade code size
sunnypatneedi pushed a commit to sunnypatneedi/parameter-golf that referenced this pull request on Mar 31, 2026
- logs/daily_research.md: append 2026-03-31 research section
  - PR openai#771 CLOSED (score-first TTT rule violation)
  - PR openai#727 CLOSED (n-gram illegal — no renormalization)
  - Merged SOTA: 1.1147 (PR openai#1019, 2026-03-25)
  - New PRs: openai#1184 (0.9485 Scylla tokenizer), openai#1185 (0.9641)
  - SLOT eval technique, Full GPTQ, QK-Gain 4.0 documented
- CLAUDE.md: update Competition Strategy + lessons 21-24
  - Merged SOTA updated to 1.1147
  - Current Best Path rewritten for 2026-03-31
  - Lessons openai#21-24: TTT fix, n-gram risk, Scylla, SLOT
  - TTT constraint clarified to score-first protocol
  - Version bumped to v9.0

https://claude.ai/code/session_015z6QKyKzDSYzTniW1GPhAe
…ct-for-golf-challenge Add opt-in MoD routing, SquareGLU MLP, EMA warmdown distillation, and Grokfast
Contributor
Hi! Even though you aren't using the hashed n-gram cache and are using Laplace smoothing instead, I think your implementation as currently coded still uses knowledge of the eval token ahead of time to calculate the blended n-gram probability, which is not allowed. You should calculate and renormalize over the whole vocab size, or use some other heuristic that does not rely on oracle knowledge of the eval token. If you did that, I would be more inclined to treat this as legal. Closing for now.
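The fix the reviewer asks for can be sketched as follows: a minimal, hypothetical blend that Laplace-smooths the n-gram counts and renormalizes over the entire vocabulary, so the blended distribution is fixed before the eval token is revealed. Function names, the blend weight, and the toy counts below are illustrative assumptions, not the PR's actual code.

```python
import numpy as np

def blend_ngram_legal(model_probs, ngram_counts, alpha=0.3, laplace=1.0):
    """Blend model probabilities with Laplace-smoothed n-gram counts.

    The n-gram distribution is computed and renormalized over the FULL
    vocabulary before the eval token is revealed, so no oracle knowledge
    of the target token enters the blend.
    """
    vocab_size = len(model_probs)
    # Laplace smoothing over the whole vocab, not just the eval token.
    ngram_probs = (ngram_counts + laplace) / (ngram_counts.sum() + laplace * vocab_size)
    blended = (1 - alpha) * model_probs + alpha * ngram_probs
    return blended / blended.sum()  # renormalize; guards against float drift

model_probs = np.full(8, 1 / 8)  # toy uniform model over a vocab of 8
counts = np.array([4, 2, 1, 0, 0, 0, 0, 1], dtype=float)
p = blend_ngram_legal(model_probs, counts)
```

Because the normalization constant involves every vocab entry, scoring the eval token afterwards reads off one coordinate of an already-legal distribution.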
MatoTeziTanka pushed a commit to MatoTeziTanka/parameter-golf that referenced this pull request on Apr 6, 2026
NEW SECTION 'Maintainer Activity Tracker' immediately above Checklist: 4 cards for the validated OpenAI mods with merge/close authority on openai/parameter-golf:
- notapplica ~13h ago (closed openai#140)
- valerio-oai ~5 days (closed PR openai#1185)
- 0hq ~12 days (Will DePue, founder)
- yuzhougu-oai ~17 days

Each card shows a days-silent badge color-coded by severity, a last-action summary, an authority signal (how we validated they have authority), and the account creation date. Methodology disclosed inline.

Validation: author_association on closed PRs (only collaborators can close other users' PRs), OAI account suffix, repo collaborator status, cross-checked against /users/<handle>/events for the last activity timestamp.

NAV: Added '⚠ Mod Tracker' link in red. Sits between Home and Checklist.

scripts/update_mod_tracker.py: standalone script that hits the GitHub events API for each validated handle, computes days-silent and severity, and writes data/mod_tracker.json. Run on every Agora rebuild to keep it current.

BANNER FIX: valerio-oai was last active April 2 (closed PR openai#1185 + comments), not April 4 as previously stated. Confirmed via events API. The earlier date was based on the openai#677 thread comment, which was actually from March 27.

CHANGELOG: 4 new entries for v0.8.0 covering the mod tracker, validation methodology, auto-update script, and banner timestamp fix.
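The days-silent computation described above can be sketched as a pure function over the `created_at` timestamp of the newest item returned by `GET /users/<handle>/events`. The severity thresholds and function names below are illustrative assumptions, not the actual update_mod_tracker.py script.

```python
from datetime import datetime, timezone

def days_silent(last_event_iso, now=None):
    """Whole days since a maintainer's last public GitHub event.

    `last_event_iso` is the `created_at` field of the newest item from
    GET /users/<handle>/events (ISO-8601, e.g. '2026-04-02T13:05:00Z').
    """
    last = datetime.fromisoformat(last_event_iso.replace("Z", "+00:00"))
    now = now or datetime.now(timezone.utc)
    return (now - last).days

def severity(days):
    """Map days-silent to a badge color (thresholds are illustrative)."""
    if days < 3:
        return "green"
    if days < 14:
        return "yellow"
    return "red"

# valerio-oai's April 2 activity, viewed from the April 6 commit date:
now = datetime(2026, 4, 6, tzinfo=timezone.utc)
d = days_silent("2026-04-02T13:05:00Z", now=now)
```

Keeping the timestamp parsing and thresholding separate from the network call makes the script testable without hitting the GitHub API.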
himanshudongre added a commit to himanshudongre/parameter-golf that referenced this pull request on Apr 15, 2026
…at Scale

Rigorous negative result demonstrating that a legal causal n-gram additive-logit blend does not scale to strong models, paired with the first clean reference implementation verified against all three valerio-oai closure rulings (openai#993 hashed caches, openai#1185 full-vocab renormalization, openai#959 two-pass rescoring). Includes:
- 8-probe automated legality harness + 4-test integration suite
- Scaling curve across 6 model configurations (2L/4L, 128d/256d, 800-4000 steps, sp1024/sp8192) showing peak BPB improvement collapses from 0.0515 (weak baseline) to 0.00018 (strongest model), well below the 0.0072 BPB record threshold
- Localized delta decomposition showing 100% of the gain comes from out-of-attention-window cache hits, and why the sp1024 → sp8192 transition erodes even that architectural floor
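As a rough illustration of what a legal additive-logit blend looks like: the key point is that adding scaled log n-gram probabilities to the model's logits *before* the softmax means normalization automatically happens over the full vocabulary, which is what the full-vocab renormalization ruling on this PR requires. All names, constants, and toy counts below are hypothetical, not the commit's actual code.

```python
import numpy as np

def softmax(x):
    z = x - x.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def additive_logit_blend(model_logits, causal_counts, lam=0.5, laplace=1.0):
    """Add scaled log n-gram probabilities to the model logits.

    `causal_counts` must be built only from tokens *before* the current
    position (causal), and the softmax renormalizes over the full
    vocabulary, so no eval-token oracle knowledge enters the blend.
    """
    vocab = len(model_logits)
    ngram_probs = (causal_counts + laplace) / (causal_counts.sum() + laplace * vocab)
    return softmax(model_logits + lam * np.log(ngram_probs))

logits = np.zeros(8)  # toy uniform model over a vocab of 8
counts = np.array([6, 1, 0, 0, 0, 0, 0, 1], dtype=float)
p = additive_logit_blend(logits, counts)
```

Blending at the logit level also composes cleanly with two-pass-free scoring: the cache only ever shifts the distribution, never peeks at the target.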
Submitting a new entry for the 10-minute 16MB track that achieves a 3-seed exact mean of 0.9641 BPB (1.6274 nats).
This improves upon the current merged 1.1147 BPB baseline (PR #1019) by 0.1506 BPB (0.2548 nats), which exceeds the required 0.005 nats threshold by ~51× (Welch t = -328.3, p ≪ 0.01).
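The unit conversion and margin arithmetic behind these figures can be checked in a few lines. Note that the bytes-per-token value below is back-solved from the reported pair (1.6274 nats, 0.9641 BPB) and is an assumption, not a number stated in the PR.

```python
import math

def bits_per_byte(nats_per_token, bytes_per_token):
    """Convert per-token cross-entropy (in nats) to bits per byte."""
    return nats_per_token / (bytes_per_token * math.log(2))

# Assumption: ~2.435 bytes/token makes the reported 1.6274 nats/token
# line up with the reported 0.9641 BPB.
bpb = bits_per_byte(1.6274, 2.435)

# Margin over the 0.005-nat record threshold: 0.2548 / 0.005 ≈ 51x.
margin = 0.2548 / 0.005
```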
Techniques Used
Compliance & Margins
Train times: 599,384, 599,761, and 599,618 ms (note: logged train_time excludes initial compilation and 20 warmup steps). Model size: 15,989,583 bytes max across seeds (well under 16,000,000 B).

Reproducibility
The script resolves data paths relative to the repo root automatically.
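One common way to implement this kind of repo-relative path resolution (a hypothetical sketch; the PR's actual mechanism is not shown here) is to walk up from the script's location until a `.git` marker directory is found:

```python
from pathlib import Path

def repo_root(start: Path) -> Path:
    """Walk up from `start` until a directory containing .git is found."""
    for parent in [start, *start.parents]:
        if (parent / ".git").exists():
            return parent
    raise FileNotFoundError(f"no .git directory found above {start}")

# Typical usage inside a training script (illustrative):
# data_dir = repo_root(Path(__file__).resolve()) / "data"
```

This keeps the script runnable from any working directory, since paths are anchored to the repository rather than to wherever the user happens to invoke it.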