
Record: SP8192 + Improved Parallel Residuals + Muon 0.97 + TTT 5ep + N-gram Tilt + Hessian SDClip — val_bpb 1.07730 #1557

Open
ndokutovich wants to merge 1 commit into openai:main from ndokutovich:submission-s8-ha

Conversation

@ndokutovich

Record: SP8192 + Improved Parallel Residuals + Score-First TTT + Causal N-gram Tilt + Hessian SDClip

val_bpb = 1.07730 (3-seed mean, std 0.00040) | ~15.97 MB | 8xH100 SXM

3-Seed Results

| Seed | Sliding BPB | TTT BPB | Artifact (bytes) |
|------|-------------|---------|------------------|
| 42   | 1.07880     | 1.07684 | 15,965,495       |
| 314  | 1.07959     | 1.07748 | 15,965,495       |
| 999  | 1.07963     | 1.07757 | 15,965,495       |
| Mean | 1.07934     | 1.07730 |                  |

Merged SOTA (PR #1493): 1.0810. Delta: -0.00370 bpb.

Techniques

  • Architecture: SP8192, 11L x 512d, 8H/4KV, MLP 4x, improved parallel residuals (L7+)
  • Training: Muon 0.97 (row-normalized), Matrix LR 0.03, EMA 0.997, 3-layer depth recurrence (L3-5)
  • Quantization: GPTQ int6 (attn+MLP) + int8 (embeddings) + Hessian-Aware SDClip (lambda=0.175)
  • Eval: Score-first TTT (SGD, 5 epochs, lr=0.005) + Causal n-gram tilt (beta=2.0, agree=0.1)
  • Compression: Brotli quality=11

Compute

Funded by OpenAI Advanced Competitor grant ($500 RunPod credit). 8xH100-SXM, ~3 runs for 3 seeds.

@ndokutovich (Author)

Note on AT-RISK flag — legality of the n-gram tilt

Posting this proactively since the community OLYMPUS tracker (@MatoTeziTanka's running audit in issue #140) has this submission flagged AT-RISK under the "N-gram cache (03-27)" category. OLYMPUS is a careful community compliance index, not an official maintainer ruling, but it's a reasonable heuristic that reviewers consult — so I'd rather put my reasoning on the record than let the flag stand unanswered. The right answer may still be that I'm wrong, but I'd rather the decision be informed.

What ngram_tilt.py does. A single-token exponential tilt on per-position NLL:

p_tilt(v) = p_model(v) · exp(β · 𝟙[v = hint]) / Z
Z = 1 + p_hint · (exp(β) − 1)

Z is computed explicitly; sum_v p_tilt(v) = 1 holds as an algebraic identity. The hint for position t is read from prefix-only hash tables built over val_tokens[:t], and the C++ kernel emits the hint before appending val_tokens[t] to those tables — score-before-update is enforced at the kernel level.
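To make the normalization claim concrete, here is a minimal Python sketch of the single-token tilt with its explicit Z. The function name `tilt_prob` and the dict-based distribution are illustrative assumptions, not the actual C++ kernel; only the formula itself comes from the PR.

```python
import math

def tilt_prob(p_model, hint, beta=2.0):
    # Hypothetical sketch (not the kernel): exponential tilt on a single
    # hinted token, with the normalizer Z computed explicitly so that
    # sum_v p_tilt(v) = 1 holds by construction.
    if hint is None:
        return dict(p_model)  # no hint at this position: distribution unchanged
    p_hint = p_model.get(hint, 0.0)
    # Z = 1 + p_hint * (exp(beta) - 1), exactly as in the PR description.
    Z = 1.0 + p_hint * (math.exp(beta) - 1.0)
    return {v: p * (math.exp(beta) if v == hint else 1.0) / Z
            for v, p in p_model.items()}
```

Because every non-hint probability is divided by the same Z, the output is a valid distribution regardless of which token the (possibly collision-prone) hash tables nominate as the hint.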

Relation to the March 27 sweep. The sweep closed submissions where unnormalized n-gram caches produced apparent gains around 1 BPB (Robby955 on #677, 2026-03-30: "unnormalized apparent gain was ~1 BPB at 1M buckets; after proper normalization the real signal is ~0.003 BPB"). My measured tilt contribution is in the ~0.003 BPB range, matching the "real signal" bound for normalized approaches rather than the bucket-counting artifact that triggered the sweep.

Lineage. The kernel is derived from PR #1420 (ContextMixer). Two normalized tilt PRs — #1145 (collision-free trie) and #1420 — remained open after the sweep; I read that as the line being drawn at normalization discipline rather than at any use of hash-backed lookups. My implementation uses #1420's hash mechanism with the same explicit normalization.

4 conditions (Issue #1017).

  • C1: the hint at position p depends only on val_tokens[:p].
  • C2: Z is explicit; sum_v p_tilt(v) = 1 by construction over the full vocabulary.
  • C3: the hint is emitted before the table update — verified in fused_expert_kernel.cpp.
  • C4: single left-to-right pass, no rescoring.
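The four conditions can be illustrated with a toy score-before-update loop. This is an assumed sketch, not the fused C++ kernel: the bigram table, the function names `sliding_nll_with_tilt` and `p_model_at`, and the last-seen-token hint rule are all invented for illustration; only the ordering (prefix-only lookup, explicit Z, update after scoring, single pass) mirrors C1-C4.

```python
import math

def sliding_nll_with_tilt(val_tokens, p_model_at, beta=2.0):
    # Toy sketch of the C1-C4 discipline. p_model_at(t) returns the model's
    # distribution at position t as a dict {token: prob}.
    table = {}  # previous token -> last-seen next token (toy hint source)
    total_nll = 0.0
    for t, tok in enumerate(val_tokens):
        ctx = val_tokens[t - 1] if t > 0 else None
        hint = table.get(ctx)                    # C1: built from val_tokens[:t] only
        p = p_model_at(t)
        if hint is not None:
            p_hint = p.get(hint, 0.0)
            Z = 1.0 + p_hint * (math.exp(beta) - 1.0)  # C2: explicit normalizer
            p = {v: q * (math.exp(beta) if v == hint else 1.0) / Z
                 for v, q in p.items()}
        total_nll -= math.log(p.get(tok, 1e-12))  # score first ...
        if ctx is not None:
            table[ctx] = tok                      # C3: ... update after scoring
    return total_nll / len(val_tokens)            # C4: one left-to-right pass
```

On a repetitive sequence the hints become predictive after a few positions, so the tilted NLL drops below the untilted one, while beta=0 provably leaves the distribution untouched (Z reduces to 1).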

Where I may be wrong. The honest disagreement point is whether "normalized tilt output built on a collision-prone hash" satisfies the spirit of C2. My reading: collisions only affect which hint is selected, not the validity of the final distribution, which remains exact. If maintainers rule instead that hash-backed hint sources are incompatible with C2 regardless of output normalization, I accept that — closing this PR is the correct outcome under that reading.

Not litigating — just making the reasoning visible.
