Ablation: WiderGate32, RoPE dims, activation slopes, hparam stack (8xH100) #1970
Open · bsisduck wants to merge 1 commit into openai:main
Ablation: WiderGate, RoPE dims, activation slopes, hparam stack (8xH100)
Systematic ablation of 10 configurations on the PR #1693 architecture with CaseOps SP8192. All runs on 8xH100 SXM, 600s wallclock, single seed unless noted.
Results
Base config: 11L/512d, CaseOps SP8192, AttnOutGate + SmearGate, Polar Express NS, MIN_LR=0.10, LQER OFF, brotli.
Experiments 5-8 use EMBED_BITS=6 (int6 embeddings) to fit under 16MB without LQER.
Key Findings
1. Wider attention gates help (GATE_WIDTH=32)
Increasing AttnOutGate input from 12 to 32 dimensions gives -0.002 pre-quant and -0.003 post-TTT improvement. The wider gate sees more of the residual stream for its per-head gating decision. Cost: 1,760 extra float16 params (negligible).
Recommendation: Adopt GATE_WIDTH=32 as default. Free improvement.
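The actual gate lives in train_gpt_v2.py; the sketch below only illustrates the mechanism being ablated, assuming the gate is a bias-free linear layer that reads the first GATE_WIDTH channels of the residual stream and emits one sigmoid gate per head.

```python
import torch
import torch.nn as nn

class AttnOutGateSketch(nn.Module):
    """Illustrative per-head output gate (not the actual train_gpt_v2.py code).

    Assumes the gate reads the first `gate_width` channels of the residual
    stream and produces one sigmoid scalar per attention head.
    """
    def __init__(self, gate_width: int, n_heads: int):
        super().__init__()
        # Widening 12 -> 32 adds (32 - 12) * n_heads params per layer; with the
        # base config's 11 layers and an assumed 8 heads that is
        # 20 * 8 * 11 = 1,760 extra params, matching the figure above.
        self.gate_width = gate_width
        self.proj = nn.Linear(gate_width, n_heads, bias=False)

    def forward(self, x: torch.Tensor, attn_out: torch.Tensor) -> torch.Tensor:
        # x:        (batch, seq, d_model)            residual stream entering the block
        # attn_out: (batch, seq, n_heads, head_dim)  per-head attention output
        g = torch.sigmoid(self.proj(x[..., : self.gate_width]))  # (batch, seq, n_heads)
        return attn_out * g.unsqueeze(-1)
```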
2. More RoPE dims hurt post-quantization
ROPE_DIMS=32 improves pre-quant by -0.0007 but widens the quant gap from 0.007 to 0.010. More rotated dimensions create weight distributions that GPTQ handles worse. Keep ROPE_DIMS=16.
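For reference, a minimal sketch of what ROPE_DIMS controls, assuming the usual partial-RoPE convention where only the first ROPE_DIMS channels of each head are rotated and the rest pass through unchanged:

```python
import torch

def apply_partial_rope(q: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor,
                       rope_dims: int = 16) -> torch.Tensor:
    """Rotate only the first `rope_dims` channels of each head (sketch).

    q:        (batch, n_heads, seq, head_dim)
    cos, sin: (seq, rope_dims // 2) precomputed rotation angles
    """
    q_rot, q_pass = q[..., :rope_dims], q[..., rope_dims:]
    x1, x2 = q_rot[..., 0::2], q_rot[..., 1::2]  # interleaved channel pairs
    rotated = torch.stack((x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos), dim=-1).flatten(-2)
    # ROPE_DIMS=32 doubles the rotated slice; per finding 2 that widens the
    # post-GPTQ quant gap, so the unrotated passthrough stays larger here.
    return torch.cat((rotated, q_pass), dim=-1)
```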
3. Activation slope changes are neutral or worse
PR #1948 reported slope=0.3 as optimal on a different base config. On the CaseOps+gates stack with EMBED_BITS=6, slope 0.5 remains optimal. Pure ReLU² hurts — the leaky negative slope provides useful gradient flow.
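The exact activation is defined in train_gpt_v2.py; one plausible reading of LEAKY_SLOPE, written here purely as an assumption, is a leaky variant of squared ReLU in which negative inputs are scaled by the slope instead of being zeroed:

```python
import torch

def leaky_relu_squared(x: torch.Tensor, slope: float = 0.5) -> torch.Tensor:
    """Assumed form of the LEAKY_SLOPE activation, not the repo's exact code.

    slope=0.0 recovers plain ReLU^2 (which finding 3 says hurts);
    slope=0.5 is the default that remained optimal in this ablation.
    """
    return torch.where(x > 0, x * x, slope * x)
```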
4. PR #1855 hparam stack does not transfer
The 9-hparam stack from PR #1855 was greedy-validated on a different config (SparseAttnGate, no SmearGate widening). On our CaseOps+AttnOutGate stack, tighter clip sigmas hurt quantization and WARMDOWN/BETA2 changes are neutral. Keep defaults.
5. EMBED_BITS=6 costs +0.014 BPB
Dropping embedding precision from int7 to int6 saves ~500KB but costs +0.014 BPB post-TTT. This is the price of fitting under 16MB without per-group lrzip compression.
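A back-of-envelope check of the ~500KB figure, assuming the embedding table is the 8192-entry CaseOps SP8192 vocab times the 512-dim base config, quantized uniformly:

```python
# Hypothetical sanity check: one bit saved per embedding weight.
vocab, d_model = 8192, 512
bytes_saved = vocab * d_model * (7 - 6) // 8  # int7 -> int6
print(bytes_saved / 1024)                     # -> 512.0 KiB, i.e. ~500KB as stated
```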
6. LZMA compression is worse than brotli
LZMA produced artifacts ~300KB larger than brotli-11 on this architecture. Use brotli.
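The size gap is easy to reproduce on any packed artifact with the standard Python bindings; a sketch, where ckpt.bin is a placeholder for the submission blob:

```python
import lzma
import brotli  # pip install brotli

data = open("ckpt.bin", "rb").read()  # placeholder path for the packed artifact

b = brotli.compress(data, quality=11)                    # brotli-11, as used above
x = lzma.compress(data, preset=9 | lzma.PRESET_EXTREME)  # lzma -9e
print(f"brotli-11: {len(b):,} bytes   lzma -9e: {len(x):,} bytes")
# On this architecture's weights, LZMA came out ~300KB larger (finding 6).
```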
Negative Results Summary
Configuration
All experiments use train_gpt.py from the record submission (PR #1969) with env var overrides. No code changes are needed except for GATE_WIDTH and LEAKY_SLOPE, which require train_gpt_v2.py. A sketch of the overrides follows.
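The variable names below are taken from this PR's text and the defaults reflect its recommendations; the read-from-env pattern itself is an assumption about how train_gpt.py consumes the overrides, not copied code.

```python
import os

# Assumed env-var plumbing; defaults follow the findings above.
GATE_WIDTH  = int(os.environ.get("GATE_WIDTH", "32"))      # finding 1: adopt 32
ROPE_DIMS   = int(os.environ.get("ROPE_DIMS", "16"))       # finding 2: keep 16
LEAKY_SLOPE = float(os.environ.get("LEAKY_SLOPE", "0.5"))  # finding 3: keep 0.5
EMBED_BITS  = int(os.environ.get("EMBED_BITS", "6"))       # finding 5: int6 to fit 16MB
MIN_LR      = float(os.environ.get("MIN_LR", "0.10"))      # base config
```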