12L QAT Int4-MLP + Int6-Attn (Non-record) #910
Meirzhan05 wants to merge 6 commits into openai:main from …
Conversation
Community Review — 12L QAT Int4-MLP + Int6-Attn

BPB: (not parsed — see PR title) | Compliance: LOOKS CLEAN — score-first-per-chunk TTT (legal #1416/#1423 pattern)

What I found in the code (head SHA …): the TTT path at line 1098 implements the score-first-per-chunk pattern — each chunk is scored before the adapter updates on it. Per Issue #402 and Issue #677, TTT is legal when each token is scored before the adapter updates on it, and that's what the code does here, chunk by chunk.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.04s, dim=512, layers=12, vocab=1024, code=91173 B, SMOKE_TEST_PASS.

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending standard checks (3-seed validation, 16MB artifact cap, 10-min wallclock on 8×H100 SXM). The compliance picture matches the legal reference frontier and no flags were raised by the classification pass.

Auto-classification caveat: this review was drafted by the deterministic AST-based classifier against a template derived from manually-reviewed cluster PRs (#1420, #1450, #1487, #1541, #1529, #1533, #1518). If I've misread a subtlety in your eval path — e.g., multi-epoch TTT that I mistook for single-pass, or a target-in-key lookup I missed in a helper function — please flag it and I'll re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora.
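The score-first-per-chunk ordering the review checks for can be sketched in a few lines. This is an illustrative outline only — the function and callback names (`score_first_ttt`, `score`, `adapt`) are hypothetical, not the PR's actual code; the point is the invariant that every chunk is scored before the adapter updates on it.

```python
# Hypothetical sketch of the "score-first-per-chunk" TTT (test-time training)
# pattern: each chunk is scored BEFORE the adapter updates on it, so no
# token's loss benefits from the model having already trained on that token.

def score_first_ttt(chunks, score, adapt):
    """Return mean bits-per-token; each chunk is scored before adaptation."""
    total_bits, total_tokens = 0.0, 0
    for chunk in chunks:
        total_bits += score(chunk)   # 1) score under the pre-update weights
        total_tokens += len(chunk)
        adapt(chunk)                 # 2) only then update the adapter on it
    return total_bits / max(total_tokens, 1)
```

Reversing steps 1 and 2 (updating before scoring) is exactly the illegal variant the classifier is screening against: the model would be evaluated on tokens it has already trained on.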
@MatoTeziTanka
All seeds pass artifact (<16MB) and wallclock (≤600s) requirements. Post-quant BPB does not beat the naive baseline (1.2244) due to the int4 MLP quantization gap (~0.069 BPB), so marking this as a non-record submission.
12L QAT Int4-MLP + Int6-Attn (Non-record Submission)
Summary
3-Seed Validation Results
Note: This is a non-record submission. The post-quantization BPB (1.2292) does not beat the naive baseline (1.2244) due to a large quantization gap (~0.069 BPB). The QAT approach with int4 MLP quantization introduces too much distortion at this model scale.
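The arithmetic behind the non-record call can be checked directly from the numbers quoted in the thread. This is a sanity sketch from those quoted figures, not output from the run:

```python
# Numbers as quoted in the PR thread (assumed exact for this check).
post_quant_bpb = 1.2292   # post-quantization validation BPB
baseline_bpb = 1.2244     # naive baseline the record must beat
quant_gap = 0.069         # approximate int4 MLP quantization gap

# Implied pre-quantization BPB: well under the baseline, so the
# quantization gap alone is what pushes the result over the line.
pre_quant_bpb = post_quant_bpb - quant_gap
shortfall = post_quant_bpb - baseline_bpb

print(round(pre_quant_bpb, 4))   # implied pre-quant BPB
print(round(shortfall, 4))       # margin by which the record is missed
```

In other words, the float model clears the baseline comfortably; the submission misses the record by only ~0.005 BPB, all of it attributable to the int4 gap.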
Changes
Implementation
QAT is applied directly in `MLP.forward` and `CausalSelfAttention.forward` on the banked weight tensors via `_fake_quantize_ste` (row-wise scale, STE gradient). Clip ranges: `mlp_clip=7` (int4), `attn_clip=31` (int6). Post-training quantization uses GPTQ-lite clip search with the same ranges.

All techniques from PR #549 are inherited: LeakyReLU(0.5)², Legal Score-First TTT, Parallel Muon + Parameter Banking, XSA on last 4 layers, Partial RoPE (16/64 dims), LN Scale 1/sqrt(layer+1), EMA (0.997), SmearGate, BigramHash(2048), Value Embedding.
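For readers unfamiliar with fake quantization, the forward pass of a row-wise scheme like the one described can be sketched as follows. This is a forward-only numpy illustration, not the PR's `_fake_quantize_ste` (whose exact rounding and scale choices may differ); the STE part only matters in the backward pass, where gradients flow through `round`/`clip` as identity.

```python
import numpy as np

def fake_quantize_rowwise(w, clip):
    """Row-wise symmetric fake quantization (forward pass only).

    clip=7 gives signed-int4-style levels in [-7, 7]; clip=31 gives int6.
    During QAT, a straight-through estimator (STE) would treat the
    round/clip below as identity when computing gradients.
    """
    # One scale per output row, from that row's max magnitude.
    scale = np.abs(w).max(axis=1, keepdims=True) / clip
    scale = np.where(scale == 0, 1.0, scale)           # guard all-zero rows
    q = np.clip(np.round(w / scale), -clip, clip)      # integer levels
    return q * scale                                   # dequantize back to float

w = np.array([[0.5, -1.0, 0.25],
              [2.0,  0.0, -2.0]])
wq4 = fake_quantize_rowwise(w, clip=7)    # int4-style (MLP weights)
wq6 = fake_quantize_rowwise(w, clip=31)   # int6-style (attention weights)
```

With only 15 levels per row at `clip=7`, the rounding error on the MLP weights is where the ~0.069 BPB quantization gap comes from; the 63-level int6 grid used for attention is far gentler.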
Test Plan
Hardware