diff --git a/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/README.md b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/README.md new file mode 100644 index 0000000000..6815af52e1 --- /dev/null +++ b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/README.md @@ -0,0 +1,163 @@ +# Corrected: PR #2014 stack + LeakyReLU 0.3 + token-only in-timer n-gram TTT (val_bpb 1.05702) + +**Corrected 3-seed mean: val_bpb 1.05701907** | **max 15,989,637 bytes** | 8xH100 SXM | 600s train + in-timer eval + +This commit corrects the originally submitted #2140 state. The initial #2140 logs accidentally restored the within-word, word-start, and agreement n-gram channels. The corrected run uses the intended PR #2018 posture: token-only n-gram tilt, with the target-token-gated channels disabled. + +## Results + +| Seed | Train steps | Pre-quant val_bpb | Quantized val_bpb | Post-TTT val_bpb | Train time | Eval time | Artifact bytes | Notes | +|------|------------:|------------------:|------------------:|-----------------:|-----------:|----------:|---------------:|-------| +| 42 | 4,901 | 1.05911087 | 1.06721120 | **1.05590816** | 596.1s | 506.3s | 15,989,637 | token-only in-timer n-gram hints, prefix=2500, chunk=64 | +| 0 | 4,861 | 1.06126113 | 1.07025102 | **1.05838308** | 596.2s | 553.5s | 15,985,432 | token-only in-timer n-gram hints, prefix=2500, chunk=64 | +| 314 | 4,855 | 1.06013753 | 1.06808383 | **1.05676598** | 596.1s | 473.2s | 15,983,433 | token-only in-timer n-gram hints, prefix=2500, chunk=64 | +| **Mean** | **4872.3** | **1.06016984** | **1.06851535** | **1.05701907** | **596.1s** | **511.0s** | **15,989,637 max** | 3 corrected seeds | + +Compared with the last merged leaderboard record (#1855, 1.06107587 BPB), this corrected 3-seed mean improves val_bpb by **0.00405680**. + +## Summary + +This corrected submission starts from the PR #2014 strict-compliance stack and adds two changes: + +1. **LeakyReLU-square slope 0.3.** PR #2014 inherited the older LeakyReLU(0.5)^2 MLP slope. This changes the fused/eager LeakyReLU-square path to slope 0.3, following the later PR #1967 lineage. +2. **Token-only in-timer online n-gram tilt during TTT eval.** This ports the in-timer online n-gram tilt approach we introduced in PR #2018 into the PR #2014 progressive-context / short-doc TTT path. Hints are built causally from validation tokens inside the measured TTT eval timer (`NGRAM_HINT_PRECOMPUTE_OUTSIDE=0`) and applied as a scoring-time posterior adjustment to per-token NLL. The within-word, word-start, and agreement channels are disabled (`WITHIN_BOOST=0.0`, `WORD_BOOST=0.0`, `AGREE_ADD_BOOST=0.0`). + +The n-gram path does not add model parameters and has no artifact-size cost beyond source files. The run keeps the PR #2014 global prefix phase (`PHASED_TTT_PREFIX_DOCS=2500`) and uses larger TTT chunks to fit hint construction and scoring inside the 600s eval budget. 
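+
+For concreteness, a minimal eager-mode sketch of change 1, assuming a plain PyTorch call path (the fused path in `train_gpt.py` is not shown, and `leaky_relu_square` is an illustrative name, not necessarily the script's own helper):
+
+```python
+import torch
+import torch.nn.functional as F
+
+def leaky_relu_square(x: torch.Tensor, slope: float = 0.3) -> torch.Tensor:
+    # LeakyReLU(x)^2 with negative slope 0.3 (the LEAKY_RELU_SQ_SLOPE setting).
+    # Squaring keeps the MLP activation smooth at zero; note the square folds
+    # negative inputs to (slope * x)**2 >= 0.
+    return F.leaky_relu(x, negative_slope=slope).square()
+```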
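+
+And a hedged Python sketch of change 2's scoring-time tilt, assuming a longest-match gating rule over a list of token ids; the actual helpers are `online_ngram_tilt.py` / `online_ngram_state.c` (which will be far more memory- and time-efficient than this), the function names here are illustrative, and the constants mirror `TOKEN_ORDER=16`, `TOKEN_THRESHOLD=0.800`, `TOKEN_BOOST=2.625` from the key settings:
+
+```python
+from collections import defaultdict
+
+import torch
+import torch.nn.functional as F
+
+def token_ngram_hints(tokens, order=16, threshold=0.8):
+    # One causal left-to-right pass: position t consults only counts built
+    # from positions < t, so no future token influences its hint.
+    counts = defaultdict(lambda: defaultdict(int))  # context -> next-token counts
+    hints = {}
+    for t in range(1, len(tokens)):
+        for n in range(min(order - 1, t), 0, -1):  # longest matching context first
+            nxt = counts.get(tuple(tokens[t - n:t]))  # .get avoids inserting
+            if nxt:
+                hint, c = max(nxt.items(), key=lambda kv: kv[1])
+                if c / sum(nxt.values()) >= threshold:
+                    hints[t] = hint
+                    break
+        for n in range(1, min(order - 1, t) + 1):  # update only after t is hinted
+            counts[tuple(tokens[t - n:t])][tokens[t]] += 1
+    return hints
+
+def tilted_nll(logits, targets, hints, boost=2.625):
+    # Posterior adjustment at scoring time: boost the hinted token's logit
+    # before log-softmax; no model parameters or artifact bytes are touched.
+    logits = logits.clone()
+    for t, hint_id in hints.items():
+        logits[t, hint_id] += boost
+    return F.cross_entropy(logits, targets, reduction="none")
+```
+
+Per the token-only diagnostic in the compliance notes below, the gate fires on 628,156 of 47,853,343 targets (about 1.3%), so in a sketch like this the tilt leaves the per-token NLL of the other ~98.7% of positions untouched.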
+ +## What changed vs PR #2014 + +| Component | PR #2014 | This submission | +|-----------|----------|-----------------| +| Base stack | Progressive 3k context growth + ShortDoc TTT + CaseOps + LQER + AWQ-lite | Same | +| LeakyReLU-square slope | 0.5 | 0.3 | +| Eval-time n-gram tilt | off | token-only, causal, in timer | +| N-gram hint timing | n/a | `NGRAM_HINT_PRECOMPUTE_OUTSIDE=0` | +| Global phased TTT prefix | 2500 docs | 2500 docs | +| TTT chunking | 48 / short 24 | 64 / short 32 | +| Artifact | PR #2014 per-group compressed artifact | same compression path, 15,989,637 bytes max | + +## Compliance notes + +- **Training cap:** all three corrected seeds stopped under 600s (`596.069s`, `596.182s`, `596.099s`). +- **Eval cap:** all three corrected final TTT evals are under 600s (`506.254s`, `553.458s`, `473.217s`). All use `NGRAM_HINT_PRECOMPUTE_OUTSIDE=0`, so n-gram hint generation is inside the measured eval timer. +- **Artifact cap:** max observed `Total submission size quantized+pergroup` is 15,989,637 bytes, under 16 MB. +- **Score-first TTT:** the LoRA TTT path scores each chunk before any per-doc update. The global prefix SGD phase runs after the prefix docs have already been scored. +- **N-gram causality:** hints are generated by a single left-to-right pass over validation tokens and aligned to target positions. The tilt uses prefix-derived token hint IDs and boosts; it does not inspect future tokens for the scored position. +- **Token-only diagnostic:** all corrected evals report `ngram_tilt:hints total=47853343 gated=628156 token_gate=628156 within_gate=0 word_gate=0 agree2plus=0`. + +## Key settings + +```bash +CASEOPS_ENABLED=1 +VOCAB_SIZE=8192 +TRAIN_SEQ_LEN=3072 +ROPE_TRAIN_SEQ_LEN=3072 +TRAIN_SEQ_SCHEDULE=1024@0.100,2048@0.700,3072@1.000 +TRAIN_SEQ_SCHEDULE_MODE=wallclock +SEQ_CHANGE_WARMUP_STEPS=32 +COMPILE_SHAPE_WARMUP=1 +EVAL_SEQ_LEN=3072 +EVAL_STRIDE=1536 + +LEAKY_RELU_SQ_SLOPE=0.3 + +TTT_ENABLED=1 +TTT_EVAL_SEQ_LEN=3072 +TTT_BATCH_SIZE=24 +TTT_CHUNK_SIZE=64 +TTT_SHORT_SCORE_FIRST_ENABLED=1 +TTT_SHORT_DOC_LEN=2000 +TTT_SHORT_CHUNK_SIZE=32 +TTT_SHORT_SCORE_FIRST_STEPS=256:16,2000:32 +TTT_LORA_RANK=80 +TTT_LORA_LR=0.0001 +TTT_LOCAL_LR_MULT=0.75 +TTT_MASK=no_qv +TTT_Q_LORA=0 +TTT_V_LORA=0 +TTT_WEIGHT_DECAY=0.5 +TTT_BETA2=0.99 +PHASED_TTT_PREFIX_DOCS=2500 +PHASED_TTT_NUM_PHASES=1 + +NGRAM_TILT_ENABLED=1 +NGRAM_HINT_PRECOMPUTE_OUTSIDE=0 +TOKEN_ORDER=16 +TOKEN_THRESHOLD=0.800 +TOKEN_BOOST=2.625 +WITHIN_TAU=0.450 +WITHIN_BOOST=0.0 +WORD_ORDER=4 +WORD_NORMALIZE=strip_punct_lower +WORD_TAU=0.650 +WORD_BOOST=0.0 +AGREE_ADD_BOOST=0.0 + +WARMDOWN_FRAC=0.85 +BETA2=0.99 +QK_GAIN_INIT=5.25 +SPARSE_ATTN_GATE_ENABLED=1 +SPARSE_ATTN_GATE_SCALE=0.5 +GATED_ATTN_QUANT_GATE=1 +SMEAR_GATE_ENABLED=1 +GATE_WINDOW=12 +FUSED_CE_ENABLED=1 +MATRIX_LR=0.026 +MIN_LR=0.1 +GRAD_CLIP_NORM=0.3 +EMBED_BITS=7 +EMBED_CLIP_SIGMAS=14.0 +MATRIX_CLIP_SIGMAS=12.85 +ATTN_CLIP_SIGMAS=13.0 +MLP_CLIP_SIGMAS=11.5 +LQER_ENABLED=1 +LQER_RANK=4 +LQER_TOP_K=3 +LQER_FACTOR_BITS=4 +LQER_ASYM_ENABLED=1 +LQER_ASYM_GROUP=64 +AWQ_LITE_ENABLED=1 +AWQ_LITE_BITS=8 +AWQ_LITE_GROUP_TOP_K=1 +AWQ_LITE_GROUP_SIZE=64 +ASYM_LOGIT_RESCALE=1 +COMPRESSOR=pergroup +GPTQ_RESERVE_SECONDS=4.0 +GPTQ_CALIBRATION_BATCHES=16 +VAL_LOSS_EVERY=0 +``` + +## Files + +- `train_gpt.py` — full script for the candidate. +- `online_ngram_tilt.py`, `online_ngram_state.c` — online causal n-gram hint builder and scoring-time tilt helper, from the PR #2018 in-timer n-gram tilt work. 
+- `train_eval_seed42_corrected_token_only.log` — corrected seed-42 token-only in-timer TTT eval log, using the corrected seed-42 training artifact. +- `train_eval_seed0_corrected_token_only.log` — corrected seed-0 training, quantization, and token-only in-timer TTT eval log. +- `train_eval_seed314_corrected_token_only.log` — corrected seed-314 training, quantization, and token-only in-timer TTT eval log. +- `train_seed42.log`, `eval_seed42_ngram_p0_c64.log`, `eval_seed42_ngram_p2500_c64.log`, `train_eval_seed314.log`, `train_eval_seed0.log` — superseded initial #2140 logs retained for transparency; these used the accidental within-word / word-start / agreement n-gram channels and are not the corrected token-only result. +- `prepare_caseops_data.py`, `lossless_caps.py`, `tokenizers/...model` — CaseOps data/tokenizer helpers from the merged #1855 lineage. +- `submission.json` — structured 3-seed metadata. + +## Reproducing + +After preparing the CaseOps data and tokenizer, run with the environment above: + +```bash +SEED=42 DATA_PATH=./data/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved TOKENIZER_PATH=./data/tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model torchrun --standalone --nproc_per_node=8 train_gpt.py +``` + +For the eval-only sweep used here, load the saved quantized artifact and run with: + +```bash +TTT_EVAL_ONLY=1 NGRAM_TILT_ENABLED=1 NGRAM_HINT_PRECOMPUTE_OUTSIDE=0 WITHIN_BOOST=0.0 WORD_BOOST=0.0 AGREE_ADD_BOOST=0.0 PHASED_TTT_PREFIX_DOCS=2500 TTT_CHUNK_SIZE=64 TTT_SHORT_CHUNK_SIZE=32 TTT_SHORT_SCORE_FIRST_STEPS=256:16,2000:32 torchrun --standalone --nproc_per_node=8 train_gpt.py +``` + +## Credits + +This is a stack on top of the recent strict-compliance CaseOps line. Most directly: + +- PR #2014 by @simonbissonnette — progressive 3k context growth and short-doc TTT base. +- PR #2018 — in-timer online n-gram tilt during TTT eval. +- PR #1967 / PR #1948 lineage — LeakyReLU-square slope 0.3 evidence. +- PR #1855 by @codemath3000 — merged CaseOps / SparseGate / LQER / per-group compression lineage and data-prep precedent. +- PR #1787, #1736, #1729, #1667, #1626, #1530, #1344, #493, #478, #315, and #289 for the underlying architecture, optimizer, tokenizer, quantization, compression, and legal score-first TTT components. diff --git a/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/eval_seed42_ngram_p0_c64.log b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/eval_seed42_ngram_p0_c64.log new file mode 100644 index 0000000000..829bae1549 --- /dev/null +++ b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/eval_seed42_ngram_p0_c64.log @@ -0,0 +1,398 @@ +W0501 22:27:38.964000 624189 torch/distributed/run.py:803] +W0501 22:27:38.964000 624189 torch/distributed/run.py:803] ***************************************** +W0501 22:27:38.964000 624189 torch/distributed/run.py:803] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
+W0501 22:27:38.964000 624189 torch/distributed/run.py:803] ***************************************** +Hyperparameters: + adam_eps: 1e-08 + adam_wd: 0.02 + agree_add_boost: 0.5 + artifact_dir: /workspace/parameter-golf-pr2014-gated-clean/records/pr2014_ttt_sweeps/lrelu03_ngram_inside_p0_c64_s42 + attn_clip_sigmas: 13.0 + attn_out_gate_enabled: False + attn_out_gate_src: proj + awq_lite_bits: 8 + awq_lite_enabled: True + awq_lite_group_size: 64 + awq_lite_group_top_k: 1 + beta1: 0.9 + beta2: 0.99 + caseops_enabled: True + compile_shape_warmup: True + compile_shape_warmup_iters: 1 + compile_shape_warmup_loop_modes: auto + compressor: pergroup + data_dir: ./data + datasets_dir: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved + distributed: True + ema_decay: 0.9965 + embed_bits: 7 + embed_clip_sigmas: 14.0 + embed_lr: 0.6 + embed_wd: 0.085 + enable_looping_at: 0.35 + eval_include_tail: True + eval_seq_len: 3072 + eval_stride: 1536 + fused_ce_enabled: True + gate_window: 12 + gated_attn_enabled: False + gated_attn_init_std: 0.01 + gated_attn_quant_gate: True + global_ttt_batch_seqs: 32 + global_ttt_chunk_tokens: 32768 + global_ttt_epochs: 1 + global_ttt_grad_clip: 1.0 + global_ttt_lr: 0.001 + global_ttt_momentum: 0.9 + global_ttt_respect_doc_boundaries: True + global_ttt_warmup_chunks: 0 + global_ttt_warmup_start_lr: 0.0 + gptq_calibration_batches: 16 + gptq_reserve_seconds: 4.0 + grad_accum_steps: 1 + grad_clip_norm: 0.3 + is_main_process: True + iterations: 20000 + leaky_relu_sq_slope: 0.3 + ln_scale: True + local_rank: 0 + logfile: /workspace/parameter-golf-pr2014-gated-clean/records/pr2014_ttt_sweeps/lrelu03_ngram_inside_p0_c64_s42/lrelu03_ngram_inside_p0_c64_s42.txt + logit_softcap: 30.0 + loop_end: 5 + loop_start: 3 + lqer_asym_enabled: True + lqer_asym_group: 64 + lqer_enabled: True + lqer_factor_bits: 4 + lqer_gain_select: False + lqer_rank: 4 + lqer_scope: all + lqer_top_k: 3 + matrix_bits: 6 + matrix_clip_sigmas: 12.85 + matrix_lr: 0.026 + max_wallclock_seconds: 600.0 + midrun_cap_log_updates: False + midrun_cap_schedule: + min_lr: 0.1 + mlp_clip_sigmas: 11.5 + mlp_mult: 4.0 + model_dim: 512 + model_path: /workspace/parameter-golf-pr2014-gated-clean/records/pr2014_ttt_sweeps/lrelu03_ngram_inside_p0_c64_s42/final_model.pt + muon_backend_steps: 5 + muon_momentum: 0.97 + muon_momentum_warmup_start: 0.92 + muon_momentum_warmup_steps: 1500 + muon_row_normalize: True + muon_wd: 0.095 + ngram_hint_precompute_outside: False + ngram_tilt_enabled: True + num_heads: 8 + num_kv_heads: 4 + num_layers: 11 + num_loops: 2 + parallel_final_lane: mean + parallel_start_layer: 8 + phased_ttt_num_phases: 1 + phased_ttt_prefix_docs: 0 + qk_gain_init: 5.25 + quantized_model_path: /workspace/parameter-golf-pr2014-gated-clean/records/pr2014_ttt_sweeps/lrelu03_ngram_inside_p0_c64_s42/final_model.int6.ptz + rank: 0 + rope_base: 10000.0 + rope_dims: 16 + rope_train_seq_len: 3072 + rope_yarn: False + run_id: lrelu03_ngram_inside_p0_c64_s42 + scalar_lr: 0.02 + seed: 42 + seq_change_warmup_steps: 32 + skip_gates_enabled: True + skylight_norm_beta2: 0.95 + skylight_norm_ema: False + skylight_norm_eps: 1e-07 + skylight_uw_floor: False + skylight_uw_ratio: 0.35 + smear_gate_enabled: True + sparse_attn_gate_enabled: True + sparse_attn_gate_init_std: 0.0 + sparse_attn_gate_scale: 0.5 + tie_embeddings: True + tied_embed_init_std: 0.005 + tied_embed_lr: 0.03 + token_boost: 2.625 + token_order: 16 + token_threshold: 0.8 + tokenizer_path: 
/tmp/parameter-golf-data-authorhf/tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model + train_batch_tokens: 786432 + train_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_train_*.bin + train_log_every: 500 + train_seq_len: 3072 + train_seq_schedule: 1024@0.100,2048@0.700,3072@1.000 + train_seq_schedule_mode: wallclock + ttt_batch_size: 24 + ttt_beta1: 0.0 + ttt_beta2: 0.99 + ttt_chunk_size: 64 + ttt_enabled: True + ttt_eval_batches: + ttt_eval_seq_len: 3072 + ttt_grad_steps: 1 + ttt_k_lora: True + ttt_local_lr_mult: 0.75 + ttt_lora_lr: 0.0001 + ttt_lora_rank: 80 + ttt_mask: no_qv + ttt_mlp_lora: True + ttt_o_lora: True + ttt_optimizer: adam + ttt_q_lora: False + ttt_short_beta2: 0.99 + ttt_short_chunk_size: 32 + ttt_short_doc_len: 2000 + ttt_short_lora_enabled: False + ttt_short_lora_lr: 0.0001 + ttt_short_lora_rank: 80 + ttt_short_score_first_enabled: True + ttt_short_score_first_steps: 256:16,2000:32 + ttt_short_weight_decay: 0.5 + ttt_train_max_doc_len: 0 + ttt_train_min_doc_len: 0 + ttt_v_lora: False + ttt_warm_start_mean_doc_len: 2000 + ttt_warm_start_mean_enabled: False + ttt_warm_start_mean_momentum: 0.95 + ttt_weight_decay: 0.5 + val_batch_tokens: 524288 + val_bytes_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_val_bytes_*.bin + val_doc_fraction: 1.0 + val_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_val_*.bin + val_loss_every: 0 + vocab_size: 8192 + warmdown_frac: 0.85 + warmdown_iters: 0 + warmup_steps: 20 + within_boost: 0.75 + within_tau: 0.45 + word_boost: 0.75 + word_normalize: strip_punct_lower + word_order: 4 + word_tau: 0.65 + world_size: 8 + xsa_last_n: 11 +train_shards: 80 +val_tokens: 47853343 +TTT_EVAL_ONLY=1 — skipping training + GPTQ, loading saved artifact for TTT eval +ttt_lora_alpha: 144.0 +ttt_warm_start_a: True +ttt_weight_decay: 0.5 +Deserialize: per-group lrzip decompression... +Deserialize: decompression done in 16.3s +Deserialize: per-group lrzip decompression... 
+Deserialize: decompression done in 16.1s +ttt_lora:warming up compile (random tokens, no val data) +ttt_lora:compile warmup done (187.3s) + +beginning TTT eval timer +ngram_tilt:hints total=47853343 gated=13023831 token_gate=628156 within_gate=9867233 word_gate=2891718 agree2plus=303187 +ngram_tilt:precompute_outside_timer_done elapsed=119.56s total_targets=47853343 +ttt_phased: total_docs:50000 prefix_docs:0 suffix_docs:50000 num_phases:1 boundaries:[0] target_tokens:47853343 +ttp: b2084/2084 bl:2.1279 bb:1.0178 rl:2.1279 rb:1.0178 dl:36899-97114 gd:1 sr:0 sf:0 tr:8/8 wt:0 +ttp: b2013/2084 bl:2.3889 bb:1.0510 rl:2.1635 rb:1.0226 dl:3436-3454 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1793/2084 bl:2.4378 bb:1.0649 rl:2.1795 rb:1.0253 dl:1562-1565 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1784/2084 bl:2.2640 bb:1.0572 rl:2.1841 rb:1.0270 dl:1534-1537 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1777/2084 bl:2.2770 bb:1.0523 rl:2.1888 rb:1.0283 dl:1514-1517 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1769/2084 bl:2.2786 bb:1.0055 rl:2.1931 rb:1.0272 dl:1493-1496 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1760/2084 bl:2.3649 bb:1.0256 rl:2.2008 rb:1.0271 dl:1471-1474 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1752/2084 bl:2.3104 bb:1.0591 rl:2.2054 rb:1.0285 dl:1450-1452 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1744/2084 bl:2.2377 bb:1.0511 rl:2.2067 rb:1.0294 dl:1429-1432 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1736/2084 bl:2.2130 bb:1.0304 rl:2.2070 rb:1.0294 dl:1410-1411 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1727/2084 bl:2.1983 bb:1.0248 rl:2.2067 rb:1.0292 dl:1388-1390 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1718/2084 bl:2.2735 bb:1.0203 rl:2.2089 rb:1.0289 dl:1366-1368 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1709/2084 bl:2.3116 bb:1.0241 rl:2.2123 rb:1.0288 dl:1346-1349 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1701/2084 bl:2.3365 bb:1.0538 rl:2.2162 rb:1.0296 dl:1329-1330 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1693/2084 bl:2.3400 bb:1.0668 rl:2.2199 rb:1.0307 dl:1311-1314 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1684/2084 bl:2.3155 bb:1.0209 rl:2.2226 rb:1.0304 dl:1291-1293 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1676/2084 bl:2.3708 bb:1.0366 rl:2.2267 rb:1.0306 dl:1277-1278 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1667/2084 bl:2.4532 bb:1.0623 rl:2.2327 rb:1.0315 dl:1258-1260 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1658/2084 bl:2.2232 bb:1.0022 rl:2.2324 rb:1.0307 dl:1240-1242 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1650/2084 bl:2.3722 bb:1.0629 rl:2.2358 rb:1.0315 dl:1223-1225 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1642/2084 bl:2.5114 bb:1.0885 rl:2.2423 rb:1.0330 dl:1207-1209 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1634/2084 bl:2.2398 bb:0.9819 rl:2.2423 rb:1.0317 dl:1192-1194 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1626/2084 bl:2.2322 bb:1.0023 rl:2.2420 rb:1.0311 dl:1179-1181 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1618/2084 bl:2.3742 bb:1.1187 rl:2.2449 rb:1.0329 dl:1166-1167 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1603/2084 bl:2.2139 bb:0.9702 rl:2.2442 rb:1.0316 dl:1140-1142 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1595/2084 bl:2.3144 bb:1.0425 rl:2.2456 rb:1.0318 dl:1128-1130 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1587/2084 bl:2.1881 bb:1.0367 rl:2.2445 rb:1.0319 dl:1114-1115 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1577/2084 bl:2.1280 bb:0.9527 rl:2.2424 rb:1.0304 dl:1099-1100 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1567/2084 bl:2.3152 bb:1.0475 rl:2.2437 rb:1.0307 dl:1082-1084 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1558/2084 bl:2.1999 bb:0.9901 rl:2.2429 rb:1.0300 dl:1068-1069 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1549/2084 bl:2.3074 bb:1.0343 rl:2.2440 rb:1.0300 dl:1054-1055 gd:1 sr:0 sf:1 tr:24/24 wt:0 
+ttp: b1541/2084 bl:2.3556 bb:1.0012 rl:2.2458 rb:1.0295 dl:1043-1044 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1533/2084 bl:2.3240 bb:1.0352 rl:2.2471 rb:1.0296 dl:1031-1032 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1525/2084 bl:2.2841 bb:1.0133 rl:2.2476 rb:1.0293 dl:1019-1021 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1517/2084 bl:2.3400 bb:1.0988 rl:2.2490 rb:1.0304 dl:1009-1010 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1509/2084 bl:2.2974 bb:1.0095 rl:2.2497 rb:1.0301 dl:998-1000 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1501/2084 bl:2.3274 bb:1.0588 rl:2.2509 rb:1.0305 dl:988-989 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1493/2084 bl:2.1575 bb:0.9758 rl:2.2496 rb:1.0297 dl:977-978 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1485/2084 bl:2.2304 bb:0.9737 rl:2.2493 rb:1.0289 dl:967-967 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1474/2084 bl:2.3601 bb:1.0490 rl:2.2508 rb:1.0292 dl:953-955 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1463/2084 bl:2.2343 bb:1.0042 rl:2.2506 rb:1.0288 dl:940-942 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1455/2084 bl:2.2834 bb:1.0037 rl:2.2510 rb:1.0285 dl:931-932 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1448/2084 bl:2.3499 bb:0.9997 rl:2.2522 rb:1.0281 dl:924-924 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1439/2084 bl:2.2704 bb:1.0375 rl:2.2524 rb:1.0282 dl:913-914 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1431/2084 bl:2.2823 bb:1.0493 rl:2.2528 rb:1.0285 dl:903-904 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1423/2084 bl:2.3977 bb:1.1054 rl:2.2545 rb:1.0294 dl:894-895 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1413/2084 bl:2.2290 bb:1.0448 rl:2.2542 rb:1.0295 dl:883-884 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1403/2084 bl:2.2669 bb:1.0625 rl:2.2543 rb:1.0299 dl:871-873 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1394/2084 bl:2.3130 bb:1.0469 rl:2.2549 rb:1.0301 dl:861-862 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1384/2084 bl:2.3795 bb:1.0281 rl:2.2563 rb:1.0301 dl:851-852 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1376/2084 bl:2.5022 bb:1.0714 rl:2.2588 rb:1.0305 dl:842-843 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1368/2084 bl:2.4061 bb:1.0677 rl:2.2603 rb:1.0309 dl:834-835 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1360/2084 bl:2.4099 bb:1.0969 rl:2.2618 rb:1.0316 dl:825-826 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1352/2084 bl:2.3625 bb:1.0799 rl:2.2628 rb:1.0320 dl:816-817 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1344/2084 bl:2.2409 bb:1.0103 rl:2.2626 rb:1.0318 dl:808-809 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1336/2084 bl:2.5065 bb:1.0817 rl:2.2648 rb:1.0323 dl:801-802 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1326/2084 bl:2.4564 bb:1.0491 rl:2.2666 rb:1.0325 dl:791-791 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1317/2084 bl:2.3418 bb:1.0308 rl:2.2673 rb:1.0325 dl:782-783 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1309/2084 bl:2.3720 bb:1.0398 rl:2.2682 rb:1.0325 dl:774-775 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1299/2084 bl:2.2628 bb:1.0661 rl:2.2681 rb:1.0328 dl:766-767 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1291/2084 bl:2.3492 bb:1.0363 rl:2.2688 rb:1.0328 dl:758-759 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1281/2084 bl:2.3067 bb:1.0392 rl:2.2692 rb:1.0329 dl:749-750 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1273/2084 bl:2.2597 bb:1.0279 rl:2.2691 rb:1.0328 dl:742-743 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1264/2084 bl:2.3466 bb:1.0893 rl:2.2697 rb:1.0333 dl:734-735 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1256/2084 bl:2.4909 bb:1.0996 rl:2.2714 rb:1.0338 dl:727-728 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1246/2084 bl:2.2923 bb:1.0377 rl:2.2716 rb:1.0339 dl:719-720 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1238/2084 bl:2.3523 bb:1.0928 rl:2.2722 rb:1.0343 dl:712-713 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1229/2084 bl:2.3124 bb:1.0308 rl:2.2725 
rb:1.0343 dl:705-706 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1221/2084 bl:2.1183 bb:1.0353 rl:2.2714 rb:1.0343 dl:699-699 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1210/2084 bl:2.3854 bb:1.0867 rl:2.2722 rb:1.0347 dl:690-691 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1202/2084 bl:2.4484 bb:1.1592 rl:2.2734 rb:1.0355 dl:683-684 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1194/2084 bl:2.3541 bb:1.0322 rl:2.2740 rb:1.0355 dl:677-678 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1185/2084 bl:2.3001 bb:1.0296 rl:2.2742 rb:1.0354 dl:670-671 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1177/2084 bl:2.2638 bb:1.0325 rl:2.2741 rb:1.0354 dl:664-665 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1168/2084 bl:2.1758 bb:1.0293 rl:2.2735 rb:1.0354 dl:656-657 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1160/2084 bl:2.2230 bb:0.9859 rl:2.2731 rb:1.0351 dl:650-651 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1150/2084 bl:2.4199 bb:1.0434 rl:2.2741 rb:1.0351 dl:643-644 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1140/2084 bl:2.2597 bb:1.0639 rl:2.2740 rb:1.0353 dl:637-638 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1132/2084 bl:2.4125 bb:1.1031 rl:2.2748 rb:1.0357 dl:631-631 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1123/2084 bl:2.2987 bb:0.9822 rl:2.2750 rb:1.0354 dl:624-625 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1114/2084 bl:2.3429 bb:1.0580 rl:2.2754 rb:1.0355 dl:617-618 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1107/2084 bl:2.3788 bb:1.0301 rl:2.2760 rb:1.0355 dl:613-613 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1094/2084 bl:2.3865 bb:1.0613 rl:2.2766 rb:1.0356 dl:603-603 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1086/2084 bl:2.3381 bb:1.0903 rl:2.2770 rb:1.0359 dl:597-597 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1077/2084 bl:2.3240 bb:1.0144 rl:2.2773 rb:1.0358 dl:591-591 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1066/2084 bl:2.3105 bb:1.0688 rl:2.2774 rb:1.0360 dl:583-584 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1060/2084 bl:2.2270 bb:0.9976 rl:2.2772 rb:1.0358 dl:579-579 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1048/2084 bl:2.3450 bb:1.0955 rl:2.2775 rb:1.0361 dl:571-571 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1038/2084 bl:2.3144 bb:1.0534 rl:2.2777 rb:1.0362 dl:564-565 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1030/2084 bl:2.3843 bb:1.0690 rl:2.2783 rb:1.0363 dl:558-559 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1020/2084 bl:2.2609 bb:1.0227 rl:2.2782 rb:1.0363 dl:552-553 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1012/2084 bl:2.4399 bb:1.1492 rl:2.2790 rb:1.0368 dl:547-548 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1002/2084 bl:2.1206 bb:0.9452 rl:2.2782 rb:1.0363 dl:541-542 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b994/2084 bl:2.4336 bb:1.1129 rl:2.2790 rb:1.0367 dl:536-536 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b982/2084 bl:2.3193 bb:1.0906 rl:2.2792 rb:1.0370 dl:528-529 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b974/2084 bl:2.3808 bb:1.0883 rl:2.2796 rb:1.0372 dl:523-524 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b966/2084 bl:2.4291 bb:1.0779 rl:2.2803 rb:1.0374 dl:518-519 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b958/2084 bl:2.2305 bb:0.9997 rl:2.2801 rb:1.0372 dl:513-514 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b947/2084 bl:2.3673 bb:1.0688 rl:2.2805 rb:1.0374 dl:506-507 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b939/2084 bl:2.2841 bb:1.0159 rl:2.2805 rb:1.0373 dl:501-501 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b928/2084 bl:2.3936 bb:1.0428 rl:2.2810 rb:1.0373 dl:494-495 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b918/2084 bl:2.3595 bb:1.0728 rl:2.2813 rb:1.0374 dl:489-490 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b912/2084 bl:2.2454 bb:1.0561 rl:2.2812 rb:1.0375 dl:486-486 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b902/2084 bl:2.3157 bb:1.0542 rl:2.2813 rb:1.0376 dl:480-481 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b890/2084 
bl:2.4285 bb:1.0993 rl:2.2819 rb:1.0378 dl:473-474 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b882/2084 bl:2.4294 bb:1.1322 rl:2.2825 rb:1.0382 dl:468-469 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b873/2084 bl:2.2907 bb:0.9702 rl:2.2825 rb:1.0379 dl:463-464 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b866/2084 bl:2.3499 bb:1.0662 rl:2.2828 rb:1.0380 dl:460-460 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b856/2084 bl:2.3808 bb:1.0868 rl:2.2832 rb:1.0382 dl:454-455 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b851/2084 bl:2.4555 bb:1.1215 rl:2.2838 rb:1.0385 dl:451-451 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b839/2084 bl:2.3202 bb:1.0374 rl:2.2840 rb:1.0385 dl:444-444 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b831/2084 bl:2.2867 bb:1.0443 rl:2.2840 rb:1.0386 dl:439-440 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b826/2084 bl:2.2135 bb:1.0558 rl:2.2837 rb:1.0386 dl:437-437 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b817/2084 bl:2.3876 bb:1.1188 rl:2.2841 rb:1.0389 dl:432-432 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b806/2084 bl:2.3702 bb:1.0719 rl:2.2844 rb:1.0390 dl:425-426 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b798/2084 bl:2.3444 bb:1.0426 rl:2.2846 rb:1.0390 dl:421-421 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b786/2084 bl:2.2666 bb:1.0370 rl:2.2846 rb:1.0390 dl:414-415 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b780/2084 bl:2.4261 bb:1.0729 rl:2.2850 rb:1.0391 dl:411-411 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b770/2084 bl:2.3211 bb:1.0923 rl:2.2852 rb:1.0393 dl:405-406 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b764/2084 bl:2.4738 bb:1.1498 rl:2.2858 rb:1.0397 dl:402-402 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b754/2084 bl:2.3099 bb:1.1543 rl:2.2859 rb:1.0400 dl:397-397 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b743/2084 bl:2.2369 bb:1.0293 rl:2.2857 rb:1.0400 dl:391-392 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b734/2084 bl:2.3020 bb:1.0034 rl:2.2857 rb:1.0398 dl:386-387 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b723/2084 bl:2.2388 bb:1.0594 rl:2.2856 rb:1.0399 dl:381-382 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b715/2084 bl:2.2448 bb:0.9959 rl:2.2855 rb:1.0398 dl:377-378 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b705/2084 bl:2.2071 bb:1.0640 rl:2.2852 rb:1.0398 dl:372-373 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b696/2084 bl:2.3453 bb:1.0882 rl:2.2854 rb:1.0400 dl:368-369 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b688/2084 bl:2.4039 bb:1.0481 rl:2.2858 rb:1.0400 dl:364-365 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b679/2084 bl:2.3516 bb:1.0718 rl:2.2860 rb:1.0401 dl:360-360 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b672/2084 bl:2.3812 bb:1.0592 rl:2.2862 rb:1.0401 dl:357-357 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b662/2084 bl:2.5823 bb:1.1494 rl:2.2870 rb:1.0405 dl:352-353 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b657/2084 bl:2.2958 bb:1.0502 rl:2.2871 rb:1.0405 dl:350-350 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b645/2084 bl:2.4055 bb:1.1220 rl:2.2874 rb:1.0407 dl:344-345 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b637/2084 bl:2.2905 bb:1.0757 rl:2.2874 rb:1.0408 dl:340-341 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b626/2084 bl:2.2341 bb:1.0427 rl:2.2873 rb:1.0408 dl:335-336 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b619/2084 bl:2.2022 bb:1.0634 rl:2.2870 rb:1.0408 dl:332-333 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b611/2084 bl:2.2210 bb:1.0619 rl:2.2869 rb:1.0409 dl:329-329 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b596/2084 bl:2.5174 bb:1.1739 rl:2.2874 rb:1.0412 dl:322-323 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b590/2084 bl:2.3588 bb:1.1044 rl:2.2876 rb:1.0414 dl:320-320 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b581/2084 bl:2.3763 bb:1.0819 rl:2.2878 rb:1.0415 dl:316-316 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b572/2084 bl:2.3706 bb:1.1173 rl:2.2880 rb:1.0416 dl:312-312 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b559/2084 
bl:2.2871 bb:1.0772 rl:2.2880 rb:1.0417 dl:306-307 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b549/2084 bl:2.3369 bb:1.0504 rl:2.2881 rb:1.0417 dl:302-303 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b541/2084 bl:2.3430 bb:1.0672 rl:2.2883 rb:1.0418 dl:299-300 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b535/2084 bl:2.4526 bb:1.1266 rl:2.2886 rb:1.0420 dl:297-297 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b523/2084 bl:2.3790 bb:1.0533 rl:2.2888 rb:1.0420 dl:292-292 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b510/2084 bl:2.4319 bb:1.1372 rl:2.2892 rb:1.0422 dl:286-287 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b503/2084 bl:2.5129 bb:1.2010 rl:2.2896 rb:1.0425 dl:284-284 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b496/2084 bl:2.3249 bb:1.1363 rl:2.2897 rb:1.0427 dl:281-281 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b481/2084 bl:2.2658 bb:1.1440 rl:2.2897 rb:1.0429 dl:275-276 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b470/2084 bl:2.3386 bb:1.1595 rl:2.2898 rb:1.0431 dl:271-272 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b462/2084 bl:2.4006 bb:1.1200 rl:2.2900 rb:1.0433 dl:268-269 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b455/2084 bl:2.3769 bb:1.1236 rl:2.2902 rb:1.0434 dl:266-266 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b444/2084 bl:2.5170 bb:1.1440 rl:2.2906 rb:1.0436 dl:262-262 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b434/2084 bl:2.2995 bb:1.0971 rl:2.2906 rb:1.0437 dl:258-258 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b425/2084 bl:2.3741 bb:1.0948 rl:2.2908 rb:1.0438 dl:255-255 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b414/2084 bl:2.4216 bb:1.1866 rl:2.2910 rb:1.0441 dl:251-251 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b406/2084 bl:2.2211 bb:1.0713 rl:2.2909 rb:1.0441 dl:248-248 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b398/2084 bl:2.2191 bb:1.0818 rl:2.2908 rb:1.0442 dl:245-245 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b389/2084 bl:2.4513 bb:1.1534 rl:2.2910 rb:1.0444 dl:242-242 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b378/2084 bl:2.4199 bb:1.1020 rl:2.2913 rb:1.0445 dl:238-238 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b366/2084 bl:2.4066 bb:1.1066 rl:2.2915 rb:1.0446 dl:233-234 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b362/2084 bl:2.4030 bb:1.1127 rl:2.2917 rb:1.0447 dl:232-232 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b353/2084 bl:2.4519 bb:1.0987 rl:2.2919 rb:1.0448 dl:229-229 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b344/2084 bl:2.3748 bb:1.1814 rl:2.2921 rb:1.0450 dl:226-226 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b336/2084 bl:2.4930 bb:1.1692 rl:2.2924 rb:1.0452 dl:223-223 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b321/2084 bl:2.3849 bb:1.0537 rl:2.2925 rb:1.0452 dl:218-218 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b312/2084 bl:2.4524 bb:1.1647 rl:2.2928 rb:1.0454 dl:215-215 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b303/2084 bl:2.4594 bb:1.0748 rl:2.2931 rb:1.0454 dl:212-212 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b293/2084 bl:2.3013 bb:1.1107 rl:2.2931 rb:1.0455 dl:208-208 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b284/2084 bl:2.4203 bb:1.1592 rl:2.2933 rb:1.0457 dl:205-205 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b273/2084 bl:2.4208 bb:1.1757 rl:2.2934 rb:1.0459 dl:202-202 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b264/2084 bl:2.5170 bb:1.2058 rl:2.2938 rb:1.0461 dl:198-199 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b257/2084 bl:2.3602 bb:1.0667 rl:2.2939 rb:1.0461 dl:196-196 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b245/2084 bl:2.3711 bb:1.1442 rl:2.2940 rb:1.0462 dl:192-192 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b237/2084 bl:2.4063 bb:1.1042 rl:2.2941 rb:1.0463 dl:189-189 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b227/2084 bl:2.4713 bb:1.1391 rl:2.2943 rb:1.0464 dl:186-186 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b217/2084 bl:2.4819 bb:1.1250 rl:2.2946 rb:1.0466 dl:183-183 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b207/2084 
bl:2.4731 bb:1.1862 rl:2.2948 rb:1.0467 dl:179-179 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b196/2084 bl:2.3273 bb:1.1137 rl:2.2949 rb:1.0468 dl:175-176 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b188/2084 bl:2.3979 bb:1.1543 rl:2.2950 rb:1.0469 dl:173-173 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b174/2084 bl:2.4675 bb:1.1870 rl:2.2952 rb:1.0471 dl:168-169 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b168/2084 bl:2.4250 bb:1.1922 rl:2.2954 rb:1.0472 dl:166-166 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b159/2084 bl:2.4557 bb:1.1711 rl:2.2955 rb:1.0474 dl:163-163 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b146/2084 bl:2.5734 bb:1.2182 rl:2.2959 rb:1.0476 dl:158-159 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b136/2084 bl:2.5203 bb:1.2098 rl:2.2961 rb:1.0477 dl:155-155 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b125/2084 bl:2.4188 bb:1.1240 rl:2.2962 rb:1.0478 dl:150-151 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b112/2084 bl:2.5877 bb:1.2475 rl:2.2965 rb:1.0480 dl:146-146 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b97/2084 bl:2.6264 bb:1.2280 rl:2.2969 rb:1.0482 dl:140-141 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b88/2084 bl:2.5925 bb:1.2467 rl:2.2971 rb:1.0484 dl:137-137 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b76/2084 bl:2.7347 bb:1.2681 rl:2.2976 rb:1.0486 dl:132-133 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b66/2084 bl:2.6823 bb:1.2156 rl:2.2979 rb:1.0487 dl:128-129 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b57/2084 bl:2.5388 bb:1.2373 rl:2.2981 rb:1.0489 dl:124-125 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b48/2084 bl:2.7185 bb:1.2199 rl:2.2985 rb:1.0490 dl:120-121 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b38/2084 bl:2.7277 bb:1.2234 rl:2.2988 rb:1.0492 dl:115-116 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b25/2084 bl:2.6253 bb:1.1506 rl:2.2991 rb:1.0492 dl:107-108 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b17/2084 bl:2.7678 bb:1.2101 rl:2.2994 rb:1.0494 dl:101-102 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b6/2084 bl:2.6758 bb:1.1705 rl:2.2996 rb:1.0494 dl:89-90 gd:1 sr:0 sf:1 tr:24/24 wt:0 +quantized_ttt_phased val_loss:2.30986239 val_bpb:1.05551424 eval_time:492481ms +total_eval_time:492.5s diff --git a/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/eval_seed42_ngram_p2500_c64.log b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/eval_seed42_ngram_p2500_c64.log new file mode 100644 index 0000000000..145a7a6805 --- /dev/null +++ b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/eval_seed42_ngram_p2500_c64.log @@ -0,0 +1,779 @@ +W0501 22:53:09.281000 675667 torch/distributed/run.py:803] +W0501 22:53:09.281000 675667 torch/distributed/run.py:803] ***************************************** +W0501 22:53:09.281000 675667 torch/distributed/run.py:803] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
+W0501 22:53:09.281000 675667 torch/distributed/run.py:803] ***************************************** +Hyperparameters: + adam_eps: 1e-08 + adam_wd: 0.02 + agree_add_boost: 0.5 + artifact_dir: /workspace/parameter-golf-pr2014-gated-clean/records/pr2014_ttt_sweeps/lrelu03_ngram_inside_p2500_c64_s42 + attn_clip_sigmas: 13.0 + attn_out_gate_enabled: False + attn_out_gate_src: proj + awq_lite_bits: 8 + awq_lite_enabled: True + awq_lite_group_size: 64 + awq_lite_group_top_k: 1 + beta1: 0.9 + beta2: 0.99 + caseops_enabled: True + compile_shape_warmup: True + compile_shape_warmup_iters: 1 + compile_shape_warmup_loop_modes: auto + compressor: pergroup + data_dir: ./data + datasets_dir: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved + distributed: True + ema_decay: 0.9965 + embed_bits: 7 + embed_clip_sigmas: 14.0 + embed_lr: 0.6 + embed_wd: 0.085 + enable_looping_at: 0.35 + eval_include_tail: True + eval_seq_len: 3072 + eval_stride: 1536 + fused_ce_enabled: True + gate_window: 12 + gated_attn_enabled: False + gated_attn_init_std: 0.01 + gated_attn_quant_gate: True + global_ttt_batch_seqs: 32 + global_ttt_chunk_tokens: 32768 + global_ttt_epochs: 1 + global_ttt_grad_clip: 1.0 + global_ttt_lr: 0.001 + global_ttt_momentum: 0.9 + global_ttt_respect_doc_boundaries: True + global_ttt_warmup_chunks: 0 + global_ttt_warmup_start_lr: 0.0 + gptq_calibration_batches: 16 + gptq_reserve_seconds: 4.0 + grad_accum_steps: 1 + grad_clip_norm: 0.3 + is_main_process: True + iterations: 20000 + leaky_relu_sq_slope: 0.3 + ln_scale: True + local_rank: 0 + logfile: /workspace/parameter-golf-pr2014-gated-clean/records/pr2014_ttt_sweeps/lrelu03_ngram_inside_p2500_c64_s42/lrelu03_ngram_inside_p2500_c64_s42.txt + logit_softcap: 30.0 + loop_end: 5 + loop_start: 3 + lqer_asym_enabled: True + lqer_asym_group: 64 + lqer_enabled: True + lqer_factor_bits: 4 + lqer_gain_select: False + lqer_rank: 4 + lqer_scope: all + lqer_top_k: 3 + matrix_bits: 6 + matrix_clip_sigmas: 12.85 + matrix_lr: 0.026 + max_wallclock_seconds: 600.0 + midrun_cap_log_updates: False + midrun_cap_schedule: + min_lr: 0.1 + mlp_clip_sigmas: 11.5 + mlp_mult: 4.0 + model_dim: 512 + model_path: /workspace/parameter-golf-pr2014-gated-clean/records/pr2014_ttt_sweeps/lrelu03_ngram_inside_p2500_c64_s42/final_model.pt + muon_backend_steps: 5 + muon_momentum: 0.97 + muon_momentum_warmup_start: 0.92 + muon_momentum_warmup_steps: 1500 + muon_row_normalize: True + muon_wd: 0.095 + ngram_hint_precompute_outside: False + ngram_tilt_enabled: True + num_heads: 8 + num_kv_heads: 4 + num_layers: 11 + num_loops: 2 + parallel_final_lane: mean + parallel_start_layer: 8 + phased_ttt_num_phases: 1 + phased_ttt_prefix_docs: 2500 + qk_gain_init: 5.25 + quantized_model_path: /workspace/parameter-golf-pr2014-gated-clean/records/pr2014_ttt_sweeps/lrelu03_ngram_inside_p2500_c64_s42/final_model.int6.ptz + rank: 0 + rope_base: 10000.0 + rope_dims: 16 + rope_train_seq_len: 3072 + rope_yarn: False + run_id: lrelu03_ngram_inside_p2500_c64_s42 + scalar_lr: 0.02 + seed: 42 + seq_change_warmup_steps: 32 + skip_gates_enabled: True + skylight_norm_beta2: 0.95 + skylight_norm_ema: False + skylight_norm_eps: 1e-07 + skylight_uw_floor: False + skylight_uw_ratio: 0.35 + smear_gate_enabled: True + sparse_attn_gate_enabled: True + sparse_attn_gate_init_std: 0.0 + sparse_attn_gate_scale: 0.5 + tie_embeddings: True + tied_embed_init_std: 0.005 + tied_embed_lr: 0.03 + token_boost: 2.625 + token_order: 16 + token_threshold: 0.8 + tokenizer_path: 
/tmp/parameter-golf-data-authorhf/tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model + train_batch_tokens: 786432 + train_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_train_*.bin + train_log_every: 500 + train_seq_len: 3072 + train_seq_schedule: 1024@0.100,2048@0.700,3072@1.000 + train_seq_schedule_mode: wallclock + ttt_batch_size: 24 + ttt_beta1: 0.0 + ttt_beta2: 0.99 + ttt_chunk_size: 64 + ttt_enabled: True + ttt_eval_batches: + ttt_eval_seq_len: 3072 + ttt_grad_steps: 1 + ttt_k_lora: True + ttt_local_lr_mult: 0.75 + ttt_lora_lr: 0.0001 + ttt_lora_rank: 80 + ttt_mask: no_qv + ttt_mlp_lora: True + ttt_o_lora: True + ttt_optimizer: adam + ttt_q_lora: False + ttt_short_beta2: 0.99 + ttt_short_chunk_size: 32 + ttt_short_doc_len: 2000 + ttt_short_lora_enabled: False + ttt_short_lora_lr: 0.0001 + ttt_short_lora_rank: 80 + ttt_short_score_first_enabled: True + ttt_short_score_first_steps: 256:16,2000:32 + ttt_short_weight_decay: 0.5 + ttt_train_max_doc_len: 0 + ttt_train_min_doc_len: 0 + ttt_v_lora: False + ttt_warm_start_mean_doc_len: 2000 + ttt_warm_start_mean_enabled: False + ttt_warm_start_mean_momentum: 0.95 + ttt_weight_decay: 0.5 + val_batch_tokens: 524288 + val_bytes_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_val_bytes_*.bin + val_doc_fraction: 1.0 + val_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_val_*.bin + val_loss_every: 0 + vocab_size: 8192 + warmdown_frac: 0.85 + warmdown_iters: 0 + warmup_steps: 20 + within_boost: 0.75 + within_tau: 0.45 + word_boost: 0.75 + word_normalize: strip_punct_lower + word_order: 4 + word_tau: 0.65 + world_size: 8 + xsa_last_n: 11 +train_shards: 80 +val_tokens: 47853343 +TTT_EVAL_ONLY=1 — skipping training + GPTQ, loading saved artifact for TTT eval +ttt_lora_alpha: 144.0 +ttt_warm_start_a: True +ttt_weight_decay: 0.5 +Deserialize: per-group lrzip decompression... +Deserialize: decompression done in 17.1s +Deserialize: per-group lrzip decompression... 
+Deserialize: decompression done in 17.4s +ttt_lora:warming up compile (random tokens, no val data) +ttt_lora:compile warmup done (178.2s) + +beginning TTT eval timer +ngram_tilt:hints total=47853343 gated=13023831 token_gate=628156 within_gate=9867233 word_gate=2891718 agree2plus=303187 +ngram_tilt:precompute_outside_timer_done elapsed=127.48s total_targets=47853343 +ttt_phased: total_docs:50000 prefix_docs:2500 suffix_docs:47500 num_phases:1 boundaries:[2500] target_tokens:47853343 +ttp: b2082/2084 bl:2.0803 bb:1.0144 rl:2.0803 rb:1.0144 dl:20182-24246 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2068/2084 bl:2.0717 bb:1.0166 rl:2.0780 rb:1.0150 dl:7689-7878 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2065/2084 bl:2.3285 bb:1.0704 rl:2.1256 rb:1.0260 dl:6892-7069 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2056/2084 bl:2.2540 bb:1.0724 rl:2.1429 rb:1.0323 dl:5670-5749 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2052/2084 bl:2.1173 bb:1.0091 rl:2.1400 rb:1.0297 dl:5323-5395 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2044/2084 bl:2.1622 bb:1.0772 rl:2.1420 rb:1.0339 dl:4697-4743 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2035/2084 bl:2.2994 bb:1.0766 rl:2.1539 rb:1.0372 dl:4250-4292 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2028/2084 bl:2.4712 bb:1.1188 rl:2.1745 rb:1.0428 dl:3935-3966 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2023/2084 bl:2.3742 bb:1.0528 rl:2.1862 rb:1.0434 dl:3761-3786 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2013/2084 bl:2.3860 bb:1.0497 rl:2.1963 rb:1.0438 dl:3436-3454 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2010/2084 bl:2.3233 bb:1.0410 rl:2.2023 rb:1.0436 dl:3360-3381 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2001/2084 bl:2.2686 bb:1.0306 rl:2.2052 rb:1.0431 dl:3150-3175 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1992/2084 bl:2.3364 bb:1.0462 rl:2.2102 rb:1.0432 dl:2976-2991 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1986/2084 bl:2.2610 bb:1.0277 rl:2.2120 rb:1.0426 dl:2856-2872 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1978/2084 bl:2.3972 bb:1.0320 rl:2.2181 rb:1.0422 dl:2743-2753 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1973/2084 bl:2.1556 bb:1.0166 rl:2.2162 rb:1.0414 dl:2671-2680 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttpp: phase:1/1 pd:2672 gd:2500 t:203.1s +tttg: c1/344 lr:0.001000 t:0.4s +tttg: c2/344 lr:0.001000 t:0.5s +tttg: c3/344 lr:0.001000 t:0.6s +tttg: c4/344 lr:0.001000 t:0.7s +tttg: c5/344 lr:0.001000 t:0.9s +tttg: c6/344 lr:0.000999 t:1.0s +tttg: c7/344 lr:0.000999 t:1.1s +tttg: c8/344 lr:0.000999 t:1.2s +tttg: c9/344 lr:0.000999 t:1.4s +tttg: c10/344 lr:0.000998 t:1.5s +tttg: c11/344 lr:0.000998 t:1.6s +tttg: c12/344 lr:0.000997 t:1.7s +tttg: c13/344 lr:0.000997 t:1.8s +tttg: c14/344 lr:0.000996 t:2.0s +tttg: c15/344 lr:0.000996 t:2.1s +tttg: c16/344 lr:0.000995 t:2.2s +tttg: c17/344 lr:0.000995 t:2.3s +tttg: c18/344 lr:0.000994 t:2.5s +tttg: c19/344 lr:0.000993 t:2.6s +tttg: c20/344 lr:0.000992 t:2.7s +tttg: c21/344 lr:0.000992 t:2.8s +tttg: c22/344 lr:0.000991 t:3.0s +tttg: c23/344 lr:0.000990 t:3.1s +tttg: c24/344 lr:0.000989 t:3.2s +tttg: c25/344 lr:0.000988 t:3.3s +tttg: c26/344 lr:0.000987 t:3.5s +tttg: c27/344 lr:0.000986 t:3.6s +tttg: c28/344 lr:0.000985 t:3.7s +tttg: c29/344 lr:0.000984 t:3.8s +tttg: c30/344 lr:0.000982 t:3.9s +tttg: c31/344 lr:0.000981 t:4.1s +tttg: c32/344 lr:0.000980 t:4.2s +tttg: c33/344 lr:0.000979 t:4.4s +tttg: c34/344 lr:0.000977 t:4.5s +tttg: c35/344 lr:0.000976 t:4.6s +tttg: c36/344 lr:0.000975 t:4.7s +tttg: c37/344 lr:0.000973 t:4.9s +tttg: c38/344 lr:0.000972 t:5.0s +tttg: c39/344 lr:0.000970 t:5.1s +tttg: c40/344 lr:0.000968 t:5.3s +tttg: c41/344 lr:0.000967 t:5.4s +tttg: c42/344 lr:0.000965 t:5.5s +tttg: c43/344 
lr:0.000963 t:5.6s +tttg: c44/344 lr:0.000962 t:5.8s +tttg: c45/344 lr:0.000960 t:5.9s +tttg: c46/344 lr:0.000958 t:6.0s +tttg: c47/344 lr:0.000956 t:6.1s +tttg: c48/344 lr:0.000954 t:6.2s +tttg: c49/344 lr:0.000952 t:6.4s +tttg: c50/344 lr:0.000950 t:6.5s +tttg: c51/344 lr:0.000948 t:6.7s +tttg: c52/344 lr:0.000946 t:6.8s +tttg: c53/344 lr:0.000944 t:6.9s +tttg: c54/344 lr:0.000942 t:7.0s +tttg: c55/344 lr:0.000940 t:7.2s +tttg: c56/344 lr:0.000938 t:7.3s +tttg: c57/344 lr:0.000936 t:7.4s +tttg: c58/344 lr:0.000933 t:7.6s +tttg: c59/344 lr:0.000931 t:7.7s +tttg: c60/344 lr:0.000929 t:7.8s +tttg: c61/344 lr:0.000926 t:7.9s +tttg: c62/344 lr:0.000924 t:8.1s +tttg: c63/344 lr:0.000922 t:8.2s +tttg: c64/344 lr:0.000919 t:8.3s +tttg: c65/344 lr:0.000917 t:8.4s +tttg: c66/344 lr:0.000914 t:8.6s +tttg: c67/344 lr:0.000911 t:8.7s +tttg: c68/344 lr:0.000909 t:8.8s +tttg: c69/344 lr:0.000906 t:8.9s +tttg: c70/344 lr:0.000903 t:9.1s +tttg: c71/344 lr:0.000901 t:9.2s +tttg: c72/344 lr:0.000898 t:9.3s +tttg: c73/344 lr:0.000895 t:9.4s +tttg: c74/344 lr:0.000892 t:9.6s +tttg: c75/344 lr:0.000889 t:9.7s +tttg: c76/344 lr:0.000887 t:9.8s +tttg: c77/344 lr:0.000884 t:9.9s +tttg: c78/344 lr:0.000881 t:10.1s +tttg: c79/344 lr:0.000878 t:10.2s +tttg: c80/344 lr:0.000875 t:10.3s +tttg: c81/344 lr:0.000872 t:10.4s +tttg: c82/344 lr:0.000869 t:10.6s +tttg: c83/344 lr:0.000865 t:10.7s +tttg: c84/344 lr:0.000862 t:10.8s +tttg: c85/344 lr:0.000859 t:10.9s +tttg: c86/344 lr:0.000856 t:11.1s +tttg: c87/344 lr:0.000853 t:11.2s +tttg: c88/344 lr:0.000849 t:11.3s +tttg: c89/344 lr:0.000846 t:11.4s +tttg: c90/344 lr:0.000843 t:11.5s +tttg: c91/344 lr:0.000840 t:11.7s +tttg: c92/344 lr:0.000836 t:11.8s +tttg: c93/344 lr:0.000833 t:11.9s +tttg: c94/344 lr:0.000829 t:12.0s +tttg: c95/344 lr:0.000826 t:12.2s +tttg: c96/344 lr:0.000822 t:12.3s +tttg: c97/344 lr:0.000819 t:12.4s +tttg: c98/344 lr:0.000815 t:12.5s +tttg: c99/344 lr:0.000812 t:12.7s +tttg: c100/344 lr:0.000808 t:12.8s +tttg: c101/344 lr:0.000805 t:12.9s +tttg: c102/344 lr:0.000801 t:13.0s +tttg: c103/344 lr:0.000797 t:13.2s +tttg: c104/344 lr:0.000794 t:13.3s +tttg: c105/344 lr:0.000790 t:13.5s +tttg: c106/344 lr:0.000786 t:13.6s +tttg: c107/344 lr:0.000782 t:13.7s +tttg: c108/344 lr:0.000778 t:13.8s +tttg: c109/344 lr:0.000775 t:14.0s +tttg: c110/344 lr:0.000771 t:14.1s +tttg: c111/344 lr:0.000767 t:14.2s +tttg: c112/344 lr:0.000763 t:14.3s +tttg: c113/344 lr:0.000759 t:14.5s +tttg: c114/344 lr:0.000755 t:14.6s +tttg: c115/344 lr:0.000751 t:14.7s +tttg: c116/344 lr:0.000747 t:14.8s +tttg: c117/344 lr:0.000743 t:15.0s +tttg: c118/344 lr:0.000739 t:15.1s +tttg: c119/344 lr:0.000735 t:15.2s +tttg: c120/344 lr:0.000731 t:15.3s +tttg: c121/344 lr:0.000727 t:15.5s +tttg: c122/344 lr:0.000723 t:15.6s +tttg: c123/344 lr:0.000719 t:15.7s +tttg: c124/344 lr:0.000715 t:15.8s +tttg: c125/344 lr:0.000711 t:15.9s +tttg: c126/344 lr:0.000707 t:16.1s +tttg: c127/344 lr:0.000702 t:16.3s +tttg: c128/344 lr:0.000698 t:16.4s +tttg: c129/344 lr:0.000694 t:16.5s +tttg: c130/344 lr:0.000690 t:16.7s +tttg: c131/344 lr:0.000686 t:16.8s +tttg: c132/344 lr:0.000681 t:16.9s +tttg: c133/344 lr:0.000677 t:17.0s +tttg: c134/344 lr:0.000673 t:17.1s +tttg: c135/344 lr:0.000668 t:17.3s +tttg: c136/344 lr:0.000664 t:17.4s +tttg: c137/344 lr:0.000660 t:17.5s +tttg: c138/344 lr:0.000655 t:17.6s +tttg: c139/344 lr:0.000651 t:17.8s +tttg: c140/344 lr:0.000647 t:17.9s +tttg: c141/344 lr:0.000642 t:18.0s +tttg: c142/344 lr:0.000638 t:18.1s +tttg: c143/344 lr:0.000633 t:18.3s +tttg: c144/344 
lr:0.000629 t:18.4s +tttg: c145/344 lr:0.000625 t:18.5s +tttg: c146/344 lr:0.000620 t:18.6s +tttg: c147/344 lr:0.000616 t:18.8s +tttg: c148/344 lr:0.000611 t:18.9s +tttg: c149/344 lr:0.000607 t:19.0s +tttg: c150/344 lr:0.000602 t:19.1s +tttg: c151/344 lr:0.000598 t:19.2s +tttg: c152/344 lr:0.000593 t:19.3s +tttg: c153/344 lr:0.000589 t:19.4s +tttg: c154/344 lr:0.000584 t:19.5s +tttg: c155/344 lr:0.000580 t:19.6s +tttg: c156/344 lr:0.000575 t:19.7s +tttg: c157/344 lr:0.000571 t:19.8s +tttg: c158/344 lr:0.000566 t:19.9s +tttg: c159/344 lr:0.000562 t:20.0s +tttg: c160/344 lr:0.000557 t:20.1s +tttg: c161/344 lr:0.000553 t:20.2s +tttg: c162/344 lr:0.000548 t:20.3s +tttg: c163/344 lr:0.000543 t:20.4s +tttg: c164/344 lr:0.000539 t:20.5s +tttg: c165/344 lr:0.000534 t:20.6s +tttg: c166/344 lr:0.000530 t:20.7s +tttg: c167/344 lr:0.000525 t:20.9s +tttg: c168/344 lr:0.000521 t:21.0s +tttg: c169/344 lr:0.000516 t:21.1s +tttg: c170/344 lr:0.000511 t:21.1s +tttg: c171/344 lr:0.000507 t:21.2s +tttg: c172/344 lr:0.000502 t:21.3s +tttg: c173/344 lr:0.000498 t:21.4s +tttg: c174/344 lr:0.000493 t:21.5s +tttg: c175/344 lr:0.000489 t:21.6s +tttg: c176/344 lr:0.000484 t:21.8s +tttg: c177/344 lr:0.000479 t:21.9s +tttg: c178/344 lr:0.000475 t:22.0s +tttg: c179/344 lr:0.000470 t:22.1s +tttg: c180/344 lr:0.000466 t:22.1s +tttg: c181/344 lr:0.000461 t:22.2s +tttg: c182/344 lr:0.000457 t:22.3s +tttg: c183/344 lr:0.000452 t:22.4s +tttg: c184/344 lr:0.000447 t:22.5s +tttg: c185/344 lr:0.000443 t:22.6s +tttg: c186/344 lr:0.000438 t:22.7s +tttg: c187/344 lr:0.000434 t:22.8s +tttg: c188/344 lr:0.000429 t:22.9s +tttg: c189/344 lr:0.000425 t:23.0s +tttg: c190/344 lr:0.000420 t:23.1s +tttg: c191/344 lr:0.000416 t:23.2s +tttg: c192/344 lr:0.000411 t:23.3s +tttg: c193/344 lr:0.000407 t:23.4s +tttg: c194/344 lr:0.000402 t:23.5s +tttg: c195/344 lr:0.000398 t:23.6s +tttg: c196/344 lr:0.000393 t:23.7s +tttg: c197/344 lr:0.000389 t:23.9s +tttg: c198/344 lr:0.000384 t:24.0s +tttg: c199/344 lr:0.000380 t:24.1s +tttg: c200/344 lr:0.000375 t:24.2s +tttg: c201/344 lr:0.000371 t:24.3s +tttg: c202/344 lr:0.000367 t:24.4s +tttg: c203/344 lr:0.000362 t:24.4s +tttg: c204/344 lr:0.000358 t:24.5s +tttg: c205/344 lr:0.000353 t:24.6s +tttg: c206/344 lr:0.000349 t:24.7s +tttg: c207/344 lr:0.000345 t:24.8s +tttg: c208/344 lr:0.000340 t:24.9s +tttg: c209/344 lr:0.000336 t:25.0s +tttg: c210/344 lr:0.000332 t:25.1s +tttg: c211/344 lr:0.000327 t:25.2s +tttg: c212/344 lr:0.000323 t:25.3s +tttg: c213/344 lr:0.000319 t:25.4s +tttg: c214/344 lr:0.000314 t:25.5s +tttg: c215/344 lr:0.000310 t:25.6s +tttg: c216/344 lr:0.000306 t:25.8s +tttg: c217/344 lr:0.000302 t:25.9s +tttg: c218/344 lr:0.000298 t:26.0s +tttg: c219/344 lr:0.000293 t:26.1s +tttg: c220/344 lr:0.000289 t:26.2s +tttg: c221/344 lr:0.000285 t:26.3s +tttg: c222/344 lr:0.000281 t:26.4s +tttg: c223/344 lr:0.000277 t:26.5s +tttg: c224/344 lr:0.000273 t:26.6s +tttg: c225/344 lr:0.000269 t:26.7s +tttg: c226/344 lr:0.000265 t:26.8s +tttg: c227/344 lr:0.000261 t:26.9s +tttg: c228/344 lr:0.000257 t:27.0s +tttg: c229/344 lr:0.000253 t:27.1s +tttg: c230/344 lr:0.000249 t:27.2s +tttg: c231/344 lr:0.000245 t:27.3s +tttg: c232/344 lr:0.000241 t:27.4s +tttg: c233/344 lr:0.000237 t:27.5s +tttg: c234/344 lr:0.000233 t:27.6s +tttg: c235/344 lr:0.000229 t:27.7s +tttg: c236/344 lr:0.000225 t:27.8s +tttg: c237/344 lr:0.000222 t:27.9s +tttg: c238/344 lr:0.000218 t:28.0s +tttg: c239/344 lr:0.000214 t:28.1s +tttg: c240/344 lr:0.000210 t:28.2s +tttg: c241/344 lr:0.000206 t:28.3s +tttg: c242/344 lr:0.000203 t:28.4s +tttg: 
c243/344 lr:0.000199 t:28.5s +tttg: c244/344 lr:0.000195 t:28.6s +tttg: c245/344 lr:0.000192 t:28.7s +tttg: c246/344 lr:0.000188 t:28.8s +tttg: c247/344 lr:0.000185 t:28.9s +tttg: c248/344 lr:0.000181 t:29.0s +tttg: c249/344 lr:0.000178 t:29.0s +tttg: c250/344 lr:0.000174 t:29.2s +tttg: c251/344 lr:0.000171 t:29.3s +tttg: c252/344 lr:0.000167 t:29.4s +tttg: c253/344 lr:0.000164 t:29.5s +tttg: c254/344 lr:0.000160 t:29.6s +tttg: c255/344 lr:0.000157 t:29.7s +tttg: c256/344 lr:0.000154 t:29.8s +tttg: c257/344 lr:0.000151 t:29.9s +tttg: c258/344 lr:0.000147 t:30.0s +tttg: c259/344 lr:0.000144 t:30.1s +tttg: c260/344 lr:0.000141 t:30.2s +tttg: c261/344 lr:0.000138 t:30.3s +tttg: c262/344 lr:0.000135 t:30.4s +tttg: c263/344 lr:0.000131 t:30.5s +tttg: c264/344 lr:0.000128 t:30.6s +tttg: c265/344 lr:0.000125 t:30.7s +tttg: c266/344 lr:0.000122 t:30.8s +tttg: c267/344 lr:0.000119 t:30.9s +tttg: c268/344 lr:0.000116 t:31.0s +tttg: c269/344 lr:0.000113 t:31.1s +tttg: c270/344 lr:0.000111 t:31.2s +tttg: c271/344 lr:0.000108 t:31.3s +tttg: c272/344 lr:0.000105 t:31.4s +tttg: c273/344 lr:0.000102 t:31.5s +tttg: c274/344 lr:0.000099 t:31.6s +tttg: c275/344 lr:0.000097 t:31.7s +tttg: c276/344 lr:0.000094 t:31.8s +tttg: c277/344 lr:0.000091 t:31.9s +tttg: c278/344 lr:0.000089 t:32.0s +tttg: c279/344 lr:0.000086 t:32.1s +tttg: c280/344 lr:0.000083 t:32.2s +tttg: c281/344 lr:0.000081 t:32.3s +tttg: c282/344 lr:0.000078 t:32.4s +tttg: c283/344 lr:0.000076 t:32.5s +tttg: c284/344 lr:0.000074 t:32.6s +tttg: c285/344 lr:0.000071 t:32.7s +tttg: c286/344 lr:0.000069 t:32.8s +tttg: c287/344 lr:0.000067 t:32.9s +tttg: c288/344 lr:0.000064 t:33.0s +tttg: c289/344 lr:0.000062 t:33.1s +tttg: c290/344 lr:0.000060 t:33.2s +tttg: c291/344 lr:0.000058 t:33.3s +tttg: c292/344 lr:0.000056 t:33.4s +tttg: c293/344 lr:0.000054 t:33.5s +tttg: c294/344 lr:0.000052 t:33.6s +tttg: c295/344 lr:0.000050 t:33.7s +tttg: c296/344 lr:0.000048 t:33.8s +tttg: c297/344 lr:0.000046 t:33.9s +tttg: c298/344 lr:0.000044 t:34.0s +tttg: c299/344 lr:0.000042 t:34.1s +tttg: c300/344 lr:0.000040 t:34.2s +tttg: c301/344 lr:0.000038 t:34.4s +tttg: c302/344 lr:0.000037 t:34.5s +tttg: c303/344 lr:0.000035 t:34.6s +tttg: c304/344 lr:0.000033 t:34.7s +tttg: c305/344 lr:0.000032 t:34.8s +tttg: c306/344 lr:0.000030 t:34.9s +tttg: c307/344 lr:0.000028 t:35.0s +tttg: c308/344 lr:0.000027 t:35.1s +tttg: c309/344 lr:0.000025 t:35.2s +tttg: c310/344 lr:0.000024 t:35.3s +tttg: c311/344 lr:0.000023 t:35.4s +tttg: c312/344 lr:0.000021 t:35.5s +tttg: c313/344 lr:0.000020 t:35.6s +tttg: c314/344 lr:0.000019 t:35.7s +tttg: c315/344 lr:0.000018 t:35.8s +tttg: c316/344 lr:0.000016 t:35.8s +tttg: c317/344 lr:0.000015 t:35.9s +tttg: c318/344 lr:0.000014 t:36.0s +tttg: c319/344 lr:0.000013 t:36.1s +tttg: c320/344 lr:0.000012 t:36.2s +tttg: c321/344 lr:0.000011 t:36.3s +tttg: c322/344 lr:0.000010 t:36.5s +tttg: c323/344 lr:0.000009 t:36.6s +tttg: c324/344 lr:0.000008 t:36.7s +tttg: c325/344 lr:0.000008 t:36.8s +tttg: c326/344 lr:0.000007 t:36.9s +tttg: c327/344 lr:0.000006 t:37.0s +tttg: c328/344 lr:0.000005 t:37.1s +tttg: c329/344 lr:0.000005 t:37.2s +tttg: c330/344 lr:0.000004 t:37.3s +tttg: c331/344 lr:0.000004 t:37.4s +tttg: c332/344 lr:0.000003 t:37.5s +tttg: c333/344 lr:0.000003 t:37.6s +tttg: c334/344 lr:0.000002 t:37.7s +tttg: c335/344 lr:0.000002 t:37.8s +tttg: c336/344 lr:0.000001 t:37.9s +tttg: c337/344 lr:0.000001 t:38.0s +tttg: c338/344 lr:0.000001 t:38.1s +tttg: c339/344 lr:0.000001 t:38.2s +tttg: c340/344 lr:0.000000 t:38.3s +tttg: c341/344 lr:0.000000 
t:38.4s +tttg: c342/344 lr:0.000000 t:38.5s +tttg: c343/344 lr:0.000000 t:38.6s +ttpr: phase:1/1 t:242.2s +ttp: b1965/2084 bl:2.2690 bb:1.0046 rl:2.2177 rb:1.0403 dl:2565-2577 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1960/2084 bl:2.5024 bb:1.1244 rl:2.2256 rb:1.0427 dl:2515-2526 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1953/2084 bl:2.2830 bb:1.0415 rl:2.2271 rb:1.0427 dl:2441-2454 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1947/2084 bl:2.2156 bb:0.9539 rl:2.2268 rb:1.0403 dl:2368-2382 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1941/2084 bl:2.3010 bb:1.0496 rl:2.2286 rb:1.0405 dl:2314-2323 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1935/2084 bl:2.2631 bb:1.0263 rl:2.2294 rb:1.0402 dl:2260-2270 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1929/2084 bl:2.2707 bb:1.0203 rl:2.2303 rb:1.0398 dl:2203-2216 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1923/2084 bl:2.3698 bb:1.0797 rl:2.2332 rb:1.0406 dl:2160-2164 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1917/2084 bl:2.3223 bb:1.0571 rl:2.2349 rb:1.0409 dl:2117-2122 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1910/2084 bl:2.2029 bb:1.0399 rl:2.2343 rb:1.0409 dl:2067-2072 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1904/2084 bl:2.2235 bb:0.9907 rl:2.2341 rb:1.0400 dl:2028-2035 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1898/2084 bl:2.2237 bb:1.0771 rl:2.2339 rb:1.0406 dl:1990-1996 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1890/2084 bl:2.5079 bb:1.1008 rl:2.2386 rb:1.0417 dl:1942-1947 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1882/2084 bl:2.3616 bb:1.0703 rl:2.2406 rb:1.0421 dl:1902-1906 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1876/2084 bl:2.2726 bb:1.0314 rl:2.2411 rb:1.0420 dl:1873-1880 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1870/2084 bl:2.3520 bb:1.0923 rl:2.2428 rb:1.0427 dl:1846-1850 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1862/2084 bl:2.4466 bb:1.0436 rl:2.2458 rb:1.0428 dl:1813-1817 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1854/2084 bl:2.3306 bb:1.0624 rl:2.2470 rb:1.0430 dl:1778-1781 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1846/2084 bl:2.3793 bb:1.0750 rl:2.2489 rb:1.0435 dl:1744-1749 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1838/2084 bl:2.2034 bb:1.0233 rl:2.2483 rb:1.0432 dl:1714-1718 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1830/2084 bl:2.3174 bb:1.0911 rl:2.2492 rb:1.0438 dl:1684-1688 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1821/2084 bl:2.2893 bb:1.0512 rl:2.2497 rb:1.0439 dl:1652-1657 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1813/2084 bl:2.2435 bb:1.0333 rl:2.2496 rb:1.0438 dl:1623-1627 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1805/2084 bl:2.2443 bb:1.0071 rl:2.2495 rb:1.0434 dl:1598-1601 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1797/2084 bl:2.3637 bb:1.1052 rl:2.2509 rb:1.0441 dl:1574-1577 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1789/2084 bl:2.3715 bb:1.0620 rl:2.2522 rb:1.0443 dl:1549-1552 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1780/2084 bl:2.2574 bb:1.0574 rl:2.2523 rb:1.0444 dl:1522-1525 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1768/2084 bl:2.3715 bb:1.0557 rl:2.2535 rb:1.0445 dl:1490-1493 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1760/2084 bl:2.3645 bb:1.0254 rl:2.2547 rb:1.0443 dl:1471-1474 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1752/2084 bl:2.3092 bb:1.0585 rl:2.2552 rb:1.0445 dl:1450-1452 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1744/2084 bl:2.2369 bb:1.0507 rl:2.2551 rb:1.0445 dl:1429-1432 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1735/2084 bl:2.5565 bb:1.0760 rl:2.2580 rb:1.0449 dl:1407-1409 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1727/2084 bl:2.1970 bb:1.0242 rl:2.2574 rb:1.0447 dl:1388-1390 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1719/2084 bl:2.3166 bb:1.0408 rl:2.2579 rb:1.0446 dl:1368-1371 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1710/2084 bl:2.3219 bb:1.0700 rl:2.2585 rb:1.0449 dl:1349-1351 gd:1 sr:0 
sf:1 tr:24/24 wt:0 +ttp: b1702/2084 bl:2.3155 bb:1.0193 rl:2.2590 rb:1.0446 dl:1331-1332 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1693/2084 bl:2.3392 bb:1.0665 rl:2.2597 rb:1.0448 dl:1311-1314 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1681/2084 bl:2.2744 bb:0.9920 rl:2.2598 rb:1.0444 dl:1285-1287 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1673/2084 bl:2.3132 bb:1.0266 rl:2.2603 rb:1.0442 dl:1271-1273 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1665/2084 bl:2.3708 bb:1.0070 rl:2.2612 rb:1.0439 dl:1255-1257 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1657/2084 bl:2.2808 bb:1.0531 rl:2.2613 rb:1.0440 dl:1237-1239 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1649/2084 bl:2.3530 bb:1.0875 rl:2.2620 rb:1.0443 dl:1221-1223 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1641/2084 bl:2.3685 bb:1.0329 rl:2.2628 rb:1.0442 dl:1205-1207 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1632/2084 bl:2.1966 bb:0.9951 rl:2.2623 rb:1.0438 dl:1189-1190 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1620/2084 bl:2.2714 bb:1.0634 rl:2.2624 rb:1.0440 dl:1169-1170 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1611/2084 bl:2.2523 bb:1.0143 rl:2.2623 rb:1.0438 dl:1153-1155 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1603/2084 bl:2.2130 bb:0.9698 rl:2.2620 rb:1.0432 dl:1140-1142 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1594/2084 bl:2.3586 bb:1.0662 rl:2.2626 rb:1.0434 dl:1126-1128 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1586/2084 bl:2.4238 bb:1.0753 rl:2.2637 rb:1.0436 dl:1112-1113 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1579/2084 bl:2.2402 bb:1.0004 rl:2.2636 rb:1.0433 dl:1102-1104 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1571/2084 bl:2.2460 bb:1.0232 rl:2.2634 rb:1.0432 dl:1089-1090 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1559/2084 bl:2.2347 bb:1.0226 rl:2.2633 rb:1.0430 dl:1070-1071 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1549/2084 bl:2.3068 bb:1.0341 rl:2.2635 rb:1.0430 dl:1054-1055 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1541/2084 bl:2.3544 bb:1.0007 rl:2.2641 rb:1.0427 dl:1043-1044 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1532/2084 bl:2.2760 bb:1.0444 rl:2.2642 rb:1.0427 dl:1030-1031 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1524/2084 bl:2.4137 bb:1.0857 rl:2.2650 rb:1.0430 dl:1018-1019 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1516/2084 bl:2.2868 bb:1.0169 rl:2.2651 rb:1.0428 dl:1008-1009 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1507/2084 bl:2.4099 bb:1.0770 rl:2.2660 rb:1.0430 dl:996-997 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1495/2084 bl:2.1631 bb:0.9939 rl:2.2654 rb:1.0427 dl:980-981 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1485/2084 bl:2.2302 bb:0.9736 rl:2.2652 rb:1.0423 dl:967-967 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1476/2084 bl:2.3747 bb:1.0799 rl:2.2658 rb:1.0426 dl:956-957 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1468/2084 bl:2.2640 bb:1.0094 rl:2.2658 rb:1.0424 dl:947-948 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1460/2084 bl:2.3038 bb:1.0290 rl:2.2660 rb:1.0423 dl:937-937 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1451/2084 bl:2.1497 bb:0.9728 rl:2.2654 rb:1.0419 dl:927-928 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1443/2084 bl:2.3858 bb:0.9945 rl:2.2660 rb:1.0417 dl:917-919 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1434/2084 bl:2.3381 bb:1.0039 rl:2.2663 rb:1.0415 dl:906-908 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1421/2084 bl:2.3349 bb:1.0123 rl:2.2667 rb:1.0413 dl:891-892 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1413/2084 bl:2.2283 bb:1.0445 rl:2.2665 rb:1.0413 dl:883-884 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1405/2084 bl:2.3887 bb:1.0100 rl:2.2671 rb:1.0412 dl:874-875 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1397/2084 bl:2.3074 bb:1.0294 rl:2.2673 rb:1.0411 dl:865-866 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1389/2084 bl:2.3115 bb:1.0360 rl:2.2675 rb:1.0411 dl:856-857 gd:1 sr:0 sf:1 tr:24/24 wt:0 
+ttp: b1381/2084 bl:2.1709 bb:0.9797 rl:2.2670 rb:1.0408 dl:847-849 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1371/2084 bl:2.4134 bb:1.0802 rl:2.2677 rb:1.0410 dl:837-838 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1361/2084 bl:2.3715 bb:1.0362 rl:2.2681 rb:1.0410 dl:826-827 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1351/2084 bl:2.1705 bb:0.9595 rl:2.2677 rb:1.0406 dl:815-816 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1343/2084 bl:2.3890 bb:1.0209 rl:2.2682 rb:1.0405 dl:807-808 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1335/2084 bl:2.2833 bb:1.0618 rl:2.2683 rb:1.0406 dl:800-801 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1327/2084 bl:2.3398 bb:1.1019 rl:2.2686 rb:1.0409 dl:791-792 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1317/2084 bl:2.3417 bb:1.0307 rl:2.2689 rb:1.0408 dl:782-783 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1310/2084 bl:2.1972 bb:0.9787 rl:2.2686 rb:1.0406 dl:775-776 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1303/2084 bl:2.5358 bb:1.1355 rl:2.2696 rb:1.0409 dl:770-770 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1292/2084 bl:2.3118 bb:1.0307 rl:2.2698 rb:1.0409 dl:759-760 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1283/2084 bl:2.3174 bb:1.0409 rl:2.2700 rb:1.0409 dl:751-752 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1274/2084 bl:2.4681 bb:1.0916 rl:2.2707 rb:1.0411 dl:743-744 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1266/2084 bl:2.3521 bb:1.0352 rl:2.2710 rb:1.0411 dl:736-737 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1256/2084 bl:2.4898 bb:1.0991 rl:2.2718 rb:1.0413 dl:727-728 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1247/2084 bl:2.3059 bb:1.0085 rl:2.2719 rb:1.0412 dl:720-721 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1240/2084 bl:2.3678 bb:1.0296 rl:2.2723 rb:1.0411 dl:714-714 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1229/2084 bl:2.3119 bb:1.0306 rl:2.2724 rb:1.0411 dl:705-706 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1220/2084 bl:2.2311 bb:0.9843 rl:2.2723 rb:1.0409 dl:697-699 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1213/2084 bl:2.3612 bb:1.0691 rl:2.2726 rb:1.0410 dl:692-693 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1205/2084 bl:2.2994 bb:1.0207 rl:2.2726 rb:1.0409 dl:686-687 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1195/2084 bl:2.3562 bb:1.0241 rl:2.2729 rb:1.0409 dl:678-679 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1188/2084 bl:2.3855 bb:1.0997 rl:2.2733 rb:1.0411 dl:673-673 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1177/2084 bl:2.2637 bb:1.0324 rl:2.2733 rb:1.0410 dl:664-665 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1160/2084 bl:2.2219 bb:0.9854 rl:2.2731 rb:1.0408 dl:650-651 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1152/2084 bl:2.2411 bb:1.0022 rl:2.2730 rb:1.0407 dl:645-645 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1143/2084 bl:2.1679 bb:0.9950 rl:2.2727 rb:1.0406 dl:639-639 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1134/2084 bl:2.2169 bb:1.0072 rl:2.2725 rb:1.0405 dl:632-633 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1123/2084 bl:2.2979 bb:0.9818 rl:2.2726 rb:1.0403 dl:624-625 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1116/2084 bl:2.2272 bb:1.0161 rl:2.2724 rb:1.0402 dl:619-619 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1105/2084 bl:2.2212 bb:1.0564 rl:2.2723 rb:1.0403 dl:611-612 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1097/2084 bl:2.3640 bb:1.0412 rl:2.2726 rb:1.0403 dl:605-606 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1087/2084 bl:2.3002 bb:1.0657 rl:2.2726 rb:1.0403 dl:597-598 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1080/2084 bl:2.3662 bb:1.0397 rl:2.2729 rb:1.0403 dl:593-593 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1071/2084 bl:2.2409 bb:0.9919 rl:2.2728 rb:1.0402 dl:587-587 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1059/2084 bl:2.3082 bb:1.0315 rl:2.2729 rb:1.0402 dl:578-579 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1052/2084 bl:2.2185 bb:1.0680 rl:2.2728 rb:1.0402 
dl:574-574 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1043/2084 bl:2.2601 bb:1.0676 rl:2.2727 rb:1.0403 dl:568-568 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1032/2084 bl:2.3346 bb:1.1052 rl:2.2729 rb:1.0405 dl:560-561 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1023/2084 bl:2.3225 bb:1.0616 rl:2.2730 rb:1.0405 dl:554-555 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1016/2084 bl:2.2922 bb:1.0738 rl:2.2731 rb:1.0406 dl:549-550 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1006/2084 bl:2.3544 bb:1.0668 rl:2.2733 rb:1.0407 dl:544-544 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b997/2084 bl:2.2794 bb:1.0318 rl:2.2733 rb:1.0407 dl:537-538 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b987/2084 bl:2.3636 bb:1.1422 rl:2.2735 rb:1.0409 dl:531-532 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b977/2084 bl:2.3093 bb:1.0700 rl:2.2736 rb:1.0410 dl:525-526 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b969/2084 bl:2.3257 bb:1.0391 rl:2.2737 rb:1.0410 dl:521-521 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b958/2084 bl:2.2296 bb:0.9993 rl:2.2736 rb:1.0409 dl:513-514 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b947/2084 bl:2.3662 bb:1.0683 rl:2.2738 rb:1.0409 dl:506-507 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b940/2084 bl:2.1771 bb:0.9678 rl:2.2736 rb:1.0408 dl:501-502 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b930/2084 bl:2.3043 bb:1.0042 rl:2.2737 rb:1.0407 dl:495-496 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b921/2084 bl:2.3384 bb:1.0900 rl:2.2738 rb:1.0408 dl:491-491 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b909/2084 bl:2.2763 bb:1.0700 rl:2.2738 rb:1.0408 dl:484-485 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b900/2084 bl:2.3284 bb:1.0656 rl:2.2739 rb:1.0409 dl:479-480 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b892/2084 bl:2.4552 bb:1.1143 rl:2.2743 rb:1.0410 dl:474-475 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b884/2084 bl:2.2853 bb:0.9994 rl:2.2743 rb:1.0410 dl:470-470 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b871/2084 bl:2.2207 bb:1.0601 rl:2.2742 rb:1.0410 dl:462-463 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b861/2084 bl:2.4898 bb:1.1001 rl:2.2747 rb:1.0411 dl:457-458 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b854/2084 bl:2.2965 bb:1.0918 rl:2.2747 rb:1.0412 dl:453-453 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b841/2084 bl:2.1640 bb:1.0002 rl:2.2745 rb:1.0411 dl:445-446 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b835/2084 bl:2.2870 bb:1.0957 rl:2.2745 rb:1.0412 dl:442-442 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b825/2084 bl:2.4527 bb:1.1413 rl:2.2749 rb:1.0414 dl:437-437 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b816/2084 bl:2.5094 bb:1.1293 rl:2.2753 rb:1.0416 dl:431-432 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b808/2084 bl:2.3591 bb:1.1034 rl:2.2755 rb:1.0417 dl:427-427 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b795/2084 bl:2.3573 bb:1.0858 rl:2.2756 rb:1.0418 dl:419-420 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b786/2084 bl:2.2663 bb:1.0369 rl:2.2756 rb:1.0418 dl:414-415 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b778/2084 bl:2.3013 bb:1.0964 rl:2.2757 rb:1.0419 dl:409-410 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b768/2084 bl:2.1138 bb:0.9871 rl:2.2754 rb:1.0418 dl:404-405 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b758/2084 bl:2.3344 bb:1.0711 rl:2.2755 rb:1.0418 dl:399-400 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b750/2084 bl:2.4051 bb:1.0798 rl:2.2757 rb:1.0419 dl:395-395 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b739/2084 bl:2.1541 bb:1.0660 rl:2.2755 rb:1.0419 dl:389-390 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b730/2084 bl:2.1982 bb:1.0656 rl:2.2754 rb:1.0420 dl:384-385 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b722/2084 bl:2.4193 bb:1.1406 rl:2.2756 rb:1.0421 dl:381-381 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b712/2084 bl:2.4572 bb:1.1101 rl:2.2759 rb:1.0422 dl:376-376 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b700/2084 bl:2.3457 bb:1.0920 rl:2.2760 
rb:1.0423 dl:370-371 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b690/2084 bl:2.2997 bb:1.1227 rl:2.2760 rb:1.0424 dl:365-366 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b683/2084 bl:2.4597 bb:1.1110 rl:2.2763 rb:1.0425 dl:362-362 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b674/2084 bl:2.3534 bb:1.1505 rl:2.2764 rb:1.0427 dl:358-358 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b662/2084 bl:2.5813 bb:1.1490 rl:2.2769 rb:1.0429 dl:352-353 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b655/2084 bl:2.3451 bb:1.1286 rl:2.2770 rb:1.0430 dl:349-349 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b643/2084 bl:2.3595 bb:1.1015 rl:2.2771 rb:1.0431 dl:343-344 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b636/2084 bl:2.4704 bb:1.0986 rl:2.2774 rb:1.0432 dl:340-340 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b624/2084 bl:2.4816 bb:1.1964 rl:2.2777 rb:1.0434 dl:334-335 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b615/2084 bl:2.3315 bb:1.0738 rl:2.2778 rb:1.0434 dl:331-331 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b606/2084 bl:2.2430 bb:1.1097 rl:2.2777 rb:1.0435 dl:327-327 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b598/2084 bl:2.2904 bb:1.0448 rl:2.2777 rb:1.0435 dl:323-323 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b588/2084 bl:2.3552 bb:1.0894 rl:2.2778 rb:1.0436 dl:319-319 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b577/2084 bl:2.3336 bb:1.0958 rl:2.2779 rb:1.0436 dl:314-314 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b564/2084 bl:2.4754 bb:1.0969 rl:2.2782 rb:1.0437 dl:309-309 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b558/2084 bl:2.1810 bb:1.0974 rl:2.2781 rb:1.0438 dl:306-306 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b548/2084 bl:2.4094 bb:1.1232 rl:2.2782 rb:1.0439 dl:302-302 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b536/2084 bl:2.2856 bb:1.0725 rl:2.2782 rb:1.0439 dl:297-298 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b530/2084 bl:2.4350 bb:1.1284 rl:2.2784 rb:1.0440 dl:295-295 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b523/2084 bl:2.3768 bb:1.0523 rl:2.2785 rb:1.0440 dl:292-292 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b508/2084 bl:2.3788 bb:1.1022 rl:2.2787 rb:1.0441 dl:285-286 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b501/2084 bl:2.3073 bb:1.0723 rl:2.2787 rb:1.0441 dl:283-283 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b490/2084 bl:2.4084 bb:1.0867 rl:2.2788 rb:1.0442 dl:278-279 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b479/2084 bl:2.4234 bb:1.1413 rl:2.2790 rb:1.0443 dl:275-275 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b472/2084 bl:2.4743 bb:1.0864 rl:2.2792 rb:1.0443 dl:272-272 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b461/2084 bl:2.4677 bb:1.1053 rl:2.2794 rb:1.0444 dl:268-268 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b450/2084 bl:2.5511 bb:1.2529 rl:2.2797 rb:1.0446 dl:264-264 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b438/2084 bl:2.4731 bb:1.1294 rl:2.2800 rb:1.0447 dl:260-260 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b432/2084 bl:2.3071 bb:1.0513 rl:2.2800 rb:1.0447 dl:257-257 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b422/2084 bl:2.5476 bb:1.2208 rl:2.2803 rb:1.0449 dl:254-254 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b414/2084 bl:2.4215 bb:1.1866 rl:2.2804 rb:1.0450 dl:251-251 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b406/2084 bl:2.2221 bb:1.0718 rl:2.2804 rb:1.0450 dl:248-248 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b395/2084 bl:2.2309 bb:1.0921 rl:2.2803 rb:1.0451 dl:244-244 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b384/2084 bl:2.3679 bb:1.1132 rl:2.2804 rb:1.0452 dl:240-240 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b375/2084 bl:2.4956 bb:1.1628 rl:2.2806 rb:1.0453 dl:236-237 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b366/2084 bl:2.4058 bb:1.1062 rl:2.2807 rb:1.0453 dl:233-234 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b362/2084 bl:2.4029 bb:1.1127 rl:2.2808 rb:1.0454 dl:232-232 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b350/2084 bl:2.3468 bb:1.1767 rl:2.2809 
rb:1.0455 dl:228-228 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b337/2084 bl:2.4789 bb:1.1925 rl:2.2811 rb:1.0456 dl:223-224 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b329/2084 bl:2.3463 bb:1.1579 rl:2.2811 rb:1.0457 dl:220-221 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b320/2084 bl:2.2663 bb:1.0687 rl:2.2811 rb:1.0457 dl:217-218 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b316/2084 bl:2.3950 bb:1.1847 rl:2.2812 rb:1.0459 dl:216-216 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b306/2084 bl:2.4041 bb:1.1618 rl:2.2813 rb:1.0460 dl:213-213 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b295/2084 bl:2.3369 bb:1.1790 rl:2.2814 rb:1.0461 dl:209-209 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b287/2084 bl:2.2575 bb:1.1395 rl:2.2814 rb:1.0461 dl:206-206 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b272/2084 bl:2.5187 bb:1.1329 rl:2.2816 rb:1.0462 dl:201-202 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b267/2084 bl:2.4927 bb:1.1932 rl:2.2817 rb:1.0463 dl:200-200 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b255/2084 bl:2.5617 bb:1.2354 rl:2.2819 rb:1.0465 dl:195-196 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b249/2084 bl:2.5697 bb:1.2153 rl:2.2822 rb:1.0466 dl:193-193 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b240/2084 bl:2.4313 bb:1.2221 rl:2.2823 rb:1.0467 dl:190-190 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b230/2084 bl:2.4692 bb:1.1827 rl:2.2824 rb:1.0468 dl:187-187 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b221/2084 bl:2.4013 bb:1.1579 rl:2.2825 rb:1.0469 dl:184-184 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b209/2084 bl:2.4439 bb:1.1537 rl:2.2826 rb:1.0470 dl:180-180 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b199/2084 bl:2.6419 bb:1.2201 rl:2.2829 rb:1.0471 dl:177-177 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b191/2084 bl:2.4381 bb:1.1777 rl:2.2830 rb:1.0472 dl:174-174 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b185/2084 bl:2.4590 bb:1.1794 rl:2.2831 rb:1.0473 dl:172-172 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b173/2084 bl:2.5616 bb:1.1560 rl:2.2833 rb:1.0473 dl:168-168 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b161/2084 bl:2.3960 bb:1.2656 rl:2.2834 rb:1.0475 dl:164-164 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b152/2084 bl:2.2819 bb:1.0952 rl:2.2834 rb:1.0475 dl:161-161 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b145/2084 bl:2.6088 bb:1.2246 rl:2.2836 rb:1.0476 dl:158-158 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b134/2084 bl:2.4499 bb:1.1580 rl:2.2837 rb:1.0477 dl:154-154 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b124/2084 bl:2.4511 bb:1.1234 rl:2.2838 rb:1.0477 dl:150-150 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b115/2084 bl:2.4979 bb:1.1707 rl:2.2839 rb:1.0478 dl:147-147 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b103/2084 bl:2.3459 bb:1.1379 rl:2.2840 rb:1.0478 dl:143-143 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b94/2084 bl:2.6218 bb:1.2699 rl:2.2842 rb:1.0480 dl:139-139 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b82/2084 bl:2.5826 bb:1.1584 rl:2.2843 rb:1.0480 dl:134-135 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b78/2084 bl:2.6787 bb:1.2217 rl:2.2845 rb:1.0481 dl:133-133 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b64/2084 bl:2.4783 bb:1.1524 rl:2.2846 rb:1.0482 dl:128-128 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b55/2084 bl:2.5603 bb:1.2609 rl:2.2848 rb:1.0483 dl:124-124 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b46/2084 bl:2.5400 bb:1.1652 rl:2.2849 rb:1.0483 dl:119-120 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b36/2084 bl:2.6525 bb:1.2323 rl:2.2851 rb:1.0484 dl:114-115 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b24/2084 bl:2.8335 bb:1.3262 rl:2.2853 rb:1.0485 dl:106-107 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b16/2084 bl:2.6544 bb:1.1558 rl:2.2854 rb:1.0486 dl:100-101 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b4/2084 bl:2.7921 bb:1.2699 rl:2.2856 rb:1.0486 dl:84-86 gd:1 sr:0 sf:1 tr:24/24 wt:0 +quantized_ttt_phased val_loss:2.30935268 val_bpb:1.05528133 
eval_time:580667ms +total_eval_time:580.7s diff --git a/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/lossless_caps.py b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/lossless_caps.py new file mode 100644 index 0000000000..98e472f824 --- /dev/null +++ b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/lossless_caps.py @@ -0,0 +1,833 @@ +"""Lossless capitalization pre-encoding helpers. + +This module provides a narrow, reversible transform that only touches +ASCII capital letters `A-Z`. Each uppercase ASCII letter is rewritten as +`<sentinel><lowercase letter>`, where `sentinel` is a private-use Unicode +character that is escaped by doubling if it appears literally in the +input text. + +Example with the default sentinel `\\uE000`: + + "The NASA Launch" -> "\\uE000the \\uE000n\\uE000a\\uE000s\\uE000a \\uE000launch" + +The transform is intentionally simple for v1: + +- lowercase ASCII letters are unchanged +- uppercase ASCII letters become sentinel + lowercase letter +- non-ASCII characters are left untouched +- literal sentinel characters are escaped as sentinel + sentinel + +This makes the transform exactly invertible while allowing a downstream +tokenizer to reuse lowercase subwords across case variants. +""" + +from __future__ import annotations + +import json +from pathlib import Path +from typing import Callable, Iterable + +LOSSLESS_CAPS_V1 = "lossless_caps_v1" +LOSSLESS_CAPS_V2 = "lossless_caps_v2" +LOSSLESS_CAPS_V3 = "lossless_caps_v3" +LOSSLESS_CAPS_V4 = "lossless_caps_v4" +LOSSLESS_CAPS_V5 = "lossless_caps_v5" +LOSSLESS_CAPS_V6 = "lossless_caps_v6" +LOSSLESS_CAPS_V7 = "lossless_caps_v7" +LOSSLESS_CAPS_CASEOPS_V1 = "lossless_caps_caseops_v1" +IDENTITY = "identity" +DEFAULT_SENTINEL = "\uE000" +DEFAULT_V2_TITLE = "\uE001" +DEFAULT_V2_ALLCAPS = "\uE002" +DEFAULT_V2_CAPNEXT = "\uE003" +DEFAULT_V2_ESC = "\uE004" +DEFAULT_V5_TITLE_MIN_LEN = 7 +DEFAULT_V6_ALLCAPS_MIN_LEN = 3 +DEFAULT_V7_ALLCAPS_MIN_LEN = 4 + + +class LosslessCapsError(ValueError): + """Raised when a transformed string is malformed.""" + + +def _is_ascii_upper(ch: str) -> bool: + return "A" <= ch <= "Z" + + +def _is_ascii_lower(ch: str) -> bool: + return "a" <= ch <= "z" + + +def _is_ascii_alpha(ch: str) -> bool: + return _is_ascii_lower(ch) or _is_ascii_upper(ch) + + +def _validate_distinct_single_chars(*chars: str) -> None: + if any(len(ch) != 1 for ch in chars): + raise ValueError("all control characters must be exactly one character") + if len(set(chars)) != len(chars): + raise ValueError("control characters must be distinct") + + +def encode_lossless_caps_v1(text: str, *, sentinel: str = DEFAULT_SENTINEL) -> str: + """Encode ASCII capitals reversibly using a one-character sentinel.""" + if len(sentinel) != 1: + raise ValueError("sentinel must be exactly one character") + out: list[str] = [] + for ch in text: + if ch == sentinel: + out.append(sentinel) + out.append(sentinel) + elif _is_ascii_upper(ch): + out.append(sentinel) + out.append(ch.lower()) + else: + out.append(ch) + return "".join(out) + + +def decode_lossless_caps_v1(text: str, *, sentinel: str = DEFAULT_SENTINEL) -> str: + """Decode the `lossless_caps_v1` transform back to the original text.""" + if len(sentinel) != 1: + raise ValueError("sentinel must be exactly one character") + out: list[str] = [] + i = 0 + n = len(text) + while i < n: + ch = text[i] + if ch != sentinel: + out.append(ch) + i += 1 + continue + if i + 1 >= n: + raise LosslessCapsError("dangling capitalization sentinel at end of string")
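# Round-trip sketch for the v1 transform (names are this module's own; the
# expected strings follow the docstring example above):
#
#     s = encode_lossless_caps_v1("The NASA Launch")
#     assert s == "\uE000the \uE000n\uE000a\uE000s\uE000a \uE000launch"
#     assert decode_lossless_caps_v1(s) == "The NASA Launch"
#     assert encode_lossless_caps_v1("\uE000") == "\uE000\uE000"  # literal sentinel doubles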
+ nxt = text[i + 1] + if nxt == sentinel: + out.append(sentinel) + elif _is_ascii_lower(nxt): + out.append(nxt.upper()) + else: + raise LosslessCapsError( + f"invalid sentinel escape sequence {sentinel + nxt!r}; " + "expected doubled sentinel or sentinel + lowercase ASCII letter" + ) + i += 2 + return "".join(out) + + +def encode_lossless_caps_v2( + text: str, + *, + title: str = DEFAULT_V2_TITLE, + allcaps: str = DEFAULT_V2_ALLCAPS, + capnext: str = DEFAULT_V2_CAPNEXT, + esc: str = DEFAULT_V2_ESC, +) -> str: + """Encode ASCII word capitalization with cheap word-level markers. + + Rules over maximal ASCII alphabetic runs: + - lowercase words stay unchanged + - TitleCase words become `title + lowercase(word)` + - ALLCAPS words become `allcaps + lowercase(word)` + - mixed-case words use: + - optional `title` when the first letter is uppercase + - `capnext + lowercase(letter)` for subsequent uppercase letters + - literal control characters are escaped as `esc + literal` + """ + _validate_distinct_single_chars(title, allcaps, capnext, esc) + controls = {title, allcaps, capnext, esc} + out: list[str] = [] + i = 0 + n = len(text) + while i < n: + ch = text[i] + if ch in controls: + out.append(esc) + out.append(ch) + i += 1 + continue + if not _is_ascii_alpha(ch): + out.append(ch) + i += 1 + continue + + j = i + 1 + while j < n and _is_ascii_alpha(text[j]): + j += 1 + word = text[i:j] + lower_word = word.lower() + + if word.islower(): + out.append(word) + elif len(word) >= 2 and word.isupper(): + out.append(allcaps) + out.append(lower_word) + elif _is_ascii_upper(word[0]) and word[1:].islower(): + out.append(title) + out.append(lower_word) + else: + if _is_ascii_upper(word[0]): + out.append(title) + out.append(lower_word[0]) + for orig_ch, lower_ch in zip(word[1:], lower_word[1:], strict=True): + if _is_ascii_upper(orig_ch): + out.append(capnext) + out.append(lower_ch) + i = j + return "".join(out) + + +def decode_lossless_caps_v2( + text: str, + *, + title: str = DEFAULT_V2_TITLE, + allcaps: str = DEFAULT_V2_ALLCAPS, + capnext: str = DEFAULT_V2_CAPNEXT, + esc: str = DEFAULT_V2_ESC, +) -> str: + """Decode the `lossless_caps_v2` transform back to the original text.""" + _validate_distinct_single_chars(title, allcaps, capnext, esc) + out: list[str] = [] + pending_escape = False + pending_word_mode: str | None = None + active_allcaps = False + pending_capnext = False + in_ascii_word = False + + for ch in text: + if pending_escape: + if pending_word_mode is not None and not _is_ascii_alpha(ch): + raise LosslessCapsError("escaped control char cannot satisfy pending word capitalization mode") + out.append(ch) + pending_escape = False + if _is_ascii_alpha(ch): + in_ascii_word = True + else: + in_ascii_word = False + active_allcaps = False + continue + + if ch == esc: + pending_escape = True + continue + if ch == title: + if pending_word_mode is not None or in_ascii_word or pending_capnext: + raise LosslessCapsError("invalid title marker placement") + pending_word_mode = "title" + continue + if ch == allcaps: + if pending_word_mode is not None or in_ascii_word or pending_capnext: + raise LosslessCapsError("invalid allcaps marker placement") + pending_word_mode = "allcaps" + continue + if ch == capnext: + if pending_capnext: + raise LosslessCapsError("duplicate capnext marker") + pending_capnext = True + continue + + if _is_ascii_alpha(ch): + at_word_start = not in_ascii_word + if at_word_start: + if pending_word_mode == "allcaps": + out.append(ch.upper()) + active_allcaps = True + elif pending_word_mode 
== "title": + out.append(ch.upper()) + elif pending_capnext: + out.append(ch.upper()) + else: + out.append(ch) + pending_word_mode = None + pending_capnext = False + in_ascii_word = True + continue + + if pending_word_mode is not None: + raise LosslessCapsError("word capitalization marker leaked into the middle of a word") + if active_allcaps: + out.append(ch.upper()) + elif pending_capnext: + out.append(ch.upper()) + else: + out.append(ch) + pending_capnext = False + continue + + if pending_word_mode is not None or pending_capnext: + raise LosslessCapsError("capitalization marker not followed by an ASCII letter") + out.append(ch) + in_ascii_word = False + active_allcaps = False + + if pending_escape: + raise LosslessCapsError("dangling escape marker at end of string") + if pending_word_mode is not None or pending_capnext: + raise LosslessCapsError("dangling capitalization marker at end of string") + return "".join(out) + + +def encode_lossless_caps_v3( + text: str, + *, + title: str = DEFAULT_V2_TITLE, + allcaps: str = DEFAULT_V2_ALLCAPS, + esc: str = DEFAULT_V2_ESC, +) -> str: + """Encode only common word-level capitalization patterns. + + Rules over maximal ASCII alphabetic runs: + - lowercase words stay unchanged + - TitleCase words become `title + lowercase(word)` + - ALLCAPS words become `allcaps + lowercase(word)` + - all other mixed-case words are left unchanged + - literal control characters are escaped as `esc + literal` + """ + _validate_distinct_single_chars(title, allcaps, esc) + controls = {title, allcaps, esc} + out: list[str] = [] + i = 0 + n = len(text) + while i < n: + ch = text[i] + if ch in controls: + out.append(esc) + out.append(ch) + i += 1 + continue + if not _is_ascii_alpha(ch): + out.append(ch) + i += 1 + continue + + j = i + 1 + while j < n and _is_ascii_alpha(text[j]): + j += 1 + word = text[i:j] + + if word.islower(): + out.append(word) + elif len(word) >= 2 and word.isupper(): + out.append(allcaps) + out.append(word.lower()) + elif _is_ascii_upper(word[0]) and word[1:].islower(): + out.append(title) + out.append(word.lower()) + else: + out.append(word) + i = j + return "".join(out) + + +def decode_lossless_caps_v3( + text: str, + *, + title: str = DEFAULT_V2_TITLE, + allcaps: str = DEFAULT_V2_ALLCAPS, + esc: str = DEFAULT_V2_ESC, +) -> str: + """Decode the `lossless_caps_v3` transform back to the original text.""" + _validate_distinct_single_chars(title, allcaps, esc) + out: list[str] = [] + pending_escape = False + pending_word_mode: str | None = None + active_allcaps = False + in_ascii_word = False + + for ch in text: + if pending_escape: + if pending_word_mode is not None and not _is_ascii_alpha(ch): + raise LosslessCapsError("escaped control char cannot satisfy pending word capitalization mode") + out.append(ch) + pending_escape = False + if _is_ascii_alpha(ch): + in_ascii_word = True + else: + in_ascii_word = False + active_allcaps = False + continue + + if ch == esc: + pending_escape = True + continue + if ch == title: + if pending_word_mode is not None or in_ascii_word: + raise LosslessCapsError("invalid title marker placement") + pending_word_mode = "title" + continue + if ch == allcaps: + if pending_word_mode is not None or in_ascii_word: + raise LosslessCapsError("invalid allcaps marker placement") + pending_word_mode = "allcaps" + continue + + if _is_ascii_alpha(ch): + at_word_start = not in_ascii_word + if at_word_start: + if pending_word_mode == "allcaps": + out.append(ch.upper()) + active_allcaps = True + elif pending_word_mode == "title": + 
out.append(ch.upper()) + else: + out.append(ch) + pending_word_mode = None + in_ascii_word = True + continue + + if pending_word_mode is not None: + raise LosslessCapsError("word capitalization marker leaked into the middle of a word") + out.append(ch.upper() if active_allcaps else ch) + continue + + if pending_word_mode is not None: + raise LosslessCapsError("capitalization marker not followed by an ASCII letter") + out.append(ch) + in_ascii_word = False + active_allcaps = False + + if pending_escape: + raise LosslessCapsError("dangling escape marker at end of string") + if pending_word_mode is not None: + raise LosslessCapsError("dangling capitalization marker at end of string") + return "".join(out) + + +def encode_lossless_caps_v4( + text: str, + *, + allcaps: str = DEFAULT_V2_ALLCAPS, + esc: str = DEFAULT_V2_ESC, +) -> str: + """Encode only ALLCAPS ASCII words, leaving all other case untouched.""" + _validate_distinct_single_chars(allcaps, esc) + controls = {allcaps, esc} + out: list[str] = [] + i = 0 + n = len(text) + while i < n: + ch = text[i] + if ch in controls: + out.append(esc) + out.append(ch) + i += 1 + continue + if not _is_ascii_alpha(ch): + out.append(ch) + i += 1 + continue + j = i + 1 + while j < n and _is_ascii_alpha(text[j]): + j += 1 + word = text[i:j] + if len(word) >= 2 and word.isupper(): + out.append(allcaps) + out.append(word.lower()) + else: + out.append(word) + i = j + return "".join(out) + + +def decode_lossless_caps_v4( + text: str, + *, + allcaps: str = DEFAULT_V2_ALLCAPS, + esc: str = DEFAULT_V2_ESC, +) -> str: + """Decode the `lossless_caps_v4` transform back to the original text.""" + _validate_distinct_single_chars(allcaps, esc) + out: list[str] = [] + pending_escape = False + pending_allcaps = False + in_ascii_word = False + active_allcaps = False + + for ch in text: + if pending_escape: + if pending_allcaps and not _is_ascii_alpha(ch): + raise LosslessCapsError("escaped control char cannot satisfy pending allcaps mode") + out.append(ch) + pending_escape = False + if _is_ascii_alpha(ch): + in_ascii_word = True + else: + in_ascii_word = False + active_allcaps = False + continue + + if ch == esc: + pending_escape = True + continue + if ch == allcaps: + if pending_allcaps or in_ascii_word: + raise LosslessCapsError("invalid allcaps marker placement") + pending_allcaps = True + continue + + if _is_ascii_alpha(ch): + if not in_ascii_word: + active_allcaps = pending_allcaps + pending_allcaps = False + in_ascii_word = True + out.append(ch.upper() if active_allcaps else ch) + continue + + if pending_allcaps: + raise LosslessCapsError("allcaps marker not followed by an ASCII letter") + out.append(ch) + in_ascii_word = False + active_allcaps = False + + if pending_escape: + raise LosslessCapsError("dangling escape marker at end of string") + if pending_allcaps: + raise LosslessCapsError("dangling allcaps marker at end of string") + return "".join(out) + + +def encode_lossless_caps_v5( + text: str, + *, + title: str = DEFAULT_V2_TITLE, + allcaps: str = DEFAULT_V2_ALLCAPS, + esc: str = DEFAULT_V2_ESC, + title_min_len: int = DEFAULT_V5_TITLE_MIN_LEN, +) -> str: + """Encode ALLCAPS words and only sufficiently long TitleCase words.""" + _validate_distinct_single_chars(title, allcaps, esc) + controls = {title, allcaps, esc} + out: list[str] = [] + i = 0 + n = len(text) + while i < n: + ch = text[i] + if ch in controls: + out.append(esc) + out.append(ch) + i += 1 + continue + if not _is_ascii_alpha(ch): + out.append(ch) + i += 1 + continue + j = i + 1 + while j < n and 
_is_ascii_alpha(text[j]): + j += 1 + word = text[i:j] + if len(word) >= 2 and word.isupper(): + out.append(allcaps) + out.append(word.lower()) + elif len(word) >= title_min_len and _is_ascii_upper(word[0]) and word[1:].islower(): + out.append(title) + out.append(word.lower()) + else: + out.append(word) + i = j + return "".join(out) + + +def decode_lossless_caps_v5( + text: str, + *, + title: str = DEFAULT_V2_TITLE, + allcaps: str = DEFAULT_V2_ALLCAPS, + esc: str = DEFAULT_V2_ESC, +) -> str: + """Decode the `lossless_caps_v5` transform back to the original text.""" + return decode_lossless_caps_v3(text, title=title, allcaps=allcaps, esc=esc) + + +def encode_lossless_caps_v6( + text: str, + *, + allcaps: str = DEFAULT_V2_ALLCAPS, + esc: str = DEFAULT_V2_ESC, + allcaps_min_len: int = DEFAULT_V6_ALLCAPS_MIN_LEN, +) -> str: + """Encode only ALLCAPS words with length >= allcaps_min_len.""" + _validate_distinct_single_chars(allcaps, esc) + controls = {allcaps, esc} + out: list[str] = [] + i = 0 + n = len(text) + while i < n: + ch = text[i] + if ch in controls: + out.append(esc) + out.append(ch) + i += 1 + continue + if not _is_ascii_alpha(ch): + out.append(ch) + i += 1 + continue + j = i + 1 + while j < n and _is_ascii_alpha(text[j]): + j += 1 + word = text[i:j] + if len(word) >= allcaps_min_len and word.isupper(): + out.append(allcaps) + out.append(word.lower()) + else: + out.append(word) + i = j + return "".join(out) + + +def decode_lossless_caps_v6( + text: str, + *, + allcaps: str = DEFAULT_V2_ALLCAPS, + esc: str = DEFAULT_V2_ESC, +) -> str: + """Decode the `lossless_caps_v6` transform back to the original text.""" + return decode_lossless_caps_v4(text, allcaps=allcaps, esc=esc) + + +def encode_lossless_caps_v7( + text: str, + *, + allcaps: str = DEFAULT_V2_ALLCAPS, + esc: str = DEFAULT_V2_ESC, + allcaps_min_len: int = DEFAULT_V7_ALLCAPS_MIN_LEN, +) -> str: + """Encode only ALLCAPS words with length >= 4.""" + return encode_lossless_caps_v6( + text, + allcaps=allcaps, + esc=esc, + allcaps_min_len=allcaps_min_len, + ) + + +def decode_lossless_caps_v7( + text: str, + *, + allcaps: str = DEFAULT_V2_ALLCAPS, + esc: str = DEFAULT_V2_ESC, +) -> str: + """Decode the `lossless_caps_v7` transform back to the original text.""" + return decode_lossless_caps_v6(text, allcaps=allcaps, esc=esc) + + +def get_text_transform(name: str | None) -> Callable[[str], str]: + """Return the forward text transform for the given config name.""" + normalized = IDENTITY if name in {None, "", IDENTITY} else str(name) + if normalized == IDENTITY: + return lambda text: text + if normalized == LOSSLESS_CAPS_V1: + return encode_lossless_caps_v1 + if normalized == LOSSLESS_CAPS_V2: + return encode_lossless_caps_v2 + if normalized == LOSSLESS_CAPS_V3: + return encode_lossless_caps_v3 + if normalized == LOSSLESS_CAPS_V4: + return encode_lossless_caps_v4 + if normalized == LOSSLESS_CAPS_V5: + return encode_lossless_caps_v5 + if normalized == LOSSLESS_CAPS_V6: + return encode_lossless_caps_v6 + if normalized == LOSSLESS_CAPS_V7: + return encode_lossless_caps_v7 + if normalized == LOSSLESS_CAPS_CASEOPS_V1: + return encode_lossless_caps_v2 + raise ValueError(f"unsupported text_transform={name!r}") + + +def get_text_inverse_transform(name: str | None) -> Callable[[str], str]: + """Return the inverse transform for the given config name.""" + normalized = IDENTITY if name in {None, "", IDENTITY} else str(name) + if normalized == IDENTITY: + return lambda text: text + if normalized == LOSSLESS_CAPS_V1: + return decode_lossless_caps_v1 + 
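# Usage sketch (functions from this module): every registered name resolves
# to a forward/inverse pair that round-trips exactly:
#
#     enc = get_text_transform("lossless_caps_v3")
#     dec = get_text_inverse_transform("lossless_caps_v3")
#     assert dec(enc("NASA Titlecase mixedCase")) == "NASA Titlecase mixedCase"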
if normalized == LOSSLESS_CAPS_V2: + return decode_lossless_caps_v2 + if normalized == LOSSLESS_CAPS_V3: + return decode_lossless_caps_v3 + if normalized == LOSSLESS_CAPS_V4: + return decode_lossless_caps_v4 + if normalized == LOSSLESS_CAPS_V5: + return decode_lossless_caps_v5 + if normalized == LOSSLESS_CAPS_V6: + return decode_lossless_caps_v6 + if normalized == LOSSLESS_CAPS_V7: + return decode_lossless_caps_v7 + if normalized == LOSSLESS_CAPS_CASEOPS_V1: + return decode_lossless_caps_v2 + raise ValueError(f"unsupported text_transform={name!r}") + + +def normalize_text_transform_name(name: str | None) -> str: + """Normalize empty/None transform names to the identity transform.""" + return IDENTITY if name in {None, "", IDENTITY} else str(name) + + +def get_text_transform_control_symbols(name: str | None) -> list[str]: + """Return reserved control symbols used by a transform, if any.""" + normalized = normalize_text_transform_name(name) + if normalized == IDENTITY: + return [] + if normalized == LOSSLESS_CAPS_V1: + return [DEFAULT_SENTINEL] + if normalized == LOSSLESS_CAPS_V2: + return [DEFAULT_V2_TITLE, DEFAULT_V2_ALLCAPS, DEFAULT_V2_CAPNEXT, DEFAULT_V2_ESC] + if normalized == LOSSLESS_CAPS_CASEOPS_V1: + return [DEFAULT_V2_TITLE, DEFAULT_V2_ALLCAPS, DEFAULT_V2_CAPNEXT, DEFAULT_V2_ESC] + if normalized in {LOSSLESS_CAPS_V3, LOSSLESS_CAPS_V5}: + return [DEFAULT_V2_TITLE, DEFAULT_V2_ALLCAPS, DEFAULT_V2_ESC] + if normalized in {LOSSLESS_CAPS_V4, LOSSLESS_CAPS_V6, LOSSLESS_CAPS_V7}: + return [DEFAULT_V2_ALLCAPS, DEFAULT_V2_ESC] + raise ValueError(f"unsupported text_transform={name!r}") + + +def infer_text_transform_from_manifest(tokenizer_path: str | Path) -> str: + """Best-effort lookup of a tokenizer's text transform from a local manifest.""" + tokenizer_path = Path(tokenizer_path).expanduser().resolve() + manifest_candidates = [ + tokenizer_path.parent.parent / "manifest.json", + tokenizer_path.parent / "manifest.json", + ] + for manifest_path in manifest_candidates: + if not manifest_path.is_file(): + continue + try: + payload = json.loads(manifest_path.read_text(encoding="utf-8")) + except (OSError, json.JSONDecodeError): + continue + tokenizers = payload.get("tokenizers") + if not isinstance(tokenizers, list): + continue + for tokenizer_meta in tokenizers: + if not isinstance(tokenizer_meta, dict): + continue + model_path = tokenizer_meta.get("model_path") or tokenizer_meta.get("path") + if not model_path: + continue + candidate = (manifest_path.parent / str(model_path)).resolve() + if candidate == tokenizer_path: + return normalize_text_transform_name(tokenizer_meta.get("text_transform")) + return IDENTITY + + +def surface_piece_original_byte_counts( + surfaces: Iterable[str], + *, + text_transform_name: str | None = None, + sentinel: str = DEFAULT_SENTINEL, +) -> list[int]: + """Return exact original UTF-8 byte counts contributed by each surface piece. + + `surfaces` must be the exact decoded text fragments emitted by SentencePiece + in order, e.g. `piece.surface` from `encode_as_immutable_proto`. 
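Worked example (illustrative, default sentinel, `lossless_caps_v1`): the
surfaces `["\uE000the", " \uE000n\uE000a\uE000s\uE000a"]` decode to `"The"`
and `" NASA"`, so the returned counts are `[3, 5]`; sentinel characters
contribute zero bytes while each restored capital counts as one.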
+ """ + normalized = normalize_text_transform_name(text_transform_name) + if normalized == IDENTITY: + return [len(surface.encode("utf-8")) for surface in surfaces] + if normalized == LOSSLESS_CAPS_V1: + if len(sentinel) != 1: + raise ValueError("sentinel must be exactly one character") + sentinel_bytes = len(sentinel.encode("utf-8")) + pending_sentinel = False + counts: list[int] = [] + for surface in surfaces: + piece_bytes = 0 + for ch in surface: + if pending_sentinel: + if ch == sentinel: + piece_bytes += sentinel_bytes + elif _is_ascii_lower(ch): + piece_bytes += 1 + else: + raise LosslessCapsError( + f"invalid continuation {ch!r} after capitalization sentinel" + ) + pending_sentinel = False + continue + if ch == sentinel: + pending_sentinel = True + else: + piece_bytes += len(ch.encode("utf-8")) + counts.append(piece_bytes) + if pending_sentinel: + raise LosslessCapsError("dangling capitalization sentinel across piece boundary") + return counts + if normalized not in {LOSSLESS_CAPS_V2, LOSSLESS_CAPS_V3, LOSSLESS_CAPS_V4, LOSSLESS_CAPS_V5, LOSSLESS_CAPS_V6, LOSSLESS_CAPS_V7, LOSSLESS_CAPS_CASEOPS_V1}: + raise ValueError(f"unsupported text_transform={text_transform_name!r}") + + title = DEFAULT_V2_TITLE + allcaps = DEFAULT_V2_ALLCAPS + capnext = DEFAULT_V2_CAPNEXT + esc = DEFAULT_V2_ESC + if normalized in {LOSSLESS_CAPS_V2, LOSSLESS_CAPS_CASEOPS_V1}: + _validate_distinct_single_chars(title, allcaps, capnext, esc) + elif normalized in {LOSSLESS_CAPS_V4, LOSSLESS_CAPS_V6, LOSSLESS_CAPS_V7}: + _validate_distinct_single_chars(allcaps, esc) + else: + _validate_distinct_single_chars(title, allcaps, esc) + pending_escape = False + pending_word_mode: str | None = None + active_allcaps = False + pending_capnext = False + in_ascii_word = False + counts: list[int] = [] + for surface in surfaces: + piece_bytes = 0 + for ch in surface: + if pending_escape: + if pending_word_mode is not None and not _is_ascii_alpha(ch): + raise LosslessCapsError("escaped control char cannot satisfy pending word capitalization mode") + piece_bytes += len(ch.encode("utf-8")) + pending_escape = False + if _is_ascii_alpha(ch): + in_ascii_word = True + else: + in_ascii_word = False + active_allcaps = False + continue + if ch == esc: + pending_escape = True + continue + if normalized in {LOSSLESS_CAPS_V2, LOSSLESS_CAPS_V3, LOSSLESS_CAPS_V5, LOSSLESS_CAPS_CASEOPS_V1} and ch == title: + if pending_word_mode is not None or in_ascii_word or pending_capnext: + raise LosslessCapsError("invalid title marker placement") + pending_word_mode = "title" + continue + if ch == allcaps: + if pending_word_mode is not None or in_ascii_word or pending_capnext: + raise LosslessCapsError("invalid allcaps marker placement") + pending_word_mode = "allcaps" + continue + if normalized in {LOSSLESS_CAPS_V2, LOSSLESS_CAPS_CASEOPS_V1} and ch == capnext: + if pending_capnext: + raise LosslessCapsError("duplicate capnext marker") + pending_capnext = True + continue + + if _is_ascii_alpha(ch): + at_word_start = not in_ascii_word + if at_word_start: + piece_bytes += 1 + active_allcaps = pending_word_mode == "allcaps" + pending_word_mode = None + pending_capnext = False + in_ascii_word = True + continue + if pending_word_mode is not None: + raise LosslessCapsError("word capitalization marker leaked into the middle of a word") + piece_bytes += 1 + pending_capnext = False + continue + + if pending_word_mode is not None or pending_capnext: + raise LosslessCapsError("capitalization marker not followed by an ASCII letter") + piece_bytes += 
len(ch.encode("utf-8")) + in_ascii_word = False + active_allcaps = False + counts.append(piece_bytes) + if pending_escape: + raise LosslessCapsError("dangling escape marker across piece boundary") + if pending_word_mode is not None or pending_capnext: + raise LosslessCapsError("dangling capitalization marker across piece boundary") + return counts diff --git a/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/online_ngram_state.c b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/online_ngram_state.c new file mode 100644 index 0000000000..f8472a6f05 --- /dev/null +++ b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/online_ngram_state.c @@ -0,0 +1,433 @@ +#include <stdint.h> +#include <stdlib.h> +#include <stddef.h> + +#define COEFF_COUNT 32 + +static const uint64_t ROLLING_COEFFS[COEFF_COUNT] = { + 36313ULL, 27191ULL, 51647ULL, 81929ULL, 131071ULL, 196613ULL, + 262147ULL, 393241ULL, 524309ULL, 655373ULL, 786433ULL, 917521ULL, + 1048583ULL, 1179653ULL, 1310729ULL, 1441801ULL, 1572869ULL, 1703941ULL, + 1835017ULL, 1966087ULL, 2097169ULL, 2228243ULL, 2359319ULL, 2490389ULL, + 2621471ULL, 2752549ULL, 2883617ULL, 3014687ULL, 3145757ULL, 3276833ULL, + 3407903ULL, 3538973ULL, +}; + +static const uint64_t PAIR_MIX = 1000003ULL; +static const uint64_t PREFIX_BASE = 1099511628211ULL; +static const uint64_t LEN_MIX = 0x9E3779B185EBCA87ULL; +static const uint64_t TABLE_MIX = 0x9e3779b97f4a7c15ULL; + +typedef struct { + uint64_t key; + uint32_t total; + uint32_t top_count; + uint16_t top_tok; + uint16_t _pad; +} CtxBucket; + +typedef struct { + uint64_t key; + uint32_t count; + uint32_t _pad; +} PairBucket; + +typedef struct { + int token_ctx_len; + int token_prefix_len; + int token_head; + uint16_t *token_ring; + + CtxBucket *token_ctx_tbl; + uint8_t *token_ctx_used; + size_t token_ctx_mask; + + PairBucket *token_pair_tbl; + uint8_t *token_pair_used; + size_t token_pair_mask; + + uint64_t within_hash; + uint32_t within_len; + + CtxBucket *within_ctx_tbl; + uint8_t *within_ctx_used; + size_t within_ctx_mask; + + PairBucket *within_pair_tbl; + uint8_t *within_pair_used; + size_t within_pair_mask; +} OnlineNgramState; + +static inline size_t mix_index(uint64_t key, size_t mask) { + return (size_t)((key * TABLE_MIX) & mask); +} + +static inline size_t find_ctx_slot( + CtxBucket *tbl, + uint8_t *used, + size_t mask, + uint64_t key, + int *found +) { + size_t idx = mix_index(key, mask); + for (size_t probe = 0; probe <= mask; ++probe) { + if (!used[idx]) { + *found = 0; + return idx; + } + if (tbl[idx].key == key) { + *found = 1; + return idx; + } + idx = (idx + 1U) & mask; + } + *found = -1; + return 0; +} + +static inline size_t find_pair_slot( + PairBucket *tbl, + uint8_t *used, + size_t mask, + uint64_t key, + int *found +) { + size_t idx = mix_index(key, mask); + for (size_t probe = 0; probe <= mask; ++probe) { + if (!used[idx]) { + *found = 0; + return idx; + } + if (tbl[idx].key == key) { + *found = 1; + return idx; + } + idx = (idx + 1U) & mask; + } + *found = -1; + return 0; +} + +static inline uint64_t token_pair_key(uint64_t ctx_key, uint16_t tok, int ctx_len) { + return (ctx_key * PAIR_MIX) ^ (((uint64_t)tok) * ROLLING_COEFFS[(size_t)ctx_len % COEFF_COUNT]); +} + +static inline uint64_t within_pair_key(uint64_t ctx_key, uint16_t tok) { + return (ctx_key * PAIR_MIX) ^ (((uint64_t)tok) * ROLLING_COEFFS[0]); +} + +static inline uint64_t extend_prefix_hash(uint64_t current_hash, uint16_t tok, uint32_t pos) { + return (current_hash * PREFIX_BASE) ^
(((uint64_t)tok + 1ULL) * ROLLING_COEFFS[(size_t)pos % COEFF_COUNT]); +} + +static inline uint32_t pair_increment( + PairBucket *tbl, + uint8_t *used, + size_t mask, + uint64_t key +) { + int found = 0; + size_t idx = find_pair_slot(tbl, used, mask, key, &found); + if (found < 0) { + return 0U; + } + if (!found) { + used[idx] = 1U; + tbl[idx].key = key; + tbl[idx].count = 1U; + return 1U; + } + tbl[idx].count += 1U; + return tbl[idx].count; +} + +static inline int ctx_increment( + CtxBucket *tbl, + uint8_t *used, + size_t mask, + uint64_t key, + uint16_t tok, + uint32_t pair_count +) { + int found = 0; + size_t idx = find_ctx_slot(tbl, used, mask, key, &found); + if (found < 0) { + return -1; + } + if (!found) { + used[idx] = 1U; + tbl[idx].key = key; + tbl[idx].total = 1U; + tbl[idx].top_count = pair_count; + tbl[idx].top_tok = tok; + return 0; + } + tbl[idx].total += 1U; + if (pair_count > tbl[idx].top_count) { + tbl[idx].top_count = pair_count; + tbl[idx].top_tok = tok; + } + return 0; +} + +static inline uint64_t token_context_hash(const OnlineNgramState *st) { + uint64_t h = 0ULL; + if (st->token_ctx_len <= 0) { + return h; + } + for (int j = 0; j < st->token_ctx_len; ++j) { + const int ring_idx = (st->token_head + j) % st->token_ctx_len; + h ^= ((uint64_t)st->token_ring[ring_idx]) * ROLLING_COEFFS[(size_t)j]; + } + return h; +} + +static inline void token_push(OnlineNgramState *st, uint16_t tok) { + if (st->token_ctx_len <= 0) { + return; + } + if (st->token_prefix_len < st->token_ctx_len) { + st->token_ring[st->token_prefix_len] = tok; + st->token_prefix_len += 1; + return; + } + st->token_ring[st->token_head] = tok; + st->token_head = (st->token_head + 1) % st->token_ctx_len; +} + +static void *xcalloc(size_t count, size_t size) { + if (count == 0 || size == 0) { + return NULL; + } + return calloc(count, size); +} + +static int alloc_tables( + size_t table_bits, + CtxBucket **ctx_tbl, + uint8_t **ctx_used, + size_t *ctx_mask, + PairBucket **pair_tbl, + uint8_t **pair_used, + size_t *pair_mask +) { + const size_t size = 1ULL << table_bits; + *ctx_tbl = (CtxBucket *)xcalloc(size, sizeof(CtxBucket)); + *ctx_used = (uint8_t *)xcalloc(size, sizeof(uint8_t)); + *pair_tbl = (PairBucket *)xcalloc(size, sizeof(PairBucket)); + *pair_used = (uint8_t *)xcalloc(size, sizeof(uint8_t)); + if (!*ctx_tbl || !*ctx_used || !*pair_tbl || !*pair_used) { + return -1; + } + *ctx_mask = size - 1U; + *pair_mask = size - 1U; + return 0; +} + +void *online_ngram_state_create( + int token_ctx_len, + int token_table_bits, + int within_table_bits +) { + if (token_ctx_len < 0 || token_table_bits <= 0 || within_table_bits <= 0) { + return NULL; + } + OnlineNgramState *st = (OnlineNgramState *)calloc(1, sizeof(OnlineNgramState)); + if (!st) { + return NULL; + } + st->token_ctx_len = token_ctx_len; + if (token_ctx_len > 0) { + st->token_ring = (uint16_t *)xcalloc((size_t)token_ctx_len, sizeof(uint16_t)); + if (!st->token_ring) { + free(st); + return NULL; + } + } + if (alloc_tables( + (size_t)token_table_bits, + &st->token_ctx_tbl, + &st->token_ctx_used, + &st->token_ctx_mask, + &st->token_pair_tbl, + &st->token_pair_used, + &st->token_pair_mask + ) != 0) { + free(st->token_ring); + free(st); + return NULL; + } + if (alloc_tables( + (size_t)within_table_bits, + &st->within_ctx_tbl, + &st->within_ctx_used, + &st->within_ctx_mask, + &st->within_pair_tbl, + &st->within_pair_used, + &st->within_pair_mask + ) != 0) { + free(st->token_pair_used); + free(st->token_pair_tbl); + free(st->token_ctx_used); + 
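/* Sizing sketch (illustrative, not from the source): table_bits selects a
 * power-of-two open-addressing table probed linearly, so token_table_bits = 20
 * yields 1 << 20 ctx buckets plus as many pair buckets. find_ctx_slot() and
 * find_pair_slot() report found = -1 only once every bucket is occupied;
 * process_chunk() below surfaces that as its negative error codes instead of
 * resizing. */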
free(st->token_ctx_tbl); + free(st->token_ring); + free(st); + return NULL; + } + return (void *)st; +} + +void online_ngram_state_destroy(void *ptr) { + OnlineNgramState *st = (OnlineNgramState *)ptr; + if (!st) { + return; + } + free(st->within_pair_used); + free(st->within_pair_tbl); + free(st->within_ctx_used); + free(st->within_ctx_tbl); + free(st->token_pair_used); + free(st->token_pair_tbl); + free(st->token_ctx_used); + free(st->token_ctx_tbl); + free(st->token_ring); + free(st); +} + +void online_ngram_state_seed_prefix_token(void *ptr, uint16_t tok) { + OnlineNgramState *st = (OnlineNgramState *)ptr; + if (!st) { + return; + } + token_push(st, tok); +} + +int online_ngram_state_process_chunk( + void *ptr, + const uint16_t *tokens, + int64_t n_tokens, + const uint8_t *starts_new_word_lut, + const uint8_t *boundary_lut, + uint16_t *token_top_token, + float *token_top_prob, + uint16_t *within_top_token, + float *within_top_prob, + uint8_t *within_valid +) { + OnlineNgramState *st = (OnlineNgramState *)ptr; + if (!st || !tokens || n_tokens < 0) { + return -1; + } + for (int64_t i = 0; i < n_tokens; ++i) { + const uint16_t tok = tokens[i]; + const uint8_t is_boundary = boundary_lut[tok]; + const uint8_t is_new_word = starts_new_word_lut[tok]; + + uint64_t token_ctx_key = 0ULL; + if (st->token_ctx_len == 0 || st->token_prefix_len >= st->token_ctx_len) { + token_ctx_key = token_context_hash(st); + int found = 0; + size_t idx = find_ctx_slot( + st->token_ctx_tbl, + st->token_ctx_used, + st->token_ctx_mask, + token_ctx_key, + &found + ); + if (found > 0) { + token_top_token[i] = st->token_ctx_tbl[idx].top_tok; + token_top_prob[i] = + (float)st->token_ctx_tbl[idx].top_count / (float)st->token_ctx_tbl[idx].total; + } else { + token_top_token[i] = 0U; + token_top_prob[i] = 0.0f; + } + } else { + token_top_token[i] = 0U; + token_top_prob[i] = 0.0f; + } + + uint64_t within_ctx_key = 0ULL; + if (!is_boundary && !is_new_word && st->within_len > 0U) { + within_ctx_key = st->within_hash ^ ((uint64_t)st->within_len * LEN_MIX); + int found = 0; + size_t idx = find_ctx_slot( + st->within_ctx_tbl, + st->within_ctx_used, + st->within_ctx_mask, + within_ctx_key, + &found + ); + within_valid[i] = 1U; + if (found > 0) { + within_top_token[i] = st->within_ctx_tbl[idx].top_tok; + within_top_prob[i] = + (float)st->within_ctx_tbl[idx].top_count / (float)st->within_ctx_tbl[idx].total; + } else { + within_top_token[i] = 0U; + within_top_prob[i] = 0.0f; + } + } else { + within_valid[i] = 0U; + within_top_token[i] = 0U; + within_top_prob[i] = 0.0f; + } + + if (st->token_ctx_len == 0 || st->token_prefix_len >= st->token_ctx_len) { + const uint64_t pair_key = token_pair_key(token_ctx_key, tok, st->token_ctx_len); + const uint32_t pair_count = pair_increment( + st->token_pair_tbl, + st->token_pair_used, + st->token_pair_mask, + pair_key + ); + if (pair_count == 0U) { + return -2; + } + if (ctx_increment( + st->token_ctx_tbl, + st->token_ctx_used, + st->token_ctx_mask, + token_ctx_key, + tok, + pair_count + ) != 0) { + return -3; + } + } + token_push(st, tok); + + if (is_boundary) { + st->within_hash = 0ULL; + st->within_len = 0U; + continue; + } + if (is_new_word || st->within_len == 0U) { + st->within_hash = extend_prefix_hash(0ULL, tok, 0U); + st->within_len = 1U; + continue; + } + const uint32_t within_pair_count = pair_increment( + st->within_pair_tbl, + st->within_pair_used, + st->within_pair_mask, + within_pair_key(within_ctx_key, tok) + ); + if (within_pair_count == 0U) { + return -4; + } + if (ctx_increment( 
+ st->within_ctx_tbl, + st->within_ctx_used, + st->within_ctx_mask, + within_ctx_key, + tok, + within_pair_count + ) != 0) { + return -5; + } + st->within_hash = extend_prefix_hash(st->within_hash, tok, st->within_len); + st->within_len += 1U; + } + return 0; +} diff --git a/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/online_ngram_tilt.py b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/online_ngram_tilt.py new file mode 100644 index 0000000000..98c5571c2f --- /dev/null +++ b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/online_ngram_tilt.py @@ -0,0 +1,404 @@ +""" +Vendored online n-gram tilt helpers from PR #1145 (AnirudhRahul, valerio-endorsed). + +Provides causal, normalized, prefix-only n-gram experts that propose at most one +hinted token per scored position. Caller obtains q_t = p(h_t | x) from the model +(post-TTT-adapt logits) and applies multiplicative-boost-with-renorm: + + p'(a) = exp(beta * 1[a == h_t]) * p(a) / Z_t + Z_t = 1 - q_t + exp(beta) * q_t = 1 + q_t * (exp(beta) - 1) + -log p'(y_realized) = -log p(y) - beta * 1[y == h_t] + log Z_t + = ptl - beta * is_hit + log1p(q_t * (exp(beta) - 1)) + +Compliance: +- C1 causal: hint h_t computed from strict prefix (tokens 0..t-1 only) +- C2 normalized over Sigma: closed-form Z_t over full vocab softmax +- C3 score-before-update: hints precomputed in single L->R pass; loss uses prefix-only +- C4 single pass: process_chunk advances state monotonically + +Compatible with both #1934/#1855 base architectures via Hyperparameter env-var gates. +""" + +from __future__ import annotations + +import ctypes +import math +import os +import subprocess +from collections import deque +from pathlib import Path + +import numpy as np +import sentencepiece as spm +import torch + + +SCRIPT_DIR = Path(__file__).resolve().parent +ONLINE_NGRAM_SRC = SCRIPT_DIR / "online_ngram_state.c" +ONLINE_NGRAM_LIB = SCRIPT_DIR / "libonline_ngram_state.so" + +WHITESPACE_BYTE_IDS = {9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 36} +EDGE_PUNCT = ".,:;!?()[]{}<>\"'`" + + +def normalize_word(text: str, mode: str) -> str: + text = text.strip() + if mode == "lower": + return text.lower() + if mode == "identity": + return text + if mode == "strip_punct_lower": + return text.strip(EDGE_PUNCT).lower() + raise ValueError(f"Unknown word normalization mode: {mode}") + + +def suggest_table_bits(expected_entries: int, load_factor: float) -> int: + if expected_entries <= 0: + return 16 + target = max(int(expected_entries / max(load_factor, 1e-6)), 1) + bits = max(int(math.ceil(math.log2(target))), 12) + return min(bits, 28) + + +def ensure_online_ngram_lib(log0=print) -> ctypes.CDLL: + needs_build = (not ONLINE_NGRAM_LIB.exists()) or ( + ONLINE_NGRAM_SRC.stat().st_mtime_ns > ONLINE_NGRAM_LIB.stat().st_mtime_ns + ) + if needs_build: + log0(f"ngram_tilt:building_native_helper src={ONLINE_NGRAM_SRC.name}") + subprocess.run( + [ + "gcc", "-O3", "-march=native", "-shared", "-fPIC", + "-o", str(ONLINE_NGRAM_LIB), + str(ONLINE_NGRAM_SRC), + ], + check=True, + ) + lib = ctypes.CDLL(str(ONLINE_NGRAM_LIB)) + lib.online_ngram_state_create.restype = ctypes.c_void_p + lib.online_ngram_state_create.argtypes = [ctypes.c_int, ctypes.c_int, ctypes.c_int] + lib.online_ngram_state_destroy.restype = None + lib.online_ngram_state_destroy.argtypes = [ctypes.c_void_p] + lib.online_ngram_state_seed_prefix_token.restype = None + lib.online_ngram_state_seed_prefix_token.argtypes = [ctypes.c_void_p, ctypes.c_uint16] + 
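# Usage sketch (classes/functions from this file; the 15-token context mirrors
# TOKEN_ORDER=16 from the README, and lut_word/lut_boundary/table bits are
# illustrative):
#
#     lib = ensure_online_ngram_lib()
#     st = OnlineNgramState(lib=lib, token_ctx_len=15, token_table_bits=20,
#                           within_table_bits=19, starts_new_word_lut=lut_word,
#                           boundary_lut=lut_boundary,
#                           seed_prefix_token=int(tokens[0]))
#     tok_id, tok_p, win_id, win_p, win_ok = st.process_chunk(tokens)
#
# Each hint at index i is read out before tokens[i] is counted, so hints are
# strictly prefix-causal.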
lib.online_ngram_state_process_chunk.restype = ctypes.c_int + lib.online_ngram_state_process_chunk.argtypes = [ + ctypes.c_void_p, + ctypes.POINTER(ctypes.c_uint16), + ctypes.c_int64, + ctypes.POINTER(ctypes.c_uint8), + ctypes.POINTER(ctypes.c_uint8), + ctypes.POINTER(ctypes.c_uint16), + ctypes.POINTER(ctypes.c_float), + ctypes.POINTER(ctypes.c_uint16), + ctypes.POINTER(ctypes.c_float), + ctypes.POINTER(ctypes.c_uint8), + ] + return lib + + +class OnlineNgramState: + def __init__( + self, *, lib, token_ctx_len, token_table_bits, within_table_bits, + starts_new_word_lut, boundary_lut, seed_prefix_token, + ): + self.lib = lib + self.state = lib.online_ngram_state_create(token_ctx_len, token_table_bits, within_table_bits) + if not self.state: + raise RuntimeError( + f"Native ngram state alloc failed token_table_bits={token_table_bits} within_table_bits={within_table_bits}" + ) + self.starts_new_word_lut = np.ascontiguousarray(starts_new_word_lut.astype(np.uint8, copy=False)) + self.boundary_lut = np.ascontiguousarray(boundary_lut.astype(np.uint8, copy=False)) + self.lib.online_ngram_state_seed_prefix_token(self.state, ctypes.c_uint16(int(seed_prefix_token))) + + def close(self): + if self.state: + self.lib.online_ngram_state_destroy(self.state) + self.state = None + + def __del__(self): + self.close() + + def process_chunk(self, chunk_tokens): + chunk_tokens = np.ascontiguousarray(chunk_tokens.astype(np.uint16, copy=False)) + n = int(chunk_tokens.size) + token_top_token = np.zeros(n, dtype=np.uint16) + token_top_prob = np.zeros(n, dtype=np.float32) + within_top_token = np.zeros(n, dtype=np.uint16) + within_top_prob = np.zeros(n, dtype=np.float32) + within_valid = np.zeros(n, dtype=np.uint8) + rc = self.lib.online_ngram_state_process_chunk( + self.state, + chunk_tokens.ctypes.data_as(ctypes.POINTER(ctypes.c_uint16)), + ctypes.c_int64(n), + self.starts_new_word_lut.ctypes.data_as(ctypes.POINTER(ctypes.c_uint8)), + self.boundary_lut.ctypes.data_as(ctypes.POINTER(ctypes.c_uint8)), + token_top_token.ctypes.data_as(ctypes.POINTER(ctypes.c_uint16)), + token_top_prob.ctypes.data_as(ctypes.POINTER(ctypes.c_float)), + within_top_token.ctypes.data_as(ctypes.POINTER(ctypes.c_uint16)), + within_top_prob.ctypes.data_as(ctypes.POINTER(ctypes.c_float)), + within_valid.ctypes.data_as(ctypes.POINTER(ctypes.c_uint8)), + ) + if rc != 0: + raise RuntimeError(f"Native ngram process_chunk failed rc={rc}") + return token_top_token, token_top_prob, within_top_token, within_top_prob, within_valid.astype(bool) + + +class WordStartState: + def __init__(self, *, sp, order, normalize_mode): + self.sp = sp + self.ctx_w = max(order - 1, 0) + self.normalize_mode = normalize_mode + self.prev_word_ids: deque = deque(maxlen=self.ctx_w) + self.current_word_tokens: list = [] + self.word_to_id: dict = {} + self.next_word_id = 1 + self.ctx_total: dict = {} + self.pair_count: dict = {} + self.ctx_best_token: dict = {} + self.ctx_best_count: dict = {} + + def _flush_current_word(self): + if not self.current_word_tokens: + return + text = normalize_word(self.sp.decode(self.current_word_tokens), self.normalize_mode) + if text: + wid = self.word_to_id.get(text) + if wid is None: + wid = self.next_word_id + self.word_to_id[text] = wid + self.next_word_id += 1 + if self.ctx_w > 0: + self.prev_word_ids.append(wid) + self.current_word_tokens = [] + + def process_chunk(self, chunk_tokens, *, starts_new_word_lut, boundary_lut): + chunk_tokens = np.ascontiguousarray(chunk_tokens.astype(np.uint16, copy=False)) + top_token = 
np.zeros(chunk_tokens.size, dtype=np.uint16)
+        top_prob = np.zeros(chunk_tokens.size, dtype=np.float32)
+        for i, tok_u16 in enumerate(chunk_tokens):
+            tok = int(tok_u16)
+            is_boundary = bool(boundary_lut[tok])
+            is_word_start = bool(starts_new_word_lut[tok]) or not self.current_word_tokens
+            if is_boundary:
+                self._flush_current_word()
+                continue
+            if bool(starts_new_word_lut[tok]):
+                self._flush_current_word()
+            ctx_key = None
+            if is_word_start and len(self.prev_word_ids) >= self.ctx_w:
+                ctx_key = tuple(self.prev_word_ids) if self.ctx_w > 0 else ()
+                total = self.ctx_total.get(ctx_key, 0)
+                if total > 0:
+                    top_token[i] = np.uint16(self.ctx_best_token[ctx_key])
+                    top_prob[i] = np.float32(self.ctx_best_count[ctx_key] / total)
+            if is_word_start:
+                if ctx_key is not None:
+                    pair_key = (ctx_key, tok)
+                    pair = self.pair_count.get(pair_key, 0) + 1
+                    self.pair_count[pair_key] = pair
+                    total = self.ctx_total.get(ctx_key, 0) + 1
+                    self.ctx_total[ctx_key] = total
+                    best_count = self.ctx_best_count.get(ctx_key, 0)
+                    if pair > best_count:
+                        self.ctx_best_count[ctx_key] = pair
+                        self.ctx_best_token[ctx_key] = tok
+                self.current_word_tokens = [tok]
+            else:
+                self.current_word_tokens.append(tok)
+        return top_token, top_prob
+
+
+def build_piece_luts(*, tokenizer_path, vocab_size):
+    sp = spm.SentencePieceProcessor(model_file=tokenizer_path)
+    pieces = [sp.id_to_piece(i) for i in range(sp.vocab_size())]
+    starts_new_word_lut = np.zeros(vocab_size, dtype=np.uint8)
+    # Guard against an SP vocab larger than the model vocab (mirrors the
+    # min() bound used for the byte-boundary loop below).
+    for i, piece in enumerate(pieces[:vocab_size]):
+        starts_new_word_lut[i] = 1 if piece.startswith("▁") else 0
+    boundary_lut = np.zeros(vocab_size, dtype=np.uint8)
+    bos_id = sp.bos_id()
+    if bos_id >= 0 and bos_id < vocab_size:
+        boundary_lut[bos_id] = 1
+    for tok in range(min(sp.vocab_size(), vocab_size)):
+        if sp.is_byte(tok) and tok in WHITESPACE_BYTE_IDS:
+            boundary_lut[tok] = 1
+    return sp, starts_new_word_lut, boundary_lut
+
+
+def build_hints_for_targets(
+    *, target_token_ids_np, tokenizer_path, vocab_size, log0=print,
+    token_order=16, token_threshold=0.800, token_boost=2.625,
+    within_tau=0.450, within_boost=0.0,
+    word_order=4, word_normalize="strip_punct_lower",
+    word_tau=0.650, word_boost=0.0,
+    agree_add_boost=0.0,
+):
+    """Single L->R pass. Returns dict with hint_ids, gate_mask, boost.
+
+    target_token_ids_np: np.uint16 array of realized targets (length = total_targets).
+    Output arrays are aligned to target_token_ids_np indexing.
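+
+    Illustrative return contract (hypothetical 5-target call): hint_ids is
+    int64[5], gate_mask is bool[5], boost is float32[5]; positions with
+    gate_mask == False keep hint_ids == 0 and boost == 0.0 and are scored
+    untilted.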
+
+    For each scored position t we pick at most one hint h_t:
+      - prefer the expert with the highest expected gain = p_top * boost - log1p(p_top * (exp(boost)-1))
+      - if two or more gated experts agree on the same h_t, agree_add_boost is added to the boost
+      - gate (don't tilt) when no expert clears its threshold
+
+    The realized loss formula used by the caller:
+      ptl' = ptl - beta * 1[y == h_t] + log1p(q_t * (exp(beta) - 1)) when gate_mask == True
+      ptl' = ptl when gate_mask == False
+    """
+    sp, starts_new_word_lut, boundary_lut = build_piece_luts(
+        tokenizer_path=tokenizer_path, vocab_size=vocab_size
+    )
+    total = int(target_token_ids_np.size)
+    if total == 0:
+        return {
+            "hint_ids": np.zeros(0, dtype=np.int64),
+            "gate_mask": np.zeros(0, dtype=bool),
+            "boost": np.zeros(0, dtype=np.float32),
+            "sp": sp,
+            "starts_new_word_lut": starts_new_word_lut,
+            "boundary_lut": boundary_lut,
+        }
+
+    token_table_bits = suggest_table_bits(total, load_factor=0.55)
+    within_table_bits = suggest_table_bits(max(total // 2, 1), load_factor=0.60)
+    online_lib = ensure_online_ngram_lib(log0)
+    within_enabled = within_boost > 0.0
+    word_enabled = word_boost > 0.0
+    agree_enabled = agree_add_boost > 0.0
+    within_starts_new_word_lut = starts_new_word_lut if within_enabled else np.ones_like(starts_new_word_lut)
+    within_boundary_lut = boundary_lut if within_enabled else np.ones_like(boundary_lut)
+
+    ngram_state = OnlineNgramState(
+        lib=online_lib,
+        token_ctx_len=max(token_order - 1, 0),
+        token_table_bits=token_table_bits,
+        within_table_bits=within_table_bits,
+        starts_new_word_lut=within_starts_new_word_lut,
+        boundary_lut=within_boundary_lut,
+        seed_prefix_token=int(target_token_ids_np[0]),
+    )
+
+    token_top_tok, token_top_prob, within_top_tok, within_top_prob, within_valid = (
+        ngram_state.process_chunk(target_token_ids_np)
+    )
+    if not within_enabled:
+        within_top_tok.fill(0)
+        within_top_prob.fill(0.0)
+        within_valid.fill(False)
+    if word_enabled:
+        word_state = WordStartState(sp=sp, order=word_order, normalize_mode=word_normalize)
+        word_top_tok, word_top_prob = word_state.process_chunk(
+            target_token_ids_np,
+            starts_new_word_lut=starts_new_word_lut,
+            boundary_lut=boundary_lut,
+        )
+    else:
+        word_top_tok = np.zeros(total, dtype=np.uint16)
+        word_top_prob = np.zeros(total, dtype=np.float32)
+
+    def _expected_gain(p_top, boost):
+        # Expected NLL reduction E[ptl - ptl'] with P(y == h_t) = p_top, using
+        # the expert's own p_top as a stand-in for the model's q_t:
+        #   = p_top * boost - log1p(p_top * (exp(boost) - 1))
+        # Maximizing this over experts => pick the most informative hint.
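+        # Illustrative magnitudes (hypothetical numbers, not from a run): with
+        # boost = 2.625 and a model that puts q_t = 0.01 on the hint,
+        # log1p(0.01 * (exp(2.625) - 1)) ~= 0.12, so a hit saves ~2.5 nats
+        # while a miss costs only ~0.12 nats at that position.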
+ log_norm = np.log1p(p_top * (math.exp(boost) - 1.0)) + return p_top * boost - log_norm + + token_gate = token_top_prob >= np.float32(token_threshold) + within_gate = within_valid & (within_top_prob >= np.float32(within_tau)) if within_enabled else np.zeros(total, dtype=bool) + word_gate = word_top_prob >= np.float32(word_tau) if word_enabled else np.zeros(total, dtype=bool) + + token_gain = np.where(token_gate, _expected_gain(token_top_prob.astype(np.float64), token_boost), -np.inf) + within_gain = np.where(within_gate, _expected_gain(within_top_prob.astype(np.float64), within_boost), -np.inf) + word_gain = np.where(word_gate, _expected_gain(word_top_prob.astype(np.float64), word_boost), -np.inf) + + stack = np.stack([token_gain, within_gain, word_gain], axis=1) + best_idx = np.argmax(stack, axis=1) + best_gain = np.max(stack, axis=1) + any_gate = best_gain > -np.inf + + hint_ids = np.zeros(total, dtype=np.int64) + boost = np.zeros(total, dtype=np.float32) + base_boost_per_expert = np.array([token_boost, within_boost, word_boost], dtype=np.float32) + hint_per_expert = np.stack([ + token_top_tok.astype(np.int64), + within_top_tok.astype(np.int64), + word_top_tok.astype(np.int64), + ], axis=1) + + rows = np.arange(total) + hint_ids[any_gate] = hint_per_expert[rows[any_gate], best_idx[any_gate]] + boost[any_gate] = base_boost_per_expert[best_idx[any_gate]] + + # Agreement bonus is a separate channel; keep it fully off when boost is 0. + if agree_enabled: + gate_mask_each = np.stack([token_gate, within_gate, word_gate], axis=1) + expert_hints = hint_per_expert.copy() + expert_hints[~gate_mask_each] = -1 + agreements = (expert_hints == hint_ids[:, None]).sum(axis=1) + agreement_extra = np.where(agreements >= 2, np.float32(agree_add_boost), np.float32(0.0)) + boost = (boost + agreement_extra).astype(np.float32) + agree2plus = int((agreements >= 2).sum()) + else: + agree2plus = 0 + + log0( + f"ngram_tilt:hints total={total} gated={int(any_gate.sum())} " + f"token_gate={int(token_gate.sum())} within_gate={int(within_gate.sum())} word_gate={int(word_gate.sum())} " + f"agree2plus={agree2plus}" + ) + + return { + "hint_ids": hint_ids, + "gate_mask": any_gate, + "boost": boost, + "sp": sp, + "starts_new_word_lut": starts_new_word_lut, + "boundary_lut": boundary_lut, + } + + +def apply_tilt_to_ptl_torch( + ptl: torch.Tensor, + log_q_hint: torch.Tensor, + target_ids: torch.Tensor, + hint_ids: torch.Tensor, + gate_mask: torch.Tensor, + boost: torch.Tensor, +): + """Closed-form tilt applied to per-token NLL. + + All tensors same shape [..., L]. + ptl_tilted = ptl - beta * 1[y == h] + log1p(q * (exp(beta) - 1)) if gate else ptl + """ + boost64 = boost.to(torch.float64) + q = log_q_hint.to(torch.float64).clamp_(max=0.0).exp() + is_hit = (target_ids == hint_ids).to(torch.float64) + log_Z = torch.log1p(q * (torch.expm1(boost64))) + ptl_tilted = ptl.to(torch.float64) - boost64 * is_hit + log_Z + return torch.where(gate_mask, ptl_tilted, ptl.to(torch.float64)).to(ptl.dtype) + + +def apply_tilt_to_ptl_torch_fast( + ptl: torch.Tensor, + log_q_hint: torch.Tensor, + target_ids: torch.Tensor, + hint_ids: torch.Tensor, + gate_mask: torch.Tensor, + boost: torch.Tensor, +): + """fp32 variant of apply_tilt — cast removed where safe. + + BPB downstream accumulator is fp64, so per-token tilt computation in + fp32 has no impact on final precision. Saves ~10-15s per eval pass on + H100 (avoids fp64 ALU + double memory traffic). 
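+
+    Minimal usage sketch (hypothetical tensor names; ptl, target_ids,
+    hint_ids, gate_mask, boost are [B, L] and log_probs is [B, L, V]):
+
+        log_q_hint = log_probs.gather(-1, hint_ids.unsqueeze(-1)).squeeze(-1)
+        ptl = apply_tilt_to_ptl_torch_fast(
+            ptl, log_q_hint, target_ids, hint_ids, gate_mask, boost)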
+ """ + boost32 = boost.to(torch.float32) + q = log_q_hint.to(torch.float32).clamp_(max=0.0).exp() + is_hit = (target_ids == hint_ids).to(torch.float32) + log_Z = torch.log1p(q * (torch.expm1(boost32))) + ptl_f32 = ptl.to(torch.float32) + ptl_tilted = ptl_f32 - boost32 * is_hit + log_Z + return torch.where(gate_mask, ptl_tilted, ptl_f32).to(ptl.dtype) diff --git a/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/prepare_caseops_data.py b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/prepare_caseops_data.py new file mode 100644 index 0000000000..5c3f13e69c --- /dev/null +++ b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/prepare_caseops_data.py @@ -0,0 +1,177 @@ +"""Prepare CaseOps-tokenized FineWeb shards + per-token byte sidecar. + +CaseOps (``lossless_caps_caseops_v1``) is a bijective, character-level text +transform that introduces four operator tokens in place of explicit +capitalization: TITLE, ALLCAPS, CAPNEXT, ESC. The transform is fully +reversible — no information is lost relative to the untransformed UTF-8 +text, so BPB stays computable on TRUE byte counts. + +Forward pipeline: + 1. Read the canonical FineWeb-10B doc stream (``docs_selected.jsonl`` + produced by ``data/download_hf_docs_and_tokenize.py`` in the root repo). + 2. Apply ``encode_lossless_caps_v2`` (the caseops_v1 alias) to each doc. + 3. Tokenize with the shipped SP model + ``tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model`` + (reserves TITLE/ALLCAPS/CAPNEXT/ESC + sentinel as user_defined_symbols). + 4. Write uint16 train/val shards (``fineweb_{train,val}_XXXXXX.bin``). + 5. For the VAL stream only, emit per-token byte sidecar shards + (``fineweb_val_bytes_XXXXXX.bin``, uint16 parallel arrays) that record + each token's ORIGINAL pre-transform UTF-8 byte count. BPB is computed + from these canonical bytes so the score is on the untransformed text + (not the transformed representation). + +Output layout — matches what ``train_gpt.py`` expects under +``DATA_DIR=./data`` with ``CASEOPS_ENABLED=1``: + + data/datasets/fineweb10B_sp8192_caseops/datasets/ + tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model + datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/ + fineweb_train_000000.bin + fineweb_train_000001.bin + ... + fineweb_val_000000.bin + fineweb_val_bytes_000000.bin + +Usage: + + python3 prepare_caseops_data.py \\ + --docs ./fineweb10B_raw/docs_selected.jsonl \\ + --out ./data/datasets/fineweb10B_sp8192_caseops/datasets \\ + --sp ./tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model + +Requirements: sentencepiece, numpy. CPU-only. Runs once; reused across seeds. +""" +from __future__ import annotations + +import argparse +import json +import pathlib +import struct +import sys + +import numpy as np +import sentencepiece as spm + +# Local import — lossless_caps.py ships next to this script. 
+sys.path.insert(0, str(pathlib.Path(__file__).resolve().parent)) +from lossless_caps import ( # noqa: E402 + LOSSLESS_CAPS_CASEOPS_V1, + encode_lossless_caps_v2, + surface_piece_original_byte_counts, +) + + +SHARD_MAGIC = 20240520 +SHARD_VERSION = 1 +SHARD_TOKENS = 10_000_000 # tokens per shard — matches the main pipeline +BOS_ID = 1 # SP model's control token; train_gpt.py:_find_docs requires BOS per doc + + +def _write_shard(out_path: pathlib.Path, arr: np.ndarray) -> None: + """Write a uint16 shard in the standard header-prefixed format.""" + assert arr.dtype == np.uint16 + header = np.zeros(256, dtype=np.int32) + header[0] = SHARD_MAGIC + header[1] = SHARD_VERSION + header[2] = int(arr.size) + with out_path.open("wb") as fh: + fh.write(header.tobytes()) + fh.write(arr.tobytes()) + + +def _iter_docs(docs_path: pathlib.Path): + """Yield doc strings from a jsonl file (one json object per line).""" + with docs_path.open("r", encoding="utf-8") as fh: + for line in fh: + line = line.strip() + if not line: + continue + obj = json.loads(line) + # Support both {"text": ...} and raw strings. + yield obj["text"] if isinstance(obj, dict) else obj + + +def _token_original_byte_counts( + sp: spm.SentencePieceProcessor, + original_text: str, + transformed_text: str, +) -> np.ndarray: + """Per-token canonical (pre-transform) UTF-8 byte counts. + + Delegates to ``surface_piece_original_byte_counts`` in ``lossless_caps.py`` + — the canonical exporter used by the PR #1729 / HF-hosted CaseOps dataset. + Operator pieces (U+E001..U+E004) contribute 0 original bytes; letter pieces + contribute their pre-transform UTF-8 byte count. + """ + proto = sp.encode_as_immutable_proto(transformed_text) + byte_counts = surface_piece_original_byte_counts( + (piece.surface for piece in proto.pieces), + text_transform_name=LOSSLESS_CAPS_CASEOPS_V1, + ) + return np.asarray(list(byte_counts), dtype=np.uint16) + + +def main() -> None: + ap = argparse.ArgumentParser(description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter) + ap.add_argument("--docs", required=True, type=pathlib.Path, help="Path to docs_selected.jsonl") + ap.add_argument("--out", required=True, type=pathlib.Path, help="Output datasets dir") + ap.add_argument("--sp", required=True, type=pathlib.Path, help="Path to CaseOps SP model") + ap.add_argument("--val-docs", type=int, default=10_000, help="Validation docs count") + args = ap.parse_args() + + sp = spm.SentencePieceProcessor(model_file=str(args.sp)) + print(f"loaded sp: vocab={sp.vocab_size()}", flush=True) + + train_out = args.out / "datasets" / "fineweb10B_sp8192_lossless_caps_caseops_v1_reserved" + train_out.mkdir(parents=True, exist_ok=True) + + val_buf_tokens: list[int] = [] + val_buf_bytes: list[int] = [] + train_buf: list[int] = [] + val_written = 0 + train_written = 0 + n_docs = 0 + + for text in _iter_docs(args.docs): + transformed = encode_lossless_caps_v2(text) + token_ids = [BOS_ID] + sp.encode(transformed, out_type=int) + if n_docs < args.val_docs: + # Validation doc — also compute byte sidecar + byte_counts = _token_original_byte_counts(sp, text, transformed) + val_buf_tokens.extend(token_ids) + val_buf_bytes.append(0) # BOS contributes 0 original bytes + val_buf_bytes.extend(int(b) for b in byte_counts) + if len(val_buf_tokens) >= SHARD_TOKENS: + _write_shard(train_out / f"fineweb_val_{val_written:06d}.bin", + np.array(val_buf_tokens[:SHARD_TOKENS], dtype=np.uint16)) + _write_shard(train_out / f"fineweb_val_bytes_{val_written:06d}.bin", + 
np.array(val_buf_bytes[:SHARD_TOKENS], dtype=np.uint16)) + val_buf_tokens = val_buf_tokens[SHARD_TOKENS:] + val_buf_bytes = val_buf_bytes[SHARD_TOKENS:] + val_written += 1 + else: + train_buf.extend(token_ids) + if len(train_buf) >= SHARD_TOKENS: + _write_shard(train_out / f"fineweb_train_{train_written:06d}.bin", + np.array(train_buf[:SHARD_TOKENS], dtype=np.uint16)) + train_buf = train_buf[SHARD_TOKENS:] + train_written += 1 + n_docs += 1 + if n_docs % 10_000 == 0: + print(f" processed {n_docs} docs train_shards={train_written} val_shards={val_written}", flush=True) + + # Flush tail buffers into final (possibly short) shards. + if val_buf_tokens: + _write_shard(train_out / f"fineweb_val_{val_written:06d}.bin", + np.array(val_buf_tokens, dtype=np.uint16)) + _write_shard(train_out / f"fineweb_val_bytes_{val_written:06d}.bin", + np.array(val_buf_bytes, dtype=np.uint16)) + if train_buf: + _write_shard(train_out / f"fineweb_train_{train_written:06d}.bin", + np.array(train_buf, dtype=np.uint16)) + + print(f"done. docs={n_docs} train_shards={train_written + (1 if train_buf else 0)} val_shards={val_written + (1 if val_buf_tokens else 0)}") + + +if __name__ == "__main__": + main() diff --git a/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/requirements.txt b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/requirements.txt new file mode 100644 index 0000000000..b6c55e13aa --- /dev/null +++ b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/requirements.txt @@ -0,0 +1,13 @@ +# Python deps. Install with: pip install -r requirements.txt +torch==2.9.1+cu128 +sentencepiece +brotli +huggingface_hub +numpy +python-minifier + +# FlashAttention 3 must be installed separately (not on PyPI): +# pip install --no-deps flash_attn_3 --find-links https://windreamer.github.io/flash-attention3-wheels/cu128_torch291/ + +# System dep (apt): lrzip (used by per-group compressor) +# apt-get install -y lrzip diff --git a/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/submission.json b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/submission.json new file mode 100644 index 0000000000..0b666ab5ed --- /dev/null +++ b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/submission.json @@ -0,0 +1,71 @@ +{ + "author": "Simon Bissonnette", + "github_id": "simon-marcus", + "name": "PR #2014 + LeakyReLU 0.3 + token-only in-timer n-gram TTT", + "blurb": "Corrected 3-seed token-only result on top of the PR #2014 strict-compliance CaseOps stack. Adds LeakyReLU-square slope 0.3 and the PR #2018 in-timer online causal token-only n-gram tilt inside the measured TTT eval timer. 
3-seed mean post-TTT val_bpb: 1.05701907, max eval_time: 553.5s, max artifact: 15,989,637 bytes.", + "date": "2026-05-02", + "track": "10min_16mb", + "status": "corrected_3_seed", + "val_loss": 2.31315552, + "val_bpb": 1.05701907, + "val_loss_std": 0.00224551, + "val_bpb_std": 0.00102611, + "seeds": [ + 42, + 0, + 314 + ], + "seed_results": { + "42": { + "train_steps": 4901, + "train_time_s": 596.069, + "prequant_val_loss": 2.31773315, + "prequant_val_bpb": 1.05911087, + "quantized_val_loss": 2.33545972, + "quantized_val_bpb": 1.0672112, + "ttt_val_loss": 2.31072442, + "ttt_val_bpb": 1.05590816, + "eval_time_s": 506.254, + "artifact_bytes": 15989637 + }, + "0": { + "train_steps": 4861, + "train_time_s": 596.182, + "prequant_val_loss": 2.32243873, + "prequant_val_bpb": 1.06126113, + "quantized_val_loss": 2.34211201, + "quantized_val_bpb": 1.07025102, + "ttt_val_loss": 2.31614048, + "ttt_val_bpb": 1.05838308, + "eval_time_s": 553.458, + "artifact_bytes": 15985432 + }, + "314": { + "train_steps": 4855, + "train_time_s": 596.099, + "prequant_val_loss": 2.31997987, + "prequant_val_bpb": 1.06013753, + "quantized_val_loss": 2.33736937, + "quantized_val_bpb": 1.06808383, + "ttt_val_loss": 2.31260166, + "ttt_val_bpb": 1.05676598, + "eval_time_s": 473.217, + "artifact_bytes": 15983433 + } + }, + "comparison_baseline_bpb": 1.06107587, + "delta_vs_merged_leader_bpb": -0.0040568, + "artifact_bytes_mean": 15986167, + "artifact_bytes_max": 15989637, + "bytes_total": 15989637, + "train_steps_mean": 4872.333333333333, + "train_time_s_mean": 596.1166666666667, + "eval_time_s_mean": 510.97633333333334, + "eval_time_s_max": 553.458, + "hardware": "8xH100 80GB SXM", + "pytorch_version": "2.9.1+cu128", + "cuda_version": "12.8", + "flash_attn_version": "FA3 (cu128_torch291 wheel)", + "technique_summary": "PR #2014 progressive-context CaseOps stack + LeakyReLU-square slope 0.3 + token-only in-timer online causal n-gram tilt during TTT eval. PHASED_TTT_PREFIX_DOCS=2500, TTT_CHUNK_SIZE=64, NGRAM_HINT_PRECOMPUTE_OUTSIDE=0, WITHIN_BOOST=0.0, WORD_BOOST=0.0, AGREE_ADD_BOOST=0.0.", + "correction_note": "This supersedes the initial #2140 submitted state, which accidentally restored within-word, word-start, and agreement n-gram channels. Corrected eval diagnostics report within_gate=0, word_gate=0, and agree2plus=0." 
+} diff --git a/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model new file mode 100644 index 0000000000..fffc8bb306 Binary files /dev/null and b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model differ diff --git a/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/train_eval_seed0.log b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/train_eval_seed0.log new file mode 100644 index 0000000000..a2ccb3c4d3 --- /dev/null +++ b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/train_eval_seed0.log @@ -0,0 +1,842 @@ +W0501 23:43:29.280000 813471 torch/distributed/run.py:803] +W0501 23:43:29.280000 813471 torch/distributed/run.py:803] ***************************************** +W0501 23:43:29.280000 813471 torch/distributed/run.py:803] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. +W0501 23:43:29.280000 813471 torch/distributed/run.py:803] ***************************************** +Hyperparameters: + adam_eps: 1e-08 + adam_wd: 0.02 + agree_add_boost: 0.5 + artifact_dir: /workspace/parameter-golf-pr2014-gated-clean/records/pr2014_evo_lrelu03_ngram_p2500_c64_record_3seed/seed0 + attn_clip_sigmas: 13.0 + attn_out_gate_enabled: False + attn_out_gate_src: proj + awq_lite_bits: 8 + awq_lite_enabled: True + awq_lite_group_size: 64 + awq_lite_group_top_k: 1 + beta1: 0.9 + beta2: 0.99 + caseops_enabled: True + compile_shape_warmup: True + compile_shape_warmup_iters: 1 + compile_shape_warmup_loop_modes: auto + compressor: pergroup + data_dir: ./data + datasets_dir: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved + distributed: True + ema_decay: 0.9965 + embed_bits: 7 + embed_clip_sigmas: 14.0 + embed_lr: 0.6 + embed_wd: 0.085 + enable_looping_at: 0.35 + eval_include_tail: True + eval_seq_len: 3072 + eval_stride: 1536 + fused_ce_enabled: True + gate_window: 12 + gated_attn_enabled: False + gated_attn_init_std: 0.01 + gated_attn_quant_gate: True + global_ttt_batch_seqs: 32 + global_ttt_chunk_tokens: 32768 + global_ttt_epochs: 1 + global_ttt_grad_clip: 1.0 + global_ttt_lr: 0.001 + global_ttt_momentum: 0.9 + global_ttt_respect_doc_boundaries: True + global_ttt_warmup_chunks: 0 + global_ttt_warmup_start_lr: 0.0 + gptq_calibration_batches: 16 + gptq_reserve_seconds: 4.0 + grad_accum_steps: 1 + grad_clip_norm: 0.3 + is_main_process: True + iterations: 20000 + leaky_relu_sq_slope: 0.3 + ln_scale: True + local_rank: 0 + logfile: /workspace/parameter-golf-pr2014-gated-clean/records/pr2014_evo_lrelu03_ngram_p2500_c64_record_3seed/seed0/lrelu03_ngram_p2500_c64_s0.txt + logit_softcap: 30.0 + loop_end: 5 + loop_start: 3 + lqer_asym_enabled: True + lqer_asym_group: 64 + lqer_enabled: True + lqer_factor_bits: 4 + lqer_gain_select: False + lqer_rank: 4 + lqer_scope: all + lqer_top_k: 3 + matrix_bits: 6 + matrix_clip_sigmas: 12.85 + matrix_lr: 0.026 + max_wallclock_seconds: 600.0 + midrun_cap_log_updates: False + midrun_cap_schedule: + min_lr: 0.1 + mlp_clip_sigmas: 11.5 + mlp_mult: 4.0 + model_dim: 512 + model_path: 
/workspace/parameter-golf-pr2014-gated-clean/records/pr2014_evo_lrelu03_ngram_p2500_c64_record_3seed/seed0/final_model.pt + muon_backend_steps: 5 + muon_momentum: 0.97 + muon_momentum_warmup_start: 0.92 + muon_momentum_warmup_steps: 1500 + muon_row_normalize: True + muon_wd: 0.095 + ngram_hint_precompute_outside: False + ngram_tilt_enabled: True + num_heads: 8 + num_kv_heads: 4 + num_layers: 11 + num_loops: 2 + parallel_final_lane: mean + parallel_start_layer: 8 + phased_ttt_num_phases: 1 + phased_ttt_prefix_docs: 2500 + qk_gain_init: 5.25 + quantized_model_path: /workspace/parameter-golf-pr2014-gated-clean/records/pr2014_evo_lrelu03_ngram_p2500_c64_record_3seed/seed0/final_model.int6.ptz + rank: 0 + rope_base: 10000.0 + rope_dims: 16 + rope_train_seq_len: 3072 + rope_yarn: False + run_id: lrelu03_ngram_p2500_c64_s0 + scalar_lr: 0.02 + seed: 0 + seq_change_warmup_steps: 32 + skip_gates_enabled: True + skylight_norm_beta2: 0.95 + skylight_norm_ema: False + skylight_norm_eps: 1e-07 + skylight_uw_floor: False + skylight_uw_ratio: 0.35 + smear_gate_enabled: True + sparse_attn_gate_enabled: True + sparse_attn_gate_init_std: 0.0 + sparse_attn_gate_scale: 0.5 + tie_embeddings: True + tied_embed_init_std: 0.005 + tied_embed_lr: 0.03 + token_boost: 2.625 + token_order: 16 + token_threshold: 0.8 + tokenizer_path: /tmp/parameter-golf-data-authorhf/tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model + train_batch_tokens: 786432 + train_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_train_*.bin + train_log_every: 500 + train_seq_len: 3072 + train_seq_schedule: 1024@0.100,2048@0.700,3072@1.000 + train_seq_schedule_mode: wallclock + ttt_batch_size: 24 + ttt_beta1: 0.0 + ttt_beta2: 0.99 + ttt_chunk_size: 64 + ttt_enabled: True + ttt_eval_batches: + ttt_eval_seq_len: 3072 + ttt_grad_steps: 1 + ttt_k_lora: True + ttt_local_lr_mult: 0.75 + ttt_lora_lr: 0.0001 + ttt_lora_rank: 80 + ttt_mask: no_qv + ttt_mlp_lora: True + ttt_o_lora: True + ttt_optimizer: adam + ttt_q_lora: False + ttt_short_beta2: 0.99 + ttt_short_chunk_size: 32 + ttt_short_doc_len: 2000 + ttt_short_lora_enabled: False + ttt_short_lora_lr: 0.0001 + ttt_short_lora_rank: 80 + ttt_short_score_first_enabled: True + ttt_short_score_first_steps: 256:16,2000:32 + ttt_short_weight_decay: 0.5 + ttt_train_max_doc_len: 0 + ttt_train_min_doc_len: 0 + ttt_v_lora: False + ttt_warm_start_mean_doc_len: 2000 + ttt_warm_start_mean_enabled: False + ttt_warm_start_mean_momentum: 0.95 + ttt_weight_decay: 0.5 + val_batch_tokens: 524288 + val_bytes_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_val_bytes_*.bin + val_doc_fraction: 1.0 + val_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_val_*.bin + val_loss_every: 0 + vocab_size: 8192 + warmdown_frac: 0.85 + warmdown_iters: 0 + warmup_steps: 20 + within_boost: 0.75 + within_tau: 0.45 + word_boost: 0.75 + word_normalize: strip_punct_lower + word_order: 4 + word_tau: 0.65 + world_size: 8 + xsa_last_n: 11 +train_shards: 80 +val_tokens: 47853343 +model_params:35945673 +train_seq_schedule:1024@0.100,2048@0.700,3072@1.000 +local_microbatch_tokens:98304 +growth_stage:seq_len:1024 progress:0.000 +gptq:reserving 4s, effective=596000ms +warmup_cu_buckets:64,128,192,256 iters_each:3 +compile_shape_warmup:start 1024xplain,2048xplain,2048xloop,3072xloop +compile_shape_warmup:shape seq_len:1024 loop:0 +compile_shape_warmup:shape 
seq_len:2048 loop:0 +compile_shape_warmup:shape seq_len:2048 loop:1 +compile_shape_warmup:shape seq_len:3072 loop:1 +warmup_step: 1/20 +warmup_step: 2/20 +warmup_step: 3/20 +warmup_step: 4/20 +warmup_step: 5/20 +warmup_step: 6/20 +warmup_step: 10/20 +warmup_step: 20/20 +loop_warmup:enabled encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +loop_warmup_step: 1/20 +loop_warmup_step: 2/20 +loop_warmup_step: 3/20 +loop_warmup_step: 4/20 +loop_warmup_step: 5/20 +loop_warmup_step: 6/20 +loop_warmup_step: 10/20 +loop_warmup_step: 20/20 +1/20000 train_loss: 9.0105 train_time: 0.0m tok/s: 18515866 +2/20000 train_loss: 12.9543 train_time: 0.0m tok/s: 7604946 +3/20000 train_loss: 10.2572 train_time: 0.0m tok/s: 7764189 +4/20000 train_loss: 8.7489 train_time: 0.0m tok/s: 7830500 +5/20000 train_loss: 7.9902 train_time: 0.0m tok/s: 7840879 +500/20000 train_loss: 2.6130 train_time: 0.8m tok/s: 8602304 +growth_stage:seq_len:2048 progress:0.100 step:647 +growth_stage_rewarmup:start step:647 steps:32 seq_len:2048 +1000/20000 train_loss: 2.5898 train_time: 1.6m tok/s: 8405849 +1500/20000 train_loss: 2.6201 train_time: 2.4m tok/s: 8317402 +2000/20000 train_loss: 2.6517 train_time: 3.2m tok/s: 8273082 +layer_loop:enabled step:2190 frac:0.350 encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +2500/20000 train_loss: 2.5076 train_time: 4.2m tok/s: 7796547 +3000/20000 train_loss: 2.4590 train_time: 5.4m tok/s: 7313729 +3500/20000 train_loss: 2.4636 train_time: 6.6m tok/s: 7003236 +growth_stage:seq_len:3072 progress:0.700 step:3672 +growth_stage_rewarmup:start step:3672 steps:32 seq_len:3072 +4000/20000 train_loss: 2.3858 train_time: 7.8m tok/s: 6763705 +4500/20000 train_loss: 2.3447 train_time: 9.0m tok/s: 6559519 +4877/20000 val_loss: 2.3463 val_bpb: 1.0721 +stopping_early: wallclock_cap train_time: 596061ms step: 4877/20000 +peak memory allocated: 41707 MiB reserved: 46984 MiB +ema:applying EMA weights +diagnostic pre-quantization post-ema val_loss:2.32189590 val_bpb:1.06101308 eval_time:16426ms +Serialized model: 135418111 bytes +Code size (uncompressed): 207583 bytes +Code size (compressed): 51324 bytes +GPTQ:collecting Hessians from calibration data... +GPTQ:collected 67 Hessians in 4.2s +Quantized weights: + gate_int8_row: blocks.attn.attn_gate_w + gptq (int6): blocks.attn.c_k.weight, blocks.attn.c_q.weight, blocks.attn.c_v.weight, blocks.attn.proj.weight, blocks.mlp.fc.weight, blocks.mlp.proj.weight + gptq (int6)+lqer_asym: blocks.mlp.fc.weight + gptq (int7)+awqgrpint8+lqer_asym: tok_emb.weight + passthrough (float16): blocks.attn.q_gain, blocks.attn_scale, blocks.mlp_scale, blocks.resid_mix, parallel_post_lambdas, parallel_resid_lambdas, skip_gates, skip_weights, smear_gate.weight, smear_lambda, softcap_neg, softcap_pos +Serialize: per-group lrzip compression... +Serialize: per-group compression done in 102.4s +Serialized model quantized+pergroup: 15944964 bytes +Total submission size quantized+pergroup: 15996288 bytes +Deserialize: per-group lrzip decompression... +Deserialize: decompression done in 16.4s +diagnostic quantized val_loss:2.33918024 val_bpb:1.06891132 eval_time:17049ms +Deserialize: per-group lrzip decompression... 
+Deserialize: decompression done in 16.6s +ttt_lora:warming up compile (random tokens, no val data) +ttt_lora:compile warmup done (177.4s) + +beginning TTT eval timer +ngram_tilt:hints total=47853343 gated=13023831 token_gate=628156 within_gate=9867233 word_gate=2891718 agree2plus=303187 +ngram_tilt:precompute_outside_timer_done elapsed=126.99s total_targets=47853343 +ttt_phased: total_docs:50000 prefix_docs:2500 suffix_docs:47500 num_phases:1 boundaries:[2500] target_tokens:47853343 +ttp: b2078/2084 bl:2.1529 bb:1.0373 rl:2.1529 rb:1.0373 dl:12664-13654 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2076/2084 bl:2.2394 bb:1.0776 rl:2.1927 rb:1.0558 dl:10795-11603 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2070/2084 bl:2.3630 bb:1.1231 rl:2.2366 rb:1.0733 dl:8228-8606 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2065/2084 bl:2.3347 bb:1.0733 rl:2.2538 rb:1.0733 dl:6892-7069 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2058/2084 bl:2.2850 bb:1.0805 rl:2.2578 rb:1.0743 dl:5866-5996 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2049/2084 bl:2.2525 bb:1.0044 rl:2.2573 rb:1.0668 dl:5097-5173 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2043/2084 bl:2.1255 bb:1.0132 rl:2.2462 rb:1.0623 dl:4647-4696 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2035/2084 bl:2.2997 bb:1.0767 rl:2.2500 rb:1.0634 dl:4250-4292 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2028/2084 bl:2.4777 bb:1.1218 rl:2.2641 rb:1.0671 dl:3935-3966 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2022/2084 bl:2.2973 bb:1.0256 rl:2.2660 rb:1.0647 dl:3729-3760 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2015/2084 bl:2.3367 bb:1.0072 rl:2.2695 rb:1.0616 dl:3488-3516 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2007/2084 bl:2.2432 bb:0.9965 rl:2.2683 rb:1.0586 dl:3303-3324 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2001/2084 bl:2.2670 bb:1.0299 rl:2.2682 rb:1.0574 dl:3150-3175 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1990/2084 bl:2.3464 bb:1.0513 rl:2.2711 rb:1.0571 dl:2933-2953 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1982/2084 bl:2.3859 bb:1.0567 rl:2.2750 rb:1.0571 dl:2803-2809 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1976/2084 bl:2.3205 bb:1.0350 rl:2.2764 rb:1.0564 dl:2714-2730 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttpp: phase:1/1 pd:2672 gd:2500 t:152.3s +tttg: c1/344 lr:0.001000 t:0.3s +tttg: c2/344 lr:0.001000 t:0.4s +tttg: c3/344 lr:0.001000 t:0.6s +tttg: c4/344 lr:0.001000 t:0.7s +tttg: c5/344 lr:0.001000 t:0.8s +tttg: c6/344 lr:0.000999 t:0.9s +tttg: c7/344 lr:0.000999 t:1.0s +tttg: c8/344 lr:0.000999 t:1.2s +tttg: c9/344 lr:0.000999 t:1.3s +tttg: c10/344 lr:0.000998 t:1.4s +tttg: c11/344 lr:0.000998 t:1.5s +tttg: c12/344 lr:0.000997 t:1.7s +tttg: c13/344 lr:0.000997 t:1.8s +tttg: c14/344 lr:0.000996 t:1.9s +tttg: c15/344 lr:0.000996 t:2.0s +tttg: c16/344 lr:0.000995 t:2.2s +tttg: c17/344 lr:0.000995 t:2.3s +tttg: c18/344 lr:0.000994 t:2.4s +tttg: c19/344 lr:0.000993 t:2.5s +tttg: c20/344 lr:0.000992 t:2.6s +tttg: c21/344 lr:0.000992 t:2.8s +tttg: c22/344 lr:0.000991 t:2.9s +tttg: c23/344 lr:0.000990 t:3.0s +tttg: c24/344 lr:0.000989 t:3.1s +tttg: c25/344 lr:0.000988 t:3.3s +tttg: c26/344 lr:0.000987 t:3.4s +tttg: c27/344 lr:0.000986 t:3.5s +tttg: c28/344 lr:0.000985 t:3.6s +tttg: c29/344 lr:0.000984 t:3.8s +tttg: c30/344 lr:0.000982 t:3.9s +tttg: c31/344 lr:0.000981 t:4.0s +tttg: c32/344 lr:0.000980 t:4.1s +tttg: c33/344 lr:0.000979 t:4.2s +tttg: c34/344 lr:0.000977 t:4.4s +tttg: c35/344 lr:0.000976 t:4.5s +tttg: c36/344 lr:0.000975 t:4.6s +tttg: c37/344 lr:0.000973 t:4.7s +tttg: c38/344 lr:0.000972 t:4.9s +tttg: c39/344 lr:0.000970 t:5.0s +tttg: c40/344 lr:0.000968 t:5.1s +tttg: c41/344 lr:0.000967 t:5.2s +tttg: c42/344 lr:0.000965 t:5.3s +tttg: c43/344 
lr:0.000963 t:5.5s +tttg: c44/344 lr:0.000962 t:5.6s +tttg: c45/344 lr:0.000960 t:5.7s +tttg: c46/344 lr:0.000958 t:5.8s +tttg: c47/344 lr:0.000956 t:6.0s +tttg: c48/344 lr:0.000954 t:6.1s +tttg: c49/344 lr:0.000952 t:6.2s +tttg: c50/344 lr:0.000950 t:6.3s +tttg: c51/344 lr:0.000948 t:6.4s +tttg: c52/344 lr:0.000946 t:6.6s +tttg: c53/344 lr:0.000944 t:6.7s +tttg: c54/344 lr:0.000942 t:6.8s +tttg: c55/344 lr:0.000940 t:6.9s +tttg: c56/344 lr:0.000938 t:7.1s +tttg: c57/344 lr:0.000936 t:7.2s +tttg: c58/344 lr:0.000933 t:7.3s +tttg: c59/344 lr:0.000931 t:7.4s +tttg: c60/344 lr:0.000929 t:7.5s +tttg: c61/344 lr:0.000926 t:7.7s +tttg: c62/344 lr:0.000924 t:7.8s +tttg: c63/344 lr:0.000922 t:7.9s +tttg: c64/344 lr:0.000919 t:8.0s +tttg: c65/344 lr:0.000917 t:8.2s +tttg: c66/344 lr:0.000914 t:8.3s +tttg: c67/344 lr:0.000911 t:8.4s +tttg: c68/344 lr:0.000909 t:8.5s +tttg: c69/344 lr:0.000906 t:8.7s +tttg: c70/344 lr:0.000903 t:8.8s +tttg: c71/344 lr:0.000901 t:8.9s +tttg: c72/344 lr:0.000898 t:9.0s +tttg: c73/344 lr:0.000895 t:9.2s +tttg: c74/344 lr:0.000892 t:9.3s +tttg: c75/344 lr:0.000889 t:9.4s +tttg: c76/344 lr:0.000887 t:9.5s +tttg: c77/344 lr:0.000884 t:9.7s +tttg: c78/344 lr:0.000881 t:9.8s +tttg: c79/344 lr:0.000878 t:9.9s +tttg: c80/344 lr:0.000875 t:10.0s +tttg: c81/344 lr:0.000872 t:10.2s +tttg: c82/344 lr:0.000869 t:10.3s +tttg: c83/344 lr:0.000865 t:10.4s +tttg: c84/344 lr:0.000862 t:10.5s +tttg: c85/344 lr:0.000859 t:10.7s +tttg: c86/344 lr:0.000856 t:10.8s +tttg: c87/344 lr:0.000853 t:10.9s +tttg: c88/344 lr:0.000849 t:11.0s +tttg: c89/344 lr:0.000846 t:11.2s +tttg: c90/344 lr:0.000843 t:11.3s +tttg: c91/344 lr:0.000840 t:11.4s +tttg: c92/344 lr:0.000836 t:11.5s +tttg: c93/344 lr:0.000833 t:11.7s +tttg: c94/344 lr:0.000829 t:11.8s +tttg: c95/344 lr:0.000826 t:11.9s +tttg: c96/344 lr:0.000822 t:12.1s +tttg: c97/344 lr:0.000819 t:12.2s +tttg: c98/344 lr:0.000815 t:12.3s +tttg: c99/344 lr:0.000812 t:12.4s +tttg: c100/344 lr:0.000808 t:12.6s +tttg: c101/344 lr:0.000805 t:12.7s +tttg: c102/344 lr:0.000801 t:12.8s +tttg: c103/344 lr:0.000797 t:12.9s +tttg: c104/344 lr:0.000794 t:13.1s +tttg: c105/344 lr:0.000790 t:13.2s +tttg: c106/344 lr:0.000786 t:13.3s +tttg: c107/344 lr:0.000782 t:13.4s +tttg: c108/344 lr:0.000778 t:13.6s +tttg: c109/344 lr:0.000775 t:13.7s +tttg: c110/344 lr:0.000771 t:13.8s +tttg: c111/344 lr:0.000767 t:13.9s +tttg: c112/344 lr:0.000763 t:14.1s +tttg: c113/344 lr:0.000759 t:14.2s +tttg: c114/344 lr:0.000755 t:14.3s +tttg: c115/344 lr:0.000751 t:14.4s +tttg: c116/344 lr:0.000747 t:14.6s +tttg: c117/344 lr:0.000743 t:14.7s +tttg: c118/344 lr:0.000739 t:14.8s +tttg: c119/344 lr:0.000735 t:15.0s +tttg: c120/344 lr:0.000731 t:15.1s +tttg: c121/344 lr:0.000727 t:15.2s +tttg: c122/344 lr:0.000723 t:15.3s +tttg: c123/344 lr:0.000719 t:15.5s +tttg: c124/344 lr:0.000715 t:15.6s +tttg: c125/344 lr:0.000711 t:15.7s +tttg: c126/344 lr:0.000707 t:15.8s +tttg: c127/344 lr:0.000702 t:16.0s +tttg: c128/344 lr:0.000698 t:16.1s +tttg: c129/344 lr:0.000694 t:16.2s +tttg: c130/344 lr:0.000690 t:16.3s +tttg: c131/344 lr:0.000686 t:16.5s +tttg: c132/344 lr:0.000681 t:16.6s +tttg: c133/344 lr:0.000677 t:16.7s +tttg: c134/344 lr:0.000673 t:16.8s +tttg: c135/344 lr:0.000668 t:17.0s +tttg: c136/344 lr:0.000664 t:17.1s +tttg: c137/344 lr:0.000660 t:17.2s +tttg: c138/344 lr:0.000655 t:17.3s +tttg: c139/344 lr:0.000651 t:17.5s +tttg: c140/344 lr:0.000647 t:17.6s +tttg: c141/344 lr:0.000642 t:17.7s +tttg: c142/344 lr:0.000638 t:17.8s +tttg: c143/344 lr:0.000633 t:18.0s +tttg: c144/344 lr:0.000629 
t:18.1s +tttg: c145/344 lr:0.000625 t:18.2s +tttg: c146/344 lr:0.000620 t:18.3s +tttg: c147/344 lr:0.000616 t:18.5s +tttg: c148/344 lr:0.000611 t:18.6s +tttg: c149/344 lr:0.000607 t:18.7s +tttg: c150/344 lr:0.000602 t:18.8s +tttg: c151/344 lr:0.000598 t:19.0s +tttg: c152/344 lr:0.000593 t:19.1s +tttg: c153/344 lr:0.000589 t:19.2s +tttg: c154/344 lr:0.000584 t:19.3s +tttg: c155/344 lr:0.000580 t:19.5s +tttg: c156/344 lr:0.000575 t:19.6s +tttg: c157/344 lr:0.000571 t:19.7s +tttg: c158/344 lr:0.000566 t:19.8s +tttg: c159/344 lr:0.000562 t:20.0s +tttg: c160/344 lr:0.000557 t:20.1s +tttg: c161/344 lr:0.000553 t:20.2s +tttg: c162/344 lr:0.000548 t:20.3s +tttg: c163/344 lr:0.000543 t:20.5s +tttg: c164/344 lr:0.000539 t:20.6s +tttg: c165/344 lr:0.000534 t:20.7s +tttg: c166/344 lr:0.000530 t:20.8s +tttg: c167/344 lr:0.000525 t:21.0s +tttg: c168/344 lr:0.000521 t:21.1s +tttg: c169/344 lr:0.000516 t:21.2s +tttg: c170/344 lr:0.000511 t:21.3s +tttg: c171/344 lr:0.000507 t:21.5s +tttg: c172/344 lr:0.000502 t:21.6s +tttg: c173/344 lr:0.000498 t:21.7s +tttg: c174/344 lr:0.000493 t:21.8s +tttg: c175/344 lr:0.000489 t:22.0s +tttg: c176/344 lr:0.000484 t:22.1s +tttg: c177/344 lr:0.000479 t:22.2s +tttg: c178/344 lr:0.000475 t:22.3s +tttg: c179/344 lr:0.000470 t:22.5s +tttg: c180/344 lr:0.000466 t:22.6s +tttg: c181/344 lr:0.000461 t:22.7s +tttg: c182/344 lr:0.000457 t:22.8s +tttg: c183/344 lr:0.000452 t:23.0s +tttg: c184/344 lr:0.000447 t:23.1s +tttg: c185/344 lr:0.000443 t:23.2s +tttg: c186/344 lr:0.000438 t:23.3s +tttg: c187/344 lr:0.000434 t:23.5s +tttg: c188/344 lr:0.000429 t:23.6s +tttg: c189/344 lr:0.000425 t:23.7s +tttg: c190/344 lr:0.000420 t:23.8s +tttg: c191/344 lr:0.000416 t:23.9s +tttg: c192/344 lr:0.000411 t:24.1s +tttg: c193/344 lr:0.000407 t:24.2s +tttg: c194/344 lr:0.000402 t:24.3s +tttg: c195/344 lr:0.000398 t:24.4s +tttg: c196/344 lr:0.000393 t:24.6s +tttg: c197/344 lr:0.000389 t:24.7s +tttg: c198/344 lr:0.000384 t:24.8s +tttg: c199/344 lr:0.000380 t:24.9s +tttg: c200/344 lr:0.000375 t:25.0s +tttg: c201/344 lr:0.000371 t:25.2s +tttg: c202/344 lr:0.000367 t:25.3s +tttg: c203/344 lr:0.000362 t:25.4s +tttg: c204/344 lr:0.000358 t:25.6s +tttg: c205/344 lr:0.000353 t:25.7s +tttg: c206/344 lr:0.000349 t:25.8s +tttg: c207/344 lr:0.000345 t:25.9s +tttg: c208/344 lr:0.000340 t:26.0s +tttg: c209/344 lr:0.000336 t:26.2s +tttg: c210/344 lr:0.000332 t:26.3s +tttg: c211/344 lr:0.000327 t:26.4s +tttg: c212/344 lr:0.000323 t:26.6s +tttg: c213/344 lr:0.000319 t:26.7s +tttg: c214/344 lr:0.000314 t:26.8s +tttg: c215/344 lr:0.000310 t:26.9s +tttg: c216/344 lr:0.000306 t:27.1s +tttg: c217/344 lr:0.000302 t:27.2s +tttg: c218/344 lr:0.000298 t:27.3s +tttg: c219/344 lr:0.000293 t:27.4s +tttg: c220/344 lr:0.000289 t:27.6s +tttg: c221/344 lr:0.000285 t:27.7s +tttg: c222/344 lr:0.000281 t:27.8s +tttg: c223/344 lr:0.000277 t:27.9s +tttg: c224/344 lr:0.000273 t:28.1s +tttg: c225/344 lr:0.000269 t:28.2s +tttg: c226/344 lr:0.000265 t:28.3s +tttg: c227/344 lr:0.000261 t:28.4s +tttg: c228/344 lr:0.000257 t:28.6s +tttg: c229/344 lr:0.000253 t:28.7s +tttg: c230/344 lr:0.000249 t:28.8s +tttg: c231/344 lr:0.000245 t:28.9s +tttg: c232/344 lr:0.000241 t:29.1s +tttg: c233/344 lr:0.000237 t:29.2s +tttg: c234/344 lr:0.000233 t:29.3s +tttg: c235/344 lr:0.000229 t:29.4s +tttg: c236/344 lr:0.000225 t:29.6s +tttg: c237/344 lr:0.000222 t:29.7s +tttg: c238/344 lr:0.000218 t:29.8s +tttg: c239/344 lr:0.000214 t:29.9s +tttg: c240/344 lr:0.000210 t:30.0s +tttg: c241/344 lr:0.000206 t:30.2s +tttg: c242/344 lr:0.000203 t:30.3s +tttg: c243/344 
lr:0.000199 t:30.4s +tttg: c244/344 lr:0.000195 t:30.5s +tttg: c245/344 lr:0.000192 t:30.7s +tttg: c246/344 lr:0.000188 t:30.8s +tttg: c247/344 lr:0.000185 t:30.9s +tttg: c248/344 lr:0.000181 t:31.0s +tttg: c249/344 lr:0.000178 t:31.1s +tttg: c250/344 lr:0.000174 t:31.3s +tttg: c251/344 lr:0.000171 t:31.4s +tttg: c252/344 lr:0.000167 t:31.5s +tttg: c253/344 lr:0.000164 t:31.6s +tttg: c254/344 lr:0.000160 t:31.8s +tttg: c255/344 lr:0.000157 t:31.9s +tttg: c256/344 lr:0.000154 t:32.0s +tttg: c257/344 lr:0.000151 t:32.1s +tttg: c258/344 lr:0.000147 t:32.3s +tttg: c259/344 lr:0.000144 t:32.4s +tttg: c260/344 lr:0.000141 t:32.5s +tttg: c261/344 lr:0.000138 t:32.6s +tttg: c262/344 lr:0.000135 t:32.8s +tttg: c263/344 lr:0.000131 t:32.9s +tttg: c264/344 lr:0.000128 t:33.0s +tttg: c265/344 lr:0.000125 t:33.1s +tttg: c266/344 lr:0.000122 t:33.3s +tttg: c267/344 lr:0.000119 t:33.4s +tttg: c268/344 lr:0.000116 t:33.5s +tttg: c269/344 lr:0.000113 t:33.7s +tttg: c270/344 lr:0.000111 t:33.8s +tttg: c271/344 lr:0.000108 t:33.9s +tttg: c272/344 lr:0.000105 t:34.0s +tttg: c273/344 lr:0.000102 t:34.2s +tttg: c274/344 lr:0.000099 t:34.3s +tttg: c275/344 lr:0.000097 t:34.4s +tttg: c276/344 lr:0.000094 t:34.5s +tttg: c277/344 lr:0.000091 t:34.7s +tttg: c278/344 lr:0.000089 t:34.8s +tttg: c279/344 lr:0.000086 t:34.9s +tttg: c280/344 lr:0.000083 t:35.0s +tttg: c281/344 lr:0.000081 t:35.2s +tttg: c282/344 lr:0.000078 t:35.3s +tttg: c283/344 lr:0.000076 t:35.4s +tttg: c284/344 lr:0.000074 t:35.5s +tttg: c285/344 lr:0.000071 t:35.7s +tttg: c286/344 lr:0.000069 t:35.8s +tttg: c287/344 lr:0.000067 t:35.9s +tttg: c288/344 lr:0.000064 t:36.0s +tttg: c289/344 lr:0.000062 t:36.2s +tttg: c290/344 lr:0.000060 t:36.3s +tttg: c291/344 lr:0.000058 t:36.4s +tttg: c292/344 lr:0.000056 t:36.5s +tttg: c293/344 lr:0.000054 t:36.7s +tttg: c294/344 lr:0.000052 t:36.8s +tttg: c295/344 lr:0.000050 t:36.9s +tttg: c296/344 lr:0.000048 t:37.0s +tttg: c297/344 lr:0.000046 t:37.2s +tttg: c298/344 lr:0.000044 t:37.3s +tttg: c299/344 lr:0.000042 t:37.4s +tttg: c300/344 lr:0.000040 t:37.5s +tttg: c301/344 lr:0.000038 t:37.7s +tttg: c302/344 lr:0.000037 t:37.8s +tttg: c303/344 lr:0.000035 t:37.9s +tttg: c304/344 lr:0.000033 t:38.1s +tttg: c305/344 lr:0.000032 t:38.2s +tttg: c306/344 lr:0.000030 t:38.3s +tttg: c307/344 lr:0.000028 t:38.4s +tttg: c308/344 lr:0.000027 t:38.5s +tttg: c309/344 lr:0.000025 t:38.7s +tttg: c310/344 lr:0.000024 t:38.8s +tttg: c311/344 lr:0.000023 t:38.9s +tttg: c312/344 lr:0.000021 t:39.1s +tttg: c313/344 lr:0.000020 t:39.2s +tttg: c314/344 lr:0.000019 t:39.3s +tttg: c315/344 lr:0.000018 t:39.4s +tttg: c316/344 lr:0.000016 t:39.6s +tttg: c317/344 lr:0.000015 t:39.7s +tttg: c318/344 lr:0.000014 t:39.8s +tttg: c319/344 lr:0.000013 t:39.9s +tttg: c320/344 lr:0.000012 t:40.1s +tttg: c321/344 lr:0.000011 t:40.2s +tttg: c322/344 lr:0.000010 t:40.3s +tttg: c323/344 lr:0.000009 t:40.4s +tttg: c324/344 lr:0.000008 t:40.6s +tttg: c325/344 lr:0.000008 t:40.7s +tttg: c326/344 lr:0.000007 t:40.8s +tttg: c327/344 lr:0.000006 t:40.9s +tttg: c328/344 lr:0.000005 t:41.1s +tttg: c329/344 lr:0.000005 t:41.2s +tttg: c330/344 lr:0.000004 t:41.3s +tttg: c331/344 lr:0.000004 t:41.4s +tttg: c332/344 lr:0.000003 t:41.6s +tttg: c333/344 lr:0.000003 t:41.7s +tttg: c334/344 lr:0.000002 t:41.8s +tttg: c335/344 lr:0.000002 t:41.9s +tttg: c336/344 lr:0.000001 t:42.0s +tttg: c337/344 lr:0.000001 t:42.2s +tttg: c338/344 lr:0.000001 t:42.3s +tttg: c339/344 lr:0.000001 t:42.4s +tttg: c340/344 lr:0.000000 t:42.6s +tttg: c341/344 lr:0.000000 t:42.7s +tttg: 
c342/344 lr:0.000000 t:42.8s +tttg: c343/344 lr:0.000000 t:42.9s +ttpr: phase:1/1 t:195.7s +ttp: b1965/2084 bl:2.2722 bb:1.0060 rl:2.2763 rb:1.0549 dl:2565-2577 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1959/2084 bl:2.2383 bb:1.0296 rl:2.2752 rb:1.0542 dl:2501-2514 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1953/2084 bl:2.2887 bb:1.0442 rl:2.2756 rb:1.0539 dl:2441-2454 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1947/2084 bl:2.2147 bb:0.9535 rl:2.2741 rb:1.0512 dl:2368-2382 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1941/2084 bl:2.3015 bb:1.0498 rl:2.2747 rb:1.0512 dl:2314-2323 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1935/2084 bl:2.2669 bb:1.0280 rl:2.2746 rb:1.0507 dl:2260-2270 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1929/2084 bl:2.2726 bb:1.0211 rl:2.2745 rb:1.0500 dl:2203-2216 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1923/2084 bl:2.3678 bb:1.0788 rl:2.2764 rb:1.0506 dl:2160-2164 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1917/2084 bl:2.3231 bb:1.0575 rl:2.2774 rb:1.0508 dl:2117-2122 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1911/2084 bl:2.2051 bb:0.9663 rl:2.2760 rb:1.0491 dl:2072-2081 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1905/2084 bl:2.4135 bb:1.0291 rl:2.2785 rb:1.0487 dl:2036-2041 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1899/2084 bl:2.4008 bb:1.0553 rl:2.2807 rb:1.0488 dl:1997-2004 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1893/2084 bl:2.1841 bb:1.0267 rl:2.2790 rb:1.0484 dl:1958-1963 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1887/2084 bl:2.2630 bb:1.0147 rl:2.2788 rb:1.0479 dl:1927-1931 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1881/2084 bl:2.3478 bb:1.0860 rl:2.2799 rb:1.0485 dl:1898-1902 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1875/2084 bl:2.3331 bb:1.0218 rl:2.2807 rb:1.0480 dl:1868-1873 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1869/2084 bl:2.2933 bb:1.0247 rl:2.2809 rb:1.0477 dl:1841-1846 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1861/2084 bl:2.2684 bb:1.0365 rl:2.2807 rb:1.0475 dl:1808-1813 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1853/2084 bl:2.3439 bb:1.1135 rl:2.2816 rb:1.0484 dl:1774-1778 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1845/2084 bl:2.2574 bb:1.0091 rl:2.2813 rb:1.0478 dl:1741-1744 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1836/2084 bl:2.3610 bb:1.0518 rl:2.2823 rb:1.0479 dl:1707-1710 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1825/2084 bl:2.2319 bb:1.0141 rl:2.2817 rb:1.0475 dl:1665-1669 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1817/2084 bl:2.4025 bb:1.0916 rl:2.2832 rb:1.0480 dl:1638-1641 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1809/2084 bl:2.3809 bb:1.1004 rl:2.2843 rb:1.0486 dl:1610-1613 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1801/2084 bl:2.2867 bb:1.0211 rl:2.2844 rb:1.0483 dl:1586-1589 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1793/2084 bl:2.4367 bb:1.0644 rl:2.2861 rb:1.0485 dl:1562-1565 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1783/2084 bl:2.2632 bb:1.0059 rl:2.2858 rb:1.0480 dl:1532-1534 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1776/2084 bl:2.4426 bb:1.0697 rl:2.2875 rb:1.0483 dl:1512-1514 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1768/2084 bl:2.3759 bb:1.0577 rl:2.2885 rb:1.0484 dl:1490-1493 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1760/2084 bl:2.3637 bb:1.0250 rl:2.2892 rb:1.0481 dl:1471-1474 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1752/2084 bl:2.3095 bb:1.0586 rl:2.2894 rb:1.0482 dl:1450-1452 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1744/2084 bl:2.2383 bb:1.0513 rl:2.2889 rb:1.0482 dl:1429-1432 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1734/2084 bl:2.2829 bb:1.0077 rl:2.2889 rb:1.0478 dl:1405-1407 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1724/2084 bl:2.1694 bb:1.0263 rl:2.2878 rb:1.0477 dl:1382-1384 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1716/2084 bl:2.2962 bb:1.0406 rl:2.2878 rb:1.0476 dl:1362-1364 gd:1 sr:0 sf:1 tr:24/24 
wt:0 +ttp: b1707/2084 bl:2.2450 bb:1.0227 rl:2.2875 rb:1.0474 dl:1341-1344 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1690/2084 bl:2.3985 bb:0.9989 rl:2.2884 rb:1.0469 dl:1305-1307 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1683/2084 bl:2.3526 bb:0.9992 rl:2.2889 rb:1.0465 dl:1289-1291 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1675/2084 bl:2.2859 bb:1.0070 rl:2.2889 rb:1.0462 dl:1274-1277 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1667/2084 bl:2.4558 bb:1.0634 rl:2.2903 rb:1.0463 dl:1258-1260 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1657/2084 bl:2.2818 bb:1.0536 rl:2.2902 rb:1.0464 dl:1237-1239 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1648/2084 bl:2.4291 bb:1.0883 rl:2.2913 rb:1.0467 dl:1220-1221 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1639/2084 bl:2.2395 bb:1.0344 rl:2.2909 rb:1.0466 dl:1201-1203 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1631/2084 bl:2.2349 bb:1.0094 rl:2.2905 rb:1.0463 dl:1187-1189 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1622/2084 bl:2.2120 bb:1.0212 rl:2.2899 rb:1.0461 dl:1172-1174 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1614/2084 bl:2.3073 bb:1.0662 rl:2.2900 rb:1.0463 dl:1158-1160 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1605/2084 bl:2.3246 bb:1.0290 rl:2.2903 rb:1.0462 dl:1144-1146 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1596/2084 bl:2.2521 bb:1.0604 rl:2.2900 rb:1.0462 dl:1130-1131 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1586/2084 bl:2.4316 bb:1.0787 rl:2.2909 rb:1.0465 dl:1112-1113 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1578/2084 bl:2.3261 bb:1.0380 rl:2.2912 rb:1.0464 dl:1100-1101 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1569/2084 bl:2.2155 bb:0.9869 rl:2.2907 rb:1.0460 dl:1086-1087 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1559/2084 bl:2.2331 bb:1.0219 rl:2.2903 rb:1.0459 dl:1070-1071 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1551/2084 bl:2.2934 bb:1.0029 rl:2.2903 rb:1.0456 dl:1057-1059 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1543/2084 bl:2.4249 bb:1.0435 rl:2.2911 rb:1.0456 dl:1045-1046 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1534/2084 bl:2.3037 bb:1.0459 rl:2.2912 rb:1.0456 dl:1033-1034 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1525/2084 bl:2.2876 bb:1.0148 rl:2.2912 rb:1.0454 dl:1019-1021 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1517/2084 bl:2.3416 bb:1.0995 rl:2.2915 rb:1.0457 dl:1009-1010 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1508/2084 bl:2.3323 bb:1.0544 rl:2.2917 rb:1.0458 dl:997-998 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1498/2084 bl:2.2400 bb:1.0132 rl:2.2914 rb:1.0456 dl:985-986 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1490/2084 bl:2.3221 bb:1.0947 rl:2.2916 rb:1.0458 dl:973-975 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1481/2084 bl:2.3898 bb:1.0424 rl:2.2921 rb:1.0458 dl:961-963 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1470/2084 bl:2.4202 bb:1.0832 rl:2.2928 rb:1.0460 dl:949-950 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1462/2084 bl:2.2583 bb:1.0606 rl:2.2926 rb:1.0461 dl:939-940 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1454/2084 bl:2.4540 bb:1.0735 rl:2.2934 rb:1.0462 dl:930-931 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1446/2084 bl:2.2172 bb:1.0446 rl:2.2931 rb:1.0462 dl:921-922 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1436/2084 bl:2.3381 bb:1.0353 rl:2.2933 rb:1.0462 dl:909-910 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1427/2084 bl:2.4903 bb:1.1173 rl:2.2942 rb:1.0465 dl:898-899 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1416/2084 bl:2.4012 bb:1.0807 rl:2.2947 rb:1.0467 dl:886-887 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1408/2084 bl:2.4217 bb:1.0756 rl:2.2953 rb:1.0468 dl:877-878 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1400/2084 bl:2.3253 bb:0.9985 rl:2.2955 rb:1.0466 dl:868-869 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1393/2084 bl:2.2318 bb:0.9919 rl:2.2952 rb:1.0463 dl:860-861 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: 
b1385/2084 bl:2.4329 bb:1.0321 rl:2.2958 rb:1.0463 dl:852-853 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1376/2084 bl:2.5026 bb:1.0716 rl:2.2967 rb:1.0464 dl:842-843 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1365/2084 bl:2.3690 bb:1.1087 rl:2.2970 rb:1.0466 dl:831-832 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1356/2084 bl:2.3880 bb:1.0933 rl:2.2974 rb:1.0468 dl:820-821 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1348/2084 bl:2.4307 bb:1.0185 rl:2.2980 rb:1.0467 dl:812-813 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1338/2084 bl:2.3389 bb:1.0399 rl:2.2981 rb:1.0467 dl:803-804 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1330/2084 bl:2.4926 bb:1.0963 rl:2.2989 rb:1.0469 dl:795-796 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1323/2084 bl:2.4391 bb:1.0997 rl:2.2995 rb:1.0471 dl:788-788 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1314/2084 bl:2.1586 bb:0.9895 rl:2.2989 rb:1.0469 dl:779-780 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1307/2084 bl:2.2998 bb:0.9990 rl:2.2989 rb:1.0467 dl:773-773 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1296/2084 bl:2.4084 bb:1.1353 rl:2.2994 rb:1.0470 dl:763-764 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1288/2084 bl:2.2284 bb:0.9796 rl:2.2991 rb:1.0467 dl:756-756 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1278/2084 bl:2.4843 bb:1.1156 rl:2.2998 rb:1.0470 dl:746-747 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1270/2084 bl:2.3318 bb:1.0261 rl:2.2999 rb:1.0469 dl:740-741 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1261/2084 bl:2.3392 bb:1.0656 rl:2.3000 rb:1.0470 dl:731-732 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1252/2084 bl:2.1911 bb:1.0240 rl:2.2997 rb:1.0469 dl:724-725 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1245/2084 bl:2.3912 bb:1.0173 rl:2.3000 rb:1.0468 dl:718-719 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1233/2084 bl:2.3934 bb:1.0686 rl:2.3003 rb:1.0469 dl:708-709 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1226/2084 bl:2.2146 bb:1.0718 rl:2.3000 rb:1.0470 dl:702-703 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1218/2084 bl:2.1679 bb:1.0140 rl:2.2996 rb:1.0468 dl:696-696 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1207/2084 bl:2.3913 bb:1.0900 rl:2.2999 rb:1.0470 dl:688-689 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1199/2084 bl:2.3615 bb:1.0428 rl:2.3001 rb:1.0470 dl:681-682 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1190/2084 bl:2.2985 bb:1.0791 rl:2.3001 rb:1.0471 dl:674-675 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1182/2084 bl:2.2345 bb:1.0352 rl:2.2999 rb:1.0470 dl:668-668 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1173/2084 bl:2.2652 bb:1.0016 rl:2.2997 rb:1.0469 dl:661-661 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1163/2084 bl:2.3410 bb:1.0433 rl:2.2999 rb:1.0469 dl:653-653 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1155/2084 bl:2.3816 bb:1.0808 rl:2.3001 rb:1.0470 dl:646-647 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1145/2084 bl:2.4423 bb:1.0419 rl:2.3006 rb:1.0470 dl:640-641 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1138/2084 bl:2.2197 bb:1.0921 rl:2.3003 rb:1.0471 dl:635-636 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1128/2084 bl:2.4145 bb:1.0941 rl:2.3007 rb:1.0472 dl:628-629 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1119/2084 bl:2.3482 bb:1.0850 rl:2.3008 rb:1.0473 dl:621-621 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1109/2084 bl:2.4135 bb:1.1005 rl:2.3011 rb:1.0475 dl:614-615 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1100/2084 bl:2.2801 bb:1.0272 rl:2.3011 rb:1.0474 dl:607-608 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1092/2084 bl:2.2977 bb:1.0326 rl:2.3011 rb:1.0474 dl:601-602 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1083/2084 bl:2.2697 bb:1.0515 rl:2.3010 rb:1.0474 dl:595-596 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1075/2084 bl:2.4288 bb:1.0641 rl:2.3013 rb:1.0475 dl:589-590 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1066/2084 bl:2.3155 bb:1.0711 rl:2.3014 rb:1.0475 dl:583-584 
gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1058/2084 bl:2.4074 bb:1.0730 rl:2.3016 rb:1.0476 dl:578-578 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1045/2084 bl:2.2441 bb:1.0554 rl:2.3015 rb:1.0476 dl:569-570 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1037/2084 bl:2.3154 bb:1.0813 rl:2.3015 rb:1.0477 dl:563-564 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1028/2084 bl:2.3311 bb:1.1045 rl:2.3016 rb:1.0478 dl:557-558 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1020/2084 bl:2.2736 bb:1.0285 rl:2.3015 rb:1.0478 dl:552-553 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1013/2084 bl:2.3023 bb:1.0839 rl:2.3015 rb:1.0479 dl:548-548 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1003/2084 bl:2.3957 bb:1.0556 rl:2.3018 rb:1.0479 dl:542-542 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b992/2084 bl:2.3040 bb:1.0713 rl:2.3018 rb:1.0479 dl:534-535 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b983/2084 bl:2.2654 bb:1.0150 rl:2.3017 rb:1.0479 dl:529-529 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b974/2084 bl:2.3820 bb:1.0889 rl:2.3019 rb:1.0480 dl:523-524 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b966/2084 bl:2.4270 bb:1.0769 rl:2.3022 rb:1.0480 dl:518-519 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b956/2084 bl:2.3586 bb:1.0320 rl:2.3023 rb:1.0480 dl:512-513 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b943/2084 bl:2.2229 bb:1.0131 rl:2.3021 rb:1.0479 dl:503-504 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b935/2084 bl:2.3576 bb:1.0803 rl:2.3022 rb:1.0480 dl:499-499 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b925/2084 bl:2.2453 bb:1.1042 rl:2.3021 rb:1.0481 dl:493-494 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b918/2084 bl:2.3593 bb:1.0727 rl:2.3022 rb:1.0482 dl:489-490 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b909/2084 bl:2.2799 bb:1.0717 rl:2.3022 rb:1.0482 dl:484-485 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b899/2084 bl:2.3746 bb:1.0962 rl:2.3023 rb:1.0483 dl:478-479 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b892/2084 bl:2.4533 bb:1.1134 rl:2.3027 rb:1.0484 dl:474-475 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b883/2084 bl:2.3739 bb:1.0527 rl:2.3028 rb:1.0485 dl:469-470 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b875/2084 bl:2.2658 bb:1.0375 rl:2.3027 rb:1.0484 dl:465-465 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b863/2084 bl:2.2087 bb:1.0674 rl:2.3025 rb:1.0485 dl:458-459 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b855/2084 bl:2.3203 bb:1.0404 rl:2.3026 rb:1.0485 dl:453-454 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b847/2084 bl:2.4539 bb:1.0633 rl:2.3029 rb:1.0485 dl:448-449 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b839/2084 bl:2.3282 bb:1.0409 rl:2.3029 rb:1.0485 dl:444-444 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b811/2084 bl:2.2825 bb:1.0808 rl:2.3029 rb:1.0485 dl:428-429 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b803/2084 bl:2.3523 bb:1.1095 rl:2.3030 rb:1.0486 dl:424-424 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b792/2084 bl:2.4184 bb:1.1200 rl:2.3032 rb:1.0488 dl:418-418 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b783/2084 bl:2.3595 bb:1.1018 rl:2.3033 rb:1.0489 dl:412-413 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b772/2084 bl:2.2470 bb:1.0408 rl:2.3032 rb:1.0488 dl:406-407 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b763/2084 bl:2.3553 bb:1.0833 rl:2.3033 rb:1.0489 dl:401-402 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b756/2084 bl:2.4913 bb:1.1533 rl:2.3036 rb:1.0491 dl:398-398 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b746/2084 bl:2.3367 bb:1.1261 rl:2.3037 rb:1.0492 dl:393-393 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b736/2084 bl:2.4330 bb:1.0917 rl:2.3039 rb:1.0493 dl:387-388 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b726/2084 bl:2.2541 bb:1.0877 rl:2.3038 rb:1.0493 dl:383-383 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b715/2084 bl:2.2455 bb:0.9962 rl:2.3037 rb:1.0492 dl:377-378 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b705/2084 bl:2.2063 bb:1.0636 rl:2.3035 rb:1.0493 
dl:372-373 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b696/2084 bl:2.3488 bb:1.0899 rl:2.3036 rb:1.0493 dl:368-369 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b687/2084 bl:2.4912 bb:1.1269 rl:2.3039 rb:1.0495 dl:364-364 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b676/2084 bl:2.3619 bb:1.1071 rl:2.3040 rb:1.0495 dl:358-359 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b666/2084 bl:2.1859 bb:1.0281 rl:2.3038 rb:1.0495 dl:354-355 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b658/2084 bl:2.3962 bb:1.1001 rl:2.3040 rb:1.0496 dl:350-351 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b649/2084 bl:2.2813 bb:1.0886 rl:2.3039 rb:1.0496 dl:346-346 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b637/2084 bl:2.3001 bb:1.0802 rl:2.3039 rb:1.0497 dl:340-341 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b630/2084 bl:2.3813 bb:1.1579 rl:2.3040 rb:1.0498 dl:337-337 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b619/2084 bl:2.1965 bb:1.0606 rl:2.3039 rb:1.0498 dl:332-333 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b608/2084 bl:2.3695 bb:1.0820 rl:2.3040 rb:1.0499 dl:327-328 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b599/2084 bl:2.4709 bb:1.1405 rl:2.3042 rb:1.0500 dl:324-324 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b589/2084 bl:2.3212 bb:1.0834 rl:2.3042 rb:1.0500 dl:319-320 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b580/2084 bl:2.5167 bb:1.1279 rl:2.3045 rb:1.0502 dl:315-316 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b572/2084 bl:2.3768 bb:1.1202 rl:2.3046 rb:1.0502 dl:312-312 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b562/2084 bl:2.3016 bb:1.0555 rl:2.3046 rb:1.0502 dl:308-308 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b552/2084 bl:2.3743 bb:1.1631 rl:2.3047 rb:1.0504 dl:304-304 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b545/2084 bl:2.3904 bb:1.1669 rl:2.3048 rb:1.0505 dl:301-301 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b535/2084 bl:2.4620 bb:1.1310 rl:2.3050 rb:1.0506 dl:297-297 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b521/2084 bl:2.3857 bb:1.1045 rl:2.3051 rb:1.0507 dl:291-292 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b512/2084 bl:2.2873 bb:1.1459 rl:2.3051 rb:1.0508 dl:287-288 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b502/2084 bl:2.2983 bb:1.0784 rl:2.3051 rb:1.0508 dl:283-284 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b493/2084 bl:2.4410 bb:1.1235 rl:2.3052 rb:1.0509 dl:280-280 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b485/2084 bl:2.3155 bb:1.0808 rl:2.3052 rb:1.0509 dl:277-277 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b475/2084 bl:2.2867 bb:1.1339 rl:2.3052 rb:1.0510 dl:273-273 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b463/2084 bl:2.4960 bb:1.1508 rl:2.3054 rb:1.0511 dl:269-269 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b452/2084 bl:2.2530 bb:1.1435 rl:2.3054 rb:1.0512 dl:264-265 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b444/2084 bl:2.5224 bb:1.1464 rl:2.3056 rb:1.0513 dl:262-262 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b430/2084 bl:2.4144 bb:1.1838 rl:2.3057 rb:1.0515 dl:256-257 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b421/2084 bl:2.4692 bb:1.2036 rl:2.3059 rb:1.0516 dl:253-254 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b414/2084 bl:2.4389 bb:1.1951 rl:2.3060 rb:1.0517 dl:251-251 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b406/2084 bl:2.2283 bb:1.0748 rl:2.3059 rb:1.0518 dl:248-248 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b398/2084 bl:2.2186 bb:1.0816 rl:2.3058 rb:1.0518 dl:245-245 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b386/2084 bl:2.4731 bb:1.0827 rl:2.3060 rb:1.0518 dl:241-241 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b376/2084 bl:2.4923 bb:1.1622 rl:2.3062 rb:1.0519 dl:237-237 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b367/2084 bl:2.3697 bb:1.1037 rl:2.3063 rb:1.0520 dl:234-234 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b357/2084 bl:2.2488 bb:1.0361 rl:2.3062 rb:1.0520 dl:230-231 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b353/2084 bl:2.4472 bb:1.0966 rl:2.3063 rb:1.0520 
dl:229-229 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b341/2084 bl:2.3100 bb:1.1148 rl:2.3063 rb:1.0521 dl:225-225 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b330/2084 bl:2.4624 bb:1.2123 rl:2.3065 rb:1.0522 dl:221-221 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b322/2084 bl:2.3952 bb:1.0881 rl:2.3066 rb:1.0522 dl:218-218 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b312/2084 bl:2.4604 bb:1.1686 rl:2.3067 rb:1.0523 dl:215-215 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b302/2084 bl:2.4686 bb:1.1468 rl:2.3068 rb:1.0524 dl:211-212 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b295/2084 bl:2.3509 bb:1.1860 rl:2.3069 rb:1.0525 dl:209-209 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b284/2084 bl:2.4350 bb:1.1662 rl:2.3070 rb:1.0526 dl:205-205 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b274/2084 bl:2.4818 bb:1.1574 rl:2.3071 rb:1.0527 dl:202-202 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b264/2084 bl:2.5139 bb:1.2043 rl:2.3073 rb:1.0528 dl:198-199 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b256/2084 bl:2.3709 bb:1.1289 rl:2.3073 rb:1.0529 dl:196-196 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b244/2084 bl:2.4321 bb:1.1403 rl:2.3074 rb:1.0529 dl:192-192 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b237/2084 bl:2.4156 bb:1.1085 rl:2.3075 rb:1.0530 dl:189-189 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b226/2084 bl:2.4827 bb:1.2181 rl:2.3076 rb:1.0531 dl:186-186 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b217/2084 bl:2.4920 bb:1.1296 rl:2.3078 rb:1.0531 dl:183-183 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b205/2084 bl:2.5775 bb:1.2409 rl:2.3080 rb:1.0533 dl:178-179 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b197/2084 bl:2.6223 bb:1.1519 rl:2.3082 rb:1.0533 dl:176-176 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b187/2084 bl:2.4507 bb:1.2052 rl:2.3083 rb:1.0534 dl:173-173 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b177/2084 bl:2.5504 bb:1.2657 rl:2.3085 rb:1.0536 dl:169-170 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b166/2084 bl:2.5417 bb:1.2687 rl:2.3086 rb:1.0537 dl:165-166 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b157/2084 bl:2.4719 bb:1.2130 rl:2.3087 rb:1.0538 dl:162-163 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b149/2084 bl:2.6401 bb:1.2810 rl:2.3089 rb:1.0539 dl:159-160 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b142/2084 bl:2.4407 bb:1.2358 rl:2.3090 rb:1.0540 dl:157-157 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b132/2084 bl:2.2676 bb:1.0946 rl:2.3090 rb:1.0541 dl:153-153 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b121/2084 bl:2.5878 bb:1.2011 rl:2.3092 rb:1.0541 dl:149-149 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b109/2084 bl:2.5017 bb:1.1970 rl:2.3093 rb:1.0542 dl:145-145 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b101/2084 bl:2.5812 bb:1.2194 rl:2.3094 rb:1.0543 dl:142-142 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b91/2084 bl:2.6322 bb:1.2221 rl:2.3096 rb:1.0544 dl:138-138 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b80/2084 bl:2.6579 bb:1.2773 rl:2.3098 rb:1.0545 dl:134-134 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b73/2084 bl:2.5994 bb:1.2203 rl:2.3099 rb:1.0546 dl:131-131 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b62/2084 bl:2.4170 bb:1.1437 rl:2.3100 rb:1.0546 dl:127-127 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b50/2084 bl:2.7368 bb:1.2231 rl:2.3102 rb:1.0547 dl:121-122 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b41/2084 bl:2.6526 bb:1.2717 rl:2.3104 rb:1.0548 dl:117-117 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b32/2084 bl:2.6001 bb:1.1613 rl:2.3105 rb:1.0549 dl:112-112 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b21/2084 bl:2.6862 bb:1.2306 rl:2.3107 rb:1.0549 dl:104-105 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b10/2084 bl:2.7445 bb:1.2184 rl:2.3108 rb:1.0550 dl:94-95 gd:1 sr:0 sf:1 tr:24/24 wt:0 +quantized_ttt_phased val_loss:2.31193897 val_bpb:1.05646316 eval_time:545161ms +total_eval_time:545.2s diff --git 
a/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/train_eval_seed0_corrected_token_only.log b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/train_eval_seed0_corrected_token_only.log new file mode 100644 index 0000000000..f286f3bda2 --- /dev/null +++ b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/train_eval_seed0_corrected_token_only.log @@ -0,0 +1,878 @@ +W0502 18:01:54.156000 345876 torch/distributed/run.py:803] +W0502 18:01:54.156000 345876 torch/distributed/run.py:803] ***************************************** +W0502 18:01:54.156000 345876 torch/distributed/run.py:803] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. +W0502 18:01:54.156000 345876 torch/distributed/run.py:803] ***************************************** +Hyperparameters: + adam_eps: 1e-08 + adam_wd: 0.02 + agree_add_boost: 0.0 + artifact_dir: /workspace/parameter-golf-pr2014-gated-clean/records/corrected_zeroed_channels_authorhf_hardoff_20260502/seed0 + attn_clip_sigmas: 13.0 + attn_out_gate_enabled: False + attn_out_gate_src: proj + awq_lite_bits: 8 + awq_lite_enabled: True + awq_lite_group_size: 64 + awq_lite_group_top_k: 1 + beta1: 0.9 + beta2: 0.99 + caseops_enabled: True + compile_shape_warmup: True + compile_shape_warmup_iters: 1 + compile_shape_warmup_loop_modes: auto + compressor: pergroup + data_dir: ./data + datasets_dir: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved + distributed: True + ema_decay: 0.9965 + embed_bits: 7 + embed_clip_sigmas: 14.0 + embed_lr: 0.6 + embed_wd: 0.085 + enable_looping_at: 0.35 + eval_include_tail: True + eval_seq_len: 3072 + eval_stride: 1536 + fused_ce_enabled: True + gate_window: 12 + gated_attn_enabled: False + gated_attn_init_std: 0.01 + gated_attn_quant_gate: True + global_ttt_batch_seqs: 32 + global_ttt_chunk_tokens: 32768 + global_ttt_epochs: 1 + global_ttt_grad_clip: 1.0 + global_ttt_lr: 0.001 + global_ttt_momentum: 0.9 + global_ttt_respect_doc_boundaries: True + global_ttt_warmup_chunks: 0 + global_ttt_warmup_start_lr: 0.0 + gptq_calibration_batches: 16 + gptq_reserve_seconds: 4.0 + grad_accum_steps: 1 + grad_clip_norm: 0.3 + is_main_process: True + iterations: 20000 + leaky_relu_sq_slope: 0.3 + ln_scale: True + local_rank: 0 + logfile: /workspace/parameter-golf-pr2014-gated-clean/records/corrected_zeroed_channels_authorhf_hardoff_20260502/seed0/pr2140_corrected_authorhf_hardoff_s0.txt + logit_softcap: 30.0 + loop_end: 5 + loop_start: 3 + lqer_asym_enabled: True + lqer_asym_group: 64 + lqer_enabled: True + lqer_factor_bits: 4 + lqer_gain_select: False + lqer_rank: 4 + lqer_scope: all + lqer_top_k: 3 + matrix_bits: 6 + matrix_clip_sigmas: 12.85 + matrix_lr: 0.026 + max_wallclock_seconds: 600.0 + midrun_cap_log_updates: False + midrun_cap_schedule: + min_lr: 0.1 + mlp_clip_sigmas: 11.5 + mlp_mult: 4.0 + model_dim: 512 + model_path: /workspace/parameter-golf-pr2014-gated-clean/records/corrected_zeroed_channels_authorhf_hardoff_20260502/seed0/final_model.pt + muon_backend_steps: 5 + muon_momentum: 0.97 + muon_momentum_warmup_start: 0.92 + muon_momentum_warmup_steps: 1500 + muon_row_normalize: True + muon_wd: 0.095 + ngram_hint_precompute_outside: False + ngram_tilt_enabled: True + num_heads: 8 + num_kv_heads: 4 + num_layers: 11 + num_loops: 2 + parallel_final_lane: mean + 
parallel_start_layer: 8 + phased_ttt_num_phases: 1 + phased_ttt_prefix_docs: 2500 + qk_gain_init: 5.25 + quantized_model_path: /workspace/parameter-golf-pr2014-gated-clean/records/corrected_zeroed_channels_authorhf_hardoff_20260502/seed0/final_model.int6.ptz + rank: 0 + rope_base: 10000.0 + rope_dims: 16 + rope_train_seq_len: 3072 + rope_yarn: False + run_id: pr2140_corrected_authorhf_hardoff_s0 + scalar_lr: 0.02 + seed: 0 + seq_change_warmup_steps: 32 + skip_gates_enabled: True + skylight_norm_beta2: 0.95 + skylight_norm_ema: False + skylight_norm_eps: 1e-07 + skylight_uw_floor: False + skylight_uw_ratio: 0.35 + smear_gate_enabled: True + sparse_attn_gate_enabled: True + sparse_attn_gate_init_std: 0.0 + sparse_attn_gate_scale: 0.5 + tie_embeddings: True + tied_embed_init_std: 0.005 + tied_embed_lr: 0.03 + token_boost: 2.625 + token_order: 16 + token_threshold: 0.8 + tokenizer_path: /tmp/parameter-golf-data-authorhf/tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model + train_batch_tokens: 786432 + train_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_train_*.bin + train_log_every: 500 + train_seq_len: 3072 + train_seq_schedule: 1024@0.100,2048@0.700,3072@1.000 + train_seq_schedule_mode: wallclock + ttt_batch_size: 24 + ttt_beta1: 0.0 + ttt_beta2: 0.99 + ttt_chunk_size: 64 + ttt_enabled: True + ttt_eval_batches: + ttt_eval_seq_len: 3072 + ttt_grad_steps: 1 + ttt_k_lora: True + ttt_local_lr_mult: 0.75 + ttt_lora_lr: 0.0001 + ttt_lora_rank: 80 + ttt_mask: no_qv + ttt_mlp_lora: True + ttt_o_lora: True + ttt_optimizer: adam + ttt_q_lora: False + ttt_short_beta2: 0.99 + ttt_short_chunk_size: 32 + ttt_short_doc_len: 2000 + ttt_short_lora_enabled: False + ttt_short_lora_lr: 0.0001 + ttt_short_lora_rank: 80 + ttt_short_score_first_enabled: True + ttt_short_score_first_steps: 256:16,2000:32 + ttt_short_weight_decay: 0.5 + ttt_train_max_doc_len: 0 + ttt_train_min_doc_len: 0 + ttt_v_lora: False + ttt_warm_start_mean_doc_len: 2000 + ttt_warm_start_mean_enabled: False + ttt_warm_start_mean_momentum: 0.95 + ttt_weight_decay: 0.5 + val_batch_tokens: 524288 + val_bytes_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_val_bytes_*.bin + val_doc_fraction: 1.0 + val_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_val_*.bin + val_loss_every: 0 + vocab_size: 8192 + warmdown_frac: 0.85 + warmdown_iters: 0 + warmup_steps: 20 + within_boost: 0.0 + within_tau: 0.45 + word_boost: 0.0 + word_normalize: strip_punct_lower + word_order: 4 + word_tau: 0.65 + world_size: 8 + xsa_last_n: 11 +train_shards: 80 +val_tokens: 47853343 +model_params:35945673 +train_seq_schedule:1024@0.100,2048@0.700,3072@1.000 +local_microbatch_tokens:98304 +growth_stage:seq_len:1024 progress:0.000 +gptq:reserving 4s, effective=596000ms +warmup_cu_buckets:64,128,192,256 iters_each:3 +compile_shape_warmup:start 1024xplain,2048xplain,2048xloop,3072xloop +compile_shape_warmup:shape seq_len:1024 loop:0 +compile_shape_warmup:shape seq_len:2048 loop:0 +compile_shape_warmup:shape seq_len:2048 loop:1 +compile_shape_warmup:shape seq_len:3072 loop:1 +warmup_step: 1/20 +warmup_step: 2/20 +warmup_step: 3/20 +warmup_step: 4/20 +warmup_step: 5/20 +warmup_step: 6/20 +warmup_step: 10/20 +warmup_step: 20/20 +loop_warmup:enabled encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +loop_warmup_step: 1/20 +loop_warmup_step: 2/20 +loop_warmup_step: 
3/20 +loop_warmup_step: 4/20 +loop_warmup_step: 5/20 +loop_warmup_step: 6/20 +loop_warmup_step: 10/20 +loop_warmup_step: 20/20 +1/20000 train_loss: 9.0105 train_time: 0.0m tok/s: 17590870 +2/20000 train_loss: 12.9459 train_time: 0.0m tok/s: 730352 +3/20000 train_loss: 10.2468 train_time: 0.0m tok/s: 1053126 +4/20000 train_loss: 8.7587 train_time: 0.0m tok/s: 1346644 +5/20000 train_loss: 8.0140 train_time: 0.0m tok/s: 1618471 +500/20000 train_loss: 2.6146 train_time: 0.9m tok/s: 7700066 +growth_stage:seq_len:2048 progress:0.100 step:593 +growth_stage_rewarmup:start step:593 steps:32 seq_len:2048 +1000/20000 train_loss: 2.5802 train_time: 1.6m tok/s: 7993682 +1500/20000 train_loss: 2.6234 train_time: 2.4m tok/s: 8077910 +2000/20000 train_loss: 2.6543 train_time: 3.2m tok/s: 8112548 +layer_loop:enabled step:2155 frac:0.350 encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +2500/20000 train_loss: 2.5045 train_time: 4.3m tok/s: 7591754 +3000/20000 train_loss: 2.4542 train_time: 5.5m tok/s: 7176221 +3500/20000 train_loss: 2.4661 train_time: 6.6m tok/s: 6907294 +growth_stage:seq_len:3072 progress:0.700 step:3634 +growth_stage_rewarmup:start step:3634 steps:32 seq_len:3072 +4000/20000 train_loss: 2.3833 train_time: 7.8m tok/s: 6693234 +4500/20000 train_loss: 2.3477 train_time: 9.0m tok/s: 6527538 +4861/20000 val_loss: 2.3470 val_bpb: 1.0725 +stopping_early: wallclock_cap train_time: 596182ms step: 4861/20000 +peak memory allocated: 41707 MiB reserved: 46984 MiB +ema:applying EMA weights +diagnostic pre-quantization post-ema val_loss:2.32243873 val_bpb:1.06126113 eval_time:18048ms +Serialized model: 135418111 bytes +Code size (uncompressed): 207577 bytes +Code size (compressed): 40445 bytes +GPTQ:collecting Hessians from calibration data... +GPTQ:collected 67 Hessians in 4.1s +Quantized weights: + gate_int8_row: blocks.attn.attn_gate_w + gptq (int6): blocks.attn.c_k.weight, blocks.attn.c_q.weight, blocks.attn.c_v.weight, blocks.attn.proj.weight, blocks.mlp.fc.weight, blocks.mlp.proj.weight + gptq (int6)+lqer_asym: blocks.mlp.fc.weight + gptq (int7)+awqgrpint8+lqer_asym: tok_emb.weight + passthrough (float16): blocks.attn.q_gain, blocks.attn_scale, blocks.mlp_scale, blocks.resid_mix, parallel_post_lambdas, parallel_resid_lambdas, skip_gates, skip_weights, smear_gate.weight, smear_lambda, softcap_neg, softcap_pos +Serialize: per-group lrzip compression... +Serialize: per-group compression done in 112.1s +Serialized model quantized+pergroup: 15944987 bytes +Total submission size quantized+pergroup: 15985432 bytes +Deserialize: per-group lrzip decompression... +Deserialize: decompression done in 17.9s +diagnostic quantized val_loss:2.34211201 val_bpb:1.07025102 eval_time:18952ms +Deserialize: per-group lrzip decompression... 
+Deserialize: decompression done in 17.9s +ttt_lora:warming up compile (random tokens, no val data) +ttt_lora:compile warmup done (186.5s) + +beginning TTT eval timer +ngram_tilt:building_native_helper src=online_ngram_state.c +ngram_tilt:hints total=47853343 gated=628156 token_gate=628156 within_gate=0 word_gate=0 agree2plus=0 +ngram_tilt:precompute_outside_timer_done elapsed=16.08s total_targets=47853343 +ttt_phased: total_docs:50000 prefix_docs:2500 suffix_docs:47500 num_phases:1 boundaries:[2500] target_tokens:47853343 +ttp: b2079/2084 bl:2.2483 bb:1.0826 rl:2.2483 rb:1.0826 dl:13679-14936 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2074/2084 bl:2.4393 bb:1.1100 rl:2.3260 rb:1.0941 dl:9553-10083 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2070/2084 bl:2.3649 bb:1.1240 rl:2.3361 rb:1.1018 dl:8228-8606 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2063/2084 bl:2.2820 bb:1.0748 rl:2.3270 rb:1.0972 dl:6523-6721 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2059/2084 bl:2.2096 bb:1.0724 rl:2.3112 rb:1.0940 dl:6007-6142 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2051/2084 bl:2.3310 bb:1.1030 rl:2.3132 rb:1.0949 dl:5231-5322 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2042/2084 bl:2.1117 bb:0.9778 rl:2.2964 rb:1.0849 dl:4576-4641 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2036/2084 bl:2.2523 bb:1.0566 rl:2.2932 rb:1.0828 dl:4294-4331 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2029/2084 bl:2.3192 bb:1.1010 rl:2.2948 rb:1.0840 dl:3968-4022 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2022/2084 bl:2.3028 bb:1.0281 rl:2.2952 rb:1.0807 dl:3729-3760 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2015/2084 bl:2.3420 bb:1.0095 rl:2.2976 rb:1.0768 dl:3488-3516 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2007/2084 bl:2.2482 bb:0.9987 rl:2.2954 rb:1.0731 dl:3303-3324 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1999/2084 bl:2.3445 bb:1.0478 rl:2.2973 rb:1.0721 dl:3109-3122 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1992/2084 bl:2.3466 bb:1.0508 rl:2.2992 rb:1.0713 dl:2976-2991 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1986/2084 bl:2.2686 bb:1.0312 rl:2.2981 rb:1.0698 dl:2856-2872 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1979/2084 bl:2.3681 bb:1.0895 rl:2.3004 rb:1.0705 dl:2753-2769 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttpp: phase:1/1 pd:2672 gd:2500 t:215.1s +tttg: c1/344 lr:0.001000 t:2.3s +tttg: c2/344 lr:0.001000 t:2.4s +tttg: c3/344 lr:0.001000 t:2.5s +tttg: c4/344 lr:0.001000 t:2.6s +tttg: c5/344 lr:0.001000 t:2.7s +tttg: c6/344 lr:0.000999 t:2.8s +tttg: c7/344 lr:0.000999 t:2.9s +tttg: c8/344 lr:0.000999 t:3.0s +tttg: c9/344 lr:0.000999 t:3.1s +tttg: c10/344 lr:0.000998 t:3.2s +tttg: c11/344 lr:0.000998 t:3.3s +tttg: c12/344 lr:0.000997 t:3.3s +tttg: c13/344 lr:0.000997 t:3.4s +tttg: c14/344 lr:0.000996 t:3.5s +tttg: c15/344 lr:0.000996 t:3.6s +tttg: c16/344 lr:0.000995 t:3.7s +tttg: c17/344 lr:0.000995 t:3.8s +tttg: c18/344 lr:0.000994 t:3.9s +tttg: c19/344 lr:0.000993 t:4.0s +tttg: c20/344 lr:0.000992 t:4.1s +tttg: c21/344 lr:0.000992 t:4.2s +tttg: c22/344 lr:0.000991 t:4.3s +tttg: c23/344 lr:0.000990 t:4.4s +tttg: c24/344 lr:0.000989 t:4.5s +tttg: c25/344 lr:0.000988 t:4.6s +tttg: c26/344 lr:0.000987 t:4.7s +tttg: c27/344 lr:0.000986 t:4.8s +tttg: c28/344 lr:0.000985 t:4.9s +tttg: c29/344 lr:0.000984 t:5.0s +tttg: c30/344 lr:0.000982 t:5.1s +tttg: c31/344 lr:0.000981 t:5.1s +tttg: c32/344 lr:0.000980 t:5.2s +tttg: c33/344 lr:0.000979 t:5.3s +tttg: c34/344 lr:0.000977 t:5.4s +tttg: c35/344 lr:0.000976 t:5.5s +tttg: c36/344 lr:0.000975 t:5.6s +tttg: c37/344 lr:0.000973 t:5.7s +tttg: c38/344 lr:0.000972 t:5.8s +tttg: c39/344 lr:0.000970 t:5.9s +tttg: c40/344 lr:0.000968 t:6.0s +tttg: c41/344 lr:0.000967 t:6.1s +tttg: 
c42/344 lr:0.000965 t:6.2s +tttg: c43/344 lr:0.000963 t:6.3s +tttg: c44/344 lr:0.000962 t:6.4s +tttg: c45/344 lr:0.000960 t:6.5s +tttg: c46/344 lr:0.000958 t:6.6s +tttg: c47/344 lr:0.000956 t:6.7s +tttg: c48/344 lr:0.000954 t:6.8s +tttg: c49/344 lr:0.000952 t:6.8s +tttg: c50/344 lr:0.000950 t:6.9s +tttg: c51/344 lr:0.000948 t:7.0s +tttg: c52/344 lr:0.000946 t:7.1s +tttg: c53/344 lr:0.000944 t:7.2s +tttg: c54/344 lr:0.000942 t:7.3s +tttg: c55/344 lr:0.000940 t:7.4s +tttg: c56/344 lr:0.000938 t:7.5s +tttg: c57/344 lr:0.000936 t:7.6s +tttg: c58/344 lr:0.000933 t:7.7s +tttg: c59/344 lr:0.000931 t:7.8s +tttg: c60/344 lr:0.000929 t:7.9s +tttg: c61/344 lr:0.000926 t:8.0s +tttg: c62/344 lr:0.000924 t:8.1s +tttg: c63/344 lr:0.000922 t:8.2s +tttg: c64/344 lr:0.000919 t:8.3s +tttg: c65/344 lr:0.000917 t:8.4s +tttg: c66/344 lr:0.000914 t:8.5s +tttg: c67/344 lr:0.000911 t:8.6s +tttg: c68/344 lr:0.000909 t:8.6s +tttg: c69/344 lr:0.000906 t:8.7s +tttg: c70/344 lr:0.000903 t:8.8s +tttg: c71/344 lr:0.000901 t:8.9s +tttg: c72/344 lr:0.000898 t:9.0s +tttg: c73/344 lr:0.000895 t:9.1s +tttg: c74/344 lr:0.000892 t:9.2s +tttg: c75/344 lr:0.000889 t:9.3s +tttg: c76/344 lr:0.000887 t:9.4s +tttg: c77/344 lr:0.000884 t:9.5s +tttg: c78/344 lr:0.000881 t:9.6s +tttg: c79/344 lr:0.000878 t:9.7s +tttg: c80/344 lr:0.000875 t:9.8s +tttg: c81/344 lr:0.000872 t:9.9s +tttg: c82/344 lr:0.000869 t:10.0s +tttg: c83/344 lr:0.000865 t:10.1s +tttg: c84/344 lr:0.000862 t:10.2s +tttg: c85/344 lr:0.000859 t:10.3s +tttg: c86/344 lr:0.000856 t:10.3s +tttg: c87/344 lr:0.000853 t:10.4s +tttg: c88/344 lr:0.000849 t:10.5s +tttg: c89/344 lr:0.000846 t:10.6s +tttg: c90/344 lr:0.000843 t:10.7s +tttg: c91/344 lr:0.000840 t:10.8s +tttg: c92/344 lr:0.000836 t:10.9s +tttg: c93/344 lr:0.000833 t:11.0s +tttg: c94/344 lr:0.000829 t:11.1s +tttg: c95/344 lr:0.000826 t:11.2s +tttg: c96/344 lr:0.000822 t:11.3s +tttg: c97/344 lr:0.000819 t:11.4s +tttg: c98/344 lr:0.000815 t:11.5s +tttg: c99/344 lr:0.000812 t:11.6s +tttg: c100/344 lr:0.000808 t:11.7s +tttg: c101/344 lr:0.000805 t:11.8s +tttg: c102/344 lr:0.000801 t:11.9s +tttg: c103/344 lr:0.000797 t:12.0s +tttg: c104/344 lr:0.000794 t:12.1s +tttg: c105/344 lr:0.000790 t:12.1s +tttg: c106/344 lr:0.000786 t:12.2s +tttg: c107/344 lr:0.000782 t:12.3s +tttg: c108/344 lr:0.000778 t:12.4s +tttg: c109/344 lr:0.000775 t:12.5s +tttg: c110/344 lr:0.000771 t:12.6s +tttg: c111/344 lr:0.000767 t:12.7s +tttg: c112/344 lr:0.000763 t:12.8s +tttg: c113/344 lr:0.000759 t:12.9s +tttg: c114/344 lr:0.000755 t:13.0s +tttg: c115/344 lr:0.000751 t:13.1s +tttg: c116/344 lr:0.000747 t:13.2s +tttg: c117/344 lr:0.000743 t:13.3s +tttg: c118/344 lr:0.000739 t:13.4s +tttg: c119/344 lr:0.000735 t:13.5s +tttg: c120/344 lr:0.000731 t:13.6s +tttg: c121/344 lr:0.000727 t:13.7s +tttg: c122/344 lr:0.000723 t:13.8s +tttg: c123/344 lr:0.000719 t:13.8s +tttg: c124/344 lr:0.000715 t:13.9s +tttg: c125/344 lr:0.000711 t:14.0s +tttg: c126/344 lr:0.000707 t:14.1s +tttg: c127/344 lr:0.000702 t:14.2s +tttg: c128/344 lr:0.000698 t:14.3s +tttg: c129/344 lr:0.000694 t:14.4s +tttg: c130/344 lr:0.000690 t:14.5s +tttg: c131/344 lr:0.000686 t:14.6s +tttg: c132/344 lr:0.000681 t:14.7s +tttg: c133/344 lr:0.000677 t:14.8s +tttg: c134/344 lr:0.000673 t:14.9s +tttg: c135/344 lr:0.000668 t:15.0s +tttg: c136/344 lr:0.000664 t:15.1s +tttg: c137/344 lr:0.000660 t:15.2s +tttg: c138/344 lr:0.000655 t:15.3s +tttg: c139/344 lr:0.000651 t:15.4s +tttg: c140/344 lr:0.000647 t:15.5s +tttg: c141/344 lr:0.000642 t:15.6s +tttg: c142/344 lr:0.000638 t:15.6s +tttg: c143/344 
lr:0.000633 t:15.7s +tttg: c144/344 lr:0.000629 t:15.8s +tttg: c145/344 lr:0.000625 t:15.9s +tttg: c146/344 lr:0.000620 t:16.0s +tttg: c147/344 lr:0.000616 t:16.1s +tttg: c148/344 lr:0.000611 t:16.2s +tttg: c149/344 lr:0.000607 t:16.3s +tttg: c150/344 lr:0.000602 t:16.4s +tttg: c151/344 lr:0.000598 t:16.5s +tttg: c152/344 lr:0.000593 t:16.6s +tttg: c153/344 lr:0.000589 t:16.7s +tttg: c154/344 lr:0.000584 t:16.8s +tttg: c155/344 lr:0.000580 t:16.9s +tttg: c156/344 lr:0.000575 t:17.0s +tttg: c157/344 lr:0.000571 t:17.1s +tttg: c158/344 lr:0.000566 t:17.2s +tttg: c159/344 lr:0.000562 t:17.3s +tttg: c160/344 lr:0.000557 t:17.4s +tttg: c161/344 lr:0.000553 t:17.4s +tttg: c162/344 lr:0.000548 t:17.5s +tttg: c163/344 lr:0.000543 t:17.6s +tttg: c164/344 lr:0.000539 t:17.7s +tttg: c165/344 lr:0.000534 t:17.8s +tttg: c166/344 lr:0.000530 t:17.9s +tttg: c167/344 lr:0.000525 t:18.0s +tttg: c168/344 lr:0.000521 t:18.1s +tttg: c169/344 lr:0.000516 t:18.2s +tttg: c170/344 lr:0.000511 t:18.3s +tttg: c171/344 lr:0.000507 t:18.4s +tttg: c172/344 lr:0.000502 t:18.5s +tttg: c173/344 lr:0.000498 t:18.6s +tttg: c174/344 lr:0.000493 t:18.7s +tttg: c175/344 lr:0.000489 t:18.8s +tttg: c176/344 lr:0.000484 t:18.9s +tttg: c177/344 lr:0.000479 t:19.0s +tttg: c178/344 lr:0.000475 t:19.1s +tttg: c179/344 lr:0.000470 t:19.2s +tttg: c180/344 lr:0.000466 t:19.2s +tttg: c181/344 lr:0.000461 t:19.3s +tttg: c182/344 lr:0.000457 t:19.4s +tttg: c183/344 lr:0.000452 t:19.5s +tttg: c184/344 lr:0.000447 t:19.6s +tttg: c185/344 lr:0.000443 t:19.7s +tttg: c186/344 lr:0.000438 t:19.8s +tttg: c187/344 lr:0.000434 t:19.9s +tttg: c188/344 lr:0.000429 t:20.0s +tttg: c189/344 lr:0.000425 t:20.1s +tttg: c190/344 lr:0.000420 t:20.2s +tttg: c191/344 lr:0.000416 t:20.3s +tttg: c192/344 lr:0.000411 t:20.4s +tttg: c193/344 lr:0.000407 t:20.5s +tttg: c194/344 lr:0.000402 t:20.6s +tttg: c195/344 lr:0.000398 t:20.7s +tttg: c196/344 lr:0.000393 t:20.8s +tttg: c197/344 lr:0.000389 t:20.8s +tttg: c198/344 lr:0.000384 t:20.9s +tttg: c199/344 lr:0.000380 t:21.0s +tttg: c200/344 lr:0.000375 t:21.1s +tttg: c201/344 lr:0.000371 t:21.2s +tttg: c202/344 lr:0.000367 t:21.3s +tttg: c203/344 lr:0.000362 t:21.4s +tttg: c204/344 lr:0.000358 t:21.5s +tttg: c205/344 lr:0.000353 t:21.6s +tttg: c206/344 lr:0.000349 t:21.7s +tttg: c207/344 lr:0.000345 t:21.8s +tttg: c208/344 lr:0.000340 t:21.9s +tttg: c209/344 lr:0.000336 t:22.0s +tttg: c210/344 lr:0.000332 t:22.1s +tttg: c211/344 lr:0.000327 t:22.2s +tttg: c212/344 lr:0.000323 t:22.3s +tttg: c213/344 lr:0.000319 t:22.4s +tttg: c214/344 lr:0.000314 t:22.5s +tttg: c215/344 lr:0.000310 t:22.5s +tttg: c216/344 lr:0.000306 t:22.6s +tttg: c217/344 lr:0.000302 t:22.7s +tttg: c218/344 lr:0.000298 t:22.8s +tttg: c219/344 lr:0.000293 t:22.9s +tttg: c220/344 lr:0.000289 t:23.0s +tttg: c221/344 lr:0.000285 t:23.1s +tttg: c222/344 lr:0.000281 t:23.2s +tttg: c223/344 lr:0.000277 t:23.3s +tttg: c224/344 lr:0.000273 t:23.4s +tttg: c225/344 lr:0.000269 t:23.5s +tttg: c226/344 lr:0.000265 t:23.6s +tttg: c227/344 lr:0.000261 t:23.7s +tttg: c228/344 lr:0.000257 t:23.8s +tttg: c229/344 lr:0.000253 t:23.9s +tttg: c230/344 lr:0.000249 t:24.0s +tttg: c231/344 lr:0.000245 t:24.1s +tttg: c232/344 lr:0.000241 t:24.2s +tttg: c233/344 lr:0.000237 t:24.2s +tttg: c234/344 lr:0.000233 t:24.3s +tttg: c235/344 lr:0.000229 t:24.4s +tttg: c236/344 lr:0.000225 t:24.5s +tttg: c237/344 lr:0.000222 t:24.6s +tttg: c238/344 lr:0.000218 t:24.7s +tttg: c239/344 lr:0.000214 t:24.8s +tttg: c240/344 lr:0.000210 t:24.9s +tttg: c241/344 lr:0.000206 t:25.0s +tttg: 
c242/344 lr:0.000203 t:25.1s +tttg: c243/344 lr:0.000199 t:25.2s +tttg: c244/344 lr:0.000195 t:25.3s +tttg: c245/344 lr:0.000192 t:25.4s +tttg: c246/344 lr:0.000188 t:25.5s +tttg: c247/344 lr:0.000185 t:25.6s +tttg: c248/344 lr:0.000181 t:25.7s +tttg: c249/344 lr:0.000178 t:25.8s +tttg: c250/344 lr:0.000174 t:25.9s +tttg: c251/344 lr:0.000171 t:25.9s +tttg: c252/344 lr:0.000167 t:26.0s +tttg: c253/344 lr:0.000164 t:26.1s +tttg: c254/344 lr:0.000160 t:26.2s +tttg: c255/344 lr:0.000157 t:26.3s +tttg: c256/344 lr:0.000154 t:26.4s +tttg: c257/344 lr:0.000151 t:26.5s +tttg: c258/344 lr:0.000147 t:26.6s +tttg: c259/344 lr:0.000144 t:26.7s +tttg: c260/344 lr:0.000141 t:26.8s +tttg: c261/344 lr:0.000138 t:26.9s +tttg: c262/344 lr:0.000135 t:27.0s +tttg: c263/344 lr:0.000131 t:27.1s +tttg: c264/344 lr:0.000128 t:27.2s +tttg: c265/344 lr:0.000125 t:27.3s +tttg: c266/344 lr:0.000122 t:27.4s +tttg: c267/344 lr:0.000119 t:27.5s +tttg: c268/344 lr:0.000116 t:27.6s +tttg: c269/344 lr:0.000113 t:27.7s +tttg: c270/344 lr:0.000111 t:27.8s +tttg: c271/344 lr:0.000108 t:27.8s +tttg: c272/344 lr:0.000105 t:27.9s +tttg: c273/344 lr:0.000102 t:28.0s +tttg: c274/344 lr:0.000099 t:28.1s +tttg: c275/344 lr:0.000097 t:28.2s +tttg: c276/344 lr:0.000094 t:28.3s +tttg: c277/344 lr:0.000091 t:28.4s +tttg: c278/344 lr:0.000089 t:28.5s +tttg: c279/344 lr:0.000086 t:28.6s +tttg: c280/344 lr:0.000083 t:28.7s +tttg: c281/344 lr:0.000081 t:28.8s +tttg: c282/344 lr:0.000078 t:28.9s +tttg: c283/344 lr:0.000076 t:29.0s +tttg: c284/344 lr:0.000074 t:29.1s +tttg: c285/344 lr:0.000071 t:29.2s +tttg: c286/344 lr:0.000069 t:29.3s +tttg: c287/344 lr:0.000067 t:29.4s +tttg: c288/344 lr:0.000064 t:29.5s +tttg: c289/344 lr:0.000062 t:29.6s +tttg: c290/344 lr:0.000060 t:29.7s +tttg: c291/344 lr:0.000058 t:29.7s +tttg: c292/344 lr:0.000056 t:29.8s +tttg: c293/344 lr:0.000054 t:29.9s +tttg: c294/344 lr:0.000052 t:30.0s +tttg: c295/344 lr:0.000050 t:30.1s +tttg: c296/344 lr:0.000048 t:30.2s +tttg: c297/344 lr:0.000046 t:30.3s +tttg: c298/344 lr:0.000044 t:30.4s +tttg: c299/344 lr:0.000042 t:30.5s +tttg: c300/344 lr:0.000040 t:30.6s +tttg: c301/344 lr:0.000038 t:30.7s +tttg: c302/344 lr:0.000037 t:30.8s +tttg: c303/344 lr:0.000035 t:30.9s +tttg: c304/344 lr:0.000033 t:31.0s +tttg: c305/344 lr:0.000032 t:31.1s +tttg: c306/344 lr:0.000030 t:31.2s +tttg: c307/344 lr:0.000028 t:31.3s +tttg: c308/344 lr:0.000027 t:31.4s +tttg: c309/344 lr:0.000025 t:31.4s +tttg: c310/344 lr:0.000024 t:31.5s +tttg: c311/344 lr:0.000023 t:31.6s +tttg: c312/344 lr:0.000021 t:31.7s +tttg: c313/344 lr:0.000020 t:31.8s +tttg: c314/344 lr:0.000019 t:31.9s +tttg: c315/344 lr:0.000018 t:32.0s +tttg: c316/344 lr:0.000016 t:32.1s +tttg: c317/344 lr:0.000015 t:32.2s +tttg: c318/344 lr:0.000014 t:32.3s +tttg: c319/344 lr:0.000013 t:32.4s +tttg: c320/344 lr:0.000012 t:32.5s +tttg: c321/344 lr:0.000011 t:32.6s +tttg: c322/344 lr:0.000010 t:32.7s +tttg: c323/344 lr:0.000009 t:32.8s +tttg: c324/344 lr:0.000008 t:32.9s +tttg: c325/344 lr:0.000008 t:33.0s +tttg: c326/344 lr:0.000007 t:33.0s +tttg: c327/344 lr:0.000006 t:33.1s +tttg: c328/344 lr:0.000005 t:33.2s +tttg: c329/344 lr:0.000005 t:33.3s +tttg: c330/344 lr:0.000004 t:33.4s +tttg: c331/344 lr:0.000004 t:33.5s +tttg: c332/344 lr:0.000003 t:33.6s +tttg: c333/344 lr:0.000003 t:33.7s +tttg: c334/344 lr:0.000002 t:33.8s +tttg: c335/344 lr:0.000002 t:33.9s +tttg: c336/344 lr:0.000001 t:34.0s +tttg: c337/344 lr:0.000001 t:34.1s +tttg: c338/344 lr:0.000001 t:34.2s +tttg: c339/344 lr:0.000001 t:34.3s +tttg: c340/344 lr:0.000000 
t:34.4s +tttg: c341/344 lr:0.000000 t:34.5s +tttg: c342/344 lr:0.000000 t:34.6s +tttg: c343/344 lr:0.000000 t:34.7s +ttpr: phase:1/1 t:250.3s +ttp: b1965/2084 bl:2.2753 bb:1.0074 rl:2.2996 rb:1.0685 dl:2565-2577 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1964/2084 bl:2.3681 bb:1.0570 rl:2.3016 rb:1.0682 dl:2553-2565 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1958/2084 bl:2.3163 bb:1.0940 rl:2.3020 rb:1.0689 dl:2492-2501 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1952/2084 bl:2.2979 bb:1.1320 rl:2.3019 rb:1.0704 dl:2433-2441 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1945/2084 bl:2.3193 bb:1.0360 rl:2.3023 rb:1.0695 dl:2352-2361 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1940/2084 bl:2.3817 bb:1.1001 rl:2.3041 rb:1.0702 dl:2307-2314 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1934/2084 bl:2.3081 bb:1.0188 rl:2.3042 rb:1.0691 dl:2252-2260 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1928/2084 bl:2.4295 bb:1.0801 rl:2.3068 rb:1.0693 dl:2197-2203 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1922/2084 bl:2.2913 bb:1.0301 rl:2.3065 rb:1.0685 dl:2151-2160 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1907/2084 bl:2.3452 bb:1.0496 rl:2.3072 rb:1.0681 dl:2049-2054 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1901/2084 bl:2.2411 bb:1.0017 rl:2.3060 rb:1.0669 dl:2009-2014 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1895/2084 bl:2.2866 bb:1.0356 rl:2.3057 rb:1.0663 dl:1971-1976 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1891/2084 bl:2.4387 bb:1.1287 rl:2.3080 rb:1.0674 dl:1948-1953 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1886/2084 bl:2.4088 bb:1.0331 rl:2.3096 rb:1.0668 dl:1922-1927 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1880/2084 bl:2.4808 bb:1.0801 rl:2.3124 rb:1.0670 dl:1891-1898 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1874/2084 bl:2.3802 bb:1.0483 rl:2.3134 rb:1.0667 dl:1863-1868 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1868/2084 bl:2.2885 bb:1.0325 rl:2.3130 rb:1.0662 dl:1836-1841 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1862/2084 bl:2.4539 bb:1.0467 rl:2.3151 rb:1.0659 dl:1813-1817 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1856/2084 bl:2.2718 bb:1.0410 rl:2.3145 rb:1.0655 dl:1786-1790 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1849/2084 bl:2.2723 bb:1.0369 rl:2.3139 rb:1.0651 dl:1758-1762 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1841/2084 bl:2.4676 bb:1.0502 rl:2.3159 rb:1.0649 dl:1726-1730 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1834/2084 bl:2.4687 bb:1.1305 rl:2.3179 rb:1.0657 dl:1700-1704 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1826/2084 bl:2.3551 bb:1.0346 rl:2.3184 rb:1.0653 dl:1669-1673 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1817/2084 bl:2.4104 bb:1.0952 rl:2.3195 rb:1.0657 dl:1638-1641 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1810/2084 bl:2.2679 bb:1.0336 rl:2.3189 rb:1.0653 dl:1613-1616 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1804/2084 bl:2.3237 bb:1.0338 rl:2.3190 rb:1.0649 dl:1596-1598 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1797/2084 bl:2.3754 bb:1.1106 rl:2.3196 rb:1.0655 dl:1574-1577 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1791/2084 bl:2.3771 bb:1.0421 rl:2.3202 rb:1.0652 dl:1554-1558 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1785/2084 bl:2.3268 bb:1.0910 rl:2.3203 rb:1.0655 dl:1538-1539 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1778/2084 bl:2.3720 bb:1.0483 rl:2.3208 rb:1.0653 dl:1517-1519 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1771/2084 bl:2.3838 bb:1.0681 rl:2.3215 rb:1.0653 dl:1497-1500 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1763/2084 bl:2.2263 bb:1.0486 rl:2.3205 rb:1.0651 dl:1479-1481 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1756/2084 bl:2.2722 bb:1.0487 rl:2.3201 rb:1.0650 dl:1459-1462 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1748/2084 bl:2.2560 bb:1.0536 rl:2.3194 rb:1.0649 dl:1440-1442 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1739/2084 bl:2.2548 bb:1.0806 rl:2.3188 
rb:1.0650 dl:1417-1420 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1732/2084 bl:2.2474 bb:1.0251 rl:2.3182 rb:1.0646 dl:1399-1401 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1723/2084 bl:2.3125 bb:1.0286 rl:2.3181 rb:1.0643 dl:1379-1382 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1715/2084 bl:2.4742 bb:1.0880 rl:2.3195 rb:1.0645 dl:1360-1362 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1706/2084 bl:2.4630 bb:1.0992 rl:2.3207 rb:1.0648 dl:1339-1341 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1698/2084 bl:2.3081 bb:1.0720 rl:2.3206 rb:1.0649 dl:1322-1324 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1689/2084 bl:2.3188 bb:1.0632 rl:2.3206 rb:1.0649 dl:1302-1305 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1680/2084 bl:2.3384 bb:1.0347 rl:2.3207 rb:1.0646 dl:1284-1285 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1673/2084 bl:2.3204 bb:1.0298 rl:2.3207 rb:1.0643 dl:1271-1273 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1665/2084 bl:2.3802 bb:1.0110 rl:2.3212 rb:1.0639 dl:1255-1257 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1657/2084 bl:2.2865 bb:1.0557 rl:2.3209 rb:1.0638 dl:1237-1239 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1649/2084 bl:2.3546 bb:1.0882 rl:2.3212 rb:1.0640 dl:1221-1223 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1641/2084 bl:2.3797 bb:1.0378 rl:2.3216 rb:1.0638 dl:1205-1207 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1633/2084 bl:2.4166 bb:1.1123 rl:2.3223 rb:1.0642 dl:1190-1192 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1625/2084 bl:2.2590 bb:0.9904 rl:2.3218 rb:1.0636 dl:1177-1179 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1617/2084 bl:2.3395 bb:1.0266 rl:2.3220 rb:1.0634 dl:1164-1166 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1609/2084 bl:2.3363 bb:1.0282 rl:2.3221 rb:1.0631 dl:1150-1151 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1601/2084 bl:2.4729 bb:1.0343 rl:2.3230 rb:1.0629 dl:1137-1138 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1594/2084 bl:2.3641 bb:1.0687 rl:2.3233 rb:1.0629 dl:1126-1128 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1586/2084 bl:2.4312 bb:1.0785 rl:2.3240 rb:1.0630 dl:1112-1113 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1580/2084 bl:2.2872 bb:1.0244 rl:2.3238 rb:1.0628 dl:1104-1105 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1574/2084 bl:2.1399 bb:0.9358 rl:2.3226 rb:1.0620 dl:1093-1095 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1566/2084 bl:2.2521 bb:1.0447 rl:2.3222 rb:1.0619 dl:1081-1082 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1558/2084 bl:2.2028 bb:0.9915 rl:2.3215 rb:1.0614 dl:1068-1069 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1550/2084 bl:2.3166 bb:1.0024 rl:2.3215 rb:1.0611 dl:1055-1057 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1543/2084 bl:2.4338 bb:1.0473 rl:2.3221 rb:1.0610 dl:1045-1046 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1535/2084 bl:2.3411 bb:1.1012 rl:2.3222 rb:1.0612 dl:1034-1035 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1527/2084 bl:2.3297 bb:1.0321 rl:2.3223 rb:1.0611 dl:1023-1024 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1519/2084 bl:2.3622 bb:1.0382 rl:2.3225 rb:1.0609 dl:1011-1012 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1511/2084 bl:2.2084 bb:1.0071 rl:2.3219 rb:1.0606 dl:1001-1003 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1503/2084 bl:2.2196 bb:0.9950 rl:2.3213 rb:1.0603 dl:991-992 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1495/2084 bl:2.1694 bb:0.9968 rl:2.3205 rb:1.0599 dl:980-981 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1474/2084 bl:2.3622 bb:1.0499 rl:2.3207 rb:1.0599 dl:953-955 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1466/2084 bl:2.4039 bb:1.0837 rl:2.3212 rb:1.0600 dl:944-946 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1457/2084 bl:2.2331 bb:1.0227 rl:2.3207 rb:1.0598 dl:934-935 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1449/2084 bl:2.4631 bb:1.1306 rl:2.3214 rb:1.0602 dl:924-926 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1441/2084 bl:2.3015 bb:1.0591 rl:2.3213 
rb:1.0602 dl:915-916 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1433/2084 bl:2.3908 bb:1.1046 rl:2.3216 rb:1.0604 dl:905-906 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1426/2084 bl:2.4424 bb:1.0753 rl:2.3222 rb:1.0604 dl:896-898 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1417/2084 bl:2.3355 bb:1.0882 rl:2.3223 rb:1.0606 dl:887-888 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1409/2084 bl:2.3338 bb:1.0055 rl:2.3223 rb:1.0603 dl:878-879 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1401/2084 bl:2.3191 bb:1.0149 rl:2.3223 rb:1.0601 dl:869-870 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1393/2084 bl:2.2330 bb:0.9924 rl:2.3219 rb:1.0598 dl:860-861 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1385/2084 bl:2.4336 bb:1.0324 rl:2.3224 rb:1.0597 dl:852-853 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1377/2084 bl:2.3451 bb:1.0232 rl:2.3225 rb:1.0595 dl:843-844 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1369/2084 bl:2.2621 bb:1.0324 rl:2.3222 rb:1.0594 dl:835-836 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1360/2084 bl:2.4089 bb:1.0964 rl:2.3226 rb:1.0595 dl:825-826 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1352/2084 bl:2.3577 bb:1.0777 rl:2.3227 rb:1.0596 dl:816-817 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1344/2084 bl:2.2411 bb:1.0104 rl:2.3224 rb:1.0594 dl:808-809 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1339/2084 bl:2.3013 bb:1.0125 rl:2.3223 rb:1.0592 dl:804-804 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1330/2084 bl:2.4935 bb:1.0967 rl:2.3230 rb:1.0594 dl:795-796 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1322/2084 bl:2.3836 bb:1.0563 rl:2.3232 rb:1.0594 dl:786-788 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1318/2084 bl:2.2970 bb:1.0600 rl:2.3231 rb:1.0594 dl:783-783 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1311/2084 bl:2.1920 bb:0.9881 rl:2.3226 rb:1.0591 dl:776-777 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1304/2084 bl:2.2366 bb:1.0112 rl:2.3223 rb:1.0589 dl:770-771 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1298/2084 bl:2.1738 bb:1.0182 rl:2.3218 rb:1.0588 dl:765-766 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1290/2084 bl:2.2692 bb:1.0519 rl:2.3216 rb:1.0587 dl:757-758 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1282/2084 bl:2.2883 bb:1.0526 rl:2.3215 rb:1.0587 dl:750-751 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1277/2084 bl:2.4065 bb:1.0968 rl:2.3218 rb:1.0589 dl:746-746 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1272/2084 bl:2.4412 bb:1.0735 rl:2.3222 rb:1.0589 dl:742-742 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1264/2084 bl:2.3492 bb:1.0906 rl:2.3223 rb:1.0590 dl:734-735 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1255/2084 bl:2.2718 bb:1.0409 rl:2.3221 rb:1.0590 dl:726-727 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1248/2084 bl:2.4754 bb:1.1242 rl:2.3226 rb:1.0592 dl:721-721 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1240/2084 bl:2.3789 bb:1.0344 rl:2.3228 rb:1.0591 dl:714-714 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1231/2084 bl:2.2632 bb:0.9979 rl:2.3226 rb:1.0589 dl:707-707 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1224/2084 bl:2.2307 bb:0.9999 rl:2.3223 rb:1.0587 dl:701-701 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1214/2084 bl:2.3418 bb:1.0357 rl:2.3224 rb:1.0586 dl:693-694 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1207/2084 bl:2.3972 bb:1.0927 rl:2.3226 rb:1.0587 dl:688-689 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1200/2084 bl:2.3714 bb:1.0172 rl:2.3228 rb:1.0586 dl:682-682 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1191/2084 bl:2.3284 bb:1.0760 rl:2.3228 rb:1.0586 dl:675-676 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1183/2084 bl:2.3543 bb:1.0645 rl:2.3229 rb:1.0587 dl:668-669 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1175/2084 bl:2.4529 bb:1.1037 rl:2.3233 rb:1.0588 dl:662-663 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1167/2084 bl:2.3226 bb:1.0648 rl:2.3233 rb:1.0588 dl:655-656 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: 
b1159/2084 bl:2.2749 bb:1.0006 rl:2.3231 rb:1.0586 dl:649-650 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1152/2084 bl:2.2568 bb:1.0092 rl:2.3229 rb:1.0585 dl:645-645 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1142/2084 bl:2.4391 bb:1.1039 rl:2.3233 rb:1.0586 dl:638-639 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1136/2084 bl:2.3011 bb:1.0303 rl:2.3232 rb:1.0585 dl:634-634 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1126/2084 bl:2.3416 bb:1.0971 rl:2.3233 rb:1.0586 dl:626-627 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1119/2084 bl:2.3579 bb:1.0895 rl:2.3233 rb:1.0587 dl:621-621 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1111/2084 bl:2.2820 bb:1.0382 rl:2.3232 rb:1.0587 dl:615-616 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1104/2084 bl:2.3103 bb:1.1085 rl:2.3232 rb:1.0588 dl:611-611 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1095/2084 bl:2.3188 bb:1.0473 rl:2.3232 rb:1.0588 dl:603-604 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1088/2084 bl:2.3134 bb:1.0814 rl:2.3232 rb:1.0588 dl:598-598 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1080/2084 bl:2.3680 bb:1.0405 rl:2.3233 rb:1.0588 dl:593-593 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1070/2084 bl:2.2058 bb:1.0309 rl:2.3230 rb:1.0587 dl:586-587 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1063/2084 bl:2.3294 bb:1.0523 rl:2.3230 rb:1.0587 dl:581-582 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1056/2084 bl:2.3332 bb:1.1233 rl:2.3230 rb:1.0588 dl:576-577 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1049/2084 bl:2.3469 bb:1.0492 rl:2.3231 rb:1.0588 dl:571-572 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1041/2084 bl:2.2469 bb:1.0393 rl:2.3229 rb:1.0588 dl:566-567 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1034/2084 bl:2.2835 bb:1.0528 rl:2.3228 rb:1.0588 dl:561-562 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1026/2084 bl:2.3957 bb:1.0712 rl:2.3230 rb:1.0588 dl:556-557 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1021/2084 bl:2.3005 bb:1.0491 rl:2.3229 rb:1.0588 dl:553-553 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1013/2084 bl:2.3056 bb:1.0855 rl:2.3229 rb:1.0588 dl:548-548 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1005/2084 bl:2.2379 bb:0.9965 rl:2.3227 rb:1.0587 dl:543-543 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b995/2084 bl:2.3842 bb:1.0558 rl:2.3228 rb:1.0587 dl:536-537 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b989/2084 bl:2.2344 bb:1.1150 rl:2.3226 rb:1.0588 dl:533-533 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b979/2084 bl:2.4517 bb:1.1383 rl:2.3229 rb:1.0590 dl:526-527 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b972/2084 bl:2.3210 bb:1.0232 rl:2.3229 rb:1.0589 dl:522-523 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b964/2084 bl:2.3495 bb:1.1016 rl:2.3230 rb:1.0590 dl:517-518 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b956/2084 bl:2.3537 bb:1.0298 rl:2.3230 rb:1.0589 dl:512-513 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b950/2084 bl:2.2420 bb:1.0420 rl:2.3229 rb:1.0589 dl:508-508 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b942/2084 bl:2.2876 bb:1.0354 rl:2.3228 rb:1.0588 dl:503-503 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b931/2084 bl:2.4365 bb:1.0884 rl:2.3230 rb:1.0589 dl:496-497 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b924/2084 bl:2.3304 bb:1.0731 rl:2.3230 rb:1.0589 dl:492-493 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b915/2084 bl:2.1765 bb:1.0389 rl:2.3227 rb:1.0589 dl:488-488 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b907/2084 bl:2.4552 bb:1.1042 rl:2.3230 rb:1.0590 dl:483-484 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b901/2084 bl:2.3675 bb:1.1024 rl:2.3231 rb:1.0591 dl:480-480 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b894/2084 bl:2.3232 bb:1.0610 rl:2.3231 rb:1.0591 dl:476-476 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b886/2084 bl:2.4137 bb:1.0588 rl:2.3233 rb:1.0591 dl:471-471 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b874/2084 bl:2.2457 bb:1.0378 rl:2.3231 rb:1.0590 dl:464-465 gd:1 sr:0 sf:1 
tr:24/24 wt:0 +ttp: b870/2084 bl:2.3563 bb:1.0798 rl:2.3232 rb:1.0591 dl:462-462 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b860/2084 bl:2.2781 bb:1.0354 rl:2.3231 rb:1.0590 dl:457-457 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b850/2084 bl:2.3402 bb:1.0234 rl:2.3231 rb:1.0589 dl:450-451 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b841/2084 bl:2.1688 bb:1.0024 rl:2.3228 rb:1.0588 dl:445-446 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b834/2084 bl:2.2851 bb:1.0808 rl:2.3228 rb:1.0589 dl:441-442 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b827/2084 bl:2.2378 bb:1.1079 rl:2.3226 rb:1.0590 dl:437-438 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b818/2084 bl:2.3396 bb:1.1720 rl:2.3227 rb:1.0591 dl:432-433 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b809/2084 bl:2.3137 bb:1.0452 rl:2.3226 rb:1.0591 dl:427-428 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b803/2084 bl:2.3513 bb:1.1090 rl:2.3227 rb:1.0592 dl:424-424 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b793/2084 bl:2.3838 bb:1.0862 rl:2.3228 rb:1.0592 dl:418-419 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b787/2084 bl:2.2081 bb:1.0808 rl:2.3226 rb:1.0593 dl:415-415 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b776/2084 bl:2.3256 bb:1.0873 rl:2.3226 rb:1.0593 dl:408-409 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b771/2084 bl:2.3013 bb:1.0499 rl:2.3226 rb:1.0593 dl:406-406 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b761/2084 bl:2.4620 bb:1.0967 rl:2.3228 rb:1.0594 dl:400-401 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b756/2084 bl:2.4981 bb:1.1564 rl:2.3231 rb:1.0595 dl:398-398 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b748/2084 bl:2.2533 bb:1.0484 rl:2.3230 rb:1.0595 dl:394-394 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b737/2084 bl:2.3069 bb:1.0980 rl:2.3229 rb:1.0596 dl:388-389 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b731/2084 bl:2.3889 bb:1.0680 rl:2.3230 rb:1.0596 dl:385-385 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b721/2084 bl:2.4779 bb:1.1521 rl:2.3233 rb:1.0597 dl:380-381 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b713/2084 bl:2.3726 bb:1.1363 rl:2.3234 rb:1.0598 dl:376-377 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b705/2084 bl:2.2095 bb:1.0652 rl:2.3232 rb:1.0598 dl:372-373 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b699/2084 bl:2.4425 bb:1.1238 rl:2.3234 rb:1.0599 dl:370-370 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b691/2084 bl:2.3220 bb:1.0697 rl:2.3234 rb:1.0599 dl:366-366 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b682/2084 bl:2.4271 bb:1.0570 rl:2.3235 rb:1.0599 dl:361-362 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b673/2084 bl:2.3231 bb:1.1012 rl:2.3235 rb:1.0600 dl:357-358 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b666/2084 bl:2.1971 bb:1.0333 rl:2.3233 rb:1.0600 dl:354-355 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b658/2084 bl:2.3821 bb:1.0936 rl:2.3234 rb:1.0600 dl:350-351 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b650/2084 bl:2.3322 bb:1.0809 rl:2.3234 rb:1.0600 dl:346-347 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b644/2084 bl:2.3991 bb:1.1127 rl:2.3235 rb:1.0601 dl:344-344 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b636/2084 bl:2.4835 bb:1.1044 rl:2.3237 rb:1.0602 dl:340-340 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b627/2084 bl:2.2401 bb:1.0715 rl:2.3236 rb:1.0602 dl:336-336 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b620/2084 bl:2.3270 bb:1.1046 rl:2.3236 rb:1.0602 dl:333-333 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b610/2084 bl:2.3163 bb:1.0358 rl:2.3236 rb:1.0602 dl:328-329 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b603/2084 bl:2.4435 bb:1.0941 rl:2.3238 rb:1.0602 dl:325-326 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b598/2084 bl:2.2956 bb:1.0472 rl:2.3237 rb:1.0602 dl:323-323 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b588/2084 bl:2.3616 bb:1.0924 rl:2.3238 rb:1.0603 dl:319-319 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b579/2084 bl:2.4535 bb:1.1234 rl:2.3239 rb:1.0604 dl:315-315 gd:1 sr:0 sf:1 
tr:24/24 wt:0 +ttp: b570/2084 bl:2.2322 bb:1.0596 rl:2.3238 rb:1.0603 dl:311-311 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b563/2084 bl:2.3884 bb:1.0280 rl:2.3239 rb:1.0603 dl:308-308 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b551/2084 bl:2.3908 bb:1.1319 rl:2.3240 rb:1.0604 dl:303-304 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b543/2084 bl:2.4986 bb:1.1905 rl:2.3242 rb:1.0605 dl:300-301 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b536/2084 bl:2.2937 bb:1.0763 rl:2.3242 rb:1.0606 dl:297-298 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b529/2084 bl:2.4298 bb:1.1116 rl:2.3243 rb:1.0606 dl:295-295 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b523/2084 bl:2.3762 bb:1.0520 rl:2.3243 rb:1.0606 dl:292-292 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b512/2084 bl:2.2951 bb:1.1498 rl:2.3243 rb:1.0607 dl:287-288 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b506/2084 bl:2.3717 bb:1.1200 rl:2.3244 rb:1.0608 dl:285-285 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b498/2084 bl:2.4436 bb:1.1750 rl:2.3245 rb:1.0609 dl:282-282 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b491/2084 bl:2.2783 bb:1.0959 rl:2.3244 rb:1.0609 dl:279-279 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b482/2084 bl:2.3949 bb:1.1149 rl:2.3245 rb:1.0610 dl:276-276 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b474/2084 bl:2.2727 bb:1.0256 rl:2.3245 rb:1.0609 dl:273-273 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b466/2084 bl:2.3703 bb:1.1337 rl:2.3245 rb:1.0610 dl:270-270 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b458/2084 bl:2.4896 bb:1.1501 rl:2.3247 rb:1.0611 dl:267-267 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b449/2084 bl:2.2964 bb:1.0212 rl:2.3247 rb:1.0611 dl:263-264 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b441/2084 bl:2.5559 bb:1.1178 rl:2.3249 rb:1.0611 dl:260-261 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b433/2084 bl:2.4013 bb:1.1358 rl:2.3250 rb:1.0612 dl:257-258 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b429/2084 bl:2.4266 bb:1.1255 rl:2.3251 rb:1.0612 dl:256-256 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b419/2084 bl:2.3870 bb:1.0694 rl:2.3251 rb:1.0613 dl:253-253 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b411/2084 bl:2.3227 bb:1.1748 rl:2.3251 rb:1.0613 dl:250-250 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b404/2084 bl:2.3207 bb:1.0959 rl:2.3251 rb:1.0614 dl:247-247 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b392/2084 bl:2.4701 bb:1.1604 rl:2.3252 rb:1.0615 dl:242-243 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b387/2084 bl:2.5065 bb:1.1600 rl:2.3254 rb:1.0616 dl:241-241 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b379/2084 bl:2.3277 bb:1.0812 rl:2.3254 rb:1.0616 dl:238-238 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b367/2084 bl:2.3752 bb:1.1063 rl:2.3255 rb:1.0616 dl:234-234 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b357/2084 bl:2.2629 bb:1.0426 rl:2.3254 rb:1.0616 dl:230-231 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b353/2084 bl:2.4493 bb:1.0975 rl:2.3255 rb:1.0616 dl:229-229 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b344/2084 bl:2.3957 bb:1.1918 rl:2.3256 rb:1.0617 dl:226-226 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b336/2084 bl:2.5131 bb:1.1787 rl:2.3257 rb:1.0618 dl:223-223 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b328/2084 bl:2.3636 bb:1.1462 rl:2.3258 rb:1.0619 dl:220-220 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b318/2084 bl:2.3321 bb:1.1572 rl:2.3258 rb:1.0620 dl:217-217 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b313/2084 bl:2.5063 bb:1.1344 rl:2.3259 rb:1.0620 dl:215-215 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b300/2084 bl:2.6043 bb:1.1409 rl:2.3261 rb:1.0621 dl:210-211 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b291/2084 bl:2.4567 bb:1.1327 rl:2.3262 rb:1.0621 dl:207-208 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b287/2084 bl:2.2534 bb:1.1374 rl:2.3262 rb:1.0622 dl:206-206 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b277/2084 bl:2.4815 bb:1.2314 rl:2.3263 rb:1.0623 dl:203-203 gd:1 sr:0 sf:1 
tr:24/24 wt:0 +ttp: b271/2084 bl:2.6164 bb:1.2573 rl:2.3265 rb:1.0625 dl:201-201 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b258/2084 bl:2.5188 bb:1.2012 rl:2.3267 rb:1.0626 dl:196-197 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b254/2084 bl:2.3858 bb:1.2057 rl:2.3267 rb:1.0627 dl:195-195 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b245/2084 bl:2.3841 bb:1.1505 rl:2.3268 rb:1.0627 dl:192-192 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b236/2084 bl:2.4641 bb:1.1466 rl:2.3268 rb:1.0628 dl:189-189 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b225/2084 bl:2.5083 bb:1.1856 rl:2.3270 rb:1.0629 dl:185-186 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b220/2084 bl:2.3414 bb:1.1029 rl:2.3270 rb:1.0629 dl:184-184 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b214/2084 bl:2.4052 bb:1.1458 rl:2.3270 rb:1.0629 dl:182-182 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b206/2084 bl:2.4945 bb:1.1522 rl:2.3272 rb:1.0630 dl:179-179 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b201/2084 bl:2.5903 bb:1.1954 rl:2.3273 rb:1.0631 dl:177-177 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b192/2084 bl:2.3014 bb:1.1034 rl:2.3273 rb:1.0631 dl:174-174 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b180/2084 bl:2.4760 bb:1.2131 rl:2.3274 rb:1.0632 dl:170-171 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b176/2084 bl:2.6144 bb:1.1946 rl:2.3276 rb:1.0633 dl:169-169 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b168/2084 bl:2.4233 bb:1.1913 rl:2.3276 rb:1.0634 dl:166-166 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b158/2084 bl:2.5917 bb:1.2342 rl:2.3278 rb:1.0635 dl:163-163 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b149/2084 bl:2.6308 bb:1.2764 rl:2.3280 rb:1.0636 dl:159-160 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b141/2084 bl:2.3728 bb:1.1331 rl:2.3280 rb:1.0636 dl:157-157 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b133/2084 bl:2.3901 bb:1.2030 rl:2.3280 rb:1.0637 dl:153-154 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b125/2084 bl:2.4171 bb:1.1231 rl:2.3281 rb:1.0637 dl:150-151 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b121/2084 bl:2.5885 bb:1.2014 rl:2.3282 rb:1.0638 dl:149-149 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b110/2084 bl:2.5170 bb:1.1839 rl:2.3283 rb:1.0639 dl:145-145 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b105/2084 bl:2.5044 bb:1.1613 rl:2.3284 rb:1.0639 dl:143-143 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b93/2084 bl:2.6396 bb:1.2549 rl:2.3286 rb:1.0640 dl:139-139 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b85/2084 bl:2.5990 bb:1.1922 rl:2.3287 rb:1.0641 dl:136-136 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b77/2084 bl:2.5175 bb:1.1816 rl:2.3288 rb:1.0641 dl:133-133 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b66/2084 bl:2.6695 bb:1.2098 rl:2.3290 rb:1.0642 dl:128-129 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b62/2084 bl:2.4210 bb:1.1456 rl:2.3290 rb:1.0642 dl:127-127 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b52/2084 bl:2.5042 bb:1.2254 rl:2.3291 rb:1.0643 dl:122-123 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b45/2084 bl:2.7528 bb:1.2858 rl:2.3293 rb:1.0644 dl:119-119 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b37/2084 bl:2.4496 bb:1.1577 rl:2.3293 rb:1.0644 dl:115-115 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b29/2084 bl:2.5967 bb:1.2751 rl:2.3295 rb:1.0645 dl:109-110 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b22/2084 bl:2.8544 bb:1.2762 rl:2.3297 rb:1.0646 dl:105-105 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b14/2084 bl:2.6919 bb:1.1747 rl:2.3298 rb:1.0646 dl:99-99 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b5/2084 bl:2.8461 bb:1.2136 rl:2.3300 rb:1.0647 dl:87-89 gd:1 sr:0 sf:1 tr:24/24 wt:0 +quantized_ttt_phased val_loss:2.31614048 val_bpb:1.05838308 eval_time:553458ms +total_eval_time:553.5s diff --git a/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/train_eval_seed314.log 
b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/train_eval_seed314.log new file mode 100644 index 0000000000..21c57da43b --- /dev/null +++ b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/train_eval_seed314.log @@ -0,0 +1,837 @@ +W0501 23:11:41.915000 700805 torch/distributed/run.py:803] +W0501 23:11:41.915000 700805 torch/distributed/run.py:803] ***************************************** +W0501 23:11:41.915000 700805 torch/distributed/run.py:803] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. +W0501 23:11:41.915000 700805 torch/distributed/run.py:803] ***************************************** +Hyperparameters: + adam_eps: 1e-08 + adam_wd: 0.02 + agree_add_boost: 0.5 + artifact_dir: /workspace/parameter-golf-pr2014-gated-clean/records/pr2014_evo_lrelu03_ngram_p2500_c64_record_3seed/seed314 + attn_clip_sigmas: 13.0 + attn_out_gate_enabled: False + attn_out_gate_src: proj + awq_lite_bits: 8 + awq_lite_enabled: True + awq_lite_group_size: 64 + awq_lite_group_top_k: 1 + beta1: 0.9 + beta2: 0.99 + caseops_enabled: True + compile_shape_warmup: True + compile_shape_warmup_iters: 1 + compile_shape_warmup_loop_modes: auto + compressor: pergroup + data_dir: ./data + datasets_dir: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved + distributed: True + ema_decay: 0.9965 + embed_bits: 7 + embed_clip_sigmas: 14.0 + embed_lr: 0.6 + embed_wd: 0.085 + enable_looping_at: 0.35 + eval_include_tail: True + eval_seq_len: 3072 + eval_stride: 1536 + fused_ce_enabled: True + gate_window: 12 + gated_attn_enabled: False + gated_attn_init_std: 0.01 + gated_attn_quant_gate: True + global_ttt_batch_seqs: 32 + global_ttt_chunk_tokens: 32768 + global_ttt_epochs: 1 + global_ttt_grad_clip: 1.0 + global_ttt_lr: 0.001 + global_ttt_momentum: 0.9 + global_ttt_respect_doc_boundaries: True + global_ttt_warmup_chunks: 0 + global_ttt_warmup_start_lr: 0.0 + gptq_calibration_batches: 16 + gptq_reserve_seconds: 4.0 + grad_accum_steps: 1 + grad_clip_norm: 0.3 + is_main_process: True + iterations: 20000 + leaky_relu_sq_slope: 0.3 + ln_scale: True + local_rank: 0 + logfile: /workspace/parameter-golf-pr2014-gated-clean/records/pr2014_evo_lrelu03_ngram_p2500_c64_record_3seed/seed314/lrelu03_ngram_p2500_c64_s314.txt + logit_softcap: 30.0 + loop_end: 5 + loop_start: 3 + lqer_asym_enabled: True + lqer_asym_group: 64 + lqer_enabled: True + lqer_factor_bits: 4 + lqer_gain_select: False + lqer_rank: 4 + lqer_scope: all + lqer_top_k: 3 + matrix_bits: 6 + matrix_clip_sigmas: 12.85 + matrix_lr: 0.026 + max_wallclock_seconds: 600.0 + midrun_cap_log_updates: False + midrun_cap_schedule: + min_lr: 0.1 + mlp_clip_sigmas: 11.5 + mlp_mult: 4.0 + model_dim: 512 + model_path: /workspace/parameter-golf-pr2014-gated-clean/records/pr2014_evo_lrelu03_ngram_p2500_c64_record_3seed/seed314/final_model.pt + muon_backend_steps: 5 + muon_momentum: 0.97 + muon_momentum_warmup_start: 0.92 + muon_momentum_warmup_steps: 1500 + muon_row_normalize: True + muon_wd: 0.095 + ngram_hint_precompute_outside: False + ngram_tilt_enabled: True + num_heads: 8 + num_kv_heads: 4 + num_layers: 11 + num_loops: 2 + parallel_final_lane: mean + parallel_start_layer: 8 + phased_ttt_num_phases: 1 + phased_ttt_prefix_docs: 2500 + qk_gain_init: 5.25 + quantized_model_path: 
/workspace/parameter-golf-pr2014-gated-clean/records/pr2014_evo_lrelu03_ngram_p2500_c64_record_3seed/seed314/final_model.int6.ptz + rank: 0 + rope_base: 10000.0 + rope_dims: 16 + rope_train_seq_len: 3072 + rope_yarn: False + run_id: lrelu03_ngram_p2500_c64_s314 + scalar_lr: 0.02 + seed: 314 + seq_change_warmup_steps: 32 + skip_gates_enabled: True + skylight_norm_beta2: 0.95 + skylight_norm_ema: False + skylight_norm_eps: 1e-07 + skylight_uw_floor: False + skylight_uw_ratio: 0.35 + smear_gate_enabled: True + sparse_attn_gate_enabled: True + sparse_attn_gate_init_std: 0.0 + sparse_attn_gate_scale: 0.5 + tie_embeddings: True + tied_embed_init_std: 0.005 + tied_embed_lr: 0.03 + token_boost: 2.625 + token_order: 16 + token_threshold: 0.8 + tokenizer_path: /tmp/parameter-golf-data-authorhf/tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model + train_batch_tokens: 786432 + train_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_train_*.bin + train_log_every: 500 + train_seq_len: 3072 + train_seq_schedule: 1024@0.100,2048@0.700,3072@1.000 + train_seq_schedule_mode: wallclock + ttt_batch_size: 24 + ttt_beta1: 0.0 + ttt_beta2: 0.99 + ttt_chunk_size: 64 + ttt_enabled: True + ttt_eval_batches: + ttt_eval_seq_len: 3072 + ttt_grad_steps: 1 + ttt_k_lora: True + ttt_local_lr_mult: 0.75 + ttt_lora_lr: 0.0001 + ttt_lora_rank: 80 + ttt_mask: no_qv + ttt_mlp_lora: True + ttt_o_lora: True + ttt_optimizer: adam + ttt_q_lora: False + ttt_short_beta2: 0.99 + ttt_short_chunk_size: 32 + ttt_short_doc_len: 2000 + ttt_short_lora_enabled: False + ttt_short_lora_lr: 0.0001 + ttt_short_lora_rank: 80 + ttt_short_score_first_enabled: True + ttt_short_score_first_steps: 256:16,2000:32 + ttt_short_weight_decay: 0.5 + ttt_train_max_doc_len: 0 + ttt_train_min_doc_len: 0 + ttt_v_lora: False + ttt_warm_start_mean_doc_len: 2000 + ttt_warm_start_mean_enabled: False + ttt_warm_start_mean_momentum: 0.95 + ttt_weight_decay: 0.5 + val_batch_tokens: 524288 + val_bytes_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_val_bytes_*.bin + val_doc_fraction: 1.0 + val_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_val_*.bin + val_loss_every: 0 + vocab_size: 8192 + warmdown_frac: 0.85 + warmdown_iters: 0 + warmup_steps: 20 + within_boost: 0.75 + within_tau: 0.45 + word_boost: 0.75 + word_normalize: strip_punct_lower + word_order: 4 + word_tau: 0.65 + world_size: 8 + xsa_last_n: 11 +train_shards: 80 +val_tokens: 47853343 +model_params:35945673 +train_seq_schedule:1024@0.100,2048@0.700,3072@1.000 +local_microbatch_tokens:98304 +growth_stage:seq_len:1024 progress:0.000 +gptq:reserving 4s, effective=596000ms +warmup_cu_buckets:64,128,192,256 iters_each:3 +compile_shape_warmup:start 1024xplain,2048xplain,2048xloop,3072xloop +compile_shape_warmup:shape seq_len:1024 loop:0 +compile_shape_warmup:shape seq_len:2048 loop:0 +compile_shape_warmup:shape seq_len:2048 loop:1 +compile_shape_warmup:shape seq_len:3072 loop:1 +warmup_step: 1/20 +warmup_step: 2/20 +warmup_step: 3/20 +warmup_step: 4/20 +warmup_step: 5/20 +warmup_step: 6/20 +warmup_step: 10/20 +warmup_step: 20/20 +loop_warmup:enabled encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +loop_warmup_step: 1/20 +loop_warmup_step: 2/20 +loop_warmup_step: 3/20 +loop_warmup_step: 4/20 +loop_warmup_step: 5/20 +loop_warmup_step: 6/20 +loop_warmup_step: 10/20 +loop_warmup_step: 20/20 +1/20000 
train_loss: 8.9988 train_time: 0.0m tok/s: 18020166 +2/20000 train_loss: 12.8580 train_time: 0.0m tok/s: 10190709 +3/20000 train_loss: 10.2240 train_time: 0.0m tok/s: 9349587 +4/20000 train_loss: 8.6650 train_time: 0.0m tok/s: 8983474 +5/20000 train_loss: 7.9068 train_time: 0.0m tok/s: 8707456 +500/20000 train_loss: 2.6062 train_time: 0.8m tok/s: 8612336 +growth_stage:seq_len:2048 progress:0.100 step:649 +growth_stage_rewarmup:start step:649 steps:32 seq_len:2048 +1000/20000 train_loss: 2.5827 train_time: 1.6m tok/s: 8430424 +1500/20000 train_loss: 2.6235 train_time: 2.4m tok/s: 8340612 +2000/20000 train_loss: 2.6550 train_time: 3.2m tok/s: 8287994 +layer_loop:enabled step:2194 frac:0.350 encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +2500/20000 train_loss: 2.5059 train_time: 4.2m tok/s: 7813683 +3000/20000 train_loss: 2.4525 train_time: 5.4m tok/s: 7325923 +3500/20000 train_loss: 2.4606 train_time: 6.5m tok/s: 7012932 +growth_stage:seq_len:3072 progress:0.700 step:3676 +growth_stage_rewarmup:start step:3676 steps:32 seq_len:3072 +4000/20000 train_loss: 2.3841 train_time: 7.7m tok/s: 6770867 +4500/20000 train_loss: 2.3463 train_time: 9.0m tok/s: 6564818 +4879/20000 val_loss: 2.3452 val_bpb: 1.0717 +stopping_early: wallclock_cap train_time: 596003ms step: 4879/20000 +peak memory allocated: 41707 MiB reserved: 46984 MiB +ema:applying EMA weights +diagnostic pre-quantization post-ema val_loss:2.32082993 val_bpb:1.06052597 eval_time:16060ms +Serialized model: 135418111 bytes +Code size (uncompressed): 207583 bytes +Code size (compressed): 51324 bytes +GPTQ:collecting Hessians from calibration data... +GPTQ:collected 67 Hessians in 4.1s +Quantized weights: + gate_int8_row: blocks.attn.attn_gate_w + gptq (int6): blocks.attn.c_k.weight, blocks.attn.c_q.weight, blocks.attn.c_v.weight, blocks.attn.proj.weight, blocks.mlp.fc.weight, blocks.mlp.proj.weight + gptq (int6)+lqer_asym: blocks.mlp.fc.weight + gptq (int7)+awqgrpint8+lqer_asym: tok_emb.weight + passthrough (float16): blocks.attn.q_gain, blocks.attn_scale, blocks.mlp_scale, blocks.resid_mix, parallel_post_lambdas, parallel_resid_lambdas, skip_gates, skip_weights, smear_gate.weight, smear_lambda, softcap_neg, softcap_pos +Serialize: per-group lrzip compression... +Serialize: per-group compression done in 95.6s +Serialized model quantized+pergroup: 15941357 bytes +Total submission size quantized+pergroup: 15992681 bytes +Deserialize: per-group lrzip decompression... +Deserialize: decompression done in 16.6s +diagnostic quantized val_loss:2.33888469 val_bpb:1.06877627 eval_time:18961ms +Deserialize: per-group lrzip decompression... 
+Deserialize: decompression done in 17.8s +ttt_lora:warming up compile (random tokens, no val data) +ttt_lora:compile warmup done (186.2s) + +beginning TTT eval timer +ngram_tilt:hints total=47853343 gated=13023831 token_gate=628156 within_gate=9867233 word_gate=2891718 agree2plus=303187 +ngram_tilt:precompute_outside_timer_done elapsed=128.60s total_targets=47853343 +ttt_phased: total_docs:50000 prefix_docs:2500 suffix_docs:47500 num_phases:1 boundaries:[2500] target_tokens:47853343 +ttp: b2079/2084 bl:2.2419 bb:1.0795 rl:2.2419 rb:1.0795 dl:13679-14936 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2074/2084 bl:2.4366 bb:1.1087 rl:2.3211 rb:1.0918 dl:9553-10083 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2070/2084 bl:2.3664 bb:1.1247 rl:2.3329 rb:1.1003 dl:8228-8606 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2064/2084 bl:2.2507 bb:1.0294 rl:2.3187 rb:1.0877 dl:6723-6872 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2057/2084 bl:2.3573 bb:1.0923 rl:2.3237 rb:1.0883 dl:5762-5854 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2051/2084 bl:2.3274 bb:1.1012 rl:2.3241 rb:1.0897 dl:5231-5322 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2040/2084 bl:2.4149 bb:1.1177 rl:2.3315 rb:1.0920 dl:4465-4510 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2036/2084 bl:2.2480 bb:1.0545 rl:2.3254 rb:1.0893 dl:4294-4331 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2028/2084 bl:2.4793 bb:1.1225 rl:2.3350 rb:1.0914 dl:3935-3966 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2023/2084 bl:2.3745 bb:1.0529 rl:2.3373 rb:1.0891 dl:3761-3786 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2013/2084 bl:2.3908 bb:1.0518 rl:2.3399 rb:1.0872 dl:3436-3454 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2007/2084 bl:2.2374 bb:0.9939 rl:2.3353 rb:1.0828 dl:3303-3324 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2001/2084 bl:2.2709 bb:1.0317 rl:2.3326 rb:1.0807 dl:3150-3175 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1991/2084 bl:2.2438 bb:1.0634 rl:2.3293 rb:1.0800 dl:2955-2976 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1984/2084 bl:2.3406 bb:1.0653 rl:2.3297 rb:1.0795 dl:2824-2842 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1979/2084 bl:2.3637 bb:1.0874 rl:2.3308 rb:1.0798 dl:2753-2769 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttpp: phase:1/1 pd:2672 gd:2500 t:206.2s +tttg: c1/344 lr:0.001000 t:0.4s +tttg: c2/344 lr:0.001000 t:0.5s +tttg: c3/344 lr:0.001000 t:0.6s +tttg: c4/344 lr:0.001000 t:0.7s +tttg: c5/344 lr:0.001000 t:0.8s +tttg: c6/344 lr:0.000999 t:1.0s +tttg: c7/344 lr:0.000999 t:1.1s +tttg: c8/344 lr:0.000999 t:1.2s +tttg: c9/344 lr:0.000999 t:1.3s +tttg: c10/344 lr:0.000998 t:1.5s +tttg: c11/344 lr:0.000998 t:1.6s +tttg: c12/344 lr:0.000997 t:1.7s +tttg: c13/344 lr:0.000997 t:1.8s +tttg: c14/344 lr:0.000996 t:2.0s +tttg: c15/344 lr:0.000996 t:2.1s +tttg: c16/344 lr:0.000995 t:2.2s +tttg: c17/344 lr:0.000995 t:2.3s +tttg: c18/344 lr:0.000994 t:2.5s +tttg: c19/344 lr:0.000993 t:2.6s +tttg: c20/344 lr:0.000992 t:2.7s +tttg: c21/344 lr:0.000992 t:2.8s +tttg: c22/344 lr:0.000991 t:3.0s +tttg: c23/344 lr:0.000990 t:3.1s +tttg: c24/344 lr:0.000989 t:3.2s +tttg: c25/344 lr:0.000988 t:3.3s +tttg: c26/344 lr:0.000987 t:3.5s +tttg: c27/344 lr:0.000986 t:3.6s +tttg: c28/344 lr:0.000985 t:3.7s +tttg: c29/344 lr:0.000984 t:3.8s +tttg: c30/344 lr:0.000982 t:4.0s +tttg: c31/344 lr:0.000981 t:4.1s +tttg: c32/344 lr:0.000980 t:4.2s +tttg: c33/344 lr:0.000979 t:4.3s +tttg: c34/344 lr:0.000977 t:4.5s +tttg: c35/344 lr:0.000976 t:4.6s +tttg: c36/344 lr:0.000975 t:4.7s +tttg: c37/344 lr:0.000973 t:4.8s +tttg: c38/344 lr:0.000972 t:5.0s +tttg: c39/344 lr:0.000970 t:5.1s +tttg: c40/344 lr:0.000968 t:5.2s +tttg: c41/344 lr:0.000967 t:5.3s +tttg: c42/344 lr:0.000965 t:5.4s +tttg: c43/344 
lr:0.000963 t:5.6s +tttg: c44/344 lr:0.000962 t:5.7s +tttg: c45/344 lr:0.000960 t:5.8s +tttg: c46/344 lr:0.000958 t:5.9s +tttg: c47/344 lr:0.000956 t:6.1s +tttg: c48/344 lr:0.000954 t:6.2s +tttg: c49/344 lr:0.000952 t:6.3s +tttg: c50/344 lr:0.000950 t:6.4s +tttg: c51/344 lr:0.000948 t:6.6s +tttg: c52/344 lr:0.000946 t:6.7s +tttg: c53/344 lr:0.000944 t:6.8s +tttg: c54/344 lr:0.000942 t:7.0s +tttg: c55/344 lr:0.000940 t:7.1s +tttg: c56/344 lr:0.000938 t:7.3s +tttg: c57/344 lr:0.000936 t:7.4s +tttg: c58/344 lr:0.000933 t:7.5s +tttg: c59/344 lr:0.000931 t:7.6s +tttg: c60/344 lr:0.000929 t:7.8s +tttg: c61/344 lr:0.000926 t:7.9s +tttg: c62/344 lr:0.000924 t:8.0s +tttg: c63/344 lr:0.000922 t:8.2s +tttg: c64/344 lr:0.000919 t:8.3s +tttg: c65/344 lr:0.000917 t:8.4s +tttg: c66/344 lr:0.000914 t:8.5s +tttg: c67/344 lr:0.000911 t:8.7s +tttg: c68/344 lr:0.000909 t:8.8s +tttg: c69/344 lr:0.000906 t:8.9s +tttg: c70/344 lr:0.000903 t:9.0s +tttg: c71/344 lr:0.000901 t:9.2s +tttg: c72/344 lr:0.000898 t:9.3s +tttg: c73/344 lr:0.000895 t:9.4s +tttg: c74/344 lr:0.000892 t:9.5s +tttg: c75/344 lr:0.000889 t:9.7s +tttg: c76/344 lr:0.000887 t:9.8s +tttg: c77/344 lr:0.000884 t:9.9s +tttg: c78/344 lr:0.000881 t:10.0s +tttg: c79/344 lr:0.000878 t:10.2s +tttg: c80/344 lr:0.000875 t:10.3s +tttg: c81/344 lr:0.000872 t:10.4s +tttg: c82/344 lr:0.000869 t:10.5s +tttg: c83/344 lr:0.000865 t:10.6s +tttg: c84/344 lr:0.000862 t:10.8s +tttg: c85/344 lr:0.000859 t:10.9s +tttg: c86/344 lr:0.000856 t:11.0s +tttg: c87/344 lr:0.000853 t:11.2s +tttg: c88/344 lr:0.000849 t:11.3s +tttg: c89/344 lr:0.000846 t:11.4s +tttg: c90/344 lr:0.000843 t:11.5s +tttg: c91/344 lr:0.000840 t:11.6s +tttg: c92/344 lr:0.000836 t:11.8s +tttg: c93/344 lr:0.000833 t:11.9s +tttg: c94/344 lr:0.000829 t:12.0s +tttg: c95/344 lr:0.000826 t:12.1s +tttg: c96/344 lr:0.000822 t:12.3s +tttg: c97/344 lr:0.000819 t:12.4s +tttg: c98/344 lr:0.000815 t:12.5s +tttg: c99/344 lr:0.000812 t:12.6s +tttg: c100/344 lr:0.000808 t:12.7s +tttg: c101/344 lr:0.000805 t:12.9s +tttg: c102/344 lr:0.000801 t:13.0s +tttg: c103/344 lr:0.000797 t:13.1s +tttg: c104/344 lr:0.000794 t:13.2s +tttg: c105/344 lr:0.000790 t:13.4s +tttg: c106/344 lr:0.000786 t:13.5s +tttg: c107/344 lr:0.000782 t:13.6s +tttg: c108/344 lr:0.000778 t:13.7s +tttg: c109/344 lr:0.000775 t:13.9s +tttg: c110/344 lr:0.000771 t:14.0s +tttg: c111/344 lr:0.000767 t:14.1s +tttg: c112/344 lr:0.000763 t:14.2s +tttg: c113/344 lr:0.000759 t:14.4s +tttg: c114/344 lr:0.000755 t:14.5s +tttg: c115/344 lr:0.000751 t:14.6s +tttg: c116/344 lr:0.000747 t:14.8s +tttg: c117/344 lr:0.000743 t:14.9s +tttg: c118/344 lr:0.000739 t:15.0s +tttg: c119/344 lr:0.000735 t:15.1s +tttg: c120/344 lr:0.000731 t:15.3s +tttg: c121/344 lr:0.000727 t:15.4s +tttg: c122/344 lr:0.000723 t:15.5s +tttg: c123/344 lr:0.000719 t:15.6s +tttg: c124/344 lr:0.000715 t:15.7s +tttg: c125/344 lr:0.000711 t:15.9s +tttg: c126/344 lr:0.000707 t:16.0s +tttg: c127/344 lr:0.000702 t:16.2s +tttg: c128/344 lr:0.000698 t:16.3s +tttg: c129/344 lr:0.000694 t:16.4s +tttg: c130/344 lr:0.000690 t:16.6s +tttg: c131/344 lr:0.000686 t:16.7s +tttg: c132/344 lr:0.000681 t:16.8s +tttg: c133/344 lr:0.000677 t:17.0s +tttg: c134/344 lr:0.000673 t:17.1s +tttg: c135/344 lr:0.000668 t:17.2s +tttg: c136/344 lr:0.000664 t:17.3s +tttg: c137/344 lr:0.000660 t:17.5s +tttg: c138/344 lr:0.000655 t:17.6s +tttg: c139/344 lr:0.000651 t:17.7s +tttg: c140/344 lr:0.000647 t:17.8s +tttg: c141/344 lr:0.000642 t:17.9s +tttg: c142/344 lr:0.000638 t:18.1s +tttg: c143/344 lr:0.000633 t:18.2s +tttg: c144/344 
lr:0.000629 t:18.3s +tttg: c145/344 lr:0.000625 t:18.4s +tttg: c146/344 lr:0.000620 t:18.6s +tttg: c147/344 lr:0.000616 t:18.7s +tttg: c148/344 lr:0.000611 t:18.8s +tttg: c149/344 lr:0.000607 t:18.9s +tttg: c150/344 lr:0.000602 t:19.0s +tttg: c151/344 lr:0.000598 t:19.2s +tttg: c152/344 lr:0.000593 t:19.3s +tttg: c153/344 lr:0.000589 t:19.4s +tttg: c154/344 lr:0.000584 t:19.6s +tttg: c155/344 lr:0.000580 t:19.7s +tttg: c156/344 lr:0.000575 t:19.8s +tttg: c157/344 lr:0.000571 t:19.9s +tttg: c158/344 lr:0.000566 t:20.1s +tttg: c159/344 lr:0.000562 t:20.2s +tttg: c160/344 lr:0.000557 t:20.3s +tttg: c161/344 lr:0.000553 t:20.4s +tttg: c162/344 lr:0.000548 t:20.6s +tttg: c163/344 lr:0.000543 t:20.7s +tttg: c164/344 lr:0.000539 t:20.8s +tttg: c165/344 lr:0.000534 t:20.9s +tttg: c166/344 lr:0.000530 t:21.1s +tttg: c167/344 lr:0.000525 t:21.2s +tttg: c168/344 lr:0.000521 t:21.3s +tttg: c169/344 lr:0.000516 t:21.4s +tttg: c170/344 lr:0.000511 t:21.5s +tttg: c171/344 lr:0.000507 t:21.7s +tttg: c172/344 lr:0.000502 t:21.8s +tttg: c173/344 lr:0.000498 t:21.9s +tttg: c174/344 lr:0.000493 t:22.0s +tttg: c175/344 lr:0.000489 t:22.1s +tttg: c176/344 lr:0.000484 t:22.2s +tttg: c177/344 lr:0.000479 t:22.3s +tttg: c178/344 lr:0.000475 t:22.4s +tttg: c179/344 lr:0.000470 t:22.5s +tttg: c180/344 lr:0.000466 t:22.6s +tttg: c181/344 lr:0.000461 t:22.7s +tttg: c182/344 lr:0.000457 t:22.8s +tttg: c183/344 lr:0.000452 t:22.9s +tttg: c184/344 lr:0.000447 t:23.0s +tttg: c185/344 lr:0.000443 t:23.1s +tttg: c186/344 lr:0.000438 t:23.2s +tttg: c187/344 lr:0.000434 t:23.3s +tttg: c188/344 lr:0.000429 t:23.4s +tttg: c189/344 lr:0.000425 t:23.5s +tttg: c190/344 lr:0.000420 t:23.6s +tttg: c191/344 lr:0.000416 t:23.7s +tttg: c192/344 lr:0.000411 t:23.8s +tttg: c193/344 lr:0.000407 t:23.9s +tttg: c194/344 lr:0.000402 t:24.0s +tttg: c195/344 lr:0.000398 t:24.1s +tttg: c196/344 lr:0.000393 t:24.2s +tttg: c197/344 lr:0.000389 t:24.3s +tttg: c198/344 lr:0.000384 t:24.4s +tttg: c199/344 lr:0.000380 t:24.5s +tttg: c200/344 lr:0.000375 t:24.6s +tttg: c201/344 lr:0.000371 t:24.7s +tttg: c202/344 lr:0.000367 t:24.8s +tttg: c203/344 lr:0.000362 t:24.9s +tttg: c204/344 lr:0.000358 t:25.0s +tttg: c205/344 lr:0.000353 t:25.1s +tttg: c206/344 lr:0.000349 t:25.2s +tttg: c207/344 lr:0.000345 t:25.3s +tttg: c208/344 lr:0.000340 t:25.4s +tttg: c209/344 lr:0.000336 t:25.5s +tttg: c210/344 lr:0.000332 t:25.6s +tttg: c211/344 lr:0.000327 t:25.7s +tttg: c212/344 lr:0.000323 t:25.8s +tttg: c213/344 lr:0.000319 t:25.9s +tttg: c214/344 lr:0.000314 t:26.1s +tttg: c215/344 lr:0.000310 t:26.2s +tttg: c216/344 lr:0.000306 t:26.3s +tttg: c217/344 lr:0.000302 t:26.4s +tttg: c218/344 lr:0.000298 t:26.5s +tttg: c219/344 lr:0.000293 t:26.6s +tttg: c220/344 lr:0.000289 t:26.7s +tttg: c221/344 lr:0.000285 t:26.8s +tttg: c222/344 lr:0.000281 t:26.9s +tttg: c223/344 lr:0.000277 t:27.0s +tttg: c224/344 lr:0.000273 t:27.1s +tttg: c225/344 lr:0.000269 t:27.2s +tttg: c226/344 lr:0.000265 t:27.3s +tttg: c227/344 lr:0.000261 t:27.4s +tttg: c228/344 lr:0.000257 t:27.5s +tttg: c229/344 lr:0.000253 t:27.6s +tttg: c230/344 lr:0.000249 t:27.7s +tttg: c231/344 lr:0.000245 t:27.8s +tttg: c232/344 lr:0.000241 t:27.9s +tttg: c233/344 lr:0.000237 t:28.0s +tttg: c234/344 lr:0.000233 t:28.1s +tttg: c235/344 lr:0.000229 t:28.2s +tttg: c236/344 lr:0.000225 t:28.3s +tttg: c237/344 lr:0.000222 t:28.4s +tttg: c238/344 lr:0.000218 t:28.5s +tttg: c239/344 lr:0.000214 t:28.6s +tttg: c240/344 lr:0.000210 t:28.7s +tttg: c241/344 lr:0.000206 t:28.8s +tttg: c242/344 lr:0.000203 t:28.9s +tttg: 
c243/344 lr:0.000199 t:29.0s +tttg: c244/344 lr:0.000195 t:29.1s +tttg: c245/344 lr:0.000192 t:29.2s +tttg: c246/344 lr:0.000188 t:29.3s +tttg: c247/344 lr:0.000185 t:29.4s +tttg: c248/344 lr:0.000181 t:29.5s +tttg: c249/344 lr:0.000178 t:29.6s +tttg: c250/344 lr:0.000174 t:29.7s +tttg: c251/344 lr:0.000171 t:29.8s +tttg: c252/344 lr:0.000167 t:29.9s +tttg: c253/344 lr:0.000164 t:30.0s +tttg: c254/344 lr:0.000160 t:30.1s +tttg: c255/344 lr:0.000157 t:30.2s +tttg: c256/344 lr:0.000154 t:30.3s +tttg: c257/344 lr:0.000151 t:30.4s +tttg: c258/344 lr:0.000147 t:30.5s +tttg: c259/344 lr:0.000144 t:30.6s +tttg: c260/344 lr:0.000141 t:30.7s +tttg: c261/344 lr:0.000138 t:30.8s +tttg: c262/344 lr:0.000135 t:30.9s +tttg: c263/344 lr:0.000131 t:31.0s +tttg: c264/344 lr:0.000128 t:31.1s +tttg: c265/344 lr:0.000125 t:31.2s +tttg: c266/344 lr:0.000122 t:31.3s +tttg: c267/344 lr:0.000119 t:31.4s +tttg: c268/344 lr:0.000116 t:31.5s +tttg: c269/344 lr:0.000113 t:31.6s +tttg: c270/344 lr:0.000111 t:31.7s +tttg: c271/344 lr:0.000108 t:31.8s +tttg: c272/344 lr:0.000105 t:31.9s +tttg: c273/344 lr:0.000102 t:32.0s +tttg: c274/344 lr:0.000099 t:32.1s +tttg: c275/344 lr:0.000097 t:32.2s +tttg: c276/344 lr:0.000094 t:32.3s +tttg: c277/344 lr:0.000091 t:32.4s +tttg: c278/344 lr:0.000089 t:32.5s +tttg: c279/344 lr:0.000086 t:32.6s +tttg: c280/344 lr:0.000083 t:32.7s +tttg: c281/344 lr:0.000081 t:32.8s +tttg: c282/344 lr:0.000078 t:32.9s +tttg: c283/344 lr:0.000076 t:33.0s +tttg: c284/344 lr:0.000074 t:33.1s +tttg: c285/344 lr:0.000071 t:33.2s +tttg: c286/344 lr:0.000069 t:33.3s +tttg: c287/344 lr:0.000067 t:33.4s +tttg: c288/344 lr:0.000064 t:33.5s +tttg: c289/344 lr:0.000062 t:33.6s +tttg: c290/344 lr:0.000060 t:33.7s +tttg: c291/344 lr:0.000058 t:33.8s +tttg: c292/344 lr:0.000056 t:33.9s +tttg: c293/344 lr:0.000054 t:34.0s +tttg: c294/344 lr:0.000052 t:34.1s +tttg: c295/344 lr:0.000050 t:34.2s +tttg: c296/344 lr:0.000048 t:34.3s +tttg: c297/344 lr:0.000046 t:34.4s +tttg: c298/344 lr:0.000044 t:34.5s +tttg: c299/344 lr:0.000042 t:34.6s +tttg: c300/344 lr:0.000040 t:34.7s +tttg: c301/344 lr:0.000038 t:34.8s +tttg: c302/344 lr:0.000037 t:34.9s +tttg: c303/344 lr:0.000035 t:35.0s +tttg: c304/344 lr:0.000033 t:35.1s +tttg: c305/344 lr:0.000032 t:35.2s +tttg: c306/344 lr:0.000030 t:35.3s +tttg: c307/344 lr:0.000028 t:35.4s +tttg: c308/344 lr:0.000027 t:35.5s +tttg: c309/344 lr:0.000025 t:35.6s +tttg: c310/344 lr:0.000024 t:35.7s +tttg: c311/344 lr:0.000023 t:35.8s +tttg: c312/344 lr:0.000021 t:35.9s +tttg: c313/344 lr:0.000020 t:36.0s +tttg: c314/344 lr:0.000019 t:36.1s +tttg: c315/344 lr:0.000018 t:36.2s +tttg: c316/344 lr:0.000016 t:36.3s +tttg: c317/344 lr:0.000015 t:36.4s +tttg: c318/344 lr:0.000014 t:36.5s +tttg: c319/344 lr:0.000013 t:36.6s +tttg: c320/344 lr:0.000012 t:36.7s +tttg: c321/344 lr:0.000011 t:36.8s +tttg: c322/344 lr:0.000010 t:36.9s +tttg: c323/344 lr:0.000009 t:37.1s +tttg: c324/344 lr:0.000008 t:37.2s +tttg: c325/344 lr:0.000008 t:37.3s +tttg: c326/344 lr:0.000007 t:37.4s +tttg: c327/344 lr:0.000006 t:37.5s +tttg: c328/344 lr:0.000005 t:37.6s +tttg: c329/344 lr:0.000005 t:37.7s +tttg: c330/344 lr:0.000004 t:37.8s +tttg: c331/344 lr:0.000004 t:37.9s +tttg: c332/344 lr:0.000003 t:38.0s +tttg: c333/344 lr:0.000003 t:38.1s +tttg: c334/344 lr:0.000002 t:38.2s +tttg: c335/344 lr:0.000002 t:38.3s +tttg: c336/344 lr:0.000001 t:38.4s +tttg: c337/344 lr:0.000001 t:38.5s +tttg: c338/344 lr:0.000001 t:38.6s +tttg: c339/344 lr:0.000001 t:38.7s +tttg: c340/344 lr:0.000000 t:38.8s +tttg: c341/344 lr:0.000000 
t:38.9s +tttg: c342/344 lr:0.000000 t:39.0s +tttg: c343/344 lr:0.000000 t:39.1s +ttpr: phase:1/1 t:245.8s +ttp: b1965/2084 bl:2.2749 bb:1.0072 rl:2.3292 rb:1.0776 dl:2565-2577 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1959/2084 bl:2.2375 bb:1.0293 rl:2.3266 rb:1.0762 dl:2501-2514 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1953/2084 bl:2.2867 bb:1.0432 rl:2.3256 rb:1.0753 dl:2441-2454 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1947/2084 bl:2.2173 bb:0.9547 rl:2.3229 rb:1.0721 dl:2368-2382 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1941/2084 bl:2.3003 bb:1.0492 rl:2.3224 rb:1.0716 dl:2314-2323 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1935/2084 bl:2.2675 bb:1.0283 rl:2.3211 rb:1.0706 dl:2260-2270 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1929/2084 bl:2.2712 bb:1.0205 rl:2.3200 rb:1.0695 dl:2203-2216 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1923/2084 bl:2.3665 bb:1.0782 rl:2.3210 rb:1.0696 dl:2160-2164 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1917/2084 bl:2.3238 bb:1.0578 rl:2.3210 rb:1.0694 dl:2117-2122 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1911/2084 bl:2.2088 bb:0.9679 rl:2.3189 rb:1.0674 dl:2072-2081 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1905/2084 bl:2.4130 bb:1.0289 rl:2.3206 rb:1.0666 dl:2036-2041 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1899/2084 bl:2.4001 bb:1.0550 rl:2.3220 rb:1.0664 dl:1997-2004 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1893/2084 bl:2.1856 bb:1.0274 rl:2.3197 rb:1.0657 dl:1958-1963 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1887/2084 bl:2.2614 bb:1.0140 rl:2.3187 rb:1.0648 dl:1927-1931 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1881/2084 bl:2.3487 bb:1.0864 rl:2.3192 rb:1.0652 dl:1898-1902 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1875/2084 bl:2.3342 bb:1.0223 rl:2.3195 rb:1.0645 dl:1868-1873 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1868/2084 bl:2.2864 bb:1.0315 rl:2.3190 rb:1.0640 dl:1836-1841 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1860/2084 bl:2.1827 bb:1.0347 rl:2.3170 rb:1.0636 dl:1805-1808 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1852/2084 bl:2.2709 bb:1.0683 rl:2.3163 rb:1.0636 dl:1770-1774 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1839/2084 bl:2.4181 bb:1.1201 rl:2.3177 rb:1.0644 dl:1718-1721 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1831/2084 bl:2.3015 bb:1.0453 rl:2.3175 rb:1.0641 dl:1688-1691 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1823/2084 bl:2.3837 bb:1.0462 rl:2.3183 rb:1.0639 dl:1659-1662 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1816/2084 bl:2.3631 bb:1.0555 rl:2.3189 rb:1.0638 dl:1634-1638 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1807/2084 bl:2.3448 bb:1.0273 rl:2.3192 rb:1.0633 dl:1604-1607 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1799/2084 bl:2.2969 bb:1.0280 rl:2.3189 rb:1.0629 dl:1581-1583 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1791/2084 bl:2.3730 bb:1.0403 rl:2.3195 rb:1.0626 dl:1554-1558 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1784/2084 bl:2.2618 bb:1.0562 rl:2.3189 rb:1.0626 dl:1534-1537 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1776/2084 bl:2.4479 bb:1.0720 rl:2.3203 rb:1.0627 dl:1512-1514 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1768/2084 bl:2.3703 bb:1.0552 rl:2.3208 rb:1.0626 dl:1490-1493 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1760/2084 bl:2.3690 bb:1.0273 rl:2.3213 rb:1.0622 dl:1471-1474 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1753/2084 bl:2.2524 bb:1.0433 rl:2.3206 rb:1.0620 dl:1452-1454 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1745/2084 bl:2.3031 bb:1.0517 rl:2.3205 rb:1.0619 dl:1432-1435 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1737/2084 bl:2.2373 bb:1.0614 rl:2.3197 rb:1.0619 dl:1411-1414 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1729/2084 bl:2.3683 bb:1.0444 rl:2.3201 rb:1.0618 dl:1392-1394 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1721/2084 bl:2.2034 bb:1.0319 rl:2.3190 rb:1.0615 dl:1375-1377 gd:1 sr:0 
sf:1 tr:24/24 wt:0 +ttp: b1713/2084 bl:2.2374 bb:0.9955 rl:2.3183 rb:1.0609 dl:1356-1358 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1704/2084 bl:2.2253 bb:1.0237 rl:2.3175 rb:1.0606 dl:1335-1337 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1696/2084 bl:2.3922 bb:1.0432 rl:2.3181 rb:1.0604 dl:1318-1320 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1680/2084 bl:2.3376 bb:1.0344 rl:2.3183 rb:1.0602 dl:1284-1285 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1671/2084 bl:2.4078 bb:1.1068 rl:2.3190 rb:1.0606 dl:1267-1269 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1662/2084 bl:2.4311 bb:1.0443 rl:2.3199 rb:1.0604 dl:1248-1250 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1654/2084 bl:2.2520 bb:1.0337 rl:2.3194 rb:1.0602 dl:1231-1232 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1647/2084 bl:2.4385 bb:1.1043 rl:2.3203 rb:1.0605 dl:1218-1220 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1639/2084 bl:2.2399 bb:1.0346 rl:2.3197 rb:1.0604 dl:1201-1203 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1630/2084 bl:2.2322 bb:1.0132 rl:2.3191 rb:1.0600 dl:1185-1187 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1622/2084 bl:2.2116 bb:1.0210 rl:2.3183 rb:1.0597 dl:1172-1174 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1613/2084 bl:2.1276 bb:0.9502 rl:2.3169 rb:1.0589 dl:1157-1158 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1604/2084 bl:2.4355 bb:1.1026 rl:2.3178 rb:1.0593 dl:1142-1144 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1594/2084 bl:2.3631 bb:1.0682 rl:2.3181 rb:1.0593 dl:1126-1128 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1586/2084 bl:2.4319 bb:1.0788 rl:2.3188 rb:1.0594 dl:1112-1113 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1578/2084 bl:2.3206 bb:1.0356 rl:2.3188 rb:1.0593 dl:1100-1101 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1567/2084 bl:2.3144 bb:1.0471 rl:2.3188 rb:1.0592 dl:1082-1084 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1558/2084 bl:2.2033 bb:0.9916 rl:2.3181 rb:1.0588 dl:1068-1069 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1550/2084 bl:2.3038 bb:0.9968 rl:2.3180 rb:1.0584 dl:1055-1057 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1539/2084 bl:2.1650 bb:1.0136 rl:2.3171 rb:1.0581 dl:1040-1041 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1531/2084 bl:2.3677 bb:1.0628 rl:2.3174 rb:1.0581 dl:1028-1029 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1523/2084 bl:2.2784 bb:1.0042 rl:2.3172 rb:1.0578 dl:1016-1018 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1513/2084 bl:2.3885 bb:1.0548 rl:2.3176 rb:1.0578 dl:1004-1005 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1505/2084 bl:2.2882 bb:1.0011 rl:2.3174 rb:1.0575 dl:993-994 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1495/2084 bl:2.1624 bb:0.9936 rl:2.3165 rb:1.0571 dl:980-981 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1486/2084 bl:2.2186 bb:0.9794 rl:2.3160 rb:1.0567 dl:967-969 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1478/2084 bl:2.4267 bb:1.0936 rl:2.3166 rb:1.0569 dl:958-959 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1468/2084 bl:2.2692 bb:1.0117 rl:2.3164 rb:1.0567 dl:947-948 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1458/2084 bl:2.3115 bb:1.0289 rl:2.3163 rb:1.0565 dl:935-936 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1450/2084 bl:2.3620 bb:1.0601 rl:2.3166 rb:1.0565 dl:926-927 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1441/2084 bl:2.3035 bb:1.0600 rl:2.3165 rb:1.0565 dl:915-916 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1432/2084 bl:2.4147 bb:1.0461 rl:2.3170 rb:1.0565 dl:904-905 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1421/2084 bl:2.3384 bb:1.0138 rl:2.3171 rb:1.0563 dl:891-892 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1413/2084 bl:2.2363 bb:1.0482 rl:2.3167 rb:1.0562 dl:883-884 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1403/2084 bl:2.2730 bb:1.0654 rl:2.3165 rb:1.0563 dl:871-873 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1395/2084 bl:2.3485 bb:1.0054 rl:2.3166 rb:1.0560 dl:862-863 gd:1 sr:0 sf:1 tr:24/24 
wt:0 +ttp: b1384/2084 bl:2.3895 bb:1.0325 rl:2.3170 rb:1.0559 dl:851-852 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1376/2084 bl:2.5032 bb:1.0718 rl:2.3178 rb:1.0560 dl:842-843 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1367/2084 bl:2.3689 bb:1.0183 rl:2.3180 rb:1.0558 dl:833-834 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1357/2084 bl:2.2835 bb:0.9935 rl:2.3179 rb:1.0555 dl:821-822 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1347/2084 bl:2.2767 bb:1.0211 rl:2.3177 rb:1.0554 dl:811-812 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1339/2084 bl:2.3046 bb:1.0140 rl:2.3176 rb:1.0552 dl:804-804 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1329/2084 bl:2.2056 bb:1.0189 rl:2.3172 rb:1.0551 dl:794-795 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1320/2084 bl:2.2650 bb:0.9964 rl:2.3170 rb:1.0548 dl:784-785 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1312/2084 bl:2.3933 bb:1.1003 rl:2.3173 rb:1.0550 dl:777-778 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1304/2084 bl:2.2344 bb:1.0102 rl:2.3170 rb:1.0548 dl:770-771 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1295/2084 bl:2.3255 bb:1.0214 rl:2.3170 rb:1.0547 dl:762-763 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1288/2084 bl:2.2286 bb:0.9797 rl:2.3166 rb:1.0544 dl:756-756 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1276/2084 bl:2.4333 bb:1.0666 rl:2.3171 rb:1.0544 dl:745-746 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1267/2084 bl:2.2633 bb:1.0368 rl:2.3169 rb:1.0544 dl:737-738 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1259/2084 bl:2.3125 bb:1.0375 rl:2.3169 rb:1.0543 dl:730-730 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1249/2084 bl:2.2814 bb:0.9968 rl:2.3167 rb:1.0541 dl:722-722 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1239/2084 bl:2.3705 bb:1.0743 rl:2.3169 rb:1.0542 dl:713-714 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1231/2084 bl:2.2667 bb:0.9994 rl:2.3168 rb:1.0540 dl:707-707 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1224/2084 bl:2.2212 bb:0.9956 rl:2.3164 rb:1.0538 dl:701-701 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1213/2084 bl:2.3541 bb:1.0659 rl:2.3166 rb:1.0538 dl:692-693 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1202/2084 bl:2.4497 bb:1.1598 rl:2.3170 rb:1.0542 dl:683-684 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1194/2084 bl:2.3488 bb:1.0298 rl:2.3171 rb:1.0541 dl:677-678 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1184/2084 bl:2.2834 bb:1.0609 rl:2.3170 rb:1.0541 dl:669-670 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1175/2084 bl:2.4556 bb:1.1049 rl:2.3174 rb:1.0543 dl:662-663 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1167/2084 bl:2.3206 bb:1.0639 rl:2.3174 rb:1.0543 dl:655-656 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1157/2084 bl:2.3718 bb:1.0426 rl:2.3176 rb:1.0543 dl:648-648 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1147/2084 bl:2.2542 bb:1.0757 rl:2.3174 rb:1.0543 dl:641-642 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1137/2084 bl:2.2935 bb:1.0618 rl:2.3174 rb:1.0543 dl:634-635 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1129/2084 bl:2.2788 bb:1.0794 rl:2.3172 rb:1.0544 dl:629-629 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1120/2084 bl:2.3064 bb:1.0040 rl:2.3172 rb:1.0543 dl:621-622 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1108/2084 bl:2.3027 bb:1.0321 rl:2.3172 rb:1.0542 dl:613-614 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1098/2084 bl:2.2610 bb:1.0194 rl:2.3170 rb:1.0541 dl:606-607 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1090/2084 bl:2.3741 bb:1.0735 rl:2.3172 rb:1.0541 dl:599-600 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1081/2084 bl:2.2913 bb:1.0603 rl:2.3171 rb:1.0542 dl:593-594 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1073/2084 bl:2.2678 bb:1.0183 rl:2.3170 rb:1.0541 dl:588-588 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1061/2084 bl:2.2714 bb:1.0044 rl:2.3168 rb:1.0539 dl:579-580 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1051/2084 bl:2.1564 bb:0.9833 rl:2.3164 rb:1.0537 
dl:573-574 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1044/2084 bl:2.3692 bb:1.0375 rl:2.3165 rb:1.0537 dl:568-569 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1036/2084 bl:2.2688 bb:1.0699 rl:2.3164 rb:1.0537 dl:563-563 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1025/2084 bl:2.3401 bb:1.0487 rl:2.3165 rb:1.0537 dl:555-556 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1015/2084 bl:2.4875 bb:1.1196 rl:2.3169 rb:1.0539 dl:549-549 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1004/2084 bl:2.1930 bb:1.0278 rl:2.3166 rb:1.0538 dl:542-543 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b997/2084 bl:2.2878 bb:1.0355 rl:2.3165 rb:1.0538 dl:537-538 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b989/2084 bl:2.2357 bb:1.1157 rl:2.3163 rb:1.0539 dl:533-533 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b979/2084 bl:2.4344 bb:1.1303 rl:2.3166 rb:1.0541 dl:526-527 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b971/2084 bl:2.3727 bb:1.0675 rl:2.3168 rb:1.0541 dl:522-522 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b959/2084 bl:2.3392 bb:1.1102 rl:2.3168 rb:1.0543 dl:514-515 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b953/2084 bl:2.3995 bb:1.0791 rl:2.3170 rb:1.0543 dl:510-510 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b942/2084 bl:2.2783 bb:1.0311 rl:2.3169 rb:1.0543 dl:503-503 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b931/2084 bl:2.4385 bb:1.0893 rl:2.3172 rb:1.0543 dl:496-497 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b923/2084 bl:2.3312 bb:1.1169 rl:2.3172 rb:1.0545 dl:492-492 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b913/2084 bl:2.3234 bb:1.0269 rl:2.3172 rb:1.0544 dl:486-487 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b905/2084 bl:2.4259 bb:1.0645 rl:2.3175 rb:1.0544 dl:482-482 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b894/2084 bl:2.3119 bb:1.0558 rl:2.3174 rb:1.0544 dl:476-476 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b884/2084 bl:2.2905 bb:1.0017 rl:2.3174 rb:1.0543 dl:470-470 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b875/2084 bl:2.2623 bb:1.0359 rl:2.3173 rb:1.0543 dl:465-465 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b864/2084 bl:2.3185 bb:1.0676 rl:2.3173 rb:1.0543 dl:459-459 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b855/2084 bl:2.3291 bb:1.0444 rl:2.3173 rb:1.0543 dl:453-454 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b845/2084 bl:2.3564 bb:1.0913 rl:2.3174 rb:1.0544 dl:447-448 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b838/2084 bl:2.3035 bb:1.0936 rl:2.3174 rb:1.0544 dl:444-444 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b827/2084 bl:2.2250 bb:1.1016 rl:2.3172 rb:1.0545 dl:437-438 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b816/2084 bl:2.5155 bb:1.1321 rl:2.3175 rb:1.0547 dl:431-432 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b810/2084 bl:2.4229 bb:1.0994 rl:2.3177 rb:1.0547 dl:428-428 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b800/2084 bl:2.2886 bb:1.0810 rl:2.3177 rb:1.0548 dl:422-422 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b786/2084 bl:2.2704 bb:1.0388 rl:2.3176 rb:1.0548 dl:414-415 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b778/2084 bl:2.3048 bb:1.0980 rl:2.3176 rb:1.0548 dl:409-410 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b771/2084 bl:2.3085 bb:1.0531 rl:2.3176 rb:1.0548 dl:406-406 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b759/2084 bl:2.3937 bb:1.1062 rl:2.3177 rb:1.0549 dl:400-400 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b747/2084 bl:2.4547 bb:1.1200 rl:2.3179 rb:1.0550 dl:393-394 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b715/2084 bl:2.2411 bb:0.9943 rl:2.3178 rb:1.0549 dl:377-378 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b705/2084 bl:2.2140 bb:1.0673 rl:2.3176 rb:1.0550 dl:372-373 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b697/2084 bl:2.4974 bb:1.1707 rl:2.3179 rb:1.0551 dl:369-369 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b687/2084 bl:2.5031 bb:1.1323 rl:2.3182 rb:1.0553 dl:364-364 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b677/2084 bl:2.3460 bb:1.1156 rl:2.3183 
rb:1.0553 dl:359-359 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b666/2084 bl:2.1703 bb:1.0207 rl:2.3180 rb:1.0553 dl:354-355 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b654/2084 bl:2.2693 bb:1.0716 rl:2.3180 rb:1.0553 dl:348-349 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b645/2084 bl:2.4055 bb:1.1220 rl:2.3181 rb:1.0554 dl:344-345 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b637/2084 bl:2.2798 bb:1.0707 rl:2.3180 rb:1.0554 dl:340-341 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b632/2084 bl:2.3119 bb:1.0534 rl:2.3180 rb:1.0554 dl:338-338 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b620/2084 bl:2.3364 bb:1.1091 rl:2.3181 rb:1.0555 dl:333-333 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b608/2084 bl:2.3809 bb:1.0872 rl:2.3181 rb:1.0555 dl:327-328 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b602/2084 bl:2.3152 bb:1.1341 rl:2.3181 rb:1.0556 dl:325-325 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b592/2084 bl:2.3657 bb:1.0873 rl:2.3182 rb:1.0557 dl:321-321 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b582/2084 bl:2.3559 bb:1.1184 rl:2.3183 rb:1.0558 dl:316-316 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b569/2084 bl:2.4725 bb:1.2003 rl:2.3185 rb:1.0559 dl:310-311 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b563/2084 bl:2.3917 bb:1.0294 rl:2.3186 rb:1.0559 dl:308-308 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b553/2084 bl:2.4420 bb:1.1614 rl:2.3187 rb:1.0560 dl:304-304 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b539/2084 bl:2.4097 bb:1.1143 rl:2.3188 rb:1.0561 dl:299-299 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b528/2084 bl:2.3020 bb:1.0450 rl:2.3188 rb:1.0561 dl:294-295 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b520/2084 bl:2.3859 bb:1.1268 rl:2.3189 rb:1.0562 dl:291-291 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b509/2084 bl:2.3885 bb:1.0830 rl:2.3190 rb:1.0562 dl:286-286 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b498/2084 bl:2.4331 bb:1.1699 rl:2.3191 rb:1.0563 dl:282-282 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b488/2084 bl:2.3887 bb:1.0933 rl:2.3192 rb:1.0564 dl:278-278 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b477/2084 bl:2.5650 bb:1.1849 rl:2.3195 rb:1.0565 dl:274-274 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b469/2084 bl:2.3945 bb:1.1219 rl:2.3196 rb:1.0566 dl:271-271 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b454/2084 bl:2.5808 bb:1.1708 rl:2.3198 rb:1.0567 dl:265-266 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b447/2084 bl:2.3264 bb:1.1396 rl:2.3198 rb:1.0568 dl:263-263 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b435/2084 bl:2.5030 bb:1.1632 rl:2.3200 rb:1.0569 dl:258-259 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b426/2084 bl:2.3610 bb:1.1298 rl:2.3201 rb:1.0570 dl:255-255 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b414/2084 bl:2.4280 bb:1.1898 rl:2.3202 rb:1.0571 dl:251-251 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b406/2084 bl:2.2278 bb:1.0746 rl:2.3201 rb:1.0571 dl:248-248 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b399/2084 bl:2.3267 bb:1.0945 rl:2.3201 rb:1.0572 dl:245-245 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b386/2084 bl:2.4790 bb:1.0853 rl:2.3203 rb:1.0572 dl:241-241 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b379/2084 bl:2.3229 bb:1.0790 rl:2.3203 rb:1.0572 dl:238-238 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b366/2084 bl:2.4139 bb:1.1100 rl:2.3204 rb:1.0573 dl:233-234 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b357/2084 bl:2.2655 bb:1.0438 rl:2.3203 rb:1.0573 dl:230-231 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b345/2084 bl:2.4248 bb:1.0877 rl:2.3204 rb:1.0573 dl:226-227 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b338/2084 bl:2.4781 bb:1.1460 rl:2.3206 rb:1.0574 dl:224-224 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b330/2084 bl:2.4518 bb:1.2071 rl:2.3207 rb:1.0575 dl:221-221 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b321/2084 bl:2.3923 bb:1.0570 rl:2.3207 rb:1.0575 dl:218-218 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b310/2084 bl:2.4888 bb:1.2081 rl:2.3209 
rb:1.0576 dl:214-214 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b296/2084 bl:2.3133 bb:1.1596 rl:2.3209 rb:1.0577 dl:209-210 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b288/2084 bl:2.4947 bb:1.1489 rl:2.3210 rb:1.0578 dl:206-207 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b281/2084 bl:2.3454 bb:1.1827 rl:2.3210 rb:1.0579 dl:204-204 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b270/2084 bl:2.4465 bb:1.1597 rl:2.3212 rb:1.0580 dl:201-201 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b263/2084 bl:2.5173 bb:1.1869 rl:2.3213 rb:1.0581 dl:198-198 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b253/2084 bl:2.3199 bb:1.1032 rl:2.3213 rb:1.0581 dl:195-195 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b238/2084 bl:2.4639 bb:1.1446 rl:2.3214 rb:1.0582 dl:190-190 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b229/2084 bl:2.4413 bb:1.2025 rl:2.3215 rb:1.0583 dl:186-187 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b224/2084 bl:2.4476 bb:1.1142 rl:2.3216 rb:1.0583 dl:185-185 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b214/2084 bl:2.3834 bb:1.1354 rl:2.3217 rb:1.0584 dl:182-182 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b204/2084 bl:2.4543 bb:1.2537 rl:2.3217 rb:1.0585 dl:178-178 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b191/2084 bl:2.4425 bb:1.1798 rl:2.3218 rb:1.0586 dl:174-174 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b180/2084 bl:2.4777 bb:1.2139 rl:2.3219 rb:1.0587 dl:170-171 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b172/2084 bl:2.3654 bb:1.1738 rl:2.3220 rb:1.0587 dl:167-168 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b160/2084 bl:2.3825 bb:1.1635 rl:2.3220 rb:1.0588 dl:163-164 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b153/2084 bl:2.5661 bb:1.2087 rl:2.3222 rb:1.0589 dl:161-161 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b141/2084 bl:2.3712 bb:1.1323 rl:2.3222 rb:1.0589 dl:157-157 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b130/2084 bl:2.3989 bb:1.1294 rl:2.3222 rb:1.0590 dl:152-153 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b122/2084 bl:2.5333 bb:1.2096 rl:2.3224 rb:1.0591 dl:149-150 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b113/2084 bl:2.3599 bb:1.1535 rl:2.3224 rb:1.0591 dl:146-146 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b99/2084 bl:2.5281 bb:1.1526 rl:2.3225 rb:1.0592 dl:141-142 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b93/2084 bl:2.6433 bb:1.2566 rl:2.3227 rb:1.0593 dl:139-139 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b79/2084 bl:2.4598 bb:1.2415 rl:2.3228 rb:1.0594 dl:133-134 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b69/2084 bl:2.7112 bb:1.3877 rl:2.3230 rb:1.0595 dl:129-130 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b60/2084 bl:2.7497 bb:1.2740 rl:2.3232 rb:1.0596 dl:126-126 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b48/2084 bl:2.7161 bb:1.2188 rl:2.3234 rb:1.0597 dl:120-121 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b41/2084 bl:2.6497 bb:1.2703 rl:2.3235 rb:1.0598 dl:117-117 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b27/2084 bl:2.5164 bb:1.1627 rl:2.3236 rb:1.0599 dl:108-109 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b17/2084 bl:2.7736 bb:1.2126 rl:2.3238 rb:1.0599 dl:101-102 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b7/2084 bl:2.6846 bb:1.2140 rl:2.3239 rb:1.0600 dl:91-92 gd:1 sr:0 sf:1 tr:24/24 wt:0 +quantized_ttt_phased val_loss:2.31156036 val_bpb:1.05629015 eval_time:583138ms +total_eval_time:583.1s diff --git a/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/train_eval_seed314_corrected_token_only.log b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/train_eval_seed314_corrected_token_only.log new file mode 100644 index 0000000000..8294baa091 --- /dev/null +++ b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/train_eval_seed314_corrected_token_only.log @@ -0,0 +1,873 @@ +W0502 18:33:22.649000 458757 torch/distributed/run.py:803] +W0502 
18:33:22.649000 458757 torch/distributed/run.py:803] ***************************************** +W0502 18:33:22.649000 458757 torch/distributed/run.py:803] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. +W0502 18:33:22.649000 458757 torch/distributed/run.py:803] ***************************************** +Hyperparameters: + adam_eps: 1e-08 + adam_wd: 0.02 + agree_add_boost: 0.0 + artifact_dir: /workspace/parameter-golf-pr2014-gated-clean/records/corrected_zeroed_channels_authorhf_hardoff_20260502/seed314 + attn_clip_sigmas: 13.0 + attn_out_gate_enabled: False + attn_out_gate_src: proj + awq_lite_bits: 8 + awq_lite_enabled: True + awq_lite_group_size: 64 + awq_lite_group_top_k: 1 + beta1: 0.9 + beta2: 0.99 + caseops_enabled: True + compile_shape_warmup: True + compile_shape_warmup_iters: 1 + compile_shape_warmup_loop_modes: auto + compressor: pergroup + data_dir: ./data + datasets_dir: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved + distributed: True + ema_decay: 0.9965 + embed_bits: 7 + embed_clip_sigmas: 14.0 + embed_lr: 0.6 + embed_wd: 0.085 + enable_looping_at: 0.35 + eval_include_tail: True + eval_seq_len: 3072 + eval_stride: 1536 + fused_ce_enabled: True + gate_window: 12 + gated_attn_enabled: False + gated_attn_init_std: 0.01 + gated_attn_quant_gate: True + global_ttt_batch_seqs: 32 + global_ttt_chunk_tokens: 32768 + global_ttt_epochs: 1 + global_ttt_grad_clip: 1.0 + global_ttt_lr: 0.001 + global_ttt_momentum: 0.9 + global_ttt_respect_doc_boundaries: True + global_ttt_warmup_chunks: 0 + global_ttt_warmup_start_lr: 0.0 + gptq_calibration_batches: 16 + gptq_reserve_seconds: 4.0 + grad_accum_steps: 1 + grad_clip_norm: 0.3 + is_main_process: True + iterations: 20000 + leaky_relu_sq_slope: 0.3 + ln_scale: True + local_rank: 0 + logfile: /workspace/parameter-golf-pr2014-gated-clean/records/corrected_zeroed_channels_authorhf_hardoff_20260502/seed314/pr2140_corrected_authorhf_hardoff_s314.txt + logit_softcap: 30.0 + loop_end: 5 + loop_start: 3 + lqer_asym_enabled: True + lqer_asym_group: 64 + lqer_enabled: True + lqer_factor_bits: 4 + lqer_gain_select: False + lqer_rank: 4 + lqer_scope: all + lqer_top_k: 3 + matrix_bits: 6 + matrix_clip_sigmas: 12.85 + matrix_lr: 0.026 + max_wallclock_seconds: 600.0 + midrun_cap_log_updates: False + midrun_cap_schedule: + min_lr: 0.1 + mlp_clip_sigmas: 11.5 + mlp_mult: 4.0 + model_dim: 512 + model_path: /workspace/parameter-golf-pr2014-gated-clean/records/corrected_zeroed_channels_authorhf_hardoff_20260502/seed314/final_model.pt + muon_backend_steps: 5 + muon_momentum: 0.97 + muon_momentum_warmup_start: 0.92 + muon_momentum_warmup_steps: 1500 + muon_row_normalize: True + muon_wd: 0.095 + ngram_hint_precompute_outside: False + ngram_tilt_enabled: True + num_heads: 8 + num_kv_heads: 4 + num_layers: 11 + num_loops: 2 + parallel_final_lane: mean + parallel_start_layer: 8 + phased_ttt_num_phases: 1 + phased_ttt_prefix_docs: 2500 + qk_gain_init: 5.25 + quantized_model_path: /workspace/parameter-golf-pr2014-gated-clean/records/corrected_zeroed_channels_authorhf_hardoff_20260502/seed314/final_model.int6.ptz + rank: 0 + rope_base: 10000.0 + rope_dims: 16 + rope_train_seq_len: 3072 + rope_yarn: False + run_id: pr2140_corrected_authorhf_hardoff_s314 + scalar_lr: 0.02 + seed: 314 + seq_change_warmup_steps: 32 + skip_gates_enabled: True + skylight_norm_beta2: 0.95 + 
skylight_norm_ema: False + skylight_norm_eps: 1e-07 + skylight_uw_floor: False + skylight_uw_ratio: 0.35 + smear_gate_enabled: True + sparse_attn_gate_enabled: True + sparse_attn_gate_init_std: 0.0 + sparse_attn_gate_scale: 0.5 + tie_embeddings: True + tied_embed_init_std: 0.005 + tied_embed_lr: 0.03 + token_boost: 2.625 + token_order: 16 + token_threshold: 0.8 + tokenizer_path: /tmp/parameter-golf-data-authorhf/tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model + train_batch_tokens: 786432 + train_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_train_*.bin + train_log_every: 500 + train_seq_len: 3072 + train_seq_schedule: 1024@0.100,2048@0.700,3072@1.000 + train_seq_schedule_mode: wallclock + ttt_batch_size: 24 + ttt_beta1: 0.0 + ttt_beta2: 0.99 + ttt_chunk_size: 64 + ttt_enabled: True + ttt_eval_batches: + ttt_eval_seq_len: 3072 + ttt_grad_steps: 1 + ttt_k_lora: True + ttt_local_lr_mult: 0.75 + ttt_lora_lr: 0.0001 + ttt_lora_rank: 80 + ttt_mask: no_qv + ttt_mlp_lora: True + ttt_o_lora: True + ttt_optimizer: adam + ttt_q_lora: False + ttt_short_beta2: 0.99 + ttt_short_chunk_size: 32 + ttt_short_doc_len: 2000 + ttt_short_lora_enabled: False + ttt_short_lora_lr: 0.0001 + ttt_short_lora_rank: 80 + ttt_short_score_first_enabled: True + ttt_short_score_first_steps: 256:16,2000:32 + ttt_short_weight_decay: 0.5 + ttt_train_max_doc_len: 0 + ttt_train_min_doc_len: 0 + ttt_v_lora: False + ttt_warm_start_mean_doc_len: 2000 + ttt_warm_start_mean_enabled: False + ttt_warm_start_mean_momentum: 0.95 + ttt_weight_decay: 0.5 + val_batch_tokens: 524288 + val_bytes_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_val_bytes_*.bin + val_doc_fraction: 1.0 + val_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_val_*.bin + val_loss_every: 0 + vocab_size: 8192 + warmdown_frac: 0.85 + warmdown_iters: 0 + warmup_steps: 20 + within_boost: 0.0 + within_tau: 0.45 + word_boost: 0.0 + word_normalize: strip_punct_lower + word_order: 4 + word_tau: 0.65 + world_size: 8 + xsa_last_n: 11 +train_shards: 80 +val_tokens: 47853343 +model_params:35945673 +train_seq_schedule:1024@0.100,2048@0.700,3072@1.000 +local_microbatch_tokens:98304 +growth_stage:seq_len:1024 progress:0.000 +gptq:reserving 4s, effective=596000ms +warmup_cu_buckets:64,128,192,256 iters_each:3 +compile_shape_warmup:start 1024xplain,2048xplain,2048xloop,3072xloop +compile_shape_warmup:shape seq_len:1024 loop:0 +compile_shape_warmup:shape seq_len:2048 loop:0 +compile_shape_warmup:shape seq_len:2048 loop:1 +compile_shape_warmup:shape seq_len:3072 loop:1 +warmup_step: 1/20 +warmup_step: 2/20 +warmup_step: 3/20 +warmup_step: 4/20 +warmup_step: 5/20 +warmup_step: 6/20 +warmup_step: 10/20 +warmup_step: 20/20 +loop_warmup:enabled encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +loop_warmup_step: 1/20 +loop_warmup_step: 2/20 +loop_warmup_step: 3/20 +loop_warmup_step: 4/20 +loop_warmup_step: 5/20 +loop_warmup_step: 6/20 +loop_warmup_step: 10/20 +loop_warmup_step: 20/20 +1/20000 train_loss: 8.9988 train_time: 0.0m tok/s: 18309210 +2/20000 train_loss: 12.8536 train_time: 0.0m tok/s: 754861 +3/20000 train_loss: 10.2144 train_time: 0.0m tok/s: 1086644 +4/20000 train_loss: 8.6549 train_time: 0.0m tok/s: 1388255 +5/20000 train_loss: 7.9020 train_time: 0.0m tok/s: 1666596 +500/20000 train_loss: 2.6013 train_time: 0.9m tok/s: 7696538 +growth_stage:seq_len:2048 
progress:0.100 step:592 +growth_stage_rewarmup:start step:592 steps:32 seq_len:2048 +1000/20000 train_loss: 2.5776 train_time: 1.6m tok/s: 7979023 +1500/20000 train_loss: 2.6200 train_time: 2.4m tok/s: 8062534 +2000/20000 train_loss: 2.6515 train_time: 3.2m tok/s: 8101902 +layer_loop:enabled step:2152 frac:0.350 encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +2500/20000 train_loss: 2.5014 train_time: 4.3m tok/s: 7587170 +3000/20000 train_loss: 2.4531 train_time: 5.5m tok/s: 7171426 +3500/20000 train_loss: 2.4613 train_time: 6.6m tok/s: 6902177 +growth_stage:seq_len:3072 progress:0.700 step:3632 +growth_stage_rewarmup:start step:3632 steps:32 seq_len:3072 +4000/20000 train_loss: 2.3752 train_time: 7.8m tok/s: 6686862 +4500/20000 train_loss: 2.3437 train_time: 9.0m tok/s: 6521189 +4855/20000 val_loss: 2.3444 val_bpb: 1.0713 +stopping_early: wallclock_cap train_time: 596099ms step: 4855/20000 +peak memory allocated: 41707 MiB reserved: 46984 MiB +ema:applying EMA weights +diagnostic pre-quantization post-ema val_loss:2.31997987 val_bpb:1.06013753 eval_time:17540ms +Serialized model: 135418111 bytes +Code size (uncompressed): 207577 bytes +Code size (compressed): 40445 bytes +GPTQ:collecting Hessians from calibration data... +GPTQ:collected 67 Hessians in 4.1s +Quantized weights: + gate_int8_row: blocks.attn.attn_gate_w + gptq (int6): blocks.attn.c_k.weight, blocks.attn.c_q.weight, blocks.attn.c_v.weight, blocks.attn.proj.weight, blocks.mlp.fc.weight, blocks.mlp.proj.weight + gptq (int6)+lqer_asym: blocks.mlp.fc.weight + gptq (int7)+awqgrpint8+lqer_asym: tok_emb.weight + passthrough (float16): blocks.attn.q_gain, blocks.attn_scale, blocks.mlp_scale, blocks.resid_mix, parallel_post_lambdas, parallel_resid_lambdas, skip_gates, skip_weights, smear_gate.weight, smear_lambda, softcap_neg, softcap_pos +Serialize: per-group lrzip compression... +Serialize: per-group compression done in 112.8s +Serialized model quantized+pergroup: 15942988 bytes +Total submission size quantized+pergroup: 15983433 bytes +Deserialize: per-group lrzip decompression... +Deserialize: decompression done in 18.1s +diagnostic quantized val_loss:2.33736937 val_bpb:1.06808383 eval_time:17623ms +Deserialize: per-group lrzip decompression... 
+Deserialize: decompression done in 18.0s +ttt_lora:warming up compile (random tokens, no val data) +ttt_lora:compile warmup done (191.0s) + +beginning TTT eval timer +ngram_tilt:hints total=47853343 gated=628156 token_gate=628156 within_gate=0 word_gate=0 agree2plus=0 +ngram_tilt:precompute_outside_timer_done elapsed=15.81s total_targets=47853343 +ttt_phased: total_docs:50000 prefix_docs:2500 suffix_docs:47500 num_phases:1 boundaries:[2500] target_tokens:47853343 +ttp: b2079/2084 bl:2.2441 bb:1.0806 rl:2.2441 rb:1.0806 dl:13679-14936 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2074/2084 bl:2.4373 bb:1.1090 rl:2.3227 rb:1.0925 dl:9553-10083 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2069/2084 bl:2.3061 bb:1.0417 rl:2.3185 rb:1.0794 dl:7881-8179 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2063/2084 bl:2.2769 bb:1.0724 rl:2.3114 rb:1.0782 dl:6523-6721 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2059/2084 bl:2.2028 bb:1.0691 rl:2.2966 rb:1.0770 dl:6007-6142 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2048/2084 bl:2.1900 bb:1.0520 rl:2.2858 rb:1.0745 dl:4995-5083 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2044/2084 bl:2.1648 bb:1.0784 rl:2.2753 rb:1.0749 dl:4697-4743 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2035/2084 bl:2.3021 bb:1.0778 rl:2.2773 rb:1.0751 dl:4250-4292 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2028/2084 bl:2.4790 bb:1.1224 rl:2.2900 rb:1.0782 dl:3935-3966 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2023/2084 bl:2.3773 bb:1.0542 rl:2.2949 rb:1.0767 dl:3761-3786 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2014/2084 bl:2.3314 bb:1.0515 rl:2.2967 rb:1.0754 dl:3455-3487 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2007/2084 bl:2.2434 bb:0.9966 rl:2.2943 rb:1.0717 dl:3303-3324 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2002/2084 bl:2.3702 bb:1.0483 rl:2.2975 rb:1.0707 dl:3177-3210 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1988/2084 bl:2.4961 bb:1.0639 rl:2.3048 rb:1.0704 dl:2890-2911 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1980/2084 bl:2.3092 bb:1.0203 rl:2.3049 rb:1.0686 dl:2770-2786 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttpp: phase:1/1 pd:2672 gd:2500 t:157.1s +tttg: c1/344 lr:0.001000 t:0.2s +tttg: c2/344 lr:0.001000 t:0.3s +tttg: c3/344 lr:0.001000 t:0.4s +tttg: c4/344 lr:0.001000 t:2.6s +tttg: c5/344 lr:0.001000 t:2.7s +tttg: c6/344 lr:0.000999 t:2.8s +tttg: c7/344 lr:0.000999 t:2.9s +tttg: c8/344 lr:0.000999 t:3.0s +tttg: c9/344 lr:0.000999 t:3.1s +tttg: c10/344 lr:0.000998 t:5.3s +tttg: c11/344 lr:0.000998 t:5.4s +tttg: c12/344 lr:0.000997 t:5.5s +tttg: c13/344 lr:0.000997 t:5.6s +tttg: c14/344 lr:0.000996 t:5.7s +tttg: c15/344 lr:0.000996 t:5.8s +tttg: c16/344 lr:0.000995 t:5.9s +tttg: c17/344 lr:0.000995 t:6.0s +tttg: c18/344 lr:0.000994 t:6.1s +tttg: c19/344 lr:0.000993 t:6.2s +tttg: c20/344 lr:0.000992 t:6.3s +tttg: c21/344 lr:0.000992 t:6.4s +tttg: c22/344 lr:0.000991 t:6.5s +tttg: c23/344 lr:0.000990 t:6.6s +tttg: c24/344 lr:0.000989 t:6.7s +tttg: c25/344 lr:0.000988 t:6.7s +tttg: c26/344 lr:0.000987 t:6.8s +tttg: c27/344 lr:0.000986 t:6.9s +tttg: c28/344 lr:0.000985 t:7.0s +tttg: c29/344 lr:0.000984 t:7.1s +tttg: c30/344 lr:0.000982 t:7.2s +tttg: c31/344 lr:0.000981 t:7.3s +tttg: c32/344 lr:0.000980 t:7.4s +tttg: c33/344 lr:0.000979 t:7.5s +tttg: c34/344 lr:0.000977 t:7.6s +tttg: c35/344 lr:0.000976 t:7.7s +tttg: c36/344 lr:0.000975 t:7.8s +tttg: c37/344 lr:0.000973 t:7.9s +tttg: c38/344 lr:0.000972 t:8.0s +tttg: c39/344 lr:0.000970 t:8.0s +tttg: c40/344 lr:0.000968 t:8.1s +tttg: c41/344 lr:0.000967 t:8.2s +tttg: c42/344 lr:0.000965 t:8.3s +tttg: c43/344 lr:0.000963 t:8.4s +tttg: c44/344 lr:0.000962 t:8.5s +tttg: c45/344 lr:0.000960 t:8.6s +tttg: c46/344 lr:0.000958 t:8.7s 
+tttg: c47/344 lr:0.000956 t:8.8s +tttg: c48/344 lr:0.000954 t:8.9s +tttg: c49/344 lr:0.000952 t:9.0s +tttg: c50/344 lr:0.000950 t:9.1s +tttg: c51/344 lr:0.000948 t:9.2s +tttg: c52/344 lr:0.000946 t:9.2s +tttg: c53/344 lr:0.000944 t:9.3s +tttg: c54/344 lr:0.000942 t:9.4s +tttg: c55/344 lr:0.000940 t:9.5s +tttg: c56/344 lr:0.000938 t:9.6s +tttg: c57/344 lr:0.000936 t:9.7s +tttg: c58/344 lr:0.000933 t:9.8s +tttg: c59/344 lr:0.000931 t:9.9s +tttg: c60/344 lr:0.000929 t:10.0s +tttg: c61/344 lr:0.000926 t:10.1s +tttg: c62/344 lr:0.000924 t:10.2s +tttg: c63/344 lr:0.000922 t:10.3s +tttg: c64/344 lr:0.000919 t:10.4s +tttg: c65/344 lr:0.000917 t:10.5s +tttg: c66/344 lr:0.000914 t:10.6s +tttg: c67/344 lr:0.000911 t:10.7s +tttg: c68/344 lr:0.000909 t:10.8s +tttg: c69/344 lr:0.000906 t:10.9s +tttg: c70/344 lr:0.000903 t:10.9s +tttg: c71/344 lr:0.000901 t:11.0s +tttg: c72/344 lr:0.000898 t:11.1s +tttg: c73/344 lr:0.000895 t:11.2s +tttg: c74/344 lr:0.000892 t:11.3s +tttg: c75/344 lr:0.000889 t:11.4s +tttg: c76/344 lr:0.000887 t:11.5s +tttg: c77/344 lr:0.000884 t:11.6s +tttg: c78/344 lr:0.000881 t:11.7s +tttg: c79/344 lr:0.000878 t:11.8s +tttg: c80/344 lr:0.000875 t:11.9s +tttg: c81/344 lr:0.000872 t:12.0s +tttg: c82/344 lr:0.000869 t:12.1s +tttg: c83/344 lr:0.000865 t:12.2s +tttg: c84/344 lr:0.000862 t:12.2s +tttg: c85/344 lr:0.000859 t:12.3s +tttg: c86/344 lr:0.000856 t:12.4s +tttg: c87/344 lr:0.000853 t:12.5s +tttg: c88/344 lr:0.000849 t:12.6s +tttg: c89/344 lr:0.000846 t:12.7s +tttg: c90/344 lr:0.000843 t:12.8s +tttg: c91/344 lr:0.000840 t:12.9s +tttg: c92/344 lr:0.000836 t:13.0s +tttg: c93/344 lr:0.000833 t:13.1s +tttg: c94/344 lr:0.000829 t:13.2s +tttg: c95/344 lr:0.000826 t:13.3s +tttg: c96/344 lr:0.000822 t:13.4s +tttg: c97/344 lr:0.000819 t:13.5s +tttg: c98/344 lr:0.000815 t:13.6s +tttg: c99/344 lr:0.000812 t:13.7s +tttg: c100/344 lr:0.000808 t:13.8s +tttg: c101/344 lr:0.000805 t:13.9s +tttg: c102/344 lr:0.000801 t:13.9s +tttg: c103/344 lr:0.000797 t:14.0s +tttg: c104/344 lr:0.000794 t:14.1s +tttg: c105/344 lr:0.000790 t:14.2s +tttg: c106/344 lr:0.000786 t:14.3s +tttg: c107/344 lr:0.000782 t:14.4s +tttg: c108/344 lr:0.000778 t:14.5s +tttg: c109/344 lr:0.000775 t:14.6s +tttg: c110/344 lr:0.000771 t:14.7s +tttg: c111/344 lr:0.000767 t:14.8s +tttg: c112/344 lr:0.000763 t:14.9s +tttg: c113/344 lr:0.000759 t:15.0s +tttg: c114/344 lr:0.000755 t:15.1s +tttg: c115/344 lr:0.000751 t:15.2s +tttg: c116/344 lr:0.000747 t:15.2s +tttg: c117/344 lr:0.000743 t:15.3s +tttg: c118/344 lr:0.000739 t:15.4s +tttg: c119/344 lr:0.000735 t:15.5s +tttg: c120/344 lr:0.000731 t:15.6s +tttg: c121/344 lr:0.000727 t:15.7s +tttg: c122/344 lr:0.000723 t:15.8s +tttg: c123/344 lr:0.000719 t:15.9s +tttg: c124/344 lr:0.000715 t:16.0s +tttg: c125/344 lr:0.000711 t:16.1s +tttg: c126/344 lr:0.000707 t:16.2s +tttg: c127/344 lr:0.000702 t:16.3s +tttg: c128/344 lr:0.000698 t:16.4s +tttg: c129/344 lr:0.000694 t:16.5s +tttg: c130/344 lr:0.000690 t:16.5s +tttg: c131/344 lr:0.000686 t:16.6s +tttg: c132/344 lr:0.000681 t:16.7s +tttg: c133/344 lr:0.000677 t:16.8s +tttg: c134/344 lr:0.000673 t:16.9s +tttg: c135/344 lr:0.000668 t:17.0s +tttg: c136/344 lr:0.000664 t:17.1s +tttg: c137/344 lr:0.000660 t:17.2s +tttg: c138/344 lr:0.000655 t:17.3s +tttg: c139/344 lr:0.000651 t:17.4s +tttg: c140/344 lr:0.000647 t:17.5s +tttg: c141/344 lr:0.000642 t:17.6s +tttg: c142/344 lr:0.000638 t:17.7s +tttg: c143/344 lr:0.000633 t:17.7s +tttg: c144/344 lr:0.000629 t:17.8s +tttg: c145/344 lr:0.000625 t:17.9s +tttg: c146/344 lr:0.000620 t:18.0s +tttg: c147/344 
lr:0.000616 t:18.1s +tttg: c148/344 lr:0.000611 t:18.2s +tttg: c149/344 lr:0.000607 t:18.3s +tttg: c150/344 lr:0.000602 t:18.4s +tttg: c151/344 lr:0.000598 t:18.5s +tttg: c152/344 lr:0.000593 t:18.6s +tttg: c153/344 lr:0.000589 t:18.7s +tttg: c154/344 lr:0.000584 t:18.8s +tttg: c155/344 lr:0.000580 t:18.9s +tttg: c156/344 lr:0.000575 t:18.9s +tttg: c157/344 lr:0.000571 t:19.0s +tttg: c158/344 lr:0.000566 t:19.1s +tttg: c159/344 lr:0.000562 t:19.2s +tttg: c160/344 lr:0.000557 t:19.3s +tttg: c161/344 lr:0.000553 t:19.4s +tttg: c162/344 lr:0.000548 t:19.5s +tttg: c163/344 lr:0.000543 t:19.6s +tttg: c164/344 lr:0.000539 t:19.7s +tttg: c165/344 lr:0.000534 t:19.9s +tttg: c166/344 lr:0.000530 t:20.0s +tttg: c167/344 lr:0.000525 t:20.1s +tttg: c168/344 lr:0.000521 t:20.2s +tttg: c169/344 lr:0.000516 t:20.3s +tttg: c170/344 lr:0.000511 t:20.4s +tttg: c171/344 lr:0.000507 t:20.5s +tttg: c172/344 lr:0.000502 t:20.6s +tttg: c173/344 lr:0.000498 t:20.7s +tttg: c174/344 lr:0.000493 t:20.8s +tttg: c175/344 lr:0.000489 t:20.9s +tttg: c176/344 lr:0.000484 t:21.0s +tttg: c177/344 lr:0.000479 t:21.2s +tttg: c178/344 lr:0.000475 t:21.3s +tttg: c179/344 lr:0.000470 t:21.4s +tttg: c180/344 lr:0.000466 t:21.5s +tttg: c181/344 lr:0.000461 t:21.6s +tttg: c182/344 lr:0.000457 t:21.7s +tttg: c183/344 lr:0.000452 t:21.8s +tttg: c184/344 lr:0.000447 t:21.9s +tttg: c185/344 lr:0.000443 t:22.0s +tttg: c186/344 lr:0.000438 t:22.1s +tttg: c187/344 lr:0.000434 t:22.2s +tttg: c188/344 lr:0.000429 t:22.3s +tttg: c189/344 lr:0.000425 t:22.5s +tttg: c190/344 lr:0.000420 t:22.6s +tttg: c191/344 lr:0.000416 t:22.7s +tttg: c192/344 lr:0.000411 t:22.8s +tttg: c193/344 lr:0.000407 t:22.9s +tttg: c194/344 lr:0.000402 t:23.0s +tttg: c195/344 lr:0.000398 t:23.1s +tttg: c196/344 lr:0.000393 t:23.2s +tttg: c197/344 lr:0.000389 t:23.3s +tttg: c198/344 lr:0.000384 t:23.4s +tttg: c199/344 lr:0.000380 t:23.5s +tttg: c200/344 lr:0.000375 t:23.6s +tttg: c201/344 lr:0.000371 t:23.7s +tttg: c202/344 lr:0.000367 t:23.8s +tttg: c203/344 lr:0.000362 t:24.0s +tttg: c204/344 lr:0.000358 t:24.1s +tttg: c205/344 lr:0.000353 t:24.2s +tttg: c206/344 lr:0.000349 t:24.3s +tttg: c207/344 lr:0.000345 t:24.4s +tttg: c208/344 lr:0.000340 t:24.5s +tttg: c209/344 lr:0.000336 t:24.6s +tttg: c210/344 lr:0.000332 t:24.7s +tttg: c211/344 lr:0.000327 t:24.8s +tttg: c212/344 lr:0.000323 t:24.9s +tttg: c213/344 lr:0.000319 t:25.0s +tttg: c214/344 lr:0.000314 t:25.1s +tttg: c215/344 lr:0.000310 t:25.2s +tttg: c216/344 lr:0.000306 t:25.3s +tttg: c217/344 lr:0.000302 t:25.4s +tttg: c218/344 lr:0.000298 t:25.5s +tttg: c219/344 lr:0.000293 t:25.6s +tttg: c220/344 lr:0.000289 t:25.8s +tttg: c221/344 lr:0.000285 t:25.9s +tttg: c222/344 lr:0.000281 t:26.0s +tttg: c223/344 lr:0.000277 t:26.1s +tttg: c224/344 lr:0.000273 t:26.2s +tttg: c225/344 lr:0.000269 t:26.3s +tttg: c226/344 lr:0.000265 t:26.4s +tttg: c227/344 lr:0.000261 t:26.5s +tttg: c228/344 lr:0.000257 t:26.6s +tttg: c229/344 lr:0.000253 t:26.7s +tttg: c230/344 lr:0.000249 t:26.8s +tttg: c231/344 lr:0.000245 t:26.9s +tttg: c232/344 lr:0.000241 t:27.1s +tttg: c233/344 lr:0.000237 t:27.2s +tttg: c234/344 lr:0.000233 t:27.3s +tttg: c235/344 lr:0.000229 t:27.4s +tttg: c236/344 lr:0.000225 t:27.5s +tttg: c237/344 lr:0.000222 t:27.6s +tttg: c238/344 lr:0.000218 t:27.7s +tttg: c239/344 lr:0.000214 t:27.8s +tttg: c240/344 lr:0.000210 t:27.9s +tttg: c241/344 lr:0.000206 t:28.0s +tttg: c242/344 lr:0.000203 t:28.2s +tttg: c243/344 lr:0.000199 t:28.3s +tttg: c244/344 lr:0.000195 t:28.4s +tttg: c245/344 lr:0.000192 t:28.5s +tttg: 
c246/344 lr:0.000188 t:28.7s +tttg: c247/344 lr:0.000185 t:28.8s +tttg: c248/344 lr:0.000181 t:28.9s +tttg: c249/344 lr:0.000178 t:29.0s +tttg: c250/344 lr:0.000174 t:29.1s +tttg: c251/344 lr:0.000171 t:29.2s +tttg: c252/344 lr:0.000167 t:29.4s +tttg: c253/344 lr:0.000164 t:29.5s +tttg: c254/344 lr:0.000160 t:29.6s +tttg: c255/344 lr:0.000157 t:29.7s +tttg: c256/344 lr:0.000154 t:29.8s +tttg: c257/344 lr:0.000151 t:29.9s +tttg: c258/344 lr:0.000147 t:30.0s +tttg: c259/344 lr:0.000144 t:30.1s +tttg: c260/344 lr:0.000141 t:30.3s +tttg: c261/344 lr:0.000138 t:30.4s +tttg: c262/344 lr:0.000135 t:30.5s +tttg: c263/344 lr:0.000131 t:30.6s +tttg: c264/344 lr:0.000128 t:30.7s +tttg: c265/344 lr:0.000125 t:30.8s +tttg: c266/344 lr:0.000122 t:30.9s +tttg: c267/344 lr:0.000119 t:31.0s +tttg: c268/344 lr:0.000116 t:31.1s +tttg: c269/344 lr:0.000113 t:31.2s +tttg: c270/344 lr:0.000111 t:31.3s +tttg: c271/344 lr:0.000108 t:31.4s +tttg: c272/344 lr:0.000105 t:31.6s +tttg: c273/344 lr:0.000102 t:31.7s +tttg: c274/344 lr:0.000099 t:31.8s +tttg: c275/344 lr:0.000097 t:31.9s +tttg: c276/344 lr:0.000094 t:32.0s +tttg: c277/344 lr:0.000091 t:32.1s +tttg: c278/344 lr:0.000089 t:32.2s +tttg: c279/344 lr:0.000086 t:32.3s +tttg: c280/344 lr:0.000083 t:32.4s +tttg: c281/344 lr:0.000081 t:32.5s +tttg: c282/344 lr:0.000078 t:32.6s +tttg: c283/344 lr:0.000076 t:32.8s +tttg: c284/344 lr:0.000074 t:32.9s +tttg: c285/344 lr:0.000071 t:33.0s +tttg: c286/344 lr:0.000069 t:33.1s +tttg: c287/344 lr:0.000067 t:33.2s +tttg: c288/344 lr:0.000064 t:33.3s +tttg: c289/344 lr:0.000062 t:33.4s +tttg: c290/344 lr:0.000060 t:33.5s +tttg: c291/344 lr:0.000058 t:33.6s +tttg: c292/344 lr:0.000056 t:33.7s +tttg: c293/344 lr:0.000054 t:33.8s +tttg: c294/344 lr:0.000052 t:33.9s +tttg: c295/344 lr:0.000050 t:34.1s +tttg: c296/344 lr:0.000048 t:34.2s +tttg: c297/344 lr:0.000046 t:34.3s +tttg: c298/344 lr:0.000044 t:34.4s +tttg: c299/344 lr:0.000042 t:34.5s +tttg: c300/344 lr:0.000040 t:34.6s +tttg: c301/344 lr:0.000038 t:34.7s +tttg: c302/344 lr:0.000037 t:34.8s +tttg: c303/344 lr:0.000035 t:34.9s +tttg: c304/344 lr:0.000033 t:35.0s +tttg: c305/344 lr:0.000032 t:35.1s +tttg: c306/344 lr:0.000030 t:35.3s +tttg: c307/344 lr:0.000028 t:35.4s +tttg: c308/344 lr:0.000027 t:35.5s +tttg: c309/344 lr:0.000025 t:35.6s +tttg: c310/344 lr:0.000024 t:35.7s +tttg: c311/344 lr:0.000023 t:35.8s +tttg: c312/344 lr:0.000021 t:35.9s +tttg: c313/344 lr:0.000020 t:36.0s +tttg: c314/344 lr:0.000019 t:36.1s +tttg: c315/344 lr:0.000018 t:36.2s +tttg: c316/344 lr:0.000016 t:36.4s +tttg: c317/344 lr:0.000015 t:36.5s +tttg: c318/344 lr:0.000014 t:36.6s +tttg: c319/344 lr:0.000013 t:36.7s +tttg: c320/344 lr:0.000012 t:36.8s +tttg: c321/344 lr:0.000011 t:36.9s +tttg: c322/344 lr:0.000010 t:37.0s +tttg: c323/344 lr:0.000009 t:37.1s +tttg: c324/344 lr:0.000008 t:37.2s +tttg: c325/344 lr:0.000008 t:37.3s +tttg: c326/344 lr:0.000007 t:37.4s +tttg: c327/344 lr:0.000006 t:37.6s +tttg: c328/344 lr:0.000005 t:37.7s +tttg: c329/344 lr:0.000005 t:37.8s +tttg: c330/344 lr:0.000004 t:37.9s +tttg: c331/344 lr:0.000004 t:38.0s +tttg: c332/344 lr:0.000003 t:38.1s +tttg: c333/344 lr:0.000003 t:38.2s +tttg: c334/344 lr:0.000002 t:38.3s +tttg: c335/344 lr:0.000002 t:38.4s +tttg: c336/344 lr:0.000001 t:38.5s +tttg: c337/344 lr:0.000001 t:38.6s +tttg: c338/344 lr:0.000001 t:38.8s +tttg: c339/344 lr:0.000001 t:38.9s +tttg: c340/344 lr:0.000000 t:39.0s +tttg: c341/344 lr:0.000000 t:39.1s +tttg: c342/344 lr:0.000000 t:39.2s +tttg: c343/344 lr:0.000000 t:39.3s +ttpr: phase:1/1 t:197.0s +ttp: 
b1965/2084 bl:2.2736 bb:1.0067 rl:2.3040 rb:1.0666 dl:2565-2577 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1959/2084 bl:2.2409 bb:1.0308 rl:2.3021 rb:1.0656 dl:2501-2514 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1953/2084 bl:2.2864 bb:1.0431 rl:2.3017 rb:1.0650 dl:2441-2454 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1947/2084 bl:2.2186 bb:0.9552 rl:2.2996 rb:1.0619 dl:2368-2382 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1941/2084 bl:2.3025 bb:1.0502 rl:2.2996 rb:1.0617 dl:2314-2323 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1935/2084 bl:2.2716 bb:1.0302 rl:2.2990 rb:1.0609 dl:2260-2270 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1929/2084 bl:2.2757 bb:1.0226 rl:2.2985 rb:1.0600 dl:2203-2216 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1923/2084 bl:2.3680 bb:1.0789 rl:2.3000 rb:1.0604 dl:2160-2164 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1917/2084 bl:2.3264 bb:1.0589 rl:2.3005 rb:1.0604 dl:2117-2122 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1911/2084 bl:2.2101 bb:0.9685 rl:2.2987 rb:1.0585 dl:2072-2081 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1905/2084 bl:2.4125 bb:1.0287 rl:2.3009 rb:1.0579 dl:2036-2041 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1899/2084 bl:2.4025 bb:1.0561 rl:2.3027 rb:1.0579 dl:1997-2004 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1893/2084 bl:2.1850 bb:1.0271 rl:2.3007 rb:1.0573 dl:1958-1963 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1887/2084 bl:2.2554 bb:1.0113 rl:2.2999 rb:1.0565 dl:1927-1931 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1881/2084 bl:2.3461 bb:1.0852 rl:2.3007 rb:1.0570 dl:1898-1902 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1875/2084 bl:2.3329 bb:1.0217 rl:2.3012 rb:1.0564 dl:1868-1873 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1869/2084 bl:2.2856 bb:1.0213 rl:2.3009 rb:1.0558 dl:1841-1846 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1861/2084 bl:2.2673 bb:1.0360 rl:2.3004 rb:1.0555 dl:1808-1813 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1855/2084 bl:2.3905 bb:1.0665 rl:2.3017 rb:1.0557 dl:1781-1785 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1847/2084 bl:2.2612 bb:1.0281 rl:2.3012 rb:1.0553 dl:1749-1753 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1838/2084 bl:2.2076 bb:1.0252 rl:2.2999 rb:1.0549 dl:1714-1718 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1829/2084 bl:2.4384 bb:1.0285 rl:2.3017 rb:1.0545 dl:1680-1684 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1822/2084 bl:2.2391 bb:1.0026 rl:2.3009 rb:1.0538 dl:1657-1659 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1813/2084 bl:2.2440 bb:1.0335 rl:2.3002 rb:1.0536 dl:1623-1627 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1806/2084 bl:2.3627 bb:1.0191 rl:2.3010 rb:1.0531 dl:1601-1604 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1797/2084 bl:2.3746 bb:1.1102 rl:2.3018 rb:1.0538 dl:1574-1577 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1789/2084 bl:2.3696 bb:1.0611 rl:2.3026 rb:1.0539 dl:1549-1552 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1781/2084 bl:2.3738 bb:1.0773 rl:2.3034 rb:1.0542 dl:1525-1529 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1775/2084 bl:2.4104 bb:1.0861 rl:2.3046 rb:1.0545 dl:1509-1512 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1766/2084 bl:2.2525 bb:1.0210 rl:2.3040 rb:1.0541 dl:1486-1488 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1759/2084 bl:2.3887 bb:1.0821 rl:2.3049 rb:1.0544 dl:1467-1471 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1750/2084 bl:2.4032 bb:1.1104 rl:2.3059 rb:1.0550 dl:1444-1447 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1743/2084 bl:2.2760 bb:0.9994 rl:2.3056 rb:1.0544 dl:1427-1429 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1735/2084 bl:2.5585 bb:1.0769 rl:2.3080 rb:1.0547 dl:1407-1409 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1727/2084 bl:2.2027 bb:1.0268 rl:2.3070 rb:1.0544 dl:1388-1390 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1719/2084 bl:2.3189 bb:1.0418 rl:2.3071 rb:1.0543 dl:1368-1371 gd:1 sr:0 sf:1 tr:24/24 
wt:0 +ttp: b1710/2084 bl:2.3300 bb:1.0737 rl:2.3073 rb:1.0545 dl:1349-1351 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1702/2084 bl:2.3201 bb:1.0214 rl:2.3075 rb:1.0542 dl:1331-1332 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1694/2084 bl:2.3993 bb:1.0290 rl:2.3083 rb:1.0539 dl:1314-1316 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1686/2084 bl:2.2944 bb:1.0178 rl:2.3081 rb:1.0536 dl:1296-1299 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1678/2084 bl:2.3203 bb:1.0623 rl:2.3082 rb:1.0537 dl:1280-1281 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1671/2084 bl:2.4122 bb:1.1088 rl:2.3091 rb:1.0541 dl:1267-1269 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1662/2084 bl:2.4308 bb:1.0442 rl:2.3100 rb:1.0540 dl:1248-1250 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1653/2084 bl:2.3268 bb:1.0482 rl:2.3102 rb:1.0540 dl:1229-1230 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1646/2084 bl:2.3053 bb:0.9910 rl:2.3101 rb:1.0535 dl:1216-1218 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1638/2084 bl:2.2655 bb:1.0455 rl:2.3098 rb:1.0534 dl:1200-1201 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1630/2084 bl:2.2372 bb:1.0155 rl:2.3093 rb:1.0532 dl:1185-1187 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1622/2084 bl:2.2105 bb:1.0205 rl:2.3086 rb:1.0529 dl:1172-1174 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1614/2084 bl:2.3134 bb:1.0690 rl:2.3086 rb:1.0530 dl:1158-1160 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1606/2084 bl:2.3282 bb:1.0296 rl:2.3087 rb:1.0529 dl:1146-1147 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1600/2084 bl:2.2818 bb:1.0552 rl:2.3086 rb:1.0529 dl:1135-1137 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1593/2084 bl:2.3510 bb:1.0565 rl:2.3088 rb:1.0529 dl:1124-1126 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1584/2084 bl:2.2164 bb:1.0330 rl:2.3082 rb:1.0528 dl:1109-1110 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1577/2084 bl:2.1303 bb:0.9537 rl:2.3071 rb:1.0521 dl:1099-1100 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1561/2084 bl:2.1516 bb:0.9864 rl:2.3061 rb:1.0517 dl:1073-1074 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1554/2084 bl:2.3721 bb:1.0493 rl:2.3065 rb:1.0517 dl:1061-1063 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1545/2084 bl:2.1946 bb:0.9626 rl:2.3058 rb:1.0511 dl:1048-1049 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1537/2084 bl:2.3601 bb:1.0355 rl:2.3062 rb:1.0511 dl:1037-1039 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1530/2084 bl:2.4838 bb:1.1088 rl:2.3072 rb:1.0514 dl:1027-1028 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1522/2084 bl:2.2185 bb:1.0107 rl:2.3067 rb:1.0512 dl:1015-1016 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1515/2084 bl:2.3151 bb:1.0863 rl:2.3067 rb:1.0514 dl:1006-1008 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1506/2084 bl:2.3668 bb:1.0042 rl:2.3071 rb:1.0511 dl:995-996 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1498/2084 bl:2.2442 bb:1.0151 rl:2.3067 rb:1.0509 dl:985-986 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1490/2084 bl:2.3182 bb:1.0928 rl:2.3068 rb:1.0511 dl:973-975 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1482/2084 bl:2.3815 bb:1.0859 rl:2.3072 rb:1.0513 dl:963-964 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1473/2084 bl:2.2293 bb:1.0246 rl:2.3068 rb:1.0511 dl:952-953 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1465/2084 bl:2.3677 bb:1.0671 rl:2.3071 rb:1.0512 dl:943-944 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1457/2084 bl:2.2351 bb:1.0236 rl:2.3067 rb:1.0511 dl:934-935 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1449/2084 bl:2.4524 bb:1.1257 rl:2.3075 rb:1.0515 dl:924-926 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1441/2084 bl:2.3091 bb:1.0626 rl:2.3075 rb:1.0515 dl:915-916 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1432/2084 bl:2.4114 bb:1.0447 rl:2.3080 rb:1.0515 dl:904-905 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1425/2084 bl:2.2374 bb:0.9287 rl:2.3076 rb:1.0508 dl:896-896 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: 
b1416/2084 bl:2.4030 bb:1.0815 rl:2.3081 rb:1.0510 dl:886-887 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1409/2084 bl:2.3285 bb:1.0032 rl:2.3082 rb:1.0507 dl:878-879 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1401/2084 bl:2.3075 bb:1.0099 rl:2.3082 rb:1.0505 dl:869-870 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1394/2084 bl:2.3130 bb:1.0470 rl:2.3082 rb:1.0505 dl:861-862 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1386/2084 bl:2.3051 bb:0.9547 rl:2.3082 rb:1.0501 dl:853-854 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1378/2084 bl:2.3882 bb:1.0217 rl:2.3085 rb:1.0499 dl:844-845 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1370/2084 bl:2.2617 bb:1.0620 rl:2.3083 rb:1.0500 dl:836-837 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1361/2084 bl:2.3704 bb:1.0357 rl:2.3086 rb:1.0499 dl:826-827 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1352/2084 bl:2.3565 bb:1.0772 rl:2.3088 rb:1.0500 dl:816-817 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1345/2084 bl:2.3528 bb:1.0898 rl:2.3090 rb:1.0502 dl:809-810 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1337/2084 bl:2.2758 bb:1.0285 rl:2.3088 rb:1.0501 dl:802-803 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1328/2084 bl:2.2352 bb:1.0080 rl:2.3085 rb:1.0499 dl:792-794 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1319/2084 bl:2.2865 bb:1.0134 rl:2.3085 rb:1.0498 dl:783-784 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1311/2084 bl:2.1950 bb:0.9895 rl:2.3080 rb:1.0496 dl:776-777 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1302/2084 bl:2.3152 bb:1.0271 rl:2.3080 rb:1.0495 dl:768-770 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1295/2084 bl:2.3313 bb:1.0239 rl:2.3081 rb:1.0494 dl:762-763 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1285/2084 bl:2.2820 bb:1.0098 rl:2.3080 rb:1.0492 dl:753-754 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1276/2084 bl:2.4280 bb:1.0643 rl:2.3085 rb:1.0493 dl:745-746 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1269/2084 bl:2.3790 bb:1.0770 rl:2.3087 rb:1.0494 dl:739-740 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1261/2084 bl:2.3425 bb:1.0671 rl:2.3088 rb:1.0494 dl:731-732 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1253/2084 bl:2.3759 bb:1.0653 rl:2.3091 rb:1.0495 dl:725-725 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1244/2084 bl:2.3186 bb:1.0806 rl:2.3091 rb:1.0496 dl:717-718 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1237/2084 bl:2.1279 bb:0.9775 rl:2.3085 rb:1.0494 dl:711-712 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1230/2084 bl:2.4108 bb:1.0416 rl:2.3088 rb:1.0493 dl:706-706 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1221/2084 bl:2.1208 bb:1.0365 rl:2.3082 rb:1.0493 dl:699-699 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1212/2084 bl:2.2921 bb:1.0135 rl:2.3082 rb:1.0492 dl:692-692 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1205/2084 bl:2.2994 bb:1.0208 rl:2.3081 rb:1.0491 dl:686-687 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1197/2084 bl:2.3370 bb:0.9954 rl:2.3082 rb:1.0489 dl:679-680 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1188/2084 bl:2.3971 bb:1.1050 rl:2.3085 rb:1.0491 dl:673-673 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1181/2084 bl:2.1714 bb:1.0074 rl:2.3081 rb:1.0489 dl:667-667 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1172/2084 bl:2.2293 bb:0.9576 rl:2.3078 rb:1.0486 dl:660-661 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1166/2084 bl:2.3158 bb:1.0557 rl:2.3079 rb:1.0486 dl:655-655 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1156/2084 bl:2.3214 bb:1.0328 rl:2.3079 rb:1.0486 dl:647-648 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1148/2084 bl:2.3315 bb:1.0349 rl:2.3080 rb:1.0486 dl:642-643 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1140/2084 bl:2.2712 bb:1.0693 rl:2.3079 rb:1.0486 dl:637-638 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1133/2084 bl:2.3599 bb:1.0755 rl:2.3080 rb:1.0487 dl:631-632 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1125/2084 bl:2.2832 bb:1.0634 rl:2.3079 rb:1.0487 dl:625-626 
gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1115/2084 bl:2.2534 bb:1.0090 rl:2.3078 rb:1.0486 dl:618-619 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1108/2084 bl:2.2999 bb:1.0308 rl:2.3078 rb:1.0486 dl:613-614 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1100/2084 bl:2.2881 bb:1.0308 rl:2.3077 rb:1.0485 dl:607-608 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1092/2084 bl:2.2985 bb:1.0329 rl:2.3077 rb:1.0485 dl:601-602 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1083/2084 bl:2.2654 bb:1.0495 rl:2.3076 rb:1.0485 dl:595-596 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1077/2084 bl:2.3250 bb:1.0148 rl:2.3076 rb:1.0484 dl:591-591 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1066/2084 bl:2.3069 bb:1.0671 rl:2.3076 rb:1.0484 dl:583-584 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1060/2084 bl:2.2320 bb:0.9998 rl:2.3074 rb:1.0483 dl:579-579 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1052/2084 bl:2.2226 bb:1.0699 rl:2.3072 rb:1.0484 dl:574-574 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1042/2084 bl:2.3963 bb:1.0973 rl:2.3074 rb:1.0485 dl:567-568 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1035/2084 bl:2.2994 bb:1.0372 rl:2.3074 rb:1.0485 dl:562-563 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1028/2084 bl:2.3350 bb:1.1063 rl:2.3075 rb:1.0486 dl:557-558 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1020/2084 bl:2.2684 bb:1.0261 rl:2.3074 rb:1.0485 dl:552-553 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1012/2084 bl:2.4470 bb:1.1526 rl:2.3077 rb:1.0488 dl:547-548 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1006/2084 bl:2.3592 bb:1.0690 rl:2.3078 rb:1.0488 dl:544-544 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1000/2084 bl:2.4214 bb:1.0471 rl:2.3081 rb:1.0488 dl:540-540 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b991/2084 bl:2.3395 bb:1.0433 rl:2.3082 rb:1.0488 dl:534-534 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b984/2084 bl:2.4089 bb:1.0621 rl:2.3084 rb:1.0488 dl:529-530 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b976/2084 bl:2.3591 bb:1.1026 rl:2.3085 rb:1.0490 dl:525-525 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b969/2084 bl:2.3364 bb:1.0439 rl:2.3086 rb:1.0490 dl:521-521 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b959/2084 bl:2.3411 bb:1.1111 rl:2.3087 rb:1.0491 dl:514-515 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b953/2084 bl:2.3968 bb:1.0779 rl:2.3089 rb:1.0492 dl:510-510 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b943/2084 bl:2.2336 bb:1.0180 rl:2.3087 rb:1.0491 dl:503-504 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b934/2084 bl:2.3855 bb:1.1078 rl:2.3089 rb:1.0492 dl:498-499 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b929/2084 bl:2.2690 bb:0.9912 rl:2.3088 rb:1.0491 dl:495-495 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b918/2084 bl:2.3677 bb:1.0765 rl:2.3089 rb:1.0491 dl:489-490 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b911/2084 bl:2.3624 bb:1.1159 rl:2.3090 rb:1.0493 dl:485-486 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b906/2084 bl:2.4110 bb:1.0696 rl:2.3092 rb:1.0493 dl:482-483 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b898/2084 bl:2.3141 bb:1.0453 rl:2.3092 rb:1.0493 dl:478-478 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b891/2084 bl:2.3677 bb:1.0443 rl:2.3094 rb:1.0493 dl:474-474 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b882/2084 bl:2.4349 bb:1.1348 rl:2.3096 rb:1.0495 dl:468-469 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b876/2084 bl:2.1649 bb:1.0640 rl:2.3093 rb:1.0495 dl:465-465 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b865/2084 bl:2.3409 bb:1.0936 rl:2.3094 rb:1.0496 dl:459-460 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b858/2084 bl:2.3622 bb:1.0632 rl:2.3095 rb:1.0496 dl:456-456 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b851/2084 bl:2.4518 bb:1.1198 rl:2.3098 rb:1.0497 dl:451-451 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b841/2084 bl:2.1572 bb:0.9970 rl:2.3095 rb:1.0496 dl:445-446 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b834/2084 bl:2.2805 bb:1.0786 rl:2.3094 
rb:1.0497 dl:441-442 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b830/2084 bl:2.2555 bb:1.0349 rl:2.3093 rb:1.0497 dl:439-439 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b820/2084 bl:2.5185 bb:1.0880 rl:2.3097 rb:1.0497 dl:434-434 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b811/2084 bl:2.2786 bb:1.0789 rl:2.3096 rb:1.0498 dl:428-429 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b805/2084 bl:2.2532 bb:0.9783 rl:2.3095 rb:1.0497 dl:425-425 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b795/2084 bl:2.3527 bb:1.0836 rl:2.3096 rb:1.0497 dl:419-420 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b790/2084 bl:2.4063 bb:1.0594 rl:2.3098 rb:1.0497 dl:417-417 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b782/2084 bl:2.2802 bb:1.0658 rl:2.3097 rb:1.0498 dl:412-412 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b772/2084 bl:2.2472 bb:1.0408 rl:2.3096 rb:1.0497 dl:406-407 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b763/2084 bl:2.3479 bb:1.0799 rl:2.3097 rb:1.0498 dl:401-402 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b755/2084 bl:2.4774 bb:1.1440 rl:2.3100 rb:1.0499 dl:397-398 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b747/2084 bl:2.4558 bb:1.1205 rl:2.3102 rb:1.0501 dl:393-394 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b739/2084 bl:2.1577 bb:1.0679 rl:2.3100 rb:1.0501 dl:389-390 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b730/2084 bl:2.1960 bb:1.0645 rl:2.3098 rb:1.0501 dl:384-385 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b724/2084 bl:2.3527 bb:1.0731 rl:2.3099 rb:1.0501 dl:382-382 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b716/2084 bl:2.4605 bb:1.1269 rl:2.3101 rb:1.0503 dl:378-378 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b708/2084 bl:2.4165 bb:1.1439 rl:2.3103 rb:1.0504 dl:374-374 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b699/2084 bl:2.4448 bb:1.1249 rl:2.3105 rb:1.0505 dl:370-370 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b691/2084 bl:2.3275 bb:1.0722 rl:2.3105 rb:1.0506 dl:366-366 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b683/2084 bl:2.4738 bb:1.1174 rl:2.3107 rb:1.0507 dl:362-362 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b674/2084 bl:2.3554 bb:1.1515 rl:2.3108 rb:1.0508 dl:358-358 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b664/2084 bl:2.4457 bb:1.1029 rl:2.3110 rb:1.0509 dl:353-354 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b656/2084 bl:2.3125 bb:1.0646 rl:2.3110 rb:1.0509 dl:349-350 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b647/2084 bl:2.4441 bb:1.1084 rl:2.3112 rb:1.0510 dl:345-346 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b642/2084 bl:2.3559 bb:1.0906 rl:2.3112 rb:1.0510 dl:343-343 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b631/2084 bl:2.3710 bb:1.1314 rl:2.3113 rb:1.0511 dl:337-338 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b625/2084 bl:2.2683 bb:1.1149 rl:2.3113 rb:1.0512 dl:335-335 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b614/2084 bl:2.3452 bb:1.1369 rl:2.3113 rb:1.0513 dl:330-331 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b609/2084 bl:2.5212 bb:1.0996 rl:2.3116 rb:1.0514 dl:328-328 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b599/2084 bl:2.4669 bb:1.1386 rl:2.3118 rb:1.0515 dl:324-324 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b592/2084 bl:2.3637 bb:1.0864 rl:2.3119 rb:1.0515 dl:321-321 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b583/2084 bl:2.3434 bb:1.1379 rl:2.3119 rb:1.0516 dl:316-317 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b577/2084 bl:2.3284 bb:1.0934 rl:2.3119 rb:1.0517 dl:314-314 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b567/2084 bl:2.2652 bb:1.1280 rl:2.3119 rb:1.0518 dl:310-310 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b560/2084 bl:2.2528 bb:1.0533 rl:2.3118 rb:1.0518 dl:307-307 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b553/2084 bl:2.4476 bb:1.1641 rl:2.3120 rb:1.0519 dl:304-304 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b541/2084 bl:2.3494 bb:1.0701 rl:2.3120 rb:1.0519 dl:299-300 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b534/2084 bl:2.1754 bb:1.0722 rl:2.3118 
rb:1.0520 dl:297-297 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b527/2084 bl:2.4563 bb:1.0594 rl:2.3120 rb:1.0520 dl:294-294 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b517/2084 bl:2.4077 bb:1.1096 rl:2.3121 rb:1.0520 dl:289-290 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b508/2084 bl:2.3828 bb:1.1040 rl:2.3122 rb:1.0521 dl:285-286 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b500/2084 bl:2.3250 bb:1.1111 rl:2.3122 rb:1.0522 dl:282-283 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b493/2084 bl:2.4324 bb:1.1195 rl:2.3124 rb:1.0522 dl:280-280 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b485/2084 bl:2.3005 bb:1.0738 rl:2.3123 rb:1.0523 dl:277-277 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b478/2084 bl:2.4095 bb:1.1213 rl:2.3124 rb:1.0523 dl:274-274 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b465/2084 bl:2.3917 bb:1.1716 rl:2.3125 rb:1.0524 dl:269-270 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b457/2084 bl:2.5773 bb:1.1760 rl:2.3128 rb:1.0526 dl:266-267 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b450/2084 bl:2.5600 bb:1.2573 rl:2.3131 rb:1.0528 dl:264-264 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b441/2084 bl:2.5547 bb:1.1173 rl:2.3133 rb:1.0528 dl:260-261 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b433/2084 bl:2.3983 bb:1.1344 rl:2.3134 rb:1.0529 dl:257-258 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b424/2084 bl:2.5128 bb:1.1310 rl:2.3136 rb:1.0530 dl:254-255 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b416/2084 bl:2.3856 bb:1.1649 rl:2.3137 rb:1.0531 dl:251-252 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b408/2084 bl:2.2298 bb:1.0331 rl:2.3136 rb:1.0531 dl:248-249 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b402/2084 bl:2.4750 bb:1.2010 rl:2.3138 rb:1.0532 dl:246-246 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b388/2084 bl:2.5040 bb:1.2155 rl:2.3139 rb:1.0534 dl:241-242 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b380/2084 bl:2.3571 bb:1.2084 rl:2.3140 rb:1.0535 dl:238-239 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b376/2084 bl:2.4949 bb:1.1635 rl:2.3141 rb:1.0536 dl:237-237 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b367/2084 bl:2.3777 bb:1.1075 rl:2.3142 rb:1.0536 dl:234-234 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b362/2084 bl:2.4133 bb:1.1175 rl:2.3143 rb:1.0537 dl:232-232 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b353/2084 bl:2.4552 bb:1.1001 rl:2.3144 rb:1.0537 dl:229-229 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b344/2084 bl:2.3875 bb:1.1877 rl:2.3145 rb:1.0539 dl:226-226 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b339/2084 bl:2.4771 bb:1.2497 rl:2.3146 rb:1.0540 dl:224-224 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b330/2084 bl:2.4569 bb:1.2096 rl:2.3147 rb:1.0541 dl:221-221 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b321/2084 bl:2.3933 bb:1.0575 rl:2.3148 rb:1.0541 dl:218-218 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b311/2084 bl:2.5307 bb:1.2156 rl:2.3150 rb:1.0543 dl:214-215 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b307/2084 bl:2.5715 bb:1.1775 rl:2.3152 rb:1.0544 dl:213-213 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b298/2084 bl:2.4324 bb:1.1790 rl:2.3153 rb:1.0545 dl:210-210 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b289/2084 bl:2.3485 bb:1.1397 rl:2.3153 rb:1.0545 dl:207-207 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b284/2084 bl:2.4221 bb:1.1600 rl:2.3154 rb:1.0546 dl:205-205 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b272/2084 bl:2.5453 bb:1.1449 rl:2.3156 rb:1.0547 dl:201-202 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b267/2084 bl:2.5113 bb:1.2021 rl:2.3157 rb:1.0548 dl:200-200 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b260/2084 bl:2.5225 bb:1.2535 rl:2.3159 rb:1.0549 dl:197-197 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b251/2084 bl:2.4065 bb:1.1617 rl:2.3160 rb:1.0550 dl:194-194 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b241/2084 bl:2.3209 bb:1.1335 rl:2.3160 rb:1.0551 dl:190-191 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b232/2084 bl:2.4757 bb:1.1176 rl:2.3161 
rb:1.0551 dl:187-188 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b227/2084 bl:2.4695 bb:1.1383 rl:2.3162 rb:1.0552 dl:186-186 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b216/2084 bl:2.4756 bb:1.1496 rl:2.3163 rb:1.0552 dl:182-183 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b208/2084 bl:2.5494 bb:1.1850 rl:2.3165 rb:1.0553 dl:179-180 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b204/2084 bl:2.4539 bb:1.2536 rl:2.3166 rb:1.0554 dl:178-178 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b194/2084 bl:2.4632 bb:1.1920 rl:2.3167 rb:1.0555 dl:175-175 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b189/2084 bl:2.5630 bb:1.2180 rl:2.3168 rb:1.0556 dl:173-173 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b178/2084 bl:2.5393 bb:1.1575 rl:2.3170 rb:1.0557 dl:170-170 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b170/2084 bl:2.5031 bb:1.1295 rl:2.3171 rb:1.0557 dl:167-167 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b164/2084 bl:2.5100 bb:1.1816 rl:2.3172 rb:1.0558 dl:165-165 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b155/2084 bl:2.4987 bb:1.1257 rl:2.3173 rb:1.0559 dl:161-162 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b146/2084 bl:2.5757 bb:1.2193 rl:2.3175 rb:1.0560 dl:158-159 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b142/2084 bl:2.4562 bb:1.2436 rl:2.3176 rb:1.0561 dl:157-157 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b134/2084 bl:2.4673 bb:1.1662 rl:2.3177 rb:1.0561 dl:154-154 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b126/2084 bl:2.6413 bb:1.2418 rl:2.3179 rb:1.0562 dl:151-151 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b117/2084 bl:2.6267 bb:1.2368 rl:2.3180 rb:1.0563 dl:148-148 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b110/2084 bl:2.4920 bb:1.1722 rl:2.3181 rb:1.0564 dl:145-145 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b99/2084 bl:2.5322 bb:1.1545 rl:2.3182 rb:1.0564 dl:141-142 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b93/2084 bl:2.6410 bb:1.2555 rl:2.3184 rb:1.0565 dl:139-139 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b85/2084 bl:2.6243 bb:1.2038 rl:2.3186 rb:1.0566 dl:136-136 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b77/2084 bl:2.5007 bb:1.1737 rl:2.3187 rb:1.0567 dl:133-133 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b70/2084 bl:2.4971 bb:1.1661 rl:2.3187 rb:1.0567 dl:130-130 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b62/2084 bl:2.4235 bb:1.1468 rl:2.3188 rb:1.0568 dl:127-127 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b52/2084 bl:2.5106 bb:1.2286 rl:2.3189 rb:1.0568 dl:122-123 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b44/2084 bl:2.5466 bb:1.1856 rl:2.3190 rb:1.0569 dl:118-119 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b36/2084 bl:2.6587 bb:1.2352 rl:2.3191 rb:1.0570 dl:114-115 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b29/2084 bl:2.5858 bb:1.2698 rl:2.3192 rb:1.0571 dl:109-110 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b22/2084 bl:2.8576 bb:1.2777 rl:2.3195 rb:1.0571 dl:105-105 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b13/2084 bl:2.7533 bb:1.2190 rl:2.3196 rb:1.0572 dl:98-99 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b6/2084 bl:2.6737 bb:1.1695 rl:2.3197 rb:1.0572 dl:89-90 gd:1 sr:0 sf:1 tr:24/24 wt:0 +quantized_ttt_phased val_loss:2.31260166 val_bpb:1.05676598 eval_time:473217ms +total_eval_time:473.2s diff --git a/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/train_eval_seed42_corrected_token_only.log b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/train_eval_seed42_corrected_token_only.log new file mode 100644 index 0000000000..c757861221 --- /dev/null +++ b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/train_eval_seed42_corrected_token_only.log @@ -0,0 +1,813 @@ +W0502 17:48:25.420000 328439 torch/distributed/run.py:803] +W0502 17:48:25.420000 328439 torch/distributed/run.py:803] ***************************************** +W0502 
17:48:25.420000 328439 torch/distributed/run.py:803] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
+W0502 17:48:25.420000 328439 torch/distributed/run.py:803] *****************************************
+Hyperparameters:
+ adam_eps: 1e-08
+ adam_wd: 0.02
+ agree_add_boost: 0.0
+ artifact_dir: /workspace/parameter-golf-pr2014-gated-clean/records/corrected_zeroed_channels_authorhf_hardoff_evalonly_20260502/seed42
+ attn_clip_sigmas: 13.0
+ attn_out_gate_enabled: False
+ attn_out_gate_src: proj
+ awq_lite_bits: 8
+ awq_lite_enabled: True
+ awq_lite_group_size: 64
+ awq_lite_group_top_k: 1
+ beta1: 0.9
+ beta2: 0.99
+ caseops_enabled: True
+ compile_shape_warmup: True
+ compile_shape_warmup_iters: 1
+ compile_shape_warmup_loop_modes: auto
+ compressor: pergroup
+ data_dir: ./data
+ datasets_dir: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved
+ distributed: True
+ ema_decay: 0.9965
+ embed_bits: 7
+ embed_clip_sigmas: 14.0
+ embed_lr: 0.6
+ embed_wd: 0.085
+ enable_looping_at: 0.35
+ eval_include_tail: True
+ eval_seq_len: 3072
+ eval_stride: 1536
+ fused_ce_enabled: True
+ gate_window: 12
+ gated_attn_enabled: False
+ gated_attn_init_std: 0.01
+ gated_attn_quant_gate: True
+ global_ttt_batch_seqs: 32
+ global_ttt_chunk_tokens: 32768
+ global_ttt_epochs: 1
+ global_ttt_grad_clip: 1.0
+ global_ttt_lr: 0.001
+ global_ttt_momentum: 0.9
+ global_ttt_respect_doc_boundaries: True
+ global_ttt_warmup_chunks: 0
+ global_ttt_warmup_start_lr: 0.0
+ gptq_calibration_batches: 16
+ gptq_reserve_seconds: 4.0
+ grad_accum_steps: 1
+ grad_clip_norm: 0.3
+ is_main_process: True
+ iterations: 20000
+ leaky_relu_sq_slope: 0.3
+ ln_scale: True
+ local_rank: 0
+ logfile: /workspace/parameter-golf-pr2014-gated-clean/records/corrected_zeroed_channels_authorhf_hardoff_evalonly_20260502/seed42/pr2140_seed42_hardoff_evalonly.txt
+ logit_softcap: 30.0
+ loop_end: 5
+ loop_start: 3
+ lqer_asym_enabled: True
+ lqer_asym_group: 64
+ lqer_enabled: True
+ lqer_factor_bits: 4
+ lqer_gain_select: False
+ lqer_rank: 4
+ lqer_scope: all
+ lqer_top_k: 3
+ matrix_bits: 6
+ matrix_clip_sigmas: 12.85
+ matrix_lr: 0.026
+ max_wallclock_seconds: 600.0
+ midrun_cap_log_updates: False
+ midrun_cap_schedule:
+ min_lr: 0.1
+ mlp_clip_sigmas: 11.5
+ mlp_mult: 4.0
+ model_dim: 512
+ model_path: /workspace/parameter-golf-pr2014-gated-clean/records/corrected_zeroed_channels_authorhf_hardoff_evalonly_20260502/seed42/final_model.pt
+ muon_backend_steps: 5
+ muon_momentum: 0.97
+ muon_momentum_warmup_start: 0.92
+ muon_momentum_warmup_steps: 1500
+ muon_row_normalize: True
+ muon_wd: 0.095
+ ngram_hint_precompute_outside: False
+ ngram_tilt_enabled: True
+ num_heads: 8
+ num_kv_heads: 4
+ num_layers: 11
+ num_loops: 2
+ parallel_final_lane: mean
+ parallel_start_layer: 8
+ phased_ttt_num_phases: 1
+ phased_ttt_prefix_docs: 2500
+ qk_gain_init: 5.25
+ quantized_model_path: /workspace/parameter-golf-pr2014-gated-clean/records/corrected_zeroed_channels_authorhf_hardoff_evalonly_20260502/seed42/final_model.int6.ptz
+ rank: 0
+ rope_base: 10000.0
+ rope_dims: 16
+ rope_train_seq_len: 3072
+ rope_yarn: False
+ run_id: pr2140_seed42_hardoff_evalonly
+ scalar_lr: 0.02
+ seed: 42
+ seq_change_warmup_steps: 32
+ skip_gates_enabled: True
+ skylight_norm_beta2: 0.95
+ skylight_norm_ema: False
+ skylight_norm_eps: 1e-07
+ skylight_uw_floor: False
+ skylight_uw_ratio: 0.35
+ smear_gate_enabled: True
+ sparse_attn_gate_enabled: True
+ sparse_attn_gate_init_std: 0.0
+ sparse_attn_gate_scale: 0.5
+ tie_embeddings: True
+ tied_embed_init_std: 0.005
+ tied_embed_lr: 0.03
+ token_boost: 2.625
+ token_order: 16
+ token_threshold: 0.8
+ tokenizer_path: /tmp/parameter-golf-data-authorhf/tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model
+ train_batch_tokens: 786432
+ train_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_train_*.bin
+ train_log_every: 500
+ train_seq_len: 3072
+ train_seq_schedule: 1024@0.100,2048@0.700,3072@1.000
+ train_seq_schedule_mode: wallclock
+ ttt_batch_size: 24
+ ttt_beta1: 0.0
+ ttt_beta2: 0.99
+ ttt_chunk_size: 64
+ ttt_enabled: True
+ ttt_eval_batches:
+ ttt_eval_seq_len: 3072
+ ttt_grad_steps: 1
+ ttt_k_lora: True
+ ttt_local_lr_mult: 0.75
+ ttt_lora_lr: 0.0001
+ ttt_lora_rank: 80
+ ttt_mask: no_qv
+ ttt_mlp_lora: True
+ ttt_o_lora: True
+ ttt_optimizer: adam
+ ttt_q_lora: False
+ ttt_short_beta2: 0.99
+ ttt_short_chunk_size: 32
+ ttt_short_doc_len: 2000
+ ttt_short_lora_enabled: False
+ ttt_short_lora_lr: 0.0001
+ ttt_short_lora_rank: 80
+ ttt_short_score_first_enabled: True
+ ttt_short_score_first_steps: 256:16,2000:32
+ ttt_short_weight_decay: 0.5
+ ttt_train_max_doc_len: 0
+ ttt_train_min_doc_len: 0
+ ttt_v_lora: False
+ ttt_warm_start_mean_doc_len: 2000
+ ttt_warm_start_mean_enabled: False
+ ttt_warm_start_mean_momentum: 0.95
+ ttt_weight_decay: 0.5
+ val_batch_tokens: 524288
+ val_bytes_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_val_bytes_*.bin
+ val_doc_fraction: 1.0
+ val_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_val_*.bin
+ val_loss_every: 0
+ vocab_size: 8192
+ warmdown_frac: 0.85
+ warmdown_iters: 0
+ warmup_steps: 20
+ within_boost: 0.0
+ within_tau: 0.45
+ word_boost: 0.0
+ word_normalize: strip_punct_lower
+ word_order: 4
+ word_tau: 0.65
+ world_size: 8
+ xsa_last_n: 11
+train_shards: 80
+val_tokens: 47853343
+TTT_EVAL_ONLY=1 — skipping training + GPTQ, loading saved artifact for TTT eval
+ttt_lora_alpha: 144.0
+ttt_warm_start_a: True
+ttt_weight_decay: 0.5
+Deserialize: per-group lrzip decompression...
+Deserialize: decompression done in 18.2s
+Deserialize: per-group lrzip decompression...
+Deserialize: decompression done in 17.7s +ttt_lora:warming up compile (random tokens, no val data) +ttt_lora:compile warmup done (239.0s) + +beginning TTT eval timer +ngram_tilt:hints total=47853343 gated=628156 token_gate=628156 within_gate=0 word_gate=0 agree2plus=0 +ngram_tilt:precompute_outside_timer_done elapsed=16.18s total_targets=47853343 +ttt_phased: total_docs:50000 prefix_docs:2500 suffix_docs:47500 num_phases:1 boundaries:[2500] target_tokens:47853343 +ttp: b2080/2084 bl:2.2459 bb:1.0844 rl:2.2459 rb:1.0844 dl:14991-17244 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2073/2084 bl:2.3355 bb:1.1031 rl:2.2789 rb:1.0914 dl:9244-9539 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2068/2084 bl:2.0767 bb:1.0190 rl:2.2315 rb:1.0748 dl:7689-7878 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2063/2084 bl:2.2753 bb:1.0716 rl:2.2388 rb:1.0742 dl:6523-6721 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2057/2084 bl:2.3559 bb:1.0916 rl:2.2537 rb:1.0765 dl:5762-5854 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2051/2084 bl:2.3253 bb:1.1002 rl:2.2611 rb:1.0790 dl:5231-5322 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2044/2084 bl:2.1605 bb:1.0763 rl:2.2526 rb:1.0788 dl:4697-4743 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2037/2084 bl:2.3895 bb:1.1016 rl:2.2625 rb:1.0805 dl:4333-4373 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2030/2084 bl:2.3983 bb:1.0899 rl:2.2711 rb:1.0811 dl:4022-4056 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2023/2084 bl:2.3748 bb:1.0531 rl:2.2768 rb:1.0794 dl:3761-3786 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2015/2084 bl:2.3348 bb:1.0064 rl:2.2797 rb:1.0755 dl:3488-3516 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2008/2084 bl:2.2425 bb:1.0283 rl:2.2780 rb:1.0733 dl:3325-3344 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b2001/2084 bl:2.2693 bb:1.0310 rl:2.2777 rb:1.0716 dl:3150-3175 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1993/2084 bl:2.2830 bb:1.0542 rl:2.2779 rb:1.0709 dl:2992-3008 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1986/2084 bl:2.2626 bb:1.0285 rl:2.2773 rb:1.0694 dl:2856-2872 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1979/2084 bl:2.3594 bb:1.0855 rl:2.2800 rb:1.0699 dl:2753-2769 gd:0 sr:0 sf:0 tr:24/24 wt:0 +ttpp: phase:1/1 pd:2672 gd:2500 t:214.9s +tttg: c1/344 lr:0.001000 t:0.3s +tttg: c2/344 lr:0.001000 t:0.4s +tttg: c3/344 lr:0.001000 t:0.5s +tttg: c4/344 lr:0.001000 t:0.6s +tttg: c5/344 lr:0.001000 t:0.7s +tttg: c6/344 lr:0.000999 t:0.8s +tttg: c7/344 lr:0.000999 t:0.9s +tttg: c8/344 lr:0.000999 t:1.0s +tttg: c9/344 lr:0.000999 t:1.1s +tttg: c10/344 lr:0.000998 t:1.2s +tttg: c11/344 lr:0.000998 t:1.3s +tttg: c12/344 lr:0.000997 t:1.4s +tttg: c13/344 lr:0.000997 t:1.5s +tttg: c14/344 lr:0.000996 t:1.6s +tttg: c15/344 lr:0.000996 t:1.7s +tttg: c16/344 lr:0.000995 t:1.8s +tttg: c17/344 lr:0.000995 t:1.9s +tttg: c18/344 lr:0.000994 t:2.0s +tttg: c19/344 lr:0.000993 t:2.1s +tttg: c20/344 lr:0.000992 t:2.2s +tttg: c21/344 lr:0.000992 t:2.3s +tttg: c22/344 lr:0.000991 t:2.4s +tttg: c23/344 lr:0.000990 t:2.5s +tttg: c24/344 lr:0.000989 t:2.6s +tttg: c25/344 lr:0.000988 t:2.7s +tttg: c26/344 lr:0.000987 t:2.8s +tttg: c27/344 lr:0.000986 t:2.8s +tttg: c28/344 lr:0.000985 t:2.9s +tttg: c29/344 lr:0.000984 t:3.0s +tttg: c30/344 lr:0.000982 t:3.1s +tttg: c31/344 lr:0.000981 t:3.2s +tttg: c32/344 lr:0.000980 t:3.3s +tttg: c33/344 lr:0.000979 t:3.4s +tttg: c34/344 lr:0.000977 t:3.5s +tttg: c35/344 lr:0.000976 t:3.6s +tttg: c36/344 lr:0.000975 t:3.7s +tttg: c37/344 lr:0.000973 t:3.8s +tttg: c38/344 lr:0.000972 t:3.9s +tttg: c39/344 lr:0.000970 t:4.0s +tttg: c40/344 lr:0.000968 t:4.0s +tttg: c41/344 lr:0.000967 t:4.1s +tttg: c42/344 lr:0.000965 t:4.2s +tttg: c43/344 lr:0.000963 t:4.3s 
+tttg: c44/344 lr:0.000962 t:4.4s +tttg: c45/344 lr:0.000960 t:4.5s +tttg: c46/344 lr:0.000958 t:4.6s +tttg: c47/344 lr:0.000956 t:4.7s +tttg: c48/344 lr:0.000954 t:4.8s +tttg: c49/344 lr:0.000952 t:4.9s +tttg: c50/344 lr:0.000950 t:5.0s +tttg: c51/344 lr:0.000948 t:5.1s +tttg: c52/344 lr:0.000946 t:5.2s +tttg: c53/344 lr:0.000944 t:5.3s +tttg: c54/344 lr:0.000942 t:5.3s +tttg: c55/344 lr:0.000940 t:5.4s +tttg: c56/344 lr:0.000938 t:5.5s +tttg: c57/344 lr:0.000936 t:5.6s +tttg: c58/344 lr:0.000933 t:5.7s +tttg: c59/344 lr:0.000931 t:5.8s +tttg: c60/344 lr:0.000929 t:5.9s +tttg: c61/344 lr:0.000926 t:6.0s +tttg: c62/344 lr:0.000924 t:6.1s +tttg: c63/344 lr:0.000922 t:6.2s +tttg: c64/344 lr:0.000919 t:6.3s +tttg: c65/344 lr:0.000917 t:6.4s +tttg: c66/344 lr:0.000914 t:6.5s +tttg: c67/344 lr:0.000911 t:6.6s +tttg: c68/344 lr:0.000909 t:6.6s +tttg: c69/344 lr:0.000906 t:6.7s +tttg: c70/344 lr:0.000903 t:6.8s +tttg: c71/344 lr:0.000901 t:6.9s +tttg: c72/344 lr:0.000898 t:7.0s +tttg: c73/344 lr:0.000895 t:7.1s +tttg: c74/344 lr:0.000892 t:7.2s +tttg: c75/344 lr:0.000889 t:7.3s +tttg: c76/344 lr:0.000887 t:7.4s +tttg: c77/344 lr:0.000884 t:7.5s +tttg: c78/344 lr:0.000881 t:7.6s +tttg: c79/344 lr:0.000878 t:7.7s +tttg: c80/344 lr:0.000875 t:7.8s +tttg: c81/344 lr:0.000872 t:7.9s +tttg: c82/344 lr:0.000869 t:7.9s +tttg: c83/344 lr:0.000865 t:8.0s +tttg: c84/344 lr:0.000862 t:8.1s +tttg: c85/344 lr:0.000859 t:8.2s +tttg: c86/344 lr:0.000856 t:8.3s +tttg: c87/344 lr:0.000853 t:8.4s +tttg: c88/344 lr:0.000849 t:8.5s +tttg: c89/344 lr:0.000846 t:8.6s +tttg: c90/344 lr:0.000843 t:8.7s +tttg: c91/344 lr:0.000840 t:8.8s +tttg: c92/344 lr:0.000836 t:8.9s +tttg: c93/344 lr:0.000833 t:9.0s +tttg: c94/344 lr:0.000829 t:9.1s +tttg: c95/344 lr:0.000826 t:9.2s +tttg: c96/344 lr:0.000822 t:9.2s +tttg: c97/344 lr:0.000819 t:9.3s +tttg: c98/344 lr:0.000815 t:9.4s +tttg: c99/344 lr:0.000812 t:9.5s +tttg: c100/344 lr:0.000808 t:9.6s +tttg: c101/344 lr:0.000805 t:9.7s +tttg: c102/344 lr:0.000801 t:9.8s +tttg: c103/344 lr:0.000797 t:9.9s +tttg: c104/344 lr:0.000794 t:10.0s +tttg: c105/344 lr:0.000790 t:10.1s +tttg: c106/344 lr:0.000786 t:10.2s +tttg: c107/344 lr:0.000782 t:10.3s +tttg: c108/344 lr:0.000778 t:10.4s +tttg: c109/344 lr:0.000775 t:10.5s +tttg: c110/344 lr:0.000771 t:10.6s +tttg: c111/344 lr:0.000767 t:10.7s +tttg: c112/344 lr:0.000763 t:10.7s +tttg: c113/344 lr:0.000759 t:10.8s +tttg: c114/344 lr:0.000755 t:10.9s +tttg: c115/344 lr:0.000751 t:11.0s +tttg: c116/344 lr:0.000747 t:11.1s +tttg: c117/344 lr:0.000743 t:11.2s +tttg: c118/344 lr:0.000739 t:11.3s +tttg: c119/344 lr:0.000735 t:11.4s +tttg: c120/344 lr:0.000731 t:11.5s +tttg: c121/344 lr:0.000727 t:11.6s +tttg: c122/344 lr:0.000723 t:11.7s +tttg: c123/344 lr:0.000719 t:11.8s +tttg: c124/344 lr:0.000715 t:11.9s +tttg: c125/344 lr:0.000711 t:11.9s +tttg: c126/344 lr:0.000707 t:12.0s +tttg: c127/344 lr:0.000702 t:12.1s +tttg: c128/344 lr:0.000698 t:12.2s +tttg: c129/344 lr:0.000694 t:12.3s +tttg: c130/344 lr:0.000690 t:12.4s +tttg: c131/344 lr:0.000686 t:12.5s +tttg: c132/344 lr:0.000681 t:12.6s +tttg: c133/344 lr:0.000677 t:12.7s +tttg: c134/344 lr:0.000673 t:12.8s +tttg: c135/344 lr:0.000668 t:12.9s +tttg: c136/344 lr:0.000664 t:13.0s +tttg: c137/344 lr:0.000660 t:13.1s +tttg: c138/344 lr:0.000655 t:13.1s +tttg: c139/344 lr:0.000651 t:13.2s +tttg: c140/344 lr:0.000647 t:13.3s +tttg: c141/344 lr:0.000642 t:13.4s +tttg: c142/344 lr:0.000638 t:13.5s +tttg: c143/344 lr:0.000633 t:13.6s +tttg: c144/344 lr:0.000629 t:13.7s +tttg: c145/344 lr:0.000625 
t:13.8s +tttg: c146/344 lr:0.000620 t:13.9s +tttg: c147/344 lr:0.000616 t:14.0s +tttg: c148/344 lr:0.000611 t:14.1s +tttg: c149/344 lr:0.000607 t:14.2s +tttg: c150/344 lr:0.000602 t:14.3s +tttg: c151/344 lr:0.000598 t:14.4s +tttg: c152/344 lr:0.000593 t:14.4s +tttg: c153/344 lr:0.000589 t:14.5s +tttg: c154/344 lr:0.000584 t:14.6s +tttg: c155/344 lr:0.000580 t:14.7s +tttg: c156/344 lr:0.000575 t:14.8s +tttg: c157/344 lr:0.000571 t:14.9s +tttg: c158/344 lr:0.000566 t:15.0s +tttg: c159/344 lr:0.000562 t:15.1s +tttg: c160/344 lr:0.000557 t:15.2s +tttg: c161/344 lr:0.000553 t:15.3s +tttg: c162/344 lr:0.000548 t:15.4s +tttg: c163/344 lr:0.000543 t:15.6s +tttg: c164/344 lr:0.000539 t:15.7s +tttg: c165/344 lr:0.000534 t:15.8s +tttg: c166/344 lr:0.000530 t:15.9s +tttg: c167/344 lr:0.000525 t:16.0s +tttg: c168/344 lr:0.000521 t:16.1s +tttg: c169/344 lr:0.000516 t:16.2s +tttg: c170/344 lr:0.000511 t:16.3s +tttg: c171/344 lr:0.000507 t:16.4s +tttg: c172/344 lr:0.000502 t:16.5s +tttg: c173/344 lr:0.000498 t:16.6s +tttg: c174/344 lr:0.000493 t:16.8s +tttg: c175/344 lr:0.000489 t:16.9s +tttg: c176/344 lr:0.000484 t:17.0s +tttg: c177/344 lr:0.000479 t:17.1s +tttg: c178/344 lr:0.000475 t:17.2s +tttg: c179/344 lr:0.000470 t:17.3s +tttg: c180/344 lr:0.000466 t:17.4s +tttg: c181/344 lr:0.000461 t:17.5s +tttg: c182/344 lr:0.000457 t:17.6s +tttg: c183/344 lr:0.000452 t:17.7s +tttg: c184/344 lr:0.000447 t:17.8s +tttg: c185/344 lr:0.000443 t:17.9s +tttg: c186/344 lr:0.000438 t:18.0s +tttg: c187/344 lr:0.000434 t:18.2s +tttg: c188/344 lr:0.000429 t:18.3s +tttg: c189/344 lr:0.000425 t:18.4s +tttg: c190/344 lr:0.000420 t:18.5s +tttg: c191/344 lr:0.000416 t:18.6s +tttg: c192/344 lr:0.000411 t:18.7s +tttg: c193/344 lr:0.000407 t:18.8s +tttg: c194/344 lr:0.000402 t:18.9s +tttg: c195/344 lr:0.000398 t:19.0s +tttg: c196/344 lr:0.000393 t:19.1s +tttg: c197/344 lr:0.000389 t:19.2s +tttg: c198/344 lr:0.000384 t:19.3s +tttg: c199/344 lr:0.000380 t:19.4s +tttg: c200/344 lr:0.000375 t:19.6s +tttg: c201/344 lr:0.000371 t:19.7s +tttg: c202/344 lr:0.000367 t:19.8s +tttg: c203/344 lr:0.000362 t:19.9s +tttg: c204/344 lr:0.000358 t:20.0s +tttg: c205/344 lr:0.000353 t:20.1s +tttg: c206/344 lr:0.000349 t:20.2s +tttg: c207/344 lr:0.000345 t:20.3s +tttg: c208/344 lr:0.000340 t:20.4s +tttg: c209/344 lr:0.000336 t:20.5s +tttg: c210/344 lr:0.000332 t:20.6s +tttg: c211/344 lr:0.000327 t:20.7s +tttg: c212/344 lr:0.000323 t:20.8s +tttg: c213/344 lr:0.000319 t:21.0s +tttg: c214/344 lr:0.000314 t:21.1s +tttg: c215/344 lr:0.000310 t:21.2s +tttg: c216/344 lr:0.000306 t:21.3s +tttg: c217/344 lr:0.000302 t:21.4s +tttg: c218/344 lr:0.000298 t:21.5s +tttg: c219/344 lr:0.000293 t:21.6s +tttg: c220/344 lr:0.000289 t:21.7s +tttg: c221/344 lr:0.000285 t:21.8s +tttg: c222/344 lr:0.000281 t:21.9s +tttg: c223/344 lr:0.000277 t:22.0s +tttg: c224/344 lr:0.000273 t:22.1s +tttg: c225/344 lr:0.000269 t:22.2s +tttg: c226/344 lr:0.000265 t:22.4s +tttg: c227/344 lr:0.000261 t:22.5s +tttg: c228/344 lr:0.000257 t:22.6s +tttg: c229/344 lr:0.000253 t:22.7s +tttg: c230/344 lr:0.000249 t:22.8s +tttg: c231/344 lr:0.000245 t:22.9s +tttg: c232/344 lr:0.000241 t:23.0s +tttg: c233/344 lr:0.000237 t:23.1s +tttg: c234/344 lr:0.000233 t:23.2s +tttg: c235/344 lr:0.000229 t:23.3s +tttg: c236/344 lr:0.000225 t:23.4s +tttg: c237/344 lr:0.000222 t:23.5s +tttg: c238/344 lr:0.000218 t:23.6s +tttg: c239/344 lr:0.000214 t:23.7s +tttg: c240/344 lr:0.000210 t:23.9s +tttg: c241/344 lr:0.000206 t:24.0s +tttg: c242/344 lr:0.000203 t:24.1s +tttg: c243/344 lr:0.000199 t:24.2s +tttg: c244/344 
lr:0.000195 t:24.3s +tttg: c245/344 lr:0.000192 t:24.4s +tttg: c246/344 lr:0.000188 t:24.5s +tttg: c247/344 lr:0.000185 t:24.6s +tttg: c248/344 lr:0.000181 t:24.7s +tttg: c249/344 lr:0.000178 t:24.8s +tttg: c250/344 lr:0.000174 t:24.9s +tttg: c251/344 lr:0.000171 t:25.0s +tttg: c252/344 lr:0.000167 t:25.1s +tttg: c253/344 lr:0.000164 t:25.3s +tttg: c254/344 lr:0.000160 t:25.4s +tttg: c255/344 lr:0.000157 t:25.5s +tttg: c256/344 lr:0.000154 t:25.6s +tttg: c257/344 lr:0.000151 t:25.7s +tttg: c258/344 lr:0.000147 t:25.8s +tttg: c259/344 lr:0.000144 t:25.9s +tttg: c260/344 lr:0.000141 t:26.0s +tttg: c261/344 lr:0.000138 t:26.1s +tttg: c262/344 lr:0.000135 t:26.2s +tttg: c263/344 lr:0.000131 t:26.3s +tttg: c264/344 lr:0.000128 t:26.4s +tttg: c265/344 lr:0.000125 t:26.6s +tttg: c266/344 lr:0.000122 t:26.7s +tttg: c267/344 lr:0.000119 t:26.8s +tttg: c268/344 lr:0.000116 t:26.9s +tttg: c269/344 lr:0.000113 t:27.0s +tttg: c270/344 lr:0.000111 t:27.1s +tttg: c271/344 lr:0.000108 t:27.2s +tttg: c272/344 lr:0.000105 t:27.3s +tttg: c273/344 lr:0.000102 t:27.4s +tttg: c274/344 lr:0.000099 t:27.5s +tttg: c275/344 lr:0.000097 t:27.6s +tttg: c276/344 lr:0.000094 t:27.7s +tttg: c277/344 lr:0.000091 t:27.8s +tttg: c278/344 lr:0.000089 t:28.0s +tttg: c279/344 lr:0.000086 t:28.1s +tttg: c280/344 lr:0.000083 t:28.2s +tttg: c281/344 lr:0.000081 t:28.3s +tttg: c282/344 lr:0.000078 t:28.4s +tttg: c283/344 lr:0.000076 t:28.5s +tttg: c284/344 lr:0.000074 t:28.6s +tttg: c285/344 lr:0.000071 t:28.7s +tttg: c286/344 lr:0.000069 t:28.8s +tttg: c287/344 lr:0.000067 t:28.9s +tttg: c288/344 lr:0.000064 t:29.0s +tttg: c289/344 lr:0.000062 t:29.2s +tttg: c290/344 lr:0.000060 t:29.3s +tttg: c291/344 lr:0.000058 t:29.4s +tttg: c292/344 lr:0.000056 t:29.5s +tttg: c293/344 lr:0.000054 t:29.6s +tttg: c294/344 lr:0.000052 t:29.7s +tttg: c295/344 lr:0.000050 t:29.8s +tttg: c296/344 lr:0.000048 t:29.9s +tttg: c297/344 lr:0.000046 t:30.0s +tttg: c298/344 lr:0.000044 t:30.2s +tttg: c299/344 lr:0.000042 t:30.3s +tttg: c300/344 lr:0.000040 t:30.4s +tttg: c301/344 lr:0.000038 t:30.5s +tttg: c302/344 lr:0.000037 t:30.6s +tttg: c303/344 lr:0.000035 t:30.7s +tttg: c304/344 lr:0.000033 t:30.8s +tttg: c305/344 lr:0.000032 t:30.9s +tttg: c306/344 lr:0.000030 t:31.0s +tttg: c307/344 lr:0.000028 t:31.1s +tttg: c308/344 lr:0.000027 t:31.2s +tttg: c309/344 lr:0.000025 t:31.3s +tttg: c310/344 lr:0.000024 t:31.5s +tttg: c311/344 lr:0.000023 t:31.6s +tttg: c312/344 lr:0.000021 t:31.7s +tttg: c313/344 lr:0.000020 t:31.8s +tttg: c314/344 lr:0.000019 t:31.9s +tttg: c315/344 lr:0.000018 t:32.0s +tttg: c316/344 lr:0.000016 t:32.1s +tttg: c317/344 lr:0.000015 t:32.2s +tttg: c318/344 lr:0.000014 t:32.3s +tttg: c319/344 lr:0.000013 t:32.4s +tttg: c320/344 lr:0.000012 t:32.5s +tttg: c321/344 lr:0.000011 t:32.6s +tttg: c322/344 lr:0.000010 t:32.8s +tttg: c323/344 lr:0.000009 t:32.9s +tttg: c324/344 lr:0.000008 t:33.0s +tttg: c325/344 lr:0.000008 t:33.1s +tttg: c326/344 lr:0.000007 t:33.2s +tttg: c327/344 lr:0.000006 t:33.3s +tttg: c328/344 lr:0.000005 t:33.4s +tttg: c329/344 lr:0.000005 t:33.5s +tttg: c330/344 lr:0.000004 t:33.6s +tttg: c331/344 lr:0.000004 t:33.7s +tttg: c332/344 lr:0.000003 t:33.8s +tttg: c333/344 lr:0.000003 t:33.9s +tttg: c334/344 lr:0.000002 t:34.0s +tttg: c335/344 lr:0.000002 t:34.2s +tttg: c336/344 lr:0.000001 t:34.3s +tttg: c337/344 lr:0.000001 t:34.4s +tttg: c338/344 lr:0.000001 t:34.5s +tttg: c339/344 lr:0.000001 t:34.6s +tttg: c340/344 lr:0.000000 t:34.7s +tttg: c341/344 lr:0.000000 t:34.8s +tttg: c342/344 lr:0.000000 t:34.9s +tttg: 
c343/344 lr:0.000000 t:35.0s +ttpr: phase:1/1 t:250.5s +ttp: b1965/2084 bl:2.2710 bb:1.0055 rl:2.2797 rb:1.0680 dl:2565-2577 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1960/2084 bl:2.5035 bb:1.1249 rl:2.2859 rb:1.0696 dl:2515-2526 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1954/2084 bl:2.3891 bb:1.0568 rl:2.2886 rb:1.0692 dl:2454-2459 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1948/2084 bl:2.4219 bb:1.0829 rl:2.2919 rb:1.0696 dl:2382-2397 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1941/2084 bl:2.3018 bb:1.0499 rl:2.2921 rb:1.0691 dl:2314-2323 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1935/2084 bl:2.2651 bb:1.0272 rl:2.2915 rb:1.0681 dl:2260-2270 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1929/2084 bl:2.2716 bb:1.0207 rl:2.2911 rb:1.0671 dl:2203-2216 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1923/2084 bl:2.3666 bb:1.0783 rl:2.2926 rb:1.0673 dl:2160-2164 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1917/2084 bl:2.3206 bb:1.0563 rl:2.2932 rb:1.0671 dl:2117-2122 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1911/2084 bl:2.2051 bb:0.9663 rl:2.2915 rb:1.0651 dl:2072-2081 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1905/2084 bl:2.4127 bb:1.0288 rl:2.2937 rb:1.0644 dl:2036-2041 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1899/2084 bl:2.4001 bb:1.0550 rl:2.2956 rb:1.0642 dl:1997-2004 gd:1 sr:0 sf:0 tr:24/24 wt:0 +ttp: b1893/2084 bl:2.1802 bb:1.0249 rl:2.2936 rb:1.0635 dl:1958-1963 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1888/2084 bl:2.2561 bb:1.0457 rl:2.2930 rb:1.0632 dl:1931-1937 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1881/2084 bl:2.3463 bb:1.0853 rl:2.2939 rb:1.0636 dl:1898-1902 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1875/2084 bl:2.3301 bb:1.0205 rl:2.2944 rb:1.0629 dl:1868-1873 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1869/2084 bl:2.2875 bb:1.0221 rl:2.2943 rb:1.0623 dl:1841-1846 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1863/2084 bl:2.4548 bb:1.0650 rl:2.2966 rb:1.0623 dl:1817-1820 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1855/2084 bl:2.3843 bb:1.0637 rl:2.2979 rb:1.0623 dl:1781-1785 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1849/2084 bl:2.2637 bb:1.0330 rl:2.2974 rb:1.0619 dl:1758-1762 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1843/2084 bl:2.3409 bb:1.0742 rl:2.2980 rb:1.0621 dl:1734-1738 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1837/2084 bl:2.4093 bb:1.0859 rl:2.2994 rb:1.0624 dl:1710-1713 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1831/2084 bl:2.3017 bb:1.0453 rl:2.2995 rb:1.0622 dl:1688-1691 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1824/2084 bl:2.4317 bb:1.0607 rl:2.3011 rb:1.0622 dl:1662-1665 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1818/2084 bl:2.3016 bb:1.0395 rl:2.3011 rb:1.0619 dl:1641-1644 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1812/2084 bl:2.3379 bb:1.0286 rl:2.3015 rb:1.0615 dl:1620-1623 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1805/2084 bl:2.2454 bb:1.0076 rl:2.3009 rb:1.0608 dl:1598-1601 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1799/2084 bl:2.2875 bb:1.0238 rl:2.3007 rb:1.0604 dl:1581-1583 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1793/2084 bl:2.4394 bb:1.0656 rl:2.3022 rb:1.0605 dl:1562-1565 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1787/2084 bl:2.3173 bb:1.0066 rl:2.3024 rb:1.0599 dl:1542-1546 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1781/2084 bl:2.3771 bb:1.0788 rl:2.3032 rb:1.0601 dl:1525-1529 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1774/2084 bl:2.2788 bb:1.0117 rl:2.3029 rb:1.0595 dl:1506-1509 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1769/2084 bl:2.2750 bb:1.0039 rl:2.3027 rb:1.0590 dl:1493-1496 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1762/2084 bl:2.3681 bb:1.0627 rl:2.3033 rb:1.0590 dl:1476-1479 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1755/2084 bl:2.3153 bb:1.0463 rl:2.3034 rb:1.0589 dl:1457-1459 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1741/2084 bl:2.2324 
bb:1.0262 rl:2.3028 rb:1.0586 dl:1422-1425 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1734/2084 bl:2.2859 bb:1.0090 rl:2.3026 rb:1.0581 dl:1405-1407 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1725/2084 bl:2.2800 bb:1.0397 rl:2.3024 rb:1.0579 dl:1384-1387 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1717/2084 bl:2.4253 bb:1.0423 rl:2.3035 rb:1.0578 dl:1364-1366 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1708/2084 bl:2.2890 bb:1.0032 rl:2.3033 rb:1.0573 dl:1344-1346 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1699/2084 bl:2.3287 bb:1.0769 rl:2.3036 rb:1.0575 dl:1324-1326 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1689/2084 bl:2.3164 bb:1.0621 rl:2.3037 rb:1.0575 dl:1302-1305 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1681/2084 bl:2.2767 bb:0.9930 rl:2.3034 rb:1.0570 dl:1285-1287 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1673/2084 bl:2.3143 bb:1.0271 rl:2.3035 rb:1.0567 dl:1271-1273 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1663/2084 bl:2.2389 bb:1.0518 rl:2.3030 rb:1.0567 dl:1250-1252 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1655/2084 bl:2.2671 bb:1.0575 rl:2.3028 rb:1.0567 dl:1232-1235 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1646/2084 bl:2.3049 bb:0.9908 rl:2.3028 rb:1.0562 dl:1216-1218 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1637/2084 bl:2.3108 bb:1.0332 rl:2.3028 rb:1.0560 dl:1197-1199 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1629/2084 bl:2.2013 bb:0.9877 rl:2.3021 rb:1.0556 dl:1184-1185 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1620/2084 bl:2.2745 bb:1.0649 rl:2.3020 rb:1.0556 dl:1169-1170 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1611/2084 bl:2.2573 bb:1.0166 rl:2.3017 rb:1.0554 dl:1153-1155 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1603/2084 bl:2.2153 bb:0.9708 rl:2.3011 rb:1.0548 dl:1140-1142 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1596/2084 bl:2.2604 bb:1.0644 rl:2.3008 rb:1.0548 dl:1130-1131 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1586/2084 bl:2.4318 bb:1.0788 rl:2.3017 rb:1.0550 dl:1112-1113 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1576/2084 bl:2.4135 bb:1.0600 rl:2.3023 rb:1.0550 dl:1097-1098 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1568/2084 bl:2.3858 bb:1.0401 rl:2.3029 rb:1.0549 dl:1084-1086 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1560/2084 bl:2.3572 bb:1.0777 rl:2.3032 rb:1.0551 dl:1071-1073 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1551/2084 bl:2.2960 bb:1.0040 rl:2.3031 rb:1.0547 dl:1057-1059 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1543/2084 bl:2.4220 bb:1.0423 rl:2.3038 rb:1.0547 dl:1045-1046 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1533/2084 bl:2.3212 bb:1.0339 rl:2.3039 rb:1.0546 dl:1031-1032 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1525/2084 bl:2.2841 bb:1.0133 rl:2.3038 rb:1.0543 dl:1019-1021 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1517/2084 bl:2.3416 bb:1.0995 rl:2.3040 rb:1.0546 dl:1009-1010 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1509/2084 bl:2.3094 bb:1.0148 rl:2.3040 rb:1.0543 dl:998-1000 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1501/2084 bl:2.3185 bb:1.0548 rl:2.3041 rb:1.0543 dl:988-989 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1491/2084 bl:2.3446 bb:1.0911 rl:2.3043 rb:1.0545 dl:975-976 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1483/2084 bl:2.4686 bb:1.1218 rl:2.3052 rb:1.0549 dl:964-965 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1475/2084 bl:2.3152 bb:1.0373 rl:2.3052 rb:1.0548 dl:955-956 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1467/2084 bl:2.2074 bb:1.0066 rl:2.3047 rb:1.0545 dl:946-947 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1457/2084 bl:2.2291 bb:1.0208 rl:2.3044 rb:1.0544 dl:934-935 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1449/2084 bl:2.4517 bb:1.1254 rl:2.3051 rb:1.0547 dl:924-926 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1440/2084 bl:2.3938 bb:1.0644 rl:2.3055 rb:1.0548 dl:914-915 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1432/2084 bl:2.4166 
bb:1.0469 rl:2.3060 rb:1.0547 dl:904-905 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1422/2084 bl:2.3931 bb:1.0584 rl:2.3064 rb:1.0547 dl:892-893 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1414/2084 bl:2.2872 bb:1.0386 rl:2.3063 rb:1.0547 dl:884-885 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1405/2084 bl:2.3914 bb:1.0111 rl:2.3067 rb:1.0545 dl:874-875 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1397/2084 bl:2.3092 bb:1.0301 rl:2.3067 rb:1.0543 dl:865-866 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1387/2084 bl:2.2887 bb:1.0699 rl:2.3066 rb:1.0544 dl:854-855 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1379/2084 bl:2.3654 bb:1.0512 rl:2.3069 rb:1.0544 dl:845-846 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1370/2084 bl:2.2615 bb:1.0619 rl:2.3067 rb:1.0544 dl:836-837 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1362/2084 bl:2.3896 bb:1.0452 rl:2.3070 rb:1.0544 dl:827-828 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1353/2084 bl:2.4084 bb:1.0425 rl:2.3074 rb:1.0543 dl:817-818 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1345/2084 bl:2.3481 bb:1.0876 rl:2.3076 rb:1.0545 dl:809-810 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1336/2084 bl:2.5067 bb:1.0818 rl:2.3084 rb:1.0546 dl:801-802 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1327/2084 bl:2.3369 bb:1.1006 rl:2.3085 rb:1.0548 dl:791-792 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1317/2084 bl:2.3377 bb:1.0290 rl:2.3086 rb:1.0547 dl:782-783 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1309/2084 bl:2.3742 bb:1.0407 rl:2.3089 rb:1.0546 dl:774-775 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1300/2084 bl:2.3557 bb:1.0155 rl:2.3090 rb:1.0544 dl:767-768 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1293/2084 bl:2.3408 bb:1.0457 rl:2.3092 rb:1.0544 dl:760-761 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1283/2084 bl:2.3156 bb:1.0401 rl:2.3092 rb:1.0544 dl:751-752 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1274/2084 bl:2.4701 bb:1.0925 rl:2.3097 rb:1.0545 dl:743-744 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1266/2084 bl:2.3648 bb:1.0408 rl:2.3099 rb:1.0545 dl:736-737 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1257/2084 bl:2.3208 bb:1.0424 rl:2.3100 rb:1.0544 dl:728-729 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1247/2084 bl:2.3095 bb:1.0101 rl:2.3100 rb:1.0543 dl:720-721 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1240/2084 bl:2.3828 bb:1.0361 rl:2.3102 rb:1.0542 dl:714-714 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1231/2084 bl:2.2677 bb:0.9998 rl:2.3101 rb:1.0540 dl:707-707 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1223/2084 bl:2.3203 bb:1.0641 rl:2.3101 rb:1.0540 dl:700-701 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1216/2084 bl:2.3037 bb:1.0045 rl:2.3101 rb:1.0539 dl:694-695 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1207/2084 bl:2.3912 bb:1.0900 rl:2.3104 rb:1.0540 dl:688-689 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1200/2084 bl:2.3693 bb:1.0163 rl:2.3105 rb:1.0539 dl:682-682 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1191/2084 bl:2.3225 bb:1.0732 rl:2.3106 rb:1.0539 dl:675-676 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1182/2084 bl:2.2337 bb:1.0348 rl:2.3103 rb:1.0539 dl:668-668 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1174/2084 bl:2.2292 bb:0.9971 rl:2.3101 rb:1.0537 dl:662-662 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1166/2084 bl:2.3047 bb:1.0506 rl:2.3101 rb:1.0537 dl:655-655 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1157/2084 bl:2.3618 bb:1.0382 rl:2.3102 rb:1.0536 dl:648-648 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1148/2084 bl:2.3189 bb:1.0293 rl:2.3103 rb:1.0535 dl:642-643 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1141/2084 bl:2.2661 bb:1.0339 rl:2.3101 rb:1.0535 dl:638-638 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1131/2084 bl:2.3835 bb:1.0268 rl:2.3103 rb:1.0534 dl:630-630 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1122/2084 bl:2.3138 bb:1.0014 rl:2.3103 rb:1.0533 dl:623-624 gd:1 sr:0 sf:1 
tr:24/24 wt:0 +ttp: b1114/2084 bl:2.3469 bb:1.0597 rl:2.3104 rb:1.0533 dl:617-618 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1105/2084 bl:2.2289 bb:1.0601 rl:2.3102 rb:1.0533 dl:611-612 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1097/2084 bl:2.3681 bb:1.0430 rl:2.3104 rb:1.0533 dl:605-606 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1090/2084 bl:2.3739 bb:1.0734 rl:2.3106 rb:1.0533 dl:599-600 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1082/2084 bl:2.2493 bb:1.0408 rl:2.3104 rb:1.0533 dl:594-595 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1072/2084 bl:2.2445 bb:0.9668 rl:2.3102 rb:1.0530 dl:587-588 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1063/2084 bl:2.3239 bb:1.0498 rl:2.3103 rb:1.0530 dl:581-582 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1054/2084 bl:2.3022 bb:1.0653 rl:2.3102 rb:1.0531 dl:575-576 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1045/2084 bl:2.2407 bb:1.0538 rl:2.3101 rb:1.0531 dl:569-570 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1039/2084 bl:2.3785 bb:1.0843 rl:2.3102 rb:1.0531 dl:565-565 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1030/2084 bl:2.3907 bb:1.0719 rl:2.3104 rb:1.0532 dl:558-559 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1020/2084 bl:2.2636 bb:1.0239 rl:2.3103 rb:1.0531 dl:552-553 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1013/2084 bl:2.3036 bb:1.0845 rl:2.3103 rb:1.0532 dl:548-548 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b1005/2084 bl:2.2378 bb:0.9965 rl:2.3101 rb:1.0531 dl:543-543 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b996/2084 bl:2.2599 bb:1.0517 rl:2.3100 rb:1.0531 dl:537-537 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b985/2084 bl:2.5245 bb:1.0722 rl:2.3105 rb:1.0531 dl:530-531 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b977/2084 bl:2.3135 bb:1.0719 rl:2.3105 rb:1.0531 dl:525-526 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b971/2084 bl:2.3710 bb:1.0667 rl:2.3106 rb:1.0532 dl:522-522 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b962/2084 bl:2.2802 bb:1.0893 rl:2.3106 rb:1.0533 dl:516-516 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b953/2084 bl:2.3999 bb:1.0793 rl:2.3108 rb:1.0533 dl:510-510 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b945/2084 bl:2.3970 bb:1.1125 rl:2.3110 rb:1.0534 dl:504-505 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b939/2084 bl:2.2916 bb:1.0193 rl:2.3109 rb:1.0534 dl:501-501 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b930/2084 bl:2.3094 bb:1.0064 rl:2.3109 rb:1.0533 dl:495-496 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b927/2084 bl:2.3068 bb:1.0368 rl:2.3109 rb:1.0532 dl:494-494 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b917/2084 bl:2.2724 bb:1.0574 rl:2.3108 rb:1.0532 dl:488-489 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b910/2084 bl:2.4200 bb:1.1590 rl:2.3110 rb:1.0534 dl:485-485 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b900/2084 bl:2.3365 bb:1.0693 rl:2.3111 rb:1.0535 dl:479-480 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b895/2084 bl:2.4340 bb:1.1357 rl:2.3113 rb:1.0536 dl:476-477 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b887/2084 bl:2.4492 bb:1.0983 rl:2.3116 rb:1.0537 dl:471-472 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b880/2084 bl:2.2623 bb:1.0782 rl:2.3115 rb:1.0538 dl:467-468 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b876/2084 bl:2.1686 bb:1.0658 rl:2.3112 rb:1.0538 dl:465-465 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b867/2084 bl:2.3412 bb:1.0805 rl:2.3113 rb:1.0538 dl:460-461 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b860/2084 bl:2.2718 bb:1.0326 rl:2.3112 rb:1.0538 dl:457-457 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b852/2084 bl:2.2682 bb:1.0493 rl:2.3111 rb:1.0538 dl:451-452 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b846/2084 bl:2.2540 bb:1.0206 rl:2.3110 rb:1.0537 dl:448-448 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b837/2084 bl:2.1371 bb:0.9936 rl:2.3107 rb:1.0536 dl:443-443 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b829/2084 bl:2.3888 bb:1.1344 rl:2.3109 rb:1.0538 dl:439-439 
gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b820/2084 bl:2.5313 bb:1.0935 rl:2.3113 rb:1.0538 dl:434-434 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b814/2084 bl:2.3105 bb:1.0408 rl:2.3113 rb:1.0538 dl:430-430 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b804/2084 bl:2.3023 bb:1.1138 rl:2.3112 rb:1.0539 dl:424-425 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b799/2084 bl:2.3568 bb:1.0426 rl:2.3113 rb:1.0539 dl:421-422 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b791/2084 bl:2.4200 bb:1.0910 rl:2.3115 rb:1.0540 dl:417-418 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b784/2084 bl:2.3156 bb:1.0825 rl:2.3115 rb:1.0540 dl:413-414 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b776/2084 bl:2.3230 bb:1.0861 rl:2.3115 rb:1.0541 dl:408-409 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b768/2084 bl:2.1173 bb:0.9887 rl:2.3112 rb:1.0540 dl:404-405 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b761/2084 bl:2.4518 bb:1.0922 rl:2.3114 rb:1.0540 dl:400-401 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b753/2084 bl:2.3831 bb:1.1048 rl:2.3116 rb:1.0541 dl:396-397 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b750/2084 bl:2.4058 bb:1.0801 rl:2.3117 rb:1.0541 dl:395-395 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b741/2084 bl:2.3198 bb:1.0522 rl:2.3117 rb:1.0541 dl:390-391 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b735/2084 bl:2.3823 bb:1.1710 rl:2.3118 rb:1.0543 dl:387-387 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b729/2084 bl:2.3468 bb:1.0735 rl:2.3119 rb:1.0543 dl:384-384 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b720/2084 bl:2.2900 bb:1.0882 rl:2.3119 rb:1.0544 dl:380-380 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b712/2084 bl:2.4527 bb:1.1081 rl:2.3121 rb:1.0545 dl:376-376 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b702/2084 bl:2.3351 bb:1.0648 rl:2.3121 rb:1.0545 dl:371-372 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b697/2084 bl:2.4863 bb:1.1655 rl:2.3124 rb:1.0546 dl:369-369 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b688/2084 bl:2.4100 bb:1.0507 rl:2.3125 rb:1.0546 dl:364-365 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b680/2084 bl:2.3671 bb:1.1215 rl:2.3126 rb:1.0547 dl:360-361 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b675/2084 bl:2.3028 bb:1.0658 rl:2.3126 rb:1.0547 dl:358-358 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b664/2084 bl:2.4371 bb:1.0990 rl:2.3127 rb:1.0548 dl:353-354 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b659/2084 bl:2.4337 bb:1.1626 rl:2.3129 rb:1.0550 dl:351-351 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b651/2084 bl:2.4622 bb:1.2144 rl:2.3131 rb:1.0552 dl:347-347 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b643/2084 bl:2.3586 bb:1.1011 rl:2.3132 rb:1.0552 dl:343-344 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b638/2084 bl:2.3681 bb:1.0684 rl:2.3133 rb:1.0552 dl:341-341 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b626/2084 bl:2.2308 bb:1.0412 rl:2.3132 rb:1.0552 dl:335-336 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b619/2084 bl:2.2061 bb:1.0652 rl:2.3130 rb:1.0552 dl:332-333 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b613/2084 bl:2.3823 bb:1.1175 rl:2.3131 rb:1.0553 dl:330-330 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b603/2084 bl:2.4311 bb:1.0885 rl:2.3133 rb:1.0554 dl:325-326 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b598/2084 bl:2.2958 bb:1.0473 rl:2.3132 rb:1.0553 dl:323-323 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b590/2084 bl:2.3499 bb:1.1003 rl:2.3133 rb:1.0554 dl:320-320 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b586/2084 bl:2.3448 bb:1.0955 rl:2.3133 rb:1.0555 dl:318-318 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b577/2084 bl:2.3288 bb:1.0936 rl:2.3133 rb:1.0555 dl:314-314 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b566/2084 bl:2.2864 bb:1.1007 rl:2.3133 rb:1.0556 dl:309-310 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b560/2084 bl:2.2549 bb:1.0543 rl:2.3132 rb:1.0555 dl:307-307 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b553/2084 bl:2.4354 bb:1.1583 rl:2.3134 rb:1.0557 dl:304-304 
gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b545/2084 bl:2.3899 bb:1.1666 rl:2.3135 rb:1.0558 dl:301-301 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b533/2084 bl:2.3576 bb:1.1295 rl:2.3135 rb:1.0559 dl:296-297 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b527/2084 bl:2.4334 bb:1.0495 rl:2.3137 rb:1.0559 dl:294-294 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b517/2084 bl:2.4236 bb:1.1170 rl:2.3138 rb:1.0559 dl:289-290 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b508/2084 bl:2.3801 bb:1.1028 rl:2.3139 rb:1.0560 dl:285-286 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b501/2084 bl:2.3033 bb:1.0704 rl:2.3138 rb:1.0560 dl:283-283 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b490/2084 bl:2.3936 bb:1.0800 rl:2.3139 rb:1.0560 dl:278-279 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b481/2084 bl:2.2717 bb:1.1470 rl:2.3139 rb:1.0561 dl:275-276 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b474/2084 bl:2.2738 bb:1.0261 rl:2.3138 rb:1.0561 dl:273-273 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b467/2084 bl:2.2800 bb:1.1048 rl:2.3138 rb:1.0561 dl:270-270 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b459/2084 bl:2.4796 bb:1.1068 rl:2.3140 rb:1.0562 dl:267-267 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b446/2084 bl:2.5601 bb:1.2357 rl:2.3142 rb:1.0564 dl:262-263 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b439/2084 bl:2.2867 bb:1.0659 rl:2.3142 rb:1.0564 dl:260-260 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b432/2084 bl:2.3226 bb:1.0583 rl:2.3142 rb:1.0564 dl:257-257 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b422/2084 bl:2.5468 bb:1.2205 rl:2.3144 rb:1.0565 dl:254-254 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b414/2084 bl:2.4299 bb:1.1907 rl:2.3146 rb:1.0566 dl:251-251 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b407/2084 bl:2.3705 bb:1.0879 rl:2.3146 rb:1.0567 dl:248-248 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b394/2084 bl:2.3411 bb:1.1354 rl:2.3146 rb:1.0567 dl:243-244 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b391/2084 bl:2.5345 bb:1.1338 rl:2.3148 rb:1.0568 dl:242-242 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b382/2084 bl:2.4005 bb:1.1649 rl:2.3149 rb:1.0569 dl:239-239 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b369/2084 bl:2.3835 bb:1.0766 rl:2.3150 rb:1.0569 dl:234-235 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b365/2084 bl:2.3993 bb:1.1163 rl:2.3151 rb:1.0570 dl:233-233 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b359/2084 bl:2.4052 bb:1.1788 rl:2.3151 rb:1.0571 dl:231-231 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b349/2084 bl:2.3330 bb:1.1014 rl:2.3151 rb:1.0571 dl:228-228 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b341/2084 bl:2.3062 bb:1.1130 rl:2.3151 rb:1.0572 dl:225-225 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b331/2084 bl:2.3689 bb:1.1353 rl:2.3152 rb:1.0572 dl:221-222 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b328/2084 bl:2.3575 bb:1.1433 rl:2.3152 rb:1.0573 dl:220-220 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b319/2084 bl:2.5452 bb:1.1061 rl:2.3154 rb:1.0573 dl:217-217 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b312/2084 bl:2.4439 bb:1.1607 rl:2.3155 rb:1.0574 dl:215-215 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b302/2084 bl:2.4711 bb:1.1480 rl:2.3156 rb:1.0575 dl:211-212 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b299/2084 bl:2.5417 bb:1.1779 rl:2.3158 rb:1.0576 dl:210-210 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b290/2084 bl:2.6093 bb:1.2228 rl:2.3161 rb:1.0577 dl:207-207 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b280/2084 bl:2.3324 bb:1.1340 rl:2.3161 rb:1.0578 dl:204-204 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b275/2084 bl:2.5163 bb:1.2230 rl:2.3162 rb:1.0579 dl:202-202 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b268/2084 bl:2.5024 bb:1.1329 rl:2.3164 rb:1.0579 dl:200-200 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b258/2084 bl:2.5078 bb:1.1960 rl:2.3165 rb:1.0580 dl:196-197 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b254/2084 bl:2.3964 bb:1.2110 rl:2.3166 rb:1.0581 dl:195-195 
gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b246/2084 bl:2.4688 bb:1.1687 rl:2.3167 rb:1.0582 dl:192-192 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b237/2084 bl:2.3904 bb:1.0969 rl:2.3167 rb:1.0582 dl:189-189 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b226/2084 bl:2.4873 bb:1.2203 rl:2.3168 rb:1.0584 dl:186-186 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b221/2084 bl:2.4147 bb:1.1644 rl:2.3169 rb:1.0584 dl:184-184 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b215/2084 bl:2.4847 bb:1.1044 rl:2.3170 rb:1.0585 dl:182-182 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b206/2084 bl:2.4885 bb:1.1495 rl:2.3171 rb:1.0585 dl:179-179 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b200/2084 bl:2.5305 bb:1.1526 rl:2.3173 rb:1.0586 dl:177-177 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b191/2084 bl:2.4283 bb:1.1729 rl:2.3174 rb:1.0587 dl:174-174 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b185/2084 bl:2.4568 bb:1.1783 rl:2.3174 rb:1.0587 dl:172-172 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b175/2084 bl:2.7408 bb:1.2753 rl:2.3177 rb:1.0589 dl:169-169 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b167/2084 bl:2.5736 bb:1.1809 rl:2.3179 rb:1.0589 dl:166-166 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b161/2084 bl:2.3974 bb:1.2663 rl:2.3179 rb:1.0590 dl:164-164 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b152/2084 bl:2.2826 bb:1.0955 rl:2.3179 rb:1.0591 dl:161-161 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b148/2084 bl:2.4737 bb:1.2004 rl:2.3180 rb:1.0591 dl:159-159 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b139/2084 bl:2.6008 bb:1.2869 rl:2.3182 rb:1.0593 dl:156-156 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b131/2084 bl:2.4858 bb:1.1489 rl:2.3183 rb:1.0593 dl:153-153 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b124/2084 bl:2.4307 bb:1.1141 rl:2.3183 rb:1.0594 dl:150-150 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b114/2084 bl:2.6559 bb:1.3181 rl:2.3185 rb:1.0595 dl:146-147 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b110/2084 bl:2.4993 bb:1.1756 rl:2.3186 rb:1.0595 dl:145-145 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b99/2084 bl:2.5255 bb:1.1514 rl:2.3187 rb:1.0596 dl:141-142 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b94/2084 bl:2.6265 bb:1.2722 rl:2.3189 rb:1.0597 dl:139-139 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b85/2084 bl:2.6049 bb:1.1949 rl:2.3190 rb:1.0598 dl:136-136 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b77/2084 bl:2.5175 bb:1.1816 rl:2.3191 rb:1.0598 dl:133-133 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b70/2084 bl:2.5126 bb:1.1733 rl:2.3192 rb:1.0599 dl:130-130 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b61/2084 bl:2.4816 bb:1.1862 rl:2.3193 rb:1.0599 dl:126-127 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b54/2084 bl:2.5884 bb:1.2120 rl:2.3194 rb:1.0600 dl:123-124 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b49/2084 bl:2.5254 bb:1.1611 rl:2.3195 rb:1.0600 dl:121-121 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b41/2084 bl:2.6546 bb:1.2726 rl:2.3196 rb:1.0601 dl:117-117 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b32/2084 bl:2.5992 bb:1.1609 rl:2.3198 rb:1.0602 dl:112-112 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b23/2084 bl:2.6174 bb:1.1834 rl:2.3199 rb:1.0602 dl:106-106 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b16/2084 bl:2.6649 bb:1.1603 rl:2.3200 rb:1.0603 dl:100-101 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b10/2084 bl:2.7259 bb:1.2101 rl:2.3202 rb:1.0603 dl:94-95 gd:1 sr:0 sf:1 tr:24/24 wt:0 +ttp: b3/2084 bl:2.8523 bb:1.1767 rl:2.3203 rb:1.0604 dl:82-84 gd:1 sr:0 sf:1 tr:24/24 wt:0 +quantized_ttt_phased val_loss:2.31072442 val_bpb:1.05590816 eval_time:506254ms +total_eval_time:506.3s diff --git a/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/train_gpt.py b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/train_gpt.py new file mode 100644 index 0000000000..d2e5381209 --- /dev/null +++ 
b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/train_gpt.py @@ -0,0 +1,4862 @@ +import base64, collections, copy, fcntl, glob, io, lzma, math, os +from pathlib import Path +import random, re, subprocess, sys, time, uuid, numpy as np, sentencepiece as spm, torch, torch.distributed as dist, torch.nn.functional as F +from torch import Tensor, nn +from flash_attn_interface import ( + flash_attn_func as flash_attn_3_func, + flash_attn_varlen_func, +) +from concurrent.futures import ThreadPoolExecutor +import triton +import triton.language as tl +from triton.tools.tensor_descriptor import TensorDescriptor + + +# ===== Fused softcapped cross-entropy (Triton) — training-only path ===== +# Replaces the eager +# logits_softcap = softcap * tanh(logits / softcap) +# F.cross_entropy(logits_softcap.float(), targets, reduction="mean") +# sequence with a single fused kernel that reads logits_proj once, applies +# softcap in-register, and computes (LSE, loss) in one streaming pass. The +# backward kernel mirrors the forward so there's no stored softcapped logits. +# Numerically identical to the eager path up to fp32 accumulation differences. +_FUSED_CE_LIBRARY = "pgsubmission1draft7fusedce" +_FUSED_CE_BLOCK_SIZE = 1024 +_FUSED_CE_NUM_WARPS = 4 + + +@triton.jit +def _softcapped_ce_fwd_kernel( + logits_ptr, losses_ptr, lse_ptr, targets_ptr, + stride_logits_n, stride_logits_v, + n_rows, n_cols, softcap, + block_size: tl.constexpr, +): + row_idx = tl.program_id(0).to(tl.int64) + logits_row_ptr = logits_ptr + row_idx * stride_logits_n + max_val = -float("inf") + sum_exp = 0.0 + A = 2.0 * softcap + inv_C = 2.0 / softcap + for off in range(0, n_cols, block_size): + cols = off + tl.arange(0, block_size) + mask = cols < n_cols + val = tl.load( + logits_row_ptr + cols * stride_logits_v, + mask=mask, other=-float("inf"), + ).to(tl.float32) + z = A * tl.sigmoid(val * inv_C) + z = tl.where(mask, z, -float("inf")) + curr_max = tl.max(z, axis=0) + new_max = tl.maximum(max_val, curr_max) + sum_exp = sum_exp * tl.exp(max_val - new_max) + tl.sum(tl.exp(z - new_max), axis=0) + max_val = new_max + lse = max_val + tl.log(sum_exp) + tl.store(lse_ptr + row_idx, lse) + target = tl.load(targets_ptr + row_idx).to(tl.int32) + target_val = tl.load(logits_row_ptr + target * stride_logits_v).to(tl.float32) + target_z = A * tl.sigmoid(target_val * inv_C) + tl.store(losses_ptr + row_idx, lse - target_z) + + +@triton.jit +def _softcapped_ce_bwd_kernel( + grad_logits_ptr, grad_losses_ptr, lse_ptr, logits_ptr, targets_ptr, + stride_logits_n, stride_logits_v, + stride_grad_n, stride_grad_v, + n_rows, n_cols, softcap, + block_size: tl.constexpr, +): + row_idx = tl.program_id(0).to(tl.int64) + logits_row_ptr = logits_ptr + row_idx * stride_logits_n + grad_row_ptr = grad_logits_ptr + row_idx * stride_grad_n + lse = tl.load(lse_ptr + row_idx) + grad_loss = tl.load(grad_losses_ptr + row_idx).to(tl.float32) + target = tl.load(targets_ptr + row_idx).to(tl.int32) + A = 2.0 * softcap + inv_C = 2.0 / softcap + dz_dx_scale = A * inv_C + for off in range(0, n_cols, block_size): + cols = off + tl.arange(0, block_size) + mask = cols < n_cols + val = tl.load( + logits_row_ptr + cols * stride_logits_v, + mask=mask, other=0.0, + ).to(tl.float32) + sigmoid_u = tl.sigmoid(val * inv_C) + z = A * sigmoid_u + probs = tl.exp(z - lse) + grad_z = grad_loss * (probs - tl.where(cols == target, 1.0, 0.0)) + grad_x = grad_z * (dz_dx_scale * sigmoid_u * (1.0 - sigmoid_u)) + tl.store(grad_row_ptr + cols * stride_grad_v, grad_x, mask=mask) + + 
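+
+# A quick, self-contained sanity sketch of the identity the two kernels above
+# rely on (the helper name `_softcap_identity_check` is illustrative, added in
+# review, and not part of the PR #2014 lineage): tanh(u) = 2*sigmoid(2u) - 1, so
+#   softcap * tanh(x / softcap) = A * sigmoid(x * inv_C) - softcap
+# with A = 2*softcap and inv_C = 2/softcap. The kernels drop the "- softcap"
+# constant because cross-entropy is invariant to a uniform shift of the logits.
+def _softcap_identity_check(softcap: float = 30.0, vocab: int = 8192) -> None:
+    x = torch.randn(16, vocab)
+    t = torch.randint(0, vocab, (16,))
+    eager = softcap * torch.tanh(x / softcap)
+    # Kernel parameterization z = A * sigmoid(x * inv_C), with the shift dropped.
+    fused = 2.0 * softcap * torch.sigmoid(x * (2.0 / softcap))
+    assert torch.allclose(eager + softcap, fused, atol=1e-5)
+    # The dropped constant cancels in (lse - target_z), so the loss matches.
+    assert torch.allclose(F.cross_entropy(eager, t), F.cross_entropy(fused, t), atol=1e-5)
+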
+def _validate_softcapped_ce_inputs( + logits: Tensor, targets: Tensor, softcap: float, +) -> tuple[Tensor, Tensor]: + if logits.ndim != 2: + raise ValueError(f"Expected logits.ndim=2, got {logits.ndim}") + if targets.ndim != 1: + raise ValueError(f"Expected targets.ndim=1, got {targets.ndim}") + if logits.shape[0] != targets.shape[0]: + raise ValueError( + f"Expected matching rows, got logits={tuple(logits.shape)} targets={tuple(targets.shape)}" + ) + if not logits.is_cuda or not targets.is_cuda: + raise ValueError("softcapped_cross_entropy requires CUDA tensors") + if softcap <= 0.0: + raise ValueError(f"softcap must be positive, got {softcap}") + if logits.dtype not in (torch.float16, torch.bfloat16, torch.float32): + raise ValueError(f"Unsupported logits dtype: {logits.dtype}") + logits = logits.contiguous() + targets = targets.contiguous() + if targets.dtype != torch.int64: + targets = targets.to(dtype=torch.int64) + return logits, targets + + +@torch.library.custom_op(f"{_FUSED_CE_LIBRARY}::softcapped_ce", mutates_args=()) +def softcapped_ce_op(logits: Tensor, targets: Tensor, softcap: float) -> tuple[Tensor, Tensor]: + logits, targets = _validate_softcapped_ce_inputs(logits, targets, float(softcap)) + n_rows, n_cols = logits.shape + losses = torch.empty((n_rows,), device=logits.device, dtype=torch.float32) + lse = torch.empty((n_rows,), device=logits.device, dtype=torch.float32) + _softcapped_ce_fwd_kernel[(n_rows,)]( + logits, losses, lse, targets, + logits.stride(0), logits.stride(1), + n_rows, n_cols, float(softcap), + block_size=_FUSED_CE_BLOCK_SIZE, num_warps=_FUSED_CE_NUM_WARPS, + ) + return losses, lse + + +@softcapped_ce_op.register_fake +def _(logits: Tensor, targets: Tensor, softcap: float): + if logits.ndim != 2 or targets.ndim != 1: + raise ValueError("softcapped_ce fake impl expects 2D logits and 1D targets") + if logits.shape[0] != targets.shape[0]: + raise ValueError( + f"Expected matching rows, got logits={tuple(logits.shape)} targets={tuple(targets.shape)}" + ) + n_rows = logits.shape[0] + return ( + logits.new_empty((n_rows,), dtype=torch.float32), + logits.new_empty((n_rows,), dtype=torch.float32), + ) + + +@torch.library.custom_op(f"{_FUSED_CE_LIBRARY}::softcapped_ce_backward", mutates_args=()) +def softcapped_ce_backward_op( + logits: Tensor, targets: Tensor, lse: Tensor, grad_losses: Tensor, softcap: float, +) -> Tensor: + logits, targets = _validate_softcapped_ce_inputs(logits, targets, float(softcap)) + lse = lse.contiguous() + grad_losses = grad_losses.contiguous().to(dtype=torch.float32) + if lse.ndim != 1 or grad_losses.ndim != 1: + raise ValueError("Expected 1D lse and grad_losses") + if lse.shape[0] != logits.shape[0] or grad_losses.shape[0] != logits.shape[0]: + raise ValueError( + f"Expected row-aligned lse/grad_losses, got logits={tuple(logits.shape)} " + f"lse={tuple(lse.shape)} grad_losses={tuple(grad_losses.shape)}" + ) + grad_logits = torch.empty_like(logits) + n_rows, n_cols = logits.shape + _softcapped_ce_bwd_kernel[(n_rows,)]( + grad_logits, grad_losses, lse, logits, targets, + logits.stride(0), logits.stride(1), + grad_logits.stride(0), grad_logits.stride(1), + n_rows, n_cols, float(softcap), + block_size=_FUSED_CE_BLOCK_SIZE, num_warps=_FUSED_CE_NUM_WARPS, + ) + return grad_logits + + +@softcapped_ce_backward_op.register_fake +def _(logits: Tensor, targets: Tensor, lse: Tensor, grad_losses: Tensor, softcap: float): + if logits.ndim != 2 or targets.ndim != 1 or lse.ndim != 1 or grad_losses.ndim != 1: + raise ValueError("softcapped_ce_backward 
fake impl expects 2D logits and 1D row tensors") + if ( + logits.shape[0] != targets.shape[0] + or logits.shape[0] != lse.shape[0] + or logits.shape[0] != grad_losses.shape[0] + ): + raise ValueError("softcapped_ce_backward fake impl expects row-aligned tensors") + return logits.new_empty(logits.shape) + + +def _softcapped_ce_setup_context( + ctx: torch.autograd.function.FunctionCtx, inputs, output, +) -> None: + logits, targets, softcap = inputs + _losses, lse = output + ctx.save_for_backward(logits, targets, lse) + ctx.softcap = float(softcap) + + +def _softcapped_ce_backward( + ctx: torch.autograd.function.FunctionCtx, grad_losses: Tensor, grad_lse: "Tensor | None", +): + del grad_lse + logits, targets, lse = ctx.saved_tensors + grad_logits = torch.ops.pgsubmission1draft7fusedce.softcapped_ce_backward( + logits, targets, lse, grad_losses, ctx.softcap + ) + return grad_logits, None, None + + +softcapped_ce_op.register_autograd( + _softcapped_ce_backward, setup_context=_softcapped_ce_setup_context, +) + + +def softcapped_cross_entropy( + logits: Tensor, targets: Tensor, softcap: float, reduction: str = "mean", +) -> Tensor: + losses, _lse = torch.ops.pgsubmission1draft7fusedce.softcapped_ce( + logits, targets, float(softcap) + ) + if reduction == "none": + return losses + if reduction == "sum": + return losses.sum() + if reduction == "mean": + return losses.mean() + raise ValueError(f"Unsupported reduction={reduction!r}") + + +@triton.jit +def fused_log_softmax_dual_gather_kernel( + logits_ptr, + target_ids_ptr, + hint_ids_ptr, + log_p_y_out_ptr, + log_q_h_out_ptr, + BT, + V, + BLOCK_V: tl.constexpr, +): + pid = tl.program_id(0) + if pid >= BT: + return + target = tl.load(target_ids_ptr + pid) + hint = tl.load(hint_ids_ptr + pid) + row_offset = pid * V + target_logit = tl.load(logits_ptr + row_offset + target).to(tl.float32) + hint_logit = tl.load(logits_ptr + row_offset + hint).to(tl.float32) + max_val = -float("inf") + for v_start in tl.range(0, V, BLOCK_V): + v_offsets = v_start + tl.arange(0, BLOCK_V) + mask = v_offsets < V + chunk = tl.load( + logits_ptr + row_offset + v_offsets, mask=mask, other=-float("inf") + ).to(tl.float32) + max_val = tl.maximum(max_val, tl.max(chunk, axis=0)) + sum_exp = tl.zeros((), dtype=tl.float32) + for v_start in tl.range(0, V, BLOCK_V): + v_offsets = v_start + tl.arange(0, BLOCK_V) + mask = v_offsets < V + chunk = tl.load( + logits_ptr + row_offset + v_offsets, mask=mask, other=0.0 + ).to(tl.float32) + sum_exp += tl.sum(tl.where(mask, tl.exp(chunk - max_val), 0.0), axis=0) + lse = max_val + tl.log(sum_exp) + tl.store(log_p_y_out_ptr + pid, target_logit - lse) + tl.store(log_q_h_out_ptr + pid, hint_logit - lse) + + +def fused_log_softmax_dual_gather(logits, target_ids, hint_ids): + bsz, sl, V = logits.shape + BT = bsz * sl + logits_flat = logits.reshape(BT, V).contiguous() + target_flat = target_ids.reshape(BT).contiguous() + hint_flat = hint_ids.reshape(BT).contiguous() + log_p_y_out = torch.empty(BT, dtype=torch.float32, device=logits.device) + log_q_h_out = torch.empty(BT, dtype=torch.float32, device=logits.device) + fused_log_softmax_dual_gather_kernel[(BT,)]( + logits_flat, + target_flat, + hint_flat, + log_p_y_out, + log_q_h_out, + BT, + V, + BLOCK_V=1024, + num_warps=8, + ) + return log_p_y_out.reshape(bsz, sl), log_q_h_out.reshape(bsz, sl) + + +class Hyperparameters: + data_dir = os.environ.get("DATA_DIR", "./data/") + seed = int(os.environ.get("SEED", 1337)) + run_id = os.environ.get("RUN_ID", str(uuid.uuid4())) + iterations = 
int(os.environ.get("ITERATIONS", 20000)) + warmdown_frac = float(os.environ.get("WARMDOWN_FRAC", 0.75)) + warmdown_iters = int(os.environ.get("WARMDOWN_ITERS", 0)) + midrun_cap_schedule = os.environ.get("MIDRUN_CAP_SCHEDULE", "").strip() + midrun_cap_log_updates = bool(int(os.environ.get("MIDRUN_CAP_LOG_UPDATES", "0"))) + warmup_steps = int(os.environ.get("WARMUP_STEPS", 20)) + train_batch_tokens = int(os.environ.get("TRAIN_BATCH_TOKENS", 786432)) + # Fused softcapped CE (Triton). Training-only — forward_logits eval path still uses + # eager softcap+F.cross_entropy. Default ON since validated as at-worst neutral. + fused_ce_enabled = bool(int(os.environ.get("FUSED_CE_ENABLED", "1"))) + train_seq_len = int(os.environ.get("TRAIN_SEQ_LEN", 2048)) + train_seq_schedule = os.environ.get("TRAIN_SEQ_SCHEDULE", "") + train_seq_schedule_mode = os.environ.get("TRAIN_SEQ_SCHEDULE_MODE", "wallclock").strip().lower() + seq_change_warmup_steps = int(os.environ.get("SEQ_CHANGE_WARMUP_STEPS", 0)) + compile_shape_warmup = bool(int(os.environ.get("COMPILE_SHAPE_WARMUP", "0"))) + compile_shape_warmup_iters = int(os.environ.get("COMPILE_SHAPE_WARMUP_ITERS", "1")) + compile_shape_warmup_loop_modes = os.environ.get("COMPILE_SHAPE_WARMUP_LOOP_MODES", "auto").strip().lower() + train_log_every = int(os.environ.get("TRAIN_LOG_EVERY", 500)) + max_wallclock_seconds = float(os.environ.get("MAX_WALLCLOCK_SECONDS", 6e2)) + val_batch_tokens = int(os.environ.get("VAL_BATCH_TOKENS", 524288)) + eval_seq_len = int(os.environ.get("EVAL_SEQ_LEN", 2048)) + val_loss_every = int(os.environ.get("VAL_LOSS_EVERY", 4000)) + vocab_size = int(os.environ.get("VOCAB_SIZE", 8192)) + num_layers = int(os.environ.get("NUM_LAYERS", 11)) + xsa_last_n = int(os.environ.get("XSA_LAST_N", 11)) + model_dim = int(os.environ.get("MODEL_DIM", 512)) + num_kv_heads = int(os.environ.get("NUM_KV_HEADS", 4)) + num_heads = int(os.environ.get("NUM_HEADS", 8)) + mlp_mult = float(os.environ.get("MLP_MULT", 4.0)) + skip_gates_enabled = bool(int(os.environ.get("SKIP_GATES_ENABLED", "1"))) + tie_embeddings = bool(int(os.environ.get("TIE_EMBEDDINGS", "1"))) + logit_softcap = float(os.environ.get("LOGIT_SOFTCAP", 3e1)) + rope_base = float(os.environ.get("ROPE_BASE", 1e4)) + rope_dims = int(os.environ.get("ROPE_DIMS", 16)) + rope_train_seq_len = int(os.environ.get("ROPE_TRAIN_SEQ_LEN", 2048)) + rope_yarn = bool(int(os.environ.get("ROPE_YARN", "0"))) + ln_scale = bool(int(os.environ.get("LN_SCALE", "1"))) + qk_gain_init = float(os.environ.get("QK_GAIN_INIT", 5.0)) + num_loops = int(os.environ.get("NUM_LOOPS", 2)) + loop_start = int(os.environ.get("LOOP_START", 3)) + loop_end = int(os.environ.get("LOOP_END", 5)) + enable_looping_at = float(os.environ.get("ENABLE_LOOPING_AT", 0.35)) + parallel_start_layer = int(os.environ.get("PARALLEL_START_LAYER", 8)) + parallel_final_lane = os.environ.get("PARALLEL_FINAL_LANE", "mean") + min_lr = float(os.environ.get("MIN_LR", 0.0)) + embed_lr = float(os.environ.get("EMBED_LR", 0.6)) + tied_embed_lr = float(os.environ.get("TIED_EMBED_LR", 0.03)) + tied_embed_init_std = float(os.environ.get("TIED_EMBED_INIT_STD", 0.005)) + matrix_lr = float(os.environ.get("MATRIX_LR", 0.026)) + scalar_lr = float(os.environ.get("SCALAR_LR", 0.02)) + muon_momentum = float(os.environ.get("MUON_MOMENTUM", 0.97)) + muon_backend_steps = int(os.environ.get("MUON_BACKEND_STEPS", 5)) + muon_momentum_warmup_start = float( + os.environ.get("MUON_MOMENTUM_WARMUP_START", 0.92) + ) + muon_momentum_warmup_steps = int(os.environ.get("MUON_MOMENTUM_WARMUP_STEPS", 
1500)) + muon_row_normalize = bool(int(os.environ.get("MUON_ROW_NORMALIZE", "1"))) + beta1 = float(os.environ.get("BETA1", 0.9)) + beta2 = float(os.environ.get("BETA2", 0.95)) + adam_eps = float(os.environ.get("ADAM_EPS", 1e-08)) + grad_clip_norm = float(os.environ.get("GRAD_CLIP_NORM", 0.3)) + eval_stride = int(os.environ.get("EVAL_STRIDE", 64)) + eval_include_tail = bool(int(os.environ.get("EVAL_INCLUDE_TAIL", "1"))) + adam_wd = float(os.environ.get("ADAM_WD", 0.02)) + muon_wd = float(os.environ.get("MUON_WD", 0.095)) + embed_wd = float(os.environ.get("EMBED_WD", 0.085)) + ema_decay = float(os.environ.get("EMA_DECAY", 0.9965)) + ttt_enabled = bool(int(os.environ.get("TTT_ENABLED", "1"))) + ttt_lora_rank = int(os.environ.get("TTT_LORA_RANK", 96)) + ttt_lora_lr = float(os.environ.get("TTT_LORA_LR", 0.0001)) + ttt_local_lr_mult = float(os.environ.get("TTT_LOCAL_LR_MULT", 1.0)) + ttt_chunk_size = int(os.environ.get("TTT_CHUNK_SIZE", 48)) + ttt_eval_seq_len = int(os.environ.get("TTT_EVAL_SEQ_LEN", 2048)) + ttt_batch_size = int(os.environ.get("TTT_BATCH_SIZE", 64)) + ttt_grad_steps = int(os.environ.get("TTT_GRAD_STEPS", 1)) + # V19: PR #1886 (renqianluo) + sunnypatneedi research log 2026-04-28 found that + # the Triton fused-CE kernel's fp32-accumulation interacts with warm-start LoRA-A + # to destabilize seeds 314/1337 at TTT_WEIGHT_DECAY=1.0. Raising the default to + # 2.0 prevents seed collapse without measurably moving stable seeds. + ttt_weight_decay = float(os.environ.get("TTT_WEIGHT_DECAY", 2.0)) + ttt_beta1 = float(os.environ.get("TTT_BETA1", 0)) + ttt_beta2 = float(os.environ.get("TTT_BETA2", 0.999)) + ttt_mask = os.environ.get("TTT_MASK", "").strip().lower() + _ttt_q_default = "1" + _ttt_v_default = "1" + if ttt_mask in ("", "all", "baseline_all"): + pass + elif ttt_mask == "no_q": + _ttt_q_default = "0" + elif ttt_mask == "no_v": + _ttt_v_default = "0" + elif ttt_mask == "no_qv": + _ttt_q_default = "0" + _ttt_v_default = "0" + else: + raise ValueError(f"Unsupported TTT_MASK={ttt_mask!r}") + ttt_q_lora = bool(int(os.environ.get("TTT_Q_LORA", _ttt_q_default))) + ttt_k_lora = bool(int(os.environ.get("TTT_K_LORA", "1"))) + ttt_v_lora = bool(int(os.environ.get("TTT_V_LORA", _ttt_v_default))) + ttt_mlp_lora = bool(int(os.environ.get("TTT_MLP_LORA", "1"))) + ttt_o_lora = bool(int(os.environ.get("TTT_O_LORA", "1"))) + ttt_optimizer = os.environ.get("TTT_OPTIMIZER", "adam") + ttt_eval_batches = os.environ.get("TTT_EVAL_BATCHES", "") + ttt_short_doc_len = int(os.environ.get("TTT_SHORT_DOC_LEN", 512)) + ttt_short_lora_enabled = bool(int(os.environ.get("TTT_SHORT_LORA_ENABLED", "0"))) + ttt_short_lora_rank = int(os.environ.get("TTT_SHORT_LORA_RANK", ttt_lora_rank)) + ttt_short_lora_lr = float(os.environ.get("TTT_SHORT_LORA_LR", ttt_lora_lr)) + ttt_short_weight_decay = float(os.environ.get("TTT_SHORT_WEIGHT_DECAY", ttt_weight_decay)) + ttt_short_beta2 = float(os.environ.get("TTT_SHORT_BETA2", ttt_beta2)) + ttt_short_score_first_enabled = bool(int(os.environ.get("TTT_SHORT_SCORE_FIRST_ENABLED", "0"))) + ttt_short_chunk_size = int(os.environ.get("TTT_SHORT_CHUNK_SIZE", ttt_chunk_size)) + ttt_short_score_first_steps = os.environ.get("TTT_SHORT_SCORE_FIRST_STEPS", "") + ttt_train_min_doc_len = int(os.environ.get("TTT_TRAIN_MIN_DOC_LEN", "0")) + ttt_train_max_doc_len = int(os.environ.get("TTT_TRAIN_MAX_DOC_LEN", "0")) + ttt_warm_start_mean_enabled = bool(int(os.environ.get("TTT_WARM_START_MEAN_ENABLED", "0"))) + ttt_warm_start_mean_doc_len = int(os.environ.get("TTT_WARM_START_MEAN_DOC_LEN", 
ttt_short_doc_len)) + ttt_warm_start_mean_momentum = float(os.environ.get("TTT_WARM_START_MEAN_MOMENTUM", 0.95)) + val_doc_fraction = float(os.environ.get("VAL_DOC_FRACTION", 1.0)) + compressor = os.environ.get("COMPRESSOR", "brotli") + gptq_calibration_batches = int(os.environ.get("GPTQ_CALIBRATION_BATCHES", 16)) + gptq_reserve_seconds = float(os.environ.get("GPTQ_RESERVE_SECONDS", 4.0)) + phased_ttt_prefix_docs = int(os.environ.get("PHASED_TTT_PREFIX_DOCS", 2000)) + phased_ttt_num_phases = int(os.environ.get("PHASED_TTT_NUM_PHASES", 1)) + global_ttt_lr = float(os.environ.get("GLOBAL_TTT_LR", 0.001)) + global_ttt_momentum = float(os.environ.get("GLOBAL_TTT_MOMENTUM", 0.9)) + global_ttt_epochs = int(os.environ.get("GLOBAL_TTT_EPOCHS", 1)) + global_ttt_chunk_tokens = int(os.environ.get("GLOBAL_TTT_CHUNK_TOKENS", 32768)) + global_ttt_batch_seqs = int(os.environ.get("GLOBAL_TTT_BATCH_SEQS", 32)) + global_ttt_warmup_start_lr = float(os.environ.get("GLOBAL_TTT_WARMUP_START_LR", 0.0)) + global_ttt_warmup_chunks = int(os.environ.get("GLOBAL_TTT_WARMUP_CHUNKS", 0)) + global_ttt_grad_clip = float(os.environ.get("GLOBAL_TTT_GRAD_CLIP", 1.0)) + global_ttt_respect_doc_boundaries = bool(int(os.environ.get("GLOBAL_TTT_RESPECT_DOC_BOUNDARIES", "1"))) + matrix_bits = int(os.environ.get("MATRIX_BITS", 6)) + embed_bits = int(os.environ.get("EMBED_BITS", 8)) + matrix_clip_sigmas = float(os.environ.get("MATRIX_CLIP_SIGMAS", 12.85)) + embed_clip_sigmas = float(os.environ.get("EMBED_CLIP_SIGMAS", 2e1)) + mlp_clip_sigmas = float(os.environ.get("MLP_CLIP_SIGMAS", 10.0)) + attn_clip_sigmas = float(os.environ.get("ATTN_CLIP_SIGMAS", 13.0)) + # AttnOutGate (per-head multiplicative output gate, PR #1667 MarioPaerle). + # Zero-init weight: 2*sigmoid(0)=1 -> transparent at start. Source defaults to + # block input x ('proj'); 'q' uses raw Q projection output. + attn_out_gate_enabled = bool(int(os.environ.get("ATTN_OUT_GATE_ENABLED", "0"))) + attn_out_gate_src = os.environ.get("ATTN_OUT_GATE_SRC", "proj") + # SmearGate (input-dependent forward-1 token smear, modded-nanogpt @classiclarryd + # via PR #1667). x_t <- x_t + lam * sigmoid(W*x_t[:gate_window]) * x_{t-1}. + # lam=0 + W=0 -> transparent at init. + smear_gate_enabled = bool(int(os.environ.get("SMEAR_GATE_ENABLED", "0"))) + # Window: first GATE_WINDOW dims of the source feed the gate projection. + gate_window = int(os.environ.get("GATE_WINDOW", 12)) + # Gated Attention (Qwen, NeurIPS 2025 Best Paper, arXiv:2505.06708; + # qiuzh20/gated_attention). Per-head sigmoid gate on SDPA output, BEFORE + # out_proj. Gate input = full block input x (paper's headwise G1 variant + # driven from hidden_states). W_g shape (num_heads, dim), plain sigmoid. + # Near-zero init gives g~0.5 at step 0 (half attention output); per-block + # attn_scale (init 1.0) compensates during training. Name contains + # "attn_gate" so CONTROL_TENSOR_NAME_PATTERNS routes it to scalar AdamW. + gated_attn_enabled = bool(int(os.environ.get("GATED_ATTN_ENABLED", "0"))) + gated_attn_init_std = float(os.environ.get("GATED_ATTN_INIT_STD", 0.01)) + # Dedicated int8-per-row quantization for `attn_gate_w` tensors. These are + # small ((num_heads, dim) = (8, 512) = 4096 params) and bypass GPTQ via the + # numel<=65536 passthrough branch -> stored as fp16 (8 KB/layer, ~65 KB total + # compressed). int8-per-row cuts the raw tensor in half with negligible BPB + # impact: scales per head (8 values), symmetric quant over [-127, 127]. + # No Hessian needed (gate weights not in collect_hessians()). 
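+    # Concretely (an illustrative walk-through of the scheme above, not output
+    # from the quantizer): for each head row w of shape (512,), scale =
+    # max(|w|) / 127, q = round(w / scale) clamped to [-127, 127], and dequant
+    # = q * scale, leaving 8 fp scales + 4096 int8 values per layer.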
+    gated_attn_quant_gate = bool(int(os.environ.get("GATED_ATTN_QUANT_GATE", "0")))
+    # Sparse Attention Gate (modded-nanogpt-style). Keeps dense SDPA and only
+    # swaps the output-gate input to the first GATE_WINDOW residual dims.
+    # W_g: (num_heads, gate_window) = (8, 12) = 96 params/layer (~1K total),
+    # vs dense GatedAttn's (8, 512) = 4K/layer (~45K total), a ~44K saving.
+    # Name "attn_gate_w" is shared so quant routing and the int8 gate
+    # passthrough Just Work; the passthrough is enabled via GATED_ATTN_QUANT_GATE=1.
+    # Mutually exclusive with ATTN_OUT_GATE_ENABLED and GATED_ATTN_ENABLED.
+    sparse_attn_gate_enabled = bool(int(os.environ.get("SPARSE_ATTN_GATE_ENABLED", "0")))
+    sparse_attn_gate_init_std = float(os.environ.get("SPARSE_ATTN_GATE_INIT_STD", "0.0"))
+    sparse_attn_gate_scale = float(os.environ.get("SPARSE_ATTN_GATE_SCALE", "1.0"))
+    # LQER asymmetric rank-k correction on top-K quant-error tensors (PR #1530 v2 port).
+    # Computes SVD of E = W_fp - W_quant, packs top-r A,B as INT2/INT4 (asym) or INTk (sym).
+    lqer_enabled = bool(int(os.environ.get("LQER_ENABLED", "1")))
+    lqer_rank = int(os.environ.get("LQER_RANK", 4))
+    lqer_top_k = int(os.environ.get("LQER_TOP_K", 3))
+    lqer_factor_bits = int(os.environ.get("LQER_FACTOR_BITS", 4))
+    lqer_asym_enabled = bool(int(os.environ.get("LQER_ASYM_ENABLED", "1")))
+    lqer_asym_group = int(os.environ.get("LQER_ASYM_GROUP", "64"))
+    lqer_scope = os.environ.get("LQER_SCOPE", "all")
+    lqer_gain_select = bool(int(os.environ.get("LQER_GAIN_SELECT", "0")))
+    awq_lite_enabled = bool(int(os.environ.get("AWQ_LITE_ENABLED", "0")))
+    awq_lite_bits = int(os.environ.get("AWQ_LITE_BITS", "8"))
+    awq_lite_group_top_k = int(os.environ.get("AWQ_LITE_GROUP_TOP_K", "1"))
+    awq_lite_group_size = int(os.environ.get("AWQ_LITE_GROUP_SIZE", "64"))
+    # PR #2041-style online n-gram tilt. Causal prefix-only hints over validation
+    # tokens, applied as a scoring-time posterior adjustment to per-token NLL.
+    ngram_tilt_enabled = bool(int(os.environ.get("NGRAM_TILT_ENABLED", "0")))
+    token_order = int(os.environ.get("TOKEN_ORDER", "16"))
+    token_threshold = float(os.environ.get("TOKEN_THRESHOLD", "0.800"))
+    token_boost = float(os.environ.get("TOKEN_BOOST", "2.625"))
+    within_tau = float(os.environ.get("WITHIN_TAU", "0.450"))
+    within_boost = float(os.environ.get("WITHIN_BOOST", "0.0"))
+    word_order = int(os.environ.get("WORD_ORDER", "4"))
+    word_normalize = os.environ.get("WORD_NORMALIZE", "strip_punct_lower")
+    word_tau = float(os.environ.get("WORD_TAU", "0.650"))
+    word_boost = float(os.environ.get("WORD_BOOST", "0.0"))
+    agree_add_boost = float(os.environ.get("AGREE_ADD_BOOST", "0.0"))
+    ngram_hint_precompute_outside = bool(int(os.environ.get("NGRAM_HINT_PRECOMPUTE_OUTSIDE", "1")))
+    # === ML INTERN PR #2014-EVOLUTION TRANSPLANTS (2026-05-01) ===
+    # T1. Skylight Muon u/w floor (modded-nanogpt PR #269, merged 2026-04-30).
+    # For each Muon-managed param W with proposed update U (post Newton-Schulz),
+    # if ||U||_F / ||W||_F < uw_ratio, rescale U so the ratio equals uw_ratio.
+    # Iso-loss savings of ~250 steps in modded-nanogpt 6x6-seed evidence.
+    # Surgical: zero artifact-byte cost (pure optimizer-state side effect).
+    # Pre-quant gain ~ -0.003 BPB expected (per modded-nanogpt analog).
+    skylight_uw_floor = bool(int(os.environ.get("SKYLIGHT_UW_FLOOR", "0")))
+    skylight_uw_ratio = float(os.environ.get("SKYLIGHT_UW_RATIO", "0.35"))
+    # T2. NorMuon-style per-row variance EMA on Muon updates.
+ # Maintain EMA(row_var(U)) with beta2=skylight_norm_beta2; divide each row + # of the update by sqrt(EMA+eps), then re-normalize Frobenius norm back to + # pre-normalization value. Pure optimizer state. Compose with uw_floor. + skylight_norm_ema = bool(int(os.environ.get("SKYLIGHT_NORM_EMA", "0"))) + skylight_norm_beta2 = float(os.environ.get("SKYLIGHT_NORM_BETA2", "0.95")) + skylight_norm_eps = float(os.environ.get("SKYLIGHT_NORM_EPS", "1e-7")) + # T3. Leaky-ReLU-square slope: PR #1948 found 0.3 > 0.5 in this regime. + # Patches both the eager path and the Triton fused kernel. Default 0.5 + # keeps the PR #2014 baseline byte-identical. + leaky_relu_sq_slope = float(os.environ.get("LEAKY_RELU_SQ_SLOPE", "0.5")) + # === END TRANSPLANTS === + distributed = "RANK" in os.environ and "WORLD_SIZE" in os.environ + rank = int(os.environ.get("RANK", "0")) + world_size = int(os.environ.get("WORLD_SIZE", "1")) + local_rank = int(os.environ.get("LOCAL_RANK", "0")) + is_main_process = rank == 0 + grad_accum_steps = 8 // world_size + # CaseOps integration: optional override of dataset root + tokenizer path. + # When CASEOPS_ENABLED=1, the wrapper loads a per-token byte sidecar + # (fineweb_val_bytes_*.bin, identical shard layout to val_*.bin) and uses + # it as the canonical raw-byte budget for BPB accounting. The sidecar + # REPLACES the build_sentencepiece_luts byte-counting path entirely. + caseops_enabled = bool(int(os.environ.get("CASEOPS_ENABLED", "0"))) + _default_caseops_data = os.path.join( + data_dir, + "datasets", + "fineweb10B_sp8192_caseops", + "datasets", + "datasets", + "fineweb10B_sp8192_lossless_caps_caseops_v1_reserved", + ) + _default_caseops_tok = os.path.join( + data_dir, + "datasets", + "fineweb10B_sp8192_caseops", + "datasets", + "tokenizers", + "fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model", + ) + if caseops_enabled: + datasets_dir = os.environ.get("DATA_PATH", _default_caseops_data) + tokenizer_path = os.environ.get("TOKENIZER_PATH", _default_caseops_tok) + else: + datasets_dir = os.environ.get( + "DATA_PATH", + os.path.join(data_dir, "datasets", f"fineweb10B_sp{vocab_size}"), + ) + tokenizer_path = os.environ.get( + "TOKENIZER_PATH", + os.path.join(data_dir, "tokenizers", f"fineweb_{vocab_size}_bpe.model"), + ) + train_files = os.path.join(datasets_dir, "fineweb_train_*.bin") + val_files = os.path.join(datasets_dir, "fineweb_val_*.bin") + val_bytes_files = os.path.join(datasets_dir, "fineweb_val_bytes_*.bin") + artifact_dir = os.environ.get("ARTIFACT_DIR", "") + logfile = ( + os.path.join(artifact_dir, f"{run_id}.txt") + if artifact_dir + else f"logs/{run_id}.txt" + ) + model_path = ( + os.path.join(artifact_dir, "final_model.pt") + if artifact_dir + else "final_model.pt" + ) + quantized_model_path = ( + os.path.join(artifact_dir, "final_model.int6.ptz") + if artifact_dir + else "final_model.int6.ptz" + ) + + +_logger_hparams = None + + +def set_logging_hparams(h): + global _logger_hparams + _logger_hparams = h + + +def log(msg, console=True): + if _logger_hparams is None: + print(msg) + return + if _logger_hparams.is_main_process: + if console: + print(msg) + if _logger_hparams.logfile is not None: + with open(_logger_hparams.logfile, "a", encoding="utf-8") as f: + print(msg, file=f) + + +def parse_train_seq_schedule(schedule, default_seq_len): + if not schedule.strip(): + return [(1.0, int(default_seq_len))] + plan = [] + for raw_stage in schedule.split(","): + raw_stage = raw_stage.strip() + if not raw_stage: + continue + if "@" not in raw_stage: + raise 
ValueError( + f"Invalid TRAIN_SEQ_SCHEDULE stage `{raw_stage}`; expected format like `1024@0.35`" + ) + seq_raw, progress_raw = raw_stage.split("@", 1) + seq_len = int(seq_raw.strip()) + progress = float(progress_raw.strip()) + if seq_len <= 0: + raise ValueError("TRAIN_SEQ_SCHEDULE sequence lengths must be positive") + if not (0.0 < progress <= 1.0): + raise ValueError("TRAIN_SEQ_SCHEDULE progress fractions must be in (0, 1]") + plan.append((progress, seq_len)) + if not plan: + return [(1.0, int(default_seq_len))] + plan.sort(key=lambda item: item[0]) + if plan[-1][0] < 1.0: + plan.append((1.0, plan[-1][1])) + return plan + + +def parse_scalar_schedule(schedule, default_value): + if not schedule.strip(): + return [(0.0, float(default_value))] + plan = [] + for raw_stage in schedule.split(","): + raw_stage = raw_stage.strip() + if not raw_stage: + continue + if "@" not in raw_stage: + raise ValueError( + f"Invalid scalar schedule stage `{raw_stage}`; expected format like `0.5@0.4`" + ) + value_raw, progress_raw = raw_stage.split("@", 1) + value = float(value_raw.strip()) + progress = float(progress_raw.strip()) + if not (0.0 <= progress <= 1.0): + raise ValueError("Scalar schedule progress fractions must be in [0, 1]") + plan.append((progress, value)) + if not plan: + return [(0.0, float(default_value))] + plan.sort(key=lambda item: item[0]) + if plan[0][0] > 0.0: + plan.insert(0, (0.0, plan[0][1])) + return plan + + +def schedule_value(plan, progress): + value = plan[0][1] + for threshold, candidate in plan: + if progress + 1e-12 >= threshold: + value = candidate + else: + break + return value + + +def max_train_seq_len_from_schedule(plan, default_seq_len): + return max([int(default_seq_len), *[seq_len for _, seq_len in plan]]) + + +def validate_train_seq_plan_compatibility( + plan, + *, + global_tokens, + world_size, + grad_accum_steps, +): + denom = world_size * grad_accum_steps + if denom <= 0: + raise ValueError(f"Invalid world_size * grad_accum_steps={denom}") + if global_tokens % denom != 0: + raise ValueError( + f"TRAIN_BATCH_TOKENS={global_tokens} must be divisible by world_size*grad_accum_steps={denom}" + ) + local_tokens = global_tokens // denom + invalid_seq_lens = sorted( + {seq_len for _, seq_len in plan if local_tokens % seq_len != 0} + ) + if invalid_seq_lens: + raise ValueError( + "TRAIN_SEQ_SCHEDULE contains sequence lengths incompatible with the local micro-batch: " + f"local_tokens={local_tokens}, invalid_seq_lens={invalid_seq_lens}. " + f"Each seq_len must divide {local_tokens} exactly." 
+ ) + return local_tokens + + +def training_progress( + *, + step, + iterations, + elapsed_ms, + max_wallclock_ms, + schedule_mode, +): + if schedule_mode == "step" or max_wallclock_ms is None or max_wallclock_ms <= 0: + return min(max(step / max(iterations, 1), 0.0), 1.0) + if schedule_mode != "wallclock": + raise ValueError( + f"Unsupported TRAIN_SEQ_SCHEDULE_MODE={schedule_mode!r}; expected 'wallclock' or 'step'" + ) + return min(max(elapsed_ms / max(max_wallclock_ms, 1e-9), 0.0), 1.0) + + +def current_train_seq_len( + plan, + *, + step, + iterations, + elapsed_ms, + max_wallclock_ms, + schedule_mode, +): + progress = training_progress( + step=step, + iterations=iterations, + elapsed_ms=elapsed_ms, + max_wallclock_ms=max_wallclock_ms, + schedule_mode=schedule_mode, + ) + for threshold, seq_len in plan: + if progress <= threshold: + return seq_len, progress + return plan[-1][1], progress + + +class ValidationData: + def __init__(self, h, device): + self.sp = spm.SentencePieceProcessor(model_file=h.tokenizer_path) + if int(self.sp.vocab_size()) != h.vocab_size: + raise ValueError( + f"VOCAB_SIZE={h.vocab_size} does not match tokenizer vocab_size={int(self.sp.vocab_size())}" + ) + self.val_tokens = load_validation_tokens( + h.val_files, h.eval_seq_len, include_tail=h.eval_include_tail + ) + self.caseops_enabled = bool(getattr(h, "caseops_enabled", False)) + if self.caseops_enabled: + self.base_bytes_lut = None + self.has_leading_space_lut = None + self.is_boundary_token_lut = None + else: + ( + self.base_bytes_lut, + self.has_leading_space_lut, + self.is_boundary_token_lut, + ) = build_sentencepiece_luts(self.sp, h.vocab_size, device) + self.val_bytes = None + if self.caseops_enabled: + self.val_bytes = load_validation_byte_sidecar( + h.val_bytes_files, h.eval_seq_len, self.val_tokens.numel() + ) + + +def build_sentencepiece_luts(sp, vocab_size, device): + sp_vocab_size = int(sp.vocab_size()) + assert ( + sp.piece_to_id("▁") != sp.unk_id() + ), "Tokenizer must have '▁' (space) as its own token for correct BPB byte counting" + table_size = max(sp_vocab_size, vocab_size) + base_bytes_np = np.zeros((table_size,), dtype=np.int16) + has_leading_space_np = np.zeros((table_size,), dtype=np.bool_) + is_boundary_token_np = np.ones((table_size,), dtype=np.bool_) + for token_id in range(sp_vocab_size): + if sp.is_control(token_id) or sp.is_unknown(token_id) or sp.is_unused(token_id): + continue + is_boundary_token_np[token_id] = False + if sp.is_byte(token_id): + base_bytes_np[token_id] = 1 + continue + piece = sp.id_to_piece(token_id) + if piece.startswith("▁"): + has_leading_space_np[token_id] = True + piece = piece[1:] + base_bytes_np[token_id] = len(piece.encode("utf-8")) + return ( + torch.tensor(base_bytes_np, dtype=torch.int16, device=device), + torch.tensor(has_leading_space_np, dtype=torch.bool, device=device), + torch.tensor(is_boundary_token_np, dtype=torch.bool, device=device), + ) + + +def load_validation_tokens(pattern, seq_len, include_tail=True): + # Filter out CaseOps byte sidecar shards which share the val_*.bin glob. 
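+    # (e.g. a sidecar named something like fineweb_val_bytes_000000.bin also
+    # matches the fineweb_val_*.bin token glob, hence the "_bytes_" infix test.)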
+    files = [
+        Path(p)
+        for p in sorted(glob.glob(pattern))
+        if "_bytes_" not in Path(p).name
+    ]
+    if not files:
+        raise FileNotFoundError(f"No files found for pattern: {pattern}")
+    tokens = torch.cat([load_data_shard(file) for file in files]).contiguous()
+    if include_tail:
+        if tokens.numel() <= 1:
+            raise ValueError(f"Validation split is too short for EVAL_SEQ_LEN={seq_len}")
+        return tokens
+    usable = (tokens.numel() - 1) // seq_len * seq_len
+    if usable <= 0:
+        raise ValueError(f"Validation split is too short for EVAL_SEQ_LEN={seq_len}")
+    return tokens[: usable + 1]
+
+
+def load_validation_byte_sidecar(pattern, seq_len, expected_len):
+    """Load CaseOps per-token byte sidecar(s). Same shard layout as token shards
+    (256 int32 header + uint16 array). Each entry = canonical raw-text byte
+    budget for that token in the corresponding val shard. Returns a CPU
+    int32 tensor sliced to match expected_len (i.e. val_tokens length)."""
+    files = [Path(p) for p in sorted(glob.glob(pattern))]
+    if not files:
+        raise FileNotFoundError(f"No byte sidecar files for pattern: {pattern}")
+    shards = [load_data_shard(file) for file in files]
+    # load_data_shard returns uint16 — that's exactly what the sidecar stores.
+    bytes_full = torch.cat(shards).contiguous()
+    if bytes_full.numel() < expected_len:
+        raise ValueError(
+            f"Byte sidecar too short: {bytes_full.numel()} < val_tokens {expected_len}"
+        )
+    return bytes_full[:expected_len].to(torch.int32)
+
+
+def load_data_shard(file):
+    header_bytes = 256 * np.dtype("<i4").itemsize
+    # NOTE: the rest of this loader, the helpers _read_num_tokens /
+    # _get_shard_memmap / get_next_multiple_of_n, the BOS_ID global, and the
+    # head of _build_cu_seqlens were lost to text mangling in this copy of the
+    # patch. Everything from here down to the `if max_doc_len > 0:` branch is a
+    # hedged reconstruction inferred from the call sites and the documented
+    # shard layout (256 int32 header, header[2] = token count, then a uint16
+    # token array); it is not the original source.
+    num_tokens = _read_num_tokens(file)
+    with open(file, "rb") as f:
+        f.seek(header_bytes)
+        arr = np.fromfile(f, dtype=np.uint16, count=num_tokens)
+    assert arr.size == num_tokens, f"short read in {file}"
+    return torch.from_numpy(arr)
+
+
+def _read_num_tokens(file):
+    header = np.fromfile(file, dtype="<i4", count=256)
+    return int(header[2])
+
+
+_shard_memmaps = {}
+
+
+def _get_shard_memmap(file):
+    if file not in _shard_memmaps:
+        offset = 256 * np.dtype("<i4").itemsize
+        _shard_memmaps[file] = np.memmap(file, dtype=np.uint16, mode="r", offset=offset)
+    return _shard_memmaps[file]
+
+
+def get_next_multiple_of_n(value, n):
+    return (value + n - 1) // n * n
+
+
+BOS_ID = None
+
+
+def _build_cu_seqlens(doc_starts, total_len, device, max_doc_len, bucket_size):
+    starts = doc_starts if doc_starts and doc_starts[0] == 0 else [0, *doc_starts]
+    seg_starts = []
+    for start, end in zip(starts, [*starts[1:], total_len]):
+        if max_doc_len > 0:
+            pos = start
+            while pos < end:
+                seg_starts.append(pos)
+                pos += max_doc_len
+        else:
+            seg_starts.append(start)
+    boundaries = seg_starts + [total_len]
+    padded_len = get_next_multiple_of_n(len(boundaries), bucket_size)
+    cu = torch.full((padded_len,), total_len, dtype=torch.int32, device=device)
+    cu[: len(boundaries)] = torch.tensor(boundaries, dtype=torch.int32, device=device)
+    seg_ends = seg_starts[1:] + [total_len]
+    max_seqlen = max(end - start for start, end in zip(seg_starts, seg_ends))
+    return cu, max_seqlen
+
+class DocumentPackingLoader:
+    _shard_pool = ThreadPoolExecutor(1)
+
+    def __init__(self, h, device, cu_bucket_size=64):
+        self.rank = h.rank
+        self.world_size = h.world_size
+        self.device = device
+        self.cu_bucket_size = cu_bucket_size
+        self.max_seq_len = h.train_seq_len
+        all_files = [Path(p) for p in sorted(glob.glob(h.train_files))]
+        if not all_files:
+            raise FileNotFoundError(f"No files found for pattern: {h.train_files}")
+        self.files = all_files
+        self.file_iter = iter(self.files)
+        self._init_shard(load_data_shard(next(self.file_iter)))
+        self._next_shard = self._submit_next_shard()
+        self._batch_pool = ThreadPoolExecutor(1)
+        self._prefetch_queue = []
+
+    def _init_shard(self, tokens):
+        global BOS_ID
+        self.tokens = tokens
+        self.shard_size = tokens.numel()
+        if BOS_ID is None:
+            BOS_ID = 1
+        self.bos_idx = (
+            (tokens == BOS_ID).nonzero(as_tuple=True)[0].to(torch.int64).cpu().numpy()
+        )
+        self.cursor = int(self.bos_idx[0])
+
+    def _submit_next_shard(self):
+        try:
+            path = next(self.file_iter)
+            return self._shard_pool.submit(load_data_shard, path)
+        except StopIteration:
+            return None
+
+    def _advance_shard(self):
+        if self._next_shard is None:
+            self.file_iter = iter(self.files)
+            self._next_shard = self._shard_pool.submit(
+                load_data_shard, next(self.file_iter)
+            )
+        self._init_shard(self._next_shard.result())
+        self._next_shard = self._submit_next_shard()
+
+    def _local_doc_starts(self, local_start, total_len):
+        lo =
np.searchsorted(self.bos_idx, local_start, side="left") + hi = np.searchsorted(self.bos_idx, local_start + total_len, side="left") + return (self.bos_idx[lo:hi] - local_start).tolist() + + def _prepare_batch(self, num_tokens_local, max_seq_len): + per_rank_span = num_tokens_local + 1 + global_span = per_rank_span * self.world_size + while self.cursor + global_span > self.shard_size: + self._advance_shard() + local_start = self.cursor + self.rank * per_rank_span + buf = self.tokens[local_start : local_start + per_rank_span] + inputs = torch.empty(per_rank_span - 1, dtype=torch.int64, pin_memory=True) + targets = torch.empty(per_rank_span - 1, dtype=torch.int64, pin_memory=True) + inputs.copy_(buf[:-1]) + targets.copy_(buf[1:]) + starts = self._local_doc_starts(local_start, inputs.numel()) + cu_seqlens, max_seqlen = _build_cu_seqlens( + starts, inputs.numel(), inputs.device, max_seq_len, self.cu_bucket_size + ) + cu_seqlens = cu_seqlens.pin_memory() + self.cursor += global_span + return inputs, targets, cu_seqlens, max_seqlen + + def next_batch(self, global_tokens, grad_accum_steps, max_seq_len=None): + if max_seq_len is None: + max_seq_len = self.max_seq_len + max_seq_len = int(max_seq_len) + if max_seq_len != self.max_seq_len: + self.max_seq_len = max_seq_len + self._prefetch_queue.clear() + num_tokens_local = global_tokens // (self.world_size * grad_accum_steps) + while len(self._prefetch_queue) < 2: + self._prefetch_queue.append( + self._batch_pool.submit(self._prepare_batch, num_tokens_local, self.max_seq_len)) + inputs, targets, cu_seqlens, max_seqlen = self._prefetch_queue.pop(0).result() + self._prefetch_queue.append( + self._batch_pool.submit(self._prepare_batch, num_tokens_local, self.max_seq_len)) + return ( + inputs[None].to(self.device, non_blocking=True), + targets[None].to(self.device, non_blocking=True), + cu_seqlens.to(self.device, non_blocking=True), + max_seqlen, + ) + + +class ShuffledSequenceLoader: + def __init__(self, h, device): + self.world_size = h.world_size + self.seq_len = h.train_seq_len + self.device = device + all_files = [Path(p) for p in sorted(glob.glob(h.train_files))] + if not all_files: + raise FileNotFoundError(f"No files found for pattern: {h.train_files}") + self.files = all_files[h.rank :: h.world_size] + self.rng = np.random.Generator(np.random.PCG64(h.rank)) + self.num_tokens = [_read_num_tokens(f) for f in self.files] + self.start_inds = [[] for _ in self.files] + for si in range(len(self.files)): + self._reset_shard(si) + + def _reset_shard(self, si): + max_phase = min( + self.seq_len - 1, max(0, self.num_tokens[si] - self.seq_len - 1) + ) + phase = int(self.rng.integers(max_phase + 1)) if max_phase > 0 else 0 + num_sequences = (self.num_tokens[si] - 1 - phase) // self.seq_len + sequence_order = self.rng.permutation(num_sequences) + self.start_inds[si] = (phase + sequence_order * self.seq_len).tolist() + + def next_batch(self, global_tokens, grad_accum_steps): + device_tokens = global_tokens // (self.world_size * grad_accum_steps) + device_batch_size = device_tokens // self.seq_len + remaining = np.array([len(s) for s in self.start_inds], dtype=np.float64) + x = torch.empty((device_batch_size, self.seq_len), dtype=torch.int64) + y = torch.empty((device_batch_size, self.seq_len), dtype=torch.int64) + for bi in range(device_batch_size): + total = remaining.sum() + if total <= 0: + for si in range(len(self.files)): + self._reset_shard(si) + remaining = np.array( + [len(s) for s in self.start_inds], dtype=np.float64 + ) + total = remaining.sum() + 
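# Pick the next shard with probability proportional to the sequences it still + # has queued (e.g. remaining=[3., 1.] -> probs=[0.75, 0.25]); shards were + # reshuffled just above once everything was exhausted. +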
probs = remaining / total + si = int(self.rng.choice(len(self.files), p=probs)) + start_ind = self.start_inds[si].pop() + remaining[si] -= 1 + mm = _get_shard_memmap(self.files[si]) + window = torch.as_tensor( + np.array(mm[start_ind : start_ind + self.seq_len + 1], dtype=np.int64) + ) + x[bi] = window[:-1] + y[bi] = window[1:] + return x.to(self.device, non_blocking=True), y.to( + self.device, non_blocking=True + ) + + +class RMSNorm(nn.Module): + def __init__(self, eps=None): + super().__init__() + self.eps = eps + + def forward(self, x): + return F.rms_norm(x, (x.size(-1),), eps=self.eps) + + +class CastedLinear(nn.Linear): + def forward(self, x): + w = self.weight.to(x.dtype) + bias = self.bias.to(x.dtype) if self.bias is not None else None + return F.linear(x, w, bias) + + +@triton.jit +def linear_leaky_relu_square_kernel( + a_desc, + b_desc, + c_desc, + aux_desc, + M, + N, + K, + BLOCK_SIZE_M: tl.constexpr, + BLOCK_SIZE_N: tl.constexpr, + BLOCK_SIZE_K: tl.constexpr, + NUM_SMS: tl.constexpr, + FORWARD: tl.constexpr, + # ml-intern PR2014-evolution: parameterize the LeakyReLU-square slope. + # Default constexprs preserve PR #2014 behavior bit-identical (slope=0.5). + # Backward derivative: d/dx [(slope*x)^2] = 2*slope^2*x for x<0. + FWD_SLOPE: tl.constexpr = 0.5, + BWD_SLOPE: tl.constexpr = 0.5, +): + dtype = tl.bfloat16 + start_pid = tl.program_id(axis=0) + num_pid_m = tl.cdiv(M, BLOCK_SIZE_M) + num_pid_n = tl.cdiv(N, BLOCK_SIZE_N) + k_tiles = tl.cdiv(K, BLOCK_SIZE_K) + num_tiles = num_pid_m * num_pid_n + tile_id_c = start_pid - NUM_SMS + for tile_id in tl.range(start_pid, num_tiles, NUM_SMS, flatten=True): + pid_m = tile_id // num_pid_n + pid_n = tile_id % num_pid_n + offs_am = pid_m * BLOCK_SIZE_M + offs_bn = pid_n * BLOCK_SIZE_N + accumulator = tl.zeros((BLOCK_SIZE_M, BLOCK_SIZE_N), dtype=tl.float32) + for ki in range(k_tiles): + offs_k = ki * BLOCK_SIZE_K + a = a_desc.load([offs_am, offs_k]) + b = b_desc.load([offs_bn, offs_k]) + accumulator = tl.dot(a, b.T, accumulator) + tile_id_c += NUM_SMS + offs_am_c = offs_am + offs_bn_c = offs_bn + acc = tl.reshape(accumulator, (BLOCK_SIZE_M, 2, BLOCK_SIZE_N // 2)) + acc = tl.permute(acc, (0, 2, 1)) + acc0, acc1 = tl.split(acc) + c0 = acc0.to(dtype) + c1 = acc1.to(dtype) + if not FORWARD: + pre0 = aux_desc.load([offs_am_c, offs_bn_c]) + pre1 = aux_desc.load([offs_am_c, offs_bn_c + BLOCK_SIZE_N // 2]) + c0 = c0 * tl.where(pre0 > 0, 2.0 * pre0, BWD_SLOPE * pre0) + c1 = c1 * tl.where(pre1 > 0, 2.0 * pre1, BWD_SLOPE * pre1) + c_desc.store([offs_am_c, offs_bn_c], c0) + c_desc.store([offs_am_c, offs_bn_c + BLOCK_SIZE_N // 2], c1) + if FORWARD: + aux0 = tl.where(c0 > 0, c0, FWD_SLOPE * c0) + aux1 = tl.where(c1 > 0, c1, FWD_SLOPE * c1) + aux_desc.store([offs_am_c, offs_bn_c], aux0 * aux0) + aux_desc.store([offs_am_c, offs_bn_c + BLOCK_SIZE_N // 2], aux1 * aux1) + + +# ml-intern PR2014-evolution: process-global slope (set in main once Hyperparameters +# is constructed). Drives both the Triton kernel and the eager fallback so they +# stay consistent. Default 0.5 keeps PR #2014 byte-identical when unset. 
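+# Slope bookkeeping: the forward computes LeakyReLU_s(u)^2, so the backward local
+# factor is 2*u for u > 0 and d/du[(s*u)^2] = 2*s^2*u for u < 0. Hence
+# BWD_SLOPE = 2 * FWD_SLOPE**2: 0.5 for the legacy 0.5 slope, 0.18 for slope 0.3.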
+_LEAKY_RELU_SQ_SLOPE = 0.5
+
+
+def set_leaky_relu_sq_slope(slope: float):
+    """ml-intern: must be called once before the first MLP forward pass."""
+    global _LEAKY_RELU_SQ_SLOPE
+    if not (0.0 < slope <= 1.0):
+        raise ValueError(f"LEAKY_RELU_SQ_SLOPE must be in (0, 1], got {slope}")
+    _LEAKY_RELU_SQ_SLOPE = float(slope)
+
+
+def linear_leaky_relu_square(a, b, aux=None):
+    M, K = a.shape
+    N, K2 = b.shape
+    assert K == K2
+    c = torch.empty((M, N), device=a.device, dtype=a.dtype)
+    forward = aux is None
+    if aux is None:
+        aux = torch.empty((M, N), device=a.device, dtype=a.dtype)
+    num_sms = torch.cuda.get_device_properties(a.device).multi_processor_count
+    BLOCK_SIZE_M, BLOCK_SIZE_N, BLOCK_SIZE_K = 256, 128, 64
+    num_stages = 4 if forward else 3
+    a_desc = TensorDescriptor.from_tensor(a, [BLOCK_SIZE_M, BLOCK_SIZE_K])
+    b_desc = TensorDescriptor.from_tensor(b, [BLOCK_SIZE_N, BLOCK_SIZE_K])
+    c_desc = TensorDescriptor.from_tensor(c, [BLOCK_SIZE_M, BLOCK_SIZE_N // 2])
+    aux_desc = TensorDescriptor.from_tensor(aux, [BLOCK_SIZE_M, BLOCK_SIZE_N // 2])
+    grid = lambda _meta: (
+        min(num_sms, triton.cdiv(M, BLOCK_SIZE_M) * triton.cdiv(N, BLOCK_SIZE_N)),
+    )
+    linear_leaky_relu_square_kernel[grid](
+        a_desc,
+        b_desc,
+        c_desc,
+        aux_desc,
+        M,
+        N,
+        K,
+        BLOCK_SIZE_M=BLOCK_SIZE_M,
+        BLOCK_SIZE_N=BLOCK_SIZE_N,
+        BLOCK_SIZE_K=BLOCK_SIZE_K,
+        NUM_SMS=num_sms,
+        FORWARD=forward,
+        # ml-intern PR2014-evolution: launch with the process-global slope so the
+        # kernel, the eager fallback, and set_leaky_relu_sq_slope stay in lockstep
+        # (0.5 -> 0.5/0.5 preserves PR #2014; 0.3 -> 0.3/0.18 for this run).
+        FWD_SLOPE=_LEAKY_RELU_SQ_SLOPE,
+        BWD_SLOPE=2.0 * _LEAKY_RELU_SQ_SLOPE * _LEAKY_RELU_SQ_SLOPE,
+        num_stages=num_stages,
+        num_warps=8,
+    )
+    if forward:
+        return c, aux
+    return c
+
+
+class FusedLinearLeakyReLUSquareFunction(torch.autograd.Function):
+    @staticmethod
+    def forward(ctx, x, w1, w2):
+        x_flat = x.reshape(-1, x.shape[-1])
+        pre, post = linear_leaky_relu_square(x_flat, w1)
+        out = F.linear(post, w2)
+        ctx.save_for_backward(x, w1, w2, pre, post)
+        return out.view(*x.shape[:-1], out.shape[-1])
+
+    @staticmethod
+    def backward(ctx, grad_output):
+        x, w1, w2, pre, post = ctx.saved_tensors
+        x_flat = x.reshape(-1, x.shape[-1])
+        grad_output_flat = grad_output.reshape(-1, grad_output.shape[-1])
+        dw2 = grad_output_flat.T @ post
+        dpre = linear_leaky_relu_square(grad_output_flat, w2.T.contiguous(), aux=pre)
+        dw1 = dpre.T @ x_flat
+        dx = dpre @ w1
+        return dx.view_as(x), dw1, dw2
+
+
+FusedLeakyReLUSquareMLP = FusedLinearLeakyReLUSquareFunction.apply
+
+
+class Rotary(nn.Module):
+    def __init__(self, dim, base=1e4, train_seq_len=1024, rope_dims=0, yarn=True):
+        super().__init__()
+        self.dim = dim
+        self.base = base
+        self.train_seq_len = train_seq_len
+        self.yarn = yarn
+        self.rope_dims = rope_dims if rope_dims > 0 else dim
+        inv_freq = 1.0 / base ** (
+            torch.arange(0, self.rope_dims, 2, dtype=torch.float32) / self.rope_dims
+        )
+        self.register_buffer("inv_freq", inv_freq, persistent=False)
+        self._seq_len_cached = 0
+        self._cos_cached = None
+        self._sin_cached = None
+
+    def forward(self, seq_len, device, dtype):
+        if (
+            self._cos_cached is None
+            or self._sin_cached is None
+            or self._seq_len_cached < seq_len
+            or self._cos_cached.device != device
+        ):
+            rd = self.rope_dims
+            if self.yarn and seq_len > self.train_seq_len:
+                scale = seq_len / self.train_seq_len
+                new_base = self.base * scale ** (rd / (rd - 2))
+                inv_freq = 1.0 / new_base ** (
+                    torch.arange(0, rd, 2, dtype=torch.float32, device=device) / rd
+                )
+            else:
+                inv_freq = self.inv_freq.float().to(device)
+            t = torch.arange(seq_len, device=device, dtype=torch.float32)
+            freqs = torch.outer(t, inv_freq)
+            self._cos_cached = freqs.cos()[None, :, None, :]
+            self._sin_cached = freqs.sin()[None, :, None, :]
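+            # The cache covers the longest seq_len materialized so far; shorter
+            # requests below just slice the cached cos/sin tables.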
self._seq_len_cached = seq_len + return self._cos_cached[:, :seq_len].to(dtype=dtype), self._sin_cached[:, :seq_len].to(dtype=dtype) + + +def apply_rotary_emb(x, cos, sin, rope_dims=0): + if rope_dims > 0 and rope_dims < x.size(-1): + x_rope, x_pass = x[..., :rope_dims], x[..., rope_dims:] + half = rope_dims // 2 + x1, x2 = x_rope[..., :half], x_rope[..., half:] + x_rope = torch.cat((x1 * cos + x2 * sin, x1 * -sin + x2 * cos), dim=-1) + return torch.cat((x_rope, x_pass), dim=-1) + half = x.size(-1) // 2 + x1, x2 = x[..., :half], x[..., half:] + return torch.cat((x1 * cos + x2 * sin, x1 * -sin + x2 * cos), dim=-1) + + +class CausalSelfAttention(nn.Module): + def __init__( + self, dim, num_heads, num_kv_heads, rope_base, qk_gain_init, train_seq_len, yarn=True, + attn_out_gate=False, attn_out_gate_src="proj", gate_window=12, + gated_attn=False, gated_attn_init_std=0.01, + sparse_attn_gate=False, sparse_attn_gate_init_std=0.0, sparse_attn_gate_scale=1.0, + ): + super().__init__() + if dim % num_heads != 0: + raise ValueError("model_dim must be divisible by num_heads") + if num_heads % num_kv_heads != 0: + raise ValueError("num_heads must be divisible by num_kv_heads") + if int(attn_out_gate) + int(gated_attn) + int(sparse_attn_gate) > 1: + raise ValueError( + "attn_out_gate, gated_attn, and sparse_attn_gate are mutually exclusive" + ) + self.num_heads = num_heads + self.num_kv_heads = num_kv_heads + self.head_dim = dim // num_heads + if self.head_dim % 2 != 0: + raise ValueError("head_dim must be even for RoPE") + self.q_gain = nn.Parameter( + torch.full((num_heads,), qk_gain_init, dtype=torch.float32) + ) + self.rope_dims = 0 + self.rotary = Rotary(self.head_dim, base=rope_base, train_seq_len=train_seq_len, yarn=yarn) + self.use_xsa = False + # AttnOutGate (PR #1667 MarioPaerle): per-head multiplicative gate on attention + # output. CastedLinear so restore_fp32_params casts back to fp32 for GPTQ. + # _zero_init -> 2*sigmoid(0)=1 -> transparent at init. + self.attn_out_gate = attn_out_gate + self.attn_out_gate_src = attn_out_gate_src + self.gate_window = gate_window + if attn_out_gate: + self.attn_gate_proj = CastedLinear(gate_window, num_heads, bias=False) + self.attn_gate_proj._zero_init = True + # Gated Attention (arXiv:2505.06708, Qwen, NeurIPS 2025). Per-head sigmoid + # gate on SDPA output, BEFORE out_proj. Gate projection W_g: (num_heads, dim). + # Name "attn_gate_w" contains "attn_gate" substring so it matches + # CONTROL_TENSOR_NAME_PATTERNS and routes to the scalar AdamW group. + # fp32 Parameter -> restore_fp32_params path covers it via the ndim<2 OR + # name-pattern check (name matches "attn_gate"). Cast to x.dtype on use. + self.gated_attn = gated_attn + if gated_attn: + W = torch.empty(num_heads, dim, dtype=torch.float32) + nn.init.normal_(W, mean=0.0, std=gated_attn_init_std) + self.attn_gate_w = nn.Parameter(W) + # Sparse attention head-output gate (modded-nanogpt style). Keeps dense SDPA + # and only narrows the gate input to the first gate_window residual dims. + # W_g: (num_heads, gate_window). y_{t,h} <- sigmoid(scale * W_g_h @ x_t[:gate_window]) * y_{t,h}. + # Shares attn_gate_w name with dense GatedAttn so the quant routing + # (CONTROL_TENSOR_NAME_PATTERNS / attn_gate_w int8 passthrough) is unchanged. 
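+        # At this run's shapes (num_heads=8, gate_window=12; see the routing note
+        # in gptq_mixed_quantize) that is only 96 gate scalars per layer.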
+ self.sparse_attn_gate = sparse_attn_gate + self.sparse_attn_gate_scale = sparse_attn_gate_scale + if sparse_attn_gate: + W = torch.empty(num_heads, gate_window, dtype=torch.float32) + if sparse_attn_gate_init_std > 0: + nn.init.normal_(W, mean=0.0, std=sparse_attn_gate_init_std) + else: + nn.init.zeros_(W) + self.attn_gate_w = nn.Parameter(W) + + def _xsa_efficient(self, y, v): + B, T, H, D = y.shape + Hkv = v.size(-2) + group = H // Hkv + y_g = y.reshape(B, T, Hkv, group, D) + vn = F.normalize(v, dim=-1).unsqueeze(-2) + proj = (y_g * vn).sum(dim=-1, keepdim=True) * vn + return (y_g - proj).reshape(B, T, H, D) + + def forward(self, x, q_w, k_w, v_w, out_w, cu_seqlens=None, max_seqlen=0): + bsz, seqlen, dim = x.shape + # q_raw kept around as a tap point for attn_out_gate_src='q' (post-projection, + # pre-reshape, pre-RoPE). + q_raw = F.linear(x, q_w.to(x.dtype)) + q = q_raw.reshape(bsz, seqlen, self.num_heads, self.head_dim) + k = F.linear(x, k_w.to(x.dtype)).reshape(bsz, seqlen, self.num_kv_heads, self.head_dim) + v = F.linear(x, v_w.to(x.dtype)).reshape(bsz, seqlen, self.num_kv_heads, self.head_dim) + q = F.rms_norm(q, (q.size(-1),)) + k = F.rms_norm(k, (k.size(-1),)) + cos, sin = self.rotary(seqlen, x.device, q.dtype) + q = apply_rotary_emb(q, cos, sin, self.rope_dims) + k = apply_rotary_emb(k, cos, sin, self.rope_dims) + q = q * self.q_gain.to(dtype=q.dtype)[None, None, :, None] + if cu_seqlens is not None: + y = flash_attn_varlen_func( + q[0], + k[0], + v[0], + cu_seqlens_q=cu_seqlens, + cu_seqlens_k=cu_seqlens, + max_seqlen_q=max_seqlen, + max_seqlen_k=max_seqlen, + causal=True, + window_size=(-1, -1), + )[None] + else: + y = flash_attn_3_func(q, k, v, causal=True) + if self.use_xsa: + y = self._xsa_efficient(y, v) + # AttnOutGate inlined (PR #1667). Inline + .contiguous() barrier so torch.compile + # fullgraph=True is happy (this avoids the @torch.compiler.disable trap that + # crashed gates v3). Per-head gate on (B,T,H,D) tensor: g shape [B,T,H], broadcast + # over D via [..., None]. zero-init weight -> 2*sigmoid(0)=1 -> transparent. + if self.attn_out_gate: + gate_src = q_raw if self.attn_out_gate_src == "q" else x + gate_in = gate_src[..., : self.gate_window].contiguous() + g = 2.0 * torch.sigmoid(self.attn_gate_proj(gate_in)) + y = y * g[..., None] + # Gated Attention (arXiv:2505.06708 G1). Inline + .contiguous() barrier so + # torch.compile fullgraph=True is happy. Per-head gate on (B,T,H,D): g shape + # [B,T,H], broadcast over D via [..., None]. Paper: g = sigmoid(x @ W_g.T) + # where W_g: (H, dim). .to(x.dtype) on fp32 param before broadcast with bf16. + if self.gated_attn: + x_c = x.contiguous() + g = torch.sigmoid(F.linear(x_c, self.attn_gate_w.to(x.dtype))) + y = y * g[..., None] + # Sparse head-output gate: narrower (gate_window) input, same shape g as GatedAttn. 
+        if self.sparse_attn_gate:
+            gate_in = x[..., : self.gate_window].contiguous()
+            g = torch.sigmoid(
+                self.sparse_attn_gate_scale
+                * F.linear(gate_in, self.attn_gate_w.to(x.dtype))
+            )
+            y = y * g[..., None]
+        y = y.reshape(bsz, seqlen, dim)
+        self._last_proj_input = y.detach() if getattr(self, "_calib", False) else None
+        return F.linear(y, out_w.to(x.dtype))
+
+
+class MLP(nn.Module):
+    def __init__(self, dim, mlp_mult):
+        super().__init__()
+        self.use_fused = True
+
+    def forward(self, x, up_w, down_w):
+        if self.training and self.use_fused:
+            return FusedLeakyReLUSquareMLP(x, up_w.to(x.dtype), down_w.to(x.dtype))
+        # ml-intern PR2014-evolution: read the process-global slope so the eager
+        # path matches the Triton kernel launch. Default 0.5 keeps PR #2014
+        # byte-identical; set_leaky_relu_sq_slope(0.3) switches both paths at once.
+        hidden = F.leaky_relu(
+            F.linear(x, up_w.to(x.dtype)), negative_slope=_LEAKY_RELU_SQ_SLOPE
+        ).square()
+        self._last_down_input = hidden.detach() if getattr(self, "_calib", False) else None
+        return F.linear(hidden, down_w.to(x.dtype))
+
+
+class Block(nn.Module):
+    def __init__(
+        self,
+        dim,
+        num_heads,
+        num_kv_heads,
+        mlp_mult,
+        rope_base,
+        qk_gain_init,
+        train_seq_len,
+        layer_idx=0,
+        ln_scale=False,
+        yarn=True,
+        attn_out_gate=False,
+        attn_out_gate_src="proj",
+        gate_window=12,
+        gated_attn=False,
+        gated_attn_init_std=0.01,
+        sparse_attn_gate=False,
+        sparse_attn_gate_init_std=0.0,
+        sparse_attn_gate_scale=1.0,
+    ):
+        super().__init__()
+        self.attn_norm = RMSNorm()
+        self.mlp_norm = RMSNorm()
+        self.attn = CausalSelfAttention(
+            dim, num_heads, num_kv_heads, rope_base, qk_gain_init, train_seq_len, yarn=yarn,
+            attn_out_gate=attn_out_gate, attn_out_gate_src=attn_out_gate_src, gate_window=gate_window,
+            gated_attn=gated_attn, gated_attn_init_std=gated_attn_init_std,
+            sparse_attn_gate=sparse_attn_gate,
+            sparse_attn_gate_init_std=sparse_attn_gate_init_std,
+            sparse_attn_gate_scale=sparse_attn_gate_scale,
+        )
+        self.mlp = MLP(dim, mlp_mult)
+        self.attn_scale = nn.Parameter(torch.ones(dim, dtype=torch.float32))
+        self.mlp_scale = nn.Parameter(torch.ones(dim, dtype=torch.float32))
+        self.resid_mix = nn.Parameter(
+            torch.stack((torch.ones(dim), torch.zeros(dim))).float()
+        )
+        self.ln_scale_factor = 1.0 / math.sqrt(layer_idx + 1) if ln_scale else 1.0
+
+    def forward(self, x, x0, q_w, k_w, v_w, out_w, up_w, down_w, cu_seqlens=None, max_seqlen=0):
+        mix = self.resid_mix.to(dtype=x.dtype)
+        x_in = mix[0][None, None, :] * x + mix[1][None, None, :] * x0
+        attn_out = self.attn(
+            self.attn_norm(x_in) * self.ln_scale_factor,
+            q_w, k_w, v_w, out_w,
+            cu_seqlens=cu_seqlens,
+            max_seqlen=max_seqlen,
+        )
+        x_out = x_in + self.attn_scale.to(dtype=x_in.dtype)[None, None, :] * attn_out
+        x_out = x_out + self.mlp_scale.to(dtype=x_out.dtype)[
+            None, None, :
+        ] * self.mlp(self.mlp_norm(x_out) * self.ln_scale_factor, up_w, down_w)
+        return x_out
+
+class GPT(nn.Module):
+    def __init__(self, h):
+        super().__init__()
+        if h.logit_softcap <= 0.0:
+            raise ValueError(f"logit_softcap must be positive, got {h.logit_softcap}")
+        self.tie_embeddings = h.tie_embeddings
+        self.tied_embed_init_std = h.tied_embed_init_std
+        self.logit_softcap = h.logit_softcap
+        self.fused_ce_enabled = bool(h.fused_ce_enabled)
+        self.tok_emb = nn.Embedding(h.vocab_size, h.model_dim)
+        self.num_layers = h.num_layers
+        head_dim = h.model_dim // h.num_heads
+        kv_dim = h.num_kv_heads * head_dim
+        hidden_dim = int(h.mlp_mult * h.model_dim)
+        self.qo_bank = nn.Parameter(torch.empty(2 * h.num_layers, h.model_dim, h.model_dim))
+        self.kv_bank = nn.Parameter(torch.empty(2 * h.num_layers,
kv_dim, h.model_dim)) + self.mlp_up_bank = nn.Parameter(torch.empty(h.num_layers, hidden_dim, h.model_dim)) + self.mlp_down_bank = nn.Parameter(torch.empty(h.num_layers, h.model_dim, hidden_dim)) + self.num_encoder_layers = h.num_layers // 2 + self.num_decoder_layers = h.num_layers - self.num_encoder_layers + self.blocks = nn.ModuleList( + [ + Block( + h.model_dim, + h.num_heads, + h.num_kv_heads, + h.mlp_mult, + h.rope_base, + h.qk_gain_init, + h.train_seq_len, + layer_idx=i, + ln_scale=h.ln_scale, + yarn=h.rope_yarn, + attn_out_gate=h.attn_out_gate_enabled, + attn_out_gate_src=h.attn_out_gate_src, + gate_window=h.gate_window, + gated_attn=h.gated_attn_enabled, + gated_attn_init_std=h.gated_attn_init_std, + sparse_attn_gate=h.sparse_attn_gate_enabled, + sparse_attn_gate_init_std=h.sparse_attn_gate_init_std, + sparse_attn_gate_scale=h.sparse_attn_gate_scale, + ) + for i in range(h.num_layers) + ] + ) + if h.rope_dims > 0: + head_dim = h.model_dim // h.num_heads + for block in self.blocks: + block.attn.rope_dims = h.rope_dims + block.attn.rotary = Rotary( + head_dim, + base=h.rope_base, + train_seq_len=h.train_seq_len, + rope_dims=h.rope_dims, + yarn=h.rope_yarn, + ) + self.final_norm = RMSNorm() + self.lm_head = ( + None + if h.tie_embeddings + else CastedLinear(h.model_dim, h.vocab_size, bias=False) + ) + if self.lm_head is not None: + self.lm_head._zero_init = True + if h.xsa_last_n > 0: + for i in range(max(0, h.num_layers - h.xsa_last_n), h.num_layers): + self.blocks[i].attn.use_xsa = True + self.looping_active = False + if h.num_loops > 0: + loop_seg = list(range(h.loop_start, h.loop_end + 1)) + all_indices = list(range(h.loop_start)) + for _ in range(h.num_loops + 1): + all_indices.extend(loop_seg) + all_indices.extend(range(h.loop_end + 1, h.num_layers)) + num_enc = len(all_indices) // 2 + self.encoder_indices = all_indices[:num_enc] + self.decoder_indices = all_indices[num_enc:] + else: + self.encoder_indices = list(range(self.num_encoder_layers)) + self.decoder_indices = list(range(self.num_encoder_layers, h.num_layers)) + self.num_skip_weights = min( + len(self.encoder_indices), len(self.decoder_indices) + ) + self.skip_weights = nn.Parameter( + torch.ones(self.num_skip_weights, h.model_dim, dtype=torch.float32) + ) + self.skip_gates = ( + nn.Parameter( + torch.zeros(self.num_skip_weights, h.model_dim, dtype=torch.float32) + ) + if h.skip_gates_enabled + else None + ) + self.parallel_start_layer = h.parallel_start_layer + self.parallel_final_lane = h.parallel_final_lane.lower() + self.parallel_post_lambdas = nn.Parameter( + torch.ones(h.num_layers, 2, 2, dtype=torch.float32) + ) + self.parallel_resid_lambdas = nn.Parameter( + torch.full((h.num_layers, 2), 1.1, dtype=torch.float32) + ) + # SmearGate (PR #1667 / modded-nanogpt @classiclarryd): + # x_t <- x_t + lam * sigmoid(W * x_t[:gate_window]) * x_{t-1}. + # Per-token forward-1 smear of the embedding lane. W zero-init + lam=0 -> + # transparent at init. Uses CastedLinear so restore_fp32_params handles dtype. + self.smear_gate_enabled = h.smear_gate_enabled + if self.smear_gate_enabled: + self.smear_window = h.gate_window + self.smear_gate = CastedLinear(self.smear_window, 1, bias=False) + self.smear_gate._zero_init = True + self.smear_lambda = nn.Parameter(torch.zeros(1, dtype=torch.float32)) + # V19: Asymmetric Logit Rescale (PR #1923 jorge-asenjo). + # Two learnable softcap scales applied on the EVAL path (forward_logits + + # forward_ttt). Init to logit_softcap so the layer is identity at step 0. 
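+        # Functional form (see _apply_asym_softcap below):
+        #   cap(z) = sp * tanh(z / sp) for z > 0, else sn * tanh(z / sn);
+        # sp = sn = logit_softcap reproduces the symmetric cap exactly at init.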
+ # Train path keeps the single fused softcap to preserve PR #1855 numerics. + self.asym_logit_enabled = bool(int(os.environ.get("ASYM_LOGIT_RESCALE", "0"))) + if self.asym_logit_enabled: + self.softcap_pos = nn.Parameter(torch.tensor(float(h.logit_softcap), dtype=torch.float32)) + self.softcap_neg = nn.Parameter(torch.tensor(float(h.logit_softcap), dtype=torch.float32)) + self._init_weights() + + def _init_weights(self): + if self.tie_embeddings: + nn.init.normal_(self.tok_emb.weight, mean=0.0, std=self.tied_embed_init_std) + n = self.num_layers + proj_scale = 1.0 / math.sqrt(2 * n) + for i in range(n): + nn.init.orthogonal_(self.qo_bank.data[i], gain=1.0) + nn.init.zeros_(self.qo_bank.data[n + i]) + self.qo_bank.data[n + i].mul_(proj_scale) + nn.init.orthogonal_(self.kv_bank.data[i], gain=1.0) + nn.init.orthogonal_(self.kv_bank.data[n + i], gain=1.0) + for i in range(n): + nn.init.orthogonal_(self.mlp_up_bank.data[i], gain=1.0) + nn.init.zeros_(self.mlp_down_bank.data[i]) + self.mlp_down_bank.data[i].mul_(proj_scale) + for name, module in self.named_modules(): + if isinstance(module, nn.Linear): + if getattr(module, "_zero_init", False): + nn.init.zeros_(module.weight) + elif ( + module.weight.ndim == 2 + and module.weight.shape[0] >= 64 + and module.weight.shape[1] >= 64 + ): + nn.init.orthogonal_(module.weight, gain=1.0) + + def _bank_weights(self, i): + n = self.num_layers + return ( + self.qo_bank[i], + self.kv_bank[i], + self.kv_bank[n + i], + self.qo_bank[n + i], + self.mlp_up_bank[i], + self.mlp_down_bank[i], + ) + + def _parallel_block( + self, block_idx, lane0, lane1, x0, + q_w, k_w, v_w, out_w, up_w, down_w, + cu_seqlens=None, max_seqlen=0, + ): + block = self.blocks[block_idx] + mix = block.resid_mix.to(dtype=lane0.dtype) + attn_read = mix[0][None, None, :] * lane0 + mix[1][None, None, :] * x0 + attn_out = block.attn( + block.attn_norm(attn_read) * block.ln_scale_factor, + q_w, k_w, v_w, out_w, + cu_seqlens=cu_seqlens, max_seqlen=max_seqlen, + ) + attn_out = block.attn_scale.to(dtype=attn_out.dtype)[None, None, :] * attn_out + mlp_read = lane1 + mlp_out = block.mlp_scale.to(dtype=lane1.dtype)[None, None, :] * block.mlp( + block.mlp_norm(mlp_read) * block.ln_scale_factor, up_w, down_w + ) + attn_resid = self.parallel_resid_lambdas[block_idx, 0].to(dtype=lane0.dtype) + attn_post = self.parallel_post_lambdas[block_idx, 0].to(dtype=lane0.dtype) + mlp_resid = self.parallel_resid_lambdas[block_idx, 1].to(dtype=lane0.dtype) + mlp_post = self.parallel_post_lambdas[block_idx, 1].to(dtype=lane0.dtype) + lane0 = attn_resid * lane0 + attn_post[0] * attn_out + mlp_post[0] * mlp_out + lane1 = mlp_resid * lane1 + attn_post[1] * attn_out + mlp_post[1] * mlp_out + return lane0, lane1 + + def _final_parallel_hidden(self, lane0, lane1): + if self.parallel_final_lane == "mlp": + return lane1 + if self.parallel_final_lane == "attn": + return lane0 + return 0.5 * (lane0 + lane1) + + def _forward_hidden(self, input_ids, cu_seqlens=None, max_seqlen=0): + """Run the encoder/decoder stack to the final RMSNorm; returns pre-projection hidden. + Shared by eval (softcap+projection via forward_logits) and train (fused CE path).""" + x = self.tok_emb(input_ids) + # SmearGate (PR #1667). lam=0 + W=0 -> identity at init. + # Cross-doc leak fix: zero the prev-token smear at any position whose current token + # is BOS, so the BOS embedding starting doc N+1 in a packed stream is not + # contaminated by doc N's last token (audited issue on PR#1797 base). 
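+        # Concretely, not_bos below is 0 exactly where input_ids == BOS_ID, which
+        # drops the x_{t-1} smear term at every packed-document start.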
+ if self.smear_gate_enabled: + sl = self.smear_lambda.to(dtype=x.dtype) + gate_in = x[:, 1:, : self.smear_window].contiguous() + g = sl * torch.sigmoid(self.smear_gate(gate_in)) + not_bos = (input_ids[:, 1:] != BOS_ID).to(x.dtype).unsqueeze(-1) + x = torch.cat([x[:, :1], x[:, 1:] + g * x[:, :-1] * not_bos], dim=1) + x = F.rms_norm(x, (x.size(-1),)) + x0 = x + skips = [] + enc_iter = ( + self.encoder_indices + if self.looping_active + else range(self.num_encoder_layers) + ) + dec_iter = ( + self.decoder_indices + if self.looping_active + else range( + self.num_encoder_layers, + self.num_encoder_layers + self.num_decoder_layers, + ) + ) + for i in enc_iter: + q_w, k_w, v_w, out_w, up_w, down_w = self._bank_weights(i) + x = self.blocks[i](x, x0, q_w, k_w, v_w, out_w, up_w, down_w, cu_seqlens=cu_seqlens, max_seqlen=max_seqlen) + skips.append(x) + psl = self.parallel_start_layer + lane0 = None + lane1 = None + for skip_idx, i in enumerate(dec_iter): + q_w, k_w, v_w, out_w, up_w, down_w = self._bank_weights(i) + if i >= psl and psl > 0: + if lane0 is None: + lane0 = x + lane1 = x + if skip_idx < self.num_skip_weights and skips: + skip = skips.pop() + w = self.skip_weights[skip_idx].to(dtype=lane0.dtype)[None, None, :] + if self.skip_gates is not None: + g = torch.sigmoid(self.skip_gates[skip_idx].to(dtype=lane0.dtype))[None, None, :] + lane0 = torch.lerp(w * skip, lane0, g) + else: + lane0 = lane0 + w * skip + lane0, lane1 = self._parallel_block( + i, lane0, lane1, x0, q_w, k_w, v_w, out_w, up_w, down_w, + cu_seqlens=cu_seqlens, max_seqlen=max_seqlen, + ) + else: + if skip_idx < self.num_skip_weights and skips: + scaled_skip = ( + self.skip_weights[skip_idx].to(dtype=x.dtype)[None, None, :] + * skips.pop() + ) + if self.skip_gates is not None: + g = torch.sigmoid(self.skip_gates[skip_idx].to(dtype=x.dtype))[None, None, :] + x = torch.lerp(scaled_skip, x, g) + else: + x = x + scaled_skip + x = self.blocks[i](x, x0, q_w, k_w, v_w, out_w, up_w, down_w, cu_seqlens=cu_seqlens, max_seqlen=max_seqlen) + if lane0 is not None: + x = self._final_parallel_hidden(lane0, lane1) + x = self.final_norm(x) + return x + + def _project_logits(self, hidden): + if self.tie_embeddings: + return F.linear(hidden, self.tok_emb.weight) + return self.lm_head(hidden) + + def _apply_asym_softcap(self, logits): + # V19: Asymmetric softcap (PR #1923). Splits the logit_softcap scalar into + # learnable positive/negative branches. Score-first preserved: still a + # bounded, normalized post-projection nonlinearity feeding a standard + # softmax over the full vocab. + sp = self.softcap_pos.to(logits.dtype) + sn = self.softcap_neg.to(logits.dtype) + return torch.where(logits > 0, sp * torch.tanh(logits / sp), sn * torch.tanh(logits / sn)) + + def forward_logits(self, input_ids, cu_seqlens=None, max_seqlen=0): + hidden = self._forward_hidden(input_ids, cu_seqlens=cu_seqlens, max_seqlen=max_seqlen) + logits_proj = self._project_logits(hidden) + if self.asym_logit_enabled: + return self._apply_asym_softcap(logits_proj) + return self.logit_softcap * torch.tanh(logits_proj / self.logit_softcap) + + def forward(self, input_ids, target_ids, cu_seqlens=None, max_seqlen=0): + hidden = self._forward_hidden(input_ids, cu_seqlens=cu_seqlens, max_seqlen=max_seqlen) + logits_proj = self._project_logits(hidden) + flat_targets = target_ids.reshape(-1) + # Fused softcapped-CE kernel (training path only). Applies softcap inside the + # Triton kernel; takes pre-softcap logits_proj. 
Non-fused path matches stock + # PR-1736 numerics exactly (softcap in fp32, then F.cross_entropy on fp32). + if self.fused_ce_enabled: + return softcapped_cross_entropy( + logits_proj.reshape(-1, logits_proj.size(-1)), + flat_targets, + self.logit_softcap, + reduction="mean", + ) + logits = self.logit_softcap * torch.tanh(logits_proj / self.logit_softcap) + return F.cross_entropy( + logits.reshape(-1, logits.size(-1)).float(), + flat_targets, + reduction="mean", + ) + + def forward_ttt(self, input_ids, target_ids, lora, hint_ids=None): + x = self.tok_emb(input_ids) + # SmearGate on the TTT path — same inline compute as forward_logits. + # Cross-doc leak fix: see _forward_hidden comment. + if self.smear_gate_enabled: + sl = self.smear_lambda.to(dtype=x.dtype) + gate_in = x[:, 1:, : self.smear_window].contiguous() + g = sl * torch.sigmoid(self.smear_gate(gate_in)) + not_bos = (input_ids[:, 1:] != BOS_ID).to(x.dtype).unsqueeze(-1) + x = torch.cat([x[:, :1], x[:, 1:] + g * x[:, :-1] * not_bos], dim=1) + x = F.rms_norm(x, (x.size(-1),)) + x0 = x + skips = [] + enc_iter = ( + self.encoder_indices + if self.looping_active + else list(range(self.num_encoder_layers)) + ) + dec_iter = ( + self.decoder_indices + if self.looping_active + else list( + range( + self.num_encoder_layers, + self.num_encoder_layers + self.num_decoder_layers, + ) + ) + ) + slot = 0 + for i in enc_iter: + q_w, k_w, v_w, out_w, up_w, down_w = self._bank_weights(i) + x = self._block_with_lora(self.blocks[i], x, x0, lora, slot, q_w, k_w, v_w, out_w, up_w, down_w) + slot += 1 + skips.append(x) + psl = self.parallel_start_layer + lane0 = None + lane1 = None + for skip_idx, i in enumerate(dec_iter): + q_w, k_w, v_w, out_w, up_w, down_w = self._bank_weights(i) + if i >= psl and psl > 0: + if lane0 is None: + lane0 = x + lane1 = x + if skip_idx < self.num_skip_weights and skips: + skip = skips.pop() + w = self.skip_weights[skip_idx].to(dtype=lane0.dtype)[None, None, :] + if self.skip_gates is not None: + g = torch.sigmoid(self.skip_gates[skip_idx].to(dtype=lane0.dtype))[None, None, :] + lane0 = torch.lerp(w * skip, lane0, g) + else: + lane0 = lane0 + w * skip + lane0, lane1 = self._parallel_block_with_lora( + i, lane0, lane1, x0, lora, slot, + q_w, k_w, v_w, out_w, up_w, down_w, + ) + else: + if skip_idx < self.num_skip_weights and skips: + scaled_skip = ( + self.skip_weights[skip_idx].to(dtype=x.dtype)[None, None, :] + * skips.pop() + ) + if self.skip_gates is not None: + g = torch.sigmoid(self.skip_gates[skip_idx].to(dtype=x.dtype))[None, None, :] + x = torch.lerp(scaled_skip, x, g) + else: + x = x + scaled_skip + x = self._block_with_lora(self.blocks[i], x, x0, lora, slot, q_w, k_w, v_w, out_w, up_w, down_w) + slot += 1 + if lane0 is not None: + x = self._final_parallel_hidden(lane0, lane1) + x = self.final_norm(x) + if self.tie_embeddings: + logits = F.linear(x, self.tok_emb.weight) + else: + logits = self.lm_head(x) + logits = logits + lora.lm_head_lora(x) + # V19: same asymmetric softcap on the TTT eval path. 
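+        # Applied before the NLL / hint-ID gathers below, so TTT scoring sees the
+        # same capped logits as forward_logits on the plain eval path.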
+ if self.asym_logit_enabled: + logits = self._apply_asym_softcap(logits) + else: + logits = self.logit_softcap * torch.tanh(logits / self.logit_softcap) + bsz, sl, V = logits.shape + if hint_ids is None: + return F.cross_entropy( + logits.float().reshape(-1, V), target_ids.reshape(-1), reduction="none" + ).reshape(bsz, sl) + if logits.requires_grad: + ls = F.log_softmax(logits.float(), dim=-1) + log_p_y = ls.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1) + log_q_h = ls.gather(-1, hint_ids.clamp(min=0).unsqueeze(-1)).squeeze(-1) + return -log_p_y, log_q_h + log_p_y, log_q_h = fused_log_softmax_dual_gather( + logits, target_ids, hint_ids.clamp(min=0) + ) + return -log_p_y, log_q_h + + def _block_with_lora(self, block, x, x0, lora, slot, q_w, k_w, v_w, out_w, up_w, down_w): + mix = block.resid_mix.to(dtype=x.dtype) + x_in = mix[0][None, None, :] * x + mix[1][None, None, :] * x0 + n = block.attn_norm(x_in) * block.ln_scale_factor + attn = block.attn + bsz, seqlen, dim = n.shape + # Keep raw Q for AttnOutGate src='q' (matches forward path semantics). + q_raw = F.linear(n, q_w.to(n.dtype)) + if lora.q_loras is not None: + q_raw = q_raw + lora.q_loras[slot](n) + q = q_raw.reshape(bsz, seqlen, attn.num_heads, attn.head_dim) + k = F.linear(n, k_w.to(n.dtype)) + if lora.k_loras is not None: + k = k + lora.k_loras[slot](n) + k = k.reshape(bsz, seqlen, attn.num_kv_heads, attn.head_dim) + v = F.linear(n, v_w.to(n.dtype)) + if lora.v_loras is not None: + v = v + lora.v_loras[slot](n) + v = v.reshape(bsz, seqlen, attn.num_kv_heads, attn.head_dim) + q = F.rms_norm(q, (q.size(-1),)) + k = F.rms_norm(k, (k.size(-1),)) + cos, sin = attn.rotary(seqlen, n.device, q.dtype) + q = apply_rotary_emb(q, cos, sin, attn.rope_dims) + k = apply_rotary_emb(k, cos, sin, attn.rope_dims) + q = q * attn.q_gain.to(dtype=q.dtype)[None, None, :, None] + y = flash_attn_3_func(q, k, v, causal=True) + if attn.use_xsa: + y = attn._xsa_efficient(y, v) + # AttnOutGate (TTT path) — inline + .contiguous() barrier, same as the eval path. + if attn.attn_out_gate: + gate_src = q_raw if attn.attn_out_gate_src == "q" else n + gate_in = gate_src[..., : attn.gate_window].contiguous() + g = 2.0 * torch.sigmoid(attn.attn_gate_proj(gate_in)) + y = y * g[..., None] + # Gated Attention (TTT path). Gate input is n (post-norm block input), same + # as eval path. .to(n.dtype) on fp32 param before bf16 broadcast. + if attn.gated_attn: + n_c = n.contiguous() + g = torch.sigmoid(F.linear(n_c, attn.attn_gate_w.to(n.dtype))) + y = y * g[..., None] + # Sparse attention head-output gate (TTT path) — must match the eval path in + # forward() exactly, else training (which applied the gate) and TTT eval (which + # skipped it) produce mismatched representations and catastrophic BPB regression. 
+ if attn.sparse_attn_gate: + gate_in = n[..., : attn.gate_window].contiguous() + g = torch.sigmoid( + attn.sparse_attn_gate_scale + * F.linear(gate_in, attn.attn_gate_w.to(n.dtype)) + ) + y = y * g[..., None] + y = y.reshape(bsz, seqlen, dim) + attn_out = F.linear(y, out_w.to(n.dtype)) + if lora.o_loras is not None: + attn_out = attn_out + lora.o_loras[slot](n) + x_out = x_in + block.attn_scale.to(dtype=x_in.dtype)[None, None, :] * attn_out + mlp_n = block.mlp_norm(x_out) * block.ln_scale_factor + mlp_out = block.mlp(mlp_n, up_w, down_w) + if lora.mlp_loras is not None: + mlp_out = mlp_out + lora.mlp_loras[slot](mlp_n) + x_out = x_out + block.mlp_scale.to(dtype=x_out.dtype)[None, None, :] * mlp_out + return x_out + + def _parallel_block_with_lora( + self, block_idx, lane0, lane1, x0, lora, slot, + q_w, k_w, v_w, out_w, up_w, down_w, + ): + block = self.blocks[block_idx] + mix = block.resid_mix.to(dtype=lane0.dtype) + attn_read = mix[0][None, None, :] * lane0 + mix[1][None, None, :] * x0 + n = block.attn_norm(attn_read) * block.ln_scale_factor + attn = block.attn + bsz, seqlen, dim = n.shape + q_raw = F.linear(n, q_w.to(n.dtype)) + if lora.q_loras is not None: + q_raw = q_raw + lora.q_loras[slot](n) + q = q_raw.reshape(bsz, seqlen, attn.num_heads, attn.head_dim) + k = F.linear(n, k_w.to(n.dtype)) + if lora.k_loras is not None: + k = k + lora.k_loras[slot](n) + k = k.reshape(bsz, seqlen, attn.num_kv_heads, attn.head_dim) + v = F.linear(n, v_w.to(n.dtype)) + if lora.v_loras is not None: + v = v + lora.v_loras[slot](n) + v = v.reshape(bsz, seqlen, attn.num_kv_heads, attn.head_dim) + q = F.rms_norm(q, (q.size(-1),)) + k = F.rms_norm(k, (k.size(-1),)) + cos, sin = attn.rotary(seqlen, n.device, q.dtype) + q = apply_rotary_emb(q, cos, sin, attn.rope_dims) + k = apply_rotary_emb(k, cos, sin, attn.rope_dims) + q = q * attn.q_gain.to(dtype=q.dtype)[None, None, :, None] + y = flash_attn_3_func(q, k, v, causal=True) + if attn.use_xsa: + y = attn._xsa_efficient(y, v) + # AttnOutGate (TTT parallel path) — inline + .contiguous() barrier. + if attn.attn_out_gate: + gate_src = q_raw if attn.attn_out_gate_src == "q" else n + gate_in = gate_src[..., : attn.gate_window].contiguous() + g = 2.0 * torch.sigmoid(attn.attn_gate_proj(gate_in)) + y = y * g[..., None] + # Gated Attention (TTT parallel path). Gate input is n (post-norm block input). + if attn.gated_attn: + n_c = n.contiguous() + g = torch.sigmoid(F.linear(n_c, attn.attn_gate_w.to(n.dtype))) + y = y * g[..., None] + # Sparse attention head-output gate (TTT parallel path) — must match the + # eval path in forward() to keep train/eval semantics in sync. 
+ if attn.sparse_attn_gate: + gate_in = n[..., : attn.gate_window].contiguous() + g = torch.sigmoid( + attn.sparse_attn_gate_scale + * F.linear(gate_in, attn.attn_gate_w.to(n.dtype)) + ) + y = y * g[..., None] + y = y.reshape(bsz, seqlen, dim) + attn_out = F.linear(y, out_w.to(n.dtype)) + if lora.o_loras is not None: + attn_out = attn_out + lora.o_loras[slot](n) + attn_out = block.attn_scale.to(dtype=attn_out.dtype)[None, None, :] * attn_out + mlp_read = lane1 + mlp_n = block.mlp_norm(mlp_read) * block.ln_scale_factor + mlp_out = block.mlp(mlp_n, up_w, down_w) + if lora.mlp_loras is not None: + mlp_out = mlp_out + lora.mlp_loras[slot](mlp_n) + mlp_out = block.mlp_scale.to(dtype=lane1.dtype)[None, None, :] * mlp_out + attn_resid = self.parallel_resid_lambdas[block_idx, 0].to(dtype=lane0.dtype) + attn_post = self.parallel_post_lambdas[block_idx, 0].to(dtype=lane0.dtype) + mlp_resid = self.parallel_resid_lambdas[block_idx, 1].to(dtype=lane0.dtype) + mlp_post = self.parallel_post_lambdas[block_idx, 1].to(dtype=lane0.dtype) + lane0 = attn_resid * lane0 + attn_post[0] * attn_out + mlp_post[0] * mlp_out + lane1 = mlp_resid * lane1 + attn_post[1] * attn_out + mlp_post[1] * mlp_out + return lane0, lane1 + + +class BatchedLinearLoRA(nn.Module): + # PR-1767: rank-scaled output (alpha/rank), like standard LoRA. Decouples + # effective magnitude from rank so changing rank does not change LR scale. + _ALPHA = float(os.environ.get("TTT_LORA_ALPHA", "144")) + # PR-1767: optionally keep A warm across per-doc resets (only B is zeroed). + # Accumulates useful feature directions across documents within a TTT phase. + _WARM_START_A = bool(int(os.environ.get("TTT_WARM_START_A", "1"))) + + def __init__(self, bsz, in_features, out_features, rank): + super().__init__() + self._bound = 1.0 / math.sqrt(in_features) + self._scale = self._ALPHA / rank + self.A = nn.Parameter( + torch.empty(bsz, rank, in_features).uniform_(-self._bound, self._bound) + ) + self.B = nn.Parameter(torch.zeros(bsz, out_features, rank)) + + def reset(self): + with torch.no_grad(): + if not self._WARM_START_A: + self.A.uniform_(-self._bound, self._bound) + self.B.zero_() + + def forward(self, x): + return ((x @ self.A.transpose(1, 2)) @ self.B.transpose(1, 2)) * self._scale + + +class BatchedTTTLoRA(nn.Module): + def __init__( + self, bsz, model, rank, + q_lora=True, k_lora=True, v_lora=True, mlp_lora=True, o_lora=True, + ): + super().__init__() + self.bsz = bsz + dim = model.qo_bank.shape[-1] + vocab = model.tok_emb.num_embeddings + if getattr(model, "looping_active", False): + num_slots = len(model.encoder_indices) + len(model.decoder_indices) + else: + num_slots = len(model.blocks) + kv_dim = model.blocks[0].attn.num_kv_heads * ( + dim // model.blocks[0].attn.num_heads + ) + embed_dim = model.tok_emb.embedding_dim + self.lm_head_lora = BatchedLinearLoRA(bsz, embed_dim, vocab, rank) + self.q_loras = ( + nn.ModuleList( + [BatchedLinearLoRA(bsz, dim, dim, rank) for _ in range(num_slots)] + ) + if q_lora + else None + ) + self.v_loras = ( + nn.ModuleList( + [BatchedLinearLoRA(bsz, dim, kv_dim, rank) for _ in range(num_slots)] + ) + if v_lora + else None + ) + self.k_loras = ( + nn.ModuleList( + [BatchedLinearLoRA(bsz, dim, kv_dim, rank) for _ in range(num_slots)] + ) + if k_lora + else None + ) + self.mlp_loras = ( + nn.ModuleList( + [BatchedLinearLoRA(bsz, dim, dim, rank) for _ in range(num_slots)] + ) + if mlp_lora + else None + ) + self.o_loras = ( + nn.ModuleList( + [BatchedLinearLoRA(bsz, dim, dim, rank) for _ in range(num_slots)] + ) + 
if o_lora
+            else None
+        )
+
+    def reset(self):
+        with torch.no_grad():
+            self.lm_head_lora.reset()
+            for loras in [self.q_loras, self.v_loras, self.k_loras,
+                          self.mlp_loras, self.o_loras]:
+                if loras is not None:
+                    for lora in loras:
+                        lora.reset()
+
+
+# Polar Express per-iteration minimax Newton-Schulz coefficients (PR #1344).
+# Replaces the fixed (3.4445, -4.775, 2.0315) coefficients of stock Muon.
+# Applied at backend_steps=5. The slice guard below caps the iteration count at
+# the 5 tuples available: asking for more steps does not repeat the final
+# (converged) tuple, it simply runs all 5 and stops.
+_PE_COEFFS = (
+    (8.156554524902461, -22.48329292557795, 15.878769915207462),
+    (4.042929935166739, -2.808917465908714, 0.5000178451051316),
+    (3.8916678022926607, -2.772484153217685, 0.5060648178503393),
+    (3.285753657755655, -2.3681294933425376, 0.46449024233003106),
+    (2.3465413258596377, -1.7097828382687081, 0.42323551169305323),
+)
+
+
+@torch.compile
+def zeropower_via_newtonschulz5(G, steps=10, eps=1e-07):
+    was_2d = G.ndim == 2
+    if was_2d:
+        G = G.unsqueeze(0)
+    X = G.bfloat16()
+    transposed = X.size(-2) > X.size(-1)
+    if transposed:
+        X = X.mT
+    X = X / (X.norm(dim=(-2, -1), keepdim=True) + eps)
+    coeffs = _PE_COEFFS[:steps] if steps <= len(_PE_COEFFS) else _PE_COEFFS
+    for a, b, c in coeffs:
+        A = X @ X.mT
+        B = b * A + c * (A @ A)
+        X = a * X + B @ X
+    if transposed:
+        X = X.mT
+    if was_2d:
+        X = X.squeeze(0)
+    return X
+
+
+class Muon(torch.optim.Optimizer):
+    def __init__(
+        self,
+        params,
+        lr,
+        momentum,
+        backend_steps,
+        nesterov=True,
+        weight_decay=0.0,
+        row_normalize=False,
+        # ml-intern PR2014-evolution: Skylight transplants from modded-nanogpt PR #269.
+        uw_floor=False,
+        uw_ratio=0.35,
+        norm_ema=False,
+        norm_beta2=0.95,
+        norm_eps=1e-7,
+    ):
+        super().__init__(
+            params,
+            dict(
+                lr=lr,
+                momentum=momentum,
+                backend_steps=backend_steps,
+                nesterov=nesterov,
+                weight_decay=weight_decay,
+                row_normalize=row_normalize,
+                uw_floor=uw_floor,
+                uw_ratio=uw_ratio,
+                norm_ema=norm_ema,
+                norm_beta2=norm_beta2,
+                norm_eps=norm_eps,
+            ),
+        )
+        self._built = False
+
+    def _build(self):
+        self._distributed = dist.is_available() and dist.is_initialized()
+        self._world_size = dist.get_world_size() if self._distributed else 1
+        self._rank = dist.get_rank() if self._distributed else 0
+        ws = self._world_size
+        self._bank_meta = []
+        for group in self.param_groups:
+            for p in group["params"]:
+                B = p.shape[0]
+                padded_B = ((B + ws - 1) // ws) * ws
+                shard_B = padded_B // ws
+                tail = p.shape[1:]
+                dev = p.device
+                self._bank_meta.append({
+                    "p": p,
+                    "B": B,
+                    "padded_grad": torch.zeros(padded_B, *tail, device=dev, dtype=torch.bfloat16),
+                    "shard": torch.zeros(shard_B, *tail, device=dev, dtype=torch.bfloat16),
+                    "shard_mom": torch.zeros(shard_B, *tail, device=dev, dtype=torch.bfloat16),
+                    "full_update": torch.zeros(padded_B, *tail, device=dev, dtype=torch.bfloat16),
+                    "scale": max(1, p.shape[-2] / p.shape[-1]) ** 0.5,
+                })
+        self._bank_meta.sort(key=lambda m: -m["p"].numel())
+        self._built = True
+
+    def launch_reduce_scatters(self):
+        if not self._built:
+            self._build()
+        if not self._distributed:
+            return
+        self._rs_futures = []
+        for m in self._bank_meta:
+            p = m["p"]
+            if p.grad is None:
+                self._rs_futures.append(None)
+                continue
+            pg = m["padded_grad"]
+            pg[: m["B"]].copy_(p.grad)
+            fut = dist.reduce_scatter_tensor(
+                m["shard"], pg, op=dist.ReduceOp.AVG, async_op=True
+            )
+            self._rs_futures.append(fut)
+
+    def _skylight_post_allgather(self, m, group):
+        """ml-intern PR2014-evolution: Skylight transplants applied to FULL
update. + + Runs after the all-gather (so it operates on the full per-bank update), + before the optimizer apply. Uses optimizer state keyed off the param. + + Two independently togglable transformations from modded-nanogpt PR #269: + + (a) NorMuon-style per-output-row variance EMA. We treat each "row" as the + innermost-dim slice along dim=-1 (i.e. the output direction of each + matrix in the bank). EMA of row variance is maintained per param, + divides each row, then a Frobenius re-norm restores the pre-norm scale + so the average step size is preserved. + + (b) u/w floor. Compute ratio = ||U||_F / ||W||_F over the full per-bank + tensor. If below uw_ratio, scale U up so the ratio equals uw_ratio. + Operates on the bf16 update in-place. + """ + norm_ema = group.get("norm_ema", False) + uw_floor = group.get("uw_floor", False) + if not (norm_ema or uw_floor): + return + full = m["full_update"][: m["B"]] + # We treat the full bank tensor as a 2D (B*..., D) layout: rows over + # everything-but-last, cols over the last dim. Matches the standard + # NS / row_normalize convention used elsewhere in this file. + u_2d = full.reshape(-1, full.shape[-1]) + if norm_ema: + beta2 = group.get("norm_beta2", 0.95) + eps = group.get("norm_eps", 1e-7) + state = self.state[m["p"]] + row_var = u_2d.float().var(dim=-1, unbiased=False) # (rows,) + if "_skylight_row_var_ema" not in state: + state["_skylight_row_var_ema"] = row_var.clone() + ema = state["_skylight_row_var_ema"] + ema.mul_(beta2).add_(row_var, alpha=1.0 - beta2) + scale_pre = u_2d.float().norm() + denom = (ema + eps).sqrt().to(u_2d.dtype).clamp_min_(1e-7) + u_2d.div_(denom.unsqueeze(-1)) + scale_post = u_2d.float().norm().clamp_min(1e-7) + u_2d.mul_((scale_pre / scale_post).to(u_2d.dtype)) + if uw_floor: + uw_ratio = group.get("uw_ratio", 0.35) + p = m["p"] + w_norm = p.data.float().norm().clamp_min(1e-7) + u_norm = u_2d.float().norm().clamp_min(1e-12) + ratio = u_norm / w_norm + if ratio < uw_ratio: + u_2d.mul_((uw_ratio / ratio).to(u_2d.dtype)) + + @torch.no_grad() + def step(self, closure=None): + loss = None + if closure is not None: + with torch.enable_grad(): + loss = closure() + if not self._built: + self._build() + for group in self.param_groups: + lr = group["lr"] + momentum = group["momentum"] + backend_steps = group["backend_steps"] + nesterov = group["nesterov"] + wd = group.get("weight_decay", 0.0) + row_normalize = group.get("row_normalize", False) + prev_ag_handle = None + prev_m = None + sharded = self._distributed and hasattr(self, "_rs_futures") + for idx, m in enumerate(self._bank_meta): + p = m["p"] + if p.grad is None: + continue + if prev_ag_handle is not None: + prev_ag_handle.wait() + # ml-intern: apply Skylight transplants on the gathered full update + # before the apply step. No-op when both flags are off. 
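+                    # prev_m is the bank whose async all-gather just completed; its
+                    # apply is deferred one loop iteration so the gather overlaps
+                    # with the next bank's Newton-Schulz pass.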
+ self._skylight_post_allgather(prev_m, group) + pp = prev_m["p"] + upd = prev_m["full_update"][: prev_m["B"]] + if wd > 0.0: + pp.data.mul_(1.0 - lr * wd) + pp.add_(upd, alpha=-lr * prev_m["scale"]) + if sharded and self._rs_futures[idx] is not None: + self._rs_futures[idx].wait() + g = m["shard"] + buf = m["shard_mom"] + else: + g = p.grad.bfloat16() + state = self.state[p] + if "momentum_buffer" not in state: + state["momentum_buffer"] = torch.zeros_like(g) + buf = state["momentum_buffer"] + buf.mul_(momentum).add_(g) + if nesterov: + update = g.add(buf, alpha=momentum) + else: + update = buf + if row_normalize: + rn = update.float().norm(dim=-1, keepdim=True).clamp_min(1e-07) + update = update / rn.to(update.dtype) + update = zeropower_via_newtonschulz5(update, steps=backend_steps) + if sharded: + prev_ag_handle = dist.all_gather_into_tensor( + m["full_update"], update, async_op=True + ) + prev_m = m + else: + # Single-rank path: no all-gather, apply Skylight directly. + m["full_update"][: m["B"]].copy_(update) + self._skylight_post_allgather(m, group) + update = m["full_update"][: m["B"]] + if wd > 0.0: + p.data.mul_(1.0 - lr * wd) + p.add_(update, alpha=-lr * m["scale"]) + if prev_ag_handle is not None: + prev_ag_handle.wait() + self._skylight_post_allgather(prev_m, group) + pp = prev_m["p"] + upd = prev_m["full_update"][: prev_m["B"]] + if wd > 0.0: + pp.data.mul_(1.0 - lr * wd) + pp.add_(upd, alpha=-lr * prev_m["scale"]) + if hasattr(self, "_rs_futures"): + del self._rs_futures + return loss + + +CONTROL_TENSOR_NAME_PATTERNS = tuple( + pattern + for pattern in os.environ.get( + "CONTROL_TENSOR_NAME_PATTERNS", + "attn_scale,attn_scales,mlp_scale,mlp_scales,resid_mix,resid_mixes,q_gain,skip_weight,skip_weights,skip_gates,parallel_post_lambdas,parallel_resid_lambdas,attn_gate_proj,attn_gate_w,smear_gate,smear_lambda", + ).split(",") + if pattern +) + + +PACKED_REPLICATED_GRAD_MAX_NUMEL = 1 << 15 + + +class Optimizers: + def __init__(self, h, base_model): + matrix_params = [ + base_model.qo_bank, + base_model.kv_bank, + base_model.mlp_up_bank, + base_model.mlp_down_bank, + ] + block_named_params = list(base_model.blocks.named_parameters()) + scalar_params = [ + p + for (name, p) in block_named_params + if p.ndim < 2 + or any(pattern in name for pattern in CONTROL_TENSOR_NAME_PATTERNS) + ] + if base_model.skip_weights.numel() > 0: + scalar_params.append(base_model.skip_weights) + if base_model.skip_gates is not None and base_model.skip_gates.numel() > 0: + scalar_params.append(base_model.skip_gates) + if base_model.parallel_post_lambdas is not None: + scalar_params.append(base_model.parallel_post_lambdas) + if base_model.parallel_resid_lambdas is not None: + scalar_params.append(base_model.parallel_resid_lambdas) + # SmearGate params live on GPT root (not in .blocks), so add them by hand. + # Both are tiny (gate_window scalars + 1 lambda). Optimized via scalar Adam. 
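+        # Their names ("smear_gate", "smear_lambda") also appear in
+        # CONTROL_TENSOR_NAME_PATTERNS, so restore_fp32_params keeps them fp32
+        # through the quantization round-trip.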
+ if getattr(base_model, "smear_gate_enabled", False): + scalar_params.append(base_model.smear_gate.weight) + scalar_params.append(base_model.smear_lambda) + token_lr = h.tied_embed_lr if h.tie_embeddings else h.embed_lr + tok_params = [ + {"params": [base_model.tok_emb.weight], "lr": token_lr, "base_lr": token_lr} + ] + self.optimizer_tok = torch.optim.AdamW( + tok_params, + betas=(h.beta1, h.beta2), + eps=h.adam_eps, + weight_decay=h.embed_wd, + fused=True, + ) + self.optimizer_muon = Muon( + matrix_params, + lr=h.matrix_lr, + momentum=h.muon_momentum, + backend_steps=h.muon_backend_steps, + weight_decay=h.muon_wd, + row_normalize=h.muon_row_normalize, + # ml-intern PR2014-evolution: Skylight transplants from modded-nanogpt PR #269. + uw_floor=getattr(h, "skylight_uw_floor", False), + uw_ratio=getattr(h, "skylight_uw_ratio", 0.35), + norm_ema=getattr(h, "skylight_norm_ema", False), + norm_beta2=getattr(h, "skylight_norm_beta2", 0.95), + norm_eps=getattr(h, "skylight_norm_eps", 1e-7), + ) + for group in self.optimizer_muon.param_groups: + group["base_lr"] = h.matrix_lr + self.optimizer_scalar = torch.optim.AdamW( + [{"params": scalar_params, "lr": h.scalar_lr, "base_lr": h.scalar_lr}], + betas=(h.beta1, h.beta2), + eps=h.adam_eps, + weight_decay=h.adam_wd, + fused=True, + ) + self.optimizers = [ + self.optimizer_tok, + self.optimizer_muon, + self.optimizer_scalar, + ] + self.replicated_params = list(tok_params[0]["params"]) + self.replicated_params.extend(scalar_params) + self.replicated_large_params = [] + self.replicated_packed_params = [] + for p in self.replicated_params: + if p.numel() <= PACKED_REPLICATED_GRAD_MAX_NUMEL: + self.replicated_packed_params.append(p) + else: + self.replicated_large_params.append(p) + self._aux_stream = torch.cuda.Stream() + + def __iter__(self): + return iter(self.optimizers) + + def zero_grad_all(self): + for opt in self.optimizers: + opt.zero_grad(set_to_none=True) + + def _all_reduce_packed_grads(self): + grads_by_key = collections.defaultdict(list) + for p in self.replicated_packed_params: + if p.grad is not None: + grads_by_key[(p.grad.device, p.grad.dtype)].append(p.grad) + for grads in grads_by_key.values(): + flat = torch.empty( + sum(g.numel() for g in grads), + device=grads[0].device, + dtype=grads[0].dtype, + ) + offset = 0 + for g in grads: + n = g.numel() + flat[offset : offset + n].copy_(g.contiguous().view(-1)) + offset += n + dist.all_reduce(flat, op=dist.ReduceOp.AVG) + offset = 0 + for g in grads: + n = g.numel() + g.copy_(flat[offset : offset + n].view_as(g)) + offset += n + + def step(self, distributed=False): + self.optimizer_muon.launch_reduce_scatters() + if distributed: + reduce_handles = [ + dist.all_reduce(p.grad, op=dist.ReduceOp.AVG, async_op=True) + for p in self.replicated_large_params + if p.grad is not None + ] + self._all_reduce_packed_grads() + for handle in reduce_handles: + handle.wait() + self._aux_stream.wait_stream(torch.cuda.current_stream()) + with torch.cuda.stream(self._aux_stream): + self.optimizer_tok.step() + self.optimizer_scalar.step() + self.optimizer_muon.step() + torch.cuda.current_stream().wait_stream(self._aux_stream) + self.zero_grad_all() + + +def restore_fp32_params(model): + for module in model.modules(): + if isinstance(module, CastedLinear): + module.float() + for name, param in model.named_parameters(): + if ( + param.ndim < 2 + or any(pattern in name for pattern in CONTROL_TENSOR_NAME_PATTERNS) + ) and param.dtype != torch.float32: + param.data = param.data.float() + if hasattr(model, 
"qo_bank") and model.qo_bank is not None: + model.qo_bank.data = model.qo_bank.data.float() + model.kv_bank.data = model.kv_bank.data.float() + model.mlp_up_bank.data = model.mlp_up_bank.data.float() + model.mlp_down_bank.data = model.mlp_down_bank.data.float() + + +def collect_hessians(model, train_loader, h, device, n_calibration_batches=64): + hessians = {} + act_sumsq = {} + act_counts = {} + hooks = [] + for i, block in enumerate(model.blocks): + block.attn._calib = True + block.mlp._calib = True + block.mlp.use_fused = False + + def make_attn_hook(layer_idx): + def hook_fn(module, inp, out): + x = inp[0].detach().float() + if x.ndim == 3: + x = x.reshape(-1, x.shape[-1]) + x_sq = x.square().sum(dim=0) + x_count = x.shape[0] + for suffix in ["c_q", "c_k", "c_v"]: + name = f"blocks.{layer_idx}.attn.{suffix}.weight" + if name not in hessians: + hessians[name] = torch.zeros( + x.shape[1], x.shape[1], dtype=torch.float32, device=device + ) + hessians[name].addmm_(x.T, x) + if name not in act_sumsq: + act_sumsq[name] = torch.zeros( + x.shape[1], dtype=torch.float32, device=device + ) + act_counts[name] = 0 + act_sumsq[name] += x_sq + act_counts[name] += x_count + y = module._last_proj_input + if y is not None: + y = y.float() + if y.ndim == 3: + y = y.reshape(-1, y.shape[-1]) + name = f"blocks.{layer_idx}.attn.proj.weight" + if name not in hessians: + hessians[name] = torch.zeros( + y.shape[1], y.shape[1], dtype=torch.float32, device=device + ) + hessians[name].addmm_(y.T, y) + if name not in act_sumsq: + act_sumsq[name] = torch.zeros( + y.shape[1], dtype=torch.float32, device=device + ) + act_counts[name] = 0 + act_sumsq[name] += y.square().sum(dim=0) + act_counts[name] += y.shape[0] + return hook_fn + + def make_mlp_hook(layer_idx): + def hook_fn(module, inp, out): + x = inp[0].detach().float() + if x.ndim == 3: + x = x.reshape(-1, x.shape[-1]) + name = f"blocks.{layer_idx}.mlp.fc.weight" + if name not in hessians: + hessians[name] = torch.zeros( + x.shape[1], x.shape[1], dtype=torch.float32, device=device + ) + hessians[name].addmm_(x.T, x) + if name not in act_sumsq: + act_sumsq[name] = torch.zeros( + x.shape[1], dtype=torch.float32, device=device + ) + act_counts[name] = 0 + act_sumsq[name] += x.square().sum(dim=0) + act_counts[name] += x.shape[0] + h_act = module._last_down_input + if h_act is not None: + h_act = h_act.float() + if h_act.ndim == 3: + h_act = h_act.reshape(-1, h_act.shape[-1]) + name = f"blocks.{layer_idx}.mlp.proj.weight" + if name not in hessians: + hessians[name] = torch.zeros( + h_act.shape[1], h_act.shape[1], dtype=torch.float32, device=device + ) + hessians[name].addmm_(h_act.T, h_act) + if name not in act_sumsq: + act_sumsq[name] = torch.zeros( + h_act.shape[1], dtype=torch.float32, device=device + ) + act_counts[name] = 0 + act_sumsq[name] += h_act.square().sum(dim=0) + act_counts[name] += h_act.shape[0] + return hook_fn + + for i, block in enumerate(model.blocks): + hooks.append(block.attn.register_forward_hook(make_attn_hook(i))) + hooks.append(block.mlp.register_forward_hook(make_mlp_hook(i))) + + # Hessian hooks for embedding factorization projection layers + def make_linear_input_hook(weight_name): + def hook_fn(module, inp, out): + x = inp[0].detach().float() + if x.ndim == 3: + x = x.reshape(-1, x.shape[-1]) + if weight_name not in hessians: + hessians[weight_name] = torch.zeros( + x.shape[1], x.shape[1], dtype=torch.float32, device=device + ) + hessians[weight_name].addmm_(x.T, x) + return hook_fn + + if model.tie_embeddings: + hook_module = 
model.final_norm + + def make_output_hook(name): + def hook_fn(module, inp, out): + x = out.detach().float() + if x.ndim == 3: + x = x.reshape(-1, x.shape[-1]) + if name not in hessians: + hessians[name] = torch.zeros( + x.shape[1], x.shape[1], dtype=torch.float32, device=device + ) + hessians[name].addmm_(x.T, x) + if name not in act_sumsq: + act_sumsq[name] = torch.zeros( + x.shape[1], dtype=torch.float32, device=device + ) + act_counts[name] = 0 + act_sumsq[name] += x.square().sum(dim=0) + act_counts[name] += x.shape[0] + return hook_fn + + hooks.append( + hook_module.register_forward_hook(make_output_hook("tok_emb.weight")) + ) + model.eval() + with torch.no_grad(): + for _ in range(n_calibration_batches): + x, _ = train_loader.next_batch(h.train_batch_tokens, h.grad_accum_steps) + model.forward_logits(x) + for hook in hooks: + hook.remove() + for i, block in enumerate(model.blocks): + block.attn._calib = False + block.mlp._calib = False + block.mlp.use_fused = True + for name in hessians: + hessians[name] = hessians[name].cpu() / n_calibration_batches + act_stats = {} + for name, sumsq in act_sumsq.items(): + count = max(act_counts.get(name, 0), 1) + act_stats[name] = (sumsq / count).sqrt().cpu() + return hessians, act_stats + + +def gptq_quantize_weight( + w, + H, + clip_sigmas=3.0, + clip_range=63, + block_size=128, + protect_groups=None, + group_size=None, + protect_clip_range=None, +): + W_orig = w.float().clone() + rows, cols = W_orig.shape + H = H.float().clone() + dead = torch.diag(H) == 0 + H[dead, dead] = 1 + damp = 0.01 * H.diag().mean() + H.diagonal().add_(damp) + perm = torch.argsort(H.diag(), descending=True) + invperm = torch.argsort(perm) + W_perm = W_orig[:, perm].clone() + W_perm[:, dead[perm]] = 0 + H = H[perm][:, perm] + Hinv = torch.cholesky_inverse(torch.linalg.cholesky(H)) + Hinv = torch.linalg.cholesky(Hinv, upper=True) + row_std = W_orig.std(dim=1) + s = (clip_sigmas * row_std / clip_range).clamp_min(1e-10).to(torch.float16) + sf = s.float() + protect_meta = None + protect_mask_perm = None + s_hi = None + sf_hi = None + if ( + protect_groups + and group_size is not None + and protect_clip_range is not None + and protect_clip_range > clip_range + ): + protect_mask = torch.zeros(cols, dtype=torch.bool) + starts = [] + for (start, end) in protect_groups: + if start < 0 or end > cols or end <= start: + continue + protect_mask[start:end] = True + starts.append(start) + if starts: + protect_mask_perm = protect_mask[perm] + s_hi = (clip_sigmas * row_std / protect_clip_range).clamp_min(1e-10).to( + torch.float16 + ) + sf_hi = s_hi.float() + protect_meta = { + "starts": torch.tensor(starts, dtype=torch.int16), + "size": int(group_size), + "s_hi": s_hi, + } + Q = torch.zeros(rows, cols, dtype=torch.int8) + W_work = W_perm.clone() + for i1 in range(0, cols, block_size): + i2 = min(i1 + block_size, cols) + W_block = W_work[:, i1:i2].clone() + Hinv_block = Hinv[i1:i2, i1:i2] + Err = torch.zeros(rows, i2 - i1) + for j in range(i2 - i1): + w_col = W_block[:, j] + d = Hinv_block[j, j] + if protect_mask_perm is not None and bool(protect_mask_perm[i1 + j]): + q_col = torch.clamp( + torch.round(w_col / sf_hi), + -protect_clip_range, + protect_clip_range, + ) + w_recon = q_col.float() * sf_hi + else: + q_col = torch.clamp(torch.round(w_col / sf), -clip_range, clip_range) + w_recon = q_col.float() * sf + Q[:, i1 + j] = q_col.to(torch.int8) + err = (w_col - w_recon) / d + Err[:, j] = err + W_block[:, j:] -= err.unsqueeze(1) * Hinv_block[j, j:].unsqueeze(0) + if i2 < cols: + 
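+            # GPTQ lazy batch update: fold the block's accumulated quantization
+            # error into every not-yet-quantized column in one matmul.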
W_work[:, i2:] -= Err @ Hinv[i1:i2, i2:] + return Q[:, invperm], s, protect_meta + + +def _quantize_gate_int8_row(w): + # Symmetric int8-per-row quantization for small gate tensors. w shape + # (R, C) -> (R,) scales in fp16, int8 values in [-127, 127]. Single scale + # per row keeps accuracy high while halving storage vs fp16. + W = w.float().contiguous() + row_max = W.abs().amax(dim=1).clamp_min(1e-10) + s = (row_max / 127.0).to(torch.float16) + sf = s.float().view(-1, 1) + q = torch.clamp(torch.round(W / sf), -127, 127).to(torch.int8) + return q, s + + +def _lqer_pack(A, B, bits): + rng = 2 ** (bits - 1) - 1 + sA = (A.abs().amax(dim=1).clamp_min(1e-10) / rng).to(torch.float16) + sB = (B.abs().amax(dim=1).clamp_min(1e-10) / rng).to(torch.float16) + qA = torch.clamp(torch.round(A / sA.float().view(-1, 1)), -rng, rng).to(torch.int8) + qB = torch.clamp(torch.round(B / sB.float().view(-1, 1)), -rng, rng).to(torch.int8) + return qA, sA, qB, sB + + +def _lqer_pack_asym(A, B, g=64): + # A: INT2 per-matrix scalar (signed [-2,1], scale = |A|max/1.5). + sA = (A.abs().amax().clamp_min(1e-10) / 1.5).to(torch.float16) + qA = torch.clamp(torch.round(A / sA.float()), -2, 1).to(torch.int8) + # B: INT4 groupwise g over flattened B (signed [-8,7], per-group scale). + Bf = B.reshape(-1, g) + Bmax = Bf.abs().amax(dim=-1, keepdim=True).clamp_min(1e-10) + sB = (Bmax / 7.5).to(torch.float16).reshape(-1) + qB = torch.clamp(torch.round(Bf / sB.float().reshape(-1, 1)), -8, 7).to( + torch.int8 + ).reshape(B.shape) + return qA, sA, qB, sB + + +def _lqer_fit_quantized(E, h): + U, S, Vh = torch.linalg.svd(E, full_matrices=False) + r = min(h.lqer_rank, S.numel()) + if r <= 0: + return None + A = (U[:, :r] * S[:r]).contiguous() + B = Vh[:r, :].contiguous() + asym_on = bool(getattr(h, "lqer_asym_enabled", False)) + asym_g = int(getattr(h, "lqer_asym_group", 64)) + if asym_on and B.numel() % asym_g == 0: + qA, sA, qB, sB = _lqer_pack_asym(A, B, asym_g) + A_hat = qA.float() * float(sA) + g_sz = qB.numel() // sB.numel() + B_hat = (qB.reshape(-1, g_sz).float() * sB.float().view(-1, 1)).reshape( + qB.shape + ) + return { + "kind": "asym", + "qA": qA, + "sA": sA, + "qB": qB, + "sB": sB, + "delta": A_hat @ B_hat, + } + qA, sA, qB, sB = _lqer_pack(A, B, h.lqer_factor_bits) + A_hat = qA.float() * sA.float().view(-1, 1) + B_hat = qB.float() * sB.float().view(-1, 1) + return { + "kind": "sym", + "qA": qA, + "sA": sA, + "qB": qB, + "sB": sB, + "delta": A_hat @ B_hat, + } + + +def _awq_lite_group_candidates(w, act_rms, group_size): + cols = w.shape[1] + n_groups = cols // group_size + if n_groups <= 0: + return [] + weight_score = w.float().abs().mean(dim=0) + saliency = act_rms.float() * weight_score + cands = [] + for gi in range(n_groups): + start = gi * group_size + end = start + group_size + score = float(saliency[start:end].sum()) + cands.append((score, start, end)) + return cands + + +def gptq_mixed_quantize(state_dict, hessians, act_stats, h): + result = {} + meta = {} + quant_gate = bool(getattr(h, "gated_attn_quant_gate", False)) + lqer_on = bool(getattr(h, "lqer_enabled", False)) + awq_on = bool(getattr(h, "awq_lite_enabled", False)) + lqer_cands = {} + awq_selected = collections.defaultdict(list) + if awq_on: + awq_cands = [] + for (name, tensor) in state_dict.items(): + t = tensor.detach().cpu().contiguous() + if t.is_floating_point() and t.numel() > 65536 and name in act_stats: + bits = h.embed_bits if "tok_emb" in name else h.matrix_bits + if bits < h.awq_lite_bits: + for score, start, end in 
_awq_lite_group_candidates( + t, act_stats[name], h.awq_lite_group_size + ): + awq_cands.append((score, name, start, end)) + awq_cands.sort(key=lambda x: -x[0]) + for (_score, name, start, end) in awq_cands[: h.awq_lite_group_top_k]: + awq_selected[name].append((start, end)) + for (name, tensor) in state_dict.items(): + t = tensor.detach().cpu().contiguous() + # Dedicated int8-per-row path for attn_gate_w (bypasses both GPTQ and + # fp16 passthrough). Applied BEFORE the numel<=65536 passthrough check + # so the gate tensor is routed here instead of to fp16. + if ( + quant_gate + and t.is_floating_point() + and t.ndim == 2 + and name.endswith(".attn_gate_w") + # Dense GatedAttn: (num_heads, dim) = (8, 512) = 4096. + # Sparse gate: (num_heads, gate_window) = (8, 12) = 96. + # Both need int8-per-row routing; the 1024 lower bound in stock + # PR-1736 presumed dense-only. Widen to catch both. + and 32 <= t.numel() <= 8192 + ): + gq, gs = _quantize_gate_int8_row(t) + result[name + ".gq"] = gq + result[name + ".gs"] = gs + meta[name] = "gate_int8_row" + continue + if not t.is_floating_point() or t.numel() <= 65536: + result[name] = t.to(torch.float16) if t.is_floating_point() else t + meta[name] = "passthrough (float16)" + continue + if "tok_emb" in name: + cs = h.embed_clip_sigmas + elif ".mlp." in name: + cs = h.mlp_clip_sigmas + elif ".attn." in name: + cs = h.attn_clip_sigmas + else: + cs = h.matrix_clip_sigmas + bits = h.embed_bits if "tok_emb" in name else h.matrix_bits + clip_range = 2 ** (bits - 1) - 1 + q, s, protect_meta = gptq_quantize_weight( + t, + hessians[name], + clip_sigmas=cs, + clip_range=clip_range, + protect_groups=awq_selected.get(name), + group_size=h.awq_lite_group_size if name in awq_selected else None, + protect_clip_range=(2 ** (h.awq_lite_bits - 1) - 1) + if name in awq_selected + else None, + ) + result[name + ".q"] = q + result[name + ".scale"] = s + meta[name] = f"gptq (int{bits})" + W_q = q.float() * s.float().view(-1, 1) + if protect_meta is not None: + result[name + ".awqg_start"] = protect_meta["starts"] + result[name + ".awqg_s_hi"] = protect_meta["s_hi"] + result[name + ".awqg_size"] = torch.tensor( + protect_meta["size"], dtype=torch.int16 + ) + meta[name] = meta[name] + f"+awqgrpint{h.awq_lite_bits}" + gsz = protect_meta["size"] + for start in protect_meta["starts"].tolist(): + W_q[:, start : start + gsz] = ( + q[:, start : start + gsz].float() + * protect_meta["s_hi"].float().view(-1, 1) + ) + if lqer_on: + # LQER is fit on top of the fully realized GPTQ base, which already + # includes any higher-precision AWQ-protected groups. + scope = str(getattr(h, "lqer_scope", "all")).lower() + scope_ok = ( + scope == "all" + or (scope == "mlp" and ".mlp." in name) + or (scope == "attn" and ".attn." 
in name) + or (scope == "embed" and "tok_emb" in name) + ) + if scope_ok: + E = t.float() - W_q + err_norm = float(E.norm()) + if err_norm > 0: + lqer_cands[name] = (E, err_norm) + if lqer_on and lqer_cands: + if bool(getattr(h, "lqer_gain_select", False)): + scored = [] + for (name, (E, base_err)) in lqer_cands.items(): + fit = _lqer_fit_quantized(E, h) + if fit is None: + continue + new_err = float((E - fit["delta"]).norm()) + gain = base_err - new_err + if gain > 0: + scored.append((gain, name, fit)) + scored.sort(key=lambda x: -x[0]) + for (_gain, name, fit) in scored[: h.lqer_top_k]: + if fit["kind"] == "asym": + result[name + ".lqA_a"] = fit["qA"] + result[name + ".lqAs_a"] = fit["sA"] + result[name + ".lqB_a"] = fit["qB"] + result[name + ".lqBs_a"] = fit["sB"] + meta[name] = meta[name] + "+lqer_asym" + else: + result[name + ".lqA"] = fit["qA"] + result[name + ".lqAs"] = fit["sA"] + result[name + ".lqB"] = fit["qB"] + result[name + ".lqBs"] = fit["sB"] + meta[name] = meta[name] + "+lqer" + else: + top = sorted(lqer_cands.items(), key=lambda kv: -kv[1][1])[: h.lqer_top_k] + asym_on = bool(getattr(h, "lqer_asym_enabled", False)) + asym_g = int(getattr(h, "lqer_asym_group", 64)) + for (name, (E, _)) in top: + U, S, Vh = torch.linalg.svd(E, full_matrices=False) + r = min(h.lqer_rank, S.numel()) + A = (U[:, :r] * S[:r]).contiguous() + B = Vh[:r, :].contiguous() + if asym_on and B.numel() % asym_g == 0: + qA, sA, qB, sB = _lqer_pack_asym(A, B, asym_g) + result[name + ".lqA_a"] = qA + result[name + ".lqAs_a"] = sA + result[name + ".lqB_a"] = qB + result[name + ".lqBs_a"] = sB + meta[name] = meta[name] + "+lqer_asym" + else: + qA, sA, qB, sB = _lqer_pack(A, B, h.lqer_factor_bits) + result[name + ".lqA"] = qA + result[name + ".lqAs"] = sA + result[name + ".lqB"] = qB + result[name + ".lqBs"] = sB + meta[name] = meta[name] + "+lqer" + categories = collections.defaultdict(set) + for (name, cat) in meta.items(): + short = re.sub("\\.\\d+$", "", re.sub("blocks\\.\\d+", "blocks", name)) + categories[cat].add(short) + log("Quantized weights:") + for cat in sorted(categories): + log(f" {cat}: {', '.join(sorted(categories[cat]))}") + return result, meta + +def dequantize_mixed(result, meta, template_sd): + out = {} + for (name, orig) in template_sd.items(): + info = meta.get(name) + if info is None: + continue + orig_dtype = orig.dtype + if "passthrough" in info: + t = result[name] + if t.dtype == torch.float16 and orig_dtype in ( + torch.float32, + torch.bfloat16, + ): + t = t.to(orig_dtype) + out[name] = t + continue + if info == "gate_int8_row": + gq = result[name + ".gq"] + gs = result[name + ".gs"] + out[name] = (gq.float() * gs.float().view(-1, 1)).to(orig_dtype) + continue + q, s = result[name + ".q"], result[name + ".scale"] + if s.ndim > 0: + W = q.float() * s.float().view(q.shape[0], *[1] * (q.ndim - 1)) + else: + W = q.float() * float(s.item()) + if "awqgrpint" in info: + starts = result[name + ".awqg_start"].tolist() + s_hi = result[name + ".awqg_s_hi"].float() + gsz = int(result[name + ".awqg_size"].item()) + for start in starts: + W[:, start : start + gsz] = ( + q[:, start : start + gsz].float() * s_hi.view(-1, 1) + ) + if "lqer_asym" in info: + qA_t = result[name + ".lqA_a"] + sA_t = result[name + ".lqAs_a"] + qB_t = result[name + ".lqB_a"] + sB_t = result[name + ".lqBs_a"] + qA = qA_t.float() * float(sA_t) + g_sz = qB_t.numel() // sB_t.numel() + qB = (qB_t.reshape(-1, g_sz).float() * sB_t.float().view(-1, 1)).reshape( + qB_t.shape + ) + W = W + qA @ qB + elif "lqer" in info: + qA = 
result[name + ".lqA"].float() * result[name + ".lqAs"].float().view(-1, 1) + qB = result[name + ".lqB"].float() * result[name + ".lqBs"].float().view(-1, 1) + W = W + qA @ qB + out[name] = W.to(orig_dtype) + return out + + +_BSHF_MAGIC = b"BSHF" + + +# ── Per-group lrzip compression (ported from PR#1586 via PR#1667/1729) ──────── + +_GROUP_ORDER = [ + "_tok_emb.weight.q", + "attn.c_k.weight.q", "attn.c_q.weight.q", + "attn.c_v.weight.q", "attn.proj.weight.q", + "mlp.fc.weight.q", "mlp.proj.weight.q", +] +_SIMSORT_KEYS = {"_tok_emb.weight.q", "attn.c_q.weight.q", "mlp.fc.weight.q"} +_PACK_MAGIC = b"PGRP" + + +def _similarity_sort_l1(matrix): + import numpy as _np + n = matrix.shape[0] + used = _np.zeros(n, dtype=bool) + order = [0] + used[0] = True + cur = matrix[0].astype(_np.float32) + for _ in range(n - 1): + dists = _np.sum(_np.abs(matrix[~used].astype(_np.float32) - cur), axis=1) + unused = _np.where(~used)[0] + best = unused[_np.argmin(dists)] + order.append(best) + used[best] = True + cur = matrix[best].astype(_np.float32) + return _np.array(order, dtype=_np.uint16) + + +def _lrzip_compress(data, tmpdir, label): + inp = os.path.join(tmpdir, f"{label}.bin") + out = f"{inp}.lrz" + with open(inp, "wb") as f: + f.write(data) + subprocess.run(["lrzip", "-z", "-L", "9", "-o", out, inp], capture_output=True, check=True) + with open(out, "rb") as f: + result = f.read() + os.remove(inp); os.remove(out) + return result + + +def _lrzip_decompress(data, tmpdir, label): + inp = os.path.join(tmpdir, f"{label}.lrz") + out = os.path.join(tmpdir, f"{label}.bin") + with open(inp, "wb") as f: + f.write(data) + subprocess.run(["lrzip", "-d", "-f", "-o", out, inp], capture_output=True, check=True) + with open(out, "rb") as f: + result = f.read() + os.remove(inp); os.remove(out) + return result + + +def _pack_streams(streams): + import struct + n = len(streams) + hdr = _PACK_MAGIC + struct.pack("= 2 + docs.append((start, end - start)) + return docs + + +def _build_ttt_global_batches(doc_entries, h, ascending=False): + batch_size = h.ttt_batch_size + global_doc_entries = sorted(doc_entries, key=lambda x: x[1][1]) + global_batches = [ + global_doc_entries[i : i + batch_size] + for i in range(0, len(global_doc_entries), batch_size) + ] + indexed = list(enumerate(global_batches)) + if not ascending: + indexed.sort(key=lambda ib: -max(dl for _, (_, dl) in ib[1])) + return indexed + + +def _init_batch_counter(path): + with open(path, "wb") as f: + f.write((0).to_bytes(4, "little")) + + +def _claim_next_batch(counter_path, queue_len): + try: + with open(counter_path, "r+b") as f: + fcntl.flock(f, fcntl.LOCK_EX) + idx = int.from_bytes(f.read(4), "little") + f.seek(0) + f.write((idx + 1).to_bytes(4, "little")) + f.flush() + except FileNotFoundError: + return queue_len + return idx + + +def _compute_chunk_window(ci, pred_len, num_chunks, chunk_size, eval_seq_len): + chunk_end = pred_len if ci == num_chunks - 1 else (ci + 1) * chunk_size + win_start = max(0, chunk_end - eval_seq_len) + win_len = chunk_end - win_start + chunk_start = ci * chunk_size + chunk_offset = chunk_start - win_start + chunk_len = chunk_end - chunk_start + return win_start, win_len, chunk_offset, chunk_len + + +def _accumulate_bpb( + ptl, + x, + y, + chunk_offsets, + chunk_lens, + pos_idx, + base_bytes_lut, + has_leading_space_lut, + is_boundary_token_lut, + loss_sum, + byte_sum, + token_count, + y_bytes=None, +): + pos = pos_idx[: x.size(1)].unsqueeze(0) + mask = ( + (chunk_lens.unsqueeze(1) > 0) + & (pos >= chunk_offsets.unsqueeze(1)) + & 
(pos < (chunk_offsets + chunk_lens).unsqueeze(1)) + ) + mask_f64 = mask.to(torch.float64) + if y_bytes is not None: + tok_bytes = y_bytes.to(torch.float64) + else: + tok_bytes = base_bytes_lut[y].to(torch.float64) + tok_bytes += (has_leading_space_lut[y] & ~is_boundary_token_lut[x]).to( + torch.float64 + ) + loss_sum += (ptl.to(torch.float64) * mask_f64).sum() + byte_sum += (tok_bytes * mask_f64).sum() + token_count += chunk_lens.to(torch.float64).sum() + + +def _loss_bpb_from_sums(loss_sum, token_count, byte_sum): + val_loss = (loss_sum / token_count).item() + val_bpb = val_loss / math.log(2.0) * (token_count.item() / byte_sum.item()) + return val_loss, val_bpb + + +def _add_to_counter(path, delta): + try: + with open(path, "r+b") as f: + fcntl.flock(f, fcntl.LOCK_EX) + cur = int.from_bytes(f.read(8), "little", signed=True) + cur += int(delta) + f.seek(0) + f.write(int(cur).to_bytes(8, "little", signed=True)) + f.flush() + return cur + except FileNotFoundError: + return int(delta) + + +def _init_int64_counter(path): + with open(path, "wb") as f: + f.write((0).to_bytes(8, "little", signed=True)) + + +def _select_ttt_doc_entries(docs, h): + doc_entries = list(enumerate(docs)) + if h.val_doc_fraction < 1.0: + sample_n = max(1, int(round(len(docs) * h.val_doc_fraction))) + sampled_indices = sorted( + random.Random(h.seed).sample(range(len(docs)), sample_n) + ) + return [(i, docs[i]) for i in sampled_indices] + return doc_entries + + +def train_val_ttt_global_sgd_distributed(h, device, val_data, base_model, val_tokens, batch_seqs=None): + global BOS_ID + if BOS_ID is None: + BOS_ID = 1 + base_model.eval() + seq_len = h.eval_seq_len + total_tokens = val_tokens.numel() - 1 + ttt_chunk = h.global_ttt_chunk_tokens + batch_seqs = h.global_ttt_batch_seqs if batch_seqs is None else batch_seqs + num_chunks = (total_tokens + ttt_chunk - 1) // ttt_chunk + ttt_params = [p for p in base_model.parameters()] + for p in ttt_params: + p.requires_grad_(True) + optimizer = torch.optim.SGD( + ttt_params, lr=h.global_ttt_lr, momentum=h.global_ttt_momentum + ) + t_start = time.perf_counter() + for ci in range(num_chunks): + chunk_start = ci * ttt_chunk + chunk_end = min((ci + 1) * ttt_chunk, total_tokens) + is_last_chunk = ci == num_chunks - 1 + if is_last_chunk or h.global_ttt_epochs <= 0: + continue + base_model.train() + chunk_seqs = (chunk_end - chunk_start) // seq_len + if chunk_seqs <= 0: + continue + warmup_chunks = max(0, min(h.global_ttt_warmup_chunks, num_chunks - 1)) + if warmup_chunks > 0 and ci < warmup_chunks: + warmup_denom = max(warmup_chunks - 1, 1) + warmup_t = ci / warmup_denom + lr_now = ( + h.global_ttt_warmup_start_lr + + (h.global_ttt_lr - h.global_ttt_warmup_start_lr) * warmup_t + ) + else: + decay_steps = max(num_chunks - 1 - warmup_chunks, 1) + decay_ci = max(ci - warmup_chunks, 0) + lr_now = h.global_ttt_lr * 0.5 * ( + 1.0 + math.cos(math.pi * decay_ci / decay_steps) + ) + for pg in optimizer.param_groups: + pg["lr"] = lr_now + my_seq_s = chunk_seqs * h.rank // h.world_size + my_seq_e = chunk_seqs * (h.rank + 1) // h.world_size + my_chunk_seqs = my_seq_e - my_seq_s + for _ in range(h.global_ttt_epochs): + for bs in range(0, my_chunk_seqs, batch_seqs): + be = min(bs + batch_seqs, my_chunk_seqs) + actual_bs = my_seq_s + bs + start_tok = chunk_start + actual_bs * seq_len + end_tok = chunk_start + (my_seq_s + be) * seq_len + 1 + if end_tok > val_tokens.numel(): + continue + local = val_tokens[start_tok:end_tok].to(device=device, dtype=torch.int64) + x_flat = local[:-1] + y_flat = local[1:] + 
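+ # One SGD micro-step on this rank's shard of the chunk. Gradients are
+ # summed across ranks below and rescaled by 1/world_size, so with the
+ # near-equal shard sizes produced by the rank slicing above this
+ # approximates a single data-parallel step over the full chunk.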
optimizer.zero_grad(set_to_none=True) + with torch.enable_grad(): + with torch.autocast(device_type="cuda", dtype=torch.bfloat16): + if h.global_ttt_respect_doc_boundaries: + bos_pos = (x_flat == BOS_ID).nonzero(as_tuple=True)[0].tolist() + cu_seqlens, max_seqlen = _build_cu_seqlens( + bos_pos, x_flat.numel(), x_flat.device, h.eval_seq_len, 64 + ) + loss = base_model( + x_flat[None], + y_flat[None], + cu_seqlens=cu_seqlens, + max_seqlen=max_seqlen, + ) + else: + x = x_flat.reshape(-1, seq_len) + y = y_flat.reshape(-1, seq_len) + loss = base_model(x, y) + loss.backward() + if dist.is_available() and dist.is_initialized(): + for p in ttt_params: + if p.grad is not None: + dist.all_reduce(p.grad, op=dist.ReduceOp.SUM) + p.grad.mul_(1.0 / h.world_size) + if h.global_ttt_grad_clip > 0: + torch.nn.utils.clip_grad_norm_(ttt_params, h.global_ttt_grad_clip) + optimizer.step() + base_model.eval() + if h.rank == 0: + elapsed = time.perf_counter() - t_start + log( + f"tttg: c{ci+1}/{num_chunks} lr:{lr_now:.6f} t:{elapsed:.1f}s" + ) + for p in base_model.parameters(): + p.requires_grad_(True) + base_model.eval() + + +def _compute_ngram_hints_for_val(h, val_data, log0=print): + if not getattr(h, "ngram_tilt_enabled", False): + return None + from online_ngram_tilt import build_hints_for_targets + all_tokens = val_data.val_tokens + targets_np_all = all_tokens.cpu().numpy().astype("uint16", copy=False)[1:] + t_h0 = time.perf_counter() + hints_pkg = build_hints_for_targets( + target_token_ids_np=targets_np_all, + tokenizer_path=h.tokenizer_path, + vocab_size=h.vocab_size, + log0=log0, + token_order=h.token_order, + token_threshold=h.token_threshold, + token_boost=h.token_boost, + within_tau=h.within_tau, + within_boost=h.within_boost, + word_order=h.word_order, + word_normalize=h.word_normalize, + word_tau=h.word_tau, + word_boost=h.word_boost, + agree_add_boost=h.agree_add_boost, + ) + hint_global = torch.from_numpy(hints_pkg["hint_ids"].astype("int64")) + gate_global = torch.from_numpy(hints_pkg["gate_mask"]) + boost_global = torch.from_numpy(hints_pkg["boost"].astype("float32")) + log0( + f"ngram_tilt:precompute_outside_timer_done elapsed={time.perf_counter()-t_h0:.2f}s " + f"total_targets={hint_global.numel()}" + ) + return (hint_global, gate_global, boost_global) + + +def eval_val_ttt_phased(h, base_model, device, val_data, forward_ttt_train, precomputed_hints=None): + global BOS_ID + if BOS_ID is None: + BOS_ID = 1 + base_model.eval() + for p in base_model.parameters(): + p.requires_grad_(False) + all_tokens = val_data.val_tokens + all_tokens_idx = all_tokens.to(torch.int32) + ngram_hint_global = None + ngram_gate_global = None + ngram_boost_global = None + if precomputed_hints is not None: + ngram_hint_global, ngram_gate_global, ngram_boost_global = precomputed_hints + log( + f"ngram_tilt:using_precomputed_hints " + f"total_targets={ngram_hint_global.numel()} (precompute time excluded from eval)" + ) + elif getattr(h, "ngram_tilt_enabled", False): + precomputed_hints = _compute_ngram_hints_for_val(h, val_data, log0=log) + if precomputed_hints is not None: + ngram_hint_global, ngram_gate_global, ngram_boost_global = precomputed_hints + docs = _find_docs(all_tokens) + doc_entries = _select_ttt_doc_entries(docs, h) + target_tokens = sum(doc_len - 1 for _, doc_len in docs) + prefix_doc_limit = max(0, min(len(doc_entries), int(h.phased_ttt_prefix_docs))) + num_phases = max(1, int(h.phased_ttt_num_phases)) + phase_boundaries = [] + for pi in range(num_phases): + boundary = prefix_doc_limit * (pi + 1) // 
num_phases + phase_boundaries.append(boundary) + current_phase = 0 + current_phase_boundary = phase_boundaries[0] + log( + "ttt_phased:" + f" total_docs:{len(doc_entries)} prefix_docs:{prefix_doc_limit} " + f"suffix_docs:{len(doc_entries) - prefix_doc_limit}" + f" num_phases:{num_phases} boundaries:{phase_boundaries}" + f" target_tokens:{target_tokens}" + ) + chunk_size, eval_seq_len = h.ttt_chunk_size, h.ttt_eval_seq_len + + def _parse_short_score_first_steps(raw): + steps = [] + for item in str(raw).split(","): + item = item.strip() + if not item: + continue + if ":" in item: + doc_raw, chunk_raw = item.split(":", 1) + elif "=" in item: + doc_raw, chunk_raw = item.split("=", 1) + else: + raise ValueError( + "TTT_SHORT_SCORE_FIRST_STEPS must look like '256:16,512:24'" + ) + doc_len = int(doc_raw.strip()) + step_chunk = int(chunk_raw.strip()) + if doc_len <= 0 or step_chunk <= 0: + raise ValueError("TTT short score-first steps must be positive") + steps.append((doc_len, step_chunk)) + steps.sort(key=lambda x: x[0]) + return steps + + short_score_steps = _parse_short_score_first_steps( + h.ttt_short_score_first_steps + ) + + def _score_first_chunk_for_doc(max_doc_len): + if not h.ttt_short_score_first_enabled: + return chunk_size + if short_score_steps: + for doc_limit, step_chunk in short_score_steps: + if max_doc_len <= doc_limit: + return step_chunk + return chunk_size + if max_doc_len <= h.ttt_short_doc_len and h.ttt_short_chunk_size > 0: + return h.ttt_short_chunk_size + return chunk_size + + eval_batch_set = None + if h.ttt_eval_batches: + eval_batch_set = set(int(x) for x in h.ttt_eval_batches.split(",") if x.strip()) + use_ascending = eval_batch_set is not None + global_batches_sorted = _build_ttt_global_batches( + doc_entries, h, ascending=use_ascending + ) + queue_len = len(global_batches_sorted) + counter_path = f"/tmp/ttt_counter_{h.run_id}" + prefix_counter_path = f"/tmp/ttt_prefix_counter_{h.run_id}" + pause_flag_path = f"/tmp/ttt_pause_flag_{h.run_id}" + if h.rank == 0: + _init_batch_counter(counter_path) + _init_int64_counter(prefix_counter_path) + try: + os.remove(pause_flag_path) + except FileNotFoundError: + pass + if dist.is_available() and dist.is_initialized(): + path_list = [counter_path, prefix_counter_path, pause_flag_path] + dist.broadcast_object_list(path_list, src=0) + counter_path, prefix_counter_path, pause_flag_path = path_list + dist.barrier() + loss_sum = torch.zeros((), device=device, dtype=torch.float64) + byte_sum = torch.zeros((), device=device, dtype=torch.float64) + token_count = torch.zeros((), device=device, dtype=torch.float64) + t_start = time.perf_counter() + reusable_lora = BatchedTTTLoRA( + h.ttt_batch_size, base_model, h.ttt_lora_rank, + q_lora=h.ttt_q_lora, k_lora=h.ttt_k_lora, v_lora=h.ttt_v_lora, + mlp_lora=h.ttt_mlp_lora, o_lora=h.ttt_o_lora, + ).to(device) + reusable_short_lora = None + reusable_short_opt = None + + def _build_opt(lora, lr=None, weight_decay=None, beta2=None): + lr = h.ttt_lora_lr if lr is None else lr + lr = lr * h.ttt_local_lr_mult + weight_decay = h.ttt_weight_decay if weight_decay is None else weight_decay + beta2 = h.ttt_beta2 if beta2 is None else beta2 + if h.ttt_optimizer == "sgd": + return torch.optim.SGD( + lora.parameters(), lr=lr, + momentum=h.ttt_beta1, weight_decay=weight_decay, + ) + return torch.optim.AdamW( + lora.parameters(), lr=lr, + betas=(h.ttt_beta1, beta2), + eps=1e-10, weight_decay=weight_decay, fused=True, + ) + + def _reset_optimizer_state(opt): + for s in opt.state.values(): + for k, v in 
s.items(): + if isinstance(v, torch.Tensor): + v.zero_() + elif k == "step": + s[k] = 0 + + def _apply_lora_template(lora, template): + if not template: + return False + with torch.no_grad(): + for name, p in lora.named_parameters(): + t = template.get(name) + if t is None or tuple(t.shape) != tuple(p.shape[1:]): + return False + for name, p in lora.named_parameters(): + t = template[name].to(device=p.device, dtype=p.dtype) + p.copy_(t.unsqueeze(0).expand_as(p)) + return True + + def _update_lora_template(template, lora): + momentum = float(h.ttt_warm_start_mean_momentum) + new_template = {} + with torch.no_grad(): + for name, p in lora.named_parameters(): + mean = p.detach().mean(dim=0).clone() + old = template.get(name) if template else None + if old is not None and tuple(old.shape) == tuple(mean.shape): + mean = old.to(device=mean.device, dtype=mean.dtype).mul(momentum).add( + mean, alpha=1.0 - momentum + ) + new_template[name] = mean + return new_template + + reusable_opt = _build_opt(reusable_lora) + warm_lora_template = None + local_scored_docs = [] + global_ttt_done = prefix_doc_limit == 0 + try: + while True: + queue_idx = _claim_next_batch(counter_path, queue_len) + if queue_idx >= queue_len: + break + orig_batch_idx, batch_entries = global_batches_sorted[queue_idx] + batch = [doc for _, doc in batch_entries] + bsz = len(batch) + doc_lens = [dl for _, dl in batch] + max_doc_len = max(doc_lens) + train_doc_allowed = [ + (h.ttt_train_min_doc_len <= 0 or dl >= h.ttt_train_min_doc_len) + and (h.ttt_train_max_doc_len <= 0 or dl <= h.ttt_train_max_doc_len) + for dl in doc_lens + ] + train_doc_mask_t = torch.tensor( + train_doc_allowed, dtype=torch.float32, device=device + ) + use_short_lora = h.ttt_short_lora_enabled and max_doc_len <= h.ttt_short_doc_len + batch_chunk_size = _score_first_chunk_for_doc(max_doc_len) + use_short_chunks = batch_chunk_size != chunk_size + batch_lora_rank = h.ttt_short_lora_rank if use_short_lora else h.ttt_lora_rank + batch_lora_lr = h.ttt_short_lora_lr if use_short_lora else h.ttt_lora_lr + batch_lora_wd = h.ttt_short_weight_decay if use_short_lora else h.ttt_weight_decay + batch_lora_beta2 = h.ttt_short_beta2 if use_short_lora else h.ttt_beta2 + prev_loss = loss_sum.item() + prev_bytes = byte_sum.item() + prev_tokens = token_count.item() + if use_short_lora and bsz == h.ttt_batch_size: + if reusable_short_lora is None: + reusable_short_lora = BatchedTTTLoRA( + h.ttt_batch_size, base_model, h.ttt_short_lora_rank, + q_lora=h.ttt_q_lora, k_lora=h.ttt_k_lora, v_lora=h.ttt_v_lora, + mlp_lora=h.ttt_mlp_lora, o_lora=h.ttt_o_lora, + ).to(device) + reusable_short_opt = _build_opt( + reusable_short_lora, + lr=h.ttt_short_lora_lr, + weight_decay=h.ttt_short_weight_decay, + beta2=h.ttt_short_beta2, + ) + reusable_short_lora.reset() + _reset_optimizer_state(reusable_short_opt) + cur_lora = reusable_short_lora + cur_opt = reusable_short_opt + elif (not use_short_lora) and bsz == reusable_lora.bsz: + reusable_lora.reset() + _reset_optimizer_state(reusable_opt) + cur_lora = reusable_lora + cur_opt = reusable_opt + else: + cur_lora = BatchedTTTLoRA( + bsz, base_model, batch_lora_rank, + q_lora=h.ttt_q_lora, k_lora=h.ttt_k_lora, v_lora=h.ttt_v_lora, + mlp_lora=h.ttt_mlp_lora, o_lora=h.ttt_o_lora, + ).to(device) + cur_opt = _build_opt( + cur_lora, + lr=batch_lora_lr, + weight_decay=batch_lora_wd, + beta2=batch_lora_beta2, + ) + template_used = False + if ( + h.ttt_warm_start_mean_enabled + and max_doc_len <= h.ttt_warm_start_mean_doc_len + ): + template_used = 
_apply_lora_template(cur_lora, warm_lora_template) + pred_lens = [doc_len - 1 for _, doc_len in batch] + num_chunks = [(pl + batch_chunk_size - 1) // batch_chunk_size for pl in pred_lens] + max_nc = max(num_chunks) + num_chunks_t = torch.tensor(num_chunks, dtype=torch.int64, device=device) + for ci in range(max_nc): + active = [ci < nc for nc in num_chunks] + needs_train = any( + train_doc_allowed[b] and ci < nc - 1 + for b, nc in enumerate(num_chunks) + ) + tok_starts = torch.zeros(bsz, dtype=torch.int64) + tok_wls = torch.zeros(bsz, dtype=torch.int64) + chunk_offsets_cpu = torch.zeros(bsz, dtype=torch.int64) + chunk_lens_cpu = torch.zeros(bsz, dtype=torch.int64) + for b in range(bsz): + if not active[b]: + continue + doc_start, doc_len = batch[b] + win_start, win_len, chunk_offset, chunk_len = _compute_chunk_window( + ci, pred_lens[b], num_chunks[b], batch_chunk_size, eval_seq_len + ) + tok_starts[b] = doc_start + win_start + tok_wls[b] = win_len + chunk_offsets_cpu[b] = chunk_offset + chunk_lens_cpu[b] = chunk_len + _, context_size, chunk_offset, _ = _compute_chunk_window( + ci, (ci + 1) * batch_chunk_size, ci + 1, batch_chunk_size, eval_seq_len + ) + col_idx = torch.arange(context_size + 1) + idx = tok_starts.unsqueeze(1) + col_idx.unsqueeze(0) + idx.clamp_(max=all_tokens.numel() - 1) + gathered_gpu = all_tokens_idx[idx].to( + device=device, dtype=torch.int64, non_blocking=True + ) + valid = (col_idx[:context_size].unsqueeze(0) < tok_wls.unsqueeze(1)).to( + device, non_blocking=True + ) + chunk_offsets = chunk_offsets_cpu.to(device, non_blocking=True) + chunk_lens = chunk_lens_cpu.to(device, non_blocking=True) + x = torch.where(valid, gathered_gpu[:, :context_size], 0) + y = torch.where(valid, gathered_gpu[:, 1 : context_size + 1], 0) + ctx_pos = torch.arange(context_size, device=device, dtype=torch.int64) + hint_ids_gpu = None + gate_mask_gpu = None + boost_gpu = None + if ngram_hint_global is not None: + hint_idx_cpu = ( + tok_starts.unsqueeze(1) + col_idx[:context_size].unsqueeze(0) + ).clamp_(min=0, max=ngram_hint_global.numel() - 1) + hint_ids_gpu = ngram_hint_global[hint_idx_cpu].to( + device=device, dtype=torch.int64, non_blocking=True + ) + gate_mask_gpu = ngram_gate_global[hint_idx_cpu].to( + device=device, non_blocking=True + ) + boost_gpu = ngram_boost_global[hint_idx_cpu].to( + device=device, dtype=torch.float32, non_blocking=True + ) + hint_ids_gpu = torch.where(valid, hint_ids_gpu, torch.zeros_like(hint_ids_gpu)) + gate_mask_gpu = gate_mask_gpu & valid + with torch.autocast(device_type="cuda", dtype=torch.bfloat16): + if hint_ids_gpu is not None: + per_tok_loss, log_q_hint = forward_ttt_train( + x, y, lora=cur_lora, hint_ids=hint_ids_gpu + ) + else: + per_tok_loss = forward_ttt_train(x, y, lora=cur_lora) + log_q_hint = None + # CaseOps sidecar-driven byte budget. Mirror the index pattern + # used to build y from all_tokens: y[b, j] corresponds to the + # token at global position tok_starts[b] + 1 + j (when valid). + y_bytes_arg = None + if val_data.caseops_enabled and val_data.val_bytes is not None: + y_idx = ( + tok_starts.unsqueeze(1) + + 1 + + col_idx[:context_size].unsqueeze(0) + ) + y_idx = y_idx.clamp_(max=val_data.val_bytes.numel() - 1) + y_bytes_arg = val_data.val_bytes[y_idx].to( + device=device, dtype=torch.int32, non_blocking=True + ) + # Mirror the `valid` masking used for y so out-of-range tokens + # contribute zero bytes (matches y=0 substitution above). 
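+ # Worked example of the index arithmetic: with tok_starts[b] = 100 and
+ # j = 0, y[b, 0] is the token at global position 101, so its byte cost
+ # is looked up at val_bytes[101] via y_idx = tok_starts + 1 + col_idx.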
+ y_bytes_arg = torch.where( + valid, y_bytes_arg, torch.zeros_like(y_bytes_arg) + ) + if hint_ids_gpu is not None and log_q_hint is not None: + from online_ngram_tilt import apply_tilt_to_ptl_torch_fast as apply_tilt_to_ptl_torch + score_loss = apply_tilt_to_ptl_torch( + ptl=per_tok_loss, + log_q_hint=log_q_hint, + target_ids=y, + hint_ids=hint_ids_gpu, + gate_mask=gate_mask_gpu, + boost=boost_gpu, + ) + else: + score_loss = per_tok_loss + with torch.no_grad(): + _accumulate_bpb( + score_loss, + x, + y, + chunk_offsets, + chunk_lens, + ctx_pos, + val_data.base_bytes_lut, + val_data.has_leading_space_lut, + val_data.is_boundary_token_lut, + loss_sum, + byte_sum, + token_count, + y_bytes=y_bytes_arg, + ) + if needs_train: + activate_chunk_mask = (num_chunks_t - 1 > ci).float() * train_doc_mask_t + for gi in range(h.ttt_grad_steps): + if gi > 0: + with torch.autocast(device_type="cuda", dtype=torch.bfloat16): + if hint_ids_gpu is not None: + per_tok_loss, _ = forward_ttt_train( + x, y, lora=cur_lora, hint_ids=hint_ids_gpu + ) + else: + per_tok_loss = forward_ttt_train(x, y, lora=cur_lora) + per_doc = per_tok_loss[ + :, chunk_offset : chunk_offset + batch_chunk_size + ].mean(dim=-1) + cur_opt.zero_grad(set_to_none=True) + (per_doc * activate_chunk_mask).sum().backward() + cur_opt.step() + else: + del per_tok_loss + if h.ttt_warm_start_mean_enabled: + warm_lora_template = _update_lora_template(warm_lora_template, cur_lora) + batch_num = orig_batch_idx + 1 + should_report = batch_num in eval_batch_set if eval_batch_set is not None else True + if should_report: + cur_tokens = token_count.item() + cur_loss_val = loss_sum.item() + cur_bytes_val = byte_sum.item() + dt = cur_tokens - prev_tokens + db = cur_bytes_val - prev_bytes + if dt > 0 and db > 0: + b_loss = (cur_loss_val - prev_loss) / dt + b_bpb = b_loss / math.log(2.0) * (dt / db) + else: + b_loss = b_bpb = 0.0 + r_loss = cur_loss_val / max(cur_tokens, 1) + r_bpb = r_loss / math.log(2.0) * (cur_tokens / max(cur_bytes_val, 1)) + elapsed = time.perf_counter() - t_start + log( + f"ttp: b{batch_num}/{queue_len} bl:{b_loss:.4f} bb:{b_bpb:.4f} " + f"rl:{r_loss:.4f} rb:{r_bpb:.4f} dl:{min(doc_lens)}-{max(doc_lens)} " + f"gd:{int(global_ttt_done)} sr:{int(use_short_lora)} " + f"sf:{int(use_short_chunks)} tr:{sum(train_doc_allowed)}/{bsz} " + f"wt:{int(template_used)}" + ) + if not global_ttt_done: + local_scored_docs.extend( + (orig_batch_idx, pos, doc_start, doc_len) + for pos, (doc_start, doc_len) in enumerate(batch) + if train_doc_allowed[pos] + ) + prefix_done = _add_to_counter(prefix_counter_path, len(batch_entries)) + if prefix_done >= current_phase_boundary: + try: + with open(pause_flag_path, "x"): + pass + except FileExistsError: + pass + should_pause = os.path.exists(pause_flag_path) + if should_pause: + if dist.is_available() and dist.is_initialized(): + dist.barrier() + gathered_scored_docs = [None] * h.world_size + if dist.is_available() and dist.is_initialized(): + dist.all_gather_object(gathered_scored_docs, local_scored_docs) + else: + gathered_scored_docs = [local_scored_docs] + scored_docs_for_global = [] + for rank_docs in gathered_scored_docs: + if rank_docs: + scored_docs_for_global.extend(rank_docs) + scored_docs_for_global.sort(key=lambda x: (x[0], x[1])) + scored_docs_for_global = scored_docs_for_global[:current_phase_boundary] + scored_token_chunks = [ + val_data.val_tokens[doc_start : doc_start + doc_len] + for _, _, doc_start, doc_len in scored_docs_for_global + ] + if scored_token_chunks: + global_ttt_tokens = 
torch.cat(scored_token_chunks) + else: + global_ttt_tokens = val_data.val_tokens[:0] + if h.rank == 0: + prefix_done = 0 + try: + with open(prefix_counter_path, "rb") as f: + prefix_done = int.from_bytes( + f.read(8), "little", signed=True + ) + except FileNotFoundError: + pass + log( + f"ttpp: phase:{current_phase + 1}/{num_phases} pd:{prefix_done} " + f"gd:{len(scored_docs_for_global)} " + f"t:{time.perf_counter() - t_start:.1f}s" + ) + train_val_ttt_global_sgd_distributed( + h, device, val_data, base_model, global_ttt_tokens + ) + for p in base_model.parameters(): + p.requires_grad_(False) + reusable_lora = BatchedTTTLoRA( + h.ttt_batch_size, base_model, h.ttt_lora_rank, + q_lora=h.ttt_q_lora, k_lora=h.ttt_k_lora, v_lora=h.ttt_v_lora, + mlp_lora=h.ttt_mlp_lora, o_lora=h.ttt_o_lora, + ).to(device) + reusable_opt = _build_opt(reusable_lora) + reusable_short_lora = None + reusable_short_opt = None + current_phase += 1 + if current_phase >= num_phases: + global_ttt_done = True + else: + current_phase_boundary = phase_boundaries[current_phase] + if h.rank == 0: + try: + os.remove(pause_flag_path) + except FileNotFoundError: + pass + if dist.is_available() and dist.is_initialized(): + dist.barrier() + if h.rank == 0: + log(f"ttpr: phase:{current_phase}/{num_phases} t:{time.perf_counter() - t_start:.1f}s") + del cur_lora, cur_opt + finally: + pass + if dist.is_available() and dist.is_initialized(): + dist.all_reduce(loss_sum, op=dist.ReduceOp.SUM) + dist.all_reduce(byte_sum, op=dist.ReduceOp.SUM) + dist.all_reduce(token_count, op=dist.ReduceOp.SUM) + for p in base_model.parameters(): + p.requires_grad_(True) + base_model.train() + return _loss_bpb_from_sums(loss_sum, token_count, byte_sum) + + +def timed_eval(label, fn, *args, **kwargs): + torch.cuda.synchronize() + t0 = time.perf_counter() + val_loss, val_bpb = fn(*args, **kwargs) + torch.cuda.synchronize() + elapsed_ms = 1e3 * (time.perf_counter() - t0) + log( + f"{label} val_loss:{val_loss:.8f} val_bpb:{val_bpb:.8f} eval_time:{elapsed_ms:.0f}ms" + ) + return val_loss, val_bpb + + +def train_model(h, device, val_data): + base_model = GPT(h).to(device).bfloat16() + restore_fp32_params(base_model) + compiled_model = torch.compile(base_model, dynamic=False, fullgraph=True) + compiled_forward_logits = torch.compile( + base_model.forward_logits, dynamic=False, fullgraph=True + ) + model = compiled_model + log(f"model_params:{sum(p.numel()for p in base_model.parameters())}") + optimizers = Optimizers(h, base_model) + train_loader = DocumentPackingLoader(h, device) + train_seq_plan = parse_train_seq_schedule(h.train_seq_schedule, h.train_seq_len) + midrun_cap_plan = parse_scalar_schedule(h.midrun_cap_schedule, 1.0) + max_train_seq_len = max_train_seq_len_from_schedule(train_seq_plan, h.train_seq_len) + if max_train_seq_len != h.train_seq_len: + raise ValueError( + f"TRAIN_SEQ_LEN={h.train_seq_len} must match the maximum sequence length in " + f"TRAIN_SEQ_SCHEDULE ({max_train_seq_len})" + ) + local_microbatch_tokens = validate_train_seq_plan_compatibility( + train_seq_plan, + global_tokens=h.train_batch_tokens, + world_size=h.world_size, + grad_accum_steps=h.grad_accum_steps, + ) + log( + "train_seq_schedule:" + + ",".join((f"{seq_len}@{threshold:.3f}" for threshold, seq_len in train_seq_plan)) + ) + if h.midrun_cap_schedule: + log( + "midrun_cap_schedule:" + + ",".join( + (f"{value:.3f}@{threshold:.3f}" for threshold, value in midrun_cap_plan) + ) + ) + log(f"local_microbatch_tokens:{local_microbatch_tokens}") + active_train_seq_len = 
train_seq_plan[0][1]
+ seq_change_warmup_start_step = None
+ midrun_cap_active = False
+ midrun_cap_prev_scale = schedule_value(midrun_cap_plan, 0.0)
+ log(f"growth_stage:seq_len:{active_train_seq_len} progress:0.000")
+ max_wallclock_ms = (
+ 1e3 * h.max_wallclock_seconds if h.max_wallclock_seconds > 0 else None
+ )
+ if max_wallclock_ms is not None:
+ max_wallclock_ms -= h.gptq_reserve_seconds * 1e3
+ log(
+ f"gptq:reserving {h.gptq_reserve_seconds:.0f}s, effective={max_wallclock_ms:.0f}ms"
+ )
+
+ def training_frac(step, elapsed_ms):
+ if max_wallclock_ms is None:
+ return step / max(h.iterations, 1)
+ return elapsed_ms / max(max_wallclock_ms, 1e-09)
+
+ def lr_mul(step, elapsed_ms, frac):
+ if h.warmdown_iters > 0:
+ if max_wallclock_ms is None:
+ warmdown_start = max(h.iterations - h.warmdown_iters, 0)
+ if warmdown_start <= step < h.iterations:
+ return max(
+ (h.iterations - step) / max(h.warmdown_iters, 1),
+ h.min_lr,
+ )
+ return 1.0
+ step_ms = elapsed_ms / max(step, 1)
+ warmdown_ms = h.warmdown_iters * step_ms
+ remaining_ms = max(max_wallclock_ms - elapsed_ms, 0.0)
+ if remaining_ms <= warmdown_ms:
+ return max(remaining_ms / max(warmdown_ms, 1e-9), h.min_lr)
+ return 1.0
+ if h.warmdown_frac <= 0:
+ return 1.0
+ if frac >= 1.0 - h.warmdown_frac:
+ return max((1.0 - frac) / h.warmdown_frac, h.min_lr)
+ return 1.0
+
+ _clip_params = [p for p in base_model.parameters() if p.requires_grad]
+ def step_fn(step, lr_scale):
+ train_loss = torch.zeros((), device=device)
+ for micro_step in range(h.grad_accum_steps):
+ x, y, cu_seqlens, _max_seqlen = train_loader.next_batch(
+ h.train_batch_tokens, h.grad_accum_steps, active_train_seq_len
+ )
+ with torch.autocast(device_type="cuda", dtype=torch.bfloat16, enabled=True):
+ loss = model(
+ x, y, cu_seqlens=cu_seqlens, max_seqlen=active_train_seq_len
+ )
+ train_loss += loss.detach()
+ (loss / h.grad_accum_steps).backward()
+ train_loss /= h.grad_accum_steps
+ if step <= h.muon_momentum_warmup_steps:
+ frac = (
+ min(step / h.muon_momentum_warmup_steps, 1.0)
+ if h.muon_momentum_warmup_steps > 0
+ else 1.0
+ )
+ muon_momentum = (
+ 1 - frac
+ ) * h.muon_momentum_warmup_start + frac * h.muon_momentum
+ for group in optimizers.optimizer_muon.param_groups:
+ group["momentum"] = muon_momentum
+ for opt in optimizers:
+ for group in opt.param_groups:
+ group["lr"] = group["base_lr"] * lr_scale
+ if h.grad_clip_norm > 0:
+ torch.nn.utils.clip_grad_norm_(_clip_params, h.grad_clip_norm)
+ optimizers.step(distributed=h.distributed)
+ return train_loss
+
+ if h.warmup_steps > 0:
+ initial_model_state = {
+ name: tensor.detach().cpu().clone()
+ for (name, tensor) in base_model.state_dict().items()
+ }
+ initial_optimizer_states = [
+ copy.deepcopy(opt.state_dict()) for opt in optimizers
+ ]
+ model.train()
+ num_tokens_local = h.train_batch_tokens // h.world_size
+ for blk in base_model.blocks:
+ blk.attn.rotary(num_tokens_local, device, torch.bfloat16)
+ cu_bucket_size = train_loader.cu_bucket_size
+ warmup_cu_buckets = tuple(cu_bucket_size * i for i in range(1, 5))
+ warmup_cu_iters = 3
+ x, y, cu_seqlens, _ = train_loader.next_batch(
+ h.train_batch_tokens, h.grad_accum_steps, active_train_seq_len
+ )
+ log(f"warmup_cu_buckets:{','.join(str(b) for b in warmup_cu_buckets)} iters_each:{warmup_cu_iters}")
+
+ def _compile_warmup_work_items():
+ if not h.compile_shape_warmup:
+ items = [(h.train_seq_len, False)]
+ if h.num_loops > 0:
+ items.append((h.train_seq_len, True))
+ return items
+ loop_mode = 
h.compile_shape_warmup_loop_modes + if loop_mode not in {"auto", "inactive", "active", "both"}: + raise ValueError( + "COMPILE_SHAPE_WARMUP_LOOP_MODES must be one of auto,inactive,active,both" + ) + items = [] + stage_start = 0.0 + for stage_end, seq_len in train_seq_plan: + if loop_mode == "inactive" or h.num_loops <= 0: + modes = [False] + elif loop_mode == "active": + modes = [True] + elif loop_mode == "both": + modes = [False, True] + else: + modes = [] + if stage_start < h.enable_looping_at: + modes.append(False) + if stage_end > h.enable_looping_at: + modes.append(True) + if not modes: + modes.append(False) + for loop_active in modes: + item = (seq_len, loop_active) + if item not in items: + items.append(item) + stage_start = stage_end + return items + + def _run_cu_bucket_warmup(seq_len): + for bucket_len in warmup_cu_buckets: + boundaries = list(range(0, x.size(1), max(seq_len, 1))) + if boundaries[-1] != x.size(1): + boundaries.append(x.size(1)) + if bucket_len < len(boundaries): + continue + cu = torch.full((bucket_len,), x.size(1), dtype=torch.int32, device=device) + cu[: len(boundaries)] = torch.tensor(boundaries, dtype=torch.int32, device=device) + for _ in range(warmup_cu_iters): + optimizers.zero_grad_all() + with torch.autocast(device_type="cuda", dtype=torch.bfloat16, enabled=True): + wloss = model(x, y, cu_seqlens=cu, max_seqlen=seq_len) + (wloss / h.grad_accum_steps).backward() + optimizers.zero_grad_all() + + warmup_items = _compile_warmup_work_items() + if h.compile_shape_warmup: + log( + "compile_shape_warmup:start " + + ",".join( + (f"{seq}x{'loop' if loop else 'plain'}" for seq, loop in warmup_items) + ) + ) + for seq_len, loop_active in warmup_items: + base_model.looping_active = bool(loop_active) + if h.compile_shape_warmup: + log( + f"compile_shape_warmup:shape seq_len:{seq_len} loop:{int(loop_active)}" + ) + for _ in range(max(h.compile_shape_warmup_iters if h.compile_shape_warmup else 1, 1)): + _run_cu_bucket_warmup(seq_len) + base_model.looping_active = False + for warmup_step in range(h.warmup_steps): + step_fn(warmup_step, 1.0) + if ( + warmup_step <= 5 + or (warmup_step + 1) % 10 == 0 + or warmup_step + 1 == h.warmup_steps + ): + log(f"warmup_step: {warmup_step+1}/{h.warmup_steps}") + if h.num_loops > 0: + base_model.looping_active = True + log( + f"loop_warmup:enabled encoder:{base_model.encoder_indices} decoder:{base_model.decoder_indices}" + ) + for warmup_step in range(h.warmup_steps): + step_fn(warmup_step, 1.0) + if ( + warmup_step <= 5 + or (warmup_step + 1) % 10 == 0 + or warmup_step + 1 == h.warmup_steps + ): + log(f"loop_warmup_step: {warmup_step+1}/{h.warmup_steps}") + base_model.looping_active = False + base_model.load_state_dict(initial_model_state, strict=True) + for (opt, state) in zip(optimizers, initial_optimizer_states, strict=True): + opt.load_state_dict(state) + optimizers.zero_grad_all() + train_loader = DocumentPackingLoader(h, device) + _live_state = base_model.state_dict(keep_vars=True) + ema_state = { + name: t.detach().float().clone() + for (name, t) in _live_state.items() + } + _ema_pairs = [(ema_state[name], t) for (name, t) in _live_state.items()] + ema_decay = h.ema_decay + training_time_ms = 0.0 + forced_stop_step = int(os.environ.get("FORCE_STOP_STEP", "0")) + stop_after_step = forced_stop_step if forced_stop_step > 0 else None + torch.cuda.synchronize() + t0 = time.perf_counter() + step = 0 + while True: + last_step = ( + step == h.iterations + or stop_after_step is not None + and step >= stop_after_step + ) + 
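+ # `last_step` fires either at the configured iteration budget or at the
+ # step pinned by stop_after_step (FORCE_STOP_STEP, or the wallclock cap
+ # check further below); in both cases the loop performs one final
+ # validation before breaking.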
should_validate = ( + last_step or h.val_loss_every > 0 and step % h.val_loss_every == 0 + ) + if should_validate: + torch.cuda.synchronize() + training_time_ms += 1e3 * (time.perf_counter() - t0) + val_loss, val_bpb = eval_val( + h, device, val_data, model, compiled_forward_logits + ) + log( + f"{step}/{h.iterations} val_loss: {val_loss:.4f} val_bpb: {val_bpb:.4f}" + ) + torch.cuda.synchronize() + t0 = time.perf_counter() + if last_step: + if stop_after_step is not None and step < h.iterations: + log( + f"stopping_early: wallclock_cap train_time: {training_time_ms:.0f}ms step: {step}/{h.iterations}" + ) + break + elapsed_ms = training_time_ms + 1e3 * (time.perf_counter() - t0) + stage_seq_len, frac = current_train_seq_len( + train_seq_plan, + step=step, + iterations=h.iterations, + elapsed_ms=elapsed_ms, + max_wallclock_ms=max_wallclock_ms, + schedule_mode=h.train_seq_schedule_mode, + ) + if stage_seq_len != active_train_seq_len: + active_train_seq_len = stage_seq_len + log(f"growth_stage:seq_len:{active_train_seq_len} progress:{frac:.3f} step:{step}") + if h.seq_change_warmup_steps > 0 and step > 0: + seq_change_warmup_start_step = step + log( + f"growth_stage_rewarmup:start step:{step} steps:{h.seq_change_warmup_steps} " + f"seq_len:{active_train_seq_len}" + ) + scale = lr_mul(step, elapsed_ms, frac) + cap_scale = schedule_value(midrun_cap_plan, frac) + cap_active = cap_scale < 0.999999 + if cap_active and not midrun_cap_active: + log(f"midrun_cap:start step:{step} progress:{frac:.3f} scale:{cap_scale:.3f}") + elif ( + cap_active + and h.midrun_cap_log_updates + and abs(cap_scale - midrun_cap_prev_scale) > 1e-6 + ): + log(f"midrun_cap:update step:{step} progress:{frac:.3f} scale:{cap_scale:.3f}") + if cap_active: + scale *= cap_scale + midrun_cap_active = cap_active + midrun_cap_prev_scale = cap_scale + if seq_change_warmup_start_step is not None and h.seq_change_warmup_steps > 0: + rewarm_progress = min( + max( + (step - seq_change_warmup_start_step + 1) + / max(h.seq_change_warmup_steps, 1), + 0.0, + ), + 1.0, + ) + scale *= rewarm_progress + if rewarm_progress >= 1.0: + seq_change_warmup_start_step = None + if ( + h.num_loops > 0 + and not base_model.looping_active + and frac >= h.enable_looping_at + ): + base_model.looping_active = True + log( + f"layer_loop:enabled step:{step} frac:{frac:.3f} encoder:{base_model.encoder_indices} decoder:{base_model.decoder_indices}" + ) + train_loss = step_fn(step, scale) + with torch.no_grad(): + for ema_t, t in _ema_pairs: + ema_t.mul_(ema_decay).add_(t.detach(), alpha=1.0 - ema_decay) + step += 1 + approx_training_time_ms = training_time_ms + 1e3 * (time.perf_counter() - t0) + should_log_train = h.train_log_every > 0 and ( + step <= 5 or step % h.train_log_every == 0 or stop_after_step is not None + ) + if should_log_train: + tok_per_sec = step * h.train_batch_tokens / (approx_training_time_ms / 1e3) + log( + f"{step}/{h.iterations} train_loss: {train_loss.item():.4f} train_time: {approx_training_time_ms/60000:.1f}m tok/s: {tok_per_sec:.0f}" + ) + reached_cap = ( + forced_stop_step <= 0 + and max_wallclock_ms is not None + and approx_training_time_ms >= max_wallclock_ms + ) + if h.distributed and forced_stop_step <= 0 and max_wallclock_ms is not None: + reached_cap_tensor = torch.tensor(int(reached_cap), device=device) + dist.all_reduce(reached_cap_tensor, op=dist.ReduceOp.MAX) + reached_cap = bool(reached_cap_tensor.item()) + if stop_after_step is None and reached_cap: + stop_after_step = step + log( + f"peak memory allocated: 
{torch.cuda.max_memory_allocated()//1024//1024} MiB reserved: {torch.cuda.max_memory_reserved()//1024//1024} MiB" + ) + if h.ema_decay <= 0: + log("averaging:none keeping current weights") + return base_model, compiled_model, compiled_forward_logits + log("ema:applying EMA weights") + current_state = base_model.state_dict() + avg_state = { + name: t.to(dtype=current_state[name].dtype) for (name, t) in ema_state.items() + } + base_model.load_state_dict(avg_state, strict=True) + return base_model, compiled_model, compiled_forward_logits + + +def train_and_eval(h, device): + global BOS_ID + random.seed(h.seed) + np.random.seed(h.seed) + torch.manual_seed(h.seed) + torch.cuda.manual_seed_all(h.seed) + if h.artifact_dir and h.is_main_process: + os.makedirs(h.artifact_dir, exist_ok=True) + val_data = ValidationData(h, device) + log( + f"train_shards: {len(list(Path(h.datasets_dir).resolve().glob('fineweb_train_*.bin')))}" + ) + log(f"val_tokens: {val_data.val_tokens.numel()-1}") + # TTT_EVAL_ONLY: skip training + GPTQ, jump straight to TTT eval on a + # pre-existing quantized artifact. Used to test TTT-only improvements + # (e.g., PR-1767's alpha/warm-start/WD) without retraining. + ttt_eval_only = os.environ.get("TTT_EVAL_ONLY", "0") == "1" + quantize_only = os.environ.get("QUANTIZE_ONLY", "0") == "1" + if ttt_eval_only: + log("TTT_EVAL_ONLY=1 — skipping training + GPTQ, loading saved artifact for TTT eval") + log(f"ttt_lora_alpha: {BatchedLinearLoRA._ALPHA}") + log(f"ttt_warm_start_a: {BatchedLinearLoRA._WARM_START_A}") + log(f"ttt_weight_decay: {h.ttt_weight_decay}") + elif quantize_only: + log("QUANTIZE_ONLY=1 — skipping training, loading saved full-precision checkpoint") + log(f"quantize_only checkpoint: {h.model_path}") + if BOS_ID is None: + BOS_ID = 1 + base_model = GPT(h).to(device).bfloat16() + state = torch.load(h.model_path, map_location="cpu") + template = base_model.state_dict() + for key in ("softcap_pos", "softcap_neg"): + if key not in state and key in template: + state[key] = template[key].detach().cpu().clone() + log(f"quantize_only:added neutral missing {key}") + base_model.load_state_dict(state, strict=True) + del state + serialize(h, base_model, Path(__file__).read_text(encoding="utf-8")) + if h.distributed: + dist.barrier() + else: + base_model, compiled_model, compiled_forward_logits = train_model( + h, device, val_data + ) + torch._dynamo.reset() + timed_eval( + "diagnostic pre-quantization post-ema", + eval_val, + h, + device, + val_data, + compiled_model, + compiled_forward_logits, + ) + if os.environ.get("PREQUANT_ONLY", "0") == "1": + log("PREQUANT_ONLY=1 — skipping serialize/GPTQ/post-quant eval/TTT") + return + serialize(h, base_model, Path(__file__).read_text(encoding="utf-8")) + if h.distributed: + dist.barrier() + eval_model = deserialize(h, device) + if h.num_loops > 0: + eval_model.looping_active = True + if not ttt_eval_only: + compiled_model = torch.compile(eval_model, dynamic=False, fullgraph=True) + compiled_forward_logits = torch.compile( + eval_model.forward_logits, dynamic=False, fullgraph=True + ) + timed_eval( + "diagnostic quantized", + eval_val, + h, + device, + val_data, + compiled_model, + compiled_forward_logits, + ) + del eval_model + if h.ttt_enabled: + if not ttt_eval_only: + del compiled_model + if ttt_eval_only: + del eval_model + torch._dynamo.reset() + torch.cuda.empty_cache() + ttt_model = deserialize(h, device) + if h.num_loops > 0: + ttt_model.looping_active = True + for p in ttt_model.parameters(): + p.requires_grad_(False) + + if 
h.rope_yarn: + _yarn_seqlen = h.train_batch_tokens // h.grad_accum_steps + for block in ttt_model.blocks: + block.attn.rotary(_yarn_seqlen, device, torch.bfloat16) + else: + for block in ttt_model.blocks: + block.attn.rotary._cos_cached = None + block.attn.rotary._sin_cached = None + block.attn.rotary._seq_len_cached = 0 + block.attn.rotary(h.ttt_eval_seq_len, device, torch.bfloat16) + + def _fwd_ttt_inner(input_ids, target_ids, lora): + return ttt_model.forward_ttt(input_ids, target_ids, lora=lora) + + def _fwd_ttt_inner_with_hints(input_ids, target_ids, lora, hint_ids): + return ttt_model.forward_ttt( + input_ids, target_ids, lora=lora, hint_ids=hint_ids + ) + + _fwd_ttt_compiled_inner = None + _fwd_ttt_compiled_inner_with_hints = None + + def _fwd_ttt(input_ids, target_ids, lora, hint_ids=None): + nonlocal _fwd_ttt_compiled_inner, _fwd_ttt_compiled_inner_with_hints + if hint_ids is None: + if _fwd_ttt_compiled_inner is None: + _fwd_ttt_compiled_inner = torch.compile(_fwd_ttt_inner, dynamic=True) + return _fwd_ttt_compiled_inner(input_ids, target_ids, lora=lora) + if _fwd_ttt_compiled_inner_with_hints is None: + _fwd_ttt_compiled_inner_with_hints = torch.compile( + _fwd_ttt_inner_with_hints, dynamic=True + ) + return _fwd_ttt_compiled_inner_with_hints( + input_ids, target_ids, lora=lora, hint_ids=hint_ids + ) + + fwd_ttt_compiled = _fwd_ttt + log(f"ttt_lora:warming up compile (random tokens, no val data)") + if BOS_ID is None: + BOS_ID = 1 + t_warmup = time.perf_counter() + warmup_bszes = [h.ttt_batch_size] + for bsz in warmup_bszes: + wl = BatchedTTTLoRA( + bsz, ttt_model, h.ttt_lora_rank, + q_lora=h.ttt_q_lora, k_lora=h.ttt_k_lora, v_lora=h.ttt_v_lora, + mlp_lora=h.ttt_mlp_lora, o_lora=h.ttt_o_lora, + ).to(device) + wo = torch.optim.AdamW( + wl.parameters(), + lr=h.ttt_lora_lr * h.ttt_local_lr_mult, + betas=(h.ttt_beta1, h.ttt_beta2), + eps=1e-10, + weight_decay=h.ttt_weight_decay, + fused=True, + ) + warmup_ctx_lens = [h.ttt_chunk_size, h.ttt_eval_seq_len] + if ( + h.ttt_short_score_first_enabled + and h.ttt_short_chunk_size not in warmup_ctx_lens + ): + warmup_ctx_lens.insert(0, h.ttt_short_chunk_size) + for item in str(h.ttt_short_score_first_steps).split(","): + item = item.strip() + if not item: + continue + sep = ":" if ":" in item else "=" + if sep not in item: + continue + _, chunk_raw = item.split(sep, 1) + step_chunk = int(chunk_raw.strip()) + if step_chunk > 0 and step_chunk not in warmup_ctx_lens: + warmup_ctx_lens.insert(0, step_chunk) + for ctx_len in warmup_ctx_lens: + xw = torch.randint(0, h.vocab_size, (bsz, ctx_len), device=device, dtype=torch.int64) + yw = torch.randint(0, h.vocab_size, (bsz, ctx_len), device=device, dtype=torch.int64) + with torch.autocast(device_type="cuda", dtype=torch.bfloat16): + ptl = fwd_ttt_compiled(xw, yw, lora=wl) + ptl[:, : min(h.ttt_chunk_size, ctx_len)].mean(dim=-1).sum().backward() + wo.step() + wo.zero_grad(set_to_none=True) + if h.ngram_tilt_enabled: + with torch.autocast(device_type="cuda", dtype=torch.bfloat16): + ptl_hint, _ = fwd_ttt_compiled( + xw, yw, lora=wl, hint_ids=yw + ) + ptl_hint[:, : min(h.ttt_chunk_size, ctx_len)].mean(dim=-1).sum().backward() + wo.step() + wo.zero_grad(set_to_none=True) + del wl, wo + torch.cuda.empty_cache() + compile_elapsed = time.perf_counter() - t_warmup + log(f"ttt_lora:compile warmup done ({compile_elapsed:.1f}s)") + precomputed_hints = None + if h.ngram_tilt_enabled and h.ngram_hint_precompute_outside: + log("ngram_tilt:precomputing hints outside eval timer") + precomputed_hints = 
_compute_ngram_hints_for_val(h, val_data, log0=log) + log("\nbeginning TTT eval timer") + torch.cuda.synchronize() + t_ttt = time.perf_counter() + ttt_val_loss, ttt_val_bpb = eval_val_ttt_phased( + h, + ttt_model, + device, + val_data, + forward_ttt_train=fwd_ttt_compiled, + precomputed_hints=precomputed_hints, + ) + torch.cuda.synchronize() + ttt_eval_elapsed = time.perf_counter() - t_ttt + log( + "quantized_ttt_phased " + f"val_loss:{ttt_val_loss:.8f} val_bpb:{ttt_val_bpb:.8f} " + f"eval_time:{1e3*ttt_eval_elapsed:.0f}ms" + ) + log(f"total_eval_time:{ttt_eval_elapsed:.1f}s") + del ttt_model + + +def main(): + world_size = int(os.environ.get("WORLD_SIZE", "1")) + local_rank = int(os.environ.get("LOCAL_RANK", "0")) + distributed = "RANK" in os.environ and "WORLD_SIZE" in os.environ + if not torch.cuda.is_available(): + raise RuntimeError("CUDA is required") + if world_size <= 0: + raise ValueError(f"WORLD_SIZE must be positive, got {world_size}") + if 8 % world_size != 0: + raise ValueError( + f"WORLD_SIZE={world_size} must divide 8 so grad_accum_steps stays integral" + ) + device = torch.device("cuda", local_rank) + torch.cuda.set_device(device) + if distributed: + dist.init_process_group(backend="nccl", device_id=device) + dist.barrier() + torch.backends.cuda.matmul.allow_tf32 = True + torch.backends.cudnn.allow_tf32 = True + torch.set_float32_matmul_precision("high") + from torch.backends.cuda import ( + enable_cudnn_sdp, + enable_flash_sdp, + enable_math_sdp, + enable_mem_efficient_sdp, + ) + + enable_cudnn_sdp(False) + enable_flash_sdp(True) + enable_mem_efficient_sdp(False) + enable_math_sdp(False) + torch._dynamo.config.optimize_ddp = False + dynamo_cache_size_limit = int(os.environ.get("DYNAMO_CACHE_SIZE_LIMIT", "128")) + torch._dynamo.config.cache_size_limit = dynamo_cache_size_limit + if hasattr(torch._dynamo.config, "recompile_limit"): + torch._dynamo.config.recompile_limit = dynamo_cache_size_limit + if hasattr(torch._dynamo.config, "accumulated_cache_size_limit"): + torch._dynamo.config.accumulated_cache_size_limit = max( + int(os.environ.get("DYNAMO_ACCUMULATED_CACHE_SIZE_LIMIT", "1024")), + dynamo_cache_size_limit, + ) + h = Hyperparameters() + set_logging_hparams(h) + # ml-intern PR2014-evolution: install LeakyReLU-square slope process-globally + # before any model forward. Default 0.5 keeps PR #2014 byte-identical when + # the env var is unset. 
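+ # Illustrative numeric check (assuming the fused/eager path computes
+ # leaky_relu(x, slope) ** 2): a negative pre-activation x = -2.0 maps to
+ # (0.3 * -2.0) ** 2 = 0.36 under the new slope versus
+ # (0.5 * -2.0) ** 2 = 1.0 under the old one; positive inputs are unchanged.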
+ set_leaky_relu_sq_slope(h.leaky_relu_sq_slope) + if h.is_main_process: + os.makedirs(h.artifact_dir if h.artifact_dir else "logs", exist_ok=True) + log(100 * "=", console=False) + log("Hyperparameters:", console=True) + for (k, v) in sorted(vars(type(h)).items()): + if not k.startswith("_"): + log(f" {k}: {v}", console=True) + log("=" * 100, console=False) + log("Source code:", console=False) + log("=" * 100, console=False) + with open(__file__, "r", encoding="utf-8") as _src: + log(_src.read(), console=False) + log("=" * 100, console=False) + log(f"Running Python {sys.version}", console=False) + log(f"Running PyTorch {torch.__version__}", console=False) + log("=" * 100, console=False) + train_and_eval(h, device) + if distributed: + dist.destroy_process_group() + + +if __name__ == "__main__": + main() diff --git a/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/train_seed42.log b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/train_seed42.log new file mode 100644 index 0000000000..a25713962b --- /dev/null +++ b/records/track_10min_16mb/2026-05-01_PR2014_Leaky03_InTimerNgramTTT_1.0555/train_seed42.log @@ -0,0 +1,2661 @@ +W0501 20:49:32.897000 343740 torch/distributed/run.py:803] +W0501 20:49:32.897000 343740 torch/distributed/run.py:803] ***************************************** +W0501 20:49:32.897000 343740 torch/distributed/run.py:803] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. +W0501 20:49:32.897000 343740 torch/distributed/run.py:803] ***************************************** +Hyperparameters: + adam_eps: 1e-08 + adam_wd: 0.02 + artifact_dir: /workspace/parameter-golf-pr2014-gated-clean/records/pr2014_evo_lrelu03_only_authorhf_seed42/seed42 + attn_clip_sigmas: 13.0 + attn_out_gate_enabled: False + attn_out_gate_src: proj + awq_lite_bits: 8 + awq_lite_enabled: True + awq_lite_group_size: 64 + awq_lite_group_top_k: 1 + beta1: 0.9 + beta2: 0.99 + caseops_enabled: True + compile_shape_warmup: True + compile_shape_warmup_iters: 1 + compile_shape_warmup_loop_modes: auto + compressor: pergroup + data_dir: ./data + datasets_dir: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved + distributed: True + ema_decay: 0.9965 + embed_bits: 7 + embed_clip_sigmas: 14.0 + embed_lr: 0.6 + embed_wd: 0.085 + enable_looping_at: 0.35 + eval_include_tail: True + eval_seq_len: 3072 + eval_stride: 1536 + fused_ce_enabled: True + gate_window: 12 + gated_attn_enabled: False + gated_attn_init_std: 0.01 + gated_attn_quant_gate: True + global_ttt_batch_seqs: 32 + global_ttt_chunk_tokens: 32768 + global_ttt_epochs: 1 + global_ttt_grad_clip: 1.0 + global_ttt_lr: 0.001 + global_ttt_momentum: 0.9 + global_ttt_respect_doc_boundaries: True + global_ttt_warmup_chunks: 0 + global_ttt_warmup_start_lr: 0.0 + gptq_calibration_batches: 16 + gptq_reserve_seconds: 4.0 + grad_accum_steps: 1 + grad_clip_norm: 0.3 + is_main_process: True + iterations: 20000 + leaky_relu_sq_slope: 0.3 + ln_scale: True + local_rank: 0 + logfile: /workspace/parameter-golf-pr2014-gated-clean/records/pr2014_evo_lrelu03_only_authorhf_seed42/seed42/pr2014_evo_skylight_lrelu03_s42.txt + logit_softcap: 30.0 + loop_end: 5 + loop_start: 3 + lqer_asym_enabled: True + lqer_asym_group: 64 + lqer_enabled: True + lqer_factor_bits: 4 + lqer_gain_select: False + lqer_rank: 4 + lqer_scope: all + lqer_top_k: 3 
+ matrix_bits: 6 + matrix_clip_sigmas: 12.85 + matrix_lr: 0.026 + max_wallclock_seconds: 600.0 + midrun_cap_log_updates: False + midrun_cap_schedule: + min_lr: 0.1 + mlp_clip_sigmas: 11.5 + mlp_mult: 4.0 + model_dim: 512 + model_path: /workspace/parameter-golf-pr2014-gated-clean/records/pr2014_evo_lrelu03_only_authorhf_seed42/seed42/final_model.pt + muon_backend_steps: 5 + muon_momentum: 0.97 + muon_momentum_warmup_start: 0.92 + muon_momentum_warmup_steps: 1500 + muon_row_normalize: True + muon_wd: 0.095 + num_heads: 8 + num_kv_heads: 4 + num_layers: 11 + num_loops: 2 + parallel_final_lane: mean + parallel_start_layer: 8 + phased_ttt_num_phases: 1 + phased_ttt_prefix_docs: 2500 + qk_gain_init: 5.25 + quantized_model_path: /workspace/parameter-golf-pr2014-gated-clean/records/pr2014_evo_lrelu03_only_authorhf_seed42/seed42/final_model.int6.ptz + rank: 0 + rope_base: 10000.0 + rope_dims: 16 + rope_train_seq_len: 3072 + rope_yarn: False + run_id: pr2014_evo_skylight_lrelu03_s42 + scalar_lr: 0.02 + seed: 42 + seq_change_warmup_steps: 32 + skip_gates_enabled: True + skylight_norm_beta2: 0.95 + skylight_norm_ema: False + skylight_norm_eps: 1e-07 + skylight_uw_floor: False + skylight_uw_ratio: 0.35 + smear_gate_enabled: True + sparse_attn_gate_enabled: True + sparse_attn_gate_init_std: 0.0 + sparse_attn_gate_scale: 0.5 + tie_embeddings: True + tied_embed_init_std: 0.005 + tied_embed_lr: 0.03 + tokenizer_path: /tmp/parameter-golf-data-authorhf/tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model + train_batch_tokens: 786432 + train_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_train_*.bin + train_log_every: 500 + train_seq_len: 3072 + train_seq_schedule: 1024@0.100,2048@0.700,3072@1.000 + train_seq_schedule_mode: wallclock + ttt_batch_size: 24 + ttt_beta1: 0.0 + ttt_beta2: 0.99 + ttt_chunk_size: 48 + ttt_enabled: True + ttt_eval_batches: + ttt_eval_seq_len: 3072 + ttt_grad_steps: 1 + ttt_k_lora: True + ttt_local_lr_mult: 0.75 + ttt_lora_lr: 0.0001 + ttt_lora_rank: 80 + ttt_mask: no_qv + ttt_mlp_lora: True + ttt_o_lora: True + ttt_optimizer: adam + ttt_q_lora: False + ttt_short_beta2: 0.99 + ttt_short_chunk_size: 24 + ttt_short_doc_len: 2000 + ttt_short_lora_enabled: False + ttt_short_lora_lr: 0.0001 + ttt_short_lora_rank: 80 + ttt_short_score_first_enabled: True + ttt_short_score_first_steps: 256:8,2000:24 + ttt_short_weight_decay: 0.5 + ttt_train_max_doc_len: 0 + ttt_train_min_doc_len: 0 + ttt_v_lora: False + ttt_warm_start_mean_doc_len: 2000 + ttt_warm_start_mean_enabled: False + ttt_warm_start_mean_momentum: 0.95 + ttt_weight_decay: 0.5 + val_batch_tokens: 524288 + val_bytes_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_val_bytes_*.bin + val_doc_fraction: 1.0 + val_files: /tmp/parameter-golf-data-authorhf/datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/fineweb_val_*.bin + val_loss_every: 0 + vocab_size: 8192 + warmdown_frac: 0.85 + warmdown_iters: 0 + warmup_steps: 20 + world_size: 8 + xsa_last_n: 11 +train_shards: 80 +val_tokens: 47853343 +model_params:35945673 +train_seq_schedule:1024@0.100,2048@0.700,3072@1.000 +local_microbatch_tokens:98304 +growth_stage:seq_len:1024 progress:0.000 +gptq:reserving 4s, effective=596000ms +warmup_cu_buckets:64,128,192,256 iters_each:3 +compile_shape_warmup:start 1024xplain,2048xplain,2048xloop,3072xloop +compile_shape_warmup:shape seq_len:1024 loop:0 +compile_shape_warmup:shape seq_len:2048 loop:0 
+compile_shape_warmup:shape seq_len:2048 loop:1 +compile_shape_warmup:shape seq_len:3072 loop:1 +warmup_step: 1/20 +warmup_step: 2/20 +warmup_step: 3/20 +warmup_step: 4/20 +warmup_step: 5/20 +warmup_step: 6/20 +warmup_step: 10/20 +warmup_step: 20/20 +loop_warmup:enabled encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +loop_warmup_step: 1/20 +loop_warmup_step: 2/20 +loop_warmup_step: 3/20 +loop_warmup_step: 4/20 +loop_warmup_step: 5/20 +loop_warmup_step: 6/20 +loop_warmup_step: 10/20 +loop_warmup_step: 20/20 +1/20000 train_loss: 9.0087 train_time: 0.0m tok/s: 18278916 +2/20000 train_loss: 12.8398 train_time: 0.0m tok/s: 5826274 +3/20000 train_loss: 10.2223 train_time: 0.0m tok/s: 6410154 +4/20000 train_loss: 8.6850 train_time: 0.0m tok/s: 6750053 +5/20000 train_loss: 7.9463 train_time: 0.0m tok/s: 6967229 +500/20000 train_loss: 2.6000 train_time: 0.8m tok/s: 8599698 +growth_stage:seq_len:2048 progress:0.100 step:647 +growth_stage_rewarmup:start step:647 steps:32 seq_len:2048 +1000/20000 train_loss: 2.5803 train_time: 1.6m tok/s: 8414504 +1500/20000 train_loss: 2.6206 train_time: 2.4m tok/s: 8327425 +2000/20000 train_loss: 2.6518 train_time: 3.2m tok/s: 8285306 +layer_loop:enabled step:2193 frac:0.350 encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +2500/20000 train_loss: 2.5040 train_time: 4.2m tok/s: 7808754 +3000/20000 train_loss: 2.4503 train_time: 5.4m tok/s: 7319583 +3500/20000 train_loss: 2.4624 train_time: 6.5m tok/s: 7007618 +growth_stage:seq_len:3072 progress:0.700 step:3673 +growth_stage_rewarmup:start step:3673 steps:32 seq_len:3072 +4000/20000 train_loss: 2.3802 train_time: 7.8m tok/s: 6763257 +4500/20000 train_loss: 2.3422 train_time: 9.0m tok/s: 6557428 +4875/20000 val_loss: 2.3430 val_bpb: 1.0707 +stopping_early: wallclock_cap train_time: 596142ms step: 4875/20000 +peak memory allocated: 41707 MiB reserved: 46984 MiB +ema:applying EMA weights +diagnostic pre-quantization post-ema val_loss:2.31906221 val_bpb:1.05971819 eval_time:16347ms +Serialized model: 135418111 bytes +Code size (uncompressed): 198697 bytes +Code size (compressed): 49422 bytes +GPTQ:collecting Hessians from calibration data... +GPTQ:collected 67 Hessians in 4.1s +Quantized weights: + gate_int8_row: blocks.attn.attn_gate_w + gptq (int6): blocks.attn.c_k.weight, blocks.attn.c_q.weight, blocks.attn.c_v.weight, blocks.attn.proj.weight, blocks.mlp.fc.weight, blocks.mlp.proj.weight + gptq (int6)+lqer_asym: blocks.mlp.fc.weight + gptq (int7)+awqgrpint8+lqer_asym: tok_emb.weight + passthrough (float16): blocks.attn.q_gain, blocks.attn_scale, blocks.mlp_scale, blocks.resid_mix, parallel_post_lambdas, parallel_resid_lambdas, skip_gates, skip_weights, smear_gate.weight, smear_lambda, softcap_neg, softcap_pos +Serialize: per-group lrzip compression... +Serialize: per-group compression done in 100.9s +Serialized model quantized+pergroup: 15948543 bytes +Total submission size quantized+pergroup: 15997965 bytes +Deserialize: per-group lrzip decompression... +Deserialize: decompression done in 16.0s +diagnostic quantized val_loss:2.33648862 val_bpb:1.06768136 eval_time:95989ms +Deserialize: per-group lrzip decompression... 
+Deserialize: decompression done in 16.9s +ttt_lora:warming up compile (random tokens, no val data) +[rank5]: Traceback (most recent call last): +[rank5]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4662, in +[rank5]: main() +[rank5]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4656, in main +[rank5]: train_and_eval(h, device) +[rank5]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4568, in train_and_eval +[rank5]: ptl = fwd_ttt_compiled(xw, yw, lora=wl) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4525, in _fwd_ttt +[rank5]: return _fwd_ttt_compiled_inner(input_ids, target_ids, lora=lora) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 832, in compile_wrapper +[rank5]: return fn(*args, **kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1875, in __call__ +[rank5]: result = self._torchdynamo_orig_backend( +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1625, in __call__ +[rank5]: result = self._inner_convert( +[rank5]: ^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 688, in __call__ +[rank5]: result = _compile( +[rank5]: ^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1434, in _compile +[rank5]: guarded_code, tracer_output = compile_inner(code, one_graph, hooks) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_utils_internal.py", line 92, in wrapper_function +[rank5]: return function(*args, **kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1117, in compile_inner +[rank5]: return _compile_inner(code, one_graph, hooks) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1151, in _compile_inner +[rank5]: dynamo_output = compile_frame( +[rank5]: ^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1032, in compile_frame +[rank5]: bytecode, tracer_output = transform_code_object(code, transform) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/bytecode_transformation.py", line 1592, in transform_code_object +[rank5]: tracer_output = transformations(instructions, code_options) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1004, in transform +[rank5]: tracer_output = trace_frame( +[rank5]: ^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 312, in _fn +[rank5]: return fn(*args, **kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^ +[rank5]: 
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 815, in trace_frame +[rank5]: run_tracer() +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 797, in run_tracer +[rank5]: tracer.run() +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank5]: while self.step(): +[rank5]: ^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank5]: self.dispatch_table[inst.opcode](self, inst) +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank5]: return inner_fn(self, inst) +[rank5]: ^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank5]: self._call(inst) +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank5]: self.call_function(fn, args, kwargs) +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank5]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 1154, in call_function +[rank5]: return super().call_function(tx, args, kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank5]: return super().call_function(tx, args, kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in call_function +[rank5]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1288, in inline_user_function_return +[rank5]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4129, in inline_call +[rank5]: return tracer.inline_call_() +[rank5]: ^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4332, in inline_call_ +[rank5]: self.run() +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank5]: while self.step(): +[rank5]: ^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank5]: self.dispatch_table[inst.opcode](self, inst) +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank5]: return inner_fn(self, inst) +[rank5]: ^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank5]: self._call(inst) +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank5]: self.call_function(fn, args, kwargs) +[rank5]: File 
"/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank5]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 1154, in call_function +[rank5]: return super().call_function(tx, args, kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank5]: return super().call_function(tx, args, kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in call_function +[rank5]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1288, in inline_user_function_return +[rank5]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4129, in inline_call +[rank5]: return tracer.inline_call_() +[rank5]: ^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4332, in inline_call_ +[rank5]: self.run() +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank5]: while self.step(): +[rank5]: ^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank5]: self.dispatch_table[inst.opcode](self, inst) +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank5]: return inner_fn(self, inst) +[rank5]: ^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank5]: self._call(inst) +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank5]: self.call_function(fn, args, kwargs) +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank5]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/lazy.py", line 212, in realize_and_forward +[rank5]: return getattr(self.realize(), name)(*args, **kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/nn_module.py", line 1010, in call_function +[rank5]: return variables.UserFunctionVariable(fn, source=source).call_function( +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank5]: return super().call_function(tx, args, kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in call_function +[rank5]: 
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1288, in inline_user_function_return +[rank5]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4129, in inline_call +[rank5]: return tracer.inline_call_() +[rank5]: ^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4332, in inline_call_ +[rank5]: self.run() +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank5]: while self.step(): +[rank5]: ^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank5]: self.dispatch_table[inst.opcode](self, inst) +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank5]: return inner_fn(self, inst) +[rank5]: ^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank5]: self._call(inst) +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank5]: self.call_function(fn, args, kwargs) +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank5]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/lazy.py", line 212, in realize_and_forward +[rank5]: return getattr(self.realize(), name)(*args, **kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/misc.py", line 1115, in call_function +[rank5]: return self.obj.call_method(tx, self.name, args, kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/misc.py", line 819, in call_method +[rank5]: return self.call_apply(tx, args, kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/misc.py", line 734, in call_apply +[rank5]: ).call_function(tx, args, kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/higher_order_ops.py", line 3030, in call_function +[rank5]: (fwd_out, _), fwd_graph, fwd_freevars = speculate_subgraph( +[rank5]: ^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/higher_order_ops.py", line 999, in speculate_subgraph +[rank5]: output = f.call_function(tx, args, sub_kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank5]: return super().call_function(tx, args, kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in 
call_function +[rank5]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1288, in inline_user_function_return +[rank5]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4129, in inline_call +[rank5]: return tracer.inline_call_() +[rank5]: ^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4332, in inline_call_ +[rank5]: self.run() +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank5]: while self.step(): +[rank5]: ^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank5]: self.dispatch_table[inst.opcode](self, inst) +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank5]: return inner_fn(self, inst) +[rank5]: ^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank5]: self._call(inst) +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank5]: self.call_function(fn, args, kwargs) +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank5]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/lazy.py", line 212, in realize_and_forward +[rank5]: return getattr(self.realize(), name)(*args, **kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank5]: return super().call_function(tx, args, kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in call_function +[rank5]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1288, in inline_user_function_return +[rank5]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4129, in inline_call +[rank5]: return tracer.inline_call_() +[rank5]: ^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4332, in inline_call_ +[rank5]: self.run() +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank5]: while self.step(): +[rank5]: ^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank5]: 
self.dispatch_table[inst.opcode](self, inst) +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank5]: return inner_fn(self, inst) +[rank5]: ^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank5]: self._call(inst) +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank5]: self.call_function(fn, args, kwargs) +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank5]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/lazy.py", line 212, in realize_and_forward +[rank5]: return getattr(self.realize(), name)(*args, **kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 1347, in call_function +[rank5]: return handler(tx, args, kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 966, in +[rank5]: return lambda tx, args, kwargs: obj.call_function( +[rank5]: ^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 1347, in call_function +[rank5]: return handler(tx, args, kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 1154, in builtin_dispatch +[rank5]: rv = fn(tx, args, kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 1032, in call_self_handler +[rank5]: result = self_handler(tx, *args, **kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 1486, in _call_int_float +[rank5]: proxy=tx.output.create_proxy( +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 819, in create_proxy +[rank5]: return self.current_tracer.create_proxy(*args, **kwargs) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 2795, in create_proxy +[rank5]: maybe_new_arg = self.maybe_lift_tracked_freevar_to_input(arg) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 3172, in maybe_lift_tracked_freevar_to_input +[rank5]: return self.lift_tracked_freevar_to_input(arg) +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 3141, in lift_tracked_freevar_to_input +[rank5]: self.parent.lift_tracked_freevar_to_input(proxy) +[rank5]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 3113, in lift_tracked_freevar_to_input +[rank5]: assert self.parent is not None, ( +[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^ +[rank5]: AssertionError: lift_tracked_freevar_to_input should not be called on root SubgraphTracer + +[rank5]: from user code: +[rank5]: File 
"/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4517, in _fwd_ttt_inner +[rank5]: return ttt_model.forward_ttt(input_ids, target_ids, lora=lora) +[rank5]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1722, in forward_ttt +[rank5]: x = self._block_with_lora(self.blocks[i], x, x0, lora, slot, q_w, k_w, v_w, out_w, up_w, down_w) +[rank5]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1833, in _block_with_lora +[rank5]: mlp_out = block.mlp(mlp_n, up_w, down_w) +[rank5]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1320, in forward +[rank5]: return FusedLeakyReLUSquareMLP(x, up_w.to(x.dtype), down_w.to(x.dtype)) +[rank5]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1110, in forward +[rank5]: pre, post = linear_leaky_relu_square(x_flat, w1) +[rank5]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1081, in linear_leaky_relu_square +[rank5]: fwd_slope = float(_LEAKY_RELU_SQ_SLOPE) + +[rank5]: Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo" + +[rank6]: Traceback (most recent call last): +[rank6]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4662, in +[rank6]: main() +[rank6]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4656, in main +[rank6]: train_and_eval(h, device) +[rank6]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4568, in train_and_eval +[rank6]: ptl = fwd_ttt_compiled(xw, yw, lora=wl) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4525, in _fwd_ttt +[rank6]: return _fwd_ttt_compiled_inner(input_ids, target_ids, lora=lora) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 832, in compile_wrapper +[rank6]: return fn(*args, **kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1875, in __call__ +[rank6]: result = self._torchdynamo_orig_backend( +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1625, in __call__ +[rank6]: result = self._inner_convert( +[rank6]: ^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 688, in __call__ +[rank6]: result = _compile( +[rank6]: ^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1434, in 
_compile +[rank6]: guarded_code, tracer_output = compile_inner(code, one_graph, hooks) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_utils_internal.py", line 92, in wrapper_function +[rank6]: return function(*args, **kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1117, in compile_inner +[rank6]: return _compile_inner(code, one_graph, hooks) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1151, in _compile_inner +[rank6]: dynamo_output = compile_frame( +[rank6]: ^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1032, in compile_frame +[rank6]: bytecode, tracer_output = transform_code_object(code, transform) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/bytecode_transformation.py", line 1592, in transform_code_object +[rank6]: tracer_output = transformations(instructions, code_options) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1004, in transform +[rank6]: tracer_output = trace_frame( +[rank6]: ^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 312, in _fn +[rank6]: return fn(*args, **kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 815, in trace_frame +[rank6]: run_tracer() +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 797, in run_tracer +[rank6]: tracer.run() +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank6]: while self.step(): +[rank6]: ^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank6]: self.dispatch_table[inst.opcode](self, inst) +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank6]: return inner_fn(self, inst) +[rank6]: ^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank6]: self._call(inst) +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank6]: self.call_function(fn, args, kwargs) +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank6]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 1154, in call_function +[rank6]: return super().call_function(tx, args, kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank6]: return super().call_function(tx, args, kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in call_function +[rank6]: return tx.inline_user_function_return(self, 
[*self.self_args(), *args], kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1288, in inline_user_function_return +[rank6]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4129, in inline_call +[rank6]: return tracer.inline_call_() +[rank6]: ^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4332, in inline_call_ +[rank6]: self.run() +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank6]: while self.step(): +[rank6]: ^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank6]: self.dispatch_table[inst.opcode](self, inst) +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank6]: return inner_fn(self, inst) +[rank6]: ^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank6]: self._call(inst) +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank6]: self.call_function(fn, args, kwargs) +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank6]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 1154, in call_function +[rank6]: return super().call_function(tx, args, kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank6]: return super().call_function(tx, args, kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in call_function +[rank6]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1288, in inline_user_function_return +[rank6]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4129, in inline_call +[rank6]: return tracer.inline_call_() +[rank6]: ^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4332, in inline_call_ +[rank6]: self.run() +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank6]: while self.step(): +[rank6]: ^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank6]: self.dispatch_table[inst.opcode](self, inst) +[rank6]: File 
"/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank6]: return inner_fn(self, inst) +[rank6]: ^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank6]: self._call(inst) +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank6]: self.call_function(fn, args, kwargs) +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank6]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/lazy.py", line 212, in realize_and_forward +[rank6]: return getattr(self.realize(), name)(*args, **kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/nn_module.py", line 1010, in call_function +[rank6]: return variables.UserFunctionVariable(fn, source=source).call_function( +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank6]: return super().call_function(tx, args, kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in call_function +[rank6]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1288, in inline_user_function_return +[rank6]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4129, in inline_call +[rank6]: return tracer.inline_call_() +[rank6]: ^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4332, in inline_call_ +[rank6]: self.run() +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank6]: while self.step(): +[rank6]: ^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank6]: self.dispatch_table[inst.opcode](self, inst) +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank6]: return inner_fn(self, inst) +[rank6]: ^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank6]: self._call(inst) +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank6]: self.call_function(fn, args, kwargs) +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank6]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/lazy.py", line 212, in 
realize_and_forward +[rank6]: return getattr(self.realize(), name)(*args, **kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/misc.py", line 1115, in call_function +[rank6]: return self.obj.call_method(tx, self.name, args, kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/misc.py", line 819, in call_method +[rank6]: return self.call_apply(tx, args, kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/misc.py", line 734, in call_apply +[rank6]: ).call_function(tx, args, kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/higher_order_ops.py", line 3030, in call_function +[rank6]: (fwd_out, _), fwd_graph, fwd_freevars = speculate_subgraph( +[rank6]: ^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/higher_order_ops.py", line 999, in speculate_subgraph +[rank6]: output = f.call_function(tx, args, sub_kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank6]: return super().call_function(tx, args, kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in call_function +[rank6]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1288, in inline_user_function_return +[rank6]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4129, in inline_call +[rank6]: return tracer.inline_call_() +[rank6]: ^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4332, in inline_call_ +[rank6]: self.run() +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank6]: while self.step(): +[rank6]: ^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank6]: self.dispatch_table[inst.opcode](self, inst) +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank6]: return inner_fn(self, inst) +[rank6]: ^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank6]: self._call(inst) +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank6]: self.call_function(fn, args, kwargs) +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank6]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File 
"/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/lazy.py", line 212, in realize_and_forward +[rank6]: return getattr(self.realize(), name)(*args, **kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank6]: return super().call_function(tx, args, kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in call_function +[rank6]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1288, in inline_user_function_return +[rank6]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4129, in inline_call +[rank6]: return tracer.inline_call_() +[rank6]: ^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4332, in inline_call_ +[rank6]: self.run() +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank6]: while self.step(): +[rank6]: ^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank6]: self.dispatch_table[inst.opcode](self, inst) +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank6]: return inner_fn(self, inst) +[rank6]: ^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank6]: self._call(inst) +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank6]: self.call_function(fn, args, kwargs) +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank6]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/lazy.py", line 212, in realize_and_forward +[rank6]: return getattr(self.realize(), name)(*args, **kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 1347, in call_function +[rank6]: return handler(tx, args, kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 966, in +[rank6]: return lambda tx, args, kwargs: obj.call_function( +[rank6]: ^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 1347, in call_function +[rank6]: return handler(tx, args, kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 1154, in builtin_dispatch +[rank6]: rv = fn(tx, args, kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", 
line 1032, in call_self_handler +[rank6]: result = self_handler(tx, *args, **kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 1486, in _call_int_float +[rank6]: proxy=tx.output.create_proxy( +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 819, in create_proxy +[rank6]: return self.current_tracer.create_proxy(*args, **kwargs) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 2795, in create_proxy +[rank6]: maybe_new_arg = self.maybe_lift_tracked_freevar_to_input(arg) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 3172, in maybe_lift_tracked_freevar_to_input +[rank6]: return self.lift_tracked_freevar_to_input(arg) +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 3141, in lift_tracked_freevar_to_input +[rank6]: self.parent.lift_tracked_freevar_to_input(proxy) +[rank6]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 3113, in lift_tracked_freevar_to_input +[rank6]: assert self.parent is not None, ( +[rank6]: ^^^^^^^^^^^^^^^^^^^^^^^ +[rank6]: AssertionError: lift_tracked_freevar_to_input should not be called on root SubgraphTracer + +[rank6]: from user code: +[rank6]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4517, in _fwd_ttt_inner +[rank6]: return ttt_model.forward_ttt(input_ids, target_ids, lora=lora) +[rank6]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1722, in forward_ttt +[rank6]: x = self._block_with_lora(self.blocks[i], x, x0, lora, slot, q_w, k_w, v_w, out_w, up_w, down_w) +[rank6]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1833, in _block_with_lora +[rank6]: mlp_out = block.mlp(mlp_n, up_w, down_w) +[rank6]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1320, in forward +[rank6]: return FusedLeakyReLUSquareMLP(x, up_w.to(x.dtype), down_w.to(x.dtype)) +[rank6]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1110, in forward +[rank6]: pre, post = linear_leaky_relu_square(x_flat, w1) +[rank6]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1081, in linear_leaky_relu_square +[rank6]: fwd_slope = float(_LEAKY_RELU_SQ_SLOPE) + +[rank6]: Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). 
For even more developer context, set TORCH_LOGS="+dynamo" + +[rank2]: Traceback (most recent call last): +[rank2]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4662, in +[rank2]: main() +[rank2]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4656, in main +[rank2]: train_and_eval(h, device) +[rank2]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4568, in train_and_eval +[rank2]: ptl = fwd_ttt_compiled(xw, yw, lora=wl) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4525, in _fwd_ttt +[rank2]: return _fwd_ttt_compiled_inner(input_ids, target_ids, lora=lora) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 832, in compile_wrapper +[rank2]: return fn(*args, **kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1875, in __call__ +[rank2]: result = self._torchdynamo_orig_backend( +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1625, in __call__ +[rank2]: result = self._inner_convert( +[rank2]: ^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 688, in __call__ +[rank2]: result = _compile( +[rank2]: ^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1434, in _compile +[rank2]: guarded_code, tracer_output = compile_inner(code, one_graph, hooks) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_utils_internal.py", line 92, in wrapper_function +[rank2]: return function(*args, **kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1117, in compile_inner +[rank2]: return _compile_inner(code, one_graph, hooks) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1151, in _compile_inner +[rank2]: dynamo_output = compile_frame( +[rank2]: ^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1032, in compile_frame +[rank2]: bytecode, tracer_output = transform_code_object(code, transform) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/bytecode_transformation.py", line 1592, in transform_code_object +[rank2]: tracer_output = transformations(instructions, code_options) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1004, in transform +[rank2]: tracer_output = trace_frame( +[rank2]: ^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 312, in _fn +[rank2]: return fn(*args, **kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^ +[rank2]: File 
"/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 815, in trace_frame +[rank2]: run_tracer() +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 797, in run_tracer +[rank2]: tracer.run() +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank2]: while self.step(): +[rank2]: ^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank2]: self.dispatch_table[inst.opcode](self, inst) +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank2]: return inner_fn(self, inst) +[rank2]: ^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank2]: self._call(inst) +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank2]: self.call_function(fn, args, kwargs) +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank2]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 1154, in call_function +[rank2]: return super().call_function(tx, args, kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank2]: return super().call_function(tx, args, kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in call_function +[rank2]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1288, in inline_user_function_return +[rank2]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4129, in inline_call +[rank2]: return tracer.inline_call_() +[rank2]: ^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4332, in inline_call_ +[rank2]: self.run() +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank2]: while self.step(): +[rank2]: ^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank2]: self.dispatch_table[inst.opcode](self, inst) +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank2]: return inner_fn(self, inst) +[rank2]: ^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank2]: self._call(inst) +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank2]: self.call_function(fn, args, kwargs) +[rank2]: File 
"/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank2]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 1154, in call_function +[rank2]: return super().call_function(tx, args, kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank2]: return super().call_function(tx, args, kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in call_function +[rank2]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1288, in inline_user_function_return +[rank2]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4129, in inline_call +[rank2]: return tracer.inline_call_() +[rank2]: ^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4332, in inline_call_ +[rank2]: self.run() +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank2]: while self.step(): +[rank2]: ^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank2]: self.dispatch_table[inst.opcode](self, inst) +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank2]: return inner_fn(self, inst) +[rank2]: ^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank2]: self._call(inst) +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank2]: self.call_function(fn, args, kwargs) +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank2]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/lazy.py", line 212, in realize_and_forward +[rank2]: return getattr(self.realize(), name)(*args, **kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/nn_module.py", line 1010, in call_function +[rank2]: return variables.UserFunctionVariable(fn, source=source).call_function( +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank2]: return super().call_function(tx, args, kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in call_function +[rank2]: 
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1288, in inline_user_function_return +[rank2]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4129, in inline_call +[rank2]: return tracer.inline_call_() +[rank2]: ^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4332, in inline_call_ +[rank2]: self.run() +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank2]: while self.step(): +[rank2]: ^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank2]: self.dispatch_table[inst.opcode](self, inst) +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank2]: return inner_fn(self, inst) +[rank2]: ^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank2]: self._call(inst) +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank2]: self.call_function(fn, args, kwargs) +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank2]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/lazy.py", line 212, in realize_and_forward +[rank2]: return getattr(self.realize(), name)(*args, **kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/misc.py", line 1115, in call_function +[rank2]: return self.obj.call_method(tx, self.name, args, kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/misc.py", line 819, in call_method +[rank2]: return self.call_apply(tx, args, kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/misc.py", line 734, in call_apply +[rank2]: ).call_function(tx, args, kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/higher_order_ops.py", line 3030, in call_function +[rank2]: (fwd_out, _), fwd_graph, fwd_freevars = speculate_subgraph( +[rank2]: ^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/higher_order_ops.py", line 999, in speculate_subgraph +[rank2]: output = f.call_function(tx, args, sub_kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank2]: return super().call_function(tx, args, kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in 
call_function +[rank2]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1288, in inline_user_function_return +[rank2]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4129, in inline_call +[rank2]: return tracer.inline_call_() +[rank2]: ^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4332, in inline_call_ +[rank2]: self.run() +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank2]: while self.step(): +[rank2]: ^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank2]: self.dispatch_table[inst.opcode](self, inst) +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank2]: return inner_fn(self, inst) +[rank2]: ^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank2]: self._call(inst) +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank2]: self.call_function(fn, args, kwargs) +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank2]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/lazy.py", line 212, in realize_and_forward +[rank2]: return getattr(self.realize(), name)(*args, **kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank2]: return super().call_function(tx, args, kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in call_function +[rank2]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1288, in inline_user_function_return +[rank2]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4129, in inline_call +[rank2]: return tracer.inline_call_() +[rank2]: ^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4332, in inline_call_ +[rank2]: self.run() +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank2]: while self.step(): +[rank2]: ^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank2]: 
self.dispatch_table[inst.opcode](self, inst) +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank2]: return inner_fn(self, inst) +[rank2]: ^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank2]: self._call(inst) +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank2]: self.call_function(fn, args, kwargs) +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank2]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/lazy.py", line 212, in realize_and_forward +[rank2]: return getattr(self.realize(), name)(*args, **kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 1347, in call_function +[rank2]: return handler(tx, args, kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 966, in +[rank2]: return lambda tx, args, kwargs: obj.call_function( +[rank2]: ^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 1347, in call_function +[rank2]: return handler(tx, args, kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 1154, in builtin_dispatch +[rank2]: rv = fn(tx, args, kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 1032, in call_self_handler +[rank2]: result = self_handler(tx, *args, **kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 1486, in _call_int_float +[rank2]: proxy=tx.output.create_proxy( +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 819, in create_proxy +[rank2]: return self.current_tracer.create_proxy(*args, **kwargs) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 2795, in create_proxy +[rank2]: maybe_new_arg = self.maybe_lift_tracked_freevar_to_input(arg) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 3172, in maybe_lift_tracked_freevar_to_input +[rank2]: return self.lift_tracked_freevar_to_input(arg) +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 3141, in lift_tracked_freevar_to_input +[rank2]: self.parent.lift_tracked_freevar_to_input(proxy) +[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 3113, in lift_tracked_freevar_to_input +[rank2]: assert self.parent is not None, ( +[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^ +[rank2]: AssertionError: lift_tracked_freevar_to_input should not be called on root SubgraphTracer + +[rank2]: from user code: +[rank2]: File 
"/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4517, in _fwd_ttt_inner +[rank2]: return ttt_model.forward_ttt(input_ids, target_ids, lora=lora) +[rank2]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1722, in forward_ttt +[rank2]: x = self._block_with_lora(self.blocks[i], x, x0, lora, slot, q_w, k_w, v_w, out_w, up_w, down_w) +[rank2]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1833, in _block_with_lora +[rank2]: mlp_out = block.mlp(mlp_n, up_w, down_w) +[rank2]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1320, in forward +[rank2]: return FusedLeakyReLUSquareMLP(x, up_w.to(x.dtype), down_w.to(x.dtype)) +[rank2]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1110, in forward +[rank2]: pre, post = linear_leaky_relu_square(x_flat, w1) +[rank2]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1081, in linear_leaky_relu_square +[rank2]: fwd_slope = float(_LEAKY_RELU_SQ_SLOPE) + +[rank2]: Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo" + +[rank0]: Traceback (most recent call last): +[rank0]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4662, in +[rank0]: main() +[rank0]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4656, in main +[rank0]: train_and_eval(h, device) +[rank0]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4568, in train_and_eval +[rank0]: ptl = fwd_ttt_compiled(xw, yw, lora=wl) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4525, in _fwd_ttt +[rank0]: return _fwd_ttt_compiled_inner(input_ids, target_ids, lora=lora) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 832, in compile_wrapper +[rank0]: return fn(*args, **kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1875, in __call__ +[rank0]: result = self._torchdynamo_orig_backend( +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1625, in __call__ +[rank0]: result = self._inner_convert( +[rank0]: ^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 688, in __call__ +[rank0]: result = _compile( +[rank0]: ^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1434, in 
_compile +[rank0]: guarded_code, tracer_output = compile_inner(code, one_graph, hooks) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_utils_internal.py", line 92, in wrapper_function +[rank0]: return function(*args, **kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1117, in compile_inner +[rank0]: return _compile_inner(code, one_graph, hooks) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1151, in _compile_inner +[rank0]: dynamo_output = compile_frame( +[rank0]: ^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1032, in compile_frame +[rank0]: bytecode, tracer_output = transform_code_object(code, transform) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/bytecode_transformation.py", line 1592, in transform_code_object +[rank0]: tracer_output = transformations(instructions, code_options) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1004, in transform +[rank0]: tracer_output = trace_frame( +[rank0]: ^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 312, in _fn +[rank0]: return fn(*args, **kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 815, in trace_frame +[rank0]: run_tracer() +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 797, in run_tracer +[rank0]: tracer.run() +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank0]: while self.step(): +[rank0]: ^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank0]: self.dispatch_table[inst.opcode](self, inst) +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank0]: return inner_fn(self, inst) +[rank0]: ^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank0]: self._call(inst) +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank0]: self.call_function(fn, args, kwargs) +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank0]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 1154, in call_function +[rank0]: return super().call_function(tx, args, kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank0]: return super().call_function(tx, args, kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in call_function +[rank0]: return tx.inline_user_function_return(self, 
[*self.self_args(), *args], kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1288, in inline_user_function_return +[rank0]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4129, in inline_call +[rank0]: return tracer.inline_call_() +[rank0]: ^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4332, in inline_call_ +[rank0]: self.run() +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank0]: while self.step(): +[rank0]: ^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank0]: self.dispatch_table[inst.opcode](self, inst) +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank0]: return inner_fn(self, inst) +[rank0]: ^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank0]: self._call(inst) +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank0]: self.call_function(fn, args, kwargs) +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank0]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 1154, in call_function +[rank0]: return super().call_function(tx, args, kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank0]: return super().call_function(tx, args, kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in call_function +[rank0]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1288, in inline_user_function_return +[rank0]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4129, in inline_call +[rank0]: return tracer.inline_call_() +[rank0]: ^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4332, in inline_call_ +[rank0]: self.run() +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank0]: while self.step(): +[rank0]: ^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank0]: self.dispatch_table[inst.opcode](self, inst) +[rank0]: File 
"/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank0]: return inner_fn(self, inst) +[rank0]: ^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank0]: self._call(inst) +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank0]: self.call_function(fn, args, kwargs) +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank0]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/lazy.py", line 212, in realize_and_forward +[rank0]: return getattr(self.realize(), name)(*args, **kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/nn_module.py", line 1010, in call_function +[rank0]: return variables.UserFunctionVariable(fn, source=source).call_function( +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank0]: return super().call_function(tx, args, kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in call_function +[rank0]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1288, in inline_user_function_return +[rank0]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4129, in inline_call +[rank0]: return tracer.inline_call_() +[rank0]: ^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4332, in inline_call_ +[rank0]: self.run() +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank0]: while self.step(): +[rank0]: ^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank0]: self.dispatch_table[inst.opcode](self, inst) +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank0]: return inner_fn(self, inst) +[rank0]: ^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank0]: self._call(inst) +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank0]: self.call_function(fn, args, kwargs) +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank0]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/lazy.py", line 212, in 
realize_and_forward +[rank0]: return getattr(self.realize(), name)(*args, **kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/misc.py", line 1115, in call_function +[rank0]: return self.obj.call_method(tx, self.name, args, kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/misc.py", line 819, in call_method +[rank0]: return self.call_apply(tx, args, kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/misc.py", line 734, in call_apply +[rank0]: ).call_function(tx, args, kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/higher_order_ops.py", line 3030, in call_function +[rank0]: (fwd_out, _), fwd_graph, fwd_freevars = speculate_subgraph( +[rank0]: ^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/higher_order_ops.py", line 999, in speculate_subgraph +[rank0]: output = f.call_function(tx, args, sub_kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank0]: return super().call_function(tx, args, kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in call_function +[rank0]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1288, in inline_user_function_return +[rank0]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4129, in inline_call +[rank0]: return tracer.inline_call_() +[rank0]: ^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4332, in inline_call_ +[rank0]: self.run() +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank0]: while self.step(): +[rank0]: ^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank0]: self.dispatch_table[inst.opcode](self, inst) +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank0]: return inner_fn(self, inst) +[rank0]: ^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank0]: self._call(inst) +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank0]: self.call_function(fn, args, kwargs) +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank0]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File 
"/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/lazy.py", line 212, in realize_and_forward +[rank0]: return getattr(self.realize(), name)(*args, **kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank0]: return super().call_function(tx, args, kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in call_function +[rank0]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1288, in inline_user_function_return +[rank0]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4129, in inline_call +[rank0]: return tracer.inline_call_() +[rank0]: ^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4332, in inline_call_ +[rank0]: self.run() +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank0]: while self.step(): +[rank0]: ^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank0]: self.dispatch_table[inst.opcode](self, inst) +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank0]: return inner_fn(self, inst) +[rank0]: ^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank0]: self._call(inst) +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank0]: self.call_function(fn, args, kwargs) +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank0]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/lazy.py", line 212, in realize_and_forward +[rank0]: return getattr(self.realize(), name)(*args, **kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 1347, in call_function +[rank0]: return handler(tx, args, kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 966, in +[rank0]: return lambda tx, args, kwargs: obj.call_function( +[rank0]: ^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 1347, in call_function +[rank0]: return handler(tx, args, kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 1154, in builtin_dispatch +[rank0]: rv = fn(tx, args, kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", 
line 1032, in call_self_handler +[rank0]: result = self_handler(tx, *args, **kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 1486, in _call_int_float +[rank0]: proxy=tx.output.create_proxy( +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 819, in create_proxy +[rank0]: return self.current_tracer.create_proxy(*args, **kwargs) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 2795, in create_proxy +[rank0]: maybe_new_arg = self.maybe_lift_tracked_freevar_to_input(arg) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 3172, in maybe_lift_tracked_freevar_to_input +[rank0]: return self.lift_tracked_freevar_to_input(arg) +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 3141, in lift_tracked_freevar_to_input +[rank0]: self.parent.lift_tracked_freevar_to_input(proxy) +[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 3113, in lift_tracked_freevar_to_input +[rank0]: assert self.parent is not None, ( +[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^ +[rank0]: AssertionError: lift_tracked_freevar_to_input should not be called on root SubgraphTracer + +[rank0]: from user code: +[rank0]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4517, in _fwd_ttt_inner +[rank0]: return ttt_model.forward_ttt(input_ids, target_ids, lora=lora) +[rank0]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1722, in forward_ttt +[rank0]: x = self._block_with_lora(self.blocks[i], x, x0, lora, slot, q_w, k_w, v_w, out_w, up_w, down_w) +[rank0]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1833, in _block_with_lora +[rank0]: mlp_out = block.mlp(mlp_n, up_w, down_w) +[rank0]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1320, in forward +[rank0]: return FusedLeakyReLUSquareMLP(x, up_w.to(x.dtype), down_w.to(x.dtype)) +[rank0]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1110, in forward +[rank0]: pre, post = linear_leaky_relu_square(x_flat, w1) +[rank0]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1081, in linear_leaky_relu_square +[rank0]: fwd_slope = float(_LEAKY_RELU_SQ_SLOPE) + +[rank0]: Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). 
For even more developer context, set TORCH_LOGS="+dynamo" + +[rank1]: Traceback (most recent call last): +[rank1]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4662, in +[rank1]: main() +[rank1]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4656, in main +[rank1]: train_and_eval(h, device) +[rank1]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4568, in train_and_eval +[rank1]: ptl = fwd_ttt_compiled(xw, yw, lora=wl) +[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4525, in _fwd_ttt +[rank1]: return _fwd_ttt_compiled_inner(input_ids, target_ids, lora=lora) +[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 832, in compile_wrapper +[rank1]: return fn(*args, **kwargs) +[rank1]: ^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1875, in __call__ +[rank1]: result = self._torchdynamo_orig_backend( +[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1625, in __call__ +[rank1]: result = self._inner_convert( +[rank1]: ^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 688, in __call__ +[rank1]: result = _compile( +[rank1]: ^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1434, in _compile +[rank1]: guarded_code, tracer_output = compile_inner(code, one_graph, hooks) +[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_utils_internal.py", line 92, in wrapper_function +[rank1]: return function(*args, **kwargs) +[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1117, in compile_inner +[rank1]: return _compile_inner(code, one_graph, hooks) +[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1151, in _compile_inner +[rank1]: dynamo_output = compile_frame( +[rank1]: ^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1032, in compile_frame +[rank1]: bytecode, tracer_output = transform_code_object(code, transform) +[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/bytecode_transformation.py", line 1592, in transform_code_object +[rank1]: tracer_output = transformations(instructions, code_options) +[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1004, in transform +[rank1]: tracer_output = trace_frame( +[rank1]: ^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 312, in _fn +[rank1]: return fn(*args, **kwargs) +[rank1]: ^^^^^^^^^^^^^^^^^^^ +[rank1]: File 
"/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 815, in trace_frame +[rank1]: run_tracer() +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 797, in run_tracer +[rank1]: tracer.run() +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank1]: while self.step(): +[rank1]: ^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank1]: self.dispatch_table[inst.opcode](self, inst) +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank1]: return inner_fn(self, inst) +[rank1]: ^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank1]: self._call(inst) +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank1]: self.call_function(fn, args, kwargs) +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank1]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 1154, in call_function +[rank1]: return super().call_function(tx, args, kwargs) +[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank1]: return super().call_function(tx, args, kwargs) +[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in call_function +[rank1]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs) +[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1288, in inline_user_function_return +[rank1]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) +[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4129, in inline_call +[rank1]: return tracer.inline_call_() +[rank1]: ^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4332, in inline_call_ +[rank1]: self.run() +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank1]: while self.step(): +[rank1]: ^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank1]: self.dispatch_table[inst.opcode](self, inst) +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank1]: return inner_fn(self, inst) +[rank1]: ^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank1]: self._call(inst) +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank1]: self.call_function(fn, args, kwargs) +[rank1]: File 
"/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank1]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 1154, in call_function +[rank1]: return super().call_function(tx, args, kwargs) +[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank1]: return super().call_function(tx, args, kwargs) +[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in call_function +[rank1]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs) +[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1288, in inline_user_function_return +[rank1]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) +[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4129, in inline_call +[rank1]: return tracer.inline_call_() +[rank1]: ^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4332, in inline_call_ +[rank1]: self.run() +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank1]: while self.step(): +[rank1]: ^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank1]: self.dispatch_table[inst.opcode](self, inst) +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank1]: return inner_fn(self, inst) +[rank1]: ^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank1]: self._call(inst) +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank1]: self.call_function(fn, args, kwargs) +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank1]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/lazy.py", line 212, in realize_and_forward +[rank1]: return getattr(self.realize(), name)(*args, **kwargs) +[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/nn_module.py", line 1010, in call_function +[rank1]: return variables.UserFunctionVariable(fn, source=source).call_function( +[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank1]: return super().call_function(tx, args, kwargs) +[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank1]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in call_function +[rank1]: 
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
+[rank1]:   ... (repeated Dynamo inlining frames through symbolic_convert.py elided) ...
+[rank1]:   File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/misc.py", line 734, in call_apply
+[rank1]:     ).call_function(tx, args, kwargs)
+[rank1]:   File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/higher_order_ops.py", line 3030, in call_function
+[rank1]:     (fwd_out, _), fwd_graph, fwd_freevars = speculate_subgraph(
+[rank1]:   File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/higher_order_ops.py", line 999, in speculate_subgraph
+[rank1]:     output = f.call_function(tx, args, sub_kwargs)
+[rank1]:   ... (repeated inlining and builtin-dispatch frames elided) ...
+[rank1]:   File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 1486, in _call_int_float
+[rank1]:     proxy=tx.output.create_proxy(
+[rank1]:   File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 819, in create_proxy
+[rank1]:     return self.current_tracer.create_proxy(*args, **kwargs)
+[rank1]:   File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 2795, in create_proxy
+[rank1]:     maybe_new_arg = self.maybe_lift_tracked_freevar_to_input(arg)
+[rank1]:   File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 3172, in maybe_lift_tracked_freevar_to_input
+[rank1]:     return self.lift_tracked_freevar_to_input(arg)
+[rank1]:   File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 3141, in lift_tracked_freevar_to_input
+[rank1]:     self.parent.lift_tracked_freevar_to_input(proxy)
+[rank1]:   File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 3113, in lift_tracked_freevar_to_input
+[rank1]:     assert self.parent is not None, (
+[rank1]: AssertionError: lift_tracked_freevar_to_input should not be called on root SubgraphTracer
+
+[rank1]: from user code:
+[rank1]:   File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4517, in _fwd_ttt_inner
+[rank1]:     return ttt_model.forward_ttt(input_ids, target_ids, lora=lora)
+[rank1]:   File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1722, in forward_ttt
+[rank1]:     x = self._block_with_lora(self.blocks[i], x, x0, lora, slot, q_w, k_w, v_w, out_w, up_w, down_w)
+[rank1]:   File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1833, in _block_with_lora
+[rank1]:     mlp_out = block.mlp(mlp_n, up_w, down_w)
+[rank1]:   File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1320, in forward
+[rank1]:     return FusedLeakyReLUSquareMLP(x, up_w.to(x.dtype), down_w.to(x.dtype))
+[rank1]:   File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1110, in forward
+[rank1]:     pre, post = linear_leaky_relu_square(x_flat, w1)
+[rank1]:   File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1081, in linear_leaky_relu_square
+[rank1]:     fwd_slope = float(_LEAKY_RELU_SQ_SLOPE)
+
+[rank1]: Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
+
+[ranks 3 and 4 abort with the identical Dynamo traceback and AssertionError; their duplicate dumps are omitted. rank 7's traceback follows.]
"/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4517, in _fwd_ttt_inner +[rank4]: return ttt_model.forward_ttt(input_ids, target_ids, lora=lora) +[rank4]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1722, in forward_ttt +[rank4]: x = self._block_with_lora(self.blocks[i], x, x0, lora, slot, q_w, k_w, v_w, out_w, up_w, down_w) +[rank4]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1833, in _block_with_lora +[rank4]: mlp_out = block.mlp(mlp_n, up_w, down_w) +[rank4]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1320, in forward +[rank4]: return FusedLeakyReLUSquareMLP(x, up_w.to(x.dtype), down_w.to(x.dtype)) +[rank4]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1110, in forward +[rank4]: pre, post = linear_leaky_relu_square(x_flat, w1) +[rank4]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1081, in linear_leaky_relu_square +[rank4]: fwd_slope = float(_LEAKY_RELU_SQ_SLOPE) + +[rank4]: Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo" + +[rank7]: Traceback (most recent call last): +[rank7]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4662, in +[rank7]: main() +[rank7]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4656, in main +[rank7]: train_and_eval(h, device) +[rank7]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4568, in train_and_eval +[rank7]: ptl = fwd_ttt_compiled(xw, yw, lora=wl) +[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank7]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4525, in _fwd_ttt +[rank7]: return _fwd_ttt_compiled_inner(input_ids, target_ids, lora=lora) +[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 832, in compile_wrapper +[rank7]: return fn(*args, **kwargs) +[rank7]: ^^^^^^^^^^^^^^^^^^^ +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1875, in __call__ +[rank7]: result = self._torchdynamo_orig_backend( +[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1625, in __call__ +[rank7]: result = self._inner_convert( +[rank7]: ^^^^^^^^^^^^^^^^^^^^ +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 688, in __call__ +[rank7]: result = _compile( +[rank7]: ^^^^^^^^^ +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1434, in 
+[rank7]:   File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1434, in _compile
+[rank7]:     guarded_code, tracer_output = compile_inner(code, one_graph, hooks)
+[rank7]:   ... (frame-compilation and repeated Dynamo inlining frames elided; the chain matches rank 1's trace) ...
+[rank7]:   File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/misc.py", line 734, in call_apply
+[rank7]:     ).call_function(tx, args, kwargs)
+[rank7]:   File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/higher_order_ops.py", line 3030, in call_function
+[rank7]:     (fwd_out, _), fwd_graph, fwd_freevars = speculate_subgraph(
+[rank7]:   File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/higher_order_ops.py", line 999, in speculate_subgraph
+[rank7]:     output = f.call_function(tx, args, sub_kwargs)
+[rank7]:   ... (repeated inlining frames elided) ...
+[rank7]:   File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function
+[rank7]:     self.push(fn.call_function(self, args, kwargs))  # type: ignore[arg-type]
+[rank7]:   File
"/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/lazy.py", line 212, in realize_and_forward +[rank7]: return getattr(self.realize(), name)(*args, **kwargs) +[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 598, in call_function +[rank7]: return super().call_function(tx, args, kwargs) +[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/functions.py", line 342, in call_function +[rank7]: return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs) +[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1288, in inline_user_function_return +[rank7]: return InliningInstructionTranslator.inline_call(self, fn, args, kwargs) +[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4129, in inline_call +[rank7]: return tracer.inline_call_() +[rank7]: ^^^^^^^^^^^^^^^^^^^^^ +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 4332, in inline_call_ +[rank7]: self.run() +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1500, in run +[rank7]: while self.step(): +[rank7]: ^^^^^^^^^^^ +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1348, in step +[rank7]: self.dispatch_table[inst.opcode](self, inst) +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 904, in wrapper +[rank7]: return inner_fn(self, inst) +[rank7]: ^^^^^^^^^^^^^^^^^^^^ +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3428, in CALL +[rank7]: self._call(inst) +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 3422, in _call +[rank7]: self.call_function(fn, args, kwargs) +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 1266, in call_function +[rank7]: self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type] +[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/lazy.py", line 212, in realize_and_forward +[rank7]: return getattr(self.realize(), name)(*args, **kwargs) +[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 1347, in call_function +[rank7]: return handler(tx, args, kwargs) +[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 966, in +[rank7]: return lambda tx, args, kwargs: obj.call_function( +[rank7]: ^^^^^^^^^^^^^^^^^^ +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 1347, in call_function +[rank7]: return handler(tx, args, kwargs) +[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 1154, in builtin_dispatch +[rank7]: rv = fn(tx, args, kwargs) +[rank7]: ^^^^^^^^^^^^^^^^^^^^ +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", 
line 1032, in call_self_handler +[rank7]: result = self_handler(tx, *args, **kwargs) +[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/variables/builtin.py", line 1486, in _call_int_float +[rank7]: proxy=tx.output.create_proxy( +[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^ +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 819, in create_proxy +[rank7]: return self.current_tracer.create_proxy(*args, **kwargs) +[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 2795, in create_proxy +[rank7]: maybe_new_arg = self.maybe_lift_tracked_freevar_to_input(arg) +[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 3172, in maybe_lift_tracked_freevar_to_input +[rank7]: return self.lift_tracked_freevar_to_input(arg) +[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 3141, in lift_tracked_freevar_to_input +[rank7]: self.parent.lift_tracked_freevar_to_input(proxy) +[rank7]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 3113, in lift_tracked_freevar_to_input +[rank7]: assert self.parent is not None, ( +[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^ +[rank7]: AssertionError: lift_tracked_freevar_to_input should not be called on root SubgraphTracer + +[rank7]: from user code: +[rank7]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 4517, in _fwd_ttt_inner +[rank7]: return ttt_model.forward_ttt(input_ids, target_ids, lora=lora) +[rank7]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1722, in forward_ttt +[rank7]: x = self._block_with_lora(self.blocks[i], x, x0, lora, slot, q_w, k_w, v_w, out_w, up_w, down_w) +[rank7]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1833, in _block_with_lora +[rank7]: mlp_out = block.mlp(mlp_n, up_w, down_w) +[rank7]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1320, in forward +[rank7]: return FusedLeakyReLUSquareMLP(x, up_w.to(x.dtype), down_w.to(x.dtype)) +[rank7]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1110, in forward +[rank7]: pre, post = linear_leaky_relu_square(x_flat, w1) +[rank7]: File "/workspace/parameter-golf-pr2014-gated-clean/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_Progressive3k_ShortDocTTT/train_gpt.py", line 1081, in linear_leaky_relu_square +[rank7]: fwd_slope = float(_LEAKY_RELU_SQ_SLOPE) + +[rank7]: Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo" + +[rank0]:[W501 21:09:30.589009730 ProcessGroupNCCL.cpp:1524] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. 
+W0501 21:09:32.301000 343740 torch/distributed/elastic/multiprocessing/api.py:908] Sending process 343812 closing signal SIGTERM
+W0501 21:09:32.304000 343740 torch/distributed/elastic/multiprocessing/api.py:908] Sending process 343813 closing signal SIGTERM
+W0501 21:09:32.305000 343740 torch/distributed/elastic/multiprocessing/api.py:908] Sending process 343814 closing signal SIGTERM
+W0501 21:09:32.306000 343740 torch/distributed/elastic/multiprocessing/api.py:908] Sending process 343815 closing signal SIGTERM
+W0501 21:09:32.307000 343740 torch/distributed/elastic/multiprocessing/api.py:908] Sending process 343816 closing signal SIGTERM
+W0501 21:09:32.308000 343740 torch/distributed/elastic/multiprocessing/api.py:908] Sending process 343818 closing signal SIGTERM
+W0501 21:09:32.309000 343740 torch/distributed/elastic/multiprocessing/api.py:908] Sending process 343819 closing signal SIGTERM
+E0501 21:09:33.218000 343740 torch/distributed/elastic/multiprocessing/api.py:882] failed (exitcode: 1) local_rank: 5 (pid: 343817) of binary: /usr/local/bin/python
+Traceback (most recent call last):
+  File "/usr/local/bin/torchrun", line 7, in <module>
+    sys.exit(main())
+  File "/usr/local/lib/python3.12/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 357, in wrapper
+    return f(*args, **kwargs)
+  File "/usr/local/lib/python3.12/dist-packages/torch/distributed/run.py", line 936, in main
+    run(args)
+  File "/usr/local/lib/python3.12/dist-packages/torch/distributed/run.py", line 927, in run
+    elastic_launch(
+  File "/usr/local/lib/python3.12/dist-packages/torch/distributed/launcher/api.py", line 156, in __call__
+    return launch_agent(self._config, self._entrypoint, list(args))
+  File "/usr/local/lib/python3.12/dist-packages/torch/distributed/launcher/api.py", line 293, in launch_agent
+    raise ChildFailedError(
+torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
+============================================================
+train_gpt.py FAILED
+------------------------------------------------------------
+Failures:
+  <NO_OTHER_FAILURES>
+------------------------------------------------------------
+Root Cause (first observed failure):
+[0]:
+  time      : 2026-05-01_21:09:32
+  host      : 361aec2574c3
+  rank      : 5 (local_rank: 5)
+  exitcode  : 1 (pid: 343817)
+  error_file: <N/A>
+  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
+============================================================
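+
+The failing user frame is `fwd_slope = float(_LEAKY_RELU_SQ_SLOPE)` in `linear_leaky_relu_square`, hit while Dynamo speculates the `FusedLeakyReLUSquareMLP` autograd.Function forward as a subgraph. A plausible reading of the assertion is that the slope reaches that subgraph as a tracked free variable (for example a 0-dim tensor or a symbolic global), so `float()` must create a proxy and lift the freevar into a parent tracer, which is illegal on the root `SubgraphTracer`. The sketch below is a hypothetical minimal repro plus one common workaround (binding the slope to a plain Python float once, at import time, outside any compiled region); the names and the tensor-slope assumption are illustrative, not the record's actual `train_gpt.py` or its fix.
+
+```python
+# Hypothetical repro/workaround sketch; assumes the slope is held in a
+# 0-dim tensor that Dynamo tracks as a free variable of the speculated
+# autograd.Function forward.  Not the submission's code.
+import os
+
+import torch
+
+_LEAKY_RELU_SQ_SLOPE = torch.tensor(0.3)  # tracked freevar under torch.compile
+
+
+class _LeakyReLUSquare(torch.autograd.Function):
+    @staticmethod
+    def forward(ctx, x):
+        # Failing pattern: float() on a tracked tensor inside the speculated
+        # subgraph forces a proxy plus a freevar lift, tripping the assertion.
+        slope = float(_LEAKY_RELU_SQ_SLOPE)
+        y = torch.where(x > 0, x, slope * x)
+        ctx.save_for_backward(x)
+        ctx.slope = slope
+        return y * y
+
+    @staticmethod
+    def backward(ctx, grad_out):
+        (x,) = ctx.saved_tensors
+        y = torch.where(x > 0, x, ctx.slope * x)
+        # d(y^2)/dx = 2 * y * y', with y' = 1 for x > 0 and slope otherwise.
+        dy = torch.where(x > 0, torch.ones_like(x), torch.full_like(x, ctx.slope))
+        return grad_out * 2.0 * y * dy
+
+
+# Workaround sketch: resolve the slope to a plain Python float once, outside
+# any compiled region, so float() never runs on a traced value.
+_SLOPE = float(os.environ.get("LEAKY_RELU_SQ_SLOPE", "0.3"))
+
+
+def leaky_relu_square_fixed(x: torch.Tensor) -> torch.Tensor:
+    # Same math with the constant slope; autograd derives the backward, and
+    # torch.compile sees _SLOPE as a trace-time constant.
+    y = torch.where(x > 0, x, _SLOPE * x)
+    return y * y
+```
+
+Under this assumption, the compiled graph bakes the slope in as a constant; the cost is one recompile if `LEAKY_RELU_SQ_SLOPE` changes between runs, which is irrelevant here since it is fixed at 0.3 for the whole submission.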