diff --git a/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/README.md b/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/README.md new file mode 100644 index 0000000000..8172130a25 --- /dev/null +++ b/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/README.md @@ -0,0 +1,79 @@ +# Submission: SP8192 CaseOps + WiderGate32 + PolarNS Muon + GPTQ-int6 + +**val_bpb: 1.08037** (3-seed mean, std 0.00139) | **~15.9 MB** | 8×H100 SXM, 600s wallclock | TTT eval + +## Results + +| Seed | Pre-quant val_bpb | Post-quant val_bpb | **Post-TTT val_bpb** | Artifact | +|------|-------------------|--------------------|----------------------|----------| +| 0 | 1.07175 | 1.09419 | **1.08196** | 15,890,131 | +| 42 | 1.07039 | 1.09076 | **1.07983** | 15,887,137 | +| 1234 | 1.06982 | 1.09058 | **1.07932** | 15,888,516 | +| **Mean** | | | **1.08037** | 15,888,595 | + +## Architecture + +| Component | Setting | Source | +|-----------|---------|--------| +| Layers | 11 (512d, 8 GQA heads, 4 KV heads) | Baseline | +| MLP | 4× (2048) with LeakyReLU(0.5)² | [#493](https://github.com/openai/parameter-golf/pull/493) | +| Attention | FA3, GQA 2:1 | Baseline | +| RoPE | Partial (16/64 dims), base 10000 | [#315](https://github.com/openai/parameter-golf/pull/315) | +| U-Net skips | Encoder-decoder skip connections + skip gates | [#289](https://github.com/openai/parameter-golf/pull/289) | +| Parallel decoder | 2-lane parallel from layer 8+ | [#1530](https://github.com/openai/parameter-golf/pull/1530) | +| Depth recurrence | Loop layers 3-5, NUM_LOOPS=2 (17 virtual layers) | [#1344](https://github.com/openai/parameter-golf/pull/1344) | +| Logit softcap | 30 | Baseline | +| **Wider AttnOutGate** | Per-head output gate, **GATE_WIDTH=32** (vs standard 12) | [#1787](https://github.com/openai/parameter-golf/pull/1787) + **this work** | +| **SmearGate** | Position-mixing gate, width=32 | [#1667](https://github.com/openai/parameter-golf/pull/1667) | +| **Polar-Express Muon** | 5 NS steps, per-iter minimax tuples, momentum 0.97 | [#1344](https://github.com/openai/parameter-golf/pull/1344) | +| **MIN_LR floor** | 0.10 (warmdown LR floor) | [#1787](https://github.com/openai/parameter-golf/pull/1787) | +| Quantization | GPTQ int6 all weights (EMBED_BITS=6) + brotli-11 | | +| TTT | LoRA rank-96, 1 phase, 2000 prefix docs | [#1610](https://github.com/openai/parameter-golf/pull/1610) | +| Tokenizer | SP8192 CaseOps (bijective case markers) | [#1729](https://github.com/openai/parameter-golf/pull/1729) | + +## Key Innovation: Wider Attention Output Gates + +Standard AttnOutGate (PR #1787) uses 12 input dimensions from the residual stream to compute per-head gating: + +```python +gate_in = x_orig[:, :, :12] # standard: 12 dims +gate = 2.0 * sigmoid(linear(gate_in, gate_w)) # -> per-head scalar +y = attn_output * gate +``` + +We widen the gate input to 32 dimensions (`GATE_WIDTH=32`), giving each head a richer view: + +```python +gate_in = x_orig[:, :, :gate_w.shape[-1]] # wider: 32 dims +``` + +- Gate params per layer: 32 × 8 heads = 256 (vs 96 with width=12) +- Total extra params: 1,760 across 11 layers (float16 passthrough, negligible) +- **Pre-quant improvement: −0.002 BPB** vs width=12 + +The same widening is applied to SmearGate for consistency. 
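+
+For reference, here is a minimal, self-contained PyTorch sketch of the widened gate; it mirrors `_apply_attn_out_gate_inline` in `train_gpt.py`, with illustrative tensor shapes:
+
+```python
+import torch
+import torch.nn.functional as F
+
+B, T, dim = 2, 16, 512
+num_heads, head_dim, gate_width = 8, 64, 32
+
+x_orig = torch.randn(B, T, dim)              # residual-stream input to the block
+y = torch.randn(B, T, num_heads, head_dim)   # per-head attention output
+gate_w = torch.zeros(num_heads, gate_width)  # 8 heads x 32 dims = 256 gate params per layer
+
+def apply_attn_out_gate(y, x_orig, gate_w):
+    gate_in = x_orig[:, :, :gate_w.shape[-1]]              # first GATE_WIDTH dims of the residual
+    gate = 2.0 * torch.sigmoid(F.linear(gate_in, gate_w))  # (B, T, H), values in (0, 2)
+    return y * gate.unsqueeze(-1)                          # per-head scalar gate
+
+gated = apply_attn_out_gate(y, x_orig, gate_w)  # same shape as y
+```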
+ +## Training Configuration + +```bash +VOCAB_SIZE=8192 +DATA_PATH=./data/datasets/fineweb10B_sp8192_caseops +TOKENIZER_PATH=./data/tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model +MAX_WALLCLOCK_SECONDS=600 +POLAR_EXPRESS_NS=1 +LQER_ENABLED=0 +MIN_LR=0.10 +EMBED_BITS=6 +COMPRESSOR=brotli +ATTN_OUT_GATE=1 +SMEAR_GATE=1 +GATE_WIDTH=32 +``` + +## Reproduction + +```bash +pip install torch>=2.9.0 sentencepiece brotli triton +python prepare_caseops_data.py +torchrun --standalone --nproc_per_node=8 train_gpt.py +``` diff --git a/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/lossless_caps.py b/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/lossless_caps.py new file mode 100644 index 0000000000..3c57be54d8 --- /dev/null +++ b/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/lossless_caps.py @@ -0,0 +1,495 @@ +"""Lossless capitalization pre-encoding helpers. + +This module provides a narrow, reversible transform that only touches +ASCII capital letters `A-Z`. Each uppercase ASCII letter is rewritten as +``, where `sentinel` is a private-use Unicode +character that is escaped by doubling if it appears literally in the +input text. + +Example with the default sentinel `\\uE000`: + + "The NASA Launch" -> "\\uE000the \\uE000n\\uE000a\\uE000s\\uE000a \\uE000launch" + +The transform is intentionally simple for v1: + +- lowercase ASCII letters are unchanged +- uppercase ASCII letters become sentinel + lowercase letter +- non-ASCII characters are left untouched +- literal sentinel characters are escaped as sentinel + sentinel + +This makes the transform exactly invertible while allowing a downstream +tokenizer to reuse lowercase subwords across case variants. 
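+
+As a quick sanity check, the round trip through the v1 helpers defined
+below recovers the original text exactly:
+
+    encoded = encode_lossless_caps_v1("The NASA Launch")
+    assert decode_lossless_caps_v1(encoded) == "The NASA Launch"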
+""" + +from __future__ import annotations + +import json +from pathlib import Path +from typing import Callable, Iterable + +LOSSLESS_CAPS_V1 = "lossless_caps_v1" +LOSSLESS_CAPS_V2 = "lossless_caps_v2" +LOSSLESS_CAPS_V3 = "lossless_caps_v3" +LOSSLESS_CAPS_V4 = "lossless_caps_v4" +LOSSLESS_CAPS_V5 = "lossless_caps_v5" +LOSSLESS_CAPS_V6 = "lossless_caps_v6" +LOSSLESS_CAPS_V7 = "lossless_caps_v7" +LOSSLESS_CAPS_CASEOPS_V1 = "lossless_caps_caseops_v1" +IDENTITY = "identity" +DEFAULT_SENTINEL = "\uE000" +DEFAULT_V2_TITLE = "\uE001" +DEFAULT_V2_ALLCAPS = "\uE002" +DEFAULT_V2_CAPNEXT = "\uE003" +DEFAULT_V2_ESC = "\uE004" +DEFAULT_V5_TITLE_MIN_LEN = 7 +DEFAULT_V6_ALLCAPS_MIN_LEN = 3 +DEFAULT_V7_ALLCAPS_MIN_LEN = 4 + + +class LosslessCapsError(ValueError): + """Raised when a transformed string is malformed.""" + + +def _is_ascii_upper(ch: str) -> bool: + return "A" <= ch <= "Z" + + +def _is_ascii_lower(ch: str) -> bool: + return "a" <= ch <= "z" + + +def _is_ascii_alpha(ch: str) -> bool: + return _is_ascii_lower(ch) or _is_ascii_upper(ch) + + +def _validate_distinct_single_chars(*chars: str) -> None: + if any(len(ch) != 1 for ch in chars): + raise ValueError("all control characters must be exactly one character") + if len(set(chars)) != len(chars): + raise ValueError("control characters must be distinct") + + +def encode_lossless_caps_v1(text: str, *, sentinel: str = DEFAULT_SENTINEL) -> str: + """Encode ASCII capitals reversibly using a one-character sentinel.""" + if len(sentinel) != 1: + raise ValueError("sentinel must be exactly one character") + out: list[str] = [] + for ch in text: + if ch == sentinel: + out.append(sentinel) + out.append(sentinel) + elif _is_ascii_upper(ch): + out.append(sentinel) + out.append(ch.lower()) + else: + out.append(ch) + return "".join(out) + + +def decode_lossless_caps_v1(text: str, *, sentinel: str = DEFAULT_SENTINEL) -> str: + """Decode the `lossless_caps_v1` transform back to the original text.""" + if len(sentinel) != 1: + raise ValueError("sentinel must be exactly one character") + out: list[str] = [] + i = 0 + n = len(text) + while i < n: + ch = text[i] + if ch != sentinel: + out.append(ch) + i += 1 + continue + if i + 1 >= n: + raise LosslessCapsError("dangling capitalization sentinel at end of string") + nxt = text[i + 1] + if nxt == sentinel: + out.append(sentinel) + elif _is_ascii_lower(nxt): + out.append(nxt.upper()) + else: + raise LosslessCapsError( + f"invalid sentinel escape sequence {sentinel + nxt!r}; " + "expected doubled sentinel or sentinel + lowercase ASCII letter" + ) + i += 2 + return "".join(out) + + +def encode_lossless_caps_v2( + text: str, + *, + title: str = DEFAULT_V2_TITLE, + allcaps: str = DEFAULT_V2_ALLCAPS, + capnext: str = DEFAULT_V2_CAPNEXT, + esc: str = DEFAULT_V2_ESC, +) -> str: + """Encode ASCII word capitalization with cheap word-level markers. 
+ + Rules over maximal ASCII alphabetic runs: + - lowercase words stay unchanged + - TitleCase words become `title + lowercase(word)` + - ALLCAPS words become `allcaps + lowercase(word)` + - mixed-case words use: + - optional `title` when the first letter is uppercase + - `capnext + lowercase(letter)` for subsequent uppercase letters + - literal control characters are escaped as `esc + literal` + """ + _validate_distinct_single_chars(title, allcaps, capnext, esc) + controls = {title, allcaps, capnext, esc} + out: list[str] = [] + i = 0 + n = len(text) + while i < n: + ch = text[i] + if ch in controls: + out.append(esc) + out.append(ch) + i += 1 + continue + if not _is_ascii_alpha(ch): + out.append(ch) + i += 1 + continue + + j = i + 1 + while j < n and _is_ascii_alpha(text[j]): + j += 1 + word = text[i:j] + lower_word = word.lower() + + if word.islower(): + out.append(word) + elif len(word) >= 2 and word.isupper(): + out.append(allcaps) + out.append(lower_word) + elif _is_ascii_upper(word[0]) and word[1:].islower(): + out.append(title) + out.append(lower_word) + else: + if _is_ascii_upper(word[0]): + out.append(title) + out.append(lower_word[0]) + for orig_ch, lower_ch in zip(word[1:], lower_word[1:], strict=True): + if _is_ascii_upper(orig_ch): + out.append(capnext) + out.append(lower_ch) + i = j + return "".join(out) + + +def decode_lossless_caps_v2( + text: str, + *, + title: str = DEFAULT_V2_TITLE, + allcaps: str = DEFAULT_V2_ALLCAPS, + capnext: str = DEFAULT_V2_CAPNEXT, + esc: str = DEFAULT_V2_ESC, +) -> str: + """Decode the `lossless_caps_v2` transform back to the original text.""" + _validate_distinct_single_chars(title, allcaps, capnext, esc) + out: list[str] = [] + pending_escape = False + pending_word_mode: str | None = None + active_allcaps = False + pending_capnext = False + in_ascii_word = False + + for ch in text: + if pending_escape: + if pending_word_mode is not None and not _is_ascii_alpha(ch): + raise LosslessCapsError("escaped control char cannot satisfy pending word capitalization mode") + out.append(ch) + pending_escape = False + if _is_ascii_alpha(ch): + in_ascii_word = True + else: + in_ascii_word = False + active_allcaps = False + continue + + if ch == esc: + pending_escape = True + continue + if ch == title: + if pending_word_mode is not None or in_ascii_word or pending_capnext: + raise LosslessCapsError("invalid title marker placement") + pending_word_mode = "title" + continue + if ch == allcaps: + if pending_word_mode is not None or in_ascii_word or pending_capnext: + raise LosslessCapsError("invalid allcaps marker placement") + pending_word_mode = "allcaps" + continue + if ch == capnext: + if pending_capnext: + raise LosslessCapsError("duplicate capnext marker") + pending_capnext = True + continue + + if _is_ascii_alpha(ch): + at_word_start = not in_ascii_word + if at_word_start: + if pending_word_mode == "allcaps": + out.append(ch.upper()) + active_allcaps = True + elif pending_word_mode == "title": + out.append(ch.upper()) + elif pending_capnext: + out.append(ch.upper()) + else: + out.append(ch) + pending_word_mode = None + pending_capnext = False + in_ascii_word = True + continue + + if pending_word_mode is not None: + raise LosslessCapsError("word capitalization marker leaked into the middle of a word") + if active_allcaps: + out.append(ch.upper()) + elif pending_capnext: + out.append(ch.upper()) + else: + out.append(ch) + pending_capnext = False + continue + + if pending_word_mode is not None or pending_capnext: + raise LosslessCapsError("capitalization 
marker not followed by an ASCII letter") + out.append(ch) + in_ascii_word = False + active_allcaps = False + + if pending_escape: + raise LosslessCapsError("dangling escape marker at end of string") + if pending_word_mode is not None or pending_capnext: + raise LosslessCapsError("dangling capitalization marker at end of string") + return "".join(out) + + +def encode_lossless_caps_v3( + text: str, + *, + title: str = DEFAULT_V2_TITLE, + allcaps: str = DEFAULT_V2_ALLCAPS, + esc: str = DEFAULT_V2_ESC, +) -> str: + """Encode only common word-level capitalization patterns. + + Rules over maximal ASCII alphabetic runs: + - lowercase words stay unchanged + - TitleCase words become `title + lowercase(word)` + - ALLCAPS words become `allcaps + lowercase(word)` + - all other mixed-case words are left unchanged + - literal control characters are escaped as `esc + literal` + """ + _validate_distinct_single_chars(title, allcaps, esc) + controls = {title, allcaps, esc} + out: list[str] = [] + i = 0 + n = len(text) + while i < n: + ch = text[i] + if ch in controls: + out.append(esc) + out.append(ch) + i += 1 + continue + if not _is_ascii_alpha(ch): + out.append(ch) + i += 1 + continue + + j = i + 1 + while j < n and _is_ascii_alpha(text[j]): + j += 1 + word = text[i:j] + + if word.islower(): + out.append(word) + elif len(word) >= 2 and word.isupper(): + out.append(allcaps) + out.append(word.lower()) + elif _is_ascii_upper(word[0]) and word[1:].islower(): + out.append(title) + out.append(word.lower()) + else: + out.append(word) + i = j + return "".join(out) + + +def decode_lossless_caps_v3( + text: str, + *, + title: str = DEFAULT_V2_TITLE, + allcaps: str = DEFAULT_V2_ALLCAPS, + esc: str = DEFAULT_V2_ESC, +) -> str: + """Decode the `lossless_caps_v3` transform back to the original text.""" + _validate_distinct_single_chars(title, allcaps, esc) + out: list[str] = [] + pending_escape = False + pending_word_mode: str | None = None + active_allcaps = False + in_ascii_word = False + + for ch in text: + if pending_escape: + if pending_word_mode is not None and not _is_ascii_alpha(ch): + raise LosslessCapsError("escaped control char cannot satisfy pending word capitalization mode") + out.append(ch) + pending_escape = False + if _is_ascii_alpha(ch): + in_ascii_word = True + else: + in_ascii_word = False + active_allcaps = False + continue + + if ch == esc: + pending_escape = True + continue + if ch == title: + if pending_word_mode is not None or in_ascii_word: + raise LosslessCapsError("invalid title marker placement") + pending_word_mode = "title" + continue + if ch == allcaps: + if pending_word_mode is not None or in_ascii_word: + raise LosslessCapsError("invalid allcaps marker placement") + pending_word_mode = "allcaps" + continue + + if _is_ascii_alpha(ch): + at_word_start = not in_ascii_word + if at_word_start: + if pending_word_mode == "allcaps": + out.append(ch.upper()) + active_allcaps = True + elif pending_word_mode == "title": + out.append(ch.upper()) + else: + out.append(ch) + pending_word_mode = None + in_ascii_word = True + continue + + if pending_word_mode is not None: + raise LosslessCapsError("word capitalization marker leaked into the middle of a word") + out.append(ch.upper() if active_allcaps else ch) + continue + + if pending_word_mode is not None: + raise LosslessCapsError("capitalization marker not followed by an ASCII letter") + out.append(ch) + in_ascii_word = False + active_allcaps = False + + if pending_escape: + raise LosslessCapsError("dangling escape marker at end of string") + if 
pending_word_mode is not None: + raise LosslessCapsError("dangling capitalization marker at end of string") + return "".join(out) + + +def encode_lossless_caps_v4( + text: str, + *, + allcaps: str = DEFAULT_V2_ALLCAPS, + esc: str = DEFAULT_V2_ESC, +) -> str: + """Encode only ALLCAPS ASCII words, leaving all other case untouched.""" + _validate_distinct_single_chars(allcaps, esc) + controls = {allcaps, esc} + out: list[str] = [] + i = 0 + n = len(text) + while i < n: + ch = text[i] + if ch in controls: + out.append(esc) + out.append(ch) + i += 1 + continue + if not _is_ascii_alpha(ch): + out.append(ch) + i += 1 + continue + j = i + 1 + while j < n and _is_ascii_alpha(text[j]): + j += 1 + word = text[i:j] + if len(word) >= 2 and word.isupper(): + out.append(allcaps) + out.append(word.lower()) + else: + out.append(word) + i = j + return "".join(out) + + +def decode_lossless_caps_v4( + text: str, + *, + allcaps: str = DEFAULT_V2_ALLCAPS, + esc: str = DEFAULT_V2_ESC, +) -> str: + """Decode the `lossless_caps_v4` transform back to the original text.""" + _validate_distinct_single_chars(allcaps, esc) + out: list[str] = [] + pending_escape = False + pending_allcaps = False + in_ascii_word = False + active_allcaps = False + + for ch in text: + if pending_escape: + if pending_allcaps and not _is_ascii_alpha(ch): + raise LosslessCapsError("escaped control char cannot satisfy pending allcaps mode") + out.append(ch) + pending_escape = False + if _is_ascii_alpha(ch): + in_ascii_word = True + else: + in_ascii_word = False + active_allcaps = False + continue + + if ch == esc: + pending_escape = True + continue + if ch == allcaps: + if pending_allcaps or in_ascii_word: + raise LosslessCapsError("invalid allcaps marker placement") + pending_allcaps = True + continue + + if _is_ascii_alpha(ch): + if not in_ascii_word: + active_allcaps = pending_allcaps + pending_allcaps = False + in_ascii_word = True + out.append(ch.upper() if active_allcaps else ch) + continue + + if pending_allcaps: + raise LosslessCapsError("allcaps marker not followed by an ASCII letter") + out.append(ch) + in_ascii_word = False + active_allcaps = False + + if pending_escape: + raise LosslessCapsError("dangling escape marker at end of string") + if pending_allcaps: + raise LosslessCapsError("dangling allcaps marker at end of string") + return "".join(out) + + +def encode_lossless_caps_v5( + text: str, + *, + title: str = DEFAULT_V2_TITLE, + allcaps: str = DEFAULT_V2_ALLCAPS, + esc: str = DEFAULT_V2_ESC, + title_min_len: int = DEFAULT_V5_TITLE_MIN_LEN, diff --git a/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/prepare_caseops_data.py b/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/prepare_caseops_data.py new file mode 100644 index 0000000000..a6ed58446e --- /dev/null +++ b/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/prepare_caseops_data.py @@ -0,0 +1,473 @@ +"""Prepare CaseOps-tokenized FineWeb shards + per-token byte sidecar. + +CaseOps (``lossless_caps_caseops_v1``) is a bijective, character-level text +transform that introduces four operator tokens in place of explicit +capitalization: TITLE, ALLCAPS, CAPNEXT, ESC. The transform is fully +reversible — no information is lost relative to the untransformed UTF-8 +text, so BPB stays computable on TRUE byte counts. + +Forward pipeline: + 1. Read the canonical FineWeb-10B doc stream (``docs_selected.jsonl`` + produced by ``data/download_hf_docs_and_tokenize.py`` in the root repo). + 2. 
Apply ``encode_lossless_caps_v2`` (the caseops_v1 alias) to each doc. + 3. Tokenize with the shipped SP model + ``tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model`` + (reserves TITLE/ALLCAPS/CAPNEXT/ESC + sentinel as user_defined_symbols). + 4. Write uint16 train/val shards (``fineweb_{train,val}_XXXXXX.bin``). + 5. For the VAL stream only, emit per-token byte sidecar shards + (``fineweb_val_bytes_XXXXXX.bin``, uint16 parallel arrays) that record + each token's ORIGINAL pre-transform UTF-8 byte count. BPB is computed + from these canonical bytes so the score is on the untransformed text + (not the transformed representation). + +Output layout — matches what ``train_gpt.py`` expects under +``DATA_DIR=./data`` with ``CASEOPS_ENABLED=1``: + + data/datasets/fineweb10B_sp8192_caseops/datasets/ + tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model + datasets/fineweb10B_sp8192_lossless_caps_caseops_v1_reserved/ + fineweb_train_000000.bin + fineweb_train_000001.bin + ... + fineweb_val_000000.bin + fineweb_val_bytes_000000.bin + +Usage: + + python3 prepare_caseops_data.py \\ + --docs ./fineweb10B_raw/docs_selected.jsonl \\ + --out ./data/datasets/fineweb10B_sp8192_caseops/datasets \\ + --sp ./tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model + +Requirements: sentencepiece, numpy. CPU-only. Runs once; reused across seeds. +""" +from __future__ import annotations + +import argparse +import json +import pathlib +import struct +import sys + +import numpy as np +import sentencepiece as spm + +# Local import — lossless_caps.py ships next to this script. +sys.path.insert(0, str(pathlib.Path(__file__).resolve().parent)) +from lossless_caps import ( # noqa: E402 + LOSSLESS_CAPS_CASEOPS_V1, + encode_lossless_caps_v2, + surface_piece_original_byte_counts, +) + + +SHARD_MAGIC = 20240520 +SHARD_VERSION = 1 +SHARD_TOKENS = 10_000_000 # tokens per shard — matches the main pipeline +BOS_ID = 1 # SP model's control token; train_gpt.py:_find_docs requires BOS per doc + + +def _write_shard(out_path: pathlib.Path, arr: np.ndarray) -> None: + """Write a uint16 shard in the standard header-prefixed format.""" + assert arr.dtype == np.uint16 + header = np.zeros(256, dtype=np.int32) + header[0] = SHARD_MAGIC + header[1] = SHARD_VERSION + header[2] = int(arr.size) + with out_path.open("wb") as fh: + fh.write(header.tobytes()) + fh.write(arr.tobytes()) + + +def _iter_docs(docs_path: pathlib.Path): + """Yield doc strings from a jsonl file (one json object per line).""" + with docs_path.open("r", encoding="utf-8") as fh: + for line in fh: + line = line.strip() + if not line: + continue + obj = json.loads(line) + # Support both {"text": ...} and raw strings. + yield obj["text"] if isinstance(obj, dict) else obj + + +def _token_original_byte_counts( + sp: spm.SentencePieceProcessor, + original_text: str, + transformed_text: str, +) -> np.ndarray: + """Per-token canonical (pre-transform) UTF-8 byte counts. + + Delegates to ``surface_piece_original_byte_counts`` in ``lossless_caps.py`` + — the canonical exporter used by the PR #1729 / HF-hosted CaseOps dataset. + Operator pieces (U+E001..U+E004) contribute 0 original bytes; letter pieces + contribute their pre-transform UTF-8 byte count. 
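+
+    Illustrative sketch (actual piece boundaries depend on the SP model):
+    the word "NASA" is transformed to the ALLCAPS operator followed by
+    "nasa"; if the tokenizer emits the operator as its own piece and "nasa"
+    as a second piece, the returned counts are [0, 4], i.e. 0 bytes for the
+    operator piece and 4 for the original UTF-8 bytes of "NASA".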
+ """ + proto = sp.encode_as_immutable_proto(transformed_text) + byte_counts = surface_piece_original_byte_counts( + (piece.surface for piece in proto.pieces), + text_transform_name=LOSSLESS_CAPS_CASEOPS_V1, + ) + return np.asarray(list(byte_counts), dtype=np.uint16) + + +def main() -> None: + ap = argparse.ArgumentParser(description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter) + ap.add_argument("--docs", required=True, type=pathlib.Path, help="Path to docs_selected.jsonl") + ap.add_argument("--out", required=True, type=pathlib.Path, help="Output datasets dir") + ap.add_argument("--sp", required=True, type=pathlib.Path, help="Path to CaseOps SP model") + ap.add_argument("--val-docs", type=int, default=10_000, help="Validation docs count") + args = ap.parse_args() + + sp = spm.SentencePieceProcessor(model_file=str(args.sp)) + print(f"loaded sp: vocab={sp.vocab_size()}", flush=True) + + train_out = args.out / "datasets" / "fineweb10B_sp8192_lossless_caps_caseops_v1_reserved" + train_out.mkdir(parents=True, exist_ok=True) + + val_buf_tokens: list[int] = [] + val_buf_bytes: list[int] = [] + train_buf: list[int] = [] + val_written = 0 + train_written = 0 + n_docs = 0 + + for text in _iter_docs(args.docs): + transformed = encode_lossless_caps_v2(text) + token_ids = [BOS_ID] + sp.encode(transformed, out_type=int) + if n_docs < args.val_docs: + # Validation doc — also compute byte sidecar + byte_counts = _token_original_byte_counts(sp, text, transformed) + val_buf_tokens.extend(token_ids) + val_buf_bytes.append(0) # BOS contributes 0 original bytes + val_buf_bytes.extend(int(b) for b in byte_counts) + if len(val_buf_tokens) >= SHARD_TOKENS: + _write_shard(train_out / f"fineweb_val_{val_written:06d}.bin", + np.array(val_buf_tokens[:SHARD_TOKENS], dtype=np.uint16)) + _write_shard(train_out / f"fineweb_val_bytes_{val_written:06d}.bin", + np.array(val_buf_bytes[:SHARD_TOKENS], dtype=np.uint16)) + val_buf_tokens = val_buf_tokens[SHARD_TOKENS:] + val_buf_bytes = val_buf_bytes[SHARD_TOKENS:] + val_written += 1 + else: + train_buf.extend(token_ids) + if len(train_buf) >= SHARD_TOKENS: + _write_shard(train_out / f"fineweb_train_{train_written:06d}.bin", + np.array(train_buf[:SHARD_TOKENS], dtype=np.uint16)) + train_buf = train_buf[SHARD_TOKENS:] + train_written += 1 + n_docs += 1 + if n_docs % 10_000 == 0: + print(f" processed {n_docs} docs train_shards={train_written} val_shards={val_written}", flush=True) + + # Flush tail buffers into final (possibly short) shards. + if val_buf_tokens: + _write_shard(train_out / f"fineweb_val_{val_written:06d}.bin", + np.array(val_buf_tokens, dtype=np.uint16)) + _write_shard(train_out / f"fineweb_val_bytes_{val_written:06d}.bin", + np.array(val_buf_bytes, dtype=np.uint16)) + if train_buf: + _write_shard(train_out / f"fineweb_train_{train_written:06d}.bin", + np.array(train_buf, dtype=np.uint16)) + + print(f"done. docs={n_docs} train_shards={train_written + (1 if train_buf else 0)} val_shards={val_written + (1 if val_buf_tokens else 0)}") + + +if __name__ == "__main__": + main() +# Python deps. 
Install with: pip install -r requirements.txt +torch==2.9.1+cu128 +sentencepiece +brotli +huggingface_hub +numpy +python-minifier + +# FlashAttention 3 must be installed separately (not on PyPI): +# pip install --no-deps flash_attn_3 --find-links https://windreamer.github.io/flash-attention3-wheels/cu128_torch291/ + +# System dep (apt): lrzip (used by per-group compressor) +# apt-get install -y lrzip +{ + "author": "Benjamin Hadad", + "github_id": "codemath3000", + "name": "11L XSA + LQER + SparseAttnGate + SmearGate (BOS-fixed) + PolarNS Muon + 9-hparam stack", + "blurb": "11L 512d 8H/4KV transformer with U-Net skips, parallel decoder, partial RoPE, Polar-Express Newton-Schulz Muon (5 steps), LQER asymmetric int4 rank-4 quant correction, sparse attention head-output gate (gate_window=12), SmearGate position-mixing (with cross-document leak fix on BOS positions), fused LeakyReLU-square MLP, fused softcapped CE Triton kernel, GPTQ int6 + int7 embed + per-row int8 attn-gate quantization, per-group lrzip+brotli compression, phased TTT eval (3 phases, prefix=2500 docs). Plus 9 greedy-validated hyperparameter overrides on top of the published baseline. 3-seed mean: 1.06107587 BPB, beating the current official leaderboard (1.0810 BPB) by 0.01992 BPB / 0.04359 nats.", + "date": "2026-04-27", + "track": "10min_16mb", + "val_loss": 2.32202732, + "val_bpb": 1.06107587, + "val_loss_std": 0.00198, + "val_bpb_std": 0.00090, + "seeds": [42, 0, 1234], + "seed_results": { + "42": { + "val_loss": 2.31944212, + "val_bpb": 1.05989454, + "artifact_bytes": 15897259, + "steps": 4945, + "step_avg_ms": 121.3, + "eval_time_s": 508.8 + }, + "0": { + "val_loss": 2.32239991, + "val_bpb": 1.06124613, + "artifact_bytes": 15900947, + "steps": 4932, + "step_avg_ms": 121.7, + "eval_time_s": 455.1 + }, + "1234": { + "val_loss": 2.32423994, + "val_bpb": 1.06208695, + "artifact_bytes": 15907550, + "steps": 4917, + "step_avg_ms": 122.0, + "eval_time_s": 470.0 + } + }, + "comparison_baseline_bpb": 1.0810, + "delta_vs_leaderboard_bpb": -0.01992, + "delta_vs_leaderboard_nats": -0.04359, + "artifact_bytes_mean": 15901919, + "artifact_bytes_max": 15907550, + "bytes_total": 15907550, + "train_steps_mean": 4931.33, + "step_avg_ms_mean": 121.7, + "hardware": "8xH100 80GB SXM", + "pytorch_version": "2.9.1+cu128", + "cuda_version": "12.8", + "flash_attn_version": "FA3 (cu128_torch291 wheel)", + "technique_summary": "11L XSA + LQER int4-rank4 + SparseAttnGate + BOS-fixed SmearGate + Polar-Express Muon + per-group lrzip compression + 9-hparam greedy stack (clips_tighter, BETA2=0.99, TTT_BETA2=0.99, TTT_WEIGHT_DECAY=0.5, TTT_LORA_RANK=80, SPARSE_ATTN_GATE_SCALE=0.5, PHASED_TTT_PREFIX_DOCS=2500, WARMDOWN_FRAC=0.85)" +} +import base64, collections, copy, fcntl, glob, io, lzma, math, os +from pathlib import Path +import random, re, subprocess, sys, time, uuid, numpy as np, sentencepiece as spm, torch, torch.distributed as dist, torch.nn.functional as F +from torch import Tensor, nn +from flash_attn_interface import ( + flash_attn_func as flash_attn_3_func, + flash_attn_varlen_func, +) +from concurrent.futures import ThreadPoolExecutor +import triton +import triton.language as tl +from triton.tools.tensor_descriptor import TensorDescriptor + + +# ===== Fused softcapped cross-entropy (Triton) — training-only path ===== +# Replaces the eager +# logits_softcap = softcap * tanh(logits / softcap) +# F.cross_entropy(logits_softcap.float(), targets, reduction="mean") +# sequence with a single fused kernel that reads logits_proj once, applies +# 
softcap in-register, and computes (LSE, loss) in one streaming pass. The +# backward kernel mirrors the forward so there's no stored softcapped logits. +# Numerically identical to the eager path up to fp32 accumulation differences. +_FUSED_CE_LIBRARY = "pgsubmission1draft7fusedce" +_FUSED_CE_BLOCK_SIZE = 1024 +_FUSED_CE_NUM_WARPS = 4 + + +@triton.jit +def _softcapped_ce_fwd_kernel( + logits_ptr, losses_ptr, lse_ptr, targets_ptr, + stride_logits_n, stride_logits_v, + n_rows, n_cols, softcap, + block_size: tl.constexpr, +): + row_idx = tl.program_id(0).to(tl.int64) + logits_row_ptr = logits_ptr + row_idx * stride_logits_n + max_val = -float("inf") + sum_exp = 0.0 + A = 2.0 * softcap + inv_C = 2.0 / softcap + for off in range(0, n_cols, block_size): + cols = off + tl.arange(0, block_size) + mask = cols < n_cols + val = tl.load( + logits_row_ptr + cols * stride_logits_v, + mask=mask, other=-float("inf"), + ).to(tl.float32) + z = A * tl.sigmoid(val * inv_C) + z = tl.where(mask, z, -float("inf")) + curr_max = tl.max(z, axis=0) + new_max = tl.maximum(max_val, curr_max) + sum_exp = sum_exp * tl.exp(max_val - new_max) + tl.sum(tl.exp(z - new_max), axis=0) + max_val = new_max + lse = max_val + tl.log(sum_exp) + tl.store(lse_ptr + row_idx, lse) + target = tl.load(targets_ptr + row_idx).to(tl.int32) + target_val = tl.load(logits_row_ptr + target * stride_logits_v).to(tl.float32) + target_z = A * tl.sigmoid(target_val * inv_C) + tl.store(losses_ptr + row_idx, lse - target_z) + + +@triton.jit +def _softcapped_ce_bwd_kernel( + grad_logits_ptr, grad_losses_ptr, lse_ptr, logits_ptr, targets_ptr, + stride_logits_n, stride_logits_v, + stride_grad_n, stride_grad_v, + n_rows, n_cols, softcap, + block_size: tl.constexpr, +): + row_idx = tl.program_id(0).to(tl.int64) + logits_row_ptr = logits_ptr + row_idx * stride_logits_n + grad_row_ptr = grad_logits_ptr + row_idx * stride_grad_n + lse = tl.load(lse_ptr + row_idx) + grad_loss = tl.load(grad_losses_ptr + row_idx).to(tl.float32) + target = tl.load(targets_ptr + row_idx).to(tl.int32) + A = 2.0 * softcap + inv_C = 2.0 / softcap + dz_dx_scale = A * inv_C + for off in range(0, n_cols, block_size): + cols = off + tl.arange(0, block_size) + mask = cols < n_cols + val = tl.load( + logits_row_ptr + cols * stride_logits_v, + mask=mask, other=0.0, + ).to(tl.float32) + sigmoid_u = tl.sigmoid(val * inv_C) + z = A * sigmoid_u + probs = tl.exp(z - lse) + grad_z = grad_loss * (probs - tl.where(cols == target, 1.0, 0.0)) + grad_x = grad_z * (dz_dx_scale * sigmoid_u * (1.0 - sigmoid_u)) + tl.store(grad_row_ptr + cols * stride_grad_v, grad_x, mask=mask) + + +def _validate_softcapped_ce_inputs( + logits: Tensor, targets: Tensor, softcap: float, +) -> tuple[Tensor, Tensor]: + if logits.ndim != 2: + raise ValueError(f"Expected logits.ndim=2, got {logits.ndim}") + if targets.ndim != 1: + raise ValueError(f"Expected targets.ndim=1, got {targets.ndim}") + if logits.shape[0] != targets.shape[0]: + raise ValueError( + f"Expected matching rows, got logits={tuple(logits.shape)} targets={tuple(targets.shape)}" + ) + if not logits.is_cuda or not targets.is_cuda: + raise ValueError("softcapped_cross_entropy requires CUDA tensors") + if softcap <= 0.0: + raise ValueError(f"softcap must be positive, got {softcap}") + if logits.dtype not in (torch.float16, torch.bfloat16, torch.float32): + raise ValueError(f"Unsupported logits dtype: {logits.dtype}") + logits = logits.contiguous() + targets = targets.contiguous() + if targets.dtype != torch.int64: + targets = targets.to(dtype=torch.int64) + 
return logits, targets + + +@torch.library.custom_op(f"{_FUSED_CE_LIBRARY}::softcapped_ce", mutates_args=()) +def softcapped_ce_op(logits: Tensor, targets: Tensor, softcap: float) -> tuple[Tensor, Tensor]: + logits, targets = _validate_softcapped_ce_inputs(logits, targets, float(softcap)) + n_rows, n_cols = logits.shape + losses = torch.empty((n_rows,), device=logits.device, dtype=torch.float32) + lse = torch.empty((n_rows,), device=logits.device, dtype=torch.float32) + _softcapped_ce_fwd_kernel[(n_rows,)]( + logits, losses, lse, targets, + logits.stride(0), logits.stride(1), + n_rows, n_cols, float(softcap), + block_size=_FUSED_CE_BLOCK_SIZE, num_warps=_FUSED_CE_NUM_WARPS, + ) + return losses, lse + + +@softcapped_ce_op.register_fake +def _(logits: Tensor, targets: Tensor, softcap: float): + if logits.ndim != 2 or targets.ndim != 1: + raise ValueError("softcapped_ce fake impl expects 2D logits and 1D targets") + if logits.shape[0] != targets.shape[0]: + raise ValueError( + f"Expected matching rows, got logits={tuple(logits.shape)} targets={tuple(targets.shape)}" + ) + n_rows = logits.shape[0] + return ( + logits.new_empty((n_rows,), dtype=torch.float32), + logits.new_empty((n_rows,), dtype=torch.float32), + ) + + +@torch.library.custom_op(f"{_FUSED_CE_LIBRARY}::softcapped_ce_backward", mutates_args=()) +def softcapped_ce_backward_op( + logits: Tensor, targets: Tensor, lse: Tensor, grad_losses: Tensor, softcap: float, +) -> Tensor: + logits, targets = _validate_softcapped_ce_inputs(logits, targets, float(softcap)) + lse = lse.contiguous() + grad_losses = grad_losses.contiguous().to(dtype=torch.float32) + if lse.ndim != 1 or grad_losses.ndim != 1: + raise ValueError("Expected 1D lse and grad_losses") + if lse.shape[0] != logits.shape[0] or grad_losses.shape[0] != logits.shape[0]: + raise ValueError( + f"Expected row-aligned lse/grad_losses, got logits={tuple(logits.shape)} " + f"lse={tuple(lse.shape)} grad_losses={tuple(grad_losses.shape)}" + ) + grad_logits = torch.empty_like(logits) + n_rows, n_cols = logits.shape + _softcapped_ce_bwd_kernel[(n_rows,)]( + grad_logits, grad_losses, lse, logits, targets, + logits.stride(0), logits.stride(1), + grad_logits.stride(0), grad_logits.stride(1), + n_rows, n_cols, float(softcap), + block_size=_FUSED_CE_BLOCK_SIZE, num_warps=_FUSED_CE_NUM_WARPS, + ) + return grad_logits + + +@softcapped_ce_backward_op.register_fake +def _(logits: Tensor, targets: Tensor, lse: Tensor, grad_losses: Tensor, softcap: float): + if logits.ndim != 2 or targets.ndim != 1 or lse.ndim != 1 or grad_losses.ndim != 1: + raise ValueError("softcapped_ce_backward fake impl expects 2D logits and 1D row tensors") + if ( + logits.shape[0] != targets.shape[0] + or logits.shape[0] != lse.shape[0] + or logits.shape[0] != grad_losses.shape[0] + ): + raise ValueError("softcapped_ce_backward fake impl expects row-aligned tensors") + return logits.new_empty(logits.shape) + + +def _softcapped_ce_setup_context( + ctx: torch.autograd.function.FunctionCtx, inputs, output, +) -> None: + logits, targets, softcap = inputs + _losses, lse = output + ctx.save_for_backward(logits, targets, lse) + ctx.softcap = float(softcap) + + +def _softcapped_ce_backward( + ctx: torch.autograd.function.FunctionCtx, grad_losses: Tensor, grad_lse: "Tensor | None", +): + del grad_lse + logits, targets, lse = ctx.saved_tensors + grad_logits = torch.ops.pgsubmission1draft7fusedce.softcapped_ce_backward( + logits, targets, lse, grad_losses, ctx.softcap + ) + return grad_logits, None, None + + 
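+
+# Sanity-check sketch (not executed anywhere in this file): the fused op is
+# meant to match the eager softcap + cross-entropy path quoted in the header
+# comment, up to fp32 accumulation. On a CUDA device one could verify:
+#
+#   logits = torch.randn(4096, 8192, device="cuda", dtype=torch.bfloat16)
+#   targets = torch.randint(0, 8192, (4096,), device="cuda")
+#   fused = softcapped_cross_entropy(logits, targets, softcap=30.0)
+#   eager = F.cross_entropy(30.0 * torch.tanh(logits.float() / 30.0), targets)
+#   assert torch.allclose(fused, eager, atol=1e-3)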
+softcapped_ce_op.register_autograd( + _softcapped_ce_backward, setup_context=_softcapped_ce_setup_context, +) + + +def softcapped_cross_entropy( + logits: Tensor, targets: Tensor, softcap: float, reduction: str = "mean", +) -> Tensor: + losses, _lse = torch.ops.pgsubmission1draft7fusedce.softcapped_ce( + logits, targets, float(softcap) + ) + if reduction == "none": + return losses + if reduction == "sum": + return losses.sum() + if reduction == "mean": + return losses.mean() + raise ValueError(f"Unsupported reduction={reduction!r}") + + +class Hyperparameters: + data_dir = os.environ.get("DATA_DIR", "./data/") + seed = int(os.environ.get("SEED", 1337)) + run_id = os.environ.get("RUN_ID", str(uuid.uuid4())) + iterations = int(os.environ.get("ITERATIONS", 20000)) diff --git a/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/requirements.txt b/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/requirements.txt new file mode 100644 index 0000000000..b519ca5e4f --- /dev/null +++ b/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/requirements.txt @@ -0,0 +1,4 @@ +torch>=2.9.0 +sentencepiece +brotli +triton diff --git a/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/submission.json b/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/submission.json new file mode 100644 index 0000000000..a260ac1efe --- /dev/null +++ b/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/submission.json @@ -0,0 +1,41 @@ +{ + "author": "Kamil Krawczyk", + "github_id": "bsisduck", + "name": "11L XSA + WiderGate32 + SmearGate + PolarNS Muon + CaseOps SP8192", + "blurb": "11L 512d 8H/4KV transformer with U-Net skips, parallel decoder, partial RoPE, Polar-Express Newton-Schulz Muon (5 steps), wider attention output gates (GATE_WIDTH=32 vs standard 12, giving per-head gates a richer 32-dim view of the residual stream), SmearGate position-mixing (width=32), fused LeakyReLU-square MLP, GPTQ int6 all weights + brotli-11 compression, phased TTT eval (1 phase, prefix=2000 docs). Key innovation: widening the AttnOutGate input from 12 to 32 dimensions improves pre-quant BPB by -0.002. 
3-seed mean: 1.08037 BPB.", + "date": "2026-04-30", + "track": "10min_16mb", + "val_loss": 2.78796, + "val_bpb": 1.08037, + "val_loss_std": 0.00360, + "val_bpb_std": 0.00139, + "seeds": [0, 42, 1234], + "seed_results": { + "0": { + "val_loss": 2.79483, + "val_bpb": 1.08196, + "artifact_bytes": 15890131, + "eval_time_s": 391.9 + }, + "42": { + "val_loss": 2.78881, + "val_bpb": 1.07983, + "artifact_bytes": 15887137, + "eval_time_s": 428.7 + }, + "1234": { + "val_loss": 2.78749, + "val_bpb": 1.07932, + "artifact_bytes": 15888516, + "eval_time_s": 333.3 + } + }, + "comparison_baseline_bpb": 1.0810, + "delta_vs_leaderboard_bpb": -0.00063, + "artifact_bytes_mean": 15888595, + "artifact_bytes_max": 15890131, + "hardware": "8xH100 80GB SXM", + "pytorch_version": "2.9.1+cu128", + "cuda_version": "12.8", + "technique_summary": "11L XSA + WiderGate32 (AttnOutGate width=32) + SmearGate (width=32) + PolarNS Muon + CaseOps SP8192 + GPTQ int6 all weights + brotli-11 + phased TTT LoRA rank-96" +} diff --git a/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model b/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model new file mode 100644 index 0000000000..15ed0d3efc Binary files /dev/null and b/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/tokenizers/fineweb_8192_bpe_lossless_caps_caseops_v1_reserved.model differ diff --git a/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/train_gpt.py b/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/train_gpt.py new file mode 100644 index 0000000000..89adbbdbcb --- /dev/null +++ b/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/train_gpt.py @@ -0,0 +1,3148 @@ +import base64, collections, copy, fcntl, glob, io, json, lzma, math, os +from pathlib import Path +import random, re, subprocess, sys, time, uuid, numpy as np, sentencepiece as spm, torch, torch.distributed as dist, torch.nn.functional as F +from torch import nn +from flash_attn_interface import ( + flash_attn_func as flash_attn_3_func, + flash_attn_varlen_func, +) +from concurrent.futures import ThreadPoolExecutor +import triton +import triton.language as tl +from triton.tools.tensor_descriptor import TensorDescriptor + + +class Hyperparameters: + data_dir = os.environ.get("DATA_DIR", "./data/") + seed = int(os.environ.get("SEED", 1337)) + run_id = os.environ.get("RUN_ID", str(uuid.uuid4())) + iterations = int(os.environ.get("ITERATIONS", 20000)) + warmdown_frac = float(os.environ.get("WARMDOWN_FRAC", 0.75)) + warmup_steps = int(os.environ.get("WARMUP_STEPS", 20)) + train_batch_tokens = int(os.environ.get("TRAIN_BATCH_TOKENS", 786432)) + train_seq_len = int(os.environ.get("TRAIN_SEQ_LEN", 2048)) + train_log_every = int(os.environ.get("TRAIN_LOG_EVERY", 500)) + max_wallclock_seconds = float(os.environ.get("MAX_WALLCLOCK_SECONDS", 6e2)) + val_batch_tokens = int(os.environ.get("VAL_BATCH_TOKENS", 524288)) + eval_seq_len = int(os.environ.get("EVAL_SEQ_LEN", 2048)) + val_loss_every = int(os.environ.get("VAL_LOSS_EVERY", 4000)) + sliding_window_enabled = bool(int(os.environ.get("SLIDING_WINDOW_ENABLED", "0"))) + vocab_size = int(os.environ.get("VOCAB_SIZE", 8192)) + num_layers = int(os.environ.get("NUM_LAYERS", 11)) + xsa_last_n = int(os.environ.get("XSA_LAST_N", 11)) + model_dim = 
int(os.environ.get("MODEL_DIM", 512)) + num_kv_heads = int(os.environ.get("NUM_KV_HEADS", 4)) + num_heads = int(os.environ.get("NUM_HEADS", 8)) + mlp_mult = float(os.environ.get("MLP_MULT", 4.0)) + skip_gates_enabled = bool(int(os.environ.get("SKIP_GATES_ENABLED", "1"))) + tie_embeddings = bool(int(os.environ.get("TIE_EMBEDDINGS", "1"))) + logit_softcap = float(os.environ.get("LOGIT_SOFTCAP", 3e1)) + rope_base = float(os.environ.get("ROPE_BASE", 1e4)) + rope_dims = int(os.environ.get("ROPE_DIMS", 16)) + rope_train_seq_len = int(os.environ.get("ROPE_TRAIN_SEQ_LEN", 2048)) + rope_yarn = bool(int(os.environ.get("ROPE_YARN", "0"))) + ln_scale = bool(int(os.environ.get("LN_SCALE", "1"))) + qk_gain_init = float(os.environ.get("QK_GAIN_INIT", 5.0)) + num_loops = int(os.environ.get("NUM_LOOPS", 2)) + loop_start = int(os.environ.get("LOOP_START", 3)) + loop_end = int(os.environ.get("LOOP_END", 5)) + enable_looping_at = float(os.environ.get("ENABLE_LOOPING_AT", 0.35)) + parallel_start_layer = int(os.environ.get("PARALLEL_START_LAYER", 8)) + parallel_final_lane = os.environ.get("PARALLEL_FINAL_LANE", "mean") + min_lr = float(os.environ.get("MIN_LR", 0.0)) + embed_lr = float(os.environ.get("EMBED_LR", 0.6)) + tied_embed_lr = float(os.environ.get("TIED_EMBED_LR", 0.03)) + tied_embed_init_std = float(os.environ.get("TIED_EMBED_INIT_STD", 0.005)) + matrix_lr = float(os.environ.get("MATRIX_LR", 0.026)) + scalar_lr = float(os.environ.get("SCALAR_LR", 0.02)) + muon_momentum = float(os.environ.get("MUON_MOMENTUM", 0.97)) + muon_backend_steps = int(os.environ.get("MUON_BACKEND_STEPS", 5)) + muon_momentum_warmup_start = float( + os.environ.get("MUON_MOMENTUM_WARMUP_START", 0.92) + ) + muon_momentum_warmup_steps = int(os.environ.get("MUON_MOMENTUM_WARMUP_STEPS", 1500)) + muon_row_normalize = bool(int(os.environ.get("MUON_ROW_NORMALIZE", "1"))) + beta1 = float(os.environ.get("BETA1", 0.9)) + beta2 = float(os.environ.get("BETA2", 0.95)) + adam_eps = float(os.environ.get("ADAM_EPS", 1e-08)) + grad_clip_norm = float(os.environ.get("GRAD_CLIP_NORM", 0.3)) + eval_stride = int(os.environ.get("EVAL_STRIDE", 64)) + adam_wd = float(os.environ.get("ADAM_WD", 0.02)) + muon_wd = float(os.environ.get("MUON_WD", 0.095)) + embed_wd = float(os.environ.get("EMBED_WD", 0.085)) + ema_decay = float(os.environ.get("EMA_DECAY", 0.9965)) + ttt_enabled = bool(int(os.environ.get("TTT_ENABLED", "1"))) + ttt_lora_rank = int(os.environ.get("TTT_LORA_RANK", 96)) + ttt_lora_lr = float(os.environ.get("TTT_LORA_LR", 0.0001)) + ttt_chunk_size = int(os.environ.get("TTT_CHUNK_SIZE", 48)) + ttt_eval_seq_len = int(os.environ.get("TTT_EVAL_SEQ_LEN", 2048)) + ttt_batch_size = int(os.environ.get("TTT_BATCH_SIZE", 64)) + ttt_grad_steps = int(os.environ.get("TTT_GRAD_STEPS", 1)) + ttt_weight_decay = float(os.environ.get("TTT_WEIGHT_DECAY", 0.5)) + ttt_beta1 = float(os.environ.get("TTT_BETA1", 0)) + ttt_beta2 = float(os.environ.get("TTT_BETA2", 0.999)) + ttt_k_lora = bool(int(os.environ.get("TTT_K_LORA", "1"))) + ttt_mlp_lora = bool(int(os.environ.get("TTT_MLP_LORA", "1"))) + ttt_o_lora = bool(int(os.environ.get("TTT_O_LORA", "1"))) + ttt_optimizer = os.environ.get("TTT_OPTIMIZER", "adam") + ttt_eval_batches = os.environ.get("TTT_EVAL_BATCHES", "") + val_doc_fraction = float(os.environ.get("VAL_DOC_FRACTION", 1.0)) + compressor = os.environ.get("COMPRESSOR", "brotli") + gptq_calibration_batches = int(os.environ.get("GPTQ_CALIBRATION_BATCHES", 16)) + gptq_reserve_seconds = float(os.environ.get("GPTQ_RESERVE_SECONDS", 4.0)) + phased_ttt_enabled = 
bool(int(os.environ.get("PHASED_TTT_ENABLED", "0"))) + phased_ttt_prefix_docs = int(os.environ.get("PHASED_TTT_PREFIX_DOCS", 2000)) + phased_ttt_num_phases = int(os.environ.get("PHASED_TTT_NUM_PHASES", 1)) + global_ttt_lr = float(os.environ.get("GLOBAL_TTT_LR", 0.001)) + global_ttt_momentum = float(os.environ.get("GLOBAL_TTT_MOMENTUM", 0.9)) + global_ttt_epochs = int(os.environ.get("GLOBAL_TTT_EPOCHS", 1)) + global_ttt_chunk_tokens = int(os.environ.get("GLOBAL_TTT_CHUNK_TOKENS", 32768)) + global_ttt_batch_seqs = int(os.environ.get("GLOBAL_TTT_BATCH_SEQS", 32)) + global_ttt_warmup_start_lr = float(os.environ.get("GLOBAL_TTT_WARMUP_START_LR", 0.0)) + global_ttt_warmup_chunks = int(os.environ.get("GLOBAL_TTT_WARMUP_CHUNKS", 0)) + global_ttt_grad_clip = float(os.environ.get("GLOBAL_TTT_GRAD_CLIP", 1.0)) + global_ttt_respect_doc_boundaries = bool(int(os.environ.get("GLOBAL_TTT_RESPECT_DOC_BOUNDARIES", "1"))) + matrix_bits = int(os.environ.get("MATRIX_BITS", 6)) + embed_bits = int(os.environ.get("EMBED_BITS", 8)) + matrix_clip_sigmas = float(os.environ.get("MATRIX_CLIP_SIGMAS", 12.85)) + embed_clip_sigmas = float(os.environ.get("EMBED_CLIP_SIGMAS", 2e1)) + mlp_clip_sigmas = float(os.environ.get("MLP_CLIP_SIGMAS", 10.0)) + attn_clip_sigmas = float(os.environ.get("ATTN_CLIP_SIGMAS", 13.0)) + # LQER asymmetric rank-k correction on top-K quant-error tensors (PR #1797). + lqer_enabled = bool(int(os.environ.get("LQER_ENABLED", "1"))) + lqer_rank = int(os.environ.get("LQER_RANK", 4)) + lqer_top_k = int(os.environ.get("LQER_TOP_K", 3)) + lqer_factor_bits = int(os.environ.get("LQER_FACTOR_BITS", 4)) + lqer_asym_enabled = bool(int(os.environ.get("LQER_ASYM_ENABLED", "1"))) + lqer_asym_group = int(os.environ.get("LQER_ASYM_GROUP", "64")) + smear_gate = bool(int(os.environ.get("SMEAR_GATE", "0"))) + attn_out_gate = bool(int(os.environ.get("ATTN_OUT_GATE", "0"))) + gate_width = int(os.environ.get("GATE_WIDTH", 12)) + leaky_slope = float(os.environ.get("LEAKY_SLOPE", 0.5)) + # --- TECHNIQUE 1: sin(x).square() activation (FANformer) --- + sin_squared_activation = bool(int(os.environ.get("SIN_SQUARED_ACTIVATION", "0"))) + # --- TECHNIQUE 2: Rho-1 selective loss masking --- + rho1_enabled = bool(int(os.environ.get("RHO1_ENABLED", "0"))) + rho1_top_k = float(os.environ.get("RHO1_TOP_K", 0.7)) + distributed = "RANK" in os.environ and "WORLD_SIZE" in os.environ + rank = int(os.environ.get("RANK", "0")) + world_size = int(os.environ.get("WORLD_SIZE", "1")) + local_rank = int(os.environ.get("LOCAL_RANK", "0")) + is_main_process = rank == 0 + grad_accum_steps = 8 // world_size + datasets_dir = os.path.join(data_dir, "datasets", f"fineweb10B_sp{vocab_size}") + train_files = os.path.join(datasets_dir, "fineweb_train_*.bin") + val_files = os.path.join(datasets_dir, "fineweb_val_*.bin") + tokenizer_path = os.path.join( + data_dir, "tokenizers", f"fineweb_{vocab_size}_bpe.model" + ) + artifact_dir = os.environ.get("ARTIFACT_DIR", "") + logfile = ( + os.path.join(artifact_dir, f"{run_id}.txt") + if artifact_dir + else f"logs/{run_id}.txt" + ) + model_path = ( + os.path.join(artifact_dir, "final_model.pt") + if artifact_dir + else "final_model.pt" + ) + quantized_model_path = ( + os.path.join(artifact_dir, "final_model.int6.ptz") + if artifact_dir + else "final_model.int6.ptz" + ) + + +_logger_hparams = None + + +def set_logging_hparams(h): + global _logger_hparams + _logger_hparams = h + + +def log(msg, console=True): + if _logger_hparams is None: + print(msg) + return + if _logger_hparams.is_main_process: + if console: + 
print(msg) + if _logger_hparams.logfile is not None: + with open(_logger_hparams.logfile, "a", encoding="utf-8") as f: + print(msg, file=f) + + +class ValidationData: + def __init__(self, h, device): + self.sp = spm.SentencePieceProcessor(model_file=h.tokenizer_path) + if int(self.sp.vocab_size()) != h.vocab_size: + raise ValueError( + f"VOCAB_SIZE={h.vocab_size} does not match tokenizer vocab_size={int(self.sp.vocab_size())}" + ) + self.val_tokens = load_validation_tokens(h.val_files, h.eval_seq_len) + ( + self.base_bytes_lut, + self.has_leading_space_lut, + self.is_boundary_token_lut, + ) = build_sentencepiece_luts(self.sp, h.vocab_size, device) + + +def build_sentencepiece_luts(sp, vocab_size, device): + sp_vocab_size = int(sp.vocab_size()) + assert ( + sp.piece_to_id("▁") != sp.unk_id() + ), "Tokenizer must have '▁' (space) as its own token for correct BPB byte counting" + table_size = max(sp_vocab_size, vocab_size) + base_bytes_np = np.zeros((table_size,), dtype=np.int16) + has_leading_space_np = np.zeros((table_size,), dtype=np.bool_) + is_boundary_token_np = np.ones((table_size,), dtype=np.bool_) + for token_id in range(sp_vocab_size): + if sp.is_control(token_id) or sp.is_unknown(token_id) or sp.is_unused(token_id): + continue + is_boundary_token_np[token_id] = False + if sp.is_byte(token_id): + base_bytes_np[token_id] = 1 + continue + piece = sp.id_to_piece(token_id) + if piece.startswith("▁"): + has_leading_space_np[token_id] = True + piece = piece[1:] + base_bytes_np[token_id] = len(piece.encode("utf-8")) + return ( + torch.tensor(base_bytes_np, dtype=torch.int16, device=device), + torch.tensor(has_leading_space_np, dtype=torch.bool, device=device), + torch.tensor(is_boundary_token_np, dtype=torch.bool, device=device), + ) + + +def load_validation_tokens(pattern, seq_len): + files = [Path(p) for p in sorted(glob.glob(pattern))] + if not files: + raise FileNotFoundError(f"No files found for pattern: {pattern}") + tokens = torch.cat([load_data_shard(file) for file in files]).contiguous() + usable = (tokens.numel() - 1) // seq_len * seq_len + if usable <= 0: + raise ValueError(f"Validation split is too short for TRAIN_SEQ_LEN={seq_len}") + return tokens[: usable + 1] + + +def load_data_shard(file): + header_bytes = 256 * np.dtype(" 0: + pos = start + while pos < end: + seg_starts.append(pos) + pos += max_doc_len + else: + seg_starts.append(start) + boundaries = seg_starts + [total_len] + padded_len = get_next_multiple_of_n(len(boundaries), bucket_size) + cu = torch.full((padded_len,), total_len, dtype=torch.int32, device=device) + cu[: len(boundaries)] = torch.tensor(boundaries, dtype=torch.int32, device=device) + seg_ends = seg_starts[1:] + [total_len] + max_seqlen = max(end - start for start, end in zip(seg_starts, seg_ends)) + return cu, max_seqlen + +class DocumentPackingLoader: + _shard_pool = ThreadPoolExecutor(1) + + def __init__(self, h, device, cu_bucket_size=64): + self.rank = h.rank + self.world_size = h.world_size + self.device = device + self.cu_bucket_size = cu_bucket_size + self.max_seq_len = h.train_seq_len + all_files = [Path(p) for p in sorted(glob.glob(h.train_files))] + if not all_files: + raise FileNotFoundError(f"No files found for pattern: {h.train_files}") + self.files = all_files + self.file_iter = iter(self.files) + self._init_shard(load_data_shard(next(self.file_iter))) + self._next_shard = self._submit_next_shard() + self._batch_pool = ThreadPoolExecutor(1) + self._next_batch = None + + def _init_shard(self, tokens): + global BOS_ID + self.tokens = 
tokens + self.shard_size = tokens.numel() + if BOS_ID is None: + BOS_ID = 1 + self.bos_idx = ( + (tokens == BOS_ID).nonzero(as_tuple=True)[0].to(torch.int64).cpu().numpy() + ) + if self.bos_idx.size == 0: + self.bos_idx = np.array([0], dtype=np.int64) + self.cursor = int(self.bos_idx[0]) + + def _submit_next_shard(self): + try: + path = next(self.file_iter) + return self._shard_pool.submit(load_data_shard, path) + except StopIteration: + return None + + def _advance_shard(self): + if self._next_shard is None: + self.file_iter = iter(self.files) + self._next_shard = self._shard_pool.submit( + load_data_shard, next(self.file_iter) + ) + self._init_shard(self._next_shard.result()) + self._next_shard = self._submit_next_shard() + + def _local_doc_starts(self, local_start, total_len): + lo = np.searchsorted(self.bos_idx, local_start, side="left") + hi = np.searchsorted(self.bos_idx, local_start + total_len, side="left") + return (self.bos_idx[lo:hi] - local_start).tolist() + + def _prepare_batch(self, num_tokens_local, max_seq_len): + per_rank_span = num_tokens_local + 1 + global_span = per_rank_span * self.world_size + while self.cursor + global_span > self.shard_size: + self._advance_shard() + local_start = self.cursor + self.rank * per_rank_span + buf = self.tokens[local_start : local_start + per_rank_span] + inputs = buf[:-1].to(dtype=torch.int64).pin_memory() + targets = buf[1:].to(dtype=torch.int64).pin_memory() + starts = self._local_doc_starts(local_start, inputs.numel()) + cu_seqlens, max_seqlen = _build_cu_seqlens( + starts, inputs.numel(), inputs.device, max_seq_len, self.cu_bucket_size + ) + cu_seqlens = cu_seqlens.pin_memory() + self.cursor += global_span + return inputs, targets, cu_seqlens, max_seqlen + + def next_batch(self, global_tokens, grad_accum_steps): + num_tokens_local = global_tokens // (self.world_size * grad_accum_steps) + if self._next_batch is not None: + inputs, targets, cu_seqlens, max_seqlen = self._next_batch.result() + else: + inputs, targets, cu_seqlens, max_seqlen = self._prepare_batch( + num_tokens_local, self.max_seq_len + ) + self._next_batch = self._batch_pool.submit( + self._prepare_batch, num_tokens_local, self.max_seq_len + ) + return ( + inputs[None].to(self.device, non_blocking=True), + targets[None].to(self.device, non_blocking=True), + cu_seqlens.to(self.device, non_blocking=True), + max_seqlen, + ) + + +class ShuffledSequenceLoader: + def __init__(self, h, device): + self.world_size = h.world_size + self.seq_len = h.train_seq_len + self.device = device + all_files = [Path(p) for p in sorted(glob.glob(h.train_files))] + if not all_files: + raise FileNotFoundError(f"No files found for pattern: {h.train_files}") + self.files = all_files[h.rank :: h.world_size] + self.rng = np.random.Generator(np.random.PCG64(h.rank)) + self.num_tokens = [_read_num_tokens(f) for f in self.files] + self.start_inds = [[] for _ in self.files] + for si in range(len(self.files)): + self._reset_shard(si) + + def _reset_shard(self, si): + max_phase = min( + self.seq_len - 1, max(0, self.num_tokens[si] - self.seq_len - 1) + ) + phase = int(self.rng.integers(max_phase + 1)) if max_phase > 0 else 0 + num_sequences = (self.num_tokens[si] - 1 - phase) // self.seq_len + sequence_order = self.rng.permutation(num_sequences) + self.start_inds[si] = (phase + sequence_order * self.seq_len).tolist() + + def next_batch(self, global_tokens, grad_accum_steps): + device_tokens = global_tokens // (self.world_size * grad_accum_steps) + device_batch_size = device_tokens // self.seq_len + 
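+        # Sample each sequence's shard with probability proportional to how many
+        # unseen sequences remain in it; reshuffle every shard once all are exhausted.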
remaining = np.array([len(s) for s in self.start_inds], dtype=np.float64) + x = torch.empty((device_batch_size, self.seq_len), dtype=torch.int64) + y = torch.empty((device_batch_size, self.seq_len), dtype=torch.int64) + for bi in range(device_batch_size): + total = remaining.sum() + if total <= 0: + for si in range(len(self.files)): + self._reset_shard(si) + remaining = np.array( + [len(s) for s in self.start_inds], dtype=np.float64 + ) + total = remaining.sum() + probs = remaining / total + si = int(self.rng.choice(len(self.files), p=probs)) + start_ind = self.start_inds[si].pop() + remaining[si] -= 1 + mm = _get_shard_memmap(self.files[si]) + window = torch.as_tensor( + np.array(mm[start_ind : start_ind + self.seq_len + 1], dtype=np.int64) + ) + x[bi] = window[:-1] + y[bi] = window[1:] + return x.to(self.device, non_blocking=True), y.to( + self.device, non_blocking=True + ) + + +class RMSNorm(nn.Module): + def __init__(self, eps=None): + super().__init__() + self.eps = eps + + def forward(self, x): + return F.rms_norm(x, (x.size(-1),), eps=self.eps) + + +class CastedLinear(nn.Linear): + def forward(self, x): + w = self.weight.to(x.dtype) + bias = self.bias.to(x.dtype) if self.bias is not None else None + return F.linear(x, w, bias) + + +@triton.jit +def linear_leaky_relu_square_kernel( + a_desc, + b_desc, + c_desc, + aux_desc, + M, + N, + K, + BLOCK_SIZE_M: tl.constexpr, + BLOCK_SIZE_N: tl.constexpr, + BLOCK_SIZE_K: tl.constexpr, + NUM_SMS: tl.constexpr, + FORWARD: tl.constexpr, +): + dtype = tl.bfloat16 + start_pid = tl.program_id(axis=0) + num_pid_m = tl.cdiv(M, BLOCK_SIZE_M) + num_pid_n = tl.cdiv(N, BLOCK_SIZE_N) + k_tiles = tl.cdiv(K, BLOCK_SIZE_K) + num_tiles = num_pid_m * num_pid_n + tile_id_c = start_pid - NUM_SMS + for tile_id in tl.range(start_pid, num_tiles, NUM_SMS, flatten=True): + pid_m = tile_id // num_pid_n + pid_n = tile_id % num_pid_n + offs_am = pid_m * BLOCK_SIZE_M + offs_bn = pid_n * BLOCK_SIZE_N + accumulator = tl.zeros((BLOCK_SIZE_M, BLOCK_SIZE_N), dtype=tl.float32) + for ki in range(k_tiles): + offs_k = ki * BLOCK_SIZE_K + a = a_desc.load([offs_am, offs_k]) + b = b_desc.load([offs_bn, offs_k]) + accumulator = tl.dot(a, b.T, accumulator) + tile_id_c += NUM_SMS + offs_am_c = offs_am + offs_bn_c = offs_bn + acc = tl.reshape(accumulator, (BLOCK_SIZE_M, 2, BLOCK_SIZE_N // 2)) + acc = tl.permute(acc, (0, 2, 1)) + acc0, acc1 = tl.split(acc) + c0 = acc0.to(dtype) + c1 = acc1.to(dtype) + if not FORWARD: + pre0 = aux_desc.load([offs_am_c, offs_bn_c]) + pre1 = aux_desc.load([offs_am_c, offs_bn_c + BLOCK_SIZE_N // 2]) + c0 = c0 * tl.where(pre0 > 0, 2.0 * pre0, 0.5 * pre0) + c1 = c1 * tl.where(pre1 > 0, 2.0 * pre1, 0.5 * pre1) + c_desc.store([offs_am_c, offs_bn_c], c0) + c_desc.store([offs_am_c, offs_bn_c + BLOCK_SIZE_N // 2], c1) + if FORWARD: + aux0 = tl.where(c0 > 0, c0, 0.5 * c0) + aux1 = tl.where(c1 > 0, c1, 0.5 * c1) + aux_desc.store([offs_am_c, offs_bn_c], aux0 * aux0) + aux_desc.store([offs_am_c, offs_bn_c + BLOCK_SIZE_N // 2], aux1 * aux1) + + +def linear_leaky_relu_square(a, b, aux=None): + M, K = a.shape + N, K2 = b.shape + assert K == K2 + c = torch.empty((M, N), device=a.device, dtype=a.dtype) + forward = aux is None + if aux is None: + aux = torch.empty((M, N), device=a.device, dtype=a.dtype) + num_sms = torch.cuda.get_device_properties(a.device).multi_processor_count + BLOCK_SIZE_M, BLOCK_SIZE_N, BLOCK_SIZE_K = 128, 256, 64 + num_stages = 4 if forward else 3 + a_desc = TensorDescriptor.from_tensor(a, [BLOCK_SIZE_M, BLOCK_SIZE_K]) + b_desc = 
TensorDescriptor.from_tensor(b, [BLOCK_SIZE_N, BLOCK_SIZE_K]) + c_desc = TensorDescriptor.from_tensor(c, [BLOCK_SIZE_M, BLOCK_SIZE_N // 2]) + aux_desc = TensorDescriptor.from_tensor(aux, [BLOCK_SIZE_M, BLOCK_SIZE_N // 2]) + grid = lambda _meta: ( + min(num_sms, triton.cdiv(M, BLOCK_SIZE_M) * triton.cdiv(N, BLOCK_SIZE_N)), + ) + linear_leaky_relu_square_kernel[grid]( + a_desc, + b_desc, + c_desc, + aux_desc, + M, + N, + K, + BLOCK_SIZE_M=BLOCK_SIZE_M, + BLOCK_SIZE_N=BLOCK_SIZE_N, + BLOCK_SIZE_K=BLOCK_SIZE_K, + NUM_SMS=num_sms, + FORWARD=forward, + num_stages=num_stages, + num_warps=8, + ) + if forward: + return c, aux + return c + + +class FusedLinearLeakyReLUSquareFunction(torch.autograd.Function): + @staticmethod + def forward(ctx, x, w1, w2): + x_flat = x.reshape(-1, x.shape[-1]) + pre, post = linear_leaky_relu_square(x_flat, w1) + out = F.linear(post, w2) + ctx.save_for_backward(x, w1, w2, pre, post) + return out.view(*x.shape[:-1], out.shape[-1]) + + @staticmethod + def backward(ctx, grad_output): + x, w1, w2, pre, post = ctx.saved_tensors + x_flat = x.reshape(-1, x.shape[-1]) + grad_output_flat = grad_output.reshape(-1, grad_output.shape[-1]) + dw2 = grad_output_flat.T @ post + dpre = linear_leaky_relu_square(grad_output_flat, w2.T.contiguous(), aux=pre) + dw1 = dpre.T @ x_flat + dx = dpre @ w1 + return dx.view_as(x), dw1, dw2 + + +FusedLeakyReLUSquareMLP = FusedLinearLeakyReLUSquareFunction.apply + + +class Rotary(nn.Module): + def __init__(self, dim, base=1e4, train_seq_len=1024, rope_dims=0, yarn=True): + super().__init__() + self.dim = dim + self.base = base + self.train_seq_len = train_seq_len + self.yarn = yarn + self.rope_dims = rope_dims if rope_dims > 0 else dim + inv_freq = 1.0 / base ** ( + torch.arange(0, self.rope_dims, 2, dtype=torch.float32) / self.rope_dims + ) + self.register_buffer("inv_freq", inv_freq, persistent=False) + self._seq_len_cached = 0 + self._cos_cached = None + self._sin_cached = None + + def forward(self, seq_len, device, dtype): + if ( + self._cos_cached is None + or self._sin_cached is None + or self._seq_len_cached < seq_len + or self._cos_cached.device != device + ): + rd = self.rope_dims + if self.yarn and seq_len > self.train_seq_len: + scale = seq_len / self.train_seq_len + new_base = self.base * scale ** (rd / (rd - 2)) + inv_freq = 1.0 / new_base ** ( + torch.arange(0, rd, 2, dtype=torch.float32, device=device) / rd + ) + else: + inv_freq = self.inv_freq.float().to(device) + t = torch.arange(seq_len, device=device, dtype=torch.float32) + freqs = torch.outer(t, inv_freq) + self._cos_cached = freqs.cos()[None, :, None, :] + self._sin_cached = freqs.sin()[None, :, None, :] + self._seq_len_cached = seq_len + return self._cos_cached[:, :seq_len].to(dtype=dtype), self._sin_cached[:, :seq_len].to(dtype=dtype) + + +def apply_rotary_emb(x, cos, sin, rope_dims=0): + if rope_dims > 0 and rope_dims < x.size(-1): + x_rope, x_pass = x[..., :rope_dims], x[..., rope_dims:] + half = rope_dims // 2 + x1, x2 = x_rope[..., :half], x_rope[..., half:] + x_rope = torch.cat((x1 * cos + x2 * sin, x1 * -sin + x2 * cos), dim=-1) + return torch.cat((x_rope, x_pass), dim=-1) + half = x.size(-1) // 2 + x1, x2 = x[..., :half], x[..., half:] + return torch.cat((x1 * cos + x2 * sin, x1 * -sin + x2 * cos), dim=-1) + + +def _apply_attn_out_gate_inline(y, x_orig, gate_w): + """Inline-safe version: .contiguous() barriers prevent over-aggressive kernel fusion.""" + gate_in = x_orig[:, :, :gate_w.shape[-1]].contiguous() + gate = (2.0 * torch.sigmoid(F.linear(gate_in, 
gate_w.to(gate_in.dtype)))).contiguous() + return y * gate.unsqueeze(-1) + +def _apply_smear_gate_inline(x, smear_w, smear_lambda): + """Inline-safe version: .contiguous() barriers prevent over-aggressive kernel fusion.""" + prev_x = torch.zeros_like(x) + prev_x[:, 1:] = x[:, :-1] + gate_in = x[:, :, :smear_w.shape[0]].contiguous() + gate = torch.sigmoid(F.linear(gate_in, smear_w.to(x.dtype).unsqueeze(0))).contiguous() + return x + smear_lambda.to(x.dtype) * gate * prev_x + +class CausalSelfAttention(nn.Module): + def __init__( + self, dim, num_heads, num_kv_heads, rope_base, qk_gain_init, train_seq_len, yarn=True + ): + super().__init__() + if dim % num_heads != 0: + raise ValueError("model_dim must be divisible by num_heads") + if num_heads % num_kv_heads != 0: + raise ValueError("num_heads must be divisible by num_kv_heads") + self.num_heads = num_heads + self.num_kv_heads = num_kv_heads + self.head_dim = dim // num_heads + if self.head_dim % 2 != 0: + raise ValueError("head_dim must be even for RoPE") + self.q_gain = nn.Parameter( + torch.full((num_heads,), qk_gain_init, dtype=torch.float32) + ) + self.rope_dims = 0 + self.rotary = Rotary(self.head_dim, base=rope_base, train_seq_len=train_seq_len, yarn=yarn) + self.use_xsa = False + self.attn_out_gate_w = None + + def _xsa_efficient(self, y, v): + B, T, H, D = y.shape + Hkv = v.size(-2) + group = H // Hkv + y_g = y.reshape(B, T, Hkv, group, D) + vn = F.normalize(v, dim=-1).unsqueeze(-2) + proj = (y_g * vn).sum(dim=-1, keepdim=True) * vn + return (y_g - proj).reshape(B, T, H, D) + + def forward(self, x, q_w, k_w, v_w, out_w, cu_seqlens=None, max_seqlen=0, x_orig=None): + bsz, seqlen, dim = x.shape + q = F.linear(x, q_w.to(x.dtype)).reshape(bsz, seqlen, self.num_heads, self.head_dim) + k = F.linear(x, k_w.to(x.dtype)).reshape(bsz, seqlen, self.num_kv_heads, self.head_dim) + v = F.linear(x, v_w.to(x.dtype)).reshape(bsz, seqlen, self.num_kv_heads, self.head_dim) + q = F.rms_norm(q, (q.size(-1),)) + k = F.rms_norm(k, (k.size(-1),)) + cos, sin = self.rotary(seqlen, x.device, q.dtype) + q = apply_rotary_emb(q, cos, sin, self.rope_dims) + k = apply_rotary_emb(k, cos, sin, self.rope_dims) + q = q * self.q_gain.to(dtype=q.dtype)[None, None, :, None] + if cu_seqlens is not None: + y = flash_attn_varlen_func( + q[0], + k[0], + v[0], + cu_seqlens_q=cu_seqlens, + cu_seqlens_k=cu_seqlens, + max_seqlen_q=max_seqlen, + max_seqlen_k=max_seqlen, + causal=True, + window_size=(-1, -1), + )[None] + else: + y = flash_attn_3_func(q, k, v, causal=True) + if self.use_xsa: + y = self._xsa_efficient(y, v) + if self.attn_out_gate_w is not None and x_orig is not None: + y = _apply_attn_out_gate_inline(y, x_orig, self.attn_out_gate_w) + y = y.reshape(bsz, seqlen, dim) + self._last_proj_input = y.detach() if getattr(self, "_calib", False) else None + return F.linear(y, out_w.to(x.dtype)) + + +class MLP(nn.Module): + def __init__(self, dim, mlp_mult): + super().__init__() + self.leaky_slope = Hyperparameters.leaky_slope + self.use_fused = True and (self.leaky_slope == 0.5) # fused kernel only supports slope=0.5 + self.sin_squared = Hyperparameters.sin_squared_activation + + def forward(self, x, up_w, down_w): + if self.sin_squared: + # FANformer: sin(x).square() activation — no fused Triton path + hidden = torch.sin(F.linear(x, up_w.to(x.dtype))).square() + self._last_down_input = hidden.detach() if getattr(self, "_calib", False) else None + return F.linear(hidden, down_w.to(x.dtype)) + if self.training and self.use_fused: + return FusedLeakyReLUSquareMLP(x, 
up_w.to(x.dtype), down_w.to(x.dtype)) + hidden = F.leaky_relu(F.linear(x, up_w.to(x.dtype)), negative_slope=self.leaky_slope).square() + self._last_down_input = hidden.detach() if getattr(self, "_calib", False) else None + return F.linear(hidden, down_w.to(x.dtype)) + + +class Block(nn.Module): + def __init__( + self, + dim, + num_heads, + num_kv_heads, + mlp_mult, + rope_base, + qk_gain_init, + train_seq_len, + layer_idx=0, + ln_scale=False, + yarn=True, + ): + super().__init__() + self.attn_norm = RMSNorm() + self.mlp_norm = RMSNorm() + self.attn = CausalSelfAttention( + dim, num_heads, num_kv_heads, rope_base, qk_gain_init, train_seq_len, yarn=yarn + ) + self.mlp = MLP(dim, mlp_mult) + self.attn_scale = nn.Parameter(torch.ones(dim, dtype=torch.float32)) + self.mlp_scale = nn.Parameter(torch.ones(dim, dtype=torch.float32)) + self.resid_mix = nn.Parameter( + torch.stack((torch.ones(dim), torch.zeros(dim))).float() + ) + self.ln_scale_factor = 1.0 / math.sqrt(layer_idx + 1) if ln_scale else 1.0 + + def forward(self, x, x0, q_w, k_w, v_w, out_w, up_w, down_w, cu_seqlens=None, max_seqlen=0): + mix = self.resid_mix.to(dtype=x.dtype) + x_in = mix[0][None, None, :] * x + mix[1][None, None, :] * x0 + attn_out = self.attn( + self.attn_norm(x_in) * self.ln_scale_factor, + q_w, k_w, v_w, out_w, + cu_seqlens=cu_seqlens, + max_seqlen=max_seqlen, + x_orig=x_in, + ) + x_out = x_in + self.attn_scale.to(dtype=x_in.dtype)[None, None, :] * attn_out + x_out = x_out + self.mlp_scale.to(dtype=x_out.dtype)[ + None, None, : + ] * self.mlp(self.mlp_norm(x_out) * self.ln_scale_factor, up_w, down_w) + return x_out + +class GPT(nn.Module): + def __init__(self, h): + super().__init__() + if h.logit_softcap <= 0.0: + raise ValueError(f"logit_softcap must be positive, got {h.logit_softcap}") + self.tie_embeddings = h.tie_embeddings + self.tied_embed_init_std = h.tied_embed_init_std + self.logit_softcap = h.logit_softcap + self.tok_emb = nn.Embedding(h.vocab_size, h.model_dim) + self.num_layers = h.num_layers + head_dim = h.model_dim // h.num_heads + kv_dim = h.num_kv_heads * head_dim + hidden_dim = int(h.mlp_mult * h.model_dim) + self.qo_bank = nn.Parameter(torch.empty(2 * h.num_layers, h.model_dim, h.model_dim)) + self.kv_bank = nn.Parameter(torch.empty(2 * h.num_layers, kv_dim, h.model_dim)) + self.mlp_up_bank = nn.Parameter(torch.empty(h.num_layers, hidden_dim, h.model_dim)) + self.mlp_down_bank = nn.Parameter(torch.empty(h.num_layers, h.model_dim, hidden_dim)) + self.num_encoder_layers = h.num_layers // 2 + self.num_decoder_layers = h.num_layers - self.num_encoder_layers + self.blocks = nn.ModuleList( + [ + Block( + h.model_dim, + h.num_heads, + h.num_kv_heads, + h.mlp_mult, + h.rope_base, + h.qk_gain_init, + h.train_seq_len, + layer_idx=i, + ln_scale=h.ln_scale, + yarn=h.rope_yarn, + ) + for i in range(h.num_layers) + ] + ) + if h.rope_dims > 0: + head_dim = h.model_dim // h.num_heads + for block in self.blocks: + block.attn.rope_dims = h.rope_dims + block.attn.rotary = Rotary( + head_dim, + base=h.rope_base, + train_seq_len=h.train_seq_len, + rope_dims=h.rope_dims, + yarn=h.rope_yarn, + ) + self.final_norm = RMSNorm() + self.lm_head = ( + None + if h.tie_embeddings + else CastedLinear(h.model_dim, h.vocab_size, bias=False) + ) + if self.lm_head is not None: + self.lm_head._zero_init = True + if h.xsa_last_n > 0: + for i in range(max(0, h.num_layers - h.xsa_last_n), h.num_layers): + self.blocks[i].attn.use_xsa = True + self.looping_active = False + if h.num_loops > 0: + loop_seg = list(range(h.loop_start, 
h.loop_end + 1)) + all_indices = list(range(h.loop_start)) + for _ in range(h.num_loops + 1): + all_indices.extend(loop_seg) + all_indices.extend(range(h.loop_end + 1, h.num_layers)) + num_enc = len(all_indices) // 2 + self.encoder_indices = all_indices[:num_enc] + self.decoder_indices = all_indices[num_enc:] + else: + self.encoder_indices = list(range(self.num_encoder_layers)) + self.decoder_indices = list(range(self.num_encoder_layers, h.num_layers)) + self.num_skip_weights = min( + len(self.encoder_indices), len(self.decoder_indices) + ) + self.skip_weights = nn.Parameter( + torch.ones(self.num_skip_weights, h.model_dim, dtype=torch.float32) + ) + self.skip_gates = ( + nn.Parameter( + torch.zeros(self.num_skip_weights, h.model_dim, dtype=torch.float32) + ) + if h.skip_gates_enabled + else None + ) + self.parallel_start_layer = h.parallel_start_layer + self.parallel_final_lane = h.parallel_final_lane.lower() + self.parallel_post_lambdas = nn.Parameter( + torch.ones(h.num_layers, 2, 2, dtype=torch.float32) + ) + self.parallel_resid_lambdas = nn.Parameter( + torch.full((h.num_layers, 2), 1.1, dtype=torch.float32) + ) + self.smear_gate_enabled = h.smear_gate + if h.smear_gate: + self.smear_w = nn.Parameter(torch.zeros(h.gate_width)) + self.smear_lambda = nn.Parameter(torch.zeros(1)) + else: + self.smear_w = None + self.smear_lambda = None + if h.attn_out_gate: + for block in self.blocks: + block.attn.attn_out_gate_w = nn.Parameter( + torch.zeros(h.num_heads, h.gate_width, dtype=torch.float32) + ) + self._init_weights() + + def _init_weights(self): + if self.tie_embeddings: + nn.init.normal_(self.tok_emb.weight, mean=0.0, std=self.tied_embed_init_std) + n = self.num_layers + proj_scale = 1.0 / math.sqrt(2 * n) + for i in range(n): + nn.init.orthogonal_(self.qo_bank.data[i], gain=1.0) + nn.init.zeros_(self.qo_bank.data[n + i]) + self.qo_bank.data[n + i].mul_(proj_scale) + nn.init.orthogonal_(self.kv_bank.data[i], gain=1.0) + nn.init.orthogonal_(self.kv_bank.data[n + i], gain=1.0) + for i in range(n): + nn.init.orthogonal_(self.mlp_up_bank.data[i], gain=1.0) + nn.init.zeros_(self.mlp_down_bank.data[i]) + self.mlp_down_bank.data[i].mul_(proj_scale) + for name, module in self.named_modules(): + if isinstance(module, nn.Linear): + if getattr(module, "_zero_init", False): + nn.init.zeros_(module.weight) + elif ( + module.weight.ndim == 2 + and module.weight.shape[0] >= 64 + and module.weight.shape[1] >= 64 + ): + nn.init.orthogonal_(module.weight, gain=1.0) + + def _bank_weights(self, i): + n = self.num_layers + return ( + self.qo_bank[i], + self.kv_bank[i], + self.kv_bank[n + i], + self.qo_bank[n + i], + self.mlp_up_bank[i], + self.mlp_down_bank[i], + ) + + def _parallel_block( + self, block_idx, lane0, lane1, x0, + q_w, k_w, v_w, out_w, up_w, down_w, + cu_seqlens=None, max_seqlen=0, + ): + block = self.blocks[block_idx] + mix = block.resid_mix.to(dtype=lane0.dtype) + attn_read = mix[0][None, None, :] * lane0 + mix[1][None, None, :] * x0 + attn_out = block.attn( + block.attn_norm(attn_read) * block.ln_scale_factor, + q_w, k_w, v_w, out_w, + cu_seqlens=cu_seqlens, max_seqlen=max_seqlen, + x_orig=attn_read, + ) + attn_out = block.attn_scale.to(dtype=attn_out.dtype)[None, None, :] * attn_out + mlp_read = lane1 + mlp_out = block.mlp_scale.to(dtype=lane1.dtype)[None, None, :] * block.mlp( + block.mlp_norm(mlp_read) * block.ln_scale_factor, up_w, down_w + ) + attn_resid = self.parallel_resid_lambdas[block_idx, 0].to(dtype=lane0.dtype) + attn_post = self.parallel_post_lambdas[block_idx, 
0].to(dtype=lane0.dtype) + mlp_resid = self.parallel_resid_lambdas[block_idx, 1].to(dtype=lane0.dtype) + mlp_post = self.parallel_post_lambdas[block_idx, 1].to(dtype=lane0.dtype) + lane0 = attn_resid * lane0 + attn_post[0] * attn_out + mlp_post[0] * mlp_out + lane1 = mlp_resid * lane1 + attn_post[1] * attn_out + mlp_post[1] * mlp_out + return lane0, lane1 + + def _final_parallel_hidden(self, lane0, lane1): + if self.parallel_final_lane == "mlp": + return lane1 + if self.parallel_final_lane == "attn": + return lane0 + return 0.5 * (lane0 + lane1) + + def forward_logits(self, input_ids, cu_seqlens=None, max_seqlen=0): + x = self.tok_emb(input_ids) + x = F.rms_norm(x, (x.size(-1),)) + if self.smear_gate_enabled: + x = _apply_smear_gate_inline(x, self.smear_w, self.smear_lambda) + x0 = x + skips = [] + enc_iter = ( + self.encoder_indices + if self.looping_active + else range(self.num_encoder_layers) + ) + dec_iter = ( + self.decoder_indices + if self.looping_active + else range( + self.num_encoder_layers, + self.num_encoder_layers + self.num_decoder_layers, + ) + ) + for i in enc_iter: + q_w, k_w, v_w, out_w, up_w, down_w = self._bank_weights(i) + x = self.blocks[i](x, x0, q_w, k_w, v_w, out_w, up_w, down_w, cu_seqlens=cu_seqlens, max_seqlen=max_seqlen) + skips.append(x) + psl = self.parallel_start_layer + lane0 = None + lane1 = None + for skip_idx, i in enumerate(dec_iter): + q_w, k_w, v_w, out_w, up_w, down_w = self._bank_weights(i) + if i >= psl and psl > 0: + if lane0 is None: + lane0 = x + lane1 = x + if skip_idx < self.num_skip_weights and skips: + skip = skips.pop() + w = self.skip_weights[skip_idx].to(dtype=lane0.dtype)[None, None, :] + if self.skip_gates is not None: + g = torch.sigmoid(self.skip_gates[skip_idx].to(dtype=lane0.dtype))[None, None, :] + lane0 = torch.lerp(w * skip, lane0, g) + else: + lane0 = lane0 + w * skip + lane0, lane1 = self._parallel_block( + i, lane0, lane1, x0, q_w, k_w, v_w, out_w, up_w, down_w, + cu_seqlens=cu_seqlens, max_seqlen=max_seqlen, + ) + else: + if skip_idx < self.num_skip_weights and skips: + scaled_skip = ( + self.skip_weights[skip_idx].to(dtype=x.dtype)[None, None, :] + * skips.pop() + ) + if self.skip_gates is not None: + g = torch.sigmoid(self.skip_gates[skip_idx].to(dtype=x.dtype))[None, None, :] + x = torch.lerp(scaled_skip, x, g) + else: + x = x + scaled_skip + x = self.blocks[i](x, x0, q_w, k_w, v_w, out_w, up_w, down_w, cu_seqlens=cu_seqlens, max_seqlen=max_seqlen) + if lane0 is not None: + x = self._final_parallel_hidden(lane0, lane1) + x = self.final_norm(x) + if self.tie_embeddings: + logits_proj = F.linear(x, self.tok_emb.weight) + else: + logits_proj = self.lm_head(x) + return self.logit_softcap * torch.tanh(logits_proj / self.logit_softcap) + + def forward(self, input_ids, target_ids, cu_seqlens=None, max_seqlen=0): + logits = self.forward_logits( + input_ids, cu_seqlens=cu_seqlens, max_seqlen=max_seqlen + ) + if Hyperparameters.rho1_enabled: + # Rho-1: selective loss — only backprop through top-K% hardest tokens + per_token = F.cross_entropy( + logits.reshape(-1, logits.size(-1)).float(), + target_ids.reshape(-1), + reduction="none", + ) + k = max(1, int(per_token.numel() * Hyperparameters.rho1_top_k)) + threshold = per_token.detach().topk(k).values[-1] + mask = (per_token.detach() >= threshold).float() + return (per_token * mask).sum() / mask.sum() + return F.cross_entropy( + logits.reshape(-1, logits.size(-1)).float(), + target_ids.reshape(-1), + reduction="mean", + ) + + def forward_ttt(self, input_ids, target_ids, lora): + x = 
self.tok_emb(input_ids) + x = F.rms_norm(x, (x.size(-1),)) + if self.smear_gate_enabled: + x = _apply_smear_gate_inline(x, self.smear_w, self.smear_lambda) + x0 = x + skips = [] + enc_iter = ( + self.encoder_indices + if self.looping_active + else list(range(self.num_encoder_layers)) + ) + dec_iter = ( + self.decoder_indices + if self.looping_active + else list( + range( + self.num_encoder_layers, + self.num_encoder_layers + self.num_decoder_layers, + ) + ) + ) + slot = 0 + for i in enc_iter: + q_w, k_w, v_w, out_w, up_w, down_w = self._bank_weights(i) + x = self._block_with_lora(self.blocks[i], x, x0, lora, slot, q_w, k_w, v_w, out_w, up_w, down_w) + slot += 1 + skips.append(x) + psl = self.parallel_start_layer + lane0 = None + lane1 = None + for skip_idx, i in enumerate(dec_iter): + q_w, k_w, v_w, out_w, up_w, down_w = self._bank_weights(i) + if i >= psl and psl > 0: + if lane0 is None: + lane0 = x + lane1 = x + if skip_idx < self.num_skip_weights and skips: + skip = skips.pop() + w = self.skip_weights[skip_idx].to(dtype=lane0.dtype)[None, None, :] + if self.skip_gates is not None: + g = torch.sigmoid(self.skip_gates[skip_idx].to(dtype=lane0.dtype))[None, None, :] + lane0 = torch.lerp(w * skip, lane0, g) + else: + lane0 = lane0 + w * skip + lane0, lane1 = self._parallel_block_with_lora( + i, lane0, lane1, x0, lora, slot, + q_w, k_w, v_w, out_w, up_w, down_w, + ) + else: + if skip_idx < self.num_skip_weights and skips: + scaled_skip = ( + self.skip_weights[skip_idx].to(dtype=x.dtype)[None, None, :] + * skips.pop() + ) + if self.skip_gates is not None: + g = torch.sigmoid(self.skip_gates[skip_idx].to(dtype=x.dtype))[None, None, :] + x = torch.lerp(scaled_skip, x, g) + else: + x = x + scaled_skip + x = self._block_with_lora(self.blocks[i], x, x0, lora, slot, q_w, k_w, v_w, out_w, up_w, down_w) + slot += 1 + if lane0 is not None: + x = self._final_parallel_hidden(lane0, lane1) + x = self.final_norm(x) + if self.tie_embeddings: + logits = F.linear(x, self.tok_emb.weight) + else: + logits = self.lm_head(x) + logits = logits + lora.lm_head_lora(x) + logits = self.logit_softcap * torch.tanh(logits / self.logit_softcap) + bsz, sl, V = logits.shape + return F.cross_entropy( + logits.float().reshape(-1, V), target_ids.reshape(-1), reduction="none" + ).reshape(bsz, sl) + + def _block_with_lora(self, block, x, x0, lora, slot, q_w, k_w, v_w, out_w, up_w, down_w): + mix = block.resid_mix.to(dtype=x.dtype) + x_in = mix[0][None, None, :] * x + mix[1][None, None, :] * x0 + n = block.attn_norm(x_in) * block.ln_scale_factor + attn = block.attn + bsz, seqlen, dim = n.shape + q = (F.linear(n, q_w.to(n.dtype)) + lora.q_loras[slot](n)).reshape( + bsz, seqlen, attn.num_heads, attn.head_dim + ) + k = F.linear(n, k_w.to(n.dtype)) + if lora.k_loras is not None: + k = k + lora.k_loras[slot](n) + k = k.reshape(bsz, seqlen, attn.num_kv_heads, attn.head_dim) + v = (F.linear(n, v_w.to(n.dtype)) + lora.v_loras[slot](n)).reshape( + bsz, seqlen, attn.num_kv_heads, attn.head_dim + ) + q = F.rms_norm(q, (q.size(-1),)) + k = F.rms_norm(k, (k.size(-1),)) + cos, sin = attn.rotary(seqlen, n.device, q.dtype) + q = apply_rotary_emb(q, cos, sin, attn.rope_dims) + k = apply_rotary_emb(k, cos, sin, attn.rope_dims) + q = q * attn.q_gain.to(dtype=q.dtype)[None, None, :, None] + y = flash_attn_3_func(q, k, v, causal=True) + if attn.use_xsa: + y = attn._xsa_efficient(y, v) + if attn.attn_out_gate_w is not None: + y = _apply_attn_out_gate_inline(y, x_in, attn.attn_out_gate_w) + y = y.reshape(bsz, seqlen, dim) + attn_out = F.linear(y, 
out_w.to(n.dtype)) + if lora.o_loras is not None: + attn_out = attn_out + lora.o_loras[slot](n) + x_out = x_in + block.attn_scale.to(dtype=x_in.dtype)[None, None, :] * attn_out + mlp_n = block.mlp_norm(x_out) * block.ln_scale_factor + mlp_out = block.mlp(mlp_n, up_w, down_w) + if lora.mlp_loras is not None: + mlp_out = mlp_out + lora.mlp_loras[slot](mlp_n) + x_out = x_out + block.mlp_scale.to(dtype=x_out.dtype)[None, None, :] * mlp_out + return x_out + + def _parallel_block_with_lora( + self, block_idx, lane0, lane1, x0, lora, slot, + q_w, k_w, v_w, out_w, up_w, down_w, + ): + block = self.blocks[block_idx] + mix = block.resid_mix.to(dtype=lane0.dtype) + attn_read = mix[0][None, None, :] * lane0 + mix[1][None, None, :] * x0 + n = block.attn_norm(attn_read) * block.ln_scale_factor + attn = block.attn + bsz, seqlen, dim = n.shape + q = (F.linear(n, q_w.to(n.dtype)) + lora.q_loras[slot](n)).reshape( + bsz, seqlen, attn.num_heads, attn.head_dim + ) + k = F.linear(n, k_w.to(n.dtype)) + if lora.k_loras is not None: + k = k + lora.k_loras[slot](n) + k = k.reshape(bsz, seqlen, attn.num_kv_heads, attn.head_dim) + v = (F.linear(n, v_w.to(n.dtype)) + lora.v_loras[slot](n)).reshape( + bsz, seqlen, attn.num_kv_heads, attn.head_dim + ) + q = F.rms_norm(q, (q.size(-1),)) + k = F.rms_norm(k, (k.size(-1),)) + cos, sin = attn.rotary(seqlen, n.device, q.dtype) + q = apply_rotary_emb(q, cos, sin, attn.rope_dims) + k = apply_rotary_emb(k, cos, sin, attn.rope_dims) + q = q * attn.q_gain.to(dtype=q.dtype)[None, None, :, None] + y = flash_attn_3_func(q, k, v, causal=True) + if attn.use_xsa: + y = attn._xsa_efficient(y, v) + if attn.attn_out_gate_w is not None: + y = _apply_attn_out_gate_inline(y, attn_read, attn.attn_out_gate_w) + y = y.reshape(bsz, seqlen, dim) + attn_out = F.linear(y, out_w.to(n.dtype)) + if lora.o_loras is not None: + attn_out = attn_out + lora.o_loras[slot](n) + attn_out = block.attn_scale.to(dtype=attn_out.dtype)[None, None, :] * attn_out + mlp_read = lane1 + mlp_n = block.mlp_norm(mlp_read) * block.ln_scale_factor + mlp_out = block.mlp(mlp_n, up_w, down_w) + if lora.mlp_loras is not None: + mlp_out = mlp_out + lora.mlp_loras[slot](mlp_n) + mlp_out = block.mlp_scale.to(dtype=lane1.dtype)[None, None, :] * mlp_out + attn_resid = self.parallel_resid_lambdas[block_idx, 0].to(dtype=lane0.dtype) + attn_post = self.parallel_post_lambdas[block_idx, 0].to(dtype=lane0.dtype) + mlp_resid = self.parallel_resid_lambdas[block_idx, 1].to(dtype=lane0.dtype) + mlp_post = self.parallel_post_lambdas[block_idx, 1].to(dtype=lane0.dtype) + lane0 = attn_resid * lane0 + attn_post[0] * attn_out + mlp_post[0] * mlp_out + lane1 = mlp_resid * lane1 + attn_post[1] * attn_out + mlp_post[1] * mlp_out + return lane0, lane1 + + +class BatchedLinearLoRA(nn.Module): + def __init__(self, bsz, in_features, out_features, rank): + super().__init__() + self._bound = 1.0 / math.sqrt(in_features) + self.A = nn.Parameter( + torch.empty(bsz, rank, in_features).uniform_(-self._bound, self._bound) + ) + self.B = nn.Parameter(torch.zeros(bsz, out_features, rank)) + + def reset(self): + with torch.no_grad(): + self.A.uniform_(-self._bound, self._bound) + self.B.zero_() + + def forward(self, x): + return (x @ self.A.transpose(1, 2)) @ self.B.transpose(1, 2) + + +class BatchedTTTLoRA(nn.Module): + def __init__(self, bsz, model, rank, k_lora=True, mlp_lora=True, o_lora=True): + super().__init__() + self.bsz = bsz + dim = model.qo_bank.shape[-1] + vocab = model.tok_emb.num_embeddings + if getattr(model, "looping_active", False): + num_slots = 
len(model.encoder_indices) + len(model.decoder_indices) + else: + num_slots = len(model.blocks) + kv_dim = model.blocks[0].attn.num_kv_heads * ( + dim // model.blocks[0].attn.num_heads + ) + embed_dim = model.tok_emb.embedding_dim + self.lm_head_lora = BatchedLinearLoRA(bsz, embed_dim, vocab, rank) + self.q_loras = nn.ModuleList( + [BatchedLinearLoRA(bsz, dim, dim, rank) for _ in range(num_slots)] + ) + self.v_loras = nn.ModuleList( + [BatchedLinearLoRA(bsz, dim, kv_dim, rank) for _ in range(num_slots)] + ) + self.k_loras = ( + nn.ModuleList( + [BatchedLinearLoRA(bsz, dim, kv_dim, rank) for _ in range(num_slots)] + ) + if k_lora + else None + ) + self.mlp_loras = ( + nn.ModuleList( + [BatchedLinearLoRA(bsz, dim, dim, rank) for _ in range(num_slots)] + ) + if mlp_lora + else None + ) + self.o_loras = ( + nn.ModuleList( + [BatchedLinearLoRA(bsz, dim, dim, rank) for _ in range(num_slots)] + ) + if o_lora + else None + ) + + def reset(self): + with torch.no_grad(): + self.lm_head_lora.reset() + for loras in [self.q_loras, self.v_loras, self.k_loras, + self.mlp_loras, self.o_loras]: + if loras is not None: + for lora in loras: + lora.reset() + + +# Polar Express per-iteration minimax Newton-Schulz coefficients (PR #1344). +# Replaces the fixed (3.4445, -4.775, 2.0315) coefficients when +# POLAR_EXPRESS_NS=1 (default). Applied at backend_steps=5. +_PE_COEFFS = ( + (8.156554524902461, -22.48329292557795, 15.878769915207462), + (4.042929935166739, -2.808917465908714, 0.5000178451051316), + (3.8916678022926607, -2.772484153217685, 0.5060648178503393), + (3.285753657755655, -2.3681294933425376, 0.46449024233003106), + (2.3465413258596377, -1.7097828382687081, 0.42323551169305323), +) +_POLAR_EXPRESS_NS = bool(int(os.environ.get("POLAR_EXPRESS_NS", "1"))) + + +@torch.compile +def zeropower_via_newtonschulz5(G, steps=10, eps=1e-07): + was_2d = G.ndim == 2 + if was_2d: + G = G.unsqueeze(0) + X = G.bfloat16() + transposed = X.size(-2) > X.size(-1) + if transposed: + X = X.mT + X = X / (X.norm(dim=(-2, -1), keepdim=True) + eps) + if _POLAR_EXPRESS_NS: + coeffs = _PE_COEFFS[:steps] if steps <= len(_PE_COEFFS) else _PE_COEFFS + for a, b, c in coeffs: + A = X @ X.mT + B = b * A + c * (A @ A) + X = a * X + B @ X + else: + a, b, c = 3.4445, -4.775, 2.0315 + for _ in range(steps): + A = X @ X.mT + B = b * A + c * (A @ A) + X = a * X + B @ X + if transposed: + X = X.mT + if was_2d: + X = X.squeeze(0) + return X + + +class Muon(torch.optim.Optimizer): + def __init__( + self, + params, + lr, + momentum, + backend_steps, + nesterov=True, + weight_decay=0.0, + row_normalize=False, + ): + super().__init__( + params, + dict( + lr=lr, + momentum=momentum, + backend_steps=backend_steps, + nesterov=nesterov, + weight_decay=weight_decay, + row_normalize=row_normalize, + ), + ) + self._built = False + + def _build(self): + self._distributed = dist.is_available() and dist.is_initialized() + self._world_size = dist.get_world_size() if self._distributed else 1 + self._rank = dist.get_rank() if self._distributed else 0 + ws = self._world_size + self._bank_meta = [] + for group in self.param_groups: + for p in group["params"]: + B = p.shape[0] + padded_B = ((B + ws - 1) // ws) * ws + shard_B = padded_B // ws + tail = p.shape[1:] + dev = p.device + self._bank_meta.append({ + "p": p, + "B": B, + "padded_grad": torch.zeros(padded_B, *tail, device=dev, dtype=torch.bfloat16), + "shard": torch.zeros(shard_B, *tail, device=dev, dtype=torch.bfloat16), + "shard_mom": torch.zeros(shard_B, *tail, device=dev, dtype=torch.bfloat16), + 
"full_update": torch.zeros(padded_B, *tail, device=dev, dtype=torch.bfloat16), + "scale": max(1, p.shape[-2] / p.shape[-1]) ** 0.5, + }) + self._bank_meta.sort(key=lambda m: -m["p"].numel()) + self._built = True + + def launch_reduce_scatters(self): + if not self._built: + self._build() + if not self._distributed: + return + self._rs_futures = [] + for m in self._bank_meta: + p = m["p"] + if p.grad is None: + self._rs_futures.append(None) + continue + pg = m["padded_grad"] + pg[: m["B"]].copy_(p.grad.bfloat16()) + if pg.shape[0] > m["B"]: + pg[m["B"] :].zero_() + fut = dist.reduce_scatter_tensor( + m["shard"], pg, op=dist.ReduceOp.AVG, async_op=True + ) + self._rs_futures.append(fut) + + @torch.no_grad() + def step(self, closure=None): + loss = None + if closure is not None: + with torch.enable_grad(): + loss = closure() + if not self._built: + self._build() + for group in self.param_groups: + lr = group["lr"] + momentum = group["momentum"] + backend_steps = group["backend_steps"] + nesterov = group["nesterov"] + wd = group.get("weight_decay", 0.0) + row_normalize = group.get("row_normalize", False) + prev_ag_handle = None + prev_m = None + sharded = self._distributed and hasattr(self, "_rs_futures") + for idx, m in enumerate(self._bank_meta): + p = m["p"] + if p.grad is None: + continue + if prev_ag_handle is not None: + prev_ag_handle.wait() + pp = prev_m["p"] + upd = prev_m["full_update"][: prev_m["B"]] + if wd > 0.0: + pp.data.mul_(1.0 - lr * wd) + pp.add_(upd.to(dtype=pp.dtype), alpha=-lr * prev_m["scale"]) + if sharded and self._rs_futures[idx] is not None: + self._rs_futures[idx].wait() + g = m["shard"] + buf = m["shard_mom"] + else: + g = p.grad.bfloat16() + state = self.state[p] + if "momentum_buffer" not in state: + state["momentum_buffer"] = torch.zeros_like(g) + buf = state["momentum_buffer"] + buf.mul_(momentum).add_(g) + if nesterov: + update = g.add(buf, alpha=momentum) + else: + update = buf + if row_normalize: + rn = update.float().norm(dim=-1, keepdim=True).clamp_min(1e-07) + update = update / rn.to(update.dtype) + update = zeropower_via_newtonschulz5(update, steps=backend_steps) + if sharded: + prev_ag_handle = dist.all_gather_into_tensor( + m["full_update"], update, async_op=True + ) + prev_m = m + else: + if wd > 0.0: + p.data.mul_(1.0 - lr * wd) + p.add_(update.to(dtype=p.dtype), alpha=-lr * m["scale"]) + if prev_ag_handle is not None: + prev_ag_handle.wait() + pp = prev_m["p"] + upd = prev_m["full_update"][: prev_m["B"]] + if wd > 0.0: + pp.data.mul_(1.0 - lr * wd) + pp.add_(upd.to(dtype=pp.dtype), alpha=-lr * prev_m["scale"]) + if hasattr(self, "_rs_futures"): + del self._rs_futures + return loss + + +CONTROL_TENSOR_NAME_PATTERNS = tuple( + pattern + for pattern in os.environ.get( + "CONTROL_TENSOR_NAME_PATTERNS", + "attn_scale,attn_scales,mlp_scale,mlp_scales,resid_mix,resid_mixes,q_gain,skip_weight,skip_weights,skip_gates,parallel_post_lambdas,parallel_resid_lambdas,attn_out_gate_w", + ).split(",") + if pattern +) + + +PACKED_REPLICATED_GRAD_MAX_NUMEL = 1 << 15 + + +class Optimizers: + def __init__(self, h, base_model): + matrix_params = [ + base_model.qo_bank, + base_model.kv_bank, + base_model.mlp_up_bank, + base_model.mlp_down_bank, + ] + block_named_params = list(base_model.blocks.named_parameters()) + scalar_params = [ + p + for (name, p) in block_named_params + if p.ndim < 2 + or any(pattern in name for pattern in CONTROL_TENSOR_NAME_PATTERNS) + ] + if base_model.skip_weights.numel() > 0: + scalar_params.append(base_model.skip_weights) + if 
base_model.skip_gates is not None and base_model.skip_gates.numel() > 0: + scalar_params.append(base_model.skip_gates) + if base_model.parallel_post_lambdas is not None: + scalar_params.append(base_model.parallel_post_lambdas) + if base_model.parallel_resid_lambdas is not None: + scalar_params.append(base_model.parallel_resid_lambdas) + if base_model.smear_w is not None: + scalar_params.append(base_model.smear_w) + scalar_params.append(base_model.smear_lambda) + token_lr = h.tied_embed_lr if h.tie_embeddings else h.embed_lr + tok_params = [ + {"params": [base_model.tok_emb.weight], "lr": token_lr, "base_lr": token_lr} + ] + self.optimizer_tok = torch.optim.AdamW( + tok_params, + betas=(h.beta1, h.beta2), + eps=h.adam_eps, + weight_decay=h.embed_wd, + fused=True, + ) + self.optimizer_muon = Muon( + matrix_params, + lr=h.matrix_lr, + momentum=h.muon_momentum, + backend_steps=h.muon_backend_steps, + weight_decay=h.muon_wd, + row_normalize=h.muon_row_normalize, + ) + for group in self.optimizer_muon.param_groups: + group["base_lr"] = h.matrix_lr + self.optimizer_scalar = torch.optim.AdamW( + [{"params": scalar_params, "lr": h.scalar_lr, "base_lr": h.scalar_lr}], + betas=(h.beta1, h.beta2), + eps=h.adam_eps, + weight_decay=h.adam_wd, + fused=True, + ) + self.optimizers = [ + self.optimizer_tok, + self.optimizer_muon, + self.optimizer_scalar, + ] + self.replicated_params = list(tok_params[0]["params"]) + self.replicated_params.extend(scalar_params) + self.replicated_large_params = [] + self.replicated_packed_params = [] + for p in self.replicated_params: + if p.numel() <= PACKED_REPLICATED_GRAD_MAX_NUMEL: + self.replicated_packed_params.append(p) + else: + self.replicated_large_params.append(p) + + def __iter__(self): + return iter(self.optimizers) + + def zero_grad_all(self): + for opt in self.optimizers: + opt.zero_grad(set_to_none=True) + + def _all_reduce_packed_grads(self): + grads_by_key = collections.defaultdict(list) + for p in self.replicated_packed_params: + if p.grad is not None: + grads_by_key[(p.grad.device, p.grad.dtype)].append(p.grad) + for grads in grads_by_key.values(): + flat = torch.empty( + sum(g.numel() for g in grads), + device=grads[0].device, + dtype=grads[0].dtype, + ) + offset = 0 + for g in grads: + n = g.numel() + flat[offset : offset + n].copy_(g.contiguous().view(-1)) + offset += n + dist.all_reduce(flat, op=dist.ReduceOp.AVG) + offset = 0 + for g in grads: + n = g.numel() + g.copy_(flat[offset : offset + n].view_as(g)) + offset += n + + def step(self, distributed=False): + self.optimizer_muon.launch_reduce_scatters() + if distributed: + reduce_handles = [ + dist.all_reduce(p.grad, op=dist.ReduceOp.AVG, async_op=True) + for p in self.replicated_large_params + if p.grad is not None + ] + self._all_reduce_packed_grads() + for handle in reduce_handles: + handle.wait() + self.optimizer_tok.step() + self.optimizer_scalar.step() + self.optimizer_muon.step() + self.zero_grad_all() + + +def restore_fp32_params(model): + for module in model.modules(): + if isinstance(module, CastedLinear): + module.float() + for name, param in model.named_parameters(): + if ( + param.ndim < 2 + or any(pattern in name for pattern in CONTROL_TENSOR_NAME_PATTERNS) + ) and param.dtype != torch.float32: + param.data = param.data.float() + if hasattr(model, "qo_bank") and model.qo_bank is not None: + model.qo_bank.data = model.qo_bank.data.float() + model.kv_bank.data = model.kv_bank.data.float() + model.mlp_up_bank.data = model.mlp_up_bank.data.float() + model.mlp_down_bank.data = 
model.mlp_down_bank.data.float() + + +def collect_hessians(model, train_loader, h, device, n_calibration_batches=64): + hessians = {} + hooks = [] + for i, block in enumerate(model.blocks): + block.attn._calib = True + block.mlp._calib = True + block.mlp.use_fused = False + + def make_attn_hook(layer_idx): + def hook_fn(module, inp, out): + x = inp[0].detach().float() + if x.ndim == 3: + x = x.reshape(-1, x.shape[-1]) + for suffix in ["c_q", "c_k", "c_v"]: + name = f"blocks.{layer_idx}.attn.{suffix}.weight" + if name not in hessians: + hessians[name] = torch.zeros( + x.shape[1], x.shape[1], dtype=torch.float32, device=device + ) + hessians[name].addmm_(x.T, x) + y = module._last_proj_input + if y is not None: + y = y.float() + if y.ndim == 3: + y = y.reshape(-1, y.shape[-1]) + name = f"blocks.{layer_idx}.attn.proj.weight" + if name not in hessians: + hessians[name] = torch.zeros( + y.shape[1], y.shape[1], dtype=torch.float32, device=device + ) + hessians[name].addmm_(y.T, y) + return hook_fn + + def make_mlp_hook(layer_idx): + def hook_fn(module, inp, out): + x = inp[0].detach().float() + if x.ndim == 3: + x = x.reshape(-1, x.shape[-1]) + name = f"blocks.{layer_idx}.mlp.fc.weight" + if name not in hessians: + hessians[name] = torch.zeros( + x.shape[1], x.shape[1], dtype=torch.float32, device=device + ) + hessians[name].addmm_(x.T, x) + h_act = module._last_down_input + if h_act is not None: + h_act = h_act.float() + if h_act.ndim == 3: + h_act = h_act.reshape(-1, h_act.shape[-1]) + name = f"blocks.{layer_idx}.mlp.proj.weight" + if name not in hessians: + hessians[name] = torch.zeros( + h_act.shape[1], h_act.shape[1], dtype=torch.float32, device=device + ) + hessians[name].addmm_(h_act.T, h_act) + return hook_fn + + for i, block in enumerate(model.blocks): + hooks.append(block.attn.register_forward_hook(make_attn_hook(i))) + hooks.append(block.mlp.register_forward_hook(make_mlp_hook(i))) + + # Hessian hooks for embedding factorization projection layers + def make_linear_input_hook(weight_name): + def hook_fn(module, inp, out): + x = inp[0].detach().float() + if x.ndim == 3: + x = x.reshape(-1, x.shape[-1]) + if weight_name not in hessians: + hessians[weight_name] = torch.zeros( + x.shape[1], x.shape[1], dtype=torch.float32, device=device + ) + hessians[weight_name].addmm_(x.T, x) + return hook_fn + + if model.tie_embeddings: + hook_module = model.final_norm + + def make_output_hook(name): + def hook_fn(module, inp, out): + x = out.detach().float() + if x.ndim == 3: + x = x.reshape(-1, x.shape[-1]) + if name not in hessians: + hessians[name] = torch.zeros( + x.shape[1], x.shape[1], dtype=torch.float32, device=device + ) + hessians[name].addmm_(x.T, x) + return hook_fn + + hooks.append( + hook_module.register_forward_hook(make_output_hook("tok_emb.weight")) + ) + model.eval() + with torch.no_grad(): + for _ in range(n_calibration_batches): + x, _ = train_loader.next_batch(h.train_batch_tokens, h.grad_accum_steps) + model.forward_logits(x) + for hook in hooks: + hook.remove() + for i, block in enumerate(model.blocks): + block.attn._calib = False + block.mlp._calib = False + block.mlp.use_fused = True + for name in hessians: + hessians[name] = hessians[name].cpu() / n_calibration_batches + return hessians + + +def gptq_quantize_weight(w, H, clip_sigmas=3.0, clip_range=63, block_size=128): + W_orig = w.float().clone() + rows, cols = W_orig.shape + H = H.float().clone() + dead = torch.diag(H) == 0 + H[dead, dead] = 1 + damp = 0.01 * H.diag().mean() + H.diagonal().add_(damp) + perm = 
torch.argsort(H.diag(), descending=True) + invperm = torch.argsort(perm) + W_perm = W_orig[:, perm].clone() + W_perm[:, dead[perm]] = 0 + H = H[perm][:, perm] + Hinv = torch.cholesky_inverse(torch.linalg.cholesky(H)) + Hinv = torch.linalg.cholesky(Hinv, upper=True) + row_std = W_orig.std(dim=1) + s = (clip_sigmas * row_std / clip_range).clamp_min(1e-10).to(torch.float16) + sf = s.float() + Q = torch.zeros(rows, cols, dtype=torch.int8) + W_work = W_perm.clone() + for i1 in range(0, cols, block_size): + i2 = min(i1 + block_size, cols) + W_block = W_work[:, i1:i2].clone() + Hinv_block = Hinv[i1:i2, i1:i2] + Err = torch.zeros(rows, i2 - i1) + for j in range(i2 - i1): + w_col = W_block[:, j] + d = Hinv_block[j, j] + q_col = torch.clamp(torch.round(w_col / sf), -clip_range, clip_range) + Q[:, i1 + j] = q_col.to(torch.int8) + err = (w_col - q_col.float() * sf) / d + Err[:, j] = err + W_block[:, j:] -= err.unsqueeze(1) * Hinv_block[j, j:].unsqueeze(0) + if i2 < cols: + W_work[:, i2:] -= Err @ Hinv[i1:i2, i2:] + return Q[:, invperm], s + + +def _lqer_pack(A, B, bits): + rng = 2 ** (bits - 1) - 1 + sA = (A.abs().amax(dim=1).clamp_min(1e-10) / rng).to(torch.float16) + sB = (B.abs().amax(dim=1).clamp_min(1e-10) / rng).to(torch.float16) + qA = torch.clamp(torch.round(A / sA.float().view(-1, 1)), -rng, rng).to(torch.int8) + qB = torch.clamp(torch.round(B / sB.float().view(-1, 1)), -rng, rng).to(torch.int8) + return qA, sA, qB, sB + + +def _lqer_pack_asym(A, B, g=64): + # A: INT2 per-matrix scalar (signed [-2,1], scale = |A|max/1.5). + sA = (A.abs().amax().clamp_min(1e-10) / 1.5).to(torch.float16) + qA = torch.clamp(torch.round(A / sA.float()), -2, 1).to(torch.int8) + # B: INT4 groupwise g over flattened B (signed [-8,7], per-group scale). + Bf = B.reshape(-1, g) + Bmax = Bf.abs().amax(dim=-1, keepdim=True).clamp_min(1e-10) + sB = (Bmax / 7.5).to(torch.float16).reshape(-1) + qB = torch.clamp(torch.round(Bf / sB.float().reshape(-1, 1)), -8, 7).to( + torch.int8 + ).reshape(B.shape) + return qA, sA, qB, sB + + +def gptq_mixed_quantize(state_dict, hessians, h): + result = {} + meta = {} + lqer_on = bool(getattr(h, "lqer_enabled", False)) + lqer_cands = {} + for (name, tensor) in state_dict.items(): + t = tensor.detach().cpu().contiguous() + if not t.is_floating_point() or t.numel() <= 65536: + result[name] = t.to(torch.float16) if t.is_floating_point() else t + meta[name] = "passthrough (float16)" + continue + if "tok_emb" in name: + cs = h.embed_clip_sigmas + elif ".mlp." in name: + cs = h.mlp_clip_sigmas + elif ".attn." 
in name: + cs = h.attn_clip_sigmas + else: + cs = h.matrix_clip_sigmas + bits = h.embed_bits if "tok_emb" in name else h.matrix_bits + clip_range = 2 ** (bits - 1) - 1 + ret = gptq_quantize_weight( + t, hessians[name], clip_sigmas=cs, clip_range=clip_range + ) + q, s = ret + result[name + ".q"] = q + result[name + ".scale"] = s + meta[name] = f"gptq (int{bits})" + if lqer_on: + W_q = q.float() * s.float().view(-1, 1) + E = t.float() - W_q + lqer_cands[name] = (E, float(E.norm())) + if lqer_on and lqer_cands: + top = sorted(lqer_cands.items(), key=lambda kv: -kv[1][1])[: h.lqer_top_k] + asym_on = bool(getattr(h, "lqer_asym_enabled", False)) + asym_g = int(getattr(h, "lqer_asym_group", 64)) + for (name, (E, _)) in top: + U, S, Vh = torch.linalg.svd(E, full_matrices=False) + r = min(h.lqer_rank, S.numel()) + A = (U[:, :r] * S[:r]).contiguous() + B = Vh[:r, :].contiguous() + if asym_on and B.numel() % asym_g == 0: + qA, sA, qB, sB = _lqer_pack_asym(A, B, asym_g) + result[name + ".lqA_a"] = qA + result[name + ".lqAs_a"] = sA + result[name + ".lqB_a"] = qB + result[name + ".lqBs_a"] = sB + meta[name] = meta[name] + "+lqer_asym" + else: + qA, sA, qB, sB = _lqer_pack(A, B, h.lqer_factor_bits) + result[name + ".lqA"] = qA + result[name + ".lqAs"] = sA + result[name + ".lqB"] = qB + result[name + ".lqBs"] = sB + meta[name] = meta[name] + "+lqer" + categories = collections.defaultdict(set) + for (name, cat) in meta.items(): + short = re.sub("\\.\\d+$", "", re.sub("blocks\\.\\d+", "blocks", name)) + categories[cat].add(short) + log("Quantized weights:") + for cat in sorted(categories): + log(f" {cat}: {', '.join(sorted(categories[cat]))}") + return result, meta + + +def dequantize_mixed(result, meta, template_sd): + out = {} + for (name, orig) in template_sd.items(): + info = meta.get(name) + if info is None: + continue + orig_dtype = orig.dtype + if "passthrough" in info: + t = result[name] + if t.dtype == torch.float16 and orig_dtype in ( + torch.float32, + torch.bfloat16, + ): + t = t.to(orig_dtype) + out[name] = t + continue + q, s = result[name + ".q"], result[name + ".scale"] + if s.ndim > 0: + W = q.float() * s.float().view(q.shape[0], *[1] * (q.ndim - 1)) + else: + W = q.float() * float(s.item()) + if "lqer_asym" in info: + qA_t = result[name + ".lqA_a"] + sA_t = result[name + ".lqAs_a"] + qB_t = result[name + ".lqB_a"] + sB_t = result[name + ".lqBs_a"] + qA = qA_t.float() * float(sA_t) + g_sz = qB_t.numel() // sB_t.numel() + qB = (qB_t.reshape(-1, g_sz).float() * sB_t.float().view(-1, 1)).reshape( + qB_t.shape + ) + W = W + qA @ qB + elif "lqer" in info: + qA = result[name + ".lqA"].float() * result[name + ".lqAs"].float().view(-1, 1) + qB = result[name + ".lqB"].float() * result[name + ".lqBs"].float().view(-1, 1) + W = W + qA @ qB + out[name] = W.to(orig_dtype) + return out + + +_BSHF_MAGIC = b"BSHF" + + +def _byte_shuffle(data, stride=2): + if stride <= 1 or len(data) < stride: + return data + src = np.frombuffer(data, dtype=np.uint8) + n = len(src) + out = np.empty(n, dtype=np.uint8) + dest_off = 0 + for pos in range(stride): + chunk = src[pos::stride] + out[dest_off : dest_off + len(chunk)] = chunk + dest_off += len(chunk) + return _BSHF_MAGIC + bytes([stride]) + out.tobytes() + + +def _byte_unshuffle(data): + if len(data) < 5 or data[:4] != _BSHF_MAGIC: + return data + stride = data[4] + if stride < 2: + return data[5:] + payload = np.frombuffer(data, dtype=np.uint8, offset=5) + n = len(payload) + out = np.empty(n, dtype=np.uint8) + src_off = 0 + for pos in range(stride): + chunk_len = 
n // stride + (1 if pos < n % stride else 0) + out[pos::stride][:chunk_len] = payload[src_off : src_off + chunk_len] + src_off += chunk_len + return out.tobytes() + + +def _compress(data, compressor): + data = _byte_shuffle(data) + if compressor == "lzma": + return lzma.compress(data, preset=6) + elif compressor == "brotli": + import brotli + + return brotli.compress(data, quality=11) + raise ValueError(f"Unknown compressor: {compressor!r}") + + +def _decompress(data, compressor): + if compressor == "lzma": + raw = lzma.decompress(data) + elif compressor == "brotli": + import brotli + + raw = brotli.decompress(data) + else: + raise ValueError(f"Unknown compressor: {compressor!r}") + raw = _byte_unshuffle(raw) + return raw + + +def _unbank_state_dict(state_dict, num_layers): + sd = {} + n = num_layers + for k, v in state_dict.items(): + t = v.detach().cpu() if v is not None else None + if k == "qo_bank": + for i in range(n): + sd[f"blocks.{i}.attn.c_q.weight"] = t[i] + sd[f"blocks.{i}.attn.proj.weight"] = t[n + i] + elif k == "kv_bank": + for i in range(n): + sd[f"blocks.{i}.attn.c_k.weight"] = t[i] + sd[f"blocks.{i}.attn.c_v.weight"] = t[n + i] + elif k == "mlp_up_bank": + for i in range(n): + sd[f"blocks.{i}.mlp.fc.weight"] = t[i] + elif k == "mlp_down_bank": + for i in range(n): + sd[f"blocks.{i}.mlp.proj.weight"] = t[i] + else: + if t is not None: + sd[k] = t + return sd + + +def _rebank_state_dict(flat_sd, num_layers, model_dim, kv_dim, hidden_dim): + sd = {} + n = num_layers + sd["qo_bank"] = torch.zeros(2 * n, model_dim, model_dim) + sd["kv_bank"] = torch.zeros(2 * n, kv_dim, model_dim) + for i in range(n): + sd["qo_bank"][i] = flat_sd[f"blocks.{i}.attn.c_q.weight"] + sd["qo_bank"][n + i] = flat_sd[f"blocks.{i}.attn.proj.weight"] + sd["kv_bank"][i] = flat_sd[f"blocks.{i}.attn.c_k.weight"] + sd["kv_bank"][n + i] = flat_sd[f"blocks.{i}.attn.c_v.weight"] + sd["mlp_up_bank"] = torch.zeros(n, hidden_dim, model_dim) + sd["mlp_down_bank"] = torch.zeros(n, model_dim, hidden_dim) + for i in range(n): + sd["mlp_up_bank"][i] = flat_sd[f"blocks.{i}.mlp.fc.weight"] + sd["mlp_down_bank"][i] = flat_sd[f"blocks.{i}.mlp.proj.weight"] + for k, v in flat_sd.items(): + if not ( + k.startswith("blocks.") + and any( + p in k + for p in [ + ".attn.c_q.", ".attn.c_k.", ".attn.c_v.", + ".attn.proj.", ".mlp.fc.", ".mlp.proj.", + ] + ) + ): + sd[k] = v + return sd + + + +def _compressed_code_size(code): + code_raw = code.encode("utf-8") + minified = subprocess.run( + ["pyminify", "--no-rename-locals", "--no-hoist-literals", "--remove-literal-statements", "-"], + input=code_raw, capture_output=True, check=True, + ).stdout + compressed = lzma.compress(minified) + encoded = base64.b85encode(compressed) + wrapper = b'import lzma as L,base64 as B\nexec(L.decompress(B.b85decode("' + encoded + b'")))\n' + return len(code_raw), len(wrapper) + + +def serialize(h, base_model, code): + code_bytes_uncompressed, code_bytes = _compressed_code_size(code) + if h.is_main_process: + torch.save(base_model.state_dict(), h.model_path) + model_bytes = os.path.getsize(h.model_path) + log(f"Serialized model: {model_bytes} bytes") + log(f"Code size (uncompressed): {code_bytes_uncompressed} bytes") + log(f"Code size (compressed): {code_bytes} bytes") + sd_cpu = _unbank_state_dict(base_model.state_dict(), h.num_layers) + device = torch.device("cuda", h.local_rank) + log("GPTQ:collecting Hessians from calibration data...") + t0 = time.perf_counter() + calib_loader = ShuffledSequenceLoader(h, device) + hessians = collect_hessians( + 
base_model, + calib_loader, + h, + device, + n_calibration_batches=h.gptq_calibration_batches, + ) + log(f"GPTQ:collected {len(hessians)} Hessians in {time.perf_counter()-t0:.1f}s") + quant_result, quant_meta = gptq_mixed_quantize(sd_cpu, hessians, h) + quant_buf = io.BytesIO() + torch.save({"w": quant_result, "m": quant_meta}, quant_buf) + quant_raw = quant_buf.getvalue() + quant_blob = _compress(quant_raw, h.compressor) + quant_file_bytes = len(quant_blob) + bytes_total = quant_file_bytes + code_bytes + if h.is_main_process: + with open(h.quantized_model_path, "wb") as f: + f.write(quant_blob) + log(f"Serialized model quantized+{h.compressor}: {quant_file_bytes} bytes") + log(f"Total submission size quantized+{h.compressor}: {bytes_total} bytes") + return bytes_total, quant_file_bytes + + +def deserialize(h, device): + eval_model = GPT(h).to(device).bfloat16() + restore_fp32_params(eval_model) + flat_template = _unbank_state_dict(eval_model.state_dict(), h.num_layers) + with open(h.quantized_model_path, "rb") as f: + quant_blob_disk = f.read() + quant_state = torch.load( + io.BytesIO(_decompress(quant_blob_disk, h.compressor)), map_location="cpu" + ) + deq_flat = dequantize_mixed(quant_state["w"], quant_state["m"], flat_template) + head_dim = h.model_dim // h.num_heads + kv_dim = h.num_kv_heads * head_dim + hidden_dim = int(h.mlp_mult * h.model_dim) + deq_state = _rebank_state_dict(deq_flat, h.num_layers, h.model_dim, kv_dim, hidden_dim) + eval_model.load_state_dict(deq_state, strict=True) + return eval_model + + +def _loss_bpb(loss_sum, token_count, byte_count): + val_loss = (loss_sum / token_count).item() + val_bpb = val_loss / math.log(2.0) * (token_count.item() / byte_count.item()) + return val_loss, val_bpb + + +def eval_val(h, device, val_data, model, forward_logits_fn=None): + seq_len = h.eval_seq_len + local_batch_tokens = h.val_batch_tokens // (h.world_size * h.grad_accum_steps) + if local_batch_tokens < seq_len: + raise ValueError( + f"VAL_BATCH_SIZE must provide at least one sequence per rank; got VAL_BATCH_SIZE={h.val_batch_tokens}, WORLD_SIZE={h.world_size}, GRAD_ACCUM_STEPS={h.grad_accum_steps}, seq_len={seq_len}" + ) + local_batch_seqs = local_batch_tokens // seq_len + total_seqs = (val_data.val_tokens.numel() - 1) // seq_len + seq_start = total_seqs * h.rank // h.world_size + seq_end = total_seqs * (h.rank + 1) // h.world_size + + # TODO: Don't truncate this. 
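+    # Note (added comment, not in the original): the truncation below rounds each
+    # rank's range down to a whole number of local batches, so any trailing
+    # sequences on a rank are simply skipped when computing the val metric.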
+ seq_end = seq_start + ((seq_end - seq_start) // local_batch_seqs) * local_batch_seqs + + val_loss_sum = torch.zeros((), device=device, dtype=torch.float64) + val_token_count = torch.zeros((), device=device, dtype=torch.float64) + val_byte_count = torch.zeros((), device=device, dtype=torch.float64) + run_forward_logits = ( + (model.module.forward_logits if hasattr(model, "module") else model.forward_logits) + if forward_logits_fn is None + else forward_logits_fn + ) + model.eval() + global BOS_ID + if BOS_ID is None: + BOS_ID = 1 + with torch.no_grad(): + for batch_seq_start in range(seq_start, seq_end, local_batch_seqs): + batch_seq_end = min(batch_seq_start + local_batch_seqs, seq_end) + raw_start = batch_seq_start * seq_len + raw_end = batch_seq_end * seq_len + 1 + local = val_data.val_tokens[raw_start:raw_end].to( + device=device, dtype=torch.int64, non_blocking=True + ) + x = local[:-1] + y = local[1:] + bos_pos = (x == BOS_ID).nonzero(as_tuple=True)[0].tolist() + cu_seqlens, max_seqlen = _build_cu_seqlens( + bos_pos, x.numel(), x.device, h.eval_seq_len, 64 + ) + with torch.autocast(device_type="cuda", dtype=torch.bfloat16, enabled=True): + logits = run_forward_logits( + x[None], cu_seqlens=cu_seqlens, max_seqlen=max_seqlen + ).detach() + per_token_loss = F.cross_entropy( + logits.reshape(-1, logits.size(-1)).float(), + y.reshape(-1), + reduction="none", + ) + val_loss_sum += per_token_loss.to(torch.float64).sum() + val_token_count += float(y.numel()) + prev_ids = x + tgt_ids = y + token_bytes = val_data.base_bytes_lut[tgt_ids].to(dtype=torch.int16) + token_bytes += ( + val_data.has_leading_space_lut[tgt_ids] + & ~val_data.is_boundary_token_lut[prev_ids] + ).to(dtype=torch.int16) + val_byte_count += token_bytes.to(torch.float64).sum() + if dist.is_available() and dist.is_initialized(): + dist.all_reduce(val_loss_sum, op=dist.ReduceOp.SUM) + dist.all_reduce(val_token_count, op=dist.ReduceOp.SUM) + dist.all_reduce(val_byte_count, op=dist.ReduceOp.SUM) + model.train() + return _loss_bpb(val_loss_sum, val_token_count, val_byte_count) + + +def eval_val_sliding(h, device, val_data, base_model, forward_logits_fn=None, batch_seqs=32): + global BOS_ID + if BOS_ID is None: + BOS_ID = 1 + base_model.eval() + run_forward_logits = base_model.forward_logits if forward_logits_fn is None else forward_logits_fn + seq_len = h.eval_seq_len + stride = h.eval_stride + total_tokens = val_data.val_tokens.numel() - 1 + context_size = seq_len - stride + window_starts = [ws for ws in range(0, total_tokens, stride) + if ws + context_size < total_tokens] + total_windows = len(window_starts) + my_s = (total_windows * h.rank) // h.world_size + my_e = (total_windows * (h.rank + 1)) // h.world_size + my_windows = window_starts[my_s:my_e] + loss_sum = torch.zeros((), device=device, dtype=torch.float64) + token_count = torch.zeros((), device=device, dtype=torch.float64) + byte_count = torch.zeros((), device=device, dtype=torch.float64) + total_batches = (len(my_windows) + batch_seqs - 1) // batch_seqs + is_master = h.rank == 0 + cu_bucket = 64 + t_sw_start = time.perf_counter() + with torch.no_grad(): + for bi in range(0, len(my_windows), batch_seqs): + batch_idx = bi // batch_seqs + if is_master and (batch_idx % 50 == 0 or batch_idx == total_batches - 1): + elapsed = time.perf_counter() - t_sw_start + rl = float(loss_sum.item() / token_count.item()) if token_count.item() > 0 else 0.0 + rb = float((rl / math.log(2.0)) * token_count.item() / byte_count.item()) if byte_count.item() > 0 else 0.0 + 
log(f"sliding_progress: batch {batch_idx+1}/{total_batches} " + f"tokens:{int(token_count.item())} running_loss:{rl:.4f} running_bpb:{rb:.4f} " + f"elapsed:{elapsed:.1f}s") + batch_ws = my_windows[bi:bi + batch_seqs] + x_parts = [] + y_parts = [] + cu_starts = [] + score_ranges = [] + offset = 0 + for ws in batch_ws: + end = min(ws + seq_len, total_tokens) + wlen = end - ws + chunk_cpu = val_data.val_tokens[ws:end + 1] + bos_pos = (chunk_cpu[:-1] == BOS_ID).nonzero(as_tuple=True)[0].tolist() + if not bos_pos or bos_pos[0] != 0: + bos_pos = [0] + bos_pos + cu_starts.extend(offset + pos for pos in bos_pos) + chunk = chunk_cpu.to(dtype=torch.int64, device=device) + x_parts.append(chunk[:-1]) + y_parts.append(chunk[1:]) + score_ranges.append((offset, wlen, ws)) + offset += wlen + x_cat = torch.cat(x_parts, dim=0)[None] + y_cat = torch.cat(y_parts, dim=0) + boundaries = cu_starts + [offset] + padded_len = get_next_multiple_of_n(len(boundaries), cu_bucket) + cu_seqlens = torch.full((padded_len,), offset, dtype=torch.int32, device=device) + cu_seqlens[:len(boundaries)] = torch.tensor(boundaries, dtype=torch.int32, device=device) + with torch.autocast(device_type="cuda", dtype=torch.bfloat16): + logits = run_forward_logits(x_cat, cu_seqlens=cu_seqlens, max_seqlen=seq_len) + flat_nll = F.cross_entropy( + logits.reshape(-1, logits.size(-1)).float(), + y_cat, + reduction="none", + ) + flat_x = x_cat.reshape(-1) + for off, wlen, ws in score_ranges: + s = 0 if ws == 0 else context_size + lo = off + s + hi = off + wlen + scored_nll = flat_nll[lo:hi].to(torch.float64) + loss_sum += scored_nll.sum() + token_count += float(hi - lo) + tgt = y_cat[lo:hi] + prev = flat_x[lo:hi] + tb = val_data.base_bytes_lut[tgt].to(torch.float64) + tb += (val_data.has_leading_space_lut[tgt] & ~val_data.is_boundary_token_lut[prev]).to(torch.float64) + byte_count += tb.sum() + if dist.is_available() and dist.is_initialized(): + dist.all_reduce(loss_sum, op=dist.ReduceOp.SUM) + dist.all_reduce(token_count, op=dist.ReduceOp.SUM) + dist.all_reduce(byte_count, op=dist.ReduceOp.SUM) + base_model.train() + return _loss_bpb(loss_sum, token_count, byte_count) + + +def _find_docs(all_tokens): + bos_positions = (all_tokens == BOS_ID).nonzero(as_tuple=True)[0].numpy() + docs = [] + if len(bos_positions) == 0: + # Fallback for tokenizers without BOS tokens (e.g. 
casefold): + # split into synthetic documents of ~2048 tokens each + synth_doc_len = 2048 + total = all_tokens.numel() + for start in range(0, total - 1, synth_doc_len): + doc_len = min(synth_doc_len, total - start) + if doc_len >= 2: + docs.append((start, doc_len)) + return docs + for i in range(len(bos_positions)): + start = int(bos_positions[i]) + end = ( + int(bos_positions[i + 1]) + if i + 1 < len(bos_positions) + else all_tokens.numel() + ) + if i + 1 < len(bos_positions): + end += 1 + if end - start >= 2: + docs.append((start, end - start)) + return docs + + +def _build_ttt_global_batches(doc_entries, h, ascending=False): + batch_size = h.ttt_batch_size + global_doc_entries = sorted(doc_entries, key=lambda x: x[1][1]) + global_batches = [ + global_doc_entries[i : i + batch_size] + for i in range(0, len(global_doc_entries), batch_size) + ] + indexed = list(enumerate(global_batches)) + if not ascending: + indexed.sort(key=lambda ib: -max(dl for _, (_, dl) in ib[1])) + return indexed + + +def _init_batch_counter(path): + with open(path, "wb") as f: + f.write((0).to_bytes(4, "little")) + + +def _claim_next_batch(counter_path, queue_len): + try: + with open(counter_path, "r+b") as f: + fcntl.flock(f, fcntl.LOCK_EX) + idx = int.from_bytes(f.read(4), "little") + f.seek(0) + f.write((idx + 1).to_bytes(4, "little")) + f.flush() + except FileNotFoundError: + return queue_len + return idx + + +def _compute_chunk_window(ci, pred_len, num_chunks, chunk_size, eval_seq_len): + chunk_end = pred_len if ci == num_chunks - 1 else (ci + 1) * chunk_size + win_start = max(0, chunk_end - eval_seq_len) + win_len = chunk_end - win_start + chunk_start = ci * chunk_size + chunk_offset = chunk_start - win_start + chunk_len = chunk_end - chunk_start + return win_start, win_len, chunk_offset, chunk_len + + +def _accumulate_bpb( + ptl, + x, + y, + chunk_offsets, + chunk_lens, + pos_idx, + base_bytes_lut, + has_leading_space_lut, + is_boundary_token_lut, + loss_sum, + byte_sum, + token_count, +): + pos = pos_idx[: x.size(1)].unsqueeze(0) + mask = ( + (chunk_lens.unsqueeze(1) > 0) + & (pos >= chunk_offsets.unsqueeze(1)) + & (pos < (chunk_offsets + chunk_lens).unsqueeze(1)) + ) + mask_f64 = mask.to(torch.float64) + tok_bytes = base_bytes_lut[y].to(torch.float64) + tok_bytes += (has_leading_space_lut[y] & ~is_boundary_token_lut[x]).to( + torch.float64 + ) + loss_sum += (ptl.to(torch.float64) * mask_f64).sum() + byte_sum += (tok_bytes * mask_f64).sum() + token_count += chunk_lens.to(torch.float64).sum() + + +def _loss_bpb_from_sums(loss_sum, token_count, byte_sum): + val_loss = (loss_sum / token_count).item() + val_bpb = val_loss / math.log(2.0) * (token_count.item() / byte_sum.item()) + return val_loss, val_bpb + + +def _split_doc_entries_for_phased(doc_entries, prefix_docs): + prefix_docs = max(0, min(len(doc_entries), int(prefix_docs))) + return doc_entries[:prefix_docs], doc_entries[prefix_docs:] + + +def _add_to_counter(path, delta): + try: + with open(path, "r+b") as f: + fcntl.flock(f, fcntl.LOCK_EX) + cur = int.from_bytes(f.read(8), "little", signed=True) + cur += int(delta) + f.seek(0) + f.write(int(cur).to_bytes(8, "little", signed=True)) + f.flush() + return cur + except FileNotFoundError: + return int(delta) + + +def _init_int64_counter(path): + with open(path, "wb") as f: + f.write((0).to_bytes(8, "little", signed=True)) + + +def _select_ttt_doc_entries(docs, h): + doc_entries = list(enumerate(docs)) + if h.val_doc_fraction < 1.0: + sample_n = max(1, int(round(len(docs) * h.val_doc_fraction))) + 
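+ # Subsample documents deterministically with the run seed so every rank picks the same set.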
sampled_indices = sorted( + random.Random(h.seed).sample(range(len(docs)), sample_n) + ) + return [(i, docs[i]) for i in sampled_indices] + return doc_entries + + +def train_val_ttt_global_sgd_distributed(h, device, val_data, base_model, val_tokens, batch_seqs=None): + global BOS_ID + if BOS_ID is None: + BOS_ID = 1 + base_model.eval() + seq_len = h.eval_seq_len + total_tokens = val_tokens.numel() - 1 + ttt_chunk = h.global_ttt_chunk_tokens + batch_seqs = h.global_ttt_batch_seqs if batch_seqs is None else batch_seqs + num_chunks = (total_tokens + ttt_chunk - 1) // ttt_chunk + ttt_params = [p for p in base_model.parameters()] + for p in ttt_params: + p.requires_grad_(True) + optimizer = torch.optim.SGD( + ttt_params, lr=h.global_ttt_lr, momentum=h.global_ttt_momentum + ) + t_start = time.perf_counter() + for ci in range(num_chunks): + chunk_start = ci * ttt_chunk + chunk_end = min((ci + 1) * ttt_chunk, total_tokens) + is_last_chunk = ci == num_chunks - 1 + if is_last_chunk or h.global_ttt_epochs <= 0: + continue + base_model.train() + chunk_seqs = (chunk_end - chunk_start) // seq_len + if chunk_seqs <= 0: + continue + warmup_chunks = max(0, min(h.global_ttt_warmup_chunks, num_chunks - 1)) + if warmup_chunks > 0 and ci < warmup_chunks: + warmup_denom = max(warmup_chunks - 1, 1) + warmup_t = ci / warmup_denom + lr_now = ( + h.global_ttt_warmup_start_lr + + (h.global_ttt_lr - h.global_ttt_warmup_start_lr) * warmup_t + ) + else: + decay_steps = max(num_chunks - 1 - warmup_chunks, 1) + decay_ci = max(ci - warmup_chunks, 0) + lr_now = h.global_ttt_lr * 0.5 * ( + 1.0 + math.cos(math.pi * decay_ci / decay_steps) + ) + for pg in optimizer.param_groups: + pg["lr"] = lr_now + my_seq_s = chunk_seqs * h.rank // h.world_size + my_seq_e = chunk_seqs * (h.rank + 1) // h.world_size + my_chunk_seqs = my_seq_e - my_seq_s + for _ in range(h.global_ttt_epochs): + for bs in range(0, my_chunk_seqs, batch_seqs): + be = min(bs + batch_seqs, my_chunk_seqs) + actual_bs = my_seq_s + bs + start_tok = chunk_start + actual_bs * seq_len + end_tok = chunk_start + (my_seq_s + be) * seq_len + 1 + if end_tok > val_tokens.numel(): + continue + local = val_tokens[start_tok:end_tok].to(device=device, dtype=torch.int64) + x_flat = local[:-1] + y_flat = local[1:] + optimizer.zero_grad(set_to_none=True) + with torch.enable_grad(): + with torch.autocast(device_type="cuda", dtype=torch.bfloat16): + if h.global_ttt_respect_doc_boundaries: + bos_pos = (x_flat == BOS_ID).nonzero(as_tuple=True)[0].tolist() + cu_seqlens, max_seqlen = _build_cu_seqlens( + bos_pos, x_flat.numel(), x_flat.device, h.eval_seq_len, 64 + ) + loss = base_model( + x_flat[None], + y_flat[None], + cu_seqlens=cu_seqlens, + max_seqlen=max_seqlen, + ) + else: + x = x_flat.reshape(-1, seq_len) + y = y_flat.reshape(-1, seq_len) + loss = base_model(x, y) + loss.backward() + if dist.is_available() and dist.is_initialized(): + for p in ttt_params: + if p.grad is not None: + dist.all_reduce(p.grad, op=dist.ReduceOp.SUM) + p.grad.mul_(1.0 / h.world_size) + if h.global_ttt_grad_clip > 0: + torch.nn.utils.clip_grad_norm_(ttt_params, h.global_ttt_grad_clip) + optimizer.step() + base_model.eval() + if h.rank == 0: + elapsed = time.perf_counter() - t_start + log( + f"tttg: c{ci+1}/{num_chunks} lr:{lr_now:.6f} t:{elapsed:.1f}s" + ) + for p in base_model.parameters(): + p.requires_grad_(True) + base_model.eval() + + +def eval_val_ttt_phased(h, base_model, device, val_data, forward_ttt_train): + global BOS_ID + if BOS_ID is None: + BOS_ID = 1 + base_model.eval() + for p in 
base_model.parameters(): + p.requires_grad_(False) + all_tokens = val_data.val_tokens + all_tokens_idx = all_tokens.to(torch.int32) + docs = _find_docs(all_tokens) + doc_entries = _select_ttt_doc_entries(docs, h) + prefix_doc_limit = max(0, min(len(doc_entries), int(h.phased_ttt_prefix_docs))) + num_phases = max(1, int(h.phased_ttt_num_phases)) + phase_boundaries = [] + for pi in range(num_phases): + boundary = prefix_doc_limit * (pi + 1) // num_phases + phase_boundaries.append(boundary) + current_phase = 0 + current_phase_boundary = phase_boundaries[0] + log( + "ttt_phased:" + f" total_docs:{len(doc_entries)} prefix_docs:{prefix_doc_limit} " + f"suffix_docs:{len(doc_entries) - prefix_doc_limit}" + f" num_phases:{num_phases} boundaries:{phase_boundaries}" + ) + chunk_size, eval_seq_len = h.ttt_chunk_size, h.ttt_eval_seq_len + eval_batch_set = None + if h.ttt_eval_batches: + eval_batch_set = set(int(x) for x in h.ttt_eval_batches.split(",") if x.strip()) + use_ascending = eval_batch_set is not None + global_batches_sorted = _build_ttt_global_batches( + doc_entries, h, ascending=use_ascending + ) + queue_len = len(global_batches_sorted) + counter_path = f"/tmp/ttt_counter_{h.run_id}" + prefix_counter_path = f"/tmp/ttt_prefix_counter_{h.run_id}" + pause_flag_path = f"/tmp/ttt_pause_flag_{h.run_id}" + if h.rank == 0: + _init_batch_counter(counter_path) + _init_int64_counter(prefix_counter_path) + try: + os.remove(pause_flag_path) + except FileNotFoundError: + pass + if dist.is_available() and dist.is_initialized(): + path_list = [counter_path, prefix_counter_path, pause_flag_path] + dist.broadcast_object_list(path_list, src=0) + counter_path, prefix_counter_path, pause_flag_path = path_list + dist.barrier() + loss_sum = torch.zeros((), device=device, dtype=torch.float64) + byte_sum = torch.zeros((), device=device, dtype=torch.float64) + token_count = torch.zeros((), device=device, dtype=torch.float64) + t_start = time.perf_counter() + reusable_lora = BatchedTTTLoRA( + h.ttt_batch_size, base_model, h.ttt_lora_rank, + k_lora=h.ttt_k_lora, mlp_lora=h.ttt_mlp_lora, o_lora=h.ttt_o_lora, + ).to(device) + + def _build_opt(lora): + if h.ttt_optimizer == "sgd": + return torch.optim.SGD( + lora.parameters(), lr=h.ttt_lora_lr, + momentum=h.ttt_beta1, weight_decay=h.ttt_weight_decay, + ) + return torch.optim.AdamW( + lora.parameters(), lr=h.ttt_lora_lr, + betas=(h.ttt_beta1, h.ttt_beta2), + eps=1e-10, weight_decay=h.ttt_weight_decay, fused=True, + ) + + reusable_opt = _build_opt(reusable_lora) + local_scored_docs = [] + global_ttt_done = prefix_doc_limit == 0 + try: + while True: + queue_idx = _claim_next_batch(counter_path, queue_len) + if queue_idx >= queue_len: + break + orig_batch_idx, batch_entries = global_batches_sorted[queue_idx] + batch = [doc for _, doc in batch_entries] + bsz = len(batch) + prev_loss = loss_sum.item() + prev_bytes = byte_sum.item() + prev_tokens = token_count.item() + if bsz == reusable_lora.bsz: + reusable_lora.reset() + for s in reusable_opt.state.values(): + for k, v in s.items(): + if isinstance(v, torch.Tensor): + v.zero_() + elif k == "step": + s[k] = 0 + cur_lora = reusable_lora + cur_opt = reusable_opt + else: + cur_lora = BatchedTTTLoRA( + bsz, base_model, h.ttt_lora_rank, + k_lora=h.ttt_k_lora, mlp_lora=h.ttt_mlp_lora, o_lora=h.ttt_o_lora, + ).to(device) + cur_opt = _build_opt(cur_lora) + pred_lens = [doc_len - 1 for _, doc_len in batch] + num_chunks = [(pl + chunk_size - 1) // chunk_size for pl in pred_lens] + max_nc = max(num_chunks) + num_chunks_t = 
torch.tensor(num_chunks, dtype=torch.int64, device=device) + for ci in range(max_nc): + active = [ci < nc for nc in num_chunks] + needs_train = any(ci < nc - 1 for nc in num_chunks) + tok_starts = torch.zeros(bsz, dtype=torch.int64) + tok_wls = torch.zeros(bsz, dtype=torch.int64) + chunk_offsets_cpu = torch.zeros(bsz, dtype=torch.int64) + chunk_lens_cpu = torch.zeros(bsz, dtype=torch.int64) + for b in range(bsz): + if not active[b]: + continue + doc_start, doc_len = batch[b] + win_start, win_len, chunk_offset, chunk_len = _compute_chunk_window( + ci, pred_lens[b], num_chunks[b], chunk_size, eval_seq_len + ) + tok_starts[b] = doc_start + win_start + tok_wls[b] = win_len + chunk_offsets_cpu[b] = chunk_offset + chunk_lens_cpu[b] = chunk_len + _, context_size, chunk_offset, _ = _compute_chunk_window( + ci, (ci + 1) * chunk_size, ci + 1, chunk_size, eval_seq_len + ) + col_idx = torch.arange(context_size + 1) + idx = tok_starts.unsqueeze(1) + col_idx.unsqueeze(0) + idx.clamp_(max=all_tokens.numel() - 1) + gathered_gpu = all_tokens_idx[idx].to( + device=device, dtype=torch.int64, non_blocking=True + ) + valid = (col_idx[:context_size].unsqueeze(0) < tok_wls.unsqueeze(1)).to( + device, non_blocking=True + ) + chunk_offsets = chunk_offsets_cpu.to(device, non_blocking=True) + chunk_lens = chunk_lens_cpu.to(device, non_blocking=True) + x = torch.where(valid, gathered_gpu[:, :context_size], 0) + y = torch.where(valid, gathered_gpu[:, 1 : context_size + 1], 0) + ctx_pos = torch.arange(context_size, device=device, dtype=torch.int64) + with torch.autocast(device_type="cuda", dtype=torch.bfloat16): + per_tok_loss = forward_ttt_train(x, y, lora=cur_lora) + with torch.no_grad(): + _accumulate_bpb( + per_tok_loss, + x, + y, + chunk_offsets, + chunk_lens, + ctx_pos, + val_data.base_bytes_lut, + val_data.has_leading_space_lut, + val_data.is_boundary_token_lut, + loss_sum, + byte_sum, + token_count, + ) + if needs_train: + activate_chunk_mask = (num_chunks_t - 1 > ci).float() + for gi in range(h.ttt_grad_steps): + if gi > 0: + with torch.autocast(device_type="cuda", dtype=torch.bfloat16): + per_tok_loss = forward_ttt_train(x, y, lora=cur_lora) + per_doc = per_tok_loss[ + :, chunk_offset : chunk_offset + chunk_size + ].mean(dim=-1) + cur_opt.zero_grad(set_to_none=True) + (per_doc * activate_chunk_mask).sum().backward() + cur_opt.step() + else: + del per_tok_loss + batch_num = orig_batch_idx + 1 + doc_lens = [dl for _, dl in batch] + should_report = batch_num in eval_batch_set if eval_batch_set is not None else True + if should_report: + cur_tokens = token_count.item() + cur_loss_val = loss_sum.item() + cur_bytes_val = byte_sum.item() + dt = cur_tokens - prev_tokens + db = cur_bytes_val - prev_bytes + if dt > 0 and db > 0: + b_loss = (cur_loss_val - prev_loss) / dt + b_bpb = b_loss / math.log(2.0) * (dt / db) + else: + b_loss = b_bpb = 0.0 + r_loss = cur_loss_val / max(cur_tokens, 1) + r_bpb = r_loss / math.log(2.0) * (cur_tokens / max(cur_bytes_val, 1)) + elapsed = time.perf_counter() - t_start + log( + f"ttp: b{batch_num}/{queue_len} bl:{b_loss:.4f} bb:{b_bpb:.4f} " + f"rl:{r_loss:.4f} rb:{r_bpb:.4f} dl:{min(doc_lens)}-{max(doc_lens)} " + f"gd:{int(global_ttt_done)}" + ) + if not global_ttt_done: + local_scored_docs.extend( + (orig_batch_idx, pos, doc_start, doc_len) + for pos, (doc_start, doc_len) in enumerate(batch) + ) + prefix_done = _add_to_counter(prefix_counter_path, len(batch_entries)) + if prefix_done >= current_phase_boundary: + try: + with open(pause_flag_path, "x"): + pass + except FileExistsError: 
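+ # Another rank may have created the pause flag first; that is expected.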
+ pass + should_pause = os.path.exists(pause_flag_path) + if should_pause: + if dist.is_available() and dist.is_initialized(): + dist.barrier() + gathered_scored_docs = [None] * h.world_size + if dist.is_available() and dist.is_initialized(): + dist.all_gather_object(gathered_scored_docs, local_scored_docs) + else: + gathered_scored_docs = [local_scored_docs] + scored_docs_for_global = [] + for rank_docs in gathered_scored_docs: + if rank_docs: + scored_docs_for_global.extend(rank_docs) + scored_docs_for_global.sort(key=lambda x: (x[0], x[1])) + scored_docs_for_global = scored_docs_for_global[:current_phase_boundary] + scored_token_chunks = [ + val_data.val_tokens[doc_start : doc_start + doc_len] + for _, _, doc_start, doc_len in scored_docs_for_global + ] + if scored_token_chunks: + global_ttt_tokens = torch.cat(scored_token_chunks) + else: + global_ttt_tokens = val_data.val_tokens[:0] + if h.rank == 0: + prefix_done = 0 + try: + with open(prefix_counter_path, "rb") as f: + prefix_done = int.from_bytes( + f.read(8), "little", signed=True + ) + except FileNotFoundError: + pass + log( + f"ttpp: phase:{current_phase + 1}/{num_phases} pd:{prefix_done} " + f"gd:{len(scored_docs_for_global)} " + f"t:{time.perf_counter() - t_start:.1f}s" + ) + train_val_ttt_global_sgd_distributed( + h, device, val_data, base_model, global_ttt_tokens + ) + for p in base_model.parameters(): + p.requires_grad_(False) + reusable_lora = BatchedTTTLoRA( + h.ttt_batch_size, base_model, h.ttt_lora_rank, + k_lora=h.ttt_k_lora, mlp_lora=h.ttt_mlp_lora, o_lora=h.ttt_o_lora, + ).to(device) + reusable_opt = _build_opt(reusable_lora) + current_phase += 1 + if current_phase >= num_phases: + global_ttt_done = True + else: + current_phase_boundary = phase_boundaries[current_phase] + if h.rank == 0: + try: + os.remove(pause_flag_path) + except FileNotFoundError: + pass + if dist.is_available() and dist.is_initialized(): + dist.barrier() + if h.rank == 0: + log(f"ttpr: phase:{current_phase}/{num_phases} t:{time.perf_counter() - t_start:.1f}s") + del cur_lora, cur_opt + finally: + pass + if dist.is_available() and dist.is_initialized(): + dist.all_reduce(loss_sum, op=dist.ReduceOp.SUM) + dist.all_reduce(byte_sum, op=dist.ReduceOp.SUM) + dist.all_reduce(token_count, op=dist.ReduceOp.SUM) + for p in base_model.parameters(): + p.requires_grad_(True) + base_model.train() + return _loss_bpb_from_sums(loss_sum, token_count, byte_sum) + + +def timed_eval(label, fn, *args, **kwargs): + torch.cuda.synchronize() + t0 = time.perf_counter() + val_loss, val_bpb = fn(*args, **kwargs) + torch.cuda.synchronize() + elapsed_ms = 1e3 * (time.perf_counter() - t0) + log( + f"{label} val_loss:{val_loss:.8f} val_bpb:{val_bpb:.8f} eval_time:{elapsed_ms:.0f}ms" + ) + return val_loss, val_bpb + + +def train_model(h, device, val_data): + base_model = GPT(h).to(device).bfloat16() + restore_fp32_params(base_model) + compiled_model = torch.compile(base_model, dynamic=False, fullgraph=True) + compiled_forward_logits = torch.compile( + base_model.forward_logits, dynamic=False, fullgraph=True + ) + model = compiled_model + log(f"model_params:{sum(p.numel()for p in base_model.parameters())}") + optimizers = Optimizers(h, base_model) + train_loader = DocumentPackingLoader(h, device) + max_wallclock_ms = ( + 1e3 * h.max_wallclock_seconds if h.max_wallclock_seconds > 0 else None + ) + if max_wallclock_ms is not None: + max_wallclock_ms -= h.gptq_reserve_seconds * 1e3 + log( + f"gptq:reserving {h.gptq_reserve_seconds:.0f}s, effective={max_wallclock_ms:.0f}ms" + ) + + 
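+ # Note: gptq_reserve_seconds of the wallclock budget is held back above for post-training GPTQ quantization.
+ # Wallclock-aware LR schedule: training_frac maps elapsed time (or step count when no cap is set)
+ # to a progress fraction in [0, 1]; lr_mul then applies a linear warmdown over the final
+ # warmdown_frac of training with a floor at min_lr, e.g. with warmdown_frac=0.75 and min_lr=0.1
+ # the scale is max((1.0 - frac) / 0.75, 0.1).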
def training_frac(step, elapsed_ms): + if max_wallclock_ms is None: + return step / max(h.iterations, 1) + return elapsed_ms / max(max_wallclock_ms, 1e-09) + + def lr_mul(frac): + if h.warmdown_frac <= 0: + return 1.0 + if frac >= 1.0 - h.warmdown_frac: + return max((1.0 - frac) / h.warmdown_frac, h.min_lr) + return 1.0 + + def step_fn(step, lr_scale): + optimizers.zero_grad_all() + train_loss = torch.zeros((), device=device) + for micro_step in range(h.grad_accum_steps): + x, y, cu_seqlens, _max_seqlen = train_loader.next_batch( + h.train_batch_tokens, h.grad_accum_steps + ) + with torch.autocast(device_type="cuda", dtype=torch.bfloat16, enabled=True): + loss = model(x, y, cu_seqlens=cu_seqlens, max_seqlen=h.train_seq_len) + train_loss += loss.detach() + (loss / h.grad_accum_steps).backward() + train_loss /= h.grad_accum_steps + frac = ( + min(step / h.muon_momentum_warmup_steps, 1.0) + if h.muon_momentum_warmup_steps > 0 + else 1.0 + ) + muon_momentum = ( + 1 - frac + ) * h.muon_momentum_warmup_start + frac * h.muon_momentum + for group in optimizers.optimizer_muon.param_groups: + group["momentum"] = muon_momentum + for opt in optimizers: + for group in opt.param_groups: + group["lr"] = group["base_lr"] * lr_scale + if h.grad_clip_norm > 0: + torch.nn.utils.clip_grad_norm_(base_model.parameters(), h.grad_clip_norm) + optimizers.step(distributed=h.distributed) + return train_loss + + if h.warmup_steps > 0: + initial_model_state = { + name: tensor.detach().cpu().clone() + for (name, tensor) in base_model.state_dict().items() + } + initial_optimizer_states = [ + copy.deepcopy(opt.state_dict()) for opt in optimizers + ] + model.train() + num_tokens_local = h.train_batch_tokens // h.world_size + for blk in base_model.blocks: + blk.attn.rotary(num_tokens_local, device, torch.bfloat16) + cu_bucket_size = train_loader.cu_bucket_size + warmup_cu_buckets = tuple(cu_bucket_size * i for i in range(1, 5)) + warmup_cu_iters = 3 + x, y, cu_seqlens, _ = train_loader.next_batch( + h.train_batch_tokens, h.grad_accum_steps + ) + log(f"warmup_cu_buckets:{','.join(str(b) for b in warmup_cu_buckets)} iters_each:{warmup_cu_iters}") + def _run_cu_bucket_warmup(): + for bucket_len in warmup_cu_buckets: + boundaries = list(range(0, x.size(1), max(h.train_seq_len, 1))) + if boundaries[-1] != x.size(1): + boundaries.append(x.size(1)) + cu = torch.full((bucket_len,), x.size(1), dtype=torch.int32, device=device) + cu[: len(boundaries)] = torch.tensor(boundaries, dtype=torch.int32, device=device) + for _ in range(warmup_cu_iters): + optimizers.zero_grad_all() + with torch.autocast(device_type="cuda", dtype=torch.bfloat16, enabled=True): + wloss = model(x, y, cu_seqlens=cu, max_seqlen=h.train_seq_len) + (wloss / h.grad_accum_steps).backward() + optimizers.zero_grad_all() + _run_cu_bucket_warmup() + if h.num_loops > 0: + base_model.looping_active = True + _run_cu_bucket_warmup() + base_model.looping_active = False + for warmup_step in range(h.warmup_steps): + step_fn(warmup_step, 1.0) + if ( + warmup_step <= 5 + or (warmup_step + 1) % 10 == 0 + or warmup_step + 1 == h.warmup_steps + ): + log(f"warmup_step: {warmup_step+1}/{h.warmup_steps}") + if h.num_loops > 0: + base_model.looping_active = True + log( + f"loop_warmup:enabled encoder:{base_model.encoder_indices} decoder:{base_model.decoder_indices}" + ) + for warmup_step in range(h.warmup_steps): + step_fn(warmup_step, 1.0) + if ( + warmup_step <= 5 + or (warmup_step + 1) % 10 == 0 + or warmup_step + 1 == h.warmup_steps + ): + log(f"loop_warmup_step: 
{warmup_step+1}/{h.warmup_steps}") + base_model.looping_active = False + base_model.load_state_dict(initial_model_state, strict=True) + for (opt, state) in zip(optimizers, initial_optimizer_states, strict=True): + opt.load_state_dict(state) + optimizers.zero_grad_all() + train_loader = DocumentPackingLoader(h, device) + ema_state = { + name: t.detach().float().clone() + for (name, t) in base_model.state_dict().items() + } + ema_decay = h.ema_decay + training_time_ms = 0.0 + stop_after_step = None + torch.cuda.synchronize() + t0 = time.perf_counter() + step = 0 + while True: + last_step = ( + step == h.iterations + or stop_after_step is not None + and step >= stop_after_step + ) + should_validate = ( + last_step or h.val_loss_every > 0 and step % h.val_loss_every == 0 + ) + if should_validate: + torch.cuda.synchronize() + training_time_ms += 1e3 * (time.perf_counter() - t0) + val_loss, val_bpb = eval_val( + h, device, val_data, model, compiled_forward_logits + ) + log( + f"{step}/{h.iterations} val_loss: {val_loss:.4f} val_bpb: {val_bpb:.4f}" + ) + torch.cuda.synchronize() + t0 = time.perf_counter() + if last_step: + if stop_after_step is not None and step < h.iterations: + log( + f"stopping_early: wallclock_cap train_time: {training_time_ms:.0f}ms step: {step}/{h.iterations}" + ) + break + elapsed_ms = training_time_ms + 1e3 * (time.perf_counter() - t0) + frac = training_frac(step, elapsed_ms) + scale = lr_mul(frac) + if ( + h.num_loops > 0 + and not base_model.looping_active + and frac >= h.enable_looping_at + ): + base_model.looping_active = True + log( + f"layer_loop:enabled step:{step} frac:{frac:.3f} encoder:{base_model.encoder_indices} decoder:{base_model.decoder_indices}" + ) + train_loss = step_fn(step, scale) + with torch.no_grad(): + for (name, t) in base_model.state_dict().items(): + ema_state[name].mul_(ema_decay).add_( + t.detach().float(), alpha=1.0 - ema_decay + ) + step += 1 + approx_training_time_ms = training_time_ms + 1e3 * (time.perf_counter() - t0) + should_log_train = h.train_log_every > 0 and ( + step <= 5 or step % h.train_log_every == 0 or stop_after_step is not None + ) + if should_log_train: + tok_per_sec = step * h.train_batch_tokens / (approx_training_time_ms / 1e3) + log( + f"{step}/{h.iterations} train_loss: {train_loss.item():.4f} train_time: {approx_training_time_ms/60000:.1f}m tok/s: {tok_per_sec:.0f}" + ) + reached_cap = ( + max_wallclock_ms is not None and approx_training_time_ms >= max_wallclock_ms + ) + if h.distributed and max_wallclock_ms is not None: + reached_cap_tensor = torch.tensor(int(reached_cap), device=device) + dist.all_reduce(reached_cap_tensor, op=dist.ReduceOp.MAX) + reached_cap = bool(reached_cap_tensor.item()) + if stop_after_step is None and reached_cap: + stop_after_step = step + log( + f"peak memory allocated: {torch.cuda.max_memory_allocated()//1024//1024} MiB reserved: {torch.cuda.max_memory_reserved()//1024//1024} MiB" + ) + log("ema:applying EMA weights") + current_state = base_model.state_dict() + avg_state = { + name: t.to(dtype=current_state[name].dtype) for (name, t) in ema_state.items() + } + base_model.load_state_dict(avg_state, strict=True) + return base_model, compiled_model, compiled_forward_logits + + +def train_and_eval(h, device): + random.seed(h.seed) + np.random.seed(h.seed) + torch.manual_seed(h.seed) + torch.cuda.manual_seed_all(h.seed) + if h.artifact_dir and h.is_main_process: + os.makedirs(h.artifact_dir, exist_ok=True) + val_data = ValidationData(h, device) + log( + f"train_shards: 
{len(list(Path(h.datasets_dir).resolve().glob('fineweb_train_*.bin')))}" + ) + log(f"val_tokens: {val_data.val_tokens.numel()-1}") + base_model, compiled_model, compiled_forward_logits = train_model( + h, device, val_data + ) + torch._dynamo.reset() + timed_eval( + "diagnostic pre-quantization post-ema", + eval_val, + h, + device, + val_data, + compiled_model, + compiled_forward_logits, + ) + serialize(h, base_model, Path(__file__).read_text(encoding="utf-8")) + if h.distributed: + dist.barrier() + eval_model = deserialize(h, device) + if h.num_loops > 0: + eval_model.looping_active = True + compiled_model = torch.compile(eval_model, dynamic=False, fullgraph=True) + compiled_forward_logits = torch.compile( + eval_model.forward_logits, dynamic=False, fullgraph=True + ) + timed_eval( + "diagnostic quantized", + eval_val, + h, + device, + val_data, + compiled_model, + compiled_forward_logits, + ) + if h.sliding_window_enabled: + timed_eval( + "diagnostic quantized_sliding_window", + eval_val_sliding, + h, + device, + val_data, + eval_model, + forward_logits_fn=compiled_forward_logits, + ) + if h.ttt_enabled: + del eval_model, compiled_model + torch._dynamo.reset() + torch.cuda.empty_cache() + ttt_model = deserialize(h, device) + if h.num_loops > 0: + ttt_model.looping_active = True + for p in ttt_model.parameters(): + p.requires_grad_(False) + + if h.rope_yarn: + _yarn_seqlen = h.train_batch_tokens // h.grad_accum_steps + for block in ttt_model.blocks: + block.attn.rotary(_yarn_seqlen, device, torch.bfloat16) + else: + for block in ttt_model.blocks: + block.attn.rotary._cos_cached = None + block.attn.rotary._sin_cached = None + block.attn.rotary._seq_len_cached = 0 + block.attn.rotary(h.ttt_eval_seq_len, device, torch.bfloat16) + + def _fwd_ttt_inner(input_ids, target_ids, lora): + return ttt_model.forward_ttt(input_ids, target_ids, lora=lora) + + _fwd_ttt_compiled_inner = None + + def _fwd_ttt(input_ids, target_ids, lora): + nonlocal _fwd_ttt_compiled_inner + if _fwd_ttt_compiled_inner is None: + _fwd_ttt_compiled_inner = torch.compile(_fwd_ttt_inner, dynamic=True) + return _fwd_ttt_compiled_inner(input_ids, target_ids, lora=lora) + + fwd_ttt_compiled = _fwd_ttt + log(f"ttt_lora:warming up compile (random tokens, no val data)") + global BOS_ID + if BOS_ID is None: + BOS_ID = 1 + t_warmup = time.perf_counter() + warmup_bszes = [h.ttt_batch_size] + for bsz in warmup_bszes: + wl = BatchedTTTLoRA( + bsz, ttt_model, h.ttt_lora_rank, + k_lora=h.ttt_k_lora, mlp_lora=h.ttt_mlp_lora, o_lora=h.ttt_o_lora, + ).to(device) + wo = torch.optim.AdamW( + wl.parameters(), + lr=h.ttt_lora_lr, + betas=(h.ttt_beta1, h.ttt_beta2), + eps=1e-10, + weight_decay=h.ttt_weight_decay, + fused=True, + ) + for ctx_len in (h.ttt_chunk_size, h.ttt_eval_seq_len): + xw = torch.randint(0, h.vocab_size, (bsz, ctx_len), device=device, dtype=torch.int64) + yw = torch.randint(0, h.vocab_size, (bsz, ctx_len), device=device, dtype=torch.int64) + with torch.autocast(device_type="cuda", dtype=torch.bfloat16): + ptl = fwd_ttt_compiled(xw, yw, lora=wl) + ptl[:, : min(h.ttt_chunk_size, ctx_len)].mean(dim=-1).sum().backward() + wo.step() + wo.zero_grad(set_to_none=True) + del wl, wo + torch.cuda.empty_cache() + compile_elapsed = time.perf_counter() - t_warmup + log(f"ttt_lora:compile warmup done ({compile_elapsed:.1f}s)") + log("\nbeginning TTT eval timer") + torch.cuda.synchronize() + t_ttt = time.perf_counter() + ttt_val_loss, ttt_val_bpb = eval_val_ttt_phased( + h, ttt_model, device, val_data, forward_ttt_train=fwd_ttt_compiled + ) + 
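+ # Synchronize so the TTT eval timer below captures all outstanding GPU work.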
torch.cuda.synchronize() + ttt_eval_elapsed = time.perf_counter() - t_ttt + log( + "quantized_ttt_phased " + f"val_loss:{ttt_val_loss:.8f} val_bpb:{ttt_val_bpb:.8f} " + f"eval_time:{1e3*ttt_eval_elapsed:.0f}ms" + ) + log(f"total_eval_time:{ttt_eval_elapsed:.1f}s") + del ttt_model + + +def main(): + world_size = int(os.environ.get("WORLD_SIZE", "1")) + local_rank = int(os.environ.get("LOCAL_RANK", "0")) + distributed = "RANK" in os.environ and "WORLD_SIZE" in os.environ + if not torch.cuda.is_available(): + raise RuntimeError("CUDA is required") + if world_size <= 0: + raise ValueError(f"WORLD_SIZE must be positive, got {world_size}") + if 8 % world_size != 0: + raise ValueError( + f"WORLD_SIZE={world_size} must divide 8 so grad_accum_steps stays integral" + ) + device = torch.device("cuda", local_rank) + torch.cuda.set_device(device) + if distributed: + dist.init_process_group(backend="nccl", device_id=device) + dist.barrier() + torch.backends.cuda.matmul.allow_tf32 = True + torch.backends.cudnn.allow_tf32 = True + torch.set_float32_matmul_precision("high") + from torch.backends.cuda import ( + enable_cudnn_sdp, + enable_flash_sdp, + enable_math_sdp, + enable_mem_efficient_sdp, + ) + + enable_cudnn_sdp(False) + enable_flash_sdp(True) + enable_mem_efficient_sdp(False) + enable_math_sdp(False) + torch._dynamo.config.optimize_ddp = False + torch._dynamo.config.cache_size_limit = 16 + h = Hyperparameters() + set_logging_hparams(h) + if h.is_main_process: + os.makedirs(h.artifact_dir if h.artifact_dir else "logs", exist_ok=True) + log(100 * "=", console=False) + log("Hyperparameters:", console=True) + for (k, v) in sorted(vars(type(h)).items()): + if not k.startswith("_"): + log(f" {k}: {v}", console=True) + log("=" * 100, console=False) + log("Source code:", console=False) + log("=" * 100, console=False) + with open(__file__, "r", encoding="utf-8") as _src: + log(_src.read(), console=False) + log("=" * 100, console=False) + log(f"Running Python {sys.version}", console=False) + log(f"Running PyTorch {torch.__version__}", console=False) + log("=" * 100, console=False) + train_and_eval(h, device) + if distributed: + dist.destroy_process_group() + + +if __name__ == "__main__": + main() diff --git a/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/train_seed0.log b/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/train_seed0.log new file mode 100644 index 0000000000..c685ebd21c --- /dev/null +++ b/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/train_seed0.log @@ -0,0 +1,489 @@ +W0430 02:23:17.516000 636951 torch/distributed/run.py:803] +W0430 02:23:17.516000 636951 torch/distributed/run.py:803] ***************************************** +W0430 02:23:17.516000 636951 torch/distributed/run.py:803] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
+W0430 02:23:17.516000 636951 torch/distributed/run.py:803] ***************************************** +Hyperparameters: + adam_eps: 1e-08 + adam_wd: 0.02 + artifact_dir: + attn_clip_sigmas: 13.0 + attn_out_gate: True + beta1: 0.9 + beta2: 0.95 + compressor: brotli + data_dir: ./data/ + datasets_dir: ./data/datasets/fineweb10B_sp8192 + distributed: True + ema_decay: 0.9965 + embed_bits: 6 + embed_clip_sigmas: 20.0 + embed_lr: 0.6 + embed_wd: 0.085 + enable_looping_at: 0.35 + eval_seq_len: 2048 + eval_stride: 64 + gate_width: 32 + global_ttt_batch_seqs: 32 + global_ttt_chunk_tokens: 32768 + global_ttt_epochs: 1 + global_ttt_grad_clip: 1.0 + global_ttt_lr: 0.001 + global_ttt_momentum: 0.9 + global_ttt_respect_doc_boundaries: True + global_ttt_warmup_chunks: 0 + global_ttt_warmup_start_lr: 0.0 + gptq_calibration_batches: 16 + gptq_reserve_seconds: 4.0 + grad_accum_steps: 1 + grad_clip_norm: 0.3 + is_main_process: True + iterations: 20000 + leaky_slope: 0.5 + ln_scale: True + local_rank: 0 + logfile: logs/seed_0.txt + logit_softcap: 30.0 + loop_end: 5 + loop_start: 3 + lqer_asym_enabled: True + lqer_asym_group: 64 + lqer_enabled: False + lqer_factor_bits: 4 + lqer_rank: 4 + lqer_top_k: 3 + matrix_bits: 6 + matrix_clip_sigmas: 12.85 + matrix_lr: 0.026 + max_wallclock_seconds: 600.0 + min_lr: 0.1 + mlp_clip_sigmas: 10.0 + mlp_mult: 4.0 + model_dim: 512 + model_path: final_model.pt + muon_backend_steps: 5 + muon_momentum: 0.97 + muon_momentum_warmup_start: 0.92 + muon_momentum_warmup_steps: 1500 + muon_row_normalize: True + muon_wd: 0.095 + num_heads: 8 + num_kv_heads: 4 + num_layers: 11 + num_loops: 2 + parallel_final_lane: mean + parallel_start_layer: 8 + phased_ttt_enabled: False + phased_ttt_num_phases: 1 + phased_ttt_prefix_docs: 2000 + qk_gain_init: 5.0 + quantized_model_path: final_model.int6.ptz + rank: 0 + rho1_enabled: False + rho1_top_k: 0.7 + rope_base: 10000.0 + rope_dims: 16 + rope_train_seq_len: 2048 + rope_yarn: False + run_id: seed_0 + scalar_lr: 0.02 + seed: 0 + sin_squared_activation: False + skip_gates_enabled: True + sliding_window_enabled: False + smear_gate: True + tie_embeddings: True + tied_embed_init_std: 0.005 + tied_embed_lr: 0.03 + tokenizer_path: ./data/tokenizers/fineweb_8192_bpe.model + train_batch_tokens: 786432 + train_files: ./data/datasets/fineweb10B_sp8192/fineweb_train_*.bin + train_log_every: 500 + train_seq_len: 2048 + ttt_batch_size: 64 + ttt_beta1: 0.0 + ttt_beta2: 0.999 + ttt_chunk_size: 48 + ttt_enabled: True + ttt_eval_batches: + ttt_eval_seq_len: 2048 + ttt_grad_steps: 1 + ttt_k_lora: True + ttt_lora_lr: 0.0001 + ttt_lora_rank: 96 + ttt_mlp_lora: True + ttt_o_lora: True + ttt_optimizer: adam + ttt_weight_decay: 0.5 + val_batch_tokens: 524288 + val_doc_fraction: 1.0 + val_files: ./data/datasets/fineweb10B_sp8192/fineweb_val_*.bin + val_loss_every: 4000 + vocab_size: 8192 + warmdown_frac: 0.75 + warmup_steps: 20 + world_size: 8 + xsa_last_n: 11 +train_shards: 80 +val_tokens: 40540160 +model_params:35947451 +gptq:reserving 4s, effective=596000ms +warmup_cu_buckets:64,128,192,256 iters_each:3 +warmup_step: 1/20 +warmup_step: 2/20 +warmup_step: 3/20 +warmup_step: 4/20 +warmup_step: 5/20 +warmup_step: 6/20 +warmup_step: 10/20 +warmup_step: 20/20 +loop_warmup:enabled encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +loop_warmup_step: 1/20 +loop_warmup_step: 2/20 +loop_warmup_step: 3/20 +loop_warmup_step: 4/20 +loop_warmup_step: 5/20 +loop_warmup_step: 6/20 +loop_warmup_step: 10/20 +loop_warmup_step: 20/20 +0/20000 val_loss: 9.0086 
val_bpb: 3.4874 +1/20000 train_loss: 9.0089 train_time: 0.0m tok/s: 12288117 +2/20000 train_loss: 12.2576 train_time: 0.0m tok/s: 10899463 +3/20000 train_loss: 11.2708 train_time: 0.0m tok/s: 9643376 +4/20000 train_loss: 9.6466 train_time: 0.0m tok/s: 9084930 +5/20000 train_loss: 8.1799 train_time: 0.0m tok/s: 8768774 +500/20000 train_loss: 3.2519 train_time: 0.8m tok/s: 8013610 +1000/20000 train_loss: 3.0261 train_time: 1.6m tok/s: 7969308 +1500/20000 train_loss: 3.0310 train_time: 2.5m tok/s: 7954781 +2000/20000 train_loss: 2.9831 train_time: 3.3m tok/s: 7947971 +layer_loop:enabled step:2107 frac:0.350 encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +2500/20000 train_loss: 3.0635 train_time: 4.4m tok/s: 7414396 +3000/20000 train_loss: 2.9046 train_time: 5.7m tok/s: 6922846 +3500/20000 train_loss: 2.9637 train_time: 6.9m tok/s: 6645132 +4000/20000 train_loss: 2.8948 train_time: 8.1m tok/s: 6441989 +4000/20000 val_loss: 2.8668 val_bpb: 1.1098 +4500/20000 train_loss: 2.8378 train_time: 9.3m tok/s: 6318418 +4750/20000 val_loss: 2.7944 val_bpb: 1.0818 +stopping_early: wallclock_cap train_time: 596081ms step: 4750/20000 +peak memory allocated: 40445 MiB reserved: 44498 MiB +ema:applying EMA weights +diagnostic pre-quantization post-ema val_loss:2.76909407 val_bpb:1.07196806 eval_time:9717ms +Serialized model: 135424701 bytes +Code size (uncompressed): 130789 bytes +Code size (compressed): 29430 bytes +GPTQ:collecting Hessians from calibration data... +GPTQ:collected 67 Hessians in 3.5s +Quantized weights: + gptq (int6): blocks.attn.c_k.weight, blocks.attn.c_q.weight, blocks.attn.c_v.weight, blocks.attn.proj.weight, blocks.mlp.fc.weight, blocks.mlp.proj.weight, tok_emb.weight + passthrough (float16): blocks.attn.attn_out_gate_w, blocks.attn.q_gain, blocks.attn_scale, blocks.mlp_scale, blocks.resid_mix, parallel_post_lambdas, parallel_resid_lambdas, skip_gates, skip_weights, smear_lambda, smear_w +Serialized model quantized+brotli: 15860701 bytes +Total submission size quantized+brotli: 15890131 bytes +diagnostic quantized val_loss:2.82650574 val_bpb:1.09419319 eval_time:14630ms +ttt_lora:warming up compile (random tokens, no val data) +ttt_lora:compile warmup done (108.8s) + +beginning TTT eval timer +ttt_phased: total_docs:50000 prefix_docs:2000 suffix_docs:48000 num_phases:1 boundaries:[2000] +ttp: b780/782 bl:2.6451 bb:1.0853 rl:2.6451 rb:1.0853 dl:11071-14414 gd:0 +ttp: b766/782 bl:2.5854 bb:1.0122 rl:2.6310 rb:1.0673 dl:3846-3962 gd:0 +ttp: b760/782 bl:2.8681 bb:1.1264 rl:2.6706 rb:1.0775 dl:3255-3334 gd:0 +ttp: b756/782 bl:2.8071 bb:1.0880 rl:2.6886 rb:1.0789 dl:2973-3032 gd:0 +ttp: b750/782 bl:2.8500 bb:1.0752 rl:2.7056 rb:1.0785 dl:2638-2688 gd:0 +ttp: b746/782 bl:2.7002 bb:1.0632 rl:2.7051 rb:1.0771 dl:2459-2501 gd:0 +ttpp: phase:1/1 pd:2448 gd:2000 t:229.4s +tttg: c1/213 lr:0.001000 t:0.4s +tttg: c2/213 lr:0.001000 t:0.5s +tttg: c3/213 lr:0.001000 t:0.7s +tttg: c4/213 lr:0.001000 t:0.8s +tttg: c5/213 lr:0.000999 t:0.9s +tttg: c6/213 lr:0.000999 t:1.0s +tttg: c7/213 lr:0.000998 t:1.2s +tttg: c8/213 lr:0.000997 t:1.3s +tttg: c9/213 lr:0.000996 t:1.4s +tttg: c10/213 lr:0.000996 t:1.5s +tttg: c11/213 lr:0.000995 t:1.7s +tttg: c12/213 lr:0.000993 t:1.8s +tttg: c13/213 lr:0.000992 t:2.0s +tttg: c14/213 lr:0.000991 t:2.1s +tttg: c15/213 lr:0.000989 t:2.2s +tttg: c16/213 lr:0.000988 t:2.3s +tttg: c17/213 lr:0.000986 t:4.1s +tttg: c18/213 lr:0.000984 t:4.2s +tttg: c19/213 lr:0.000982 t:4.3s +tttg: c20/213 lr:0.000980 t:4.4s +tttg: c21/213 lr:0.000978 t:4.6s +tttg: c22/213 
lr:0.000976 t:4.7s +tttg: c23/213 lr:0.000974 t:4.8s +tttg: c24/213 lr:0.000971 t:4.9s +tttg: c25/213 lr:0.000969 t:5.1s +tttg: c26/213 lr:0.000966 t:5.2s +tttg: c27/213 lr:0.000963 t:5.3s +tttg: c28/213 lr:0.000961 t:5.4s +tttg: c29/213 lr:0.000958 t:5.5s +tttg: c30/213 lr:0.000955 t:5.7s +tttg: c31/213 lr:0.000951 t:5.8s +tttg: c32/213 lr:0.000948 t:5.9s +tttg: c33/213 lr:0.000945 t:6.0s +tttg: c34/213 lr:0.000941 t:6.1s +tttg: c35/213 lr:0.000938 t:6.3s +tttg: c36/213 lr:0.000934 t:6.4s +tttg: c37/213 lr:0.000931 t:6.5s +tttg: c38/213 lr:0.000927 t:6.6s +tttg: c39/213 lr:0.000923 t:6.7s +tttg: c40/213 lr:0.000919 t:6.9s +tttg: c41/213 lr:0.000915 t:7.0s +tttg: c42/213 lr:0.000911 t:7.1s +tttg: c43/213 lr:0.000906 t:7.2s +tttg: c44/213 lr:0.000902 t:7.3s +tttg: c45/213 lr:0.000897 t:7.5s +tttg: c46/213 lr:0.000893 t:7.6s +tttg: c47/213 lr:0.000888 t:7.7s +tttg: c48/213 lr:0.000884 t:7.8s +tttg: c49/213 lr:0.000879 t:8.0s +tttg: c50/213 lr:0.000874 t:8.1s +tttg: c51/213 lr:0.000869 t:8.2s +tttg: c52/213 lr:0.000864 t:8.3s +tttg: c53/213 lr:0.000859 t:8.4s +tttg: c54/213 lr:0.000854 t:8.6s +tttg: c55/213 lr:0.000848 t:8.7s +tttg: c56/213 lr:0.000843 t:8.8s +tttg: c57/213 lr:0.000837 t:8.9s +tttg: c58/213 lr:0.000832 t:9.0s +tttg: c59/213 lr:0.000826 t:9.2s +tttg: c60/213 lr:0.000821 t:9.3s +tttg: c61/213 lr:0.000815 t:9.4s +tttg: c62/213 lr:0.000809 t:9.5s +tttg: c63/213 lr:0.000803 t:9.6s +tttg: c64/213 lr:0.000797 t:9.8s +tttg: c65/213 lr:0.000791 t:9.9s +tttg: c66/213 lr:0.000785 t:10.0s +tttg: c67/213 lr:0.000779 t:10.1s +tttg: c68/213 lr:0.000773 t:10.2s +tttg: c69/213 lr:0.000767 t:10.3s +tttg: c70/213 lr:0.000761 t:10.4s +tttg: c71/213 lr:0.000754 t:10.5s +tttg: c72/213 lr:0.000748 t:10.6s +tttg: c73/213 lr:0.000741 t:10.7s +tttg: c74/213 lr:0.000735 t:10.8s +tttg: c75/213 lr:0.000728 t:10.9s +tttg: c76/213 lr:0.000722 t:11.0s +tttg: c77/213 lr:0.000715 t:11.1s +tttg: c78/213 lr:0.000708 t:11.3s +tttg: c79/213 lr:0.000702 t:11.4s +tttg: c80/213 lr:0.000695 t:11.5s +tttg: c81/213 lr:0.000688 t:11.6s +tttg: c82/213 lr:0.000681 t:11.7s +tttg: c83/213 lr:0.000674 t:11.8s +tttg: c84/213 lr:0.000667 t:11.9s +tttg: c85/213 lr:0.000660 t:12.0s +tttg: c86/213 lr:0.000653 t:12.1s +tttg: c87/213 lr:0.000646 t:12.2s +tttg: c88/213 lr:0.000639 t:12.3s +tttg: c89/213 lr:0.000632 t:12.4s +tttg: c90/213 lr:0.000625 t:12.6s +tttg: c91/213 lr:0.000617 t:12.7s +tttg: c92/213 lr:0.000610 t:12.8s +tttg: c93/213 lr:0.000603 t:12.9s +tttg: c94/213 lr:0.000596 t:13.0s +tttg: c95/213 lr:0.000588 t:13.1s +tttg: c96/213 lr:0.000581 t:13.2s +tttg: c97/213 lr:0.000574 t:13.3s +tttg: c98/213 lr:0.000566 t:13.4s +tttg: c99/213 lr:0.000559 t:13.5s +tttg: c100/213 lr:0.000552 t:13.6s +tttg: c101/213 lr:0.000544 t:13.7s +tttg: c102/213 lr:0.000537 t:13.8s +tttg: c103/213 lr:0.000530 t:14.0s +tttg: c104/213 lr:0.000522 t:14.1s +tttg: c105/213 lr:0.000515 t:14.2s +tttg: c106/213 lr:0.000507 t:14.3s +tttg: c107/213 lr:0.000500 t:14.4s +tttg: c108/213 lr:0.000493 t:14.5s +tttg: c109/213 lr:0.000485 t:14.6s +tttg: c110/213 lr:0.000478 t:14.7s +tttg: c111/213 lr:0.000470 t:14.8s +tttg: c112/213 lr:0.000463 t:14.9s +tttg: c113/213 lr:0.000456 t:15.0s +tttg: c114/213 lr:0.000448 t:15.1s +tttg: c115/213 lr:0.000441 t:15.2s +tttg: c116/213 lr:0.000434 t:15.3s +tttg: c117/213 lr:0.000426 t:15.5s +tttg: c118/213 lr:0.000419 t:15.6s +tttg: c119/213 lr:0.000412 t:15.7s +tttg: c120/213 lr:0.000404 t:15.8s +tttg: c121/213 lr:0.000397 t:15.9s +tttg: c122/213 lr:0.000390 t:16.0s +tttg: c123/213 lr:0.000383 t:16.1s +tttg: c124/213 
lr:0.000375 t:16.2s +tttg: c125/213 lr:0.000368 t:16.3s +tttg: c126/213 lr:0.000361 t:16.4s +tttg: c127/213 lr:0.000354 t:16.5s +tttg: c128/213 lr:0.000347 t:16.6s +tttg: c129/213 lr:0.000340 t:16.7s +tttg: c130/213 lr:0.000333 t:16.8s +tttg: c131/213 lr:0.000326 t:16.9s +tttg: c132/213 lr:0.000319 t:17.0s +tttg: c133/213 lr:0.000312 t:17.2s +tttg: c134/213 lr:0.000305 t:17.3s +tttg: c135/213 lr:0.000298 t:17.4s +tttg: c136/213 lr:0.000292 t:17.5s +tttg: c137/213 lr:0.000285 t:17.6s +tttg: c138/213 lr:0.000278 t:17.7s +tttg: c139/213 lr:0.000272 t:17.8s +tttg: c140/213 lr:0.000265 t:17.9s +tttg: c141/213 lr:0.000259 t:18.0s +tttg: c142/213 lr:0.000252 t:18.1s +tttg: c143/213 lr:0.000246 t:18.2s +tttg: c144/213 lr:0.000239 t:18.3s +tttg: c145/213 lr:0.000233 t:18.4s +tttg: c146/213 lr:0.000227 t:18.5s +tttg: c147/213 lr:0.000221 t:18.6s +tttg: c148/213 lr:0.000215 t:18.8s +tttg: c149/213 lr:0.000209 t:18.9s +tttg: c150/213 lr:0.000203 t:19.0s +tttg: c151/213 lr:0.000197 t:19.1s +tttg: c152/213 lr:0.000191 t:19.2s +tttg: c153/213 lr:0.000185 t:19.3s +tttg: c154/213 lr:0.000179 t:19.4s +tttg: c155/213 lr:0.000174 t:19.5s +tttg: c156/213 lr:0.000168 t:19.6s +tttg: c157/213 lr:0.000163 t:19.7s +tttg: c158/213 lr:0.000157 t:19.8s +tttg: c159/213 lr:0.000152 t:19.9s +tttg: c160/213 lr:0.000146 t:20.0s +tttg: c161/213 lr:0.000141 t:20.1s +tttg: c162/213 lr:0.000136 t:20.2s +tttg: c163/213 lr:0.000131 t:20.4s +tttg: c164/213 lr:0.000126 t:20.5s +tttg: c165/213 lr:0.000121 t:20.6s +tttg: c166/213 lr:0.000116 t:20.7s +tttg: c167/213 lr:0.000112 t:20.8s +tttg: c168/213 lr:0.000107 t:20.9s +tttg: c169/213 lr:0.000103 t:21.0s +tttg: c170/213 lr:0.000098 t:21.1s +tttg: c171/213 lr:0.000094 t:21.2s +tttg: c172/213 lr:0.000089 t:21.3s +tttg: c173/213 lr:0.000085 t:21.4s +tttg: c174/213 lr:0.000081 t:21.5s +tttg: c175/213 lr:0.000077 t:21.6s +tttg: c176/213 lr:0.000073 t:21.7s +tttg: c177/213 lr:0.000069 t:21.8s +tttg: c178/213 lr:0.000066 t:21.9s +tttg: c179/213 lr:0.000062 t:22.1s +tttg: c180/213 lr:0.000059 t:22.2s +tttg: c181/213 lr:0.000055 t:22.3s +tttg: c182/213 lr:0.000052 t:22.4s +tttg: c183/213 lr:0.000049 t:22.5s +tttg: c184/213 lr:0.000045 t:22.6s +tttg: c185/213 lr:0.000042 t:22.7s +tttg: c186/213 lr:0.000039 t:22.8s +tttg: c187/213 lr:0.000037 t:22.9s +tttg: c188/213 lr:0.000034 t:23.0s +tttg: c189/213 lr:0.000031 t:23.1s +tttg: c190/213 lr:0.000029 t:23.2s +tttg: c191/213 lr:0.000026 t:23.3s +tttg: c192/213 lr:0.000024 t:23.4s +tttg: c193/213 lr:0.000022 t:23.5s +tttg: c194/213 lr:0.000020 t:23.7s +tttg: c195/213 lr:0.000018 t:23.8s +tttg: c196/213 lr:0.000016 t:23.9s +tttg: c197/213 lr:0.000014 t:24.0s +tttg: c198/213 lr:0.000012 t:24.1s +tttg: c199/213 lr:0.000011 t:24.2s +tttg: c200/213 lr:0.000009 t:24.3s +tttg: c201/213 lr:0.000008 t:24.4s +tttg: c202/213 lr:0.000007 t:24.5s +tttg: c203/213 lr:0.000005 t:24.6s +tttg: c204/213 lr:0.000004 t:24.7s +tttg: c205/213 lr:0.000004 t:24.8s +tttg: c206/213 lr:0.000003 t:24.9s +tttg: c207/213 lr:0.000002 t:25.0s +tttg: c208/213 lr:0.000001 t:25.1s +tttg: c209/213 lr:0.000001 t:25.3s +tttg: c210/213 lr:0.000000 t:25.4s +tttg: c211/213 lr:0.000000 t:25.5s +tttg: c212/213 lr:0.000000 t:25.6s +ttpr: phase:1/1 t:257.7s +ttp: b736/782 bl:2.6948 bb:1.0504 rl:2.7044 rb:1.0752 dl:2140-2165 gd:1 +ttp: b731/782 bl:2.7961 bb:1.0671 rl:2.7102 rb:1.0746 dl:2017-2041 gd:1 +ttp: b724/782 bl:2.7696 bb:1.0587 rl:2.7135 rb:1.0737 dl:1885-1900 gd:1 +ttp: b716/782 bl:2.8289 bb:1.0439 rl:2.7191 rb:1.0722 dl:1739-1754 gd:1 +ttp: b708/782 bl:2.7429 bb:1.0541 rl:2.7202 
rb:1.0713 dl:1639-1649 gd:1 +ttp: b700/782 bl:2.6944 bb:1.0516 rl:2.7192 rb:1.0705 dl:1552-1562 gd:1 +ttp: b692/782 bl:2.7874 bb:1.0576 rl:2.7217 rb:1.0700 dl:1477-1484 gd:1 +ttp: b677/782 bl:2.8829 bb:1.1175 rl:2.7269 rb:1.0716 dl:1353-1360 gd:1 +ttp: b669/782 bl:2.8067 bb:1.0643 rl:2.7293 rb:1.0714 dl:1301-1308 gd:1 +ttp: b661/782 bl:2.7462 bb:1.0296 rl:2.7298 rb:1.0701 dl:1251-1258 gd:1 +ttp: b651/782 bl:2.7407 bb:1.0526 rl:2.7301 rb:1.0697 dl:1193-1198 gd:1 +ttp: b644/782 bl:2.7529 bb:1.0387 rl:2.7306 rb:1.0689 dl:1155-1160 gd:1 +ttp: b632/782 bl:2.7547 bb:1.0347 rl:2.7312 rb:1.0681 dl:1096-1101 gd:1 +ttp: b624/782 bl:2.8154 bb:1.0833 rl:2.7330 rb:1.0684 dl:1060-1064 gd:1 +ttp: b615/782 bl:2.8584 bb:1.0732 rl:2.7356 rb:1.0685 dl:1020-1023 gd:1 +ttp: b607/782 bl:2.7107 bb:1.0447 rl:2.7351 rb:1.0680 dl:986-990 gd:1 +ttp: b600/782 bl:2.8098 bb:1.0666 rl:2.7365 rb:1.0680 dl:958-963 gd:1 +ttp: b591/782 bl:2.6897 bb:1.0163 rl:2.7357 rb:1.0671 dl:927-930 gd:1 +ttp: b583/782 bl:2.8219 bb:1.1007 rl:2.7371 rb:1.0676 dl:901-904 gd:1 +ttp: b575/782 bl:2.8165 bb:1.0605 rl:2.7384 rb:1.0675 dl:874-877 gd:1 +ttp: b567/782 bl:2.6995 bb:1.0397 rl:2.7378 rb:1.0671 dl:849-852 gd:1 +ttp: b559/782 bl:2.7723 bb:1.0536 rl:2.7383 rb:1.0669 dl:824-827 gd:1 +ttp: b551/782 bl:2.8492 bb:1.0739 rl:2.7399 rb:1.0670 dl:801-804 gd:1 +ttp: b543/782 bl:2.8064 bb:1.0535 rl:2.7408 rb:1.0668 dl:779-782 gd:1 +ttp: b534/782 bl:2.8339 bb:1.0780 rl:2.7420 rb:1.0669 dl:757-759 gd:1 +ttp: b526/782 bl:2.7874 bb:1.0644 rl:2.7425 rb:1.0669 dl:737-739 gd:1 +ttp: b518/782 bl:2.7514 bb:1.0597 rl:2.7427 rb:1.0668 dl:717-720 gd:1 +ttp: b510/782 bl:2.7738 bb:1.0259 rl:2.7430 rb:1.0663 dl:698-700 gd:1 +ttp: b499/782 bl:2.8049 bb:1.0585 rl:2.7437 rb:1.0662 dl:673-675 gd:1 +ttp: b489/782 bl:2.8175 bb:1.0894 rl:2.7445 rb:1.0665 dl:651-653 gd:1 +ttp: b482/782 bl:2.7742 bb:1.0887 rl:2.7448 rb:1.0667 dl:637-639 gd:1 +ttp: b474/782 bl:2.7816 bb:1.0606 rl:2.7451 rb:1.0666 dl:620-622 gd:1 +ttp: b466/782 bl:2.8237 bb:1.0735 rl:2.7459 rb:1.0667 dl:604-606 gd:1 +ttp: b458/782 bl:2.8363 bb:1.0751 rl:2.7467 rb:1.0668 dl:589-591 gd:1 +ttp: b449/782 bl:2.8179 bb:1.0607 rl:2.7473 rb:1.0667 dl:573-575 gd:1 +ttp: b441/782 bl:2.7303 bb:1.0510 rl:2.7472 rb:1.0666 dl:559-560 gd:1 +ttp: b432/782 bl:2.7816 bb:1.0583 rl:2.7475 rb:1.0665 dl:542-544 gd:1 +ttp: b424/782 bl:2.8219 bb:1.0906 rl:2.7481 rb:1.0667 dl:528-530 gd:1 +ttp: b412/782 bl:2.7368 bb:1.0629 rl:2.7480 rb:1.0667 dl:508-510 gd:1 +ttp: b404/782 bl:2.8107 bb:1.0786 rl:2.7484 rb:1.0668 dl:495-497 gd:1 +ttp: b396/782 bl:2.7858 bb:1.0660 rl:2.7487 rb:1.0668 dl:482-484 gd:1 +ttp: b388/782 bl:2.7960 bb:1.0728 rl:2.7490 rb:1.0668 dl:470-471 gd:1 +ttp: b378/782 bl:2.8394 bb:1.1047 rl:2.7496 rb:1.0671 dl:456-457 gd:1 +ttp: b369/782 bl:2.9482 bb:1.0944 rl:2.7509 rb:1.0672 dl:443-444 gd:1 +ttp: b359/782 bl:2.8201 bb:1.0899 rl:2.7513 rb:1.0674 dl:429-430 gd:1 +ttp: b350/782 bl:2.7522 bb:1.0675 rl:2.7513 rb:1.0674 dl:417-418 gd:1 +ttp: b342/782 bl:2.8909 bb:1.1122 rl:2.7521 rb:1.0676 dl:406-407 gd:1 +ttp: b334/782 bl:2.9002 bb:1.1160 rl:2.7529 rb:1.0679 dl:395-396 gd:1 +ttp: b325/782 bl:2.8692 bb:1.1021 rl:2.7536 rb:1.0681 dl:384-385 gd:1 +ttp: b316/782 bl:2.7972 bb:1.1000 rl:2.7538 rb:1.0683 dl:371-373 gd:1 +ttp: b305/782 bl:2.8800 bb:1.0926 rl:2.7544 rb:1.0684 dl:358-359 gd:1 +ttp: b296/782 bl:2.8358 bb:1.0967 rl:2.7548 rb:1.0685 dl:347-348 gd:1 +ttp: b288/782 bl:2.8374 bb:1.1140 rl:2.7552 rb:1.0687 dl:337-339 gd:1 +ttp: b278/782 bl:2.9182 bb:1.1507 rl:2.7559 rb:1.0691 dl:326-327 gd:1 +ttp: b268/782 bl:2.8780 
bb:1.1065 rl:2.7564 rb:1.0693 dl:315-316 gd:1 +ttp: b258/782 bl:2.9759 bb:1.1734 rl:2.7573 rb:1.0697 dl:304-305 gd:1 +ttp: b250/782 bl:2.8923 bb:1.1494 rl:2.7579 rb:1.0700 dl:295-296 gd:1 +ttp: b241/782 bl:2.9254 bb:1.1332 rl:2.7585 rb:1.0702 dl:286-287 gd:1 +ttp: b232/782 bl:2.9472 bb:1.1400 rl:2.7592 rb:1.0705 dl:277-278 gd:1 +ttp: b222/782 bl:2.9005 bb:1.1269 rl:2.7597 rb:1.0707 dl:267-268 gd:1 +ttp: b212/782 bl:2.9588 bb:1.1580 rl:2.7604 rb:1.0710 dl:257-258 gd:1 +ttp: b203/782 bl:2.8005 bb:1.1002 rl:2.7605 rb:1.0711 dl:249-250 gd:1 +ttp: b195/782 bl:2.8595 bb:1.1192 rl:2.7608 rb:1.0712 dl:242-243 gd:1 +ttp: b182/782 bl:2.8818 bb:1.1464 rl:2.7612 rb:1.0715 dl:230-231 gd:1 +ttp: b173/782 bl:2.9900 bb:1.1624 rl:2.7619 rb:1.0717 dl:223-224 gd:1 +ttp: b165/782 bl:2.9524 bb:1.1683 rl:2.7624 rb:1.0720 dl:216-217 gd:1 +ttp: b156/782 bl:2.9161 bb:1.1182 rl:2.7628 rb:1.0721 dl:208-209 gd:1 +ttp: b144/782 bl:2.8585 bb:1.1370 rl:2.7631 rb:1.0723 dl:199-200 gd:1 +ttp: b134/782 bl:3.0525 bb:1.2209 rl:2.7638 rb:1.0726 dl:190-191 gd:1 +ttp: b125/782 bl:3.0446 bb:1.2065 rl:2.7645 rb:1.0729 dl:184-185 gd:1 +ttp: b116/782 bl:3.0260 bb:1.1966 rl:2.7651 rb:1.0732 dl:177-178 gd:1 +ttp: b108/782 bl:2.9008 bb:1.1140 rl:2.7654 rb:1.0733 dl:171-172 gd:1 +ttp: b98/782 bl:3.0067 bb:1.1934 rl:2.7659 rb:1.0736 dl:164-164 gd:1 +ttp: b88/782 bl:3.1231 bb:1.2160 rl:2.7666 rb:1.0738 dl:156-157 gd:1 +ttp: b79/782 bl:3.0611 bb:1.2153 rl:2.7671 rb:1.0741 dl:149-150 gd:1 +ttp: b70/782 bl:3.0845 bb:1.1721 rl:2.7677 rb:1.0743 dl:142-143 gd:1 +ttp: b59/782 bl:3.0750 bb:1.2011 rl:2.7682 rb:1.0745 dl:134-134 gd:1 +ttp: b47/782 bl:2.9637 bb:1.1844 rl:2.7685 rb:1.0747 dl:124-125 gd:1 +ttp: b40/782 bl:3.0246 bb:1.2166 rl:2.7689 rb:1.0749 dl:119-119 gd:1 +ttp: b27/782 bl:3.1242 bb:1.2472 rl:2.7694 rb:1.0751 dl:107-108 gd:1 +ttp: b18/782 bl:3.1673 bb:1.2830 rl:2.7699 rb:1.0754 dl:99-100 gd:1 +ttp: b7/782 bl:3.2338 bb:1.2408 rl:2.7704 rb:1.0755 dl:84-86 gd:1 +quantized_ttt_phased val_loss:2.79482505 val_bpb:1.08196387 eval_time:391931ms +total_eval_time:391.9s diff --git a/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/train_seed1234.log b/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/train_seed1234.log new file mode 100644 index 0000000000..45f1362e22 --- /dev/null +++ b/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/train_seed1234.log @@ -0,0 +1,503 @@ +W0430 02:41:55.673000 2719175 torch/distributed/run.py:803] +W0430 02:41:55.673000 2719175 torch/distributed/run.py:803] ***************************************** +W0430 02:41:55.673000 2719175 torch/distributed/run.py:803] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
+W0430 02:41:55.673000 2719175 torch/distributed/run.py:803] ***************************************** +Hyperparameters: + adam_eps: 1e-08 + adam_wd: 0.02 + artifact_dir: + attn_clip_sigmas: 13.0 + attn_out_gate: True + beta1: 0.9 + beta2: 0.95 + compressor: brotli + data_dir: ./data/ + datasets_dir: ./data/datasets/fineweb10B_sp8192 + distributed: True + ema_decay: 0.9965 + embed_bits: 6 + embed_clip_sigmas: 20.0 + embed_lr: 0.6 + embed_wd: 0.085 + enable_looping_at: 0.35 + eval_seq_len: 2048 + eval_stride: 64 + gate_width: 32 + global_ttt_batch_seqs: 32 + global_ttt_chunk_tokens: 32768 + global_ttt_epochs: 1 + global_ttt_grad_clip: 1.0 + global_ttt_lr: 0.001 + global_ttt_momentum: 0.9 + global_ttt_respect_doc_boundaries: True + global_ttt_warmup_chunks: 0 + global_ttt_warmup_start_lr: 0.0 + gptq_calibration_batches: 16 + gptq_reserve_seconds: 4.0 + grad_accum_steps: 1 + grad_clip_norm: 0.3 + is_main_process: True + iterations: 20000 + leaky_slope: 0.5 + ln_scale: True + local_rank: 0 + logfile: logs/seed_1234.txt + logit_softcap: 30.0 + loop_end: 5 + loop_start: 3 + lqer_asym_enabled: True + lqer_asym_group: 64 + lqer_enabled: False + lqer_factor_bits: 4 + lqer_rank: 4 + lqer_top_k: 3 + matrix_bits: 6 + matrix_clip_sigmas: 12.85 + matrix_lr: 0.026 + max_wallclock_seconds: 600.0 + min_lr: 0.1 + mlp_clip_sigmas: 10.0 + mlp_mult: 4.0 + model_dim: 512 + model_path: final_model.pt + muon_backend_steps: 5 + muon_momentum: 0.97 + muon_momentum_warmup_start: 0.92 + muon_momentum_warmup_steps: 1500 + muon_row_normalize: True + muon_wd: 0.095 + num_heads: 8 + num_kv_heads: 4 + num_layers: 11 + num_loops: 2 + parallel_final_lane: mean + parallel_start_layer: 8 + phased_ttt_enabled: False + phased_ttt_num_phases: 1 + phased_ttt_prefix_docs: 2000 + qk_gain_init: 5.0 + quantized_model_path: final_model.int6.ptz + rank: 0 + rho1_enabled: False + rho1_top_k: 0.7 + rope_base: 10000.0 + rope_dims: 16 + rope_train_seq_len: 2048 + rope_yarn: False + run_id: seed_1234 + scalar_lr: 0.02 + seed: 1234 + sin_squared_activation: False + skip_gates_enabled: True + sliding_window_enabled: False + smear_gate: True + tie_embeddings: True + tied_embed_init_std: 0.005 + tied_embed_lr: 0.03 + tokenizer_path: ./data/tokenizers/fineweb_8192_bpe.model + train_batch_tokens: 786432 + train_files: ./data/datasets/fineweb10B_sp8192/fineweb_train_*.bin + train_log_every: 500 + train_seq_len: 2048 + ttt_batch_size: 64 + ttt_beta1: 0.0 + ttt_beta2: 0.999 + ttt_chunk_size: 48 + ttt_enabled: True + ttt_eval_batches: + ttt_eval_seq_len: 2048 + ttt_grad_steps: 1 + ttt_k_lora: True + ttt_lora_lr: 0.0001 + ttt_lora_rank: 96 + ttt_mlp_lora: True + ttt_o_lora: True + ttt_optimizer: adam + ttt_weight_decay: 0.5 + val_batch_tokens: 524288 + val_doc_fraction: 1.0 + val_files: ./data/datasets/fineweb10B_sp8192/fineweb_val_*.bin + val_loss_every: 4000 + vocab_size: 8192 + warmdown_frac: 0.75 + warmup_steps: 20 + world_size: 8 + xsa_last_n: 11 +train_shards: 80 +val_tokens: 40546304 +model_params:35947451 +gptq:reserving 4s, effective=596000ms +warmup_cu_buckets:64,128,192,256 iters_each:3 +warmup_step: 1/20 +warmup_step: 2/20 +warmup_step: 3/20 +warmup_step: 4/20 +warmup_step: 5/20 +warmup_step: 6/20 +warmup_step: 10/20 +warmup_step: 20/20 +loop_warmup:enabled encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +loop_warmup_step: 1/20 +loop_warmup_step: 2/20 +loop_warmup_step: 3/20 +loop_warmup_step: 4/20 +loop_warmup_step: 5/20 +loop_warmup_step: 6/20 +loop_warmup_step: 10/20 +loop_warmup_step: 20/20 +0/20000 val_loss: 
9.0074 val_bpb: 3.4875 +1/20000 train_loss: 9.0076 train_time: 0.0m tok/s: 12533857 +2/20000 train_loss: 12.2123 train_time: 0.0m tok/s: 11605147 +3/20000 train_loss: 11.2370 train_time: 0.0m tok/s: 10354317 +4/20000 train_loss: 9.6263 train_time: 0.0m tok/s: 9798033 +5/20000 train_loss: 8.1133 train_time: 0.0m tok/s: 9519846 +500/20000 train_loss: 3.2982 train_time: 0.8m tok/s: 8228606 +1000/20000 train_loss: 3.1126 train_time: 1.6m tok/s: 8188883 +1500/20000 train_loss: 3.1027 train_time: 2.4m tok/s: 8188690 +2000/20000 train_loss: 3.0514 train_time: 3.2m tok/s: 8185033 +layer_loop:enabled step:2169 frac:0.350 encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10] +2500/20000 train_loss: 3.0552 train_time: 4.2m tok/s: 7713131 +3000/20000 train_loss: 2.8495 train_time: 5.4m tok/s: 7259914 +3500/20000 train_loss: 2.8931 train_time: 6.7m tok/s: 6891278 +4000/20000 train_loss: 2.9547 train_time: 7.8m tok/s: 6700482 +4000/20000 val_loss: 2.8765 val_bpb: 1.1137 +4500/20000 train_loss: 2.7800 train_time: 9.0m tok/s: 6560684 +4905/20000 val_loss: 2.7903 val_bpb: 1.0804 +stopping_early: wallclock_cap train_time: 596082ms step: 4905/20000 +peak memory allocated: 40445 MiB reserved: 44498 MiB +ema:applying EMA weights +diagnostic pre-quantization post-ema val_loss:2.76310153 val_bpb:1.06982350 eval_time:6949ms +Serialized model: 135424701 bytes +Code size (uncompressed): 130789 bytes +Code size (compressed): 29430 bytes +GPTQ:collecting Hessians from calibration data... +GPTQ:collected 67 Hessians in 3.5s +Quantized weights: + gptq (int6): blocks.attn.c_k.weight, blocks.attn.c_q.weight, blocks.attn.c_v.weight, blocks.attn.proj.weight, blocks.mlp.fc.weight, blocks.mlp.proj.weight, tok_emb.weight + passthrough (float16): blocks.attn.attn_out_gate_w, blocks.attn.q_gain, blocks.attn_scale, blocks.mlp_scale, blocks.resid_mix, parallel_post_lambdas, parallel_resid_lambdas, skip_gates, skip_weights, smear_lambda, smear_w +Serialized model quantized+brotli: 15859086 bytes +Total submission size quantized+brotli: 15888516 bytes +diagnostic quantized val_loss:2.81922578 val_bpb:1.09155380 eval_time:10690ms +ttt_lora:warming up compile (random tokens, no val data) +ttt_lora:compile warmup done (103.3s) + +beginning TTT eval timer +ttt_phased: total_docs:49999 prefix_docs:2000 suffix_docs:47999 num_phases:1 boundaries:[2000] +ttp: b778/782 bl:2.8197 bb:1.1298 rl:2.8197 rb:1.1298 dl:7975-9043 gd:0 +ttp: b772/782 bl:2.7633 bb:1.1104 rl:2.7986 rb:1.1225 dl:4954-5213 gd:0 +ttp: b768/782 bl:2.7158 bb:1.0819 rl:2.7790 rb:1.1129 dl:4128-4316 gd:0 +ttp: b763/782 bl:2.7903 bb:1.0956 rl:2.7809 rb:1.1099 dl:3540-3634 gd:0 +ttp: b757/782 bl:2.6581 bb:1.0210 rl:2.7654 rb:1.0983 dl:3038-3111 gd:0 +ttp: b751/782 bl:2.8010 bb:1.0887 rl:2.7690 rb:1.0974 dl:2694-2741 gd:0 +ttp: b745/782 bl:2.8045 bb:1.0878 rl:2.7719 rb:1.0965 dl:2421-2457 gd:0 +ttpp: phase:1/1 pd:2447 gd:2000 t:183.5s +tttg: c1/214 lr:0.001000 t:0.3s +tttg: c2/214 lr:0.001000 t:0.4s +tttg: c3/214 lr:0.001000 t:0.5s +tttg: c4/214 lr:0.001000 t:0.6s +tttg: c5/214 lr:0.000999 t:0.7s +tttg: c6/214 lr:0.000999 t:0.7s +tttg: c7/214 lr:0.000998 t:0.8s +tttg: c8/214 lr:0.000997 t:0.9s +tttg: c9/214 lr:0.000997 t:1.0s +tttg: c10/214 lr:0.000996 t:1.0s +tttg: c11/214 lr:0.000995 t:1.1s +tttg: c12/214 lr:0.000993 t:1.2s +tttg: c13/214 lr:0.000992 t:1.3s +tttg: c14/214 lr:0.000991 t:1.4s +tttg: c15/214 lr:0.000989 t:1.4s +tttg: c16/214 lr:0.000988 t:1.5s +tttg: c17/214 lr:0.000986 t:1.6s +tttg: c18/214 lr:0.000984 t:1.7s +tttg: c19/214 lr:0.000982 t:1.7s 
+tttg: c20/214 lr:0.000980 t:1.8s +tttg: c21/214 lr:0.000978 t:1.9s +tttg: c22/214 lr:0.000976 t:2.0s +tttg: c23/214 lr:0.000974 t:2.1s +tttg: c24/214 lr:0.000972 t:2.1s +tttg: c25/214 lr:0.000969 t:2.2s +tttg: c26/214 lr:0.000966 t:2.3s +tttg: c27/214 lr:0.000964 t:2.4s +tttg: c28/214 lr:0.000961 t:2.4s +tttg: c29/214 lr:0.000958 t:2.5s +tttg: c30/214 lr:0.000955 t:2.6s +tttg: c31/214 lr:0.000952 t:2.7s +tttg: c32/214 lr:0.000949 t:2.8s +tttg: c33/214 lr:0.000945 t:2.8s +tttg: c34/214 lr:0.000942 t:2.9s +tttg: c35/214 lr:0.000938 t:3.0s +tttg: c36/214 lr:0.000935 t:3.1s +tttg: c37/214 lr:0.000931 t:3.2s +tttg: c38/214 lr:0.000927 t:3.2s +tttg: c39/214 lr:0.000924 t:3.3s +tttg: c40/214 lr:0.000920 t:3.4s +tttg: c41/214 lr:0.000915 t:3.5s +tttg: c42/214 lr:0.000911 t:3.5s +tttg: c43/214 lr:0.000907 t:3.6s +tttg: c44/214 lr:0.000903 t:3.7s +tttg: c45/214 lr:0.000898 t:3.8s +tttg: c46/214 lr:0.000894 t:3.9s +tttg: c47/214 lr:0.000889 t:3.9s +tttg: c48/214 lr:0.000885 t:4.0s +tttg: c49/214 lr:0.000880 t:4.1s +tttg: c50/214 lr:0.000875 t:4.2s +tttg: c51/214 lr:0.000870 t:4.3s +tttg: c52/214 lr:0.000865 t:4.3s +tttg: c53/214 lr:0.000860 t:4.4s +tttg: c54/214 lr:0.000855 t:4.5s +tttg: c55/214 lr:0.000850 t:4.6s +tttg: c56/214 lr:0.000844 t:4.6s +tttg: c57/214 lr:0.000839 t:4.7s +tttg: c58/214 lr:0.000833 t:4.8s +tttg: c59/214 lr:0.000828 t:4.9s +tttg: c60/214 lr:0.000822 t:4.9s +tttg: c61/214 lr:0.000817 t:5.0s +tttg: c62/214 lr:0.000811 t:5.1s +tttg: c63/214 lr:0.000805 t:5.2s +tttg: c64/214 lr:0.000799 t:5.3s +tttg: c65/214 lr:0.000793 t:5.3s +tttg: c66/214 lr:0.000787 t:5.4s +tttg: c67/214 lr:0.000781 t:5.5s +tttg: c68/214 lr:0.000775 t:5.6s +tttg: c69/214 lr:0.000769 t:5.7s +tttg: c70/214 lr:0.000763 t:5.7s +tttg: c71/214 lr:0.000756 t:5.8s +tttg: c72/214 lr:0.000750 t:5.9s +tttg: c73/214 lr:0.000744 t:6.0s +tttg: c74/214 lr:0.000737 t:6.0s +tttg: c75/214 lr:0.000731 t:6.1s +tttg: c76/214 lr:0.000724 t:6.2s +tttg: c77/214 lr:0.000717 t:6.3s +tttg: c78/214 lr:0.000711 t:6.4s +tttg: c79/214 lr:0.000704 t:6.4s +tttg: c80/214 lr:0.000697 t:6.5s +tttg: c81/214 lr:0.000690 t:6.6s +tttg: c82/214 lr:0.000684 t:6.7s +tttg: c83/214 lr:0.000677 t:8.2s +tttg: c84/214 lr:0.000670 t:8.3s +tttg: c85/214 lr:0.000663 t:8.4s +tttg: c86/214 lr:0.000656 t:8.4s +tttg: c87/214 lr:0.000649 t:8.5s +tttg: c88/214 lr:0.000642 t:8.6s +tttg: c89/214 lr:0.000635 t:8.7s +tttg: c90/214 lr:0.000628 t:8.8s +tttg: c91/214 lr:0.000620 t:8.8s +tttg: c92/214 lr:0.000613 t:8.9s +tttg: c93/214 lr:0.000606 t:9.0s +tttg: c94/214 lr:0.000599 t:9.1s +tttg: c95/214 lr:0.000592 t:9.1s +tttg: c96/214 lr:0.000584 t:9.2s +tttg: c97/214 lr:0.000577 t:9.3s +tttg: c98/214 lr:0.000570 t:9.4s +tttg: c99/214 lr:0.000563 t:9.5s +tttg: c100/214 lr:0.000555 t:9.5s +tttg: c101/214 lr:0.000548 t:9.6s +tttg: c102/214 lr:0.000541 t:9.7s +tttg: c103/214 lr:0.000533 t:9.8s +tttg: c104/214 lr:0.000526 t:9.9s +tttg: c105/214 lr:0.000518 t:9.9s +tttg: c106/214 lr:0.000511 t:10.0s +tttg: c107/214 lr:0.000504 t:10.1s +tttg: c108/214 lr:0.000496 t:10.2s +tttg: c109/214 lr:0.000489 t:10.3s +tttg: c110/214 lr:0.000482 t:10.3s +tttg: c111/214 lr:0.000474 t:10.4s +tttg: c112/214 lr:0.000467 t:10.5s +tttg: c113/214 lr:0.000459 t:10.6s +tttg: c114/214 lr:0.000452 t:10.7s +tttg: c115/214 lr:0.000445 t:10.7s +tttg: c116/214 lr:0.000437 t:10.8s +tttg: c117/214 lr:0.000430 t:10.9s +tttg: c118/214 lr:0.000423 t:11.0s +tttg: c119/214 lr:0.000416 t:11.0s +tttg: c120/214 lr:0.000408 t:11.1s +tttg: c121/214 lr:0.000401 t:11.2s +tttg: c122/214 lr:0.000394 t:11.3s +tttg: 
c123/214 lr:0.000387 t:11.4s +tttg: c124/214 lr:0.000380 t:11.5s +tttg: c125/214 lr:0.000372 t:11.5s +tttg: c126/214 lr:0.000365 t:11.6s +tttg: c127/214 lr:0.000358 t:11.7s +tttg: c128/214 lr:0.000351 t:11.8s +tttg: c129/214 lr:0.000344 t:11.8s +tttg: c130/214 lr:0.000337 t:11.9s +tttg: c131/214 lr:0.000330 t:12.0s +tttg: c132/214 lr:0.000323 t:12.1s +tttg: c133/214 lr:0.000316 t:12.2s +tttg: c134/214 lr:0.000310 t:12.2s +tttg: c135/214 lr:0.000303 t:12.3s +tttg: c136/214 lr:0.000296 t:12.4s +tttg: c137/214 lr:0.000289 t:12.5s +tttg: c138/214 lr:0.000283 t:12.6s +tttg: c139/214 lr:0.000276 t:12.6s +tttg: c140/214 lr:0.000269 t:12.7s +tttg: c141/214 lr:0.000263 t:12.8s +tttg: c142/214 lr:0.000256 t:12.9s +tttg: c143/214 lr:0.000250 t:13.0s +tttg: c144/214 lr:0.000244 t:13.1s +tttg: c145/214 lr:0.000237 t:13.1s +tttg: c146/214 lr:0.000231 t:13.2s +tttg: c147/214 lr:0.000225 t:13.3s +tttg: c148/214 lr:0.000219 t:13.4s +tttg: c149/214 lr:0.000213 t:13.4s +tttg: c150/214 lr:0.000207 t:13.5s +tttg: c151/214 lr:0.000201 t:13.6s +tttg: c152/214 lr:0.000195 t:13.7s +tttg: c153/214 lr:0.000189 t:13.8s +tttg: c154/214 lr:0.000183 t:13.8s +tttg: c155/214 lr:0.000178 t:13.9s +tttg: c156/214 lr:0.000172 t:14.0s +tttg: c157/214 lr:0.000167 t:14.1s +tttg: c158/214 lr:0.000161 t:14.1s +tttg: c159/214 lr:0.000156 t:14.2s +tttg: c160/214 lr:0.000150 t:14.3s +tttg: c161/214 lr:0.000145 t:14.4s +tttg: c162/214 lr:0.000140 t:14.5s +tttg: c163/214 lr:0.000135 t:14.5s +tttg: c164/214 lr:0.000130 t:14.6s +tttg: c165/214 lr:0.000125 t:14.7s +tttg: c166/214 lr:0.000120 t:14.8s +tttg: c167/214 lr:0.000115 t:14.9s +tttg: c168/214 lr:0.000111 t:14.9s +tttg: c169/214 lr:0.000106 t:15.0s +tttg: c170/214 lr:0.000102 t:15.1s +tttg: c171/214 lr:0.000097 t:15.2s +tttg: c172/214 lr:0.000093 t:15.3s +tttg: c173/214 lr:0.000089 t:15.3s +tttg: c174/214 lr:0.000085 t:15.4s +tttg: c175/214 lr:0.000080 t:15.5s +tttg: c176/214 lr:0.000076 t:15.6s +tttg: c177/214 lr:0.000073 t:15.6s +tttg: c178/214 lr:0.000069 t:15.7s +tttg: c179/214 lr:0.000065 t:15.8s +tttg: c180/214 lr:0.000062 t:15.9s +tttg: c181/214 lr:0.000058 t:16.0s +tttg: c182/214 lr:0.000055 t:16.0s +tttg: c183/214 lr:0.000051 t:16.1s +tttg: c184/214 lr:0.000048 t:16.2s +tttg: c185/214 lr:0.000045 t:16.3s +tttg: c186/214 lr:0.000042 t:16.4s +tttg: c187/214 lr:0.000039 t:16.4s +tttg: c188/214 lr:0.000036 t:16.5s +tttg: c189/214 lr:0.000034 t:16.6s +tttg: c190/214 lr:0.000031 t:16.7s +tttg: c191/214 lr:0.000028 t:16.8s +tttg: c192/214 lr:0.000026 t:16.8s +tttg: c193/214 lr:0.000024 t:16.9s +tttg: c194/214 lr:0.000022 t:17.0s +tttg: c195/214 lr:0.000020 t:17.1s +tttg: c196/214 lr:0.000018 t:17.1s +tttg: c197/214 lr:0.000016 t:17.2s +tttg: c198/214 lr:0.000014 t:17.3s +tttg: c199/214 lr:0.000012 t:17.4s +tttg: c200/214 lr:0.000011 t:18.9s +tttg: c201/214 lr:0.000009 t:18.9s +tttg: c202/214 lr:0.000008 t:19.0s +tttg: c203/214 lr:0.000007 t:19.1s +tttg: c204/214 lr:0.000005 t:19.2s +tttg: c205/214 lr:0.000004 t:19.2s +tttg: c206/214 lr:0.000003 t:19.3s +tttg: c207/214 lr:0.000003 t:19.4s +tttg: c208/214 lr:0.000002 t:19.5s +tttg: c209/214 lr:0.000001 t:19.6s +tttg: c210/214 lr:0.000001 t:19.6s +tttg: c211/214 lr:0.000000 t:19.7s +tttg: c212/214 lr:0.000000 t:19.8s +tttg: c213/214 lr:0.000000 t:19.9s +ttpr: phase:1/1 t:205.5s +ttp: b742/782 bl:2.7688 bb:1.0697 rl:2.7717 rb:1.0945 dl:2321-2356 gd:1 +ttp: b733/782 bl:2.7349 bb:1.0452 rl:2.7694 rb:1.0914 dl:2063-2092 gd:1 +ttp: b727/782 bl:2.7994 bb:1.0713 rl:2.7711 rb:1.0903 dl:1938-1960 gd:1 +ttp: b721/782 bl:2.8002 bb:1.0460 
rl:2.7725 rb:1.0880 dl:1834-1846 gd:1 +ttp: b713/782 bl:2.8904 bb:1.0779 rl:2.7776 rb:1.0876 dl:1698-1712 gd:1 +ttp: b705/782 bl:2.7627 bb:1.0770 rl:2.7770 rb:1.0872 dl:1607-1618 gd:1 +ttp: b697/782 bl:2.8028 bb:1.0665 rl:2.7779 rb:1.0864 dl:1523-1534 gd:1 +ttp: b688/782 bl:2.7911 bb:1.0650 rl:2.7784 rb:1.0857 dl:1442-1450 gd:1 +ttp: b677/782 bl:2.7853 bb:1.0704 rl:2.7786 rb:1.0852 dl:1354-1361 gd:1 +ttp: b669/782 bl:2.8416 bb:1.0957 rl:2.7803 rb:1.0855 dl:1303-1309 gd:1 +ttp: b661/782 bl:2.7571 bb:1.0498 rl:2.7797 rb:1.0846 dl:1251-1259 gd:1 +ttp: b652/782 bl:2.7721 bb:1.0737 rl:2.7795 rb:1.0843 dl:1199-1204 gd:1 +ttp: b642/782 bl:2.7880 bb:1.0882 rl:2.7797 rb:1.0844 dl:1145-1150 gd:1 +ttp: b641/782 bl:2.8176 bb:1.0653 rl:2.7806 rb:1.0839 dl:1138-1145 gd:1 +ttp: b632/782 bl:2.8131 bb:1.0575 rl:2.7812 rb:1.0834 dl:1096-1101 gd:1 +ttp: b624/782 bl:2.7454 bb:1.0324 rl:2.7805 rb:1.0823 dl:1059-1063 gd:1 +ttp: b613/782 bl:2.7689 bb:1.0317 rl:2.7803 rb:1.0813 dl:1011-1015 gd:1 +ttp: b603/782 bl:2.8333 bb:1.0698 rl:2.7812 rb:1.0811 dl:970-974 gd:1 +ttp: b596/782 bl:2.7245 bb:1.0363 rl:2.7803 rb:1.0804 dl:943-947 gd:1 +ttp: b590/782 bl:2.7579 bb:1.0476 rl:2.7799 rb:1.0798 dl:924-927 gd:1 +ttp: b582/782 bl:2.7989 bb:1.0799 rl:2.7802 rb:1.0798 dl:896-900 gd:1 +ttp: b574/782 bl:2.8203 bb:1.0549 rl:2.7808 rb:1.0795 dl:871-874 gd:1 +ttp: b565/782 bl:2.7937 bb:1.0938 rl:2.7810 rb:1.0796 dl:842-845 gd:1 +ttp: b557/782 bl:2.7891 bb:1.0491 rl:2.7811 rb:1.0792 dl:818-820 gd:1 +ttp: b548/782 bl:2.7518 bb:1.0388 rl:2.7807 rb:1.0787 dl:793-795 gd:1 +ttp: b540/782 bl:2.7699 bb:1.0454 rl:2.7806 rb:1.0783 dl:772-773 gd:1 +ttp: b531/782 bl:2.7539 bb:1.0489 rl:2.7803 rb:1.0779 dl:749-751 gd:1 +ttp: b523/782 bl:2.8078 bb:1.0625 rl:2.7806 rb:1.0777 dl:730-732 gd:1 +ttp: b515/782 bl:2.8439 bb:1.0681 rl:2.7813 rb:1.0776 dl:709-711 gd:1 +ttp: b510/782 bl:2.8164 bb:1.0586 rl:2.7817 rb:1.0774 dl:697-700 gd:1 +ttp: b502/782 bl:2.8471 bb:1.0719 rl:2.7823 rb:1.0774 dl:680-682 gd:1 +ttp: b492/782 bl:2.8203 bb:1.0547 rl:2.7827 rb:1.0771 dl:657-659 gd:1 +ttp: b484/782 bl:2.7782 bb:1.0694 rl:2.7827 rb:1.0771 dl:640-643 gd:1 +ttp: b476/782 bl:2.7903 bb:1.0655 rl:2.7827 rb:1.0769 dl:623-625 gd:1 +ttp: b473/782 bl:2.7821 bb:1.0657 rl:2.7827 rb:1.0768 dl:618-620 gd:1 +ttp: b465/782 bl:2.8108 bb:1.1249 rl:2.7830 rb:1.0772 dl:602-604 gd:1 +ttp: b457/782 bl:2.8027 bb:1.0884 rl:2.7831 rb:1.0773 dl:588-590 gd:1 +ttp: b449/782 bl:2.7264 bb:1.0509 rl:2.7827 rb:1.0771 dl:573-574 gd:1 +ttp: b441/782 bl:2.7943 bb:1.0692 rl:2.7828 rb:1.0771 dl:558-560 gd:1 +ttp: b433/782 bl:2.8510 bb:1.0870 rl:2.7833 rb:1.0771 dl:544-545 gd:1 +ttp: b425/782 bl:2.7718 bb:1.0851 rl:2.7832 rb:1.0772 dl:530-532 gd:1 +ttp: b412/782 bl:2.7144 bb:1.0286 rl:2.7827 rb:1.0768 dl:508-510 gd:1 +ttp: b405/782 bl:2.8286 bb:1.0865 rl:2.7830 rb:1.0769 dl:497-498 gd:1 +ttp: b397/782 bl:2.7966 bb:1.0587 rl:2.7831 rb:1.0768 dl:484-486 gd:1 +ttp: b390/782 bl:2.7946 bb:1.0502 rl:2.7832 rb:1.0766 dl:473-474 gd:1 +ttp: b382/782 bl:2.8807 bb:1.0933 rl:2.7838 rb:1.0767 dl:461-463 gd:1 +ttp: b375/782 bl:2.7367 bb:1.0667 rl:2.7835 rb:1.0767 dl:451-453 gd:1 +ttp: b367/782 bl:2.9068 bb:1.1255 rl:2.7842 rb:1.0769 dl:440-441 gd:1 +ttp: b361/782 bl:2.8076 bb:1.0812 rl:2.7844 rb:1.0770 dl:431-433 gd:1 +ttp: b356/782 bl:2.7567 bb:1.0562 rl:2.7842 rb:1.0769 dl:424-426 gd:1 +ttp: b349/782 bl:2.8206 bb:1.0974 rl:2.7844 rb:1.0770 dl:415-416 gd:1 +ttp: b342/782 bl:2.8632 bb:1.0745 rl:2.7848 rb:1.0770 dl:405-407 gd:1 +ttp: b334/782 bl:2.8483 bb:1.0810 rl:2.7852 rb:1.0770 dl:395-396 gd:1 +ttp: 
b326/782 bl:2.8688 bb:1.1024 rl:2.7856 rb:1.0771 dl:385-386 gd:1 +ttp: b313/782 bl:2.8278 bb:1.0966 rl:2.7858 rb:1.0772 dl:368-369 gd:1 +ttp: b305/782 bl:2.7866 bb:1.0572 rl:2.7858 rb:1.0771 dl:358-359 gd:1 +ttp: b297/782 bl:2.8713 bb:1.0997 rl:2.7861 rb:1.0772 dl:348-349 gd:1 +ttp: b289/782 bl:2.9475 bb:1.1393 rl:2.7868 rb:1.0775 dl:338-340 gd:1 +ttp: b281/782 bl:2.8809 bb:1.1627 rl:2.7872 rb:1.0778 dl:329-330 gd:1 +ttp: b273/782 bl:2.9190 bb:1.1138 rl:2.7878 rb:1.0779 dl:320-321 gd:1 +ttp: b266/782 bl:2.9330 bb:1.1476 rl:2.7883 rb:1.0782 dl:313-314 gd:1 +ttp: b259/782 bl:2.8479 bb:1.1067 rl:2.7885 rb:1.0783 dl:305-306 gd:1 +ttp: b251/782 bl:2.7636 bb:1.1096 rl:2.7885 rb:1.0784 dl:296-297 gd:1 +ttp: b243/782 bl:2.7785 bb:1.0817 rl:2.7884 rb:1.0784 dl:288-289 gd:1 +ttp: b236/782 bl:2.9078 bb:1.1482 rl:2.7888 rb:1.0787 dl:281-282 gd:1 +ttp: b228/782 bl:2.9556 bb:1.1497 rl:2.7894 rb:1.0789 dl:273-274 gd:1 +ttp: b220/782 bl:2.8710 bb:1.1337 rl:2.7897 rb:1.0791 dl:265-266 gd:1 +ttp: b212/782 bl:2.9344 bb:1.1514 rl:2.7901 rb:1.0793 dl:257-258 gd:1 +ttp: b204/782 bl:2.8519 bb:1.0962 rl:2.7903 rb:1.0794 dl:250-251 gd:1 +ttp: b196/782 bl:2.9708 bb:1.1560 rl:2.7908 rb:1.0796 dl:243-244 gd:1 +ttp: b188/782 bl:2.8895 bb:1.1316 rl:2.7911 rb:1.0797 dl:236-237 gd:1 +ttp: b179/782 bl:3.0317 bb:1.2305 rl:2.7918 rb:1.0801 dl:228-229 gd:1 +ttp: b173/782 bl:2.9422 bb:1.1463 rl:2.7922 rb:1.0803 dl:223-223 gd:1 +ttp: b164/782 bl:2.9848 bb:1.1624 rl:2.7927 rb:1.0805 dl:215-216 gd:1 +ttp: b157/782 bl:2.9573 bb:1.1132 rl:2.7931 rb:1.0806 dl:209-210 gd:1 +ttp: b150/782 bl:2.8903 bb:1.1305 rl:2.7933 rb:1.0807 dl:204-204 gd:1 +ttp: b141/782 bl:2.9109 bb:1.1376 rl:2.7936 rb:1.0808 dl:196-197 gd:1 +ttp: b132/782 bl:3.0436 bb:1.2126 rl:2.7941 rb:1.0811 dl:189-190 gd:1 +ttp: b126/782 bl:2.9074 bb:1.1527 rl:2.7944 rb:1.0813 dl:185-185 gd:1 +ttp: b116/782 bl:2.9741 bb:1.1934 rl:2.7948 rb:1.0815 dl:177-178 gd:1 +ttp: b110/782 bl:3.0347 bb:1.2012 rl:2.7952 rb:1.0817 dl:173-173 gd:1 +ttp: b101/782 bl:2.8818 bb:1.1440 rl:2.7954 rb:1.0819 dl:166-167 gd:1 +ttp: b94/782 bl:3.0131 bb:1.1951 rl:2.7958 rb:1.0821 dl:160-161 gd:1 +ttp: b85/782 bl:3.0417 bb:1.2306 rl:2.7963 rb:1.0823 dl:154-155 gd:1 +ttp: b76/782 bl:3.0311 bb:1.2023 rl:2.7967 rb:1.0825 dl:147-148 gd:1 +ttp: b69/782 bl:3.0968 bb:1.2051 rl:2.7972 rb:1.0827 dl:141-142 gd:1 +ttp: b61/782 bl:3.1269 bb:1.2394 rl:2.7977 rb:1.0830 dl:135-136 gd:1 +ttp: b52/782 bl:3.1172 bb:1.2285 rl:2.7982 rb:1.0832 dl:128-129 gd:1 +ttp: b43/782 bl:3.0516 bb:1.2201 rl:2.7985 rb:1.0834 dl:121-122 gd:1 +ttp: b37/782 bl:3.0721 bb:1.2190 rl:2.7989 rb:1.0835 dl:116-117 gd:1 +ttp: b28/782 bl:3.0417 bb:1.2267 rl:2.7992 rb:1.0837 dl:108-109 gd:1 +ttp: b21/782 bl:3.1162 bb:1.1983 rl:2.7995 rb:1.0839 dl:102-103 gd:1 +ttp: b13/782 bl:3.1571 bb:1.2608 rl:2.7999 rb:1.0840 dl:93-94 gd:1 +ttp: b7/782 bl:3.3100 bb:1.2864 rl:2.8004 rb:1.0842 dl:84-86 gd:1 +quantized_ttt_phased val_loss:2.78749405 val_bpb:1.07931499 eval_time:333265ms +total_eval_time:333.3s diff --git a/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/train_seed42.log b/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/train_seed42.log new file mode 100644 index 0000000000..52203ae87f --- /dev/null +++ b/records/track_10min_16mb/2026-04-30_SP8192_CaseOps_WiderGate32_GPTQ-int6_1.0804/train_seed42.log @@ -0,0 +1,788 @@ +W0430 02:17:33.901000 2684062 torch/distributed/run.py:803] +W0430 02:17:33.901000 2684062 torch/distributed/run.py:803] ***************************************** 
+W0430 02:17:33.901000 2684062 torch/distributed/run.py:803] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. +W0430 02:17:33.901000 2684062 torch/distributed/run.py:803] ***************************************** +Hyperparameters: + adam_eps: 1e-08 + adam_wd: 0.02 + artifact_dir: + attn_clip_sigmas: 13.0 + attn_out_gate: True + beta1: 0.9 + beta2: 0.95 + compressor: brotli + data_dir: ./data/ + datasets_dir: ./data/datasets/fineweb10B_sp8192 + distributed: True + ema_decay: 0.9965 + embed_bits: 6 + embed_clip_sigmas: 20.0 + embed_lr: 0.6 + embed_wd: 0.085 + enable_looping_at: 0.35 + eval_seq_len: 2048 + eval_stride: 64 + gate_width: 32 + global_ttt_batch_seqs: 32 + global_ttt_chunk_tokens: 32768 + global_ttt_epochs: 1 + global_ttt_grad_clip: 1.0 + global_ttt_lr: 0.001 + global_ttt_momentum: 0.9 + global_ttt_respect_doc_boundaries: True + global_ttt_warmup_chunks: 0 + global_ttt_warmup_start_lr: 0.0 + gptq_calibration_batches: 16 + gptq_reserve_seconds: 4.0 + grad_accum_steps: 1 + grad_clip_norm: 0.3 + is_main_process: True + iterations: 20000 + leaky_slope: 0.5 + ln_scale: True + local_rank: 0 + logfile: logs/seed_42.txt + logit_softcap: 30.0 + loop_end: 5 + loop_start: 3 + lqer_asym_enabled: True + lqer_asym_group: 64 + lqer_enabled: False + lqer_factor_bits: 4 + lqer_rank: 4 + lqer_top_k: 3 + matrix_bits: 6 + matrix_clip_sigmas: 12.85 + matrix_lr: 0.026 + max_wallclock_seconds: 600.0 + min_lr: 0.1 + mlp_clip_sigmas: 10.0 + mlp_mult: 4.0 + model_dim: 512 + model_path: final_model.pt + muon_backend_steps: 5 + muon_momentum: 0.97 + muon_momentum_warmup_start: 0.92 + muon_momentum_warmup_steps: 1500 + muon_row_normalize: True + muon_wd: 0.095 + num_heads: 8 + num_kv_heads: 4 + num_layers: 11 + num_loops: 2 + parallel_final_lane: mean + parallel_start_layer: 8 + phased_ttt_enabled: False + phased_ttt_num_phases: 1 + phased_ttt_prefix_docs: 2000 + qk_gain_init: 5.0 + quantized_model_path: final_model.int6.ptz + rank: 0 + rho1_enabled: False + rho1_top_k: 0.7 + rope_base: 10000.0 + rope_dims: 16 + rope_train_seq_len: 2048 + rope_yarn: False + run_id: seed_42 + scalar_lr: 0.02 + seed: 42 + sin_squared_activation: False + skip_gates_enabled: True + sliding_window_enabled: False + smear_gate: True + tie_embeddings: True + tied_embed_init_std: 0.005 + tied_embed_lr: 0.03 + tokenizer_path: ./data/tokenizers/fineweb_8192_bpe.model + train_batch_tokens: 786432 + train_files: ./data/datasets/fineweb10B_sp8192/fineweb_train_*.bin + train_log_every: 500 + train_seq_len: 2048 + ttt_batch_size: 64 + ttt_beta1: 0.0 + ttt_beta2: 0.999 + ttt_chunk_size: 48 + ttt_enabled: True + ttt_eval_batches: + ttt_eval_seq_len: 2048 + ttt_grad_steps: 1 + ttt_k_lora: True + ttt_lora_lr: 0.0001 + ttt_lora_rank: 96 + ttt_mlp_lora: True + ttt_o_lora: True + ttt_optimizer: adam + ttt_weight_decay: 0.5 + val_batch_tokens: 524288 + val_doc_fraction: 1.0 + val_files: ./data/datasets/fineweb10B_sp8192/fineweb_val_*.bin + val_loss_every: 4000 + vocab_size: 8192 + warmdown_frac: 0.75 + warmup_steps: 20 + world_size: 8 + xsa_last_n: 11 +train_shards: 80 +val_tokens: 40546304 +model_params:35947451 +gptq:reserving 4s, effective=596000ms +warmup_cu_buckets:64,128,192,256 iters_each:3 +[rank5]:W0430 02:17:57.602000 2684325 torch/_inductor/triton_bundler.py:396] [0/0] Directory /workspace/.cache/triton/tmp.319c0b70-bf5a-475f-a927-a2be1a0e73ec is not empty - skipping! 
+warmup_step: 1/20
+warmup_step: 2/20
+warmup_step: 3/20
+warmup_step: 4/20
+warmup_step: 5/20
+warmup_step: 6/20
+warmup_step: 10/20
+warmup_step: 20/20
+loop_warmup:enabled encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10]
+loop_warmup_step: 1/20
+loop_warmup_step: 2/20
+loop_warmup_step: 3/20
+loop_warmup_step: 4/20
+loop_warmup_step: 5/20
+loop_warmup_step: 6/20
+loop_warmup_step: 10/20
+loop_warmup_step: 20/20
+0/20000 val_loss: 9.0095 val_bpb: 3.4883
+1/20000 train_loss: 9.0097 train_time: 0.0m tok/s: 12495036
+2/20000 train_loss: 12.2256 train_time: 0.0m tok/s: 11596773
+3/20000 train_loss: 11.2678 train_time: 0.0m tok/s: 10382476
+4/20000 train_loss: 9.6303 train_time: 0.0m tok/s: 9823223
+5/20000 train_loss: 8.0954 train_time: 0.0m tok/s: 9527899
+500/20000 train_loss: 3.2935 train_time: 0.8m tok/s: 8059518
+1000/20000 train_loss: 3.1140 train_time: 1.7m tok/s: 7820952
+1500/20000 train_loss: 3.0923 train_time: 2.5m tok/s: 7758084
+2000/20000 train_loss: 3.0486 train_time: 3.4m tok/s: 7715542
+layer_loop:enabled step:2047 frac:0.350 encoder:[0, 1, 2, 3, 4, 5, 3, 4] decoder:[5, 3, 4, 5, 6, 7, 8, 9, 10]
+2500/20000 train_loss: 3.0482 train_time: 4.5m tok/s: 7226541
+3000/20000 train_loss: 2.8329 train_time: 5.7m tok/s: 6893133
+3500/20000 train_loss: 2.8831 train_time: 6.9m tok/s: 6622681
+4000/20000 train_loss: 2.9363 train_time: 8.1m tok/s: 6448975
+4000/20000 val_loss: 2.8621 val_bpb: 1.1081
+4500/20000 train_loss: 2.7624 train_time: 9.3m tok/s: 6344744
+4773/20000 val_loss: 2.7903 val_bpb: 1.0804
+stopping_early: wallclock_cap train_time: 596070ms step: 4773/20000
+peak memory allocated: 40445 MiB reserved: 44498 MiB
+ema:applying EMA weights
+diagnostic pre-quantization post-ema val_loss:2.76456068 val_bpb:1.07038845 eval_time:6960ms
+Serialized model: 135424701 bytes
+Code size (uncompressed): 130789 bytes
+Code size (compressed): 29430 bytes
+GPTQ:collecting Hessians from calibration data...
+GPTQ:collected 67 Hessians in 3.5s
+Quantized weights:
+ gptq (int6): blocks.attn.c_k.weight, blocks.attn.c_q.weight, blocks.attn.c_v.weight, blocks.attn.proj.weight, blocks.mlp.fc.weight, blocks.mlp.proj.weight, tok_emb.weight
+ passthrough (float16): blocks.attn.attn_out_gate_w, blocks.attn.q_gain, blocks.attn_scale, blocks.mlp_scale, blocks.resid_mix, parallel_post_lambdas, parallel_resid_lambdas, skip_gates, skip_weights, smear_lambda, smear_w
+Serialized model quantized+brotli: 15857707 bytes
+Total submission size quantized+brotli: 15887137 bytes
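As a cross-check on the size accounting above: the reported total is exactly the brotli-compressed quantized model plus the compressed code. A minimal sketch in plain Python, with the byte counts copied from this log; the `brotli.compress` call only illustrates the quality-11 compression setting and is not the submission's actual packaging code.

```python
import brotli  # same package listed in the pip install line

# Byte counts copied verbatim from the log lines above.
model_quantized_brotli = 15_857_707   # "Serialized model quantized+brotli"
code_compressed        = 29_430       # "Code size (compressed)"
total_submission       = 15_887_137   # "Total submission size quantized+brotli"

# The artifact size is simply compressed model + compressed code.
assert model_quantized_brotli + code_compressed == total_submission

# Illustrative only: compressing an arbitrary blob at brotli quality 11
# (the maximum, matching the "brotli-11" setting).
blob = brotli.compress(b"example payload", quality=11)
print(len(blob), "compressed bytes")
```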
+diagnostic quantized val_loss:2.81973181 val_bpb:1.09174973 eval_time:10906ms +ttt_lora:warming up compile (random tokens, no val data) +ttt_lora:compile warmup done (142.0s) + +beginning TTT eval timer +ttt_phased: total_docs:49999 prefix_docs:2000 suffix_docs:47999 num_phases:1 boundaries:[2000] +ttp: b779/782 bl:2.6524 bb:1.0836 rl:2.6524 rb:1.0836 dl:9051-11137 gd:0 +ttp: b770/782 bl:2.6652 bb:1.0601 rl:2.6564 rb:1.0760 dl:4490-4708 gd:0 +ttp: b762/782 bl:2.8554 bb:1.0883 rl:2.6948 rb:1.0785 dl:3433-3538 gd:0 +ttp: b756/782 bl:2.8661 bb:1.0980 rl:2.7193 rb:1.0814 dl:2974-3037 gd:0 +ttp: b752/782 bl:2.8056 bb:1.0690 rl:2.7293 rb:1.0799 dl:2742-2798 gd:0 +ttp: b744/782 bl:2.6554 bb:1.0676 rl:2.7225 rb:1.0788 dl:2388-2421 gd:0 +ttpp: phase:1/1 pd:2447 gd:2000 t:216.4s +tttg: c1/214 lr:0.001000 t:1.8s +tttg: c2/214 lr:0.001000 t:1.8s +tttg: c3/214 lr:0.001000 t:1.9s +tttg: c4/214 lr:0.001000 t:2.0s +tttg: c5/214 lr:0.000999 t:2.1s +tttg: c6/214 lr:0.000999 t:2.2s +tttg: c7/214 lr:0.000998 t:2.3s +tttg: c8/214 lr:0.000997 t:2.3s +tttg: c9/214 lr:0.000997 t:2.4s +tttg: c10/214 lr:0.000996 t:2.5s +tttg: c11/214 lr:0.000995 t:2.6s +tttg: c12/214 lr:0.000993 t:2.7s +tttg: c13/214 lr:0.000992 t:2.7s +tttg: c14/214 lr:0.000991 t:2.8s +tttg: c15/214 lr:0.000989 t:2.9s +tttg: c16/214 lr:0.000988 t:3.0s +tttg: c17/214 lr:0.000986 t:3.0s +tttg: c18/214 lr:0.000984 t:3.1s +tttg: c19/214 lr:0.000982 t:3.2s +tttg: c20/214 lr:0.000980 t:3.3s +tttg: c21/214 lr:0.000978 t:3.4s +tttg: c22/214 lr:0.000976 t:3.4s +tttg: c23/214 lr:0.000974 t:3.5s +tttg: c24/214 lr:0.000972 t:3.6s +tttg: c25/214 lr:0.000969 t:3.7s +tttg: c26/214 lr:0.000966 t:3.8s +tttg: c27/214 lr:0.000964 t:3.8s +tttg: c28/214 lr:0.000961 t:3.9s +tttg: c29/214 lr:0.000958 t:4.0s +tttg: c30/214 lr:0.000955 t:4.1s +tttg: c31/214 lr:0.000952 t:4.2s +tttg: c32/214 lr:0.000949 t:4.2s +tttg: c33/214 lr:0.000945 t:4.3s +tttg: c34/214 lr:0.000942 t:4.4s +tttg: c35/214 lr:0.000938 t:4.5s +tttg: c36/214 lr:0.000935 t:4.5s +tttg: c37/214 lr:0.000931 t:4.6s +tttg: c38/214 lr:0.000927 t:4.7s +tttg: c39/214 lr:0.000924 t:4.8s +tttg: c40/214 lr:0.000920 t:4.9s +tttg: c41/214 lr:0.000915 t:4.9s +tttg: c42/214 lr:0.000911 t:5.0s +tttg: c43/214 lr:0.000907 t:5.1s +tttg: c44/214 lr:0.000903 t:5.2s +tttg: c45/214 lr:0.000898 t:5.3s +tttg: c46/214 lr:0.000894 t:5.3s +tttg: c47/214 lr:0.000889 t:5.4s +tttg: c48/214 lr:0.000885 t:5.5s +tttg: c49/214 lr:0.000880 t:5.6s +tttg: c50/214 lr:0.000875 t:5.7s +tttg: c51/214 lr:0.000870 t:5.7s +tttg: c52/214 lr:0.000865 t:5.8s +tttg: c53/214 lr:0.000860 t:5.9s +tttg: c54/214 lr:0.000855 t:6.0s +tttg: c55/214 lr:0.000850 t:6.1s +tttg: c56/214 lr:0.000844 t:6.1s +tttg: c57/214 lr:0.000839 t:6.2s +tttg: c58/214 lr:0.000833 t:6.3s +tttg: c59/214 lr:0.000828 t:6.4s +tttg: c60/214 lr:0.000822 t:6.5s +tttg: c61/214 lr:0.000817 t:6.5s +tttg: c62/214 lr:0.000811 t:6.6s +tttg: c63/214 lr:0.000805 t:6.7s +tttg: c64/214 lr:0.000799 t:6.8s +tttg: c65/214 lr:0.000793 t:6.8s +tttg: c66/214 lr:0.000787 t:6.9s +tttg: c67/214 lr:0.000781 t:7.0s +tttg: c68/214 lr:0.000775 t:7.1s +tttg: c69/214 lr:0.000769 t:7.2s +tttg: c70/214 lr:0.000763 t:7.2s +tttg: c71/214 lr:0.000756 t:7.3s +tttg: c72/214 lr:0.000750 t:7.4s +tttg: c73/214 lr:0.000744 t:7.5s +tttg: c74/214 lr:0.000737 t:7.6s +tttg: c75/214 lr:0.000731 t:7.6s +tttg: c76/214 lr:0.000724 t:7.7s +tttg: c77/214 lr:0.000717 t:7.8s +tttg: c78/214 lr:0.000711 t:7.9s +tttg: c79/214 lr:0.000704 t:7.9s +tttg: c80/214 lr:0.000697 t:8.0s +tttg: c81/214 lr:0.000690 t:8.1s +tttg: c82/214 lr:0.000684 
t:8.2s +tttg: c83/214 lr:0.000677 t:8.3s +tttg: c84/214 lr:0.000670 t:8.4s +tttg: c85/214 lr:0.000663 t:8.4s +tttg: c86/214 lr:0.000656 t:8.5s +tttg: c87/214 lr:0.000649 t:8.6s +tttg: c88/214 lr:0.000642 t:8.7s +tttg: c89/214 lr:0.000635 t:8.7s +tttg: c90/214 lr:0.000628 t:8.8s +tttg: c91/214 lr:0.000620 t:8.9s +tttg: c92/214 lr:0.000613 t:9.0s +tttg: c93/214 lr:0.000606 t:9.1s +tttg: c94/214 lr:0.000599 t:9.1s +tttg: c95/214 lr:0.000592 t:9.2s +tttg: c96/214 lr:0.000584 t:9.3s +tttg: c97/214 lr:0.000577 t:9.4s +tttg: c98/214 lr:0.000570 t:9.4s +tttg: c99/214 lr:0.000563 t:9.5s +tttg: c100/214 lr:0.000555 t:9.6s +tttg: c101/214 lr:0.000548 t:9.7s +tttg: c102/214 lr:0.000541 t:9.8s +tttg: c103/214 lr:0.000533 t:9.8s +tttg: c104/214 lr:0.000526 t:9.9s +tttg: c105/214 lr:0.000518 t:10.0s +tttg: c106/214 lr:0.000511 t:10.1s +tttg: c107/214 lr:0.000504 t:10.2s +tttg: c108/214 lr:0.000496 t:10.3s +tttg: c109/214 lr:0.000489 t:10.3s +tttg: c110/214 lr:0.000482 t:10.4s +tttg: c111/214 lr:0.000474 t:10.5s +tttg: c112/214 lr:0.000467 t:10.6s +tttg: c113/214 lr:0.000459 t:10.7s +tttg: c114/214 lr:0.000452 t:10.7s +tttg: c115/214 lr:0.000445 t:10.8s +tttg: c116/214 lr:0.000437 t:10.9s +tttg: c117/214 lr:0.000430 t:11.0s +tttg: c118/214 lr:0.000423 t:11.1s +tttg: c119/214 lr:0.000416 t:11.1s +tttg: c120/214 lr:0.000408 t:11.2s +tttg: c121/214 lr:0.000401 t:11.3s +tttg: c122/214 lr:0.000394 t:11.4s +tttg: c123/214 lr:0.000387 t:11.5s +tttg: c124/214 lr:0.000380 t:11.5s +tttg: c125/214 lr:0.000372 t:11.6s +tttg: c126/214 lr:0.000365 t:11.7s +tttg: c127/214 lr:0.000358 t:11.8s +tttg: c128/214 lr:0.000351 t:11.9s +tttg: c129/214 lr:0.000344 t:12.0s +tttg: c130/214 lr:0.000337 t:12.0s +tttg: c131/214 lr:0.000330 t:12.1s +tttg: c132/214 lr:0.000323 t:12.2s +tttg: c133/214 lr:0.000316 t:12.3s +tttg: c134/214 lr:0.000310 t:12.4s +tttg: c135/214 lr:0.000303 t:12.4s +tttg: c136/214 lr:0.000296 t:12.5s +tttg: c137/214 lr:0.000289 t:12.6s +tttg: c138/214 lr:0.000283 t:12.7s +tttg: c139/214 lr:0.000276 t:12.8s +tttg: c140/214 lr:0.000269 t:12.8s +tttg: c141/214 lr:0.000263 t:12.9s +tttg: c142/214 lr:0.000256 t:13.0s +tttg: c143/214 lr:0.000250 t:13.1s +tttg: c144/214 lr:0.000244 t:13.2s +tttg: c145/214 lr:0.000237 t:13.2s +tttg: c146/214 lr:0.000231 t:13.3s +tttg: c147/214 lr:0.000225 t:13.4s +tttg: c148/214 lr:0.000219 t:13.5s +tttg: c149/214 lr:0.000213 t:13.6s +tttg: c150/214 lr:0.000207 t:13.6s +tttg: c151/214 lr:0.000201 t:13.7s +tttg: c152/214 lr:0.000195 t:13.8s +tttg: c153/214 lr:0.000189 t:13.9s +tttg: c154/214 lr:0.000183 t:13.9s +tttg: c155/214 lr:0.000178 t:14.0s +tttg: c156/214 lr:0.000172 t:14.1s +tttg: c157/214 lr:0.000167 t:14.2s +tttg: c158/214 lr:0.000161 t:14.3s +tttg: c159/214 lr:0.000156 t:14.3s +tttg: c160/214 lr:0.000150 t:14.4s +tttg: c161/214 lr:0.000145 t:14.5s +tttg: c162/214 lr:0.000140 t:14.6s +tttg: c163/214 lr:0.000135 t:14.7s +tttg: c164/214 lr:0.000130 t:14.7s +tttg: c165/214 lr:0.000125 t:14.8s +tttg: c166/214 lr:0.000120 t:14.9s +tttg: c167/214 lr:0.000115 t:15.0s +tttg: c168/214 lr:0.000111 t:15.1s +tttg: c169/214 lr:0.000106 t:15.1s +tttg: c170/214 lr:0.000102 t:15.2s +tttg: c171/214 lr:0.000097 t:15.3s +tttg: c172/214 lr:0.000093 t:15.4s +tttg: c173/214 lr:0.000089 t:15.4s +tttg: c174/214 lr:0.000085 t:15.5s +tttg: c175/214 lr:0.000080 t:15.6s +tttg: c176/214 lr:0.000076 t:15.7s +tttg: c177/214 lr:0.000073 t:15.8s +tttg: c178/214 lr:0.000069 t:15.8s +tttg: c179/214 lr:0.000065 t:15.9s +tttg: c180/214 lr:0.000062 t:16.0s +tttg: c181/214 lr:0.000058 t:16.1s +tttg: c182/214 
lr:0.000055 t:16.2s +tttg: c183/214 lr:0.000051 t:16.2s +tttg: c184/214 lr:0.000048 t:16.3s +tttg: c185/214 lr:0.000045 t:16.4s +tttg: c186/214 lr:0.000042 t:16.5s +tttg: c187/214 lr:0.000039 t:16.5s +tttg: c188/214 lr:0.000036 t:16.6s +tttg: c189/214 lr:0.000034 t:16.7s +tttg: c190/214 lr:0.000031 t:16.8s +tttg: c191/214 lr:0.000028 t:16.9s +tttg: c192/214 lr:0.000026 t:17.0s +tttg: c193/214 lr:0.000024 t:17.0s +tttg: c194/214 lr:0.000022 t:17.1s +tttg: c195/214 lr:0.000020 t:17.2s +tttg: c196/214 lr:0.000018 t:17.3s +tttg: c197/214 lr:0.000016 t:17.3s +tttg: c198/214 lr:0.000014 t:17.4s +tttg: c199/214 lr:0.000012 t:17.5s +tttg: c200/214 lr:0.000011 t:17.6s +tttg: c201/214 lr:0.000009 t:17.7s +tttg: c202/214 lr:0.000008 t:17.8s +tttg: c203/214 lr:0.000007 t:17.8s +tttg: c204/214 lr:0.000005 t:17.9s +tttg: c205/214 lr:0.000004 t:18.0s +tttg: c206/214 lr:0.000003 t:18.1s +tttg: c207/214 lr:0.000003 t:18.1s +tttg: c208/214 lr:0.000002 t:18.2s +tttg: c209/214 lr:0.000001 t:18.3s +tttg: c210/214 lr:0.000001 t:18.4s +tttg: c211/214 lr:0.000000 t:18.5s +tttg: c212/214 lr:0.000000 t:18.5s +tttg: c213/214 lr:0.000000 t:18.6s +ttpr: phase:1/1 t:237.2s +ttp: b738/782 bl:2.7717 bb:1.0714 rl:2.7264 rb:1.0782 dl:2196-2228 gd:1 +ttp: b733/782 bl:2.7393 bb:1.0468 rl:2.7272 rb:1.0760 dl:2063-2092 gd:1 +ttp: b725/782 bl:2.7435 bb:1.0757 rl:2.7282 rb:1.0760 dl:1902-1916 gd:1 +ttp: b717/782 bl:2.7783 bb:1.0567 rl:2.7308 rb:1.0750 dl:1758-1773 gd:1 +ttp: b710/782 bl:2.7102 bb:1.0311 rl:2.7298 rb:1.0729 dl:1661-1674 gd:1 +ttp: b702/782 bl:2.8322 bb:1.0808 rl:2.7341 rb:1.0732 dl:1573-1583 gd:1 +ttp: b694/782 bl:2.7874 bb:1.0773 rl:2.7362 rb:1.0734 dl:1494-1504 gd:1 +ttp: b684/782 bl:2.7933 bb:1.0951 rl:2.7382 rb:1.0741 dl:1408-1414 gd:1 +ttp: b673/782 bl:2.8698 bb:1.0649 rl:2.7424 rb:1.0738 dl:1328-1335 gd:1 +ttp: b670/782 bl:2.8270 bb:1.0773 rl:2.7450 rb:1.0739 dl:1310-1316 gd:1 +ttp: b657/782 bl:2.7996 bb:1.0599 rl:2.7465 rb:1.0735 dl:1227-1233 gd:1 +ttp: b652/782 bl:2.7762 bb:1.0753 rl:2.7473 rb:1.0736 dl:1199-1204 gd:1 +ttp: b640/782 bl:2.7785 bb:1.0671 rl:2.7480 rb:1.0734 dl:1135-1138 gd:1 +ttp: b636/782 bl:2.7556 bb:1.0591 rl:2.7482 rb:1.0731 dl:1116-1119 gd:1 +ttp: b626/782 bl:2.7716 bb:1.0329 rl:2.7487 rb:1.0721 dl:1069-1073 gd:1 +ttp: b618/782 bl:2.7396 bb:1.0585 rl:2.7485 rb:1.0719 dl:1032-1036 gd:1 +ttp: b611/782 bl:2.8583 bb:1.0848 rl:2.7507 rb:1.0721 dl:1003-1007 gd:1 +ttp: b606/782 bl:2.7935 bb:1.0919 rl:2.7515 rb:1.0725 dl:982-986 gd:1 +ttp: b598/782 bl:2.8179 bb:1.0801 rl:2.7527 rb:1.0726 dl:950-954 gd:1 +ttp: b590/782 bl:2.7563 bb:1.0470 rl:2.7528 rb:1.0722 dl:924-927 gd:1 +ttp: b579/782 bl:2.7463 bb:1.0442 rl:2.7527 rb:1.0717 dl:887-889 gd:1 +ttp: b571/782 bl:2.8459 bb:1.0799 rl:2.7541 rb:1.0718 dl:861-865 gd:1 +ttp: b566/782 bl:2.7306 bb:1.0306 rl:2.7538 rb:1.0712 dl:845-848 gd:1 +ttp: b558/782 bl:2.7608 bb:1.0510 rl:2.7539 rb:1.0709 dl:821-824 gd:1 +ttp: b545/782 bl:2.7758 bb:1.0752 rl:2.7542 rb:1.0710 dl:784-787 gd:1 +ttp: b537/782 bl:2.8328 bb:1.0937 rl:2.7552 rb:1.0713 dl:764-766 gd:1 +ttp: b533/782 bl:2.7813 bb:1.0534 rl:2.7555 rb:1.0710 dl:754-757 gd:1 +ttp: b524/782 bl:2.7997 bb:1.0684 rl:2.7561 rb:1.0710 dl:732-735 gd:1 +ttp: b516/782 bl:2.8155 bb:1.0650 rl:2.7568 rb:1.0709 dl:711-714 gd:1 +ttp: b508/782 bl:2.7356 bb:1.0342 rl:2.7565 rb:1.0705 dl:693-695 gd:1 +ttp: b500/782 bl:2.8718 bb:1.0834 rl:2.7578 rb:1.0706 dl:675-677 gd:1 +ttp: b488/782 bl:2.7852 bb:1.0555 rl:2.7581 rb:1.0705 dl:648-651 gd:1 +ttp: b480/782 bl:2.8075 bb:1.1024 rl:2.7586 rb:1.0708 dl:632-634 gd:1 +ttp: b471/782 
bl:2.8139 bb:1.0669 rl:2.7591 rb:1.0708 dl:614-616 gd:1 +ttp: b465/782 bl:2.8081 bb:1.1239 rl:2.7595 rb:1.0712 dl:602-604 gd:1 +ttp: b456/782 bl:2.7688 bb:1.0459 rl:2.7596 rb:1.0710 dl:586-588 gd:1 +ttp: b448/782 bl:2.7914 bb:1.0692 rl:2.7599 rb:1.0710 dl:571-573 gd:1 +ttp: b444/782 bl:2.8227 bb:1.0659 rl:2.7604 rb:1.0709 dl:564-566 gd:1 +ttp: b435/782 bl:2.7750 bb:1.0588 rl:2.7606 rb:1.0708 dl:547-549 gd:1 +ttp: b427/782 bl:2.7796 bb:1.0694 rl:2.7607 rb:1.0708 dl:533-535 gd:1 +ttp: b417/782 bl:2.9365 bb:1.1067 rl:2.7620 rb:1.0711 dl:516-517 gd:1 +ttp: b411/782 bl:2.6960 bb:1.0604 rl:2.7615 rb:1.0710 dl:506-508 gd:1 +ttp: b403/782 bl:2.8296 bb:1.0520 rl:2.7620 rb:1.0709 dl:493-495 gd:1 +ttp: b396/782 bl:2.7958 bb:1.0487 rl:2.7623 rb:1.0707 dl:482-484 gd:1 +ttp: b389/782 bl:2.8030 bb:1.0610 rl:2.7625 rb:1.0707 dl:471-473 gd:1 +ttp: b380/782 bl:2.8093 bb:1.0829 rl:2.7628 rb:1.0707 dl:459-460 gd:1 +ttp: b372/782 bl:2.8300 bb:1.0682 rl:2.7633 rb:1.0707 dl:447-448 gd:1 +ttp: b364/782 bl:2.8067 bb:1.0814 rl:2.7635 rb:1.0708 dl:435-437 gd:1 +ttp: b356/782 bl:2.7678 bb:1.0604 rl:2.7636 rb:1.0707 dl:424-426 gd:1 +ttp: b348/782 bl:2.7578 bb:1.0710 rl:2.7635 rb:1.0707 dl:414-415 gd:1 +ttp: b339/782 bl:2.8285 bb:1.0909 rl:2.7639 rb:1.0708 dl:402-403 gd:1 +ttp: b331/782 bl:2.8922 bb:1.1219 rl:2.7646 rb:1.0711 dl:391-393 gd:1 +ttp: b323/782 bl:2.8254 bb:1.1008 rl:2.7649 rb:1.0713 dl:381-382 gd:1 +ttp: b318/782 bl:2.7746 bb:1.0598 rl:2.7649 rb:1.0712 dl:374-376 gd:1 +ttp: b310/782 bl:2.7748 bb:1.0938 rl:2.7650 rb:1.0713 dl:364-365 gd:1 +ttp: b302/782 bl:2.7916 bb:1.0457 rl:2.7651 rb:1.0712 dl:354-355 gd:1 +ttp: b294/782 bl:2.8696 bb:1.1059 rl:2.7656 rb:1.0714 dl:344-345 gd:1 +ttp: b286/782 bl:2.8664 bb:1.1183 rl:2.7660 rb:1.0716 dl:335-336 gd:1 +ttp: b281/782 bl:2.8864 bb:1.1649 rl:2.7666 rb:1.0720 dl:329-330 gd:1 +ttp: b273/782 bl:2.9206 bb:1.1144 rl:2.7672 rb:1.0721 dl:320-321 gd:1 +ttp: b264/782 bl:2.9141 bb:1.0937 rl:2.7678 rb:1.0722 dl:311-312 gd:1 +ttp: b255/782 bl:2.8662 bb:1.1437 rl:2.7682 rb:1.0725 dl:300-301 gd:1 +ttp: b247/782 bl:2.8954 bb:1.1623 rl:2.7687 rb:1.0728 dl:292-293 gd:1 +ttp: b237/782 bl:2.7893 bb:1.1189 rl:2.7688 rb:1.0730 dl:282-283 gd:1 +ttp: b229/782 bl:2.8862 bb:1.1080 rl:2.7692 rb:1.0731 dl:274-275 gd:1 +ttp: b221/782 bl:2.8905 bb:1.1331 rl:2.7696 rb:1.0733 dl:266-267 gd:1 +ttp: b213/782 bl:2.8957 bb:1.1217 rl:2.7700 rb:1.0735 dl:258-259 gd:1 +ttp: b205/782 bl:2.9018 bb:1.1286 rl:2.7704 rb:1.0737 dl:251-252 gd:1 +ttp: b196/782 bl:2.9605 bb:1.1520 rl:2.7710 rb:1.0739 dl:243-244 gd:1 +ttp: b188/782 bl:2.8934 bb:1.1332 rl:2.7714 rb:1.0741 dl:236-237 gd:1 +ttp: b181/782 bl:2.8645 bb:1.1188 rl:2.7717 rb:1.0742 dl:230-230 gd:1 +ttp: b172/782 bl:3.0114 bb:1.1750 rl:2.7723 rb:1.0745 dl:222-223 gd:1 +ttp: b164/782 bl:2.9786 bb:1.1599 rl:2.7729 rb:1.0747 dl:215-216 gd:1 +ttp: b157/782 bl:2.9393 bb:1.1064 rl:2.7733 rb:1.0748 dl:209-210 gd:1 +ttp: b150/782 bl:2.8998 bb:1.1343 rl:2.7737 rb:1.0750 dl:204-204 gd:1 +ttp: b140/782 bl:3.0067 bb:1.1865 rl:2.7742 rb:1.0752 dl:195-196 gd:1 +ttp: b133/782 bl:3.0668 bb:1.2087 rl:2.7749 rb:1.0755 dl:190-190 gd:1 +ttp: b124/782 bl:2.9367 bb:1.1691 rl:2.7753 rb:1.0757 dl:183-184 gd:1 +ttp: b117/782 bl:3.0126 bb:1.2046 rl:2.7758 rb:1.0760 dl:178-178 gd:1 +ttp: b108/782 bl:2.9793 bb:1.1984 rl:2.7763 rb:1.0763 dl:171-172 gd:1 +ttp: b100/782 bl:2.9660 bb:1.1648 rl:2.7766 rb:1.0764 dl:165-166 gd:1 +ttp: b93/782 bl:3.0245 bb:1.2401 rl:2.7771 rb:1.0768 dl:160-160 gd:1 +ttp: b84/782 bl:3.0667 bb:1.2488 rl:2.7777 rb:1.0771 dl:153-154 gd:1 +ttp: b77/782 
bl:3.0298 bb:1.1721 rl:2.7781 rb:1.0772 dl:148-148 gd:1 +ttp: b66/782 bl:3.1107 bb:1.2331 rl:2.7787 rb:1.0775 dl:139-140 gd:1 +ttp: b59/782 bl:2.9846 bb:1.2028 rl:2.7790 rb:1.0777 dl:134-134 gd:1 +ttp: b47/782 bl:3.0066 bb:1.1836 rl:2.7794 rb:1.0778 dl:124-125 gd:1 +ttp: b39/782 bl:3.0712 bb:1.2178 rl:2.7798 rb:1.0780 dl:118-119 gd:1 +ttp: b30/782 bl:3.2387 bb:1.2997 rl:2.7804 rb:1.0783 dl:110-111 gd:1 +ttp: b25/782 bl:3.2637 bb:1.3229 rl:2.7810 rb:1.0786 dl:106-106 gd:1 +ttp: b16/782 bl:3.0200 bb:1.1847 rl:2.7813 rb:1.0788 dl:96-98 gd:1 +ttp: b6/782 bl:3.2615 bb:1.2430 rl:2.7818 rb:1.0789 dl:82-84 gd:1 +quantized_ttt_phased val_loss:2.78881493 val_bpb:1.07982643 eval_time:428731ms +total_eval_time:428.7s
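For readers reconciling the loss and bpb columns in this log: every (val_loss, val_bpb) pair above shares the same ratio (about 2.583), consistent with bpb being the nats-per-token loss converted to bits and divided by the tokenizer's average bytes per token. A minimal sketch under that assumption; the conversion formula is inferred from the logged pairs, not taken from the eval harness.

```python
import math

# (val_loss in nats/token, val_bpb) pairs copied from this log.
pairs = [
    (9.0095,      3.4883),       # step 0
    (2.8621,      1.1081),       # step 4000
    (2.76456068,  1.07038845),   # pre-quantization, post-EMA
    (2.78881493,  1.07982643),   # quantized + TTT
]

for loss_nats, bpb in pairs:
    # bpb = (loss in bits/token) / (bytes/token)  =>  solve for bytes/token.
    bytes_per_token = loss_nats / (math.log(2) * bpb)
    print(f"implied bytes/token: {bytes_per_token:.3f}")  # ~3.73 for this tokenizer
```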