Summary
Directly instantiating `Boltz2(**ckpt['hyper_parameters'])` from the official checkpoint crashes on the first forward pass because `pairformer_args.v2` is not serialized in the checkpoint. This blocks any use of Boltz-2 outside the official `boltz predict` CLI (custom inference, fine-tuning, LoRA, benchmarking without Lightning).
Reproduction
```python
import torch

ckpt = torch.load('boltz2_conf.ckpt', weights_only=False)
print(ckpt['hyper_parameters']['pairformer_args'])
```
Output:
```
{'num_blocks': 64, 'num_heads': 16, ..., 'use_trifast': True}
```
Note: there is no `'v2'` key.
```python
from boltz.model.models.boltz2 import Boltz2

model = Boltz2(**ckpt['hyper_parameters'])
model(batch)  # batch: any properly featurized input batch
```
→ `AttentionPairBias.forward() got an unexpected keyword argument 'k_in'`
Root cause
The official pipeline instantiates the architecture from the Hydra YAML (`full.yaml`), which explicitly sets `v2: True`. The checkpoint is only used to reload weights (`load_from_checkpoint` with `strict=False`). As a result, `v2` is never serialized into `hyper_parameters`.
When instantiating directly from the checkpoint, `v2` defaults to `False`, so `AttentionPairBias` (V1) is built instead of `AttentionPairBiasV2`, the variant that accepts the `k_in` argument the caller passes.
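For reference, a minimal sketch of what the official pipeline effectively does (the layout of `full.yaml` is assumed here, with the model kwargs at the top level; only the `v2: True` flag is taken from the actual config):

```python
from omegaconf import OmegaConf
from boltz.model.models.boltz2 import Boltz2

# Hyperparameters come from the Hydra YAML, not from the checkpoint.
cfg = OmegaConf.load('full.yaml')  # contains pairformer_args with v2: True

# Kwargs passed to load_from_checkpoint override the checkpoint's stored
# hyper_parameters; strict=False means the checkpoint only supplies weights.
model = Boltz2.load_from_checkpoint(
    'boltz2_conf.ckpt',
    strict=False,
    **OmegaConf.to_container(cfg, resolve=True),
)
```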
Additional subtlety
`pairformer_args` is an `omegaconf.DictConfig`, not a plain dict, so `isinstance(x, dict)` returns `False`. Patching it therefore needs a check on item assignment, e.g. `hasattr(x, '__setitem__')`:
```python
if 'pairformer_args' in hparams and hasattr(hparams['pairformer_args'], '__setitem__'):
    hparams['pairformer_args']['v2'] = True
```
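Putting it together, a workaround that unblocks direct instantiation today (a sketch, assuming the weights sit under the usual Lightning `state_dict` key):

```python
import torch
from boltz.model.models.boltz2 import Boltz2

ckpt = torch.load('boltz2_conf.ckpt', weights_only=False)
hparams = ckpt['hyper_parameters']

# Re-add the flag that was never serialized (DictConfig supports item assignment).
if 'pairformer_args' in hparams and hasattr(hparams['pairformer_args'], '__setitem__'):
    hparams['pairformer_args']['v2'] = True

model = Boltz2(**hparams)

# strict=False mirrors the official pipeline's weight reload.
missing, unexpected = model.load_state_dict(ckpt['state_dict'], strict=False)
model.eval()
```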
Expected behavior
`pairformer_args.v2` should be serialized in the checkpoint `hyper_parameters`, or `Boltz2.__init__` should default to `v2=True` when loading from a checkpoint that contains `AttentionPairBiasV2` weights.
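For the second option, a possible shape of the check (purely illustrative; the key fragment below is a placeholder for a real parameter name that exists only in `AttentionPairBiasV2`):

```python
def checkpoint_uses_pairformer_v2(state_dict: dict) -> bool:
    # Hypothetical heuristic: 'v2_only_param' stands in for a parameter
    # that AttentionPairBiasV2 defines and the V1 module does not.
    return any('v2_only_param' in name for name in state_dict)
```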
Impact
Blocks all direct usage of the model class: custom inference, fine-tuning, LoRA integration, benchmarking without Hydra/Lightning.
Environment
Boltz version: v2.2.1
Checkpoint: boltz2_conf.ckpt (official release)
Python: 3.11 / PyTorch 2.x