Here is the terminal output from running a Boltz-2 inference in a conda environment on a Linux system with an NVIDIA RTX A6000 GPU and CUDA 13.2:
=====
(boltz) rjrich@rjr-sd1:~/boltz$ boltz predict examples/prot2_2ligs.yaml --use_msa_server
MSA server enabled: https://api.colabfold.com
MSA server authentication: no credentials provided
Checking input data.
Processing 1 inputs with 1 threads.
  0%|          | 0/1 [00:00<?, ?it/s]Generating MSA for examples/prot2_2ligs.yaml with 1 protein entities.
Calling MSA server for target prot2_2ligs with 1 sequences
MSA server URL: https://api.colabfold.com
MSA pairing strategy: greedy
No authentication provided for MSA server
COMPLETE: 100%|██████████████████████| 150/150 [elapsed: 00:02 remaining: 00:00]
100%|█████████████████████████████████████████████| 1/1 [00:02<00:00,  2.53s/it]
Using bfloat16 Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
/home/rjrich/anaconda3/envs/boltz/lib/python3.12/site-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py:76: Starting from v1.9.0, `tensorboardX` has been removed as a dependency of the `pytorch_lightning` package, due to potential conflicts with other packages in the ML ecosystem. For this reason, `logger=True` will use `CSVLogger` as the default logger, unless the `tensorboard` or `tensorboardX` packages are found. Please `pip install lightning[extra]` or one of them to enable TensorBoard support by default
Running structure prediction for 1 input.
/home/rjrich/anaconda3/envs/boltz/lib/python3.12/site-packages/pytorch_lightning/utilities/migration/utils.py:56: The loaded checkpoint was produced with Lightning v2.5.0.post0, which is newer than your current Lightning version: v2.5.0
You are using a CUDA device ('NVIDIA RTX A6000') that has Tensor Cores. To properly utilize them, you should set `torch.set_float32_matmul_precision('medium' | 'high')` which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]
/home/rjrich/anaconda3/envs/boltz/lib/python3.12/site-packages/pytorch_lightning/utilities/_pytree.py:21: `isinstance(treespec, LeafSpec)` is deprecated, use `isinstance(treespec, TreeSpec) and treespec.is_leaf()` instead.
Predicting DataLoader 0:   0%|          | 0/1 [00:00<?, ?it/s]/home/rjrich/anaconda3/envs/boltz/lib/python3.12/site-packages/torch/jit/_script.py:1488: DeprecationWarning: `torch.jit.script` is deprecated. Please switch to `torch.compile` or `torch.export`.
  warnings.warn(
/home/rjrich/anaconda3/envs/boltz/lib/python3.12/site-packages/cuequivariance_ops_torch/triangle_attention.py:199: UserWarning: Non-SM100f kernel expects bias to be float32 so it's going to be cast to torch.float32. Check if you can change your code for maximum performance.
  warnings.warn(
Predicting DataLoader 0: 100%|████████████████████| 1/1 [00:14<00:00,  0.07it/s]Number of failed examples: 0
Predicting DataLoader 0: 100%|████████████████████| 1/1 [00:14<00:00,  0.07it/s]
(boltz) rjrich@rjr-sd1:~/boltz$
=====
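If it helps frame the question: my understanding of the Tensor Cores message is that it refers to PyTorch's global float32 matmul precision setting, roughly as sketched below. This is my own sketch run in plain Python, not anything Boltz documents; I have not verified where (or whether) Boltz exposes this setting.

```python
import torch

# The Lightning hint points at PyTorch's global matmul precision knob.
# "high" allows float32 matmuls to use TF32 on Tensor Core GPUs (faster,
# slightly less precise); "highest" is the conservative default.
torch.set_float32_matmul_precision("high")
print(torch.get_float32_matmul_precision())  # high
```

Presumably this would need to run inside the Boltz process before the model executes, which is why I am unsure it can be applied from the command line.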
Can the warnings shown above be safely ignored? Alternatively, what steps could I take so that Boltz-2 runs without producing them? Thank you for any suggestions.
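For the DeprecationWarnings specifically, my working assumption is that they originate in Boltz's dependencies rather than in my input, and that they could be silenced with standard `warnings` filters (or the equivalent `PYTHONWARNINGS=ignore::DeprecationWarning` environment variable) if Boltz were launched through a small wrapper script. A minimal stdlib-only sketch of the filtering mechanism, not Boltz-specific code:

```python
import warnings

# Ignore DeprecationWarning globally, as a wrapper might do before importing
# torch / pytorch_lightning. (A real wrapper would likely use narrower,
# module-specific filters instead of a blanket ignore.)
warnings.filterwarnings("ignore", category=DeprecationWarning)

with warnings.catch_warnings(record=True) as caught:
    # Re-emit a warning shaped like the one in the log above; the ignore
    # filter installed earlier suppresses it, so nothing is recorded.
    warnings.warn("`torch.jit.script` is deprecated.", DeprecationWarning)

print(len(caught))  # 0
```

Is suppression like this reasonable here, or do these warnings indicate something that actually needs fixing in my environment?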