Commit 0941eb7

docs: add Google Colab setup and troubleshooting section
Add a Colab-specific setup (PyTorch 2.6.0 + CUDA 12.4, MONAI 1.5) and troubleshooting notes (torchaudio mismatch, filelock conflict, num_workers guidance). Include a quick smoke test to verify the environment.

Signed-off-by: minsu <[email protected]>
1 parent 8b90a16 commit 0941eb7

File tree

1 file changed: +41, -0 lines


README.md

Lines changed: 41 additions & 0 deletions
@@ -41,6 +41,47 @@ Running:
in a cell will verify this has worked and show you what kind of hardware you have access to.

#### Google Colab Setup (CUDA 12.x, PyTorch 2.6, MONAI 1.5)

In Google Colab, the default environment may cause version conflicts with MONAI.
To ensure compatibility, install PyTorch and MONAI explicitly as follows:

    # Install PyTorch 2.6.0 with CUDA 12.4
    pip install --index-url https://download.pytorch.org/whl/cu124 \
        torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0

    # Install MONAI and common dependencies
    pip install "monai[all]" nibabel pydicom ipywidgets==8.1.2
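
To confirm that the pinned builds are the ones actually in use, and to see which GPU Colab has assigned, a quick check can be run in a cell. This is a minimal sketch; the expected version strings in the comments come from the pins above, not guaranteed output:

```python
# Post-install check: report installed versions and the assigned GPU.
import torch
import torchaudio
import monai

print("torch:", torch.__version__)            # should match the 2.6.0+cu124 wheel installed above
print("torchaudio:", torchaudio.__version__)  # should be 2.6.0, not Colab's preinstalled 2.8.0
print("monai:", monai.__version__)            # a 1.5.x release, per the heading above
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```
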
#### Known issues and fixes

- Torchaudio mismatch: Colab may come with torchaudio 2.8.0, which is incompatible with torch 2.6.0. Installing the versions above resolves this issue.
- filelock conflicts with nni: some preinstalled packages (such as pytensor, which depends on a newer filelock) may conflict. Use the following commands to fix this:

      pip uninstall -y pytensor
      pip install -U filelock

- Too many workers warning: Colab has limited CPU resources, and high `num_workers` settings may freeze execution. It is recommended to use `--num_workers=2` when running the tutorials; a notebook-level DataLoader equivalent is sketched after this list.
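
When working inside a notebook rather than launching the tutorial scripts with `--num_workers`, the same limit applies to the `num_workers` argument of the DataLoader you build yourself. A minimal sketch using placeholder tensors (not tutorial data):

```python
# Keep DataLoader workers low on Colab to avoid stalled or frozen cells.
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset: random "images" and labels, not real tutorial data.
dataset = TensorDataset(torch.randn(8, 1, 64, 64), torch.randint(0, 2, (8,)))

loader = DataLoader(
    dataset,
    batch_size=1,
    shuffle=True,
    num_workers=2,                       # mirrors the --num_workers=2 guidance above
    pin_memory=torch.cuda.is_available(),
)

for images, labels in loader:
    print(images.shape, labels.shape)    # torch.Size([1, 1, 64, 64]) torch.Size([1])
    break
```
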
#### Quick smoke test

After installation, verify the environment by running:

    git clone https://github.com/Project-MONAI/tutorials.git
    cd tutorials/3d_segmentation/torch
    python -u unet_training_array.py --max_epochs 2 --batch_size 1 --num_workers 2

If the logs show decreasing training loss and a Dice score, the setup is correct.
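
If cloning the tutorials repository is not convenient, a shorter in-notebook check can at least confirm that MONAI builds a model and runs it on the Colab GPU. A minimal sketch; the network configuration here is illustrative, not the tutorial's:

```python
# In-notebook smoke test: build a small MONAI UNet and run one forward pass.
import torch
from monai.networks.nets import UNet

device = "cuda" if torch.cuda.is_available() else "cpu"
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
).to(device)

x = torch.randn(1, 1, 64, 64, 64, device=device)  # dummy volume, not real data
with torch.no_grad():
    y = model(x)
print(y.shape)  # expected: torch.Size([1, 2, 64, 64, 64])
```
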
#### Data

Some notebooks will require additional data.
