A working Google Colab (Jupyter) notebook setup for TRELLIS. This README details the environment setup, package versions, and steps used to run the Microsoft TRELLIS project, along with the mip-splatting submodules, FlashAttention (2.5.8), spconv, and Kaolin (0.17.0). Everything is pinned to CUDA 11.8 and PyTorch 2.3.0 for reproducibility.
If you are looking for just the Google Colab notebook, go here: Open In Colab
- NVIDIA GPU with CUDA Compute Capability >= 7.0 (recommended)
- Google Colab GPU Runtime (or equivalent local environment)
- Ubuntu 22.04 or a similar Linux environment for best compatibility
- Miniconda/Conda environment management
When using Colab, be sure to enable a GPU runtime:
Runtime > Change runtime type > Hardware accelerator: GPU.
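To confirm the runtime actually exposes a GPU before installing anything (assuming the standard Colab image, which ships with the NVIDIA driver and nvidia-smi), a quick check is:

```bash
# Should list the attached GPU and its driver/CUDA versions; if this fails, the runtime has no GPU
nvidia-smi
```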
Below is a summary of major packages, pinned versions, and the sources we use:
- Operating System
  - Ubuntu 22.04 (default Google Colab base image)
- CUDA
  - CUDA 11.8 (installed from NVIDIA's Ubuntu 22.04 repo)
  - GPG key and repo from: https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/
- Python
  - Python 3.11 (via Miniconda)
- Conda Environment
  - Environment name: trellis
- PyTorch & Related
  - PyTorch 2.3.0
  - TorchVision 0.18.0
  - TorchAudio 2.3.0
  - pytorch-cuda=11.8
- FlashAttention
  - flash-attn==2.5.8 (cu118, torch2.3, cxx11abiFALSE build) from https://github.com/Dao-AILab/flash-attention
- spconv
  - spconv-cu118 (PyPI)
- Kaolin
  - kaolin==0.17.0, installed from https://nvidia-kaolin.s3.us-east-2.amazonaws.com/torch-2.3.0_cu118.html
- Pillow
  - pillow<11.0 (forced reinstall to avoid compatibility issues)
- Gradio
  - gradio==4.44.1
  - gradio_litmodel3d==0.0.1
- System Tools
  - build-essential, cmake, ninja-build, nvidia-cuda-toolkit
  - Python build libraries: pip, setuptools, wheel, ninja, Cython
- Microsoft TRELLIS
  - Cloned from https://github.com/microsoft/TRELLIS with submodules
  - Submodule builds: flash-attn, spconv, mipgaussian, nvdiffrast, kaolin
  - Patched app.py to use demo.launch(share=True)
- mip-splatting
  - Cloned from https://github.com/autonomousvision/mip-splatting
  - Installs the diff_gaussian_rasterization submodule
- Ngrok
  - Used to expose the local Gradio app externally
  - Make sure you have an ngrok auth token
(or just view the kaiser-colab file for full cells)
- Update apt and install gnupg
- Download and install cuda-keyring_1.1-1_all.deb from NVIDIA, then run apt-get update again
- Install cuda-11-8
- (Optional) Check nvcc --version
```bash
# Example snippet:
sudo apt-get update -y
sudo apt-get install -y gnupg
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update -y
sudo apt-get install -y cuda-11-8
export PATH="/usr/local/cuda-11.8/bin:$PATH"
nvcc --version
```
- Install Miniconda inside Colab using condacolab
- Activate the conda environment in a Bash cell
- Create trellis with Python 3.11
- Install PyTorch 2.3.0 + CUDA 11.8, flash-attn, spconv, kaolin, etc.
```bash
# Example snippet:
pip install -q condacolab
python -c "import condacolab; condacolab.install_miniconda()"

# Then in a bash cell:
source /usr/local/etc/profile.d/conda.sh
conda create -y -n trellis python=3.11
conda activate trellis
conda install pytorch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.5.8/flash_attn-2.5.8+cu118torch2.3cxx11abiFALSE-cp311-cp311-linux_x86_64.whl
pip install spconv-cu118
pip install kaolin==0.17.0 -f https://nvidia-kaolin.s3.us-east-2.amazonaws.com/torch-2.3.0_cu118.html
pip install --force-reinstall "pillow<11.0"
pip install gradio==4.44.1 gradio_litmodel3d==0.0.1
```
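As an optional sanity check (not part of the original cells), you can verify that the pinned versions resolved as expected inside the trellis environment; a minimal sketch:

```bash
# Optional check: print installed versions (assumes the trellis env created above)
source /usr/local/etc/profile.d/conda.sh
conda activate trellis
python -c "import torch, flash_attn, kaolin; print(torch.__version__, torch.version.cuda, torch.cuda.is_available(), flash_attn.__version__, kaolin.__version__)"
```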
- Install build-essential, cmake, ninja-build, nvidia-cuda-toolkit
- Activate trellis again
- Upgrade the Python build libraries
```bash
apt-get update -y
apt-get install -y build-essential cmake ninja-build nvidia-cuda-toolkit
source /usr/local/etc/profile.d/conda.sh
conda activate trellis
pip install --upgrade pip setuptools wheel ninja Cython
```
- Remove any previous /content/TRELLIS and re-clone with submodules (see the snippet below)
- Create assets/example_image so app.py doesn't crash
- Run setup.sh --basic --flash-attn --spconv --mipgaussian --nvdiffrast --kaolin --demo
- Patch app.py to set demo.launch(share=True)
```bash
rm -rf /content/TRELLIS
git clone --recurse-submodules https://github.com/microsoft/TRELLIS.git /content/TRELLIS
cd /content/TRELLIS
git submodule update --init --recursive
mkdir -p /content/TRELLIS/assets/example_image
bash ./setup.sh --basic --flash-attn --spconv --mipgaussian --nvdiffrast --kaolin --demo
sed -i 's/demo.launch()$/demo.launch(share=True)/' app.py
```
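To confirm the patch took effect (an optional check, not part of the original cells), grep for the launch call:

```bash
# Should print the patched line containing demo.launch(share=True)
grep -n "demo.launch" /content/TRELLIS/app.py
```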
- Clone mip-splatting with submodules and initialize them (see the snippet below)
- Install system libs: libgl1-mesa-dev, etc.
- Install the diff_gaussian_rasterization submodule
```bash
cd /content
git clone --recurse-submodules https://github.com/autonomousvision/mip-splatting.git
cd mip-splatting
git submodule update --init --recursive
apt-get install -y libgl1-mesa-dev
cd submodules/diff-gaussian-rasterization
pip install . --no-build-isolation -v
```
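If you want to confirm the rasterizer extension built and imports cleanly (an optional check, assuming it was installed into the trellis env), try:

```bash
source /usr/local/etc/profile.d/conda.sh
conda activate trellis
python -c "import diff_gaussian_rasterization; print('diff_gaussian_rasterization OK')"
```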
- Make sure ngrok is installed in your environment or is available on Colab (a sketch follows after this list)
- Set your ngrok auth token (replace YOUR_NGROK_AUTH_TOKEN below)
- Expose port 7860
- Run app.py in the trellis environment
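The cells below assume the ngrok binary is already on PATH. If it is not, one common approach (not from the original notebook; check ngrok's download page for the current instructions) is to install the agent from ngrok's apt repository:

```bash
# Assumed install path: ngrok's documented apt repository (verify against https://ngrok.com/download)
curl -sSL https://ngrok-agent.s3.amazonaws.com/ngrok.asc | tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null
echo "deb https://ngrok-agent.s3.amazonaws.com buster main" | tee /etc/apt/sources.list.d/ngrok.list
apt-get update -y && apt-get install -y ngrok
```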
```python
import subprocess, time, json

NGROK_AUTH = "YOUR_NGROK_AUTH_TOKEN"
subprocess.run(["ngrok", "config", "add-authtoken", NGROK_AUTH], check=True)

# Start the ngrok tunnel for the Gradio port and query its public URL
proc_ngrok = subprocess.Popen(["ngrok", "http", "7860"])
time.sleep(4)
tunnels_json = subprocess.check_output(["curl", "-s", "127.0.0.1:4040/api/tunnels"])
public_url = json.loads(tunnels_json)["tunnels"][0]["public_url"]
print("NGROK URL =>", public_url)

# Launch app.py inside the trellis conda env and stream its logs into the notebook
p = subprocess.Popen(
    ["conda", "run", "-n", "trellis", "python", "-u", "app.py"],
    cwd="/content/TRELLIS",
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
)
try:
    for line in iter(p.stdout.readline, b""):
        if not line:
            break
        print(line.decode(), end="")
except KeyboardInterrupt:
    print("[INFO] Interrupted by user.")
    p.kill()
finally:
    p.wait()
```
You will see logs from the Gradio/TRELLIS app in real time. Click the NGROK URL to open the Gradio interface in your browser.
- TRELLIS (Microsoft): https://github.com/microsoft/TRELLIS
- mip-splatting (autonomousvision): https://github.com/autonomousvision/mip-splatting
- FlashAttention: https://github.com/Dao-AILab/flash-attention
- Kaolin (NVIDIA)
- Ngrok: https://ngrok.com/ (after you sign up, get your token from the dashboard)
- condacolab
Feel free to open issues or pull requests if you encounter any build problems or want to add instructions for other OS distributions.
Enjoy exploring TRELLIS, Mip-Splatting, and advanced PyTorch-based 3D frameworks!