Python tracing 1.5x slower in docker for Stable Diffusion
🐛 Bug
While running the SDXL model on v5p, I see the following difference in tracing times on Docker vs. native TPU:
Docker:
Native TPU:
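To make the comparison concrete, below is a minimal sketch (not part of the original report) of one way to measure per-step Python tracing time: with torch_xla's lazy tensors, the forward/backward/optimizer calls only record operations into a graph, and the device work is dispatched at xm.mark_step(), so the wall-clock time of the step body before mark_step approximates the tracing cost. The toy Linear model here is a stand-in for the SDXL UNet.

```python
# Illustrative sketch: approximate per-step Python tracing time by timing the
# lazy-tensor step body before the graph is dispatched at mark_step().
import time
import torch
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met

device = xm.xla_device()
model = torch.nn.Linear(1024, 1024).to(device)   # stand-in for the SDXL UNet
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

for step in range(10):
    x = torch.randn(8, 1024, device=device)
    t0 = time.perf_counter()
    loss = model(x).sum()        # lazy: only records ops into the graph
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    trace_ms = (time.perf_counter() - t0) * 1000  # Python tracing cost for this step
    xm.mark_step()               # dispatch the traced graph for compilation/execution
    print(f"step {step}: tracing ~{trace_ms:.1f} ms")

print(met.metrics_report())      # compile/execute breakdown from torch_xla
```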
To Reproduce
The training script is at: https://github.com/entrpn/diffusers/blob/sdxl_training_bbahl/examples/research_projects/pytorch_xla/training/text_to_image/README_sdxl.md
I took an Ubuntu 22.04 Docker image, installed Miniconda on it, built torch and torch_xla wheels with
export _GLIBCXX_USE_CXX11_ABI=1
and ran the above training script.
Expected behavior
Docker shouldn't have any overhead.
Environment
Reproducible on XLA backend [CPU/TPU/CUDA]: TPU
torch_xla version: nightly 03/21/2025