🤗 Optimum TPU
Optimum TPU provides all the necessary machinery to leverage and optimize AI workloads running on Google Cloud TPU devices.
The API provides largely the same user experience as Hugging Face Transformers, with the minimal changes required to target performance for inference and training.
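To give a feel for that workflow, here is a minimal sketch of running inference on a TPU VM using the standard Transformers API together with PyTorch/XLA; it does not rely on any Optimum TPU-specific calls, and the model checkpoint and generation settings are placeholders.

```python
# Minimal inference sketch on a TPU VM using plain Transformers + PyTorch/XLA.
# The checkpoint and generation settings below are placeholders.
import torch_xla.core.xla_model as xm
from transformers import AutoModelForCausalLM, AutoTokenizer

device = xm.xla_device()  # resolves to the TPU device on a TPU VM

model_id = "gpt2"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to(device)

inputs = tokenizer("Hello, TPU!", return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```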
Installation
Optimum TPU aims to reduce the friction of using Google Cloud TPU accelerators as much as possible. To that end, we provide a pip-installable package so that everyone can get started easily.
Run Cloud TPU with pip
```bash
pip install optimum-tpu -f https://storage.googleapis.com/libtpu-releases/index.html
```
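Once installed, a quick sanity check is to confirm that PyTorch/XLA can see the TPU and execute a small computation on it. This is a sketch, assuming you are on a TPU VM with torch and torch_xla available.

```python
# Sanity check: confirm PyTorch/XLA resolves a TPU device and can run on it.
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()                 # e.g. xla:0 on a TPU VM
x = torch.randn(2, 2).to(device)         # move a small tensor onto the TPU
print(device, torch.matmul(x, x).cpu())  # run a matmul on the TPU, fetch the result
```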
Run Cloud TPU within Docker container
PyTorch
```bash
export TPUVM_IMAGE_URL=us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla
export TPUVM_IMAGE_VERSION=8f1dcd5b03f993e4da5c20d17c77aff6a5f22d5455f8eb042d2e4b16ac460526

docker pull ${TPUVM_IMAGE_URL}@sha256:${TPUVM_IMAGE_VERSION}
docker run -ti --rm --privileged --network=host ${TPUVM_IMAGE_URL}@sha256:${TPUVM_IMAGE_VERSION} bash
```
From there, you can install optimum-tpu by following the pip instructions above.
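Inside the container, a simple way to confirm the setup is to check that the relevant packages import cleanly. This is a sketch; the optimum.tpu module path is an assumption based on the usual optimum namespace layout.

```python
# Post-install check inside the container: verify the packages import cleanly.
# The "optimum.tpu" module path is assumed from the optimum namespace layout.
import importlib

for module in ("torch_xla", "optimum.tpu"):
    importlib.import_module(module)  # raises ImportError if the install is broken
    print(f"{module}: OK")
```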
Learn the basics and become familiar with deploying transformers on Google TPUs. Start here if you are using 🤗 Optimum-TPU for the first time!
Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use 🤗 Optimum-TPU to solve real-world problems.