From a9e6f4868d6bdc24365c035b073e1768d0ef6b36 Mon Sep 17 00:00:00 2001
From: Pavel Chekin
Date: Fri, 13 Sep 2024 15:16:31 -0700
Subject: [PATCH] oneapi locations for root vs user

---
 README.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index c11900c2a9..73fd6acec5 100644
--- a/README.md
+++ b/README.md
@@ -42,9 +42,10 @@ pip install torch-*.whl triton-*.whl
 ```
 
 Before using Intel® XPU Backend for Triton\* you need to initialize the toolchain.
-By default, it is installed to `/opt/intel/oneapi`:
+The default location is `/opt/intel/oneapi` (if installed as a `root` user) or `~/intel/oneapi` (if installed as a regular user).
 
 ```shell
+# replace /opt/intel/oneapi with the actual location of PyTorch Prerequisites for Intel GPUs
 source /opt/intel/oneapi/setvars.sh
 ```
 
@@ -61,9 +62,10 @@ source /opt/intel/oneapi/setvars.sh
 ```
 
 Currently, Intel® XPU Backend for Triton\* requires a special version of PyTorch and both need to be compiled at the same time.
 Before compiling PyTorch and Intel® XPU Backend for Triton\* you need to initialize the toolchain.
-By default, it is installed to `/opt/intel/oneapi`:
+The default location is `/opt/intel/oneapi` (if installed as a `root` user) or `~/intel/oneapi` (if installed as a regular user).
 
 ```shell
+# replace /opt/intel/oneapi with the actual location of PyTorch Prerequisites for Intel GPUs
 source /opt/intel/oneapi/setvars.sh
 ```
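
The two default locations this patch documents can also be handled automatically. Below is a minimal sketch of a hypothetical helper snippet (not part of the patch or the upstream README) that probes both paths from the patch and sources `setvars.sh` from whichever one exists; the variable name `ONEAPI_ROOT` is an illustrative choice, and real installations may live elsewhere.

```shell
#!/usr/bin/env bash
# Hypothetical helper: pick the oneAPI location depending on whether it was
# installed as root or as a regular user (the two defaults named in the patch).
ONEAPI_ROOT=""
if [ -f /opt/intel/oneapi/setvars.sh ]; then
    ONEAPI_ROOT=/opt/intel/oneapi          # installed as root
elif [ -f "$HOME/intel/oneapi/setvars.sh" ]; then
    ONEAPI_ROOT="$HOME/intel/oneapi"       # installed as a regular user
fi

if [ -n "$ONEAPI_ROOT" ]; then
    # shellcheck disable=SC1091  # path is only known at run time
    source "$ONEAPI_ROOT/setvars.sh"
else
    echo "oneAPI toolchain not found in the default locations" >&2
fi
```

This keeps the README's manual `source /opt/intel/oneapi/setvars.sh` step working unchanged while falling back to the per-user path when the system-wide one is absent.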