diff --git a/README.md b/README.md
index 2a4a9d0..a1005fa 100644
--- a/README.md
+++ b/README.md
@@ -211,6 +211,13 @@ Note: ONNX Runtime is not required to run the MoE layer. It is integrated in sta
+**EOL NOTICE: OpenVINO™ integration with Torch-ORT will no longer be supported as of the OpenVINO™ 2023.0 release.**
+
+If you are looking to deploy your PyTorch models on Intel-based devices, you have a few options.
+If you prefer the native PyTorch framework APIs, consider using the Intel Extension for PyTorch (IPEX). Another option is to use the [OpenVINO Model Conversion API](https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html), which imports and converts standard PyTorch models automatically at runtime, so it is no longer necessary to convert your PyTorch models offline.
+
+**END OF EOL NOTICE**
+
 ONNX Runtime for PyTorch supports PyTorch model inference using ONNX Runtime and Intel® OpenVINO™. It is available via the torch-ort-infer Python package. This package enables the OpenVINO™ Execution Provider for ONNX Runtime by default, accelerating inference on various Intel® CPUs, Intel® integrated GPUs, and Intel® Movidius™ Vision Processing Units (VPUs).
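
For context on the inference path described in the last paragraph of the hunk above, here is a minimal sketch of wrapping a PyTorch model with the torch-ort-infer package so inference runs through ONNX Runtime with the OpenVINO™ Execution Provider. It assumes the package exports `ORTInferenceModule` under the `torch_ort` namespace and that the default provider configuration targets the CPU; the exact names and defaults shown are illustrative, not authoritative.

```python
import torch
from torch_ort import ORTInferenceModule  # assumed to be provided by the torch-ort-infer package

# Any standard PyTorch module can be wrapped; a tiny example model is used here.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 4),
)
model.eval()

# Wrapping the module routes inference through ONNX Runtime, which uses the
# OpenVINO™ Execution Provider by default (CPU backend unless configured otherwise).
ort_model = ORTInferenceModule(model)

with torch.no_grad():
    x = torch.randn(8, 16)
    print(ort_model(x).shape)  # torch.Size([8, 4])
```

Given the EOL notice, this wrapping approach applies only to existing torch-ort-infer deployments; new projects targeting Intel® hardware should prefer IPEX or the OpenVINO Model Conversion API mentioned above.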