onnxruntime-gpu and CUDA 12? #544
Replies: 3 comments
-
Are you able to run Piper TTS on GPU?
-
The Python version of Piper works with CUDA; the executable (C++) version doesn't. Many people (myself included) have found that they can't install the Python version with pip, but you can follow these instructions to build a .whl file for piper-phonemize. You can then install piper-tts and onnxruntime-gpu, and run piper with the --cuda flag.
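For reference, the steps above might look something like this (a sketch: the wheel filename and the voice model name are illustrative, not taken from the thread):

```shell
# Install the locally built piper-phonemize wheel (filename is illustrative)
pip install ./piper_phonemize-*.whl

# Install piper-tts and the GPU build of ONNX Runtime
pip install piper-tts onnxruntime-gpu

# Run piper with GPU acceleration (--cuda); model name is an example
echo 'Hello from Piper.' | piper --model en_US-lessac-medium --cuda --output_file out.wav
```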
-
I can report that while I was able to run Piper on GPU (CUDA 12.2), I did not see large performance improvements. This may have been because I was not using onnxruntime-gpu with the proper ONNX configuration or hardware (NVIDIA V100). In fact, I saw the CPU implementation being 5x faster. FYI, I'm using the GPU libraries:
Test utterance: 'Olá, este é um teste de síntese de voz com Piper.'
CPU libraries:
Test utterance: 'Olá, este é um teste de síntese de voz com Piper.'
I believe this may be due to my configuration being out of whack (e.g. mismatched CUDA and cuDNN libraries), although my problems may be hardware- or configuration-specific. A working Dockerfile can be found on the Docker Hub optimum page, which should make testing this library with onnxruntime-gpu more predictable.
-
Hi all.
Does anyone have a recipe for getting Piper-TTS to work with onnxruntime-gpu? Even when using the version that is supposed to work with CUDA 12, it still can't find library files meant for previous versions of CUDA...