I converted Belle-whisper-large-v2 with CTranslate2, and the resulting model is almost the same size as faster-whisper-large-v2. But when the word_timestamps parameter is True, the Belle model takes much longer (at least 3x, sometimes 10x) than the faster-whisper model. Is this normal?
I converted the model with the following command:
ct2-transformers-converter --model .\Belle-whisper-large-v2-zh\ --output_dir faster-belle-whisper-large-v2-zh --copy_files preprocessor_config.json --quantization float16
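For reference, this is roughly how I measure the difference (a minimal sketch; the converted model directory, the device/compute_type settings, and "audio.wav" are placeholders for my actual setup):

# Timing comparison with and without word timestamps, using faster-whisper.
import time
from faster_whisper import WhisperModel

# Load the CTranslate2-converted Belle model (path is a placeholder).
model = WhisperModel("faster-belle-whisper-large-v2-zh", device="cuda", compute_type="float16")

for word_ts in (False, True):
    start = time.time()
    segments, info = model.transcribe("audio.wav", word_timestamps=word_ts)
    # segments is a generator; consume it so the full decoding is timed.
    text = "".join(seg.text for seg in segments)
    print(f"word_timestamps={word_ts}: {time.time() - start:.1f}s")

The same script with the stock faster-whisper-large-v2 model shows a much smaller gap between the two runs.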