Android real-time transcription #4
Comments
@salehsoleimani this library is derived from RTranslator, an Android app, so 1.5GB is not too much for mobile devices (although it is certainly quite heavy).
Can you provide an Android example in this repo?
Are you sure it works in real time on mobile devices? I tried out RTranslator and it didn't seem to be real-time! It takes at least 3 to 6 seconds to process each chunk.
@salehsoleimani
It depends a lot on the phone you use (mine takes 1.6 to 2 seconds per chunk), but yes, the audio is always processed in chunks, no matter how small or fast the model is. For true real-time speech recognition with Whisper, the only option I know of is the stream version of whisper.cpp.
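For illustration, here is a minimal Kotlin sketch of the chunked flow described above: record a fixed-length chunk from the microphone, convert it to 16 kHz float PCM, and run inference on it, so the perceived latency is roughly the chunk length plus the inference time. The `transcribeChunk` function is a hypothetical stand-in for whatever binding the library or app actually exposes.

```kotlin
import android.media.AudioFormat
import android.media.AudioRecord
import android.media.MediaRecorder

// Hypothetical stand-in for the actual Whisper inference binding.
fun transcribeChunk(samples: FloatArray): String = TODO("run Whisper inference on the chunk")

// Requires the RECORD_AUDIO permission.
fun recordAndTranscribe(chunkSeconds: Int = 5) {
    val sampleRate = 16_000  // Whisper expects 16 kHz mono audio
    val minBuf = AudioRecord.getMinBufferSize(
        sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT
    )
    val recorder = AudioRecord(
        MediaRecorder.AudioSource.MIC, sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuf
    )
    val chunk = ShortArray(sampleRate * chunkSeconds)
    recorder.startRecording()
    try {
        while (true) {
            // Block until one full chunk of 16-bit PCM has been captured.
            var read = 0
            while (read < chunk.size) {
                val n = recorder.read(chunk, read, chunk.size - read)
                if (n <= 0) return  // recording error, stop the loop
                read += n
            }
            // Convert to the [-1, 1] float range Whisper models expect.
            val floats = FloatArray(chunk.size) { chunk[it] / 32768f }
            println(transcribeChunk(floats))  // inference time adds on top of the chunk length
        }
    } finally {
        recorder.stop()
        recorder.release()
    }
}
```

Shrinking the chunk lowers the delay but gives the model less context per call, which is why a streaming approach is needed for true real-time output.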
Thanks for the reply. Where you mentioned 1.6 seconds per chunk, do you mean RTranslator or this repo? Also, do you have any examples for the code you replied with?
I mean RTranslator. And what do you mean by an example?
An example of an Android implementation.
Oh, OK. There is a whisper.cpp example app for Android, but it doesn't implement stream inference for Whisper. You could implement it yourself by understanding how the stream version works and porting it to Android in C++ (the code is in the example I linked in the previous message, and the issue linked on that page explains how it works).
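To make that concrete, here is a rough Kotlin-side sketch of the sliding-window idea behind whisper.cpp's stream example: inference is re-run every few hundred milliseconds over the most recent audio, keeping a short tail of the previous window as context so words at the boundary are not cut in half. The real example does this in C++; `transcribeChunk` is again a hypothetical binding, and the keep/length parameters only loosely mirror that example's options.

```kotlin
// Hypothetical stand-in for the actual native Whisper binding.
fun transcribeChunk(samples: FloatArray): String = TODO("run Whisper inference on the window")

class StreamingTranscriber(
    sampleRate: Int = 16_000,  // Whisper expects 16 kHz mono
    keepMs: Int = 200,         // audio carried over from the previous window as context
    lengthMs: Int = 5_000      // maximum window length fed to the model
) {
    private val keepSamples = sampleRate * keepMs / 1000
    private val maxSamples = sampleRate * lengthMs / 1000
    private var window = FloatArray(0)

    // Call this every "step" (e.g. every ~500 ms of captured audio); it returns
    // the current partial transcript of the sliding window.
    fun onNewAudio(newSamples: FloatArray): String {
        window += newSamples
        if (window.size > maxSamples) {
            // Drop old audio, but keep a short tail plus the new step.
            val start = (window.size - keepSamples - newSamples.size).coerceAtLeast(0)
            window = window.copyOfRange(start, window.size)
        }
        return transcribeChunk(window)  // re-transcribe the whole (short) window
    }
}
```

Because each call only transcribes a few seconds of audio, partial results appear quickly and get refined as more audio arrives, instead of waiting for a full chunk to finish.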
Nice, thanks. I appreciate it.
Hey! Thanks for the great repo.
Is there any chance of running Whisper in real time on mobile devices? According to your docs, 1.5GB is too much for mobile devices. Any chance of getting memory usage down to around ~300MB?