
Apple MPS support not found in the master branch #180

Closed
Fanlen opened this issue Jun 26, 2024 · 4 comments
Labels
bug Something isn't working

Comments

@Fanlen

Fanlen commented Jun 26, 2024

The current master branch does not seem to include the support for Apple MPS that was added in #170.

This device-detection code is not found in modules/faster_whisper_inference.py:

    if torch.cuda.is_available():
        self.device = "cuda"
    elif torch.backends.mps.is_available():
        self.device = "mps"
    else:
        self.device = "cpu"

It is not found in modules/whisper_Inference.py either.

@Fanlen Fanlen added the bug Something isn't working label Jun 26, 2024
@jhj0517
Owner

jhj0517 commented Jun 26, 2024

Hi. To integrate with various whisper implementations such as insanely-fast-whisper,
I created an abstract class WhisperBase() and moved the device logic there:

@staticmethod
def get_device():
    if torch.cuda.is_available():
        return "cuda"
    elif torch.backends.mps.is_available():
        return "mps"
    else:
        return "cpu"

So the mps device should be detected when the script starts. But if you don't see

Device "mps" is detected

when you run the shell script, please let me know.
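To make the pattern concrete, here is a self-contained sketch of that abstract-base approach. The class and method names follow the comment above, but the body is illustrative, not the repository's actual code, and it falls back to "cpu" when torch is not installed so it runs anywhere:

```python
from abc import ABC, abstractmethod


def get_device() -> str:
    # Device detection as described above; returns "cpu" when torch
    # is unavailable so this sketch stays runnable on its own.
    try:
        import torch
    except ImportError:
        return "cpu"
    if torch.cuda.is_available():
        return "cuda"
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"


class WhisperBase(ABC):
    # Shared base class: every whisper implementation subclasses this
    # and inherits the detected device instead of duplicating the logic.
    def __init__(self):
        self.device = get_device()

    @abstractmethod
    def transcribe(self, audio_path: str):
        ...
```

With this in place, modules/faster_whisper_inference.py and modules/whisper_Inference.py no longer need their own device checks, which is why the snippet from the original report is absent from master.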

@Fanlen
Author

Fanlen commented Jun 26, 2024

@jhj0517 Thanks. My device list:

   Apple M2 Ultra:

  Chipset Model: Apple M2 Ultra
  Type: GPU
  Bus: Built-In
  Total Number of Cores: 60
  Vendor: Apple (0x106b)
  Metal Support: Metal 3
  Displays:
     S2700:
      Resolution: 3840 x 2160 (2160p/4K UHD 1 - Ultra High Definition)
      UI Looks like: 1920 x 1080 @ 60.00Hz
      Main Display: Yes
      Mirror: Off
      Online: Yes
      Rotation: Supported

I ran Install.sh and then start-webui.sh, and the error message appears as follows:
./start-webui.sh
venv ./venv/bin/python
/Users/hengshi/Case/Whisper-WebUI/venv/lib/python3.9/site-packages/urllib3/__init__.py:35: NotOpenSSLWarning: urllib3 v2 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'LibreSSL 2.8.3'. See: urllib3/urllib3#3020
warnings.warn(
Use "faster-whisper" implementation
Device "mps" is detected
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
model.bin: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.09G/3.09G [02:10<00:00, 8.84MB/s]
Error transcribing file: unsupported device mps
Traceback (most recent call last):
File "/Users/hengshi/Case/Whisper-WebUI/venv/lib/python3.9/site-packages/gradio/queueing.py", line 527, in process_events
response = await route_utils.call_process_api(
File "/Users/hengshi/Case/Whisper-WebUI/venv/lib/python3.9/site-packages/gradio/route_utils.py", line 270, in call_process_api
output = await app.get_blocks().process_api(
File "/Users/hengshi/Case/Whisper-WebUI/venv/lib/python3.9/site-packages/gradio/blocks.py", line 1856, in process_api
data = await self.postprocess_data(fn_index, result["prediction"], state)
File "/Users/hengshi/Case/Whisper-WebUI/venv/lib/python3.9/site-packages/gradio/blocks.py", line 1634, in postprocess_data
self.validate_outputs(fn_index, predictions) # type: ignore
File "/Users/hengshi/Case/Whisper-WebUI/venv/lib/python3.9/site-packages/gradio/blocks.py", line 1610, in validate_outputs
raise ValueError(
ValueError: An event handler (transcribe_file) didn't receive enough output values (needed: 2, received: 1).
Wanted outputs:
[<gradio.components.textbox.Textbox object at 0x327259190>, <gradio.templates.Files object at 0x3272591c0>]
Received outputs:
[None]
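For context, the "needed: 2, received: 1" error at the bottom of the traceback is a symptom, not the root cause: the handler is wired to two Gradio outputs (a Textbox and Files), but when transcription fails on the unsupported mps device it returns a single None. A minimal, Gradio-free sketch of that failure mode (function name and values hypothetical, mirroring the log above):

```python
def transcribe_file(file_path: str):
    # Hypothetical handler bound to two outputs (text + result files).
    try:
        if not file_path.endswith(".wav"):
            # Simulated failure, standing in for "unsupported device mps".
            raise ValueError("unsupported device mps")
        return "transcribed text", ["subtitle.srt"]
    except Exception as e:
        print(f"Error transcribing file: {e}")
        # Returning a single None where two values are expected is what
        # makes Gradio raise "needed: 2, received: 1".
        return None
```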

@jhj0517
Owner

jhj0517 commented Jun 26, 2024

Thanks for reporting this.
This should be fixed in #182.

But I don't know whether faster-whisper actually supports the "mps" device.
In faster-whisper, the available devices are documented as ("cpu", "cuda", "auto").

So I just set the device to "auto" when mps is detected; it should support mps anyway.
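The fallback described above can be sketched as a small mapping helper (the function name is hypothetical, not the repository's code; the constraint that faster-whisper's CTranslate2 backend accepts only "cpu", "cuda", and "auto" is from its documentation):

```python
def resolve_ct2_device(detected: str) -> str:
    # Map a torch-detected device to one faster-whisper accepts.
    supported = {"cpu", "cuda", "auto"}
    if detected in supported:
        return detected
    # "mps" (or anything else unsupported) falls back to "auto",
    # letting CTranslate2 pick the best backend it can find.
    return "auto"
```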

@Fanlen
Author

Fanlen commented Jun 26, 2024

@jhj0517
After the update, it runs perfectly using the MPS device, and the speed has improved significantly. Thank you again for your efforts.

@Fanlen Fanlen closed this as completed Jun 26, 2024