[BUG] flash_attn is needed but not specified in requirements #754

Open
Almenon opened this issue Jan 19, 2025 · 3 comments

Almenon commented Jan 19, 2025

是否已有关于该错误的issue或讨论? | Is there an existing issue / discussion for this?

  • 我已经搜索过已有的issues和讨论 | I have searched the existing issues / discussions

该问题是否在FAQ中有解答? | Is there an existing answer for this in FAQ?

  • 我已经搜索过FAQ | I have searched FAQ

当前行为 | Current Behavior

When running the model using the requirements provided I get this error:

ImportError: This modeling file requires the following packages that were not found in your environment: flash_attn. Run `pip install flash_attn`

期望行为 | Expected Behavior

No error - the requirements file should have listed flash attention, or the need for it should be documented somewhere.

Ideally, either remove the hard dependency on flash attention, or provide installation instructions along with a pinned version.

复现方法 | Steps To Reproduce

  1. Install requirements listed in https://huggingface.co/openbmb/MiniCPM-o-2_6#usage
  2. Run the code in https://huggingface.co/openbmb/MiniCPM-o-2_6#model-initialization (roughly the sketch below)
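
For reference, step 2 boils down to roughly the following (a minimal sketch based on the model card; the card's full snippet passes more arguments, and the exact values may have changed since this issue was filed):

```python
# Minimal repro sketch, adapted from the MiniCPM-o-2_6 model card.
# The ImportError is raised while transformers loads the remote modeling code,
# which declares flash_attn as a required import.
import torch
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained(
    "openbmb/MiniCPM-o-2_6",
    trust_remote_code=True,
    attn_implementation="sdpa",      # the card also mentions "flash_attention_2"
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(
    "openbmb/MiniCPM-o-2_6", trust_remote_code=True
)
```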

运行环境 | Environment

- OS: Windows 10 WSL Ubuntu 24
- Python: 3.10
- Transformers: 4.44.2
- PyTorch: 2.3.1+cu121
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 12.1

备注 | Anything else?

Many others have gotten the same error: #429.

This can be fixed by adding flash attention to the requirements file. However, flash attention does not ship prebuilt wheels in its PyPI release, so a better fix would be to tell people to look up the relevant wheel at https://github.com/Dao-AILab/flash-attention/releases and pip install that wheel.

For example, I used https://github.com/Dao-AILab/flash-attention/releases/download/v2.5.8/flash_attn-2.5.8+cu122torch2.3cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
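
The right wheel has to match the Python tag, torch minor version, CUDA version, and C++ ABI encoded in the filename. A small sketch for reading those values from an existing torch install (illustrative only):

```python
# Print the values needed to match a flash-attn wheel filename such as
# flash_attn-2.5.8+cu122torch2.3cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
import sys
import torch

print(f"python tag : cp{sys.version_info.major}{sys.version_info.minor}")  # e.g. cp310
print(f"torch      : {torch.__version__}")                                 # e.g. 2.3.1+cu121
print(f"cuda       : {torch.version.cuda}")                                # e.g. 12.1
print(f"cxx11 abi  : {torch._C._GLIBCXX_USE_CXX11_ABI}")                   # cxx11abiTRUE / cxx11abiFALSE
```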

Almenon (Author) commented Jan 19, 2025

To get it working I did this:

  1. Install Python 3.10 in WSL (native Linux would also work, of course)
  2. Install from this first requirements file. Be prepared for a large download.
--index-url https://download.pytorch.org/whl/cu121
torch==2.3.1
torchaudio==2.3.1
torchvision==0.18.1
  3. Install from this second requirements file (a quick import check is sketched below)
# reqs adapted from https://huggingface.co/openbmb/MiniCPM-o-2_6
Pillow==10.1.0
transformers==4.44.2
librosa==0.9.0
soundfile==0.12.1
vector-quantize-pytorch==1.18.5
vocos==0.1.0

# Install from prebuilt wheel so user doesn't have to install CUDA toolkit & compile themselves
# https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.3/flash_attn-2.7.3+cu12torch2.3cxx11abiTRUE-cp310-cp310-linux_x86_64.whl
# above doesn't work ~ https://github.com/Dao-AILab/flash-attention/issues/975
https://github.com/Dao-AILab/flash-attention/releases/download/v2.5.8/flash_attn-2.5.8+cu122torch2.3cxx11abiFALSE-cp310-cp310-linux_x86_64.whl

# only needed if you work with videos
# decord==0.6.0
# moviepy==2.1.2
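
A quick sanity check after installing both requirements files (illustrative, not part of the original steps):

```python
# Verify the prebuilt flash-attn wheel imports in this environment.
import flash_attn
print(flash_attn.__version__)  # expect 2.5.8 for the wheel pinned above
```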

This fixes the issue, but I'm still leaving the issue open because the need for flash attention should be documented in the README or on the Hugging Face page.

Sadly I went through the trouble of fixing it only to find out I don't have enough memory for the model 😂

YuzaChongyi (Collaborator) commented

Thanks! We will look into the possibility of removing the dependency on flash attention.
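
A possible direction (just a sketch, not the maintainers' plan): if the remote modeling code stopped importing flash_attn unconditionally, callers could probe for it and fall back to SDPA:

```python
# Hypothetical fallback: choose an attention implementation based on what is installed.
# This only helps once the remote modeling code no longer hard-requires flash_attn.
import importlib.util

import torch
from transformers import AutoModel

attn_impl = "flash_attention_2" if importlib.util.find_spec("flash_attn") else "sdpa"
model = AutoModel.from_pretrained(
    "openbmb/MiniCPM-o-2_6",
    trust_remote_code=True,
    attn_implementation=attn_impl,
    torch_dtype=torch.bfloat16,
)
```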

jasstionzyf commented

@YuzaChongyi
Even after installing flash attention, it still doesn't work: vLLM falls back to xformers (vllm-project/vllm#12656):

Cannot use FlashAttention-2 backend for head size 72

So minicpm-v cannot use flash attention?
