TypeError: fwd(): incompatible function arguments. The following argument types are supported — is my flash-attn installation wrong? #70
Comments
Same problem. I solved it by uninstalling flash-attn and reinstalling evo2 (manually installing flash-attn may not be necessary):

```shell
pip uninstall -y flash-attn transformer-engine
pip install transformer_engine[pytorch]==1.13
pip install . -vv
```

And here's my pip environment:

(savanna) root@g0024:~/zhengyulong/zhengyulong/evo2# pip list
Package Version Editable project location
Package Version Editable project location
------------------------ -------------- ---------------------------------------------
annotated-types 0.7.0
antlr4-python3-runtime 4.9.3
arrow 1.3.0
asttokens 3.0.0
autopep8 2.3.2
biopython 1.85
boto3 1.37.15
botocore 1.37.15
build 1.2.2.post1
cattrs 24.1.2
causal-conv1d 1.5.0.post8
certifi 2025.1.31
cfgv 3.4.0
charset-normalizer 3.4.1
clang-format 20.1.0
click 8.1.8
cmake 3.31.6
comm 0.2.2
coverage 7.7.0
debugpy 1.8.11
decorator 5.1.1
deepspeed 0.16.4
dill 0.3.8
distlib 0.3.9
docker-pycreds 0.4.0
einops 0.8.0
et_xmlfile 2.0.0
evo2 0.1.0
execnet 2.1.1
executing 2.1.0
filelock 3.18.0
fsspec 2025.3.0
gitdb 4.0.12
GitPython 3.1.44
hjson 3.1.0
huggingface-hub 0.29.3
identify 2.6.9
idna 3.10
importlib_metadata 8.6.1
iniconfig 2.0.0
ipykernel 6.29.5
ipython 8.30.0
ipywidgets 8.1.5
jedi 0.19.2
Jinja2 3.1.6
jmespath 1.0.1
jupyter_client 8.6.3
jupyter_core 5.7.2
jupyterlab_widgets 3.0.13
lazy_import_plus 0.0.2
local_flash_attn 0.0.0 /root/zhengyulong/evo2/vortex/vortex/ops/attn
loguru 0.7.3
markdown-it-py 3.0.0
MarkupSafe 3.0.2
matplotlib-inline 0.1.7
mdurl 0.1.2
mpmath 1.3.0
msgpack 1.1.0
multiprocess 0.70.16
networkx 3.4.2
ninja 1.11.1.3
nodeenv 1.9.1
numpy 2.2.4
nvidia-cublas-cu12 12.4.5.8
nvidia-cuda-cupti-cu12 12.4.127
nvidia-cuda-nvrtc-cu12 12.4.127
nvidia-cuda-runtime-cu12 12.4.127
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.2.1.3
nvidia-curand-cu12 10.3.5.147
nvidia-cusolver-cu12 11.6.1.9
nvidia-cusparse-cu12 12.3.1.170
nvidia-cusparselt-cu12 0.6.2
nvidia-ml-py 12.570.86
nvidia-nccl-cu12 2.21.5
nvidia-nvjitlink-cu12 12.4.127
nvidia-nvtx-cu12 12.4.127
omegaconf 2.3.0
openpyxl 3.1.5
opt_einsum 3.4.0
packaging 24.2
parso 0.8.4
pexpect 4.9.0
pillow 11.1.0
pip 25.0
platformdirs 4.3.6
pluggy 1.5.0
pre_commit 4.0.1
prompt_toolkit 3.0.48
protobuf 5.29.3
psutil 7.0.0
ptyprocess 0.7.0
pure_eval 0.2.3
py 1.11.0
py-cpuinfo 9.0.0
pybind11 2.13.6
pycodestyle 2.12.1
pydantic 2.10.6
pydantic_core 2.27.2
Pygments 2.18.0
pyproject_hooks 1.2.0
pytest 8.3.5
pytest-cov 6.0.0
pytest-forked 1.6.0
pytest-xdist 3.6.1
python-dateutil 2.9.0.post0
PyYAML 6.0.2
RapidFuzz 3.10.1
regex 2024.11.6
requests 2.32.3
requests-cache 1.2.1
retrying 1.3.4
rich 13.9.2
ring-flash-attn 0.1.4
ruff 0.1.1
s3transfer 0.11.4
scipy 1.15.2
sentencepiece 0.2.0
sentry-sdk 2.23.1
setproctitle 1.3.5
setuptools 75.8.0
six 1.17.0
smmap 5.0.2
stack-data 0.6.3
sympy 1.13.1
thefuzz 0.22.1
tokenizers 0.20.1
torch 2.5.1
torchaudio 2.6.0
torchvision 0.21.0
tornado 6.4.2
tqdm 4.67.1
traitlets 5.14.3
transformer_engine 1.13.0
transformer_engine_cu12 1.13.0
transformer_engine_torch 1.13.0
triton 3.1.0
types-python-dateutil 2.9.0.20241206
typing_extensions 4.12.2
url-normalize 1.4.3
urllib3 2.3.0
virtualenv 20.29.3
vortex 0.0.2 /root/zhengyulong/evo2/vortex
wandb 0.19.8
wcwidth 0.2.13
wheel 0.45.1
widgetsnbextension 4.0.13
xxhash 3.5.0
zipp 3.21.0

Or you can just try the methods described in issues/27.
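After the uninstall/reinstall above, it can help to confirm which flash-attn build Python will actually import — the vendored `local_flash_attn` under `vortex/ops/attn` should win, and a leftover pip-installed `flash_attn` should be gone. A minimal check (the module names come from the pip lists in this thread; `locate_module` is a hypothetical helper, not part of evo2):

```python
import importlib.util

def locate_module(name):
    """Return the file path Python would import for `name`, or None if absent."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec is not None else None

# After `pip uninstall -y flash-attn`, this should print None;
# if it still prints a site-packages path, the stale wheel is shadowing
# the vendored kernel and fwd() will be the wrong binary.
print("flash_attn:      ", locate_module("flash_attn"))
print("local_flash_attn:", locate_module("local_flash_attn"))
```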
My pip environment

When running this piece of code, the following error appeared.

My error output:
```
File "/evo2/vortex/vortex/ops/attn_interface.py", line 466, in forward
    out_padded, softmax_lse, S_dmask, rng_state = _wrapped_flash_attn_forward(
                                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/evo2/lib/python3.11/site-packages/torch/_ops.py", line 1116, in __call__
    return self._op(*args, **(kwargs or {}))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/evo2/lib/python3.11/site-packages/torch/_library/autograd.py", line 113, in autograd_impl
    result = forward_no_grad(*args, Metadata(keyset, keyword_only_args))
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/evo2/lib/python3.11/site-packages/torch/_library/autograd.py", line 40, in forward_no_grad
    result = op.redispatch(keyset & _C._after_autograd_keyset, *args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/evo2/lib/python3.11/site-packages/torch/_ops.py", line 721, in redispatch
    return self._handle.redispatch_boxed(keyset, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/evo2/lib/python3.11/site-packages/torch/_library/custom_ops.py", line 324, in backend_impl
    result = self._backend_fns[device_type](*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/evo2/lib/python3.11/site-packages/torch/_compile.py", line 32, in inner
    return disable_fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/evo2/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/evo2/lib/python3.11/site-packages/torch/_library/custom_ops.py", line 367, in wrapped_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
File "/evo2/vortex/vortex/ops/attn_interface.py", line 61, in _flash_attn_forward
    out, softmax_lse, S_dmask, rng_state = flash_attn_gpu.fwd(
                                           ^^^^^^^^^^^^^^^^^^^
TypeError: fwd(): incompatible function arguments. The following argument types are supported:
    1. (arg0: torch.Tensor, arg1: torch.Tensor, arg2: torch.Tensor, arg3: Optional[torch.Tensor], arg4: Optional[torch.Tensor], arg5: float, arg6: float, arg7: bool, arg8: int, arg9: int, arg10: bool, arg11: Optional[torch.Generator]) -> list[torch.Tensor]
Invoked with: tensor([[[[-2.6245e-02, -4.2419e-03, -3.0212e-03, ..., -2.4219e-01,
          -2.1484e-01, -1.9922e-01],
         [-1.5198e-02,  1.3885e-03,  3.9062e-02, ...,  3.9551e-02,
          -3.3447e-02,  3.8818e-02],
         [ 5.0659e-03,  2.0996e-02, -9.7046e-03, ...,  8.3008e-02,
           1.4355e-01,  1.1523e-01],
         ...,
         [-3.8818e-02, -1.2878e-02,  1.4465e-02, ...,  1.6406e-01,
          -1.4771e-02,  1.3281e-01],
         [ 4.5898e-02,  4.7607e-03, -7.2327e-03, ...,  2.2339e-02,
           3.2812e-01,  1.2891e-01],
         [-1.3245e-02,  1.4526e-02,  3.1738e-02, ..., -7.7637e-02,
          -2.5781e-01, -3.5400e-02]],
```
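For context, this kind of "incompatible function arguments" error from a pybind11-bound function is typically a version mismatch: the wrapper in `attn_interface.py` calls `fwd()` with one argument list, while the installed flash-attn binary was compiled with a different signature. A pure-Python illustration of the same failure mode (all names here are hypothetical, not the actual flash-attn API):

```python
# A stand-in for an older compiled fwd() that accepts fewer arguments.
def fwd_old_build(q, k, v, dropout_p, softmax_scale, causal):
    return "ok"

def call_like_newer_wrapper(fn):
    """Invoke fn the way a newer wrapper would, with a longer argument list."""
    try:
        # Extra positional arguments the old build's signature does not accept,
        # mirroring the mismatch between the wrapper and the stale binary.
        return fn("q", "k", "v", None, None, 0.0, 1.0, True, -1, -1, False, None)
    except TypeError as e:
        return f"TypeError: {e}"

print(call_like_newer_wrapper(fwd_old_build))
```

This is why removing the stale `flash-attn` wheel (or following issues/27) resolves the error: it ensures the binary Python imports matches the argument list the vortex wrapper passes.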
My pip environment is:
joblib 1.3.2
jsonlines 4.0.0
kiwisolver 1.4.5
lazy_import_plus 0.0.2
lightning-utilities 0.10.0
lm-dataformat 0.0.20
lm-eval 0.3.0
local_flash_attn 0.0.0 /evo2/vortex/vortex/ops/attn
lxml 4.9.4
Mako 1.3.5
Markdown 3.5.1
markdown-it-py 3.0.0
MarkupSafe 2.1.3
matplotlib 3.8.2
matplotlib-inline 0.1.6
mbstrdecoder 1.1.3
mdurl 0.1.2
mkl_fft 1.3.10
mkl_random 1.2.7
mkl-service 2.4.0
ml_collections 0.1.1
modelcif 0.9
mpi4py 3.1.4
mpmath 1.3.0
msgpack 1.0.7
multidict 6.0.4
multiprocess 0.70.15
mypy-extensions 1.0.0
networkx 3.2.1
ninja 1.11.1.1
nltk 3.8.1
nodeenv 1.8.0
numexpr 2.8.8
numpy 1.26.2
nvidia-cublas-cu12 12.4.5.8
nvidia-cuda-cupti-cu12 12.4.127
nvidia-cuda-nvrtc-cu12 12.4.127
nvidia-cuda-runtime-cu12 12.4.127
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.2.1.3
nvidia-curand-cu12 10.3.5.147
nvidia-cusolver-cu12 11.6.1.9
nvidia-cusparse-cu12 12.3.1.170
nvidia-ml-py 12.560.30
nvidia-nccl-cu12 2.21.5
nvidia-nvjitlink-cu12 12.4.127
nvidia-nvtx-cu12 12.4.127
oauthlib 3.2.2
omegaconf 2.3.0
openai 1.6.1
opentelemetry-api 1.27.0
opentelemetry-sdk 1.27.0
opentelemetry-semantic-conventions 0.48b0
opt-einsum 3.3.0
packaging 23.2
pandas 2.1.4
parso 0.8.3
pathvalidate 3.2.0
peft 0.7.1
pexpect 4.9.0
Pillow 10.1.0
pip 24.2
platformdirs 4.2.2
pluggy 1.3.0
polars 1.2.1
portalocker 2.8.2
pre_commit 4.0.1
prompt-toolkit 3.0.43
protobuf 3.20.3
psutil 6.0.0
ptyprocess 0.7.0
pure-eval 0.2.2
py 1.11.0
py-cpuinfo 9.0.0
py-spy 0.3.14
pyarrow 14.0.2
pyarrow-hotfix 0.6
pyasn1 0.5.1
pyasn1-modules 0.3.0
pybind11 2.11.1
pycodestyle 2.11.1
pycountry 23.12.11
pydantic 2.9.0
pydantic_core 2.23.2
Pygments 2.17.2
pynvml 11.5.0
pyparsing 3.1.1
pyproject_hooks 1.2.0
PySocks 1.7.1
pytablewriter 1.2.0
pytest 7.4.3
pytest-cov 4.1.0
pytest-forked 1.6.0
pytest-xdist 3.5.0
python-dateutil 2.8.2
pytorch-lightning 2.1.3
pytz 2023.3.post1
PyYAML 6.0.2
regex 2023.10.3
requests 2.32.3
requests-oauthlib 1.3.1
responses 0.18.0
rich 13.9.2
rouge_score 0.1.2
rsa 4.9
ruff 0.1.1
s3transfer 0.10.2
sacrebleu 1.5.0
safetensors 0.4.1
scikit-learn 1.3.2
scipy 1.11.4
seaborn 0.13.1
sentencepiece 0.1.99
sentry-sdk 2.13.0
setproctitle 1.3.3
setuptools 74.1.0
shtab 1.7.0
six 1.16.0
smmap 5.0.1
sniffio 1.3.0
SQLAlchemy 2.0.34
sqlitedict 2.1.0
sqlparse 0.5.1
stack-data 0.6.3
stua 0.3
sympy 1.13.1
tabledata 1.3.3
tabulate 0.9.0
tcolorpy 0.1.4
tenacity 9.0.0
tensorboard 2.5.0
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.1
threadpoolctl 3.2.0
tokenizers 0.15.0
torch 2.5.1
torchaudio 2.4.1
torchmetrics 1.4.0.post0
torchvision 0.15.2a0
tqdm 4.66.1
tqdm-multiprocess 0.0.11
traitlets 5.14.0
transformer_engine 1.10.0+08a85d3
transformers 4.36.2
triton 3.1.0
trl 0.7.11
typeguard 2.13.3
typepy 1.3.2
types-python-dateutil 2.9.0.20241003
typing_extensions 4.9.0
typing-inspect 0.9.0
tyro 0.7.3
tzdata 2023.3
ujson 5.9.0
urllib3 2.2.2
virtualenv 20.26.6
vortex 0.0.2 /evo2/vortex
wandb 0.17.7
wandb-workspaces 0.1.8
wcwidth 0.2.12
Werkzeug 3.0.1
wheel 0.41.2
wrapt 1.16.0
xxhash 3.4.1
yarl 1.9.4
zipp 3.20.0
zstandard 0.22.0