[Bug]: Module still reported as missing after installing the corresponding module #66
There is a problem with the vllm update; try downgrading vllm to version 0.7.1 or below.
After downgrading, that problem is indeed gone, but a new one has appeared... I searched online for tutorials but found no suitable solution.
Did you install another version of transformers?
No.
Only version 4.48.3.
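To verify which transformers build an environment actually resolves, the installed distribution metadata can be queried directly. This is a generic check, not something from this repo:

```python
import importlib.metadata

# Reports the version visible to the current interpreter; a package
# installed in a different conda env or user site would not show up here.
try:
    print("transformers", importlib.metadata.version("transformers"))
except importlib.metadata.PackageNotFoundError:
    print("transformers is not installed in this environment")
```

Running this inside the same environment that launches the bot rules out the "two environments, two transformers" confusion.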
Match my environment, and just track down the dependencies that are causing problems.
How do I set that up? With pip install?
Yes, install only the problematic dependencies.
It looks like it's almost done now, but I just don't know how to solve this problem; it's my first time trying something like this... [INFO|tokenization_utils_base.py:2209] 2025-02-14 22:41:51,978 >> loading file vocab.json 02/14/2025 22:41:52 - INFO - llmtuner.model.utils.quantization - Loading 4-bit GPTQ-quantized model.
When I run git clone https://github.com/PanQiWei/AutoGPTQ.git && cd AutoGPTQ, it errors out saying it cannot access https://github.com/PanQiWei/AutoGPTQ.git. Do I need a proxy?
Your network is the issue; just download it directly.
pip install -vvv --no-build-isolation -e .
You missed a dot.
Uh... sorry...
It seems to have failed again... Using pip 25.0 from C:\Users\123\miniconda3\Lib\site-packages\pip (python 3.11)
Rolling back uninstall of auto-gptq × python setup.py develop did not run successfully. note: This error originates from a subprocess, and is likely not a problem with pip.
Reconfigure the build environment, and on top of that check Windows 11 SDK (Individual components -> SDKs, libraries, and frameworks).
Restart the system.
I restarted the computer once and this still shows up... C:\AutoGPTQ-0.7.1>pip install -vvv --no-build-isolation -e .
Rolling back uninstall of auto-gptq × python setup.py develop did not run successfully. note: This error originates from a subprocess, and is likely not a problem with pip.
Didn't you say it was already configured?
Try executing this first: I can't find a solution on the internet
Thanks, that solved it, but... what is this? [WARNING] 2025.02.10 update: the configuration file format has changed; if you previously pulled this repo and ran a fetch after 02.10, please redo your model configuration. We apologize for the inconvenience. 02/16/2025 13:11:54 - INFO - llmtuner.model.utils.quantization - Loading 4-bit GPTQ-quantized model. C:\Users\123\miniconda3\Lib\site-packages\transformers\modeling_utils.py:5006: FutureWarning: [INFO|modeling_utils.py:4808] 2025-02-16 13:12:03,774 >> All the weights of Qwen2ForCausalLM were initialized from the model checkpoint at E:\Muice-Chatbot\model\Qwen2.5-7B-Instruct-GPTQ-Int4. 02/16/2025 13:12:03 - INFO - llmtuner.model.utils.attention - Using torch SDPA for faster training and inference. During handling of the above exception, another exception occurred: Traceback (most recent call last):
How do I fix this? [WARNING] 2025.02.10 update: the configuration file format has changed; if you previously pulled this repo and ran a fetch after 02.10, please redo your model configuration. We apologize for the inconvenience. 02/16/2025 13:25:45 - INFO - llmtuner.model.utils.quantization - Loading 4-bit GPTQ-quantized model. C:\Users\123\miniconda3\Lib\site-packages\transformers\modeling_utils.py:5006: FutureWarning: [INFO|modeling_utils.py:4808] 2025-02-16 13:25:49,362 >> All the weights of Qwen2ForCausalLM were initialized from the model checkpoint at E:\Muice-Chatbot\model\Qwen2.5-7B-Instruct-GPTQ-Int4. 02/16/2025 13:25:49 - INFO - llmtuner.model.utils.attention - Using torch SDPA for faster training and inference.
Did it work...? [WARNING] 2025.02.10 update: the configuration file format has changed; if you previously pulled this repo and ran a fetch after 02.10, please redo your model configuration. We apologize for the inconvenience. 02/16/2025 13:44:38 - INFO - llmtuner.model.utils.quantization - Loading 4-bit GPTQ-quantized model. C:\Users\123\miniconda3\Lib\site-packages\transformers\modeling_utils.py:5006: FutureWarning: [INFO|modeling_utils.py:4808] 2025-02-16 13:44:41,362 >> All the weights of Qwen2ForCausalLM were initialized from the model checkpoint at E:\Muice-Chatbot\model\Qwen2.5-7B-Instruct-GPTQ-Int4. 02/16/2025 13:44:41 - INFO - llmtuner.model.utils.attention - Using torch SDPA for faster training and inference.
Uh... I've already gotten this far, but when I send a message the bot doesn't reply, and clicking the link below gives a 404 error. [WARNING] 2025.02.10 update: the configuration file format has changed; if you previously pulled this repo and ran a fetch after 02.10, please redo your model configuration. We apologize for the inconvenience. 02/16/2025 15:01:41 - INFO - llmtuner.model.utils.quantization - Loading 4-bit GPTQ-quantized model. C:\Users\123\miniconda3\Lib\site-packages\transformers\modeling_utils.py:5006: FutureWarning: [INFO|modeling_utils.py:4808] 2025-02-16 15:01:45,014 >> All the weights of Qwen2ForCausalLM were initialized from the model checkpoint at E:\Muice-Chatbot\model\Qwen2.5-7B-Instruct-GPTQ-Int4. 02/16/2025 15:01:45 - INFO - llmtuner.model.utils.attention - Using torch SDPA for faster training and inference.
Huh? [WARNING] 2025.02.10 update: the configuration file format has changed; if you previously pulled this repo and ran a fetch after 02.10, please redo your model configuration. We apologize for the inconvenience. 02/16/2025 15:40:34 - INFO - llmtuner.model.utils.quantization - Loading 4-bit GPTQ-quantized model. C:\Users\123\miniconda3\Lib\site-packages\transformers\modeling_utils.py:5006: FutureWarning: [INFO|modeling_utils.py:4808] 2025-02-16 15:40:38,016 >> All the weights of Qwen2ForCausalLM were initialized from the model checkpoint at E:\Muice-Chatbot\model\Qwen2.5-7B-Instruct-GPTQ-Int4. 02/16/2025 15:40:38 - INFO - llmtuner.model.utils.attention - Using torch SDPA for faster training and inference.
I don't understand what I'm supposed to do...
Change the config file, man. Just change this one line.
OK, solved.
Problem description
The model fails to load with the error "No module named 'resource'". However, after installing it with the pip install resource command, running the program still reports the 'resource' module as missing.
Relevant log output
Configuration file
Steps to reproduce
1. Run python main.py in the target directory
2. Error: ModuleNotFoundError: No module named 'resource'
3. Install 'resource' with pip install resource
4. Run python main.py again
5. Still errors with ModuleNotFoundError: No module named 'resource'
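Worth noting for anyone stuck at step 5: resource is a Unix-only module in Python's standard library. It does not exist on Windows at all, and the unrelated resource package on PyPI does not provide it, so pip install resource can never fix this error. A minimal sketch of the usual cross-platform guard follows; the fallback behavior is an assumption for illustration, not this repo's actual code:

```python
import sys

try:
    # Stdlib module, shipped with CPython on Unix-like systems only.
    import resource
except ImportError:
    # Expected on Windows (sys.platform == "win32"): the module is not
    # part of CPython there and cannot be installed from PyPI.
    resource = None

if resource is not None:
    # Example use: read the soft/hard limits on open file descriptors.
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print("open-file limits:", soft, hard)
else:
    print("resource module unavailable on", sys.platform)
```

Code that depends on resource therefore has to either guard the import like this or be run on a Unix-like system (e.g. under WSL) instead of native Windows.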
Other information
Directory structure:
