llama.cpp #106
Replies: 2 comments 2 replies
Python 3.11 is no longer supported, so consider a fresh install. If llama-cpp-python is the only problem, this is the command you need to run from the python_embeded folder: python.exe -m pip install https://github.com/JamePeng/llama-cpp-python/releases/download/v0.3.22-cu130-Basic-win-20260118/llama_cpp_python-0.3.22-cp311-cp311-win_amd64.whl
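For reference, the full sequence from a command prompt looks roughly like this (the ComfyUI_windows_portable path below is a placeholder for wherever your portable build lives; the wheel URL is the one from the comment above):

rem adjust this path to your own ComfyUI portable install
cd C:\ComfyUI_windows_portable\python_embeded
python.exe -m pip install https://github.com/JamePeng/llama-cpp-python/releases/download/v0.3.22-cu130-Basic-win-20260118/llama_cpp_python-0.3.22-cp311-cp311-win_amd64.whl
rem verify the bindings import cleanly in the embedded interpreter
python.exe -c "import llama_cpp; print(llama_cpp.__version__)"

If the last command prints the version (0.3.22 for this wheel) instead of raising ModuleNotFoundError, restart ComfyUI and the node should load.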
@Tavris1 could you include the installation of llama-cpp-python? Either as an optional dependency, or bundled with QwenVL.
When starting ComfyUI I get this error: "Error loading module AILab_QwenVL_GGUF_PromptEnhancer: No module named 'llama_cpp'"
There is a llama.cpp folder in the Add-ons/Tools folder. How do I use it?
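A quick way to confirm the module is missing from the embedded interpreter (assuming the standard portable layout, where python_embeded sits next to the ComfyUI folder) is to run these from the portable root:

python_embeded\python.exe -m pip show llama-cpp-python
python_embeded\python.exe -c "import llama_cpp"

If pip show finds nothing and the import fails with ModuleNotFoundError, the llama-cpp-python bindings are not installed for the embedded Python, regardless of what the llama.cpp folder under Add-ons/Tools contains.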
About
ComfyUI 0.9.1
ComfyUI_frontend v1.38.4
ComfyUI-Ollama
LoRA Manager v0.9.11-stable
EasyUse v1.3.6
rgthree-comfy v1.0.2512112053
ComfyUI-Manager V3.39
System Info
OS: win32
Python Version: 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
Embedded Python: true
Pytorch Version: 2.9.1+cu130
Arguments: ComfyUI\main.py --windows-standalone-build --front-end-version Comfy-Org/ComfyUI_frontend@latest --fast fp16_accumulation --lowvram --reserve-vram 2 --use-sage-attention --disable-pinned-memory --disable-async-offload --output-directory E:\Comfyui_Output --input-directory E:\Comfyui_Input
RAM Total: 31.93 GB
RAM Free: 20.04 GB
Devices
Name: cuda:0 NVIDIA GeForce RTX 4060 : cudaMallocAsync
Type: cuda
VRAM Total: 8 GB
VRAM Free: 6.89 GB
Torch VRAM Total: 32 MB
Torch VRAM Free: 23.88 MB