Prerequisites
Please answer the following questions for yourself before submitting an issue.
- I am running the latest code. Development is very rapid so there are no tagged versions as of now.
- I carefully followed the README.md.
- I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- I reviewed the Discussions, and have a new bug or useful enhancement to share.
Expected Behavior
I expected llama-cpp-python to run llama-2-7b-chat.gguf with GPU offloading, but it crashes immediately.
Output:
ggml_opencl: (mem = clCreateBuffer(context, CL_MEM_READ_WRITE, size, NULL, &err), err) error -61 at /tmp/pip-req-build-34ftntbj/vendor/llama.cpp/ggml-opencl.cpp:1339
I printed the related information to check the size it wants to allocate; it shows:
Size of cl_context: 8 bytes, size: 524288000
I also collected the sizes of all buffers it creates, as below:
Size of cl_context: 8 bytes, size: 67108864
Size of cl_context: 8 bytes, size: 524288
Size of cl_context: 8 bytes, size: 524288
Size of cl_context: 8 bytes, size: 9437184
Size of cl_context: 8 bytes, size: 180355072
Size of cl_context: 8 bytes, size: 25362432
Size of cl_context: 8 bytes, size: 524288000
Here, size is the size argument passed to clCreateBuffer(context, CL_MEM_READ_WRITE, size, NULL, &err).
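For reference, error -61 is CL_INVALID_BUFFER_SIZE, which clCreateBuffer returns when size is 0 or exceeds the device's CL_DEVICE_MAX_MEM_ALLOC_SIZE. Below is a minimal diagnostic sketch, assuming pyopencl is available on the device (it is not part of llama-cpp-python), that compares the failing 524288000-byte request against that limit:

# Hedged diagnostic sketch (assumes pyopencl is installed): compare the failing
# allocation against the device's maximum single-buffer allocation.
import pyopencl as cl

FAILING_SIZE = 524288000  # size passed to clCreateBuffer when err = -61

for platform in cl.get_platforms():
    for device in platform.get_devices():
        max_alloc = device.max_mem_alloc_size  # CL_DEVICE_MAX_MEM_ALLOC_SIZE
        print(f"{device.name}: CL_DEVICE_MAX_MEM_ALLOC_SIZE = {max_alloc} bytes")
        if FAILING_SIZE > max_alloc:
            print(f"  -> a {FAILING_SIZE}-byte buffer exceeds this limit "
                  f"(consistent with CL_INVALID_BUFFER_SIZE)")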
On the other hand, when I execute the C++ binary directly (compiled by myself), it works, and its built-in HTTP server also works fine.
Command as below:
$ ./server -m ~/llama-2-7b-chat.gguf --port 6000 -c 2048 --n-gpu-layers 25
Related link:
https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md
So I am fairly sure this is a bug in llama-cpp-python.
For these two working cases I also collected the buffer sizes; they are clearly smaller than the ones from llama-cpp-python (totals are summarized in the short sketch after this list). This may be caused by some configuration difference; from what I checked, it looks like it comes from the node configuration:
Size of cl_context: 8 bytes, size: 32768
Size of cl_context: 8 bytes, size: 32768
Size of cl_context: 8 bytes, size: 67108864
Size of cl_context: 8 bytes, size: 32768
Size of cl_context: 8 bytes, size: 180355072
Size of cl_context: 8 bytes, size: 88064
Size of cl_context: 8 bytes, size: 81920
Size of cl_context: 8 bytes, size: 220160
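To make the comparison concrete, here is a short tally of the numbers already listed above (my own summary, nothing new is measured):

# Totals of the two buffer lists above; 524288000 bytes is exactly 500 MiB.
python_binding_sizes = [67108864, 524288, 524288, 9437184, 180355072, 25362432, 524288000]
native_server_sizes = [32768, 32768, 67108864, 32768, 180355072, 88064, 81920, 220160]

print(sum(python_binding_sizes))  # 807600128 bytes (~770 MiB) via llama-cpp-python
print(sum(native_server_sizes))   # 247952384 bytes (~236 MiB) via ./server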
Current Behavior
Environment and Context
Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.
- Physical (or virtual) hardware you are using, e.g. for Linux:
$ lscpu
Architecture:          aarch64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Vendor ID:             ARM
Model:                 1
Thread(s) per core:    1
Core(s) per socket:    3
Socket(s):             1
Stepping:              r1p1
Frequency boost:       enabled
CPU max MHz:           2016.0000
CPU min MHz:           307.2000
BogoMIPS:              38.40
Flags:                 fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp flagm2 frint i8mm bf16 bti
Model:                 0
Thread(s) per core:    1
Core(s) per socket:    2
Socket(s):             1
Stepping:              r1p0
CPU max MHz:           2803.2000
CPU min MHz:           499.2000
BogoMIPS:              38.40
Flags:                 fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp flagm2 frint i8mm bf16 bti
Model:                 0
Thread(s) per core:    1
Core(s) per socket:    2
Socket(s):             1
Stepping:              r2p0
CPU max MHz:           2803.2000
CPU min MHz:           499.2000
BogoMIPS:              38.40
Flags:                 fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp flagm2 frint i8mm bf16 bti
Model:                 0
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             1
Stepping:              0x1
CPU max MHz:           3187.2000
CPU min MHz:           595.2000
Vulnerabilities:
Itlb multihit:         Not affected
L1tf:                  Not affected
Mds:                   Not affected
Meltdown:              Not affected
Mmio stale data:       Not affected
Retbleed:              Not affected
Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl
Spectre v1:            Mitigation; __user pointer sanitization
Spectre v2:            Vulnerable: Unprivileged eBPF enabled
Srbds:                 Not affected
Tsx async abort:       Not affected
- Operating System, e.g. for Linux:
$ uname -a Linux aidlux 5.15.78-qki-consolidate-android13-8-gcae15e468c11 #1 SMP PREEMPT Tue Jan 30 11:14:24 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
- SDK version, e.g. for Linux:
$ python3 --version
Python 3.11.8
$ make --version
GNU Make 4.3
$ g++ --version
g++ (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Copyright (C) 2021 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Failure Information (for bugs)
ggml_opencl: (mem = clCreateBuffer(context, CL_MEM_READ_WRITE, size, NULL, &err), err) error -61 at /tmp/pip-req-build-34ftntbj/vendor/llama.cpp/ggml-opencl.cpp:1339
Steps to Reproduce
Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.
- Step 1: start llama-cpp-python via: python3 -m llama_cpp.server --model ~/llama-2-7b-chat.gguf --port 6000 --host 0.0.0.0 --n_gpu_layers 20 --logits_all True
- Step 2: send a request to the HTTP server.
- Step 3: observe the error above. (A minimal in-process equivalent is sketched after this list.)
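For reference, a minimal in-process sketch that should hit the same allocation path as the server command in step 1 (the model path and prompt are placeholders; the parameters mirror the flags above):

# Hedged in-process reproduction sketch; parameters mirror the server flags above.
from llama_cpp import Llama

llm = Llama(
    model_path="/home/user/llama-2-7b-chat.gguf",  # placeholder path
    n_gpu_layers=20,
    logits_all=True,
)
# Any completion request should trigger the same clCreateBuffer calls.
print(llm("Hello", max_tokens=8))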
Note: Many issues seem to be regarding functional or performance issues / differences with llama.cpp. In these cases we need to confirm that you're comparing against the version of llama.cpp that was built with your python package, and which parameters you're passing to the context.
Try the following:
git clone https://github.com/abetlen/llama-cpp-python
cd llama-cpp-python
rm -rf _skbuild/  # delete any old builds
python -m pip install .
cd ./vendor/llama.cpp
- Follow llama.cpp's instructions to cmake llama.cpp.
- Run llama.cpp's ./main with the same arguments you previously passed to llama-cpp-python and see if you can reproduce the issue. If you can, log an issue with llama.cpp.
Failure Logs
Please include any relevant log snippets or files. If it works under one configuration but not under another, please provide logs for both configurations and their corresponding outputs so it is easy to see where behavior changes.
Also, please try to avoid using screenshots if at all possible. Instead, copy/paste the console output and use GitHub's markdown to cleanly format your logs for easy readability.
Example environment info:
llama-cpp-python$ git log | head -1
commit 47b0aa6e957b93dbe2c29d53af16fbae2dd628f2
llama-cpp-python$ python3 --version
Python 3.10.10
llama-cpp-python$ pip list | egrep "uvicorn|fastapi|sse-starlette|numpy"
fastapi 0.95.0
numpy 1.24.3
sse-starlette 1.3.3
uvicorn 0.21.1
llama-cpp-python/vendor/llama.cpp$ git log | head -3
commit 66874d4fbcc7866377246efbcee938e8cc9c7d76
Author: Kerfuffle <[email protected]>
Date: Thu May 25 20:18:01 2023 -0600