Replies: 5 comments 1 reply
-
Also getting errors when running
-
Is my driver simply too old?
-
I switched machines and believe I have compatible CUDA driver and toolkit now:
But I'm still getting the same errors.
-
Got the
-
I've tried v2.11.0-cublas-cuda12 and compiling the current main branch with Mistral. When doing a chat completion, the log says that layers are offloaded to the GPU, but I get error messages from llama.cpp.

I'm also seeing weird values that look like pointers (e.g. TopP, TopK, Temperature) in the config in the debug log.
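(If the config struct stores these optional sampling parameters as pointer fields, a plain %+v dump would print their addresses rather than their values, which would produce exactly this kind of output. A minimal Go sketch of that effect, using a hypothetical struct rather than LocalAI's actual code:)

```go
package main

import "fmt"

// Hypothetical config struct: optional sampling parameters held as pointers
// so that "unset" can be told apart from an explicit zero.
type SamplingConfig struct {
	TopP        *float64
	TopK        *int
	Temperature *float64
}

func main() {
	topP, topK, temp := 0.95, 40, 0.7
	cfg := SamplingConfig{TopP: &topP, TopK: &topK, Temperature: &temp}

	// Printing the struct directly shows addresses,
	// e.g. {TopP:0xc0000140a8 TopK:0xc0000140c0 Temperature:0xc0000140c8}
	fmt.Printf("%+v\n", cfg)

	// Dereferencing the fields shows the real values.
	fmt.Printf("TopP=%v TopK=%v Temperature=%v\n", *cfg.TopP, *cfg.TopK, *cfg.Temperature)
}
```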
My mistral.yaml:

Any ideas what I'm doing wrong?
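For reference, a minimal mistral.yaml sketch of the kind of LocalAI model config being discussed; the key names follow my reading of the LocalAI docs, and the backend name, model file, context size, and gpu_layers values are placeholders rather than the actual file from this setup:

```yaml
name: mistral
backend: llama            # backend name is an assumption; LocalAI can also pick one automatically
parameters:
  model: mistral-7b-instruct.Q4_K_M.gguf   # placeholder GGUF file name
  temperature: 0.7
  top_p: 0.95
  top_k: 40
context_size: 4096        # placeholder
f16: true
gpu_layers: 35            # number of layers to offload to the GPU; placeholder value
```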