Replies: 3 comments
-
I also want to set this up with an AMD GPU. However, it seems unsupported as far as I can tell. I was able to get llama.cpp running, so I know it should theoretically be possible, but it appears to be unsupported in this project. If you do get it running, let me know.
-
This is not an answer. I noted that hipblas support has since been added with release v1.40.0, so theoretically this project should support both clblas(t) and hipblas. But when I tried to build locally by following the guide, it failed, saying some folder is missing (I'll update with the specifics). I also read this discussion, but it seems to be only about Intel iGPUs. The question is: is there something like GPU passthrough that would let me run the docker version on the AMD GPU? If not, what are the correct steps to build LocalAI for hipblas or clblas?
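For reference, a hedged sketch of a local hipblas build, assuming the make-based build flow from the project's documentation and that `hipblas` is an accepted `BUILD_TYPE` as of v1.40.0; a working ROCm installation is a prerequisite and is not covered here:

```shell
# Sketch only, not a confirmed recipe. Assumes ROCm and its hipBLAS
# libraries are already installed and on the default paths.
git clone https://github.com/mudler/LocalAI
cd LocalAI

# BUILD_TYPE selects the GPU backend; hipblas targets AMD via ROCm.
# For OpenCL-based builds, clblas would be the analogous value.
make BUILD_TYPE=hipblas build
```

If the build fails about a missing folder, it may be worth checking that the git submodules for the backend sources were fetched before running make.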
-
Has anyone had any luck with this in the meantime? I still can't get it to compile with clblas, and there is still no pre-made clblas docker image.
-
So CLBlast seems to be supported, but I have no idea how to actually get it to work.

Building the project locally doesn't work, so I'm following the easy docker setup guide. I've enabled the `BUILD_TYPE=clBLAS` parameter in the `.env` file, but I'm stuck on setting the correct values in the `docker-compose.yaml` file. It seems to require a `services.api.deploy.resources.devices` entry, but any documentation I find only mentions that the `driver` property can be `nvidia`; nothing else seems to work. I've tried `mesa`, `amd`, `amdgpu`, ... Nothing works.

Leaving the `driver` property out, and just setting the `count` property to `all`, gives me this error:
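One point that may explain the failures above: the `deploy.resources.devices` `driver` mechanism is wired up through the NVIDIA container toolkit, so non-nvidia values are generally not expected to work. For AMD GPUs, the usual approach in docker is to pass the ROCm device nodes through directly. A hedged sketch, not a confirmed working config for this project (the `api` service name and surrounding fields are assumed from the standard compose guide):

```yaml
# Sketch only: AMD GPU access via device passthrough rather than the
# nvidia-specific deploy.resources.devices driver mechanism.
services:
  api:
    # ...image, ports, env_file as in the standard setup guide...
    devices:
      - /dev/kfd   # ROCm compute interface
      - /dev/dri   # GPU render nodes
    group_add:
      - video      # the container user needs GPU device-node access
```

On some distributions the `render` group must also be added alongside `video` for access to `/dev/dri` render nodes.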