bolt.diy is using more CPU and less GPU #1518
Comments
Hey @Anybody2007, this has nothing to do with bolt.diy itself; it depends on your Ollama configuration. It would not use the GPU any more if you ran Ollama locally in the terminal either. Check your Ollama configuration and make sure it uses your GPU: https://github.com/ollama/ollama/blob/main/docs/gpu.md As far as I can see in that table, your card is not supported by Ollama: only the RTX 2060 is listed, there is no GTX 2060.
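If Ollama runs as a systemd service, one common way to pin it to a specific NVIDIA GPU is a drop-in override setting `CUDA_VISIBLE_DEVICES`; the path and value below are a hedged sketch, not taken from this thread (see the Ollama GPU docs linked above for the supported variables):

```
# /etc/systemd/system/ollama.service.d/override.conf  (assumed path)
[Service]
# Expose only the first NVIDIA GPU to the Ollama server process.
Environment="CUDA_VISIBLE_DEVICES=0"
```

After editing, `systemctl daemon-reload` and `systemctl restart ollama` would be needed for the change to take effect.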
Hi @leex279,
root@model:~# nvidia-smi -L
And I will post whatever I find.
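To double-check which card the driver actually reports, the output of `nvidia-smi -L` can be parsed for the model name; a minimal sketch, where the sample line is purely illustrative and should be replaced with the real output:

```shell
# Hypothetical sample line; substitute the real output of `nvidia-smi -L`.
SAMPLE='GPU 0: NVIDIA GeForce RTX 2060 (UUID: GPU-00000000-0000-0000-0000-000000000000)'
# Strip the "GPU N: " prefix and the "(UUID: ...)" suffix to isolate the model name.
MODEL=$(printf '%s\n' "$SAMPLE" | sed -E 's/^GPU [0-9]+: (.*) \(UUID.*$/\1/')
echo "$MODEL"
```

Comparing that name against the support table in Ollama's GPU docs settles whether the card is supported at all.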
Hi @leex279 ,
But I am still seeing the error. Below is the error log pasted from the bolt.diy web UI: Chat request failed
Ollama is configured with localhost; below is the URL.
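For reference, bolt.diy reads the Ollama base URL from its environment file; a sketch of what that entry typically looks like (variable name as in bolt.diy's `.env.example`, the port is Ollama's default and may differ in your setup):

```
# .env.local in the bolt.diy checkout
OLLAMA_API_BASE_URL=http://127.0.0.1:11434
```

If the UI setting and this file disagree, it is worth confirming which one the running instance actually picked up.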
One additional thing: in Settings -> Local Provider I have entered the mentioned URL. But surprisingly, in the chat prompt I can list all the models.
You wrote GTX 2060 in your initial post, not RTX 2060 ;)
Check the dev console and the terminal log please. Also, how is your bolt.diy instance running? Locally with pnpm run dev?
I am running it with pnpm run dev. Below is the nginx config I am using.
And
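The poster's actual nginx config is not preserved in this thread; for context, a generic sketch of proxying bolt.diy's dev server is shown below. The hostname and port are assumptions (5173 is the Vite default used by `pnpm run dev`). Streamed chat responses often fail behind nginx unless buffering is disabled and timeouts are raised, which is relevant to the "Chat request failed" error above:

```nginx
# Generic sketch only, not the poster's config.
server {
    listen 80;
    server_name bolt.example.local;            # hypothetical hostname

    location / {
        proxy_pass http://127.0.0.1:5173;      # Vite dev server default port
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;  # websocket support for HMR
        proxy_set_header Connection "upgrade";
        proxy_buffering off;                   # do not buffer streamed chat responses
        proxy_read_timeout 300s;               # slow CPU-bound generations need long timeouts
    }
}
```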
I found nothing special in the logs when I get the error in the frontend.
Hi @leex279,
Describe the bug
When I am making improvements in my local project, which is medium-sized (approx. 126 files), the request to Ollama puts more load on the CPU than the GPU, which results in a network error. I am using Ollama on my system with a GTX 2060 card for the local model.
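A likely mechanism behind this symptom: when the model weights plus the KV cache (which grows with context length, e.g. many project files in the prompt) exceed the card's VRAM, Ollama offloads layers to the CPU, making generation slow enough that the frontend request times out. A rough back-of-the-envelope sketch, with all numbers being illustrative assumptions:

```shell
# Rough, illustrative arithmetic only (all numbers are assumptions):
# a 4-bit quantized model needs ~0.5 bytes per parameter for its weights,
# and the KV cache grows on top of that with context length.
PARAMS_B=8                      # deepseek-r1:8b -> ~8 billion parameters
WEIGHT_GB=$((PARAMS_B / 2))     # ~0.5 bytes/param at Q4 -> ~4 GB of weights
VRAM_GB=6                       # an RTX 2060 typically has 6 GB of VRAM
echo "weights ~${WEIGHT_GB} GB, VRAM ${VRAM_GB} GB"
```

With only a couple of GB of headroom left for the KV cache, a large project context can easily tip the model into partial CPU offload.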
Link to the Bolt URL that caused the error
localhost
Steps to reproduce
Expected behavior
It should use less CPU and rely more on the GPU, and there should be a response.
Screen Recording / Screenshot
Platform
Provider Used
ollama
Model Used
qwen-code:7b and deepseek-r1:8b
Additional context
No response