Hi, is there any local Ollama model that can work with this? I tried many of them, and none produced a working query—they always give me an error. However, when I use GPT or Claude, it works perfectly. Is there any additional setup required to make it work?
Thanks,
There isn't any additional setup required. Unfortunately, some smaller models struggle to follow the instructions in the prompt and to generate valid code, so they fail fairly often. My personal experience with llama3.2, codellama:7b, and qwen-coder-2.5:3b hasn't been great.
I wonder whether a larger codellama or deepseek-coder would work better? I haven't done much extensive testing on smaller models.
Community -- maybe we can build up a small benchmark to test this out and gather recommendations for custom models? A rough sketch of what that could look like is below.
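As a starting point, here is a minimal sketch of such a benchmark, assuming the task is text-to-SQL and that Ollama is running locally on its default port (http://localhost:11434). The model list, sample schema, question, and prompt are all placeholder assumptions, not this project's actual prompt or configuration -- swap in whatever the tool really sends to the model.

```python
# Hedged sketch of a tiny Ollama benchmark: ask each model for a SQL query
# and check whether the result is at least syntactically valid SQLite.
# Assumes a local Ollama server; model names and schema are placeholders.
import sqlite3
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"
MODELS = ["llama3.2", "codellama:7b", "qwen2.5-coder:3b"]  # hypothetical candidates
SCHEMA = "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL);"
QUESTION = "Total order value per customer, highest first."
PROMPT = (
    "You are a SQL assistant. Given this schema:\n"
    f"{SCHEMA}\n"
    f"Write a single SQLite query answering: {QUESTION}\n"
    "Return only the SQL, with no explanation."
)

def generate_sql(model: str) -> str:
    """Ask one Ollama model for a query via the /api/generate endpoint."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": PROMPT, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    # Strip stray code fences that chatty models often add around the SQL.
    return resp.json()["response"].strip().strip("`")

def query_is_valid(sql: str) -> bool:
    """Check the query's syntax against the sample schema without running it."""
    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)
    try:
        conn.execute(f"EXPLAIN {sql}")
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()

if __name__ == "__main__":
    for model in MODELS:
        try:
            sql = generate_sql(model)
            status = "OK" if query_is_valid(sql) else "FAIL"
        except requests.RequestException as err:
            sql, status = str(err), "ERROR"
        print(f"{model:20s} {status:6s} {sql[:60]!r}")
```

Running it over a handful of schema/question pairs would give a rough pass rate per model, which is probably enough to rank candidates like codellama vs. deepseek-coder before recommending anything.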