Excuse me, my machine has 32 GB of RAM and a 4060 Ti with 16 GB of VRAM. Which model would perform better when run locally?
If I try to run `ollama run deepseek-coder-v2`, should the model be configured as shown in the image?
May I ask what model name I should fill in when I use a local Ollama install to run qwen2.5-coder:32b?
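For reference, the name is usually the exact tag that the Ollama CLI reports; a minimal sketch (assuming a default Ollama install and that the tool expects this tag verbatim):

```shell
# Pull the model locally (note the tag has no space: qwen2.5-coder:32b)
ollama pull qwen2.5-coder:32b

# List installed models to confirm the exact name:tag to copy into the config
ollama list

# Quick smoke test that the model loads and responds
ollama run qwen2.5-coder:32b "hello"
```

The same tag shown by `ollama list` (here `qwen2.5-coder:32b`) would then typically go in the model-name field.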