
module name #98

Open
Mrship12138 opened this issue Feb 18, 2025 · 2 comments

Comments
@Mrship12138

May I ask what model name I should fill in when I use local Ollama to run qwen2.5-coder:32b?

@Chenglong-MS
Collaborator

For qwen2.5-coder:3b the configuration is as follows (the API key is left empty):

[screenshot of the configuration]

So I think for the 32b model the configuration should be the same, just replacing the model name with qwen2.5-coder:32b.
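Not part of the thread itself, but one way to double-check the exact name string to put in the configuration is to ask the local Ollama server which models it has installed. Ollama's REST API (default port 11434) lists them at `GET /api/tags`; the `name` field there (e.g. `qwen2.5-coder:32b`) is the value the model-name field expects. A minimal sketch of parsing that response, with the payload trimmed to the relevant field for illustration:

```python
import json

def model_names(tags_response: str) -> list[str]:
    """Extract the installed model name strings from a /api/tags JSON payload."""
    data = json.loads(tags_response)
    return [m["name"] for m in data.get("models", [])]

# Example payload in the shape Ollama's /api/tags returns (trimmed to "name"):
sample = '{"models": [{"name": "qwen2.5-coder:32b"}, {"name": "deepseek-coder-v2:latest"}]}'
print(model_names(sample))  # ['qwen2.5-coder:32b', 'deepseek-coder-v2:latest']
```

In practice you would fetch the payload from `http://localhost:11434/api/tags` (or simply run `ollama list` in a terminal) and copy the name verbatim, tag included.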

@Mrship12138
Author

Excuse me, my machine has 32 GB of RAM and a 4060 Ti with 16 GB of VRAM. Which model would perform better when run locally?
If I run ollama run deepseek-coder-v2, could you confirm whether the model should be configured as shown in the image?

[screenshot of the configuration]
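The screenshot itself is not preserved here. Based on the earlier answer (local Ollama endpoint, API key left empty), the configuration being asked about presumably looks something like the following hypothetical fragment; the field names are illustrative, not taken from the thread:

```
model:    deepseek-coder-v2
endpoint: http://localhost:11434
api key:  (left empty)
```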
