Conversation

@airpods69

This commit introduces automatic model discovery for OpenAI-compatible providers, allowing OpenCode to dynamically fetch available models from the /v1/models endpoint instead of requiring manual model configuration.

Key changes:

  • Implement automatic fetching of models from /v1/models endpoint for custom OpenAI-compatible providers
  • Add proper API key handling from both auth.json and environment variables
  • Enable support for providers using the @ai-sdk/openai-compatible npm package
  • Include enhanced logging for debugging provider configuration
  • Ensure compatibility with existing provider configuration system

This enhancement enables OpenCode to work seamlessly with OpenAI-compatible APIs such as local LLM servers (Ollama, LocalAI), API proxies, and custom implementations without requiring manual model definitions in the configuration. The functionality addresses the need for supporting custom and local AI providers with dynamic model discovery.

Fixes the issue where OpenAI-compatible providers would show no available models unless manually configured in opencode.json.
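
Roughly, the discovery flow looks like this (a minimal sketch; the type and function names are illustrative, not the actual OpenCode code):

// Minimal sketch: query an OpenAI-compatible /v1/models endpoint and map the
// response into model entries. Names here are illustrative only.
type Model = {
  id: string
  name: string
  limit: { context: number; output: number }
}

async function fetchModels(baseURL: string, apiKey?: string): Promise<Record<string, Model>> {
  // Ensure a trailing slash so "models" resolves to <baseURL>/models
  const url = new URL("models", baseURL.endsWith("/") ? baseURL : baseURL + "/")
  const res = await fetch(url, {
    headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : {},
  })
  if (!res.ok) return {}
  const body = (await res.json()) as { data?: { id: string }[] }
  const models: Record<string, Model> = {}
  for (const entry of body.data ?? []) {
    models[entry.id] = {
      id: entry.id,
      name: entry.id,
      // default limits; see the review discussion below about why 4096 is a problem
      limit: { context: 4096, output: 4096 },
    }
  }
  return models
}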

(This PR was generated by qwen, though I have tested it out locally with the provider; I used nano-gpt.com/api/v1 for this.)

Your Name added 2 commits October 24, 2025 16:42
- Implement automatic fetching of models from /v1/models endpoint for custom OpenAI-compatible providers
- Add proper API key handling from auth.json or environment variables
- Include enhanced logging for debugging provider configuration
- Enable support for providers configured with @ai-sdk/openai-compatible npm package
@airpods69 changed the title from "Fetch from models OpenAI api" to "Fetch from model list from /v1/models/ from OpenAI Compatible APIs" on Oct 24, 2025
Comment on lines +383 to +387
// For custom config providers, we may need to get the npm field from config
const npmPackage = providerInfo?.npm || config.provider?.[providerID]?.npm;
if (npmPackage !== "@ai-sdk/openai-compatible") {
continue;
}
Collaborator


this isn't gonna work for like any providers

Comment on lines +447 to +451
},
limit: {
context: 4096, // Default context limit
output: 4096, // Default output limit
},
Collaborator


This will cause a veryyy poor experience for a lot of people; a 4096 context window isn't gonna be useful for most agentic work.

Author


I just opened the repo to check whether I could get away with a 4096 context window length (or by disabling /compact).

And yes, I agree it was pretty ugly; it just happened to me. I should have tested out the models themselves as well.

// Create a default model entry with basic information
fetchedModels[modelData.id] = {
id: modelData.id,
name: modelData.id || modelData.name || modelData.object || "Unknown Model", // Use id, name, or object from API
Collaborator

@rekram1-node Oct 24, 2025


Isn't the model name required? I think it will always be defined.

Author


Hmmmm, it is required, but I guess it's a good idea to let the fallback stay here as well? Unless I am missing something that you can see.

Collaborator


I mean, the name field returned from the API (modelData.name) will always be defined, so there's no need for all the ORs.
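
i.e. the name line could simply be (assuming that guarantee holds):

  name: modelData.name, // no fallback chain needed if the API always returns a name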

@rekram1-node
Collaborator

There is something to this for providing a good UX. I'm wondering, can we restrict it to:

  • custom providers only
  • only custom providers that have the "@ai-sdk/openai-compatible" set for the provider

And then don't override any of their existing custom models for that provider

What is the default context size for LM Studio?
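
Something like this is what I have in mind (a rough sketch, reusing the fetchModels helper sketched in the PR description above; the config shape here is an assumption, not OpenCode's actual internals):

type ProviderConfig = {
  npm?: string
  options?: { baseURL?: string; apiKey?: string }
  models?: Record<string, Model>
}

// Only discover models for custom providers that use @ai-sdk/openai-compatible,
// and merge fetched models underneath any hand-written ones so user config wins.
async function discover(providers: Record<string, ProviderConfig>) {
  for (const provider of Object.values(providers)) {
    if (provider.npm !== "@ai-sdk/openai-compatible") continue
    if (!provider.options?.baseURL) continue
    const fetched = await fetchModels(provider.options.baseURL, provider.options.apiKey)
    provider.models = { ...fetched, ...provider.models }
  }
}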

@rekram1-node
Collaborator

Also, we have these style preferences (I should make a style guide):

  • Try to keep things in one function unless composable or reusable
  • DO NOT do unnecessary destructuring of variables
  • DO NOT use else statements unless necessary
  • DO NOT use try/catch if it can be avoided
  • AVOID try/catch where possible
  • AVOID else statements
  • AVOID using any type
  • AVOID let statements
  • PREFER single word variable names where possible

@airpods69
Author

airpods69 commented Oct 24, 2025

Re: the default context size for LM Studio: it is 4096 (reduced from 8k), but that is mostly to avoid OOM errors, since LM Studio is mostly used with local LLMs (or was built with that in mind).

Re: restricting it to:
  • custom providers only
  • only custom providers that have the "@ai-sdk/openai-compatible" set for the provider

This would be a good idea, and it is also what I had in mind. I just added the model to opencode.json without any particular context length on it (the context usage stays at 0%).
[screenshot: model added via opencode.json, context usage staying at 0%]

Which I think would be fine for people who are going towards using custom providers like nano-gpt? At least they don't have to worry about it for a while, and if they are hitting the limits they can add the model and the num_ctx value manually?

Better than the 4096 limit, at least in my opinion... (just suffered from it lol)
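
For example, a manual entry could look roughly like this (the exact opencode.json keys are an assumption on my part, modeled on the limit shape used in this PR; num_ctx itself is the LM Studio/Ollama load setting rather than something read from this file, and the provider/model names are placeholders):

{
  "provider": {
    "nanogpt": {
      "npm": "@ai-sdk/openai-compatible",
      "options": { "baseURL": "https://nano-gpt.com/api/v1" },
      "models": {
        "some-model-id": {
          "limit": { "context": 32768, "output": 4096 }
        }
      }
    }
  }
}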

Also, I will keep the style guide in mind. It was a pretty quick PR for something I wanted to try out with the provider, because it has way too many models to select from.

The UX factor definitely felt good after using the /v1/models path.
