Multiple LLM model selection in DashBot #704
base: main
Conversation
What do you mean by multi-modularity?
  ollama,
  gemini,
  openai
}
Each LLMProvider should have a settings option, e.g. a small settings button the user can click to change parameters like the API URL, model, etc.
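A rough sketch of what such a per-provider settings object could look like in Dart (the class and field names here are illustrative, not part of this PR):

```dart
// Hypothetical per-provider settings object; names are illustrative only.
class LLMProviderConfig {
  LLMProviderConfig({
    required this.provider,
    required this.baseUrl,
    this.apiKey,
    this.model,
  });

  final LLMProvider provider; // ollama, gemini, or openai (enum from this PR)
  String baseUrl;  // e.g. 'http://127.0.0.1:11434/api' for a local Ollama
  String? apiKey;  // needed for Gemini/OpenAI, unused for local Ollama
  String? model;   // e.g. 'llama3.2:3b' or 'gpt-4o'
}
```

A small settings dialog could edit these fields and persist them (e.g. via shared_preferences) instead of hardcoding them in DashBotService.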
DashBotService()
    : _client = OllamaClient(baseUrl: 'http://127.0.0.1:11434/api') {
    : _ollamaClient = OllamaClient(baseUrl: 'http://127.0.0.1:11434/api'),
Add a provision to change the baseUrl via the LLMProvider settings.
    : _client = OllamaClient(baseUrl: 'http://127.0.0.1:11434/api') {
    : _ollamaClient = OllamaClient(baseUrl: 'http://127.0.0.1:11434/api'),
      //TODO: Add API key to .env file
      _openAiClient = OpenAIClient(apiKey: "your_openai_api_key") {
Add a provision to set the API key via the LLMProvider settings.
  );
  return response.response.toString();
try {
  switch (_selectedModel) {
LLMProvider != model; the provider (Ollama, Gemini, OpenAI) and the model within that provider should be selected separately.
case LLMProvider.ollama:
  final response = await _ollamaClient.generateCompletion(
    request: GenerateCompletionRequest(model: 'llama3.2:3b', prompt: prompt),
llama3.2:3b is the model, not the provider. Add a provision to change it via the LLMProvider settings, based on the list of models installed on the system.
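The list of locally installed models can be fetched from Ollama's REST API via `GET /api/tags`. A minimal sketch using the `http` package (the function name and default base URL are assumptions; the endpoint itself is part of the Ollama API):

```dart
import 'dart:convert';
import 'package:http/http.dart' as http;

// Fetch the names of models installed in a local Ollama instance.
// GET /api/tags responds with {"models": [{"name": "llama3.2:3b", ...}, ...]}.
Future<List<String>> listInstalledOllamaModels(
    {String baseUrl = 'http://127.0.0.1:11434'}) async {
  final response = await http.get(Uri.parse('$baseUrl/api/tags'));
  final body = jsonDecode(response.body) as Map<String, dynamic>;
  final models = body['models'] as List<dynamic>;
  return models.map((m) => m['name'] as String).toList();
}
```

The model dropdown in the LLMProvider settings could then be populated from this list instead of hardcoding 'llama3.2:3b'.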
case LLMProvider.openai:
  final response = await _openAiClient.createChatCompletion(
    request: CreateChatCompletionRequest(
      model: ChatCompletionModel.modelId('gpt-4o'),
gpt-4o is the model. Add a provision to change it via the LLMProvider settings, based on the list of available OpenAI models.
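Similarly, the models available to an API key can be listed via OpenAI's `GET /v1/models` endpoint (whether the openai_dart client wraps this directly would need checking; the sketch below uses plain HTTP and an illustrative function name):

```dart
import 'dart:convert';
import 'package:http/http.dart' as http;

// Fetch the ids of models available to the given OpenAI API key.
// GET /v1/models responds with {"data": [{"id": "gpt-4o", ...}, ...]}.
Future<List<String>> listOpenAiModels(String apiKey) async {
  final response = await http.get(
    Uri.parse('https://api.openai.com/v1/models'),
    headers: {'Authorization': 'Bearer $apiKey'},
  );
  final body = jsonDecode(response.body) as Map<String, dynamic>;
  final data = body['data'] as List<dynamic>;
  return data.map((m) => m['id'] as String).toList();
}
```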
lib/main.dart
@@ -9,6 +10,8 @@ import 'app.dart';

void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  //TODO: Add API key to .env file
  Gemini.init(apiKey: "apiKey");
Gemini.init cannot happen until the user adds an API key via the LLMProvider settings.
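One way to defer this is to initialize Gemini lazily once a key has been saved (a sketch; `loadGeminiApiKey` is a hypothetical helper that would read the key from the LLMProvider settings):

```dart
// Initialize Gemini only after the user has saved an API key in the
// LLMProvider settings, instead of calling Gemini.init in main().
Future<void> initGeminiIfConfigured() async {
  final apiKey = await loadGeminiApiKey(); // hypothetical settings read
  if (apiKey != null && apiKey.isNotEmpty) {
    Gemini.init(apiKey: apiKey);
  }
}
```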
gsoc proposal initial draft
…edar_DashBot.md: making it into an md file
@ashitaprasad please check the new changes. I also made changes so that the LLM model can be selected.
PR Description
This PR adds the ability to choose between Ollama, Gemini, and OpenAI for generating responses. API keys have not been added to the .env file for security reasons, but the responses have been tested using personal API keys.

Related Issues
#621
Checklist

- [x] I have taken the latest `main` branch before making this PR
- [x] I am on the latest Flutter stable (run `flutter upgrade` and verify)
- [x] I have run the tests (`flutter test`) and all tests are passing

Added/updated tests?
We encourage you to add relevant test cases.
OS on which you have developed and tested the feature?