Description
Is there an existing issue for this?
- I have searched the existing issues
Feature Description
Requirement: Please complete the task before #629
🧠 Why Chatbot Backend?
To communicate with Google Gemini (Generative AI Model) using your Gemini API Key, we need to build a backend layer that works as a mediator service.
This backend becomes your Application Programming Interface (API) responsible for:
- Sending user prompts to the Gemini Generative AI Model
- Receiving model-generated responses
- Managing message storage and conversation history
- Acting like a real-time LLM-powered messaging system
Instead of chatting with another human, your server will continuously send queries to Gemini's text-generation endpoint and return AI-generated replies.
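To make the mediator idea concrete, here is a minimal sketch of the server-side pieces that keep the API key away from the client. The endpoint path and request shape follow the public Generative Language REST API, and the model name and `GEMINI_API_KEY` variable are assumptions to verify against the current Gemini docs:

```javascript
// Sketch of the backend "mediator": the server, not the browser, builds the
// request for Gemini's text-generation endpoint, so the API key stays secret.
// Endpoint/model names are assumptions; check them against current Gemini docs.
const GEMINI_URL =
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent";

// Build the JSON body Gemini expects for a single-turn prompt.
function buildGeminiRequest(prompt) {
  return { contents: [{ parts: [{ text: prompt }] }] };
}

// The key is appended server-side from an environment variable,
// never sent by or exposed to the frontend.
function buildGeminiUrl(apiKey) {
  return `${GEMINI_URL}?key=${encodeURIComponent(apiKey)}`;
}

console.log(JSON.stringify(buildGeminiRequest("Hello")));
// prints: {"contents":[{"parts":[{"text":"Hello"}]}]}
```

The actual HTTP call would then be a `fetch`/`axios` POST of this body to the built URL from the service layer.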
I’ve prepared some initial setup along with basic endpoints. Feel free to use them as a starting point while building the rest (link below).
https://github.com/SB2318/IEEE-s-Mindful-Devs-Bootcamp/tree/main/chatbot
Visualization
https://uhsocial.in/docs/#/ChatBot/post_gemini_send
Use Case
✅ Tasks
- Fork the submodule (https://github.com/SB2318/IEEE-s-Mindful-Devs-Bootcamp)
- Create a new branch and add your changes there
- Create a PR for the submodule
✅ Endpoints to Implement (Phase 1)
1️⃣ POST /send-message
Purpose:
Send a user prompt to the Gemini Generative AI model and return the generated response.
Required Input
- userId
- conversationId
- text – user prompt / question
Workflow
- Backend receives the user's message
- Calls the Gemini Generative AI API using the model endpoint
- Saves both:
- User’s prompt
- Gemini’s generated response
- Returns the AI-generated output to the frontend
Notes
This behaves as a normal messaging system, except the reply is generated by the LLM (Gemini).
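The workflow above can be sketched as a service-layer function. This is an illustrative sketch only: `callGemini` and `store` are injected fakes here, and all names (`sendMessage`, `role`, field names) are hypothetical rather than taken from the repo:

```javascript
// Hypothetical service-layer sketch of the /send-message workflow:
// receive → call model → save both messages → return the reply.
// Dependencies are injected so the flow runs without a real DB or API key.
async function sendMessage({ userId, conversationId, text }, callGemini, store) {
  if (!userId || !conversationId || !text) {
    throw new Error("userId, conversationId and text are required");
  }
  // 1. Ask the model for a reply to the user's prompt.
  const reply = await callGemini(text);

  const now = Date.now();
  // 2. Persist both sides of the exchange.
  await store.save({ userId, conversationId, role: "user", text, timestamp: now });
  await store.save({ userId, conversationId, role: "model", text: reply, timestamp: now + 1 });

  // 3. Return the AI-generated output to the frontend.
  return reply;
}

// Quick demonstration with in-memory fakes.
const saved = [];
const fakeStore = { save: async (m) => saved.push(m) };
const fakeGemini = async (prompt) => `echo: ${prompt}`;

sendMessage({ userId: "u1", conversationId: "c1", text: "hi" }, fakeGemini, fakeStore)
  .then((reply) => console.log(reply, saved.length)); // prints: echo: hi 2
```

In the real controller, `callGemini` would wrap the HTTP call to the Gemini endpoint and `store` would wrap the MongoDB collection.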
2️⃣ GET /load-conversations
Purpose:
Fetch all messages for a specific user with proper ordering.
Required Input
userId
Workflow
- Query all conversations from the database (based on userId)
- Sort messages by timestamp
- Return complete conversation history
Notes
This endpoint helps in:
- Displaying conversation history
- Resuming past LLM sessions
- Maintaining context for the Generative AI model
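The filtering and ordering step can be sketched as a pure function. The message shape (`userId`, `timestamp` fields) is an assumption about the schema, not the repo's actual model:

```javascript
// Illustrative sketch of the /load-conversations logic: given stored
// messages, keep the requested user's and sort them oldest-first by
// timestamp so the frontend can replay the history in order.
function loadConversations(messages, userId) {
  return messages
    .filter((m) => m.userId === userId)
    .sort((a, b) => a.timestamp - b.timestamp);
}

const history = loadConversations(
  [
    { userId: "u1", text: "second", timestamp: 2 },
    { userId: "u2", text: "other", timestamp: 1 },
    { userId: "u1", text: "first", timestamp: 1 },
  ],
  "u1"
);
console.log(history.map((m) => m.text)); // prints: [ 'first', 'second' ]
```

With MongoDB, the filter and sort would typically be pushed into the query itself rather than done in application code.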
📝 Future Endpoints (Optional but Useful)
- POST /regenerate-message (ask the model again)
- POST /upload-file (for Gemini document/vision capabilities)
✔️ Acceptance Criteria
- Gemini API integrated using official SDK or REST
- Messages stored in DB (MongoDB recommended)
- Conversation history fetched properly
- Clean architecture (Controller → Service → Routes)
- Proper error handling for model failures
- API key securely stored in environment variables
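For the last criterion, a small sketch of reading the key from the environment at startup. The variable name `GEMINI_API_KEY` is an assumption; the point is that the key lives in `process.env` (e.g. loaded via dotenv), never in source control:

```javascript
// Fail fast if the Gemini key is missing, instead of failing on the first
// request. Accepting `env` as a parameter keeps the function testable.
function getApiKey(env = process.env) {
  const key = env.GEMINI_API_KEY; // variable name is an assumption
  if (!key) {
    throw new Error("GEMINI_API_KEY is not set");
  }
  return key;
}

console.log(typeof getApiKey({ GEMINI_API_KEY: "demo-key" })); // prints: string
```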
Benefits
Learning
Add Screenshots
Priority
High
Record
- I have read the Contributing Guidelines
- I'm a GSSOC'24 contributor
- I'm an IEEE IGDTUW contributor
- I want to work on this issue