
💡[Feature]: Day 3: AI ChatBot Backend: Implement ChatBot Backend (Generative AI Integration) #637

@SB2318

Description


Is there an existing issue for this?

  • I have searched the existing issues

Feature Description

Requirement: Please complete this task before #629

🧠 Why Chatbot Backend?

To communicate with Google Gemini (a generative AI model) using your Gemini API key, we need to build a backend layer that acts as a mediator service.

This backend becomes your Application Programming Interface (API) responsible for:

  • Sending user prompts to the Gemini Generative AI Model
  • Receiving model-generated responses
  • Managing message storage and conversation history
  • Acting like a real-time LLM-powered messaging system

Instead of chatting with another human, your server will continuously send queries to Gemini's text-generation endpoint and return AI-generated replies.
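At its core, this mediator boils down to one HTTPS call per prompt. Below is a minimal sketch of building the JSON body for Gemini's REST `generateContent` endpoint; the model name (`gemini-1.5-flash`) and the helper name `buildGeminiRequest` are illustrative choices, not part of the issue:

```typescript
// Gemini's text-generation REST endpoint; the model segment is an example.
const GEMINI_URL =
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent";

interface GeminiRequest {
  contents: { role: string; parts: { text: string }[] }[];
}

// Hypothetical helper: wrap a user prompt in the shape the
// generateContent endpoint expects.
function buildGeminiRequest(prompt: string): GeminiRequest {
  return { contents: [{ role: "user", parts: [{ text: prompt }] }] };
}

// The actual call from the backend would then look roughly like:
// fetch(`${GEMINI_URL}?key=${process.env.GEMINI_API_KEY}`, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildGeminiRequest(prompt)),
// });
```

Keeping the payload construction in a pure function like this makes it easy to unit-test without spending API quota.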

I’ve prepared some initial setup along with basic endpoints. Feel free to use them as a starting point while building the rest (link below).

https://github.com/SB2318/IEEE-s-Mindful-Devs-Bootcamp/tree/main/chatbot

Visualization

https://uhsocial.in/docs/#/ChatBot/post_gemini_send

Use Case

✅ Tasks


✅ Endpoints to Implement (Phase 1)

1️⃣ POST /send-message

Purpose:
Send a user prompt to the Gemini Generative AI model and return the generated response.

Required Input

  • userId
  • conversationId
  • text – user prompt / question

Workflow

  1. Backend receives the user's message
  2. Calls the Gemini Generative AI API using the model endpoint
  3. Saves both:
    • User’s prompt
    • Gemini’s generated response
  4. Returns the AI-generated output to the frontend

Notes

This behaves like a normal messaging system, except the reply is generated by the LLM (Gemini).
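The four-step workflow above can be sketched as a service-layer function. All names here (`sendMessage`, `callGemini`, `saveMessage`, the `Message` shape) are hypothetical; the real Gemini call and database write are injected as parameters so the flow itself can be tested without an API key or a running MongoDB:

```typescript
// Assumed shape of a stored chat message (field names are illustrative).
interface Message {
  userId: string;
  conversationId: string;
  role: "user" | "model";
  text: string;
  timestamp: number;
}

// Service-layer sketch for POST /send-message.
async function sendMessage(
  input: { userId: string; conversationId: string; text: string },
  callGemini: (prompt: string) => Promise<string>,
  saveMessage: (m: Message) => Promise<void>,
): Promise<string> {
  // 1. Persist the user's prompt.
  await saveMessage({ ...input, role: "user", timestamp: Date.now() });
  // 2. Call the Gemini model with the prompt text.
  const reply = await callGemini(input.text);
  // 3. Persist the model's generated response alongside the prompt.
  await saveMessage({
    userId: input.userId,
    conversationId: input.conversationId,
    role: "model",
    text: reply,
    timestamp: Date.now(),
  });
  // 4. Return the AI-generated output for the route handler to send back.
  return reply;
}
```

Injecting the two side effects keeps the controller thin and matches the Controller → Service → Routes split listed in the acceptance criteria.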


2️⃣ GET /load-conversations

Purpose:
Fetch all messages for a specific user with proper ordering.

Required Input

  • userId

Workflow

  1. Query all conversations from the database (based on userId)
  2. Sort messages by timestamp
  3. Return complete conversation history
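Step 2 of this workflow is just a chronological sort before returning the history. A minimal sketch, assuming each stored message carries a numeric `timestamp` field (the `StoredMessage` shape and `orderHistory` name are assumptions for illustration):

```typescript
// Assumed shape of a message as stored in the database.
interface StoredMessage {
  conversationId: string;
  text: string;
  timestamp: number;
}

// Return a new array sorted oldest-first, leaving the input untouched.
function orderHistory(messages: StoredMessage[]): StoredMessage[] {
  return [...messages].sort((a, b) => a.timestamp - b.timestamp);
}
```

In practice the database can do this for you (e.g. a MongoDB `sort` on the timestamp field), but the invariant is the same: the frontend always receives messages oldest-first.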

Notes

This endpoint helps in:

  • Displaying conversation history
  • Resuming past LLM sessions
  • Maintaining context for the Generative AI model

📝 Future Endpoints (Optional but Useful)

  • POST /regenerate-message (ask model again)
  • POST /upload-file (for Gemini document/vision capabilities)

✔️ Acceptance Criteria

  • Gemini API integrated using official SDK or REST
  • Messages stored in DB (MongoDB recommended)
  • Conversation history fetched properly
  • Clean architecture (Controller → Service → Routes)
  • Proper error handling for model failures
  • API key securely stored in environment variables
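For the last criterion, a small fail-fast sketch of reading the key from the environment; `GEMINI_API_KEY` is a common convention but the exact variable name is an assumption:

```typescript
// Read the Gemini API key from an environment-style record and fail
// fast at startup if it is missing, instead of failing on the first
// model call. Pass `process.env` in a real Node backend.
function getApiKey(env: Record<string, string | undefined>): string {
  const key = env.GEMINI_API_KEY;
  if (!key) {
    throw new Error("GEMINI_API_KEY is not set");
  }
  return key;
}
```

Loading the key once at startup (e.g. `getApiKey(process.env)`) keeps it out of source control and makes a missing `.env` entry an immediate, obvious error.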

Benefits

Learning

Add ScreenShots


Priority

High

Record

  • I have read the Contributing Guidelines
  • I'm a GSSOC'24 contributor
  • I'm an IEEE IGDTUW contributor
  • I want to work on this issue
