openai-assistant-backend

A minimal backend service demonstrating how to interact with the OpenAI Assistants API using a simple HTTP interface.

The service allows clients to:

  • create conversation threads
  • send messages to an assistant
  • retrieve assistant responses

The goal of this project is to illustrate a simple communication layer between a backend application and an LLM-powered assistant.

Architecture

The service acts as a thin communication layer between a client application and the OpenAI Assistants API.

Flow:

Client Request
  ↓
Express API Endpoint
  ↓
Thread Creation / Message Submission
  ↓
Assistant Run Execution
  ↓
Polling for Completion
  ↓
Return Assistant Response

This pattern is useful when integrating LLM-powered assistants into web applications that require conversation persistence.
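
The "Polling for Completion" step above can be sketched as a small helper that repeatedly checks a run's status until it reaches a terminal state. This is an illustrative sketch, not the project's actual code: `getRunStatus` is a hypothetical injected function (in the real service it would wrap the OpenAI SDK's run-retrieval call).

```javascript
// Poll until the run reaches a terminal state, or give up.
// `getRunStatus` is a hypothetical injected async function returning a
// status string such as "queued", "in_progress", "completed", or "failed".
async function pollRunUntilDone(getRunStatus, { intervalMs = 500, maxAttempts = 60 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await getRunStatus();
    if (status === "completed") return status;
    if (status === "failed" || status === "cancelled" || status === "expired") {
      throw new Error(`Run ended with status: ${status}`);
    }
    // Wait before polling again so we don't hammer the API.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Timed out waiting for run to complete");
}
```

Injecting the status fetcher keeps the retry loop testable without network access; a production version would also add backoff and jitter.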

Stack

  • Node.js
  • Express
  • OpenAI Node SDK (openai)
  • dotenv for environment variables

Prerequisites

  • Node.js 18+
  • An OpenAI API key
  • An existing Assistant ID

Setup

  1. Install dependencies:
npm install
  2. Create a .env file in the project root:
OPENAI_API_KEY=your_api_key_here
ASSISTANT_ID=your_assistant_id_here

Run

Start the server:

node index.js

The app listens on:

http://localhost:3000

API Endpoints

GET /

Health check.

Response:

I am alive

GET /createThread

Creates a new OpenAI thread.

Example response:

{
    "method": "create_thread",
    "thread_id": "thread_..."
}

POST /sendMessage

Sends a user message to an existing thread, runs the configured assistant, and returns the latest assistant reply.

Request body:

{
    "thread_id": "thread_...",
    "message": "Hello assistant"
}

Example response:

{
    "method": "send_message",
    "reply": "Hello! How can I help you today?"
}

Quick Test (cURL)

  1. Create thread:
curl http://localhost:3000/createThread
  2. Send message (replace THREAD_ID):
curl -X POST http://localhost:3000/sendMessage \
  -H "Content-Type: application/json" \
  -d '{"thread_id":"THREAD_ID","message":"Hi there"}'
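
The same quick test can also be run from Node 18+ with the built-in fetch. A minimal client sketch, assuming the service is reachable at `baseUrl`:

```javascript
// Call the service's two endpoints from Node 18+ using the built-in fetch.
async function createThread(baseUrl) {
  const resp = await fetch(`${baseUrl}/createThread`);
  const data = await resp.json();
  return data.thread_id;
}

async function sendMessage(baseUrl, threadId, message) {
  const resp = await fetch(`${baseUrl}/sendMessage`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ thread_id: threadId, message }),
  });
  const data = await resp.json();
  return data.reply;
}
```

Usage: `const threadId = await createThread("http://localhost:3000");` followed by `await sendMessage("http://localhost:3000", threadId, "Hi there");`.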

Limitations

This project is intentionally minimal and does not yet include:

  • streaming responses
  • robust retry logic
  • request validation
  • structured logging
  • rate limiting
  • persistent thread storage
  • background job handling

In production systems, these concerns should typically be handled using task queues (e.g., Redis/Celery/BullMQ) and proper observability tooling.

Possible Improvements

Future improvements could include:

  • streaming assistant responses
  • WebSocket support
  • background worker for run polling
  • persistent storage of threads
  • better error handling and retry logic
  • request validation with Zod or Joi
