Backend service for an AI-powered CV optimization platform that helps users tailor their CVs and generate cover letters for specific job descriptions using large language models.
Users upload their CV and a job description, and the system analyzes the content, generates an optimized version of the CV, and produces a targeted cover letter while tracking AI usage through a credit system.
This repository demonstrates backend engineering depth across architecture, domain modeling, reliability, security, and operational concerns, not just endpoint wiring.
- Modular NestJS architecture with clear bounded contexts (`auth`, `users`, `credits`, `files`, `optimizations`, `llm`, `email`); see the module sketch after this list
- Database-first approach with explicit SQL migrations and TypeORM entities (`synchronize: false`)
- Credit-ledger accounting model with idempotency support for safe, auditable balance changes
- JWT + refresh-token auth flow with OTP-based login
- File ingestion pipeline for `PDF`, `DOCX`, `TXT`, and `MD`
- LLM orchestration layer with provider abstraction (OpenAI integration designed to remain vendor-agnostic)
- Structured validation, error handling, and configuration layering for production readiness
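A minimal sketch of how the root module could compose the bounded contexts listed above (the module import paths and file names are assumptions for illustration, not the repository's exact layout):

```typescript
// app.module.ts (sketch): root module wiring the domain modules together.
// Module and file names below are illustrative assumptions.
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import { AuthModule } from './auth/auth.module';
import { UsersModule } from './users/users.module';
import { CreditsModule } from './credits/credits.module';
import { FilesModule } from './files/files.module';
import { OptimizationsModule } from './optimizations/optimizations.module';
import { LlmModule } from './llm/llm.module';
import { EmailModule } from './email/email.module';

@Module({
  imports: [
    ConfigModule.forRoot({ isGlobal: true }), // environment-based configuration
    AuthModule,
    UsersModule,
    CreditsModule,
    FilesModule,
    OptimizationsModule,
    LlmModule,
    EmailModule,
  ],
})
export class AppModule {}
```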
- Runtime: Node.js, TypeScript
- Framework: NestJS 11
- Database: MySQL + TypeORM
- Auth: JWT + Passport
- AI: OpenAI Node SDK
- Email: Nodemailer
- Testing: Jest + Supertest
The backend follows a layered architecture typical of production NestJS services.
Request flow:
- Controller receives request and validates DTO input.
- Global `JwtAuthGuard` enforces authentication by default (with explicit `@Public()` exceptions); a controller sketch follows this list.
- Service layer executes domain logic.
- Repositories/entities persist state changes in MySQL.
- Cross-cutting concerns (global exception filter, validation pipe, config service) apply consistently.
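A hedged sketch of a controller under this flow. The DTO fields, service method names, and request shape are assumptions based on the description above, not the actual handlers:

```typescript
// optimizations.controller.ts (sketch): covered by the global JwtAuthGuard;
// the DTO is validated by the global ValidationPipe before the handler runs.
// All names and fields below are illustrative assumptions.
import { Body, Controller, Get, Param, Post, Req } from '@nestjs/common';
import { IsString, IsUUID } from 'class-validator';
import { OptimizationsService } from './optimizations.service';

export class CreateOptimizationDto {
  @IsUUID()
  cvFileId: string;

  @IsString()
  jobDescription: string;
}

@Controller('optimizations')
export class OptimizationsController {
  constructor(private readonly optimizations: OptimizationsService) {}

  @Post()
  create(@Req() req, @Body() dto: CreateOptimizationDto) {
    // req.user is populated by the JWT strategy once the guard passes
    return this.optimizations.create(req.user.id, dto);
  }

  @Get(':id')
  findOne(@Req() req, @Param('id') id: string) {
    return this.optimizations.findOne(req.user.id, id);
  }
}
```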
Key structural decisions:
- Module isolation: each domain owns its controller/service/models.
- Manual migrations: avoids drift and accidental schema mutations.
- Ledger-based credits: immutable transaction history for traceability.
- Provider abstraction for LLM: business logic is decoupled from vendor SDK details.
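As an illustration of the provider abstraction decision, a hedged sketch of what that boundary could look like. The interface, class, and field names are assumptions rather than the repository's actual API; only the OpenAI SDK calls are standard:

```typescript
// llm/llm-provider.ts (sketch): business logic depends on this interface,
// so swapping OpenAI for another vendor only touches the provider class.
// All names below are illustrative assumptions.
import OpenAI from 'openai';

export interface LlmCompletionRequest {
  prompt: string;
  maxTokens?: number;
}

export interface LlmCompletionResult {
  text: string;
  inputTokens: number;
  outputTokens: number; // feeds llm_usage tracking and credit accounting
}

export interface LlmProvider {
  complete(request: LlmCompletionRequest): Promise<LlmCompletionResult>;
}

export class OpenAiProvider implements LlmProvider {
  private readonly client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

  async complete(request: LlmCompletionRequest): Promise<LlmCompletionResult> {
    const response = await this.client.chat.completions.create({
      model: 'gpt-4o-mini', // model choice is an assumption
      messages: [{ role: 'user', content: request.prompt }],
      max_tokens: request.maxTokens,
    });

    return {
      text: response.choices[0]?.message?.content ?? '',
      inputTokens: response.usage?.prompt_tokens ?? 0,
      outputTokens: response.usage?.completion_tokens ?? 0,
    };
  }
}
```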
The system is designed around a credit-based AI usage model where LLM operations are metered and auditable.
Core entities include:
`users`, `otp_codes`, `refresh_tokens`, `optimizations`, `credit_ledger`, `llm_usage`, `emails_sent`
This model supports secure auth, metered AI usage, and auditability of user credits.
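To illustrate the ledger model, a hedged sketch of how a debit might be recorded idempotently. The service, entity fields, and method names are assumptions; the table corresponds to `credit_ledger` above:

```typescript
// credits.service.ts (sketch): each balance change is an append-only ledger row;
// an idempotency key prevents double-charging when the same operation is retried.
// Entity fields and wiring below are illustrative assumptions.
import { ConflictException, Injectable } from '@nestjs/common';
import { InjectRepository } from '@nestjs/typeorm';
import { Repository } from 'typeorm';
import { CreditLedgerEntry } from './credit-ledger.entity';

@Injectable()
export class CreditsService {
  constructor(
    @InjectRepository(CreditLedgerEntry)
    private readonly ledger: Repository<CreditLedgerEntry>,
  ) {}

  async debit(userId: string, amount: number, idempotencyKey: string) {
    // If this idempotency key was already recorded, return the existing entry
    // instead of writing a duplicate debit. In production this check would sit
    // inside a transaction backed by a unique constraint on the key.
    const existing = await this.ledger.findOne({ where: { userId, idempotencyKey } });
    if (existing) return existing;

    const balance = await this.getBalance(userId);
    if (balance < amount) {
      throw new ConflictException('Insufficient credits');
    }

    // Ledger rows are never updated: the balance is derived, not overwritten.
    return this.ledger.save(
      this.ledger.create({ userId, amount: -amount, idempotencyKey }),
    );
  }

  async getBalance(userId: string): Promise<number> {
    const { sum } = await this.ledger
      .createQueryBuilder('entry')
      .select('COALESCE(SUM(entry.amount), 0)', 'sum')
      .where('entry.userId = :userId', { userId })
      .getRawOne();
    return Number(sum);
  }
}
```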
Base path: /api/v1
- `POST /auth/send-otp`
- `POST /auth/verify-otp`
- `POST /auth/refresh`
- `POST /auth/logout`
- `GET /users/me`
- `GET /users/stats`
- `GET /credits/balance`
- `GET /credits/history`
- `POST /credits/add`
- `POST /files/upload`
- `GET /files/:filename`
- `POST /optimizations`
- `GET /optimizations`
- `GET /optimizations/:id`
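A hedged usage example of the OTP login flow against these endpoints. Request and response field names such as `email`, `otp`, and `accessToken` are assumptions; the actual payloads may differ:

```typescript
// Client flow (sketch, run in an ES module context): request an OTP,
// exchange it for tokens, then call an authenticated endpoint.
const base = 'http://localhost:3000/api/v1';

await fetch(`${base}/auth/send-otp`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ email: 'user@example.com' }),
});

const { accessToken } = await fetch(`${base}/auth/verify-otp`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ email: 'user@example.com', otp: '123456' }),
}).then((res) => res.json());

const balance = await fetch(`${base}/credits/balance`, {
  headers: { Authorization: `Bearer ${accessToken}` },
}).then((res) => res.json());
```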
- Install dependencies: `npm install`
- Copy the environment template: `cp .env.example .env`
- Set required values in `.env`:
  - `DB_HOST`, `DB_PORT`, `DB_USERNAME`, `DB_PASSWORD`, `DB_DATABASE`
  - `JWT_SECRET`
  - `EMAIL_HOST`, `EMAIL_PORT`, `EMAIL_USER`, `EMAIL_PASSWORD`, `EMAIL_FROM`
  - `OPENAI_API_KEY`
- Apply the SQL migrations:
  - `mysql -u root -p cvoptimizer < src/database/migrations/001_initial_schema.sql`
  - `mysql -u root -p cvoptimizer < src/database/migrations/002_add_input_file_format.sql`
- Start the development server: `npm run start:dev`
- Server: `http://localhost:3000/api/v1`
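Because migrations are applied manually, TypeORM should never manage the schema itself. A minimal sketch of the database wiring under that assumption, reading the environment variables listed above (the module and file name are assumptions; the options are standard `@nestjs/typeorm` configuration):

```typescript
// database.module.ts (sketch): entities map onto the manually migrated schema,
// and synchronize: false keeps TypeORM from mutating it.
import { Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { TypeOrmModule } from '@nestjs/typeorm';

@Module({
  imports: [
    TypeOrmModule.forRootAsync({
      imports: [ConfigModule],
      inject: [ConfigService],
      useFactory: (config: ConfigService) => ({
        type: 'mysql',
        host: config.get<string>('DB_HOST'),
        port: config.get<number>('DB_PORT'),
        username: config.get<string>('DB_USERNAME'),
        password: config.get<string>('DB_PASSWORD'),
        database: config.get<string>('DB_DATABASE'),
        autoLoadEntities: true,
        synchronize: false, // schema is owned by the SQL migrations, never auto-synced
      }),
    }),
  ],
})
export class DatabaseModule {}
```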
- Global input validation (`whitelist`, `forbidNonWhitelisted`, `transform`); see the bootstrap sketch after this list
- Global exception filtering for predictable API errors
- Environment-based configuration via Nest ConfigModule
- `.env` and uploaded user files are git-ignored
- Production deployment should use managed secrets and rotated credentials
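A minimal sketch of the bootstrap that applies these globals. The `ValidationPipe` options mirror the list above; the `HttpExceptionFilter` class and its path are assumptions standing in for the repository's global exception filter:

```typescript
// main.ts (sketch): wires the global prefix, validation pipe, and exception filter.
import { ValidationPipe } from '@nestjs/common';
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { HttpExceptionFilter } from './common/filters/http-exception.filter'; // assumed path

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  app.setGlobalPrefix('api/v1');

  app.useGlobalPipes(
    new ValidationPipe({
      whitelist: true,            // strip properties not declared in the DTO
      forbidNonWhitelisted: true, // reject requests carrying unknown properties
      transform: true,            // coerce payloads into typed DTO instances
    }),
  );

  app.useGlobalFilters(new HttpExceptionFilter());

  await app.listen(3000);
}
bootstrap();
```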
Project and implementation documents are in `Docs/`.
- Designing maintainable backend services with clear modular boundaries
- Modeling transactional business logic with auditability requirements
- Integrating LLM-powered features while controlling usage and cost
- Building production-ready APIs with migrations, validation, and structured error handling
Possible extensions to this system include:
- Background job processing for long-running optimizations
- Streaming LLM responses
- Vector search for resume/job similarity scoring
- Usage analytics and cost monitoring dashboards