project-showcase-backend

Environment Configuration

This application uses environment variables for configuration. Follow these steps to set up your environment:

1. Create Environment File

Copy the example environment file and customize it for your setup:

cp .env.example .env

2. Configure Environment Variables

Edit your .env file with the following variables:

| Variable | Description | Default | Example |
| --- | --- | --- | --- |
| ADMIN_EMAILS | Comma-separated admin emails | (empty) | admin1@example.com,admin2@example.com |
| PORT | Server port | 3000 | 3000 |
| NODE_ENV | Environment mode | development | development, production, test |
| DATABASE_URL | Database connection string | file:./dev.db | postgresql://user:pass@localhost:5432/db |
| RATE_LIMIT_WINDOW_MS | Rate limit window in milliseconds | 900000 (15 min) | 900000 |
| RATE_LIMIT_MAX_REQUESTS | Max requests per window | 100 | 100 |
| FIREBASE_SERVICE_ACCOUNT_PATH | Path to Firebase service account JSON | ./firebase-service-account.json | ./config/firebase.json |
| PRISMA_LOG_QUERIES | Enable query logging | true | true, false |
| PRISMA_LOG_ERRORS | Enable error logging | true | true, false |
| PRISMA_LOG_WARNINGS | Enable warning logging | true | true, false |
| DATA_FILES_DIR | Directory where uploaded data files are stored (container path) | /app/data/project-data-files | /app/data/project-data-files |
| DATA_FILES_HOST_DIR | Host/server path for data files (used for Docker bind mounts) | (empty; uses DATA_FILES_DIR) | /home/shared/project-data-files |
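Putting these together, a development .env might look like the following (values are illustrative, not requirements):

```
# .env — illustrative development configuration
ADMIN_EMAILS=admin1@example.com,admin2@example.com
PORT=3000
NODE_ENV=development
DATABASE_URL=file:./dev.db
RATE_LIMIT_WINDOW_MS=900000
RATE_LIMIT_MAX_REQUESTS=100
FIREBASE_SERVICE_ACCOUNT_PATH=./firebase-service-account.json
PRISMA_LOG_QUERIES=true
PRISMA_LOG_ERRORS=true
PRISMA_LOG_WARNINGS=true
DATA_FILES_DIR=/app/data/project-data-files
# DATA_FILES_HOST_DIR is only needed for Docker bind mounts
```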

3. Seed Admin Users

Add admin emails to your .env file and run the seed script:

# Add to .env:
# ADMIN_EMAILS=admin1@example.com,admin2@example.com

# Run seed script
npm run seed

4. Running in Different Environments

# Development
npm run dev

# Production (after building)
npm run build
npm start

# Test environment
npm run test

Docker Deployment

This application includes Docker support with automatic database migrations and admin user seeding on startup.

Quick Start with Docker Compose (Recommended)

The recommended way to run the backend is with Docker Compose, which ensures proper volume mounts for data persistence:

# Create required directories on host
mkdir -p data project-data-files

# Start the service
docker-compose up -d

# View logs
docker-compose logs -f

Important Volume Mounts:

  • ./data:/app/data - Persists the SQLite database across container restarts
  • ./project-data-files:/app/data/project-data-files - Persists uploaded data files for deployed projects
  • /var/run/docker.sock:/var/run/docker.sock - Allows the backend to manage student project containers
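The repository ships its own docker-compose.yml; for reference, a minimal compose file matching the mounts above might look like this (the service name, port, and environment values here are assumptions, not the repository's actual file):

```yaml
services:
  backend:
    build: .
    ports:
      - "8000:8000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=file:/app/data/sqlite.db
      - ADMIN_EMAILS=admin1@example.com,admin2@example.com
    volumes:
      - ./data:/app/data                                   # SQLite database
      - ./project-data-files:/app/data/project-data-files  # uploaded data files
      - /var/run/docker.sock:/var/run/docker.sock          # manage project containers
    restart: unless-stopped
```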

Manual Docker Run

If you prefer to run without Docker Compose:

# Create required directories
mkdir -p data project-data-files

# Build the image
docker build -t project-showcase-backend .

# Run the container with all necessary volume mounts
docker run -d \
  --name showcase-backend \
  -p 8000:8000 \
  -e PORT="8000" \
  -e ADMIN_EMAILS="admin1@example.com,admin2@example.com" \
  -e DATABASE_URL="file:/app/data/sqlite.db" \
  -e NODE_ENV="production" \
  -v $(pwd)/data:/app/data \
  -v $(pwd)/project-data-files:/app/data/project-data-files \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $(pwd)/firebase-service-account.json:/app/firebase-service-account.json:ro \
  --network projects_network \
  project-showcase-backend

# View logs
docker logs -f showcase-backend

Data Persistence

All data is persisted on the host machine in the following directories:

  • ./data/ - Contains the SQLite database (sqlite.db) and Prisma migrations
  • ./project-data-files/ - Contains uploaded data files that are mounted to student project containers

These directories will survive container restarts, rebuilds, and updates.

What Happens on Container Startup

The Dockerfile CMD runs three commands sequentially:

  1. npx prisma migrate deploy - Database migrations run automatically
  2. npm run seed - Admin users are seeded from ADMIN_EMAILS
  3. npm start - Server starts and accepts connections
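In Dockerfile terms, that startup sequence corresponds to a chained CMD along these lines (the exact line in the repository's Dockerfile may differ):

```dockerfile
# Run migrations, seed admins, then start the server — in that order
CMD ["sh", "-c", "npx prisma migrate deploy && npm run seed && npm start"]
```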

Project Deployment with Build Flags

When deploying a project, you can provide custom Docker build arguments to configure the build process. This is useful for setting build-time variables such as environment-specific configuration, feature flags, or build optimization settings.

API Usage

When deploying a project, include the optional buildArgs field in your request:

# Deploy with build arguments
curl -X POST http://localhost:3000/api/projects/deploy \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{
    "teamId": 1,
    "githubUrl": "https://github.com/username/repository",
    "buildArgs": {
      "NODE_ENV": "production",
      "API_URL": "https://api.example.com",
      "FEATURE_FLAG": "enabled"
    }
  }'

Streaming Deployment with Build Flags

The streaming deployment endpoint also supports build arguments:

# Deploy with streaming and build arguments
curl -X POST http://localhost:3000/api/projects/deploy/stream \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{
    "teamId": 1,
    "githubUrl": "https://github.com/username/repository",
    "buildArgs": {
      "BUILD_VERSION": "1.0.0",
      "ENABLE_CACHE": "true"
    }
  }'

Using Build Arguments in Dockerfile

To use the build arguments in your project's Dockerfile, declare them with ARG:

# Dockerfile example
FROM node:18-alpine

# Declare build arguments
ARG NODE_ENV=development
ARG API_URL
ARG FEATURE_FLAG=disabled

# Use build arguments as environment variables
ENV NODE_ENV=$NODE_ENV
ENV API_URL=$API_URL
ENV FEATURE_FLAG=$FEATURE_FLAG

WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .

# Build arguments are available during build
RUN echo "Building with NODE_ENV=$NODE_ENV"

EXPOSE 3000
CMD ["npm", "start"]

Build arguments are stored in the database along with the project deployment information and can be viewed in the project details.
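Internally, each buildArgs key/value pair presumably becomes a --build-arg flag on the docker build invocation. A minimal sketch of that mapping (the helper name is hypothetical, not part of this codebase):

```python
# Hypothetical helper: turn a buildArgs mapping into `docker build` CLI flags.
def build_arg_flags(build_args: dict) -> list:
    """Expand {"KEY": "value"} into ["--build-arg", "KEY=value", ...]."""
    flags = []
    for key, value in build_args.items():
        flags += ["--build-arg", f"{key}={value}"]
    return flags

# Example: assemble a full build command for a team's image
cmd = ["docker", "build",
       *build_arg_flags({"NODE_ENV": "production", "API_URL": "https://api.example.com"}),
       "-t", "team-1-project", "."]
print(" ".join(cmd))
```

Keys not declared with ARG in the project's Dockerfile are simply ignored by Docker (with a warning), so extra buildArgs entries are harmless.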

Project Deployment with Data Files

When deploying projects, you can upload a data file that will be automatically mounted to the deployed container. This is useful for providing datasets, configuration files, or other resources that your project needs at runtime.

API Usage with Data File Upload

Data files must be uploaded using multipart/form-data:

# Deploy with a data file
curl -X POST http://localhost:3000/api/projects/deploy \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -F "teamId=1" \
  -F "githubUrl=https://github.com/username/repository" \
  -F "buildArgs={\"NODE_ENV\":\"production\"}" \
  -F "dataFile=@/path/to/your/data.csv"

Streaming Deployment with Data File

# Deploy with streaming and data file
curl -X POST http://localhost:3000/api/projects/deploy/stream \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -F "teamId=1" \
  -F "githubUrl=https://github.com/username/repository" \
  -F "dataFile=@/path/to/dataset.json"

Data File Details

  • Upload Limit: 100MB per file
  • Storage: Files are stored on the host at ./project-data-files/ (persists across restarts)
  • Mount Path: The uploaded file is mounted read-only at /data/uploaded-data in the container
  • Access in Container: Your application can read the file at /data/uploaded-data

Example: Using Data File in Your Application

# Python example
import pandas as pd

# Read the uploaded data file
df = pd.read_csv('/data/uploaded-data')
print(f"Loaded {len(df)} rows from uploaded data")

// Node.js example
const fs = require('fs');

// Read the uploaded data file
const dataPath = '/data/uploaded-data';
const data = fs.readFileSync(dataPath, 'utf8');
console.log('Loaded data:', data);

The data file path is stored in the database with the project deployment information.
