This application uses environment variables for configuration. Follow these steps to set up your environment:
Copy the example environment file and customize it for your setup:
```bash
cp .env.example .env
```

Edit your `.env` file with the following variables:
| Variable | Description | Default | Example |
|---|---|---|---|
| `ADMIN_EMAILS` | Comma-separated admin emails | (empty) | `admin1@example.com,admin2@example.com` |
| `PORT` | Server port | `3000` | `3000` |
| `NODE_ENV` | Environment mode | `development` | `development`, `production`, `test` |
| `DATABASE_URL` | Database connection string | `file:./dev.db` | `postgresql://user:pass@localhost:5432/db` |
| `RATE_LIMIT_WINDOW_MS` | Rate limit window in milliseconds | `900000` (15 min) | `900000` |
| `RATE_LIMIT_MAX_REQUESTS` | Max requests per window | `100` | `100` |
| `FIREBASE_SERVICE_ACCOUNT_PATH` | Path to Firebase service account JSON | `./firebase-service-account.json` | `./config/firebase.json` |
| `PRISMA_LOG_QUERIES` | Enable query logging | `true` | `true`, `false` |
| `PRISMA_LOG_ERRORS` | Enable error logging | `true` | `true`, `false` |
| `PRISMA_LOG_WARNINGS` | Enable warning logging | `true` | `true`, `false` |
| `DATA_FILES_DIR` | Directory where uploaded data files are stored (container path) | `/app/data/project-data-files` | `/app/data/project-data-files` |
| `DATA_FILES_HOST_DIR` | Host/server path for data files (used for Docker bind mounts) | (empty; falls back to `DATA_FILES_DIR`) | `/home/shared/project-data-files` |
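Putting these together, a minimal `.env` for local development might look like the following sketch. The values are illustrative, not required:

```bash
# .env — example values only; adjust for your setup
ADMIN_EMAILS=admin1@example.com,admin2@example.com
PORT=3000
NODE_ENV=development
DATABASE_URL=file:./dev.db
RATE_LIMIT_WINDOW_MS=900000
RATE_LIMIT_MAX_REQUESTS=100
FIREBASE_SERVICE_ACCOUNT_PATH=./firebase-service-account.json
PRISMA_LOG_QUERIES=true
PRISMA_LOG_ERRORS=true
PRISMA_LOG_WARNINGS=true
# DATA_FILES_DIR / DATA_FILES_HOST_DIR are mainly relevant when running in Docker
```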
Add admin emails to your `.env` file and run the seed script:

```bash
# Add to .env:
# ADMIN_EMAILS=admin1@example.com,admin2@example.com

# Run seed script
npm run seed
```

```bash
# Development
npm run dev

# Production (after building)
npm run build
npm start

# Test environment
npm run test
```

This application includes Docker support with automatic database migrations and admin user seeding on startup.
The recommended way to run the backend is with Docker Compose, which ensures proper volume mounts for data persistence:
```bash
# Create required directories on host
mkdir -p data project-data-files

# Start the service
docker-compose up -d

# View logs
docker-compose logs -f
```

**Important Volume Mounts:**

- `./data:/app/data` - Persists the SQLite database across container restarts
- `./project-data-files:/app/data/project-data-files` - Persists uploaded data files for deployed projects
- `/var/run/docker.sock:/var/run/docker.sock` - Allows the backend to manage student project containers
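For reference, a `docker-compose.yml` matching the mounts above might look roughly like this. The service name, port, and environment values are assumptions; check the compose file shipped with the repository:

```yaml
services:
  backend:                 # service name is an assumption
    build: .
    ports:
      - "8000:8000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=file:/app/data/sqlite.db
      - ADMIN_EMAILS=admin1@example.com,admin2@example.com
    volumes:
      - ./data:/app/data                                    # SQLite database
      - ./project-data-files:/app/data/project-data-files   # uploaded data files
      - /var/run/docker.sock:/var/run/docker.sock           # container management
```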
If you prefer to run without Docker Compose:
```bash
# Create required directories
mkdir -p data project-data-files

# Build the image
docker build -t project-showcase-backend .

# Run the container with all necessary volume mounts
docker run -d \
  --name showcase-backend \
  -p 8000:8000 \
  -e ADMIN_EMAILS="admin1@example.com,admin2@example.com" \
  -e DATABASE_URL="file:/app/data/sqlite.db" \
  -e NODE_ENV="production" \
  -v $(pwd)/data:/app/data \
  -v $(pwd)/project-data-files:/app/data/project-data-files \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $(pwd)/firebase-service-account.json:/app/firebase-service-account.json:ro \
  --network projects_network \
  project-showcase-backend

# View logs
docker logs -f showcase-backend
```

All data is persisted on the host machine in the following directories:

- `./data/` - Contains the SQLite database (`sqlite.db`) and Prisma migrations
- `./project-data-files/` - Contains uploaded data files that are mounted to student project containers
These directories will survive container restarts, rebuilds, and updates.
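Since all persistent state lives in those two directories, a host-side backup is just an archive of them. The filename below is an example:

```shell
# Run from the repository root, where the bind-mounted directories live.
mkdir -p data project-data-files   # no-op if they already exist

# Archive the database and uploaded files into one backup tarball.
tar czf showcase-backup.tar.gz data project-data-files
```

Restore by extracting the tarball over a fresh checkout before starting the container.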
The Dockerfile `CMD` runs three commands sequentially:

1. `npx prisma migrate deploy` - Database migrations run automatically
2. `npm run seed` - Admin users are seeded from `ADMIN_EMAILS`
3. `npm start` - Server starts and accepts connections
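In Dockerfile terms, that startup sequence can be expressed roughly as follows; this is a sketch, not necessarily the exact `CMD` in this repository:

```dockerfile
# Run migrations, seed admins, then start the server, in that order.
# The shell form is needed so that && chaining works.
CMD ["sh", "-c", "npx prisma migrate deploy && npm run seed && npm start"]
```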
When deploying projects, you can now provide custom Docker build arguments to configure the build process. This is useful for setting build-time variables like environment-specific configurations, feature flags, or build optimization settings.
When deploying a project, include the optional `buildArgs` field in your request:
```bash
# Deploy with build arguments
curl -X POST http://localhost:3000/api/projects/deploy \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{
    "teamId": 1,
    "githubUrl": "https://github.com/username/repository",
    "buildArgs": {
      "NODE_ENV": "production",
      "API_URL": "https://api.example.com",
      "FEATURE_FLAG": "enabled"
    }
  }'
```

The streaming deployment endpoint also supports build arguments:
```bash
# Deploy with streaming and build arguments
curl -X POST http://localhost:3000/api/projects/deploy/stream \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{
    "teamId": 1,
    "githubUrl": "https://github.com/username/repository",
    "buildArgs": {
      "BUILD_VERSION": "1.0.0",
      "ENABLE_CACHE": "true"
    }
  }'
```

To use the build arguments in your project's Dockerfile, declare them with `ARG`:
```dockerfile
# Dockerfile example
FROM node:18-alpine

# Declare build arguments
ARG NODE_ENV=development
ARG API_URL
ARG FEATURE_FLAG=disabled

# Use build arguments as environment variables
ENV NODE_ENV=$NODE_ENV
ENV API_URL=$API_URL
ENV FEATURE_FLAG=$FEATURE_FLAG

WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .

# Build arguments are available during build
RUN echo "Building with NODE_ENV=$NODE_ENV"

EXPOSE 3000
CMD ["npm", "start"]
```

Build arguments are stored in the database along with the project deployment information and can be viewed in the project details.
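If you are calling the deploy endpoint from Node.js rather than curl, the request body is plain JSON. A small helper like the following keeps the shape consistent with the curl examples above; the helper itself is a hypothetical convenience, not part of the backend API:

```javascript
// Build the JSON body for the deploy endpoints.
// Field names (teamId, githubUrl, buildArgs) come from the curl examples;
// this function is illustrative and not provided by the project.
function buildDeployPayload(teamId, githubUrl, buildArgs = {}) {
  return JSON.stringify({ teamId, githubUrl, buildArgs });
}

// Example usage: this string can be sent as the POST body
// with a Content-Type: application/json header.
const body = buildDeployPayload(1, 'https://github.com/username/repository', {
  NODE_ENV: 'production',
});
console.log(body);
```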
When deploying projects, you can upload a data file that will be automatically mounted to the deployed container. This is useful for providing datasets, configuration files, or other resources that your project needs at runtime.
Data files must be uploaded using `multipart/form-data`:
```bash
# Deploy with a data file
curl -X POST http://localhost:3000/api/projects/deploy \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -F "teamId=1" \
  -F "githubUrl=https://github.com/username/repository" \
  -F "buildArgs={\"NODE_ENV\":\"production\"}" \
  -F "dataFile=@/path/to/your/data.csv"
```

```bash
# Deploy with streaming and data file
curl -X POST http://localhost:3000/api/projects/deploy-streaming \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -F "teamId=1" \
  -F "githubUrl=https://github.com/username/repository" \
  -F "dataFile=@/path/to/dataset.json"
```

- **Upload Limit**: 100MB per file
- **Storage**: Files are stored on the host at `./project-data-files/` (persists across restarts)
- **Mount Path**: All uploaded files are mounted read-only at `/data/uploaded-data` in the container
- **Access in Container**: Your application can read the file at `/data/uploaded-data`
```python
# Python example
import pandas as pd

# Read the uploaded data file
df = pd.read_csv('/data/uploaded-data')
print(f"Loaded {len(df)} rows from uploaded data")
```

```javascript
// Node.js example
const fs = require('fs');

// Read the uploaded data file
const dataPath = '/data/uploaded-data';
const data = fs.readFileSync(dataPath, 'utf8');
console.log('Loaded data:', data);
```

The data file path is stored in the database with the project deployment information.