This project demonstrates an event-driven task processing system built with:
- FastAPI for the API service
- RQ + Redis for asynchronous background processing
- React + Vite + Bun for the web interface
- Docker Compose for local development
- Kubernetes + Helm for container orchestration
- GitHub Actions for CI automation
The demo focuses on two realistic background workloads: image processing and CSV analysis.
- The web app submits a task to the API.
- The API validates the request, stores task metadata in Redis, and enqueues a background job.
- The worker service consumes the job from Redis and processes it independently.
- The worker updates task status and result in Redis.
- The web app polls the API to show real-time task progress.
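The lifecycle above can be sketched as a minimal status record. This is a sketch only: the status names, `TaskRecord` fields, and `process` helper are hypothetical stand-ins for what the real services keep in Redis.

```python
from dataclasses import dataclass
from typing import Any, Callable

# Hypothetical status names; the values actually stored in Redis may differ.
QUEUED, STARTED, FINISHED, FAILED = "queued", "started", "finished", "failed"

@dataclass
class TaskRecord:
    task_id: str
    task_type: str
    payload: dict
    status: str = QUEUED
    result: Any = None

def process(record: TaskRecord, handler: Callable[[dict], Any]) -> TaskRecord:
    """Sketch of the worker side: mark the task started, run the
    workload handler, then record either the result or the failure."""
    record.status = STARTED
    try:
        record.result = handler(record.payload)
        record.status = FINISHED
    except Exception as exc:  # a failed job keeps its error message as the result
        record.result = {"error": str(exc)}
        record.status = FAILED
    return record
```

In the real system the record lives in Redis and the handler runs inside the RQ worker; the sketch only shows the state transitions the web app observes while polling.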
Supported task types: `image_processing` and `csv_analysis`.
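As an illustration of the `csv_analysis` workload, a lightweight structural summary of submitted CSV text can be computed entirely in memory. The helper name and output shape below are assumptions for illustration; the worker service defines the actual result format.

```python
import csv
import io

def summarize_csv(csv_text: str) -> dict:
    """Hypothetical summary helper: report column names and data-row count
    without ever writing the file to disk."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    if not rows:
        return {"columns": [], "row_count": 0}
    header, body = rows[0], rows[1:]
    return {"columns": header, "row_count": len(body)}

# summarize_csv("name,amount\nAsha,10\nRavi,20")
# -> {'columns': ['name', 'amount'], 'row_count': 2}
```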
```
.
├── charts/
├── infra/k8s/
├── packages/common/
├── services/api/
├── services/worker/
├── web/
├── docker-compose.yml
└── .github/workflows/
```
```
cp .env.example .env
docker compose up --build
```

Services:

- API: http://localhost:8000
- Web: http://localhost:5173
- Redis: localhost:6379
Python services use uv.
```
uv sync
uv run --package api-service uvicorn api_service.main:app --host 0.0.0.0 --port 8000 --reload
uv run --package worker-service python -m worker_service.main
```

The frontend uses bun:

```
cd web
bun install
bun run dev --host
```

You also need a Redis instance running locally:

```
docker run --rm -p 6379:6379 redis:7-alpine
```

The API exposes the following endpoints:

- `GET /health/live`
- `GET /health/ready`
- `POST /api/v1/tasks`
- `GET /api/v1/tasks`
- `GET /api/v1/tasks/{task_id}`
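A client can poll `GET /api/v1/tasks/{task_id}` until the task reaches a terminal state. In this minimal sketch, `fetch` stands in for the HTTP call, and the terminal status names are assumptions rather than the API's documented values:

```python
import time
from typing import Callable

def poll_task(fetch: Callable[[str], dict], task_id: str,
              interval: float = 0.5, max_attempts: int = 20) -> dict:
    """Poll until the task reaches a terminal state.

    `fetch` stands in for GET /api/v1/tasks/{task_id}; swap in a real
    HTTP client in practice. The status names here are assumptions.
    """
    for _ in range(max_attempts):
        task = fetch(task_id)
        if task.get("status") in ("finished", "failed"):
            return task
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} not done after {max_attempts} polls")

# Demo with a fake fetcher that completes on the third poll:
responses = iter([
    {"status": "queued"},
    {"status": "started"},
    {"status": "finished", "result": {"row_count": 2}},
])
done = poll_task(lambda task_id: next(responses), "demo-task", interval=0)
```

The web app's polling loop is conceptually the same: repeat the status read on an interval and stop on a terminal state.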
Example task creation:
```
curl -X POST http://localhost:8000/api/v1/tasks \
  -H "Content-Type: application/json" \
  -d '{
    "task_type": "csv_analysis",
    "payload": {
      "filename": "sales.csv",
      "csv_text": "name,amount\nAsha,10\nRavi,20"
    }
  }'
```

Apply the raw manifests:
```
kubectl apply -f infra/k8s/namespace.yaml
kubectl apply -f infra/k8s/
```

Or deploy with Helm:

```
helm upgrade --install task-system ./charts/task-system -n task-system --create-namespace
```

The Helm chart exposes the web application as a NodePort by default.
On Docker Desktop Kubernetes, you can usually open:
http://localhost:30080
The web container proxies API requests to the in-cluster API service, so you do not need a separate API port-forward just to use the UI.
The project now uses separate GitHub Actions workflows for CI and CD:
- PRs run `.github/workflows/ci.yml`
- pushes to `main` run `.github/workflows/deploy.yml`
Pull requests validate the project by:
- installing Python dependencies with `uv`
- running `ruff`
- running `pytest`
- installing frontend dependencies with `bun`
- building the frontend
- linting the Helm chart
- building Docker images for the API, worker, and web services
Pushes to main perform the full delivery pipeline:
- validate the backend, frontend, and Helm chart
- build Docker images for API, worker, and web
- push those images to GHCR
- connect to the Kubernetes cluster
- deploy the release with `helm upgrade --install`
To make the deployment workflow work, configure:
- `KUBE_CONFIG_DATA`: a base64-encoded kubeconfig for the target cluster
The deployment workflow expects the cluster to be able to pull GHCR images. If your GHCR packages are private, create an image pull secret in the deployment namespace:
```
kubectl create namespace task-system --dry-run=client -o yaml | kubectl apply -f -
kubectl create secret docker-registry ghcr-auth \
  --namespace task-system \
  --docker-server=ghcr.io \
  --docker-username=YOUR_GITHUB_USERNAME \
  --docker-password=YOUR_GITHUB_PAT \
  --docker-email=YOUR_EMAIL
```

Then add a GitHub repository variable:

```
GHCR_PULL_SECRET_NAME=ghcr-auth
```
If your images are public, leave `GHCR_PULL_SECRET_NAME` unset and the deploy workflow will not inject an image pull secret.
- Redis is used for both queueing and task state storage to keep the demo simple.
- Images are passed as data URLs and processed into preview artifacts by the worker.
- CSV analysis returns lightweight structural summaries rather than storing files on disk.
- The current pipeline is local-first and does not auto-deploy to a cloud cluster.
- The structure is ready for future additions such as Postgres, Prometheus metrics, or cloud deployment.