This repo implements the v3 smart parking pipeline:
```text
static camera -> fixed ROIs -> per-spot crop -> YOLOv8*-cls -> temporal smoothing -> JSON -> FastAPI
```
The deployed demo still supports fixed ROIs for a static camera, but the recommended ML workflow is now:
```text
full-frame parking-space detector -> per-slot crop -> occupancy classifier
```
Stage 1 full-frame parking-space detection is the primary generalization track. Stage 2 patch classification remains the occupancy model. The single-model full-frame occupancy detector remains only as an ML comparison baseline.
The final-project default is now:
```text
trained Stage 1 detector -> per-slot crop -> trained Stage 2 classifier -> smoothing -> JSON -> FastAPI
```
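For orientation, here is a minimal sketch of that default flow, assuming the Ultralytics API, illustrative paths and thresholds, and a 5-frame majority-vote smoothing window; the production logic lives in `edge/detect.py` and handles slot identity more carefully:

```python
# Illustrative two-stage sketch, not the repo's edge/detect.py.
# Model paths, conf=0.25, and the 5-frame window are assumptions.
from collections import Counter, deque

import cv2
from ultralytics import YOLO

detector = YOLO("runs/stage1_det/yolov8s_stage1/weights/best.pt")    # Stage 1
classifier = YOLO("runs/stage2_cls/yolov8m_stage2/weights/best.pt")  # Stage 2

history: dict[int, deque] = {}  # per-slot label history for temporal smoothing

def infer(frame):
    spots, confidence = {}, {}
    boxes = detector.predict(frame, conf=0.25, verbose=False)[0].boxes
    # Index order is a simplification; real slot identities are kept stable.
    for i, (x1, y1, x2, y2) in enumerate(boxes.xyxy.int().tolist(), start=1):
        crop = frame[y1:y2, x1:x2]
        probs = classifier.predict(crop, verbose=False)[0].probs
        label = classifier.names[probs.top1]  # e.g. "free" / "occupied"
        # Majority vote over recent frames smooths single-frame flicker.
        hist = history.setdefault(i, deque(maxlen=5))
        hist.append(label)
        spots[f"spot_{i}"] = Counter(hist).most_common(1)[0][0]
        confidence[f"spot_{i}"] = round(float(probs.top1conf), 2)
    return {"spots": spots, "confidence": confidence}

frame = cv2.imread("samples/demo.jpg")
print(infer(frame))
```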
Repository layout:

- `edge/`: runs the two-stage edge pipeline and emits the v3 payload
- `ml/`: prepares Stage 2 datasets, trains YOLOv8 classification models, and evaluates classification metrics
- `backend/`: stores payloads from the edge pipeline and returns latest/history views
- `docs/`: contains the canonical PRD and aligned milestone notes
This repo is intended to stay public-safe:
- code, configs, metrics, and reproducible commands stay in the repo
- trained weights do not get committed into git history
- dataset archives, extracted datasets, runtime databases, and generated logs stay out of git
Academic-use guidance:
- this repo is shared publicly for academic review and demonstration
- no claim is made that all upstream datasets permit public redistribution of derived checkpoints
- before publishing model weights, verify that the exact training data license allows redistribution of trained artifacts
Model publishing guidance is in MODEL_LICENSE.md.
Recommended release pattern:
- publish final weights as GitHub Release assets or an external model registry
- document the exact datasets used to train each released checkpoint
- do not redistribute checkpoints trained on datasets whose licenses do not clearly allow model redistribution
- Stage 2 dataset: `stage2_data/`
- Weather export for CNR evaluation: `datasets/stage2_weather/`
- Cross-dataset exports: `datasets/pklot_test/`, `datasets/cnrpark_test/`
- Stage 2 training handoff: `runs/stage2_cls/.../weights/best.pt`
- Backend payload:
```json
{
  "spots": {
    "spot_1": "free",
    "spot_2": "occupied"
  },
  "confidence": {
    "spot_1": 0.91,
    "spot_2": 0.84
  },
  "timestamp": "2026-04-21T00:00:00Z"
}
```

Set up the environment:

```bash
python3 -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip
pip install -r requirements-dev.txt
```

Full-frame detection must be evaluated by scene holdout. Image-level random splits are not treated as evidence of generalization in this repo.
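To make the holdout concrete, here is a minimal sketch of a scene-level split in which every image from a given scene lands in exactly one split; the filename-to-scene parsing is a placeholder assumption, since the real logic lives in `ml/prepare_dataset.py`:

```python
# Sketch of a scene-holdout split; adapt the scene parsing to the actual
# PKLot export naming, which is assumed here to be "<scene>_<timestamp>.jpg".
import random
from collections import defaultdict
from pathlib import Path

def scene_holdout_split(image_dir: str, val_frac=0.15, test_frac=0.15, seed=0):
    by_scene = defaultdict(list)
    for img in Path(image_dir).glob("*.jpg"):
        scene = img.stem.split("_")[0]  # hypothetical naming convention
        by_scene[scene].append(img)
    scenes = sorted(by_scene)
    random.Random(seed).shuffle(scenes)
    n_val = max(1, int(len(scenes) * val_frac))
    n_test = max(1, int(len(scenes) * test_frac))
    groups = {
        "val": scenes[:n_val],
        "test": scenes[n_val:n_val + n_test],
        "train": scenes[n_val + n_test:],
    }
    # Expand scene groups back to image lists; no scene spans two splits.
    return {name: [p for s in group for p in by_scene[s]]
            for name, group in groups.items()}
```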
Recommended Stage 1 full-frame parking-space detector:
```bash
python ml/prepare_dataset.py --stage1 --pklot-dir /path/to/pklot_roboflow
python ml/train.py --stage1 --variant s --device mps
python ml/train.py --stage1 --variant m --imgsz 960 --device mps
python ml/evaluate.py --stage1 --weights runs/stage1_det/yolov8s_stage1/weights/best.pt --split val
```

Fine-tune an existing Stage 1 parking-space detector on a custom camera while keeping the repo's MPS patch path:
```bash
python ml/train.py --stage1 \
  --data datasets/yolo_parking/dataset.yaml \
  --weights runs/stage1_det/yolov8s_stage1/weights/best.pt \
  --epochs 50 \
  --imgsz 640 \
  --freeze 10 \
  --lr 0.001 \
  --batch 8 \
  --device mps \
  --project runs/stage1_finetune \
  --name my_camera \
  --exist-ok \
  --no-amp
```

Single-model occupancy detector baseline:
```bash
python ml/prepare_dataset.py --single-model \
  --pklot-dir datasets/pklot_v4 \
  --single-model-output single_model_data_boxes \
  --single-model-yaml ml/single_model_boxes.yaml
python ml/train.py --single-model --variant n --device mps
python ml/evaluate.py --single-model --weights runs/single_model_det/yolov8n_single_model/weights/best.pt --split val
```

Stage 2 occupancy classifier with Stage 1-derived crops:
Prepare the classification dataset:
```bash
python ml/prepare_dataset.py --stage2 --pklot-dir /path/to/pklot_roboflow
python ml/prepare_dataset.py --stage2 --pklot-dir /path/to/pklot_roboflow --cnrpark-dir /path/to/cnrpark_ext
```

`--cnrpark-dir` now supports both:

- pre-flattened patch folders with `free/` and `occupied/` subdirectories
- the official cnrpark.it archive layout with `PATCHES/` and `LABELS/`
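A hypothetical sketch of how those two layouts could be told apart; the helper name and logic are illustrative, not the repo's actual code:

```python
# Illustrative layout probe; ml/prepare_dataset.py's real logic may differ.
from pathlib import Path

def detect_cnrpark_layout(root: str) -> str:
    base = Path(root)
    if (base / "free").is_dir() and (base / "occupied").is_dir():
        return "flattened"  # pre-flattened free/ and occupied/ patch folders
    if (base / "PATCHES").is_dir() and (base / "LABELS").is_dir():
        return "official"   # cnrpark.it archive layout
    raise ValueError(f"Unrecognized CNRPark layout under {base}")
```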
When weather labels are available, dataset prep also exports `datasets/stage2_weather/{sunny,cloudy,rainy}/{free,occupied}` for per-weather evaluation.
Train the main classifier comparison set:
```bash
python ml/train.py --stage2 --variant n --device mps
python ml/train.py --stage2 --variant s --device mps
python ml/train.py --stage2 --variant m --device mps
```

Current final artifact choice:

- deployed Stage 1 detector path: `runs/stage1_det/yolov8s_stage1/weights/best.pt`
- strongest Stage 2 classifier from the saved comparison logs: `runs/stage2_cls/yolov8m_stage2/weights/best.pt`
- current exported inference artifacts: `artifacts/models/best.pt`, `artifacts/models/best.onnx`, `artifacts/models/best_int8.onnx`
Generate a final artifact summary after training and evaluation:
```bash
python ml/finalize.py
```

Accuracy notes:

- the saved `runs/stage2_cls/yolov8n_stage2/results.csv` shows an early accuracy collapse after epoch 4, so the repo now uses a lower Stage 2 learning rate, longer patience, cosine LR decay, and classifier dropout by default
- Stage 1 and single-model prep now rebuild train/val/test by scene holdout after deduplicating `.rf.*` Roboflow variants
- PKLot full-frame prep excludes zero-label frames and writes `detection_dataset_report.json` with split leakage checks and annotation-audit summaries
- Roboflow PKLot exports may use per-slot polygons; `ml/prepare_dataset.py` converts those polygons into clipped YOLO detection boxes automatically for Stage 1, single-model detection, and Stage 2 patch cropping (see the sketch below)
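A simplified sketch of that polygon-to-box conversion (axis-aligned bounds, clipped to the image, normalized to YOLO format); `ml/prepare_dataset.py`'s real implementation may differ in details:

```python
# Sketch: polygon -> clipped, normalized YOLO detection row.
def polygon_to_yolo_box(points, img_w, img_h, class_id=0):
    """points: [(x, y), ...] in pixels -> 'cls cx cy w h' normalized string."""
    xs, ys = zip(*points)
    # Axis-aligned bounds, clipped to the image frame.
    x1, y1 = max(0.0, min(xs)), max(0.0, min(ys))
    x2, y2 = min(float(img_w), max(xs)), min(float(img_h), max(ys))
    if x2 <= x1 or y2 <= y1:
        raise ValueError("polygon lies entirely outside the image")
    # YOLO format: normalized center x/y, width, height.
    cx, cy = (x1 + x2) / 2 / img_w, (y1 + y2) / 2 / img_h
    w, h = (x2 - x1) / img_w, (y2 - y1) / img_h
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"
```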
Evaluate Stage 2 classification:
```bash
python ml/evaluate.py --stage2 --weights runs/stage2_cls/yolov8m_stage2/weights/best.pt --split val --device mps --batch 256
python ml/evaluate.py --stage2 --weights runs/stage2_cls/yolov8m_stage2/weights/best.pt --cross-dataset datasets/pklot_test --device mps --batch 256
python ml/evaluate.py --stage2 --weights runs/stage2_cls/yolov8m_stage2/weights/best.pt --cross-dataset datasets/cnrpark_test --device mps --batch 256
python ml/evaluate.py --stage2 --weights runs/stage2_cls/yolov8m_stage2/weights/best.pt --data datasets/stage2_weather --per-weather --device mps --batch 512
python ml/evaluate.py --stage2 --weights runs/stage2_cls/yolov8m_stage2/weights/best.pt --split val --device mps --batch 256 --sweep
python ml/evaluate.py --stage2 --compare \
  runs/stage2_cls/yolov8n_stage2/weights/best.pt \
  runs/stage2_cls/yolov8s_stage2/weights/best.pt \
  runs/stage2_cls/yolov8m_stage2/weights/best.pt \
  --split val --device mps --batch 256
```

The saved threshold sweep currently selects 0.1 as the best offline validation threshold for `yolov8m_stage2`. The deployed edge config still uses 0.3, which matches the saved comparison run and is less aggressive for live demos.
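To make the threshold semantics concrete, here is a tiny illustrative decision rule; the assumption that the threshold applies to the classifier's occupied probability is mine, so check `ml/evaluate.py` for the exact semantics:

```python
# Illustrative decision rule for the sweep above (assumed semantics).
def decide(p_occupied: float, threshold: float = 0.3) -> str:
    return "occupied" if p_occupied >= threshold else "free"

print(decide(0.25, threshold=0.1))  # "occupied" with the offline-optimal 0.1
print(decide(0.25, threshold=0.3))  # "free" with the deployed, safer 0.3
```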
Run one-off prediction:
```bash
python ml/predict.py --weights runs/stage2_cls/yolov8m_stage2/weights/best.pt --source samples/demo.jpg
python ml/predict.py --stage1 --weights runs/stage1_det/yolov8s_stage1/weights/best.pt --source samples/demo.jpg
python ml/predict.py --single-model --weights runs/detect/train/weights/best.pt --source samples/demo.jpg
```

The deployed/default runtime remains two-stage. Fixed ROIs are still available for the static-camera demo, but they are deployment-specific rather than the repo's generalizable ML recommendation.
Start the backend:
```bash
uvicorn backend.main:app --reload
```

Open the live MJPEG stream in a browser:

```html
<img src="http://127.0.0.1:8000/stream" alt="Parking stream" />
```

Run image inference with fixed ROIs:
```bash
python edge/detect.py \
  --image samples/demo.jpg \
  --stage2-model runs/stage2_cls/yolov8m_stage2/weights/best.pt \
  --save-annotated logs/demo-annotated.jpg
```

`detect.py` now posts to the backend by default, so `/status` and `/history` update automatically while the backend is running. Use `--no-post` for offline-only inference.
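For reference, the default POST looks roughly like this; the `/status` route as a POST target is an assumption here, so consult `backend/README.md` for the actual contract:

```python
# Sketch of the payload POST detect.py performs by default.
from datetime import datetime, timezone

import requests

payload = {
    "spots": {"spot_1": "free", "spot_2": "occupied"},
    "confidence": {"spot_1": 0.91, "spot_2": 0.84},
    # Matches the v3 contract's UTC "Z" timestamp format.
    "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
}
# Hypothetical route; see backend/README.md for the real endpoint.
requests.post("http://127.0.0.1:8000/status", json=payload, timeout=5)
```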
For oblique camera views, configure `preprocess.perspective` in `edge/config.example.yaml` and redraw `rois` against the rectified output.
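A minimal sketch of what that rectification amounts to, assuming OpenCV and made-up corner points (in practice the config drives this via `preprocess.perspective`):

```python
# Warp the oblique view onto a top-down canvas, then draw ROIs on the result.
import cv2
import numpy as np

# Corner points are illustrative; take them from the real camera view.
src = np.float32([[120, 80], [1180, 60], [1260, 700], [40, 720]])  # oblique corners
dst = np.float32([[0, 0], [1280, 0], [1280, 720], [0, 720]])       # rectified canvas

frame = cv2.imread("samples/demo.jpg")
M = cv2.getPerspectiveTransform(src, dst)
rectified = cv2.warpPerspective(frame, M, (1280, 720))
cv2.imwrite("logs/rectified.jpg", rectified)  # redraw rois against this output
```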
Run the integrated final pipeline with the trained Stage 1 parking-space detector:
```bash
python edge/detect.py \
  --image samples/demo.jpg \
  --stage1-detector \
  --stage1-model runs/stage1_det/yolov8s_stage1/weights/best.pt \
  --stage2-model runs/stage2_cls/yolov8m_stage2/weights/best.pt \
  --save-annotated logs/final-demo-annotated.jpg
```

Benchmark the Stage 2 classifier on a representative ROI patch:
```bash
python edge/benchmark.py \
  --task classify \
  --image samples/demo.jpg \
  --model runs/stage2_cls/yolov8m_stage2/weights/best.pt \
  --imgsz 64 \
  --roi 50 100 200 250
```

Run the short reproducible stability check that generated the current summary:
```bash
python edge/stability_test.py \
  --image samples/demo.jpg \
  --stage1-detector \
  --stage1-model runs/stage1_det/yolov8s_stage1/weights/best.pt \
  --stage2-model runs/stage2_cls/yolov8m_stage2/weights/best.pt \
  --device mps \
  --duration 15 \
  --frame-interval 500
```

For the longer Week 7 soak test, keep the same command and raise `--duration` to 1800.
Run live camera inference:
```bash
python edge/detect.py --camera 0 --stage2-model runs/stage2_cls/yolov8m_stage2/weights/best.pt
python edge/detect.py --camera iphone --stage2-model runs/stage2_cls/yolov8m_stage2/weights/best.pt
```

`--camera iphone` is macOS-only and targets Continuity Camera / attached iPhone cameras. Camera mode updates `logs/latest_frame.jpg` continuously so `/stream` can render the latest annotated frame without waiting for POST intervals.
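A minimal sketch of how such a `/stream` endpoint can be built on `logs/latest_frame.jpg`, assuming FastAPI's `StreamingResponse`; the repo's actual implementation in `backend/` may differ:

```python
# Illustrative MJPEG endpoint that re-reads the latest annotated frame.
import time
from pathlib import Path

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()
FRAME = Path("logs/latest_frame.jpg")

def mjpeg():
    while True:
        if FRAME.exists():
            jpg = FRAME.read_bytes()
            # Each part of the multipart stream carries one JPEG frame.
            yield b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + jpg + b"\r\n"
        time.sleep(0.1)  # ~10 fps cap; tune to the camera loop

@app.get("/stream")
def stream():
    return StreamingResponse(
        mjpeg(), media_type="multipart/x-mixed-replace; boundary=frame"
    )
```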
- Canonical PRD: `docs/prd.md`
- Docs index: `docs/README.md`
- Edge details: `edge/README.md`
- Backend contract: `backend/README.md`