This backend ingests generic raw sensor data from a sign-language glove over HTTP (WiFi), streams live predictions to a minimal Jinja2 UI, and stores labeled datasets for easy retraining.
Components:
- FastAPI backend with REST + WebSocket
- Unified processing pipeline
- Dataset recorder (CSV)
- Optional ML prediction service
- Minimal Jinja2 tools (Interpretation + Data Collection)
Data flow diagram:
```
Sensors -> ATmega328P -> (WiFi HTTP / USB Serial)
        -> FastAPI Backend -> Processing Pipeline
        -> Dataset CSV / ML Model -> Web Dashboard
```
The backend expects a generic payload:
- `channels`: at least 3 readings (hall sensors, flex sensors, etc.)
- `imu` (optional): MPU6050 accelerometer/gyro (`ax`/`ay`/`az`/`gx`/`gy`/`gz`)
- `timestamp` (optional): milliseconds since epoch
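The schema above is simple enough to check by hand; below is a minimal stdlib sketch of such a check (the function name `validate_packet` is illustrative, not part of the backend):

```python
def validate_packet(packet: dict) -> None:
    """Raise ValueError if `packet` does not match the generic payload schema."""
    channels = packet.get("channels")
    if not isinstance(channels, dict) or len(channels) < 3:
        raise ValueError("'channels' must map at least 3 sensor ids to readings")
    imu = packet.get("imu")
    if imu is not None:
        # the MPU6050 block must carry all six axes when present
        missing = {"ax", "ay", "az", "gx", "gy", "gz"} - imu.keys()
        if missing:
            raise ValueError(f"'imu' is missing fields: {sorted(missing)}")
    ts = packet.get("timestamp")
    if ts is not None and not isinstance(ts, (int, float)):
        raise ValueError("'timestamp' must be numeric (milliseconds since epoch)")
```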
Example accepted JSON:
```
{
  "channels": { "s1": 100, "s2": 200, "s3": 300, "s4": 400, "s5": 500 },
  "imu": { "ax": 0.01, "ay": 0.02, "az": 0.98, "gx": 1.2, "gy": 0.3, "gz": 0.1 },
  "timestamp": 1710000000000
}
```

Install dependencies:

```
pip install -r requirements.txt
```

Run the server (recommended):

```
python -m app
```

Run the server (uvicorn):

```
uvicorn app.main:app --host 0.0.0.0 --port 8000
```

Dashboard:
http://localhost:8000/
WebSocket stream:
ws://localhost:8000/ws/sensor-stream
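A minimal client sketch for consuming the stream, assuming the third-party `websockets` package and JSON text frames (the exact message format is an assumption; `stream_url` and `watch_stream` are illustrative names):

```python
import asyncio
import json

def stream_url(host: str, port: int = 8000) -> str:
    """Build the WebSocket stream URL for a given backend host."""
    return f"ws://{host}:{port}/ws/sensor-stream"

async def watch_stream(host: str = "localhost", limit: int = 10) -> None:
    """Print the next `limit` messages from the live stream.

    Requires: pip install websockets
    """
    import websockets  # imported lazily so the module loads without it
    async with websockets.connect(stream_url(host)) as ws:
        for _ in range(limit):
            print(json.loads(await ws.recv()))

# usage (with the backend running): asyncio.run(watch_stream())
```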
The dashboard only shows data after something sends packets to `POST /api/sensor-data`.
Options:
- Use the SEND DEMO PACKET button on the Home page (enabled by `ENABLE_DEMO=true` in `.env`)
- Run the simulator:

  ```
  python scripts/simulate_glove_sender.py --random --count 20
  ```

If your glove is on WiFi, make sure it posts to your PC's LAN IP (not localhost). Example:
http://<YOUR_PC_IP>:8000/api/sensor-data
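If you want a sender without the simulator script, the same thing can be sketched in a few lines of stdlib Python (`make_packet` and `post_packet` are illustrative names; the channel count mirrors the example payload):

```python
import json
import random
import urllib.request

BACKEND_URL = "http://localhost:8000/api/sensor-data"

def make_packet(n_channels: int = 5) -> dict:
    """Build a random packet in the backend's generic format."""
    return {
        "channels": {f"s{i + 1}": random.randint(0, 1023) for i in range(n_channels)},
    }

def post_packet(packet: dict, url: str = BACKEND_URL) -> int:
    """POST one packet as JSON; returns the HTTP status code."""
    req = urllib.request.Request(
        url,
        data=json.dumps(packet).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

With a glove on WiFi, replace `localhost` in `BACKEND_URL` with your PC's LAN IP.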
Run with Docker:

```
docker compose up --build
```

Send HTTP packets to:
POST /api/sensor-data
Example:
```
curl -X POST http://localhost:8000/api/sensor-data \
  -H "Content-Type: application/json" \
  -d '{"channels":{"s1":100,"s2":200,"s3":300,"s4":400,"s5":500},"timestamp":1710000000000}'
```

Dataset file path:
data/datasets/gesture_dataset.csv
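For offline analysis, the recorded dataset can be loaded with the stdlib `csv` module. A minimal sketch, assuming one labeled sample per row (the exact column names depend on what the recorder wrote):

```python
import csv
from pathlib import Path

DATASET = Path("data/datasets/gesture_dataset.csv")

def load_dataset(path: Path = DATASET) -> list[dict]:
    """Read the recorded dataset; each row is one labeled sample as a dict
    keyed by the CSV header (channel ids plus the label column)."""
    with path.open(newline="") as f:
        return list(csv.DictReader(f))
```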
Data Collection tool:
- Open http://localhost:8000/collect
- Click START to buffer one sample every 2 seconds (from `/api/latest`)
- Click STOP, enter the label, then SAVE (writes to the CSV)
- Use RESET MODEL and RETRAIN to rebuild the model from the saved dataset
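The collection loop above can be scripted against the same endpoint; a sketch assuming `/api/latest` returns one JSON sample per request (`poll_latest` and `label_samples` are illustrative helpers, not part of the backend):

```python
import json
import time
import urllib.request

LATEST_URL = "http://localhost:8000/api/latest"

def poll_latest(n_samples: int, interval_s: float = 2.0, url: str = LATEST_URL) -> list[dict]:
    """Buffer one sample every `interval_s` seconds, like the START button."""
    buffer = []
    for _ in range(n_samples):
        with urllib.request.urlopen(url) as resp:
            buffer.append(json.load(resp))
        time.sleep(interval_s)
    return buffer

def label_samples(samples: list[dict], label: str) -> list[dict]:
    """Attach the entered label to each buffered sample, like SAVE does."""
    return [{**s, "label": label} for s in samples]
```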
Interpretation tool:
- Open http://localhost:8000/interpret
- Shows live channels + the predicted gesture (via WebSocket)
To retrain the model, use the RETRAIN button in the Data Collection tool (recommended).
Run the tests:

```
pytest
```