Race Replay AI is an innovative educational initiative developed for IBM to demonstrate the capabilities of the IBM Granite AI model. The system transforms raw automotive sensor data (OBD-II telemetry) into dynamic, F1-style narration, allowing non-technical audiences to visualise AI interpretation in a familiar and engaging format.
The project's long-term vision is to address fatigue-related driving risks by providing proactive, narrated insights into behaviour patterns such as erratic acceleration or braking.
The system follows a modular, simulation-driven strategy:
- Data Import: Raw OBD-II telemetry is imported and preprocessed into JSON or plain-text formats compatible with IBM Granite.
- AI Interpretation: Preprocessed data is sent to IBM Granite to generate context-sensitive F1-style commentary.
- Visualisation: The TORCS racing simulator renders vehicle responses alongside the AI-generated commentary and telemetry graphs.
- Playback: Users interact with the simulation via a time-series playback engine to observe trends across various parameters.
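As a sketch of the Data Import step above, raw telemetry samples might be cleaned and serialised to JSON before being sent to the model. The field names (`speed_kph`, `rpm`, `throttle`, `brake`) are illustrative assumptions, not the project's actual OBD-II schema:

```python
import json

def preprocess_telemetry(rows):
    """Convert raw OBD-II samples into a compact JSON string for the model.

    Drops samples with a missing speed reading and rounds timestamps,
    so the payload stays small and well-formed.
    """
    cleaned = [
        {
            "t": round(r["timestamp"], 2),
            "speed_kph": r["speed_kph"],
            "rpm": r["rpm"],
            "throttle": r["throttle"],
            "brake": r["brake"],
        }
        for r in rows
        if r.get("speed_kph") is not None  # skip incomplete samples
    ]
    return json.dumps(cleaned)

# Example: two samples, the second with a dropped speed reading
raw = [
    {"timestamp": 0.016, "speed_kph": 182.4, "rpm": 9100, "throttle": 0.95, "brake": 0.0},
    {"timestamp": 0.032, "speed_kph": None, "rpm": 9050, "throttle": 0.90, "brake": 0.0},
]
payload = preprocess_telemetry(raw)
```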
This project seeks to simulate race telemetry and provide dynamic narration based on driving data.
Group Project Details
Overview: This project integrates a TORCS simulation environment with a custom replay engine for AI analysis and video processing.
```
.
├── replay engine/                      # Recording and playback engine module
│   ├── replay_engine.py                # Core playback logic
│   ├── screen_recorder.py              # Screen recording utility
│   ├── video_preprocess.py             # Video preprocessing scripts
│   └── *.mp4 / *.wav                   # Generated multimedia output
├── torcs/                              # TORCS core and API integration
│   ├── api/                            # External communication interfaces
│   ├── app/                            # Application layer logic
│   ├── dataset/                        # Training datasets
│   └── gym_torcs/                      # OpenAI Gym environment wrapper
├── dataset.zip                         # Compressed raw data
├── requirements.txt                    # Project dependencies
├── Timeline Gantt Chart.png            # Project schedule visualization
├── Race_Replay_AI_SE.pdf               # Technical documentation
└── TORCS Engine Setting Handbook.md    # Environment configuration guide
```
System Requirements:
- Operating System: Windows 10/11 (required for `pywin32` support) or Ubuntu 20.04+.
- CPU: Quad-core 2.5 GHz or higher (to handle simultaneous TORCS simulation and AI inference).
- RAM: 8 GB minimum (16 GB recommended for running Ollama/LLM narration).
- Storage: 2 GB of free space for the TORCS engine, datasets, and dependencies.
Step-by-Step Setup:
1. Install TORCS: Follow the TORCS Engine Setting Handbook to install the base simulator.
2. Install & Configure Ollama:
   - Download: Visit ollama.com to download and install the version for your OS (Windows/Linux).
   - Install the Granite 4 Model: Once Ollama is running, open your terminal (Command Prompt or PowerShell) and execute the following command to download and install the model:
     ```
     ollama run granite4
     ```
   - Verify Installation: Confirm the model is ready by running:
     ```
     ollama list
     ```
     Ensure `granite4:latest` (or the specific size you downloaded) appears in the list.
3. Clone the Repository:
   ```
   git clone https://github.com/COMP2281/software-engineering-group25-26-21.git
   cd software-engineering-group25-26-21
   ```
4. Install Dependencies:
   ```
   pip install -r requirements.txt
   ```
5. Verify the AI Connection: Ensure the Ollama service is running in the background so `api.py` can communicate with the model.
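As a sketch of how `api.py` might talk to the local model, the snippet below builds a request body for Ollama's standard non-streaming `/api/generate` endpoint. The prompt wording and telemetry summary are illustrative assumptions, not the project's actual prompt:

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default generate endpoint

def build_request(telemetry_summary, model="granite4"):
    """Build the JSON body for a non-streaming Ollama generate call."""
    return {
        "model": model,
        "prompt": ("You are an F1 commentator. Narrate this telemetry "
                   f"in one sentence: {telemetry_summary}"),
        "stream": False,  # request one complete JSON response instead of a stream
    }

body = build_request("lap 3: heavy braking into turn 1, speed 210 -> 96 kph")

# With the Ollama service running, the request could then be sent with urllib:
#
#   import urllib.request
#   req = urllib.request.Request(
#       OLLAMA_URL,
#       data=json.dumps(body).encode("utf-8"),
#       headers={"Content-Type": "application/json"},
#   )
#   with urllib.request.urlopen(req) as resp:
#       commentary = json.loads(resp.read())["response"]
```

Setting `"stream": False` keeps the client simple; streaming would deliver the commentary token by token instead.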
Local Machine Deployment:
- Database Setup: The system currently uses CSV-based telemetry logging. Ensure the `dataset/` directory has write permissions.
- CSD Environment: If deploying within a CSD environment, you must relax the PowerShell execution policy so the launch scripts are allowed to run:
  ```
  Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
  ```
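A quick pre-flight check for the write-permission requirement could look like this (the `torcs/dataset` path in the comment is an assumption about where the repo is checked out):

```python
import os

def check_writable(path):
    """Return True if the telemetry directory exists and is writable."""
    return os.path.isdir(path) and os.access(path, os.W_OK)

# Example, assuming the repository root as the working directory:
# if not check_writable("torcs/dataset"):
#     raise PermissionError("telemetry logging needs write access to torcs/dataset")
```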
Containerization (If applicable):
- While the simulation itself requires a GUI (TORCS), `api.py` and `replay_engine.py` can be containerized using a standard Python Docker image to handle data processing remotely.
User Access:
- Streamlit UI: Launch the dashboard via `streamlit run .\app.py`. No registration is required for local viewing; the dashboard defaults to the Admin view to allow full telemetry control.
- User Roles:
  - Admin: Can modify `quickrace.xml` and restart simulation parameters.
  - Viewer: Can access the Streamlit URL to monitor real-time race data.
- First-Time Setup: Upon first launch, navigate to `torcs/torcs/config/raceman/quickrace.xml` to set your preferred track and number of opponents.
Common Issues:
- Socket Binding Error: Occurs if `app.py` or `replay_engine.py` is launched without admin privileges, or if the port (8501/5000) is already in use.
- Missing `pywin32`: Ensure you are running on Windows if using the automated voice narration features.
- TORCS Crash: Usually due to incompatible `quickrace.xml` settings. Revert to the default config provided in the handbook.
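When diagnosing the socket binding error, a small probe can confirm whether one of the ports is already taken before launching the apps (a sketch, not part of the project's code):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 on success, i.e. a listener answered
        return s.connect_ex((host, port)) == 0

# Example: check the Streamlit and replay-engine ports mentioned above
# for p in (8501, 5000):
#     print(p, "in use" if port_in_use(p) else "free")
```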
Logs & Diagnostics:
- Runtime logs are printed directly to the terminal/console.
- Telemetry data for debugging can be found in the generated `.csv` files within the `dataset/` folder.
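Those telemetry CSVs can be scanned with the standard library alone when debugging; the column names below (`time`, `speed_kph`, `rpm`) are a hypothetical log shape, since the real headers depend on the logger configuration:

```python
import csv
import io

# Hypothetical excerpt of a generated telemetry log.
LOG = """time,speed_kph,rpm
0.0,0.0,1000
1.0,54.3,4200
2.0,88.7,6100
"""

def peak_speed(csv_text):
    """Return the highest speed recorded in a telemetry CSV."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return max(float(row["speed_kph"]) for row in reader)

print(peak_speed(LOG))  # -> 88.7
```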
Execute the main driving logic script to start the AI driver:
```
python torcs/gym_torcs/torcs_jm_par.py
```
Environment Note: You must obtain administrator permissions within the environment before attempting to run the application, to avoid socket binding or file permission errors.
```
# Run this to grant script permissions, then confirm the policy
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
Get-ExecutionPolicy
```
To visualise the real-time telemetry data and race results, launch the Flask/Streamlit application:
```
# Ensure you are running with administrative privileges if on CSD
cd .\torcs\app\
streamlit run .\app.py
```
Then open your web browser and navigate to http://localhost:8501 (or the port specified in the terminal).
You can modify race settings (number of laps, opponents, track) by editing the configuration file: `torcs/torcs/config/raceman/quickrace.xml`
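Such an edit can also be scripted. The snippet below updates the lap count in a trimmed stand-in for `quickrace.xml`; TORCS parameter files store numeric settings as `<attnum name="..." val="..."/>` entries, though the exact sections in the real file are larger than this sample:

```python
import xml.etree.ElementTree as ET

# Trimmed, illustrative stand-in for torcs/torcs/config/raceman/quickrace.xml.
SAMPLE = """<params name="Quick Race">
  <section name="Quick Race">
    <attnum name="laps" val="3"/>
  </section>
</params>"""

def set_laps(xml_text, laps):
    """Return the XML text with every 'laps' attnum set to the new value."""
    root = ET.fromstring(xml_text)
    for node in root.iter("attnum"):
        if node.get("name") == "laps":
            node.set("val", str(laps))
    return ET.tostring(root, encoding="unicode")

updated = set_laps(SAMPLE, 10)
```

For the real file you would parse with `ET.parse(path)` and write back with `tree.write(path)` instead of working on a string.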
To run the voice-based AI commentary, launch `api.py` in the api folder:
```
python torcs/api/api.py
```
To start the replay server, launch `replay_engine/replay_engine.py`:
```
cd '.\replay engine\'
streamlit run replay_engine.py
```
Please follow the commit regulations when you submit a commit.
Main Contact (Product Owner): John McNamara
Email: [email protected]
