TrafficLightRL uses Reinforcement Learning (RL) to optimize traffic-light control dynamically, reducing urban congestion. By integrating real-world mapping tools with deep RL models, we develop adaptive signal-control strategies that improve traffic flow, cut emissions, and enhance urban mobility.
Our project uses SUMO's OSM Web Wizard to export real-world locations from OpenStreetMap and trains RL agents on the resulting traffic scenarios. By targeting intersections at major university campuses, we demonstrate TrafficLightRL's ability to optimize traffic flow in practical, high-traffic areas. Each contributor focused on specific campus intersections, refining reward functions and agent performance for realistic traffic simulations.
- Kristian Diana - Project Lead (McMaster University)
- Clara Wong - Project Member (University of Waterloo)
- Ryan Li - Project Member (Queen’s University)
- Varun Pathak - Project Member (University of Toronto)
- Tridib Banik - Project Member (Western University)
- Smart Traffic Control: Adaptive traffic light decisions powered by RL agents.
- Real-World Simulations: Authentic intersection models using SUMO and OpenStreetMap.
- Custom Reward Functions: Tailored metrics to balance traffic flow and safety.
- Comprehensive Visualizations: Progress tracking with SUMO-GUI simulations.
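A custom reward function typically combines congestion and safety signals into a single scalar. The sketch below is illustrative only: the metric names and weights are assumptions for this example, not the project's actual reward definition.

```python
# Illustrative custom reward sketch (assumed metrics and weights,
# not the project's actual implementation).

def traffic_reward(metrics, w_wait=1.0, w_queue=0.5, w_brake=2.0):
    """Negative cost: shorter waits, shorter queues, fewer hard brakes score higher."""
    return -(w_wait * metrics["total_waiting_time"]
             + w_queue * metrics["queue_length"]
             + w_brake * metrics["emergency_brakes"])

# A congested step scores worse (more negative) than a free-flowing one.
congested = {"total_waiting_time": 120.0, "queue_length": 18, "emergency_brakes": 2}
free_flow = {"total_waiting_time": 5.0, "queue_length": 1, "emergency_brakes": 0}

print(traffic_reward(congested))  # -133.0
print(traffic_reward(free_flow))  # -5.5
```

Weighting safety terms (like emergency braking) more heavily than queue length is one way to keep the agent from clearing queues at the cost of abrupt phase switches.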
Our RL agent observes the current traffic state from SUMO simulation data, selects signal-timing actions, and receives rewards from predefined reward functions that penalize congestion and delay. Over multiple training episodes, the agent optimizes its policy for real-world intersections.
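The observe-act-reward loop above can be sketched with tabular Q-learning on a toy two-phase intersection. In the real project the state and rewards come from SUMO, so the simplified queue dynamics, state encoding, and hyperparameters here are assumptions purely for illustration:

```python
# Minimal Q-learning sketch for a two-phase intersection (toy dynamics that
# stand in for a SUMO simulation step; all values here are illustrative).
import random

random.seed(0)
ACTIONS = [0, 1]  # 0 = north-south green, 1 = east-west green

def step(queues, action):
    """Green direction discharges up to 3 cars; both directions receive arrivals."""
    ns, ew = queues
    if action == 0:
        ns = max(0, ns - 3)
    else:
        ew = max(0, ew - 3)
    ns = min(10, ns + random.randint(0, 2))
    ew = min(10, ew + random.randint(0, 2))
    return (ns, ew), -(ns + ew)  # reward penalizes total queued vehicles

Q = {}  # state -> [value of action 0, value of action 1]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(200):
    state = (random.randint(0, 10), random.randint(0, 10))
    for t in range(50):
        q = Q.setdefault(state, [0.0, 0.0])
        action = random.choice(ACTIONS) if random.random() < epsilon else q.index(max(q))
        next_state, reward = step(state, action)
        nq = Q.setdefault(next_state, [0.0, 0.0])
        q[action] += alpha * (reward + gamma * max(nq) - q[action])
        state = next_state

# Greedy policy extracted from the learned Q-table.
policy = {s: q.index(max(q)) for s, q in Q.items()}
print(f"learned policy covers {len(policy)} states")
```

The same loop applies at full scale: replace `step` with a SUMO step queried over TraCI, the queue pair with the real observation vector, and the table with a deep Q-network.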
Process Flow Diagram
For an in-depth breakdown of TrafficLightRL, check out:
- 📄 Research Paper - Explains project motivation, RL methodology, and experimental results.
- 📑 Design Document - Project timeline, technical decisions, and key insights.
