Sammonster495/traffic-control-rl
Multi-Agent Deep Q-Network for Adaptive Traffic Signal Control

Adaptive traffic-light control using multi-agent Dueling Double-DQN with SUMO + TensorFlow.

🚦 Overview

Each traffic light runs an independent DQN agent that learns optimal signal phases through interaction with the SUMO simulator. Includes training, evaluation, visualization, and automatic model saving.
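The dueling architecture mentioned in the title splits the Q-function into a state value and per-action advantages. A minimal sketch in TensorFlow (layer sizes and names are illustrative, not the repo's actual `agents/` code):

```python
import numpy as np
import tensorflow as tf

def build_dueling_dqn(state_dim: int, n_actions: int) -> tf.keras.Model:
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    inputs = tf.keras.Input(shape=(state_dim,))
    x = tf.keras.layers.Dense(64, activation="relu")(inputs)
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    value = tf.keras.layers.Dense(1)(x)              # state value V(s)
    advantage = tf.keras.layers.Dense(n_actions)(x)  # advantages A(s, a)
    # Subtracting the mean advantage keeps V and A identifiable.
    q = value + advantage - tf.reduce_mean(advantage, axis=1, keepdims=True)
    return tf.keras.Model(inputs, q)

# One model per traffic light; state_dim/n_actions depend on the intersection.
model = build_dueling_dqn(state_dim=10, n_actions=4)
q_values = model(np.zeros((1, 10), dtype=np.float32))
```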

📁 Project Structure

traffic-dqn/
  main.py           # training entrypoint
  config.py         # hyperparameters
  agents/           # DQN agent, dueling network, replay buffer
  training/         # coordinator + evaluator
  utils/            # logger + plotting
  models/           # saved weights
  logs/             # training/evaluation logs
  plots/            # reward/loss graphs

🔧 Installation (using uv)

1. Install SUMO

sudo add-apt-repository ppa:sumo/stable
sudo apt-get update
sudo apt-get install sumo sumo-tools sumo-doc

Set SUMO_HOME to match your installation (the default SUMO install path is /usr/share/sumo):

echo 'export SUMO_HOME="/usr/share/sumo"' >> ~/.bashrc
source ~/.bashrc

For a performance boost, enable Libsumo (note: sumo-gui cannot be used while this is active):

export LIBSUMO_AS_TRACI=1
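Libsumo exposes a TraCI-compatible API in-process, skipping the TraCI socket round-trips. One hedged way to honor the flag from Python (a sketch; the helper name is illustrative):

```python
import os

def traci_module_name() -> str:
    """Pick the TraCI backend: 'libsumo' (faster, in-process, no GUI) when
    LIBSUMO_AS_TRACI is set, otherwise the standard socket-based 'traci'."""
    return "libsumo" if os.environ.get("LIBSUMO_AS_TRACI") else "traci"

# e.g.: traci = importlib.import_module(traci_module_name())
```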

2. Create environment + install dependencies

uv sync
source .venv/bin/activate

3. ▶️ Train and 🧪 Evaluate

uv run main.py

To watch training in sumo-gui, enable the GUI in main.py:

env = SumoEnvironment(
    ...
    use_gui=True,
)

Outputs:

  • models/ — saved checkpoints
  • logs/ — training & evaluation logs
  • plots/ — reward/loss curves
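The reward/loss curves in plots/ can be produced with a small matplotlib helper like the following (a sketch; the function name and file layout are assumptions, not the repo's actual `utils/` code):

```python
import os
import matplotlib
matplotlib.use("Agg")  # headless backend: no display needed on a training server
import matplotlib.pyplot as plt

def plot_rewards(rewards, path="plots/rewards.png"):
    """Save a per-episode reward curve to the plots/ directory."""
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
    plt.figure()
    plt.plot(rewards)
    plt.xlabel("Episode")
    plt.ylabel("Total reward")
    plt.title("Training reward per episode")
    plt.savefig(path)
    plt.close()
```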

📊 Results

🔗 Key Dependencies

  • SUMO (with TraCI / Libsumo) — traffic simulation
  • TensorFlow — DQN agents and training
  • uv — environment and dependency management
