πŸ›°οΈ ATLAS – Adaptive Traffic Light Allocation System

Overview

ATLAS is an intelligent traffic analysis and signal control system that performs real-time vehicle detection, tracking, and classification using YOLOv8 and ByteTrack. It analyzes traffic videos to count vehicles within defined regions, generating actionable insights for urban mobility.

Powered by the Density-Based Weighted Allocation (DBWA) algorithm (implemented in src/DBWSA.py), ATLAS dynamically adjusts traffic light durations (green, yellow, red) based on real-time vehicle density and vehicle type, ensuring smoother flow, reduced congestion, and smarter signal control.

Key Features

✔ Detects and tracks multiple vehicle types including cars, buses, trucks, motorcycles, and bicycles.
✔ Uses a mask to focus detection on areas of interest for precise analysis.
✔ Confirms vehicle presence over multiple frames to improve classification accuracy (sketched below).
✔ Outputs annotated video with bounding boxes, labels, and live vehicle counts.
✔ Generates detailed JSON reports summarizing vehicle counts and detection data.
✔ Fully configurable via YAML for flexible model, input, and output management.

Ideal for smart traffic monitoring systems and adaptive signal control implementations.
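The multi-frame confirmation step can be pictured as a small counter per tracked vehicle. The snippet below is a minimal, hypothetical sketch of that idea; the repository's actual logic lives in src/run_detector.py and may differ:

from collections import defaultdict

# Hypothetical sketch: count a tracked vehicle as "confirmed" only after it
# has been seen for `CONFIRMATION_FRAMES` consecutive frames (mirrors the
# `confirmation_frame` key in config.yaml).
CONFIRMATION_FRAMES = 15

seen_frames = defaultdict(int)   # track_id -> consecutive frames seen
confirmed = set()                # track_ids counted as real vehicles

def update(track_ids_in_frame):
    for track_id in track_ids_in_frame:
        seen_frames[track_id] += 1
        if seen_frames[track_id] >= CONFIRMATION_FRAMES:
            confirmed.add(track_id)
    # reset counters for tracks that vanished this frame
    for track_id in list(seen_frames):
        if track_id not in track_ids_in_frame:
            seen_frames.pop(track_id)

update({1, 2, 3})  # call once per frame with the IDs ByteTrack reports
print(len(confirmed), "confirmed vehicles")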

Project Structure

project_root/
├── benchmark/
│   └── traffic_signal_benchmark.py         # Benchmark for DBWSA.py
├── core/
│   └── config.py                           # Pydantic-validated configuration
├── masks/
│   └── test-video_mask.jpg                 # Place mask images here
├── models/
│   ├── VehicleDetectionYolov8Model.pt      # Pretrained YOLOv8 model
│   └── VehicleDetectionYolov11LModel.pt    # Pretrained YOLOv11 model
├── output/                                 # Results are stored here
├── schema/
│   ├── detections.py                       # Schema for vehicle detection output
│   └── intersections.py                    # Schema for traffic signal intersection data
├── src/
│   ├── DBWSA.py                            # Traffic signal optimization algorithm
│   ├── generate_mask.py                    # Script to generate mask images
│   ├── run_detector.py                     # CLI script for vehicle detection
│   └── get_predictions.py                  # Script to generate detections (without mask)
├── traffic_data/                           # Intersection data (JSON)
├── traffic-videos/
│   └── test-video.mp4                      # Place input videos here
├── config.yaml                             # Config file
├── requirements.txt                        # Required Python packages
├── README.md
└── preview.gif

Setup Instructions

1. Clone the Repository

Ensure you have Python (preferably 3.8+) and Git installed. Then clone the repository:

git clone https://github.com/saketjha34/ATLAS.git
cd ATLAS

2. Install Dependencies

Create and activate a virtual environment, then install the required dependencies:

python -m venv venv
venv\Scripts\activate          # Windows
source venv/bin/activate       # macOS/Linux
pip install -r requirements.txt

3. Configure config.yaml

This project is configured via a config.yaml file. Below is a breakdown of the key configuration options and how to use them.

config.yaml Structure

detection:
  input_traffic_video_path: "traffic-videos/test-video.mp4"       # Path to the input traffic video
  output_folder_name: "output"                                     # Directory to save output video and data
  mask_image_path: "masks/test-video_mask.jpg"                     # Grayscale mask image to isolate the road area
  confirmation_frame: 15                                           # Number of consistent frames to confirm a vehicle
  confidence_threshold: 0.35                                       # Minimum confidence score for detections
  yolo_model:
    yolo_model_path: "models/VehicleDetectionYolov11LModel.pt"     # Path to the YOLO model weights (YOLOv8 or YOLOv11)

traffic_signal_allocator:
  traffic_intersection_data_path: "traffic_data/intersection_data.json"  # Path to the JSON file containing traffic data
  base_green_time: 5                                                    # Minimum green signal time (in seconds)
  max_green_time: 60                                                    # Maximum green signal time (in seconds)
  yellow_time: 3                                                        # Fixed yellow light duration (in seconds)
  total_cycle_time: 60                                                  # Total cycle time for one signal phase (in seconds)
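Since core/config.py is Pydantic-validated, loading this file presumably looks something like the sketch below. This is a minimal illustration; the model names (DetectionConfig, AllocatorConfig, AppConfig) are assumptions, and the authoritative schema is in core/config.py:

import yaml
from pydantic import BaseModel

# Minimal sketch of Pydantic-validated config loading; model names here
# are assumptions, see core/config.py for the real schema.
class DetectionConfig(BaseModel):
    input_traffic_video_path: str
    output_folder_name: str
    mask_image_path: str
    confirmation_frame: int
    confidence_threshold: float
    yolo_model: dict

class AllocatorConfig(BaseModel):
    traffic_intersection_data_path: str
    base_green_time: int
    max_green_time: int
    yellow_time: int
    total_cycle_time: int

class AppConfig(BaseModel):
    detection: DetectionConfig
    traffic_signal_allocator: AllocatorConfig

with open("config.yaml") as f:
    cfg = AppConfig(**yaml.safe_load(f))  # raises ValidationError on bad values
print(cfg.detection.confidence_threshold)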

Usage

  1. Update paths as per your directory structure for video, model, mask, and traffic data.
  2. Ensure the model and mask exist at specified paths before running.
  3. Run the main script:

python -m src.run_detector

This will:

  • Detect and track vehicles from the video.
  • Output an annotated video and JSON summary.
  • Use the DBWA algorithm to simulate adaptive signal allocation.

4. Place Input Video

Place your traffic video inside the traffic-videos/ folder.

5. Generate a Mask Image

A mask is used to filter the region of interest in the video. To generate a mask automatically, run:

python -m src.generate_mask

This will create a grayscale mask image where white (255) represents the area to analyze, and black (0) is ignored.
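Internally, applying such a mask typically amounts to a bitwise AND between each frame and the mask. Below is a minimal OpenCV sketch of the idea (file paths match the defaults in config.yaml; the mask must have the same resolution as the video frames):

import cv2

# Minimal sketch: white (255) mask pixels survive, black (0) pixels are
# zeroed out, so detection only "sees" the region of interest.
cap = cv2.VideoCapture("traffic-videos/test-video.mp4")
mask = cv2.imread("masks/test-video_mask.jpg", cv2.IMREAD_GRAYSCALE)

ok, frame = cap.read()
if ok:
    roi = cv2.bitwise_and(frame, frame, mask=mask)  # detector runs on `roi`
    cv2.imwrite("masked_frame.jpg", roi)
cap.release()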

6. Run the Vehicle Detector

Execute the script with the following command:

python -m src.run_detector
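Under the hood, YOLO detection plus ByteTrack tracking can be driven in a few lines with the Ultralytics API. The following is a simplified sketch; the repository's run_detector.py adds masking, multi-frame confirmation, annotation, and JSON reporting on top of this:

from ultralytics import YOLO

# Simplified sketch of YOLO + ByteTrack tracking via the Ultralytics API.
model = YOLO("models/VehicleDetectionYolov8Model.pt")
results = model.track(
    source="traffic-videos/test-video.mp4",
    conf=0.35,                 # confidence_threshold from config.yaml
    tracker="bytetrack.yaml",  # Ultralytics' built-in ByteTrack config
    stream=True,               # yield results frame by frame
)
for r in results:
    if r.boxes.id is not None:
        ids = r.boxes.id.int().tolist()
        labels = [r.names[int(c)] for c in r.boxes.cls]
        print(list(zip(ids, labels)))  # (track_id, vehicle_type) per frame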

7. Run the Traffic Signal Optimization Algorithm

  • Place the intersection_data.json file in the traffic_data/ folder.
  • Make sure config.yaml is set up and that intersection_data.json matches the schema defined in schema/intersections.py.

Parameters & Customization

Configure these in config.yaml:

traffic_signal_allocator:
  traffic_intersection_data_path: "traffic_data/intersection_data.json"
  base_green_time: 5
  max_green_time: 60
  yellow_time: 3
  total_cycle_time: 60

The algorithm uses the following default parameters, which can be modified:

| Parameter | Description | Default |
| --- | --- | --- |
| BASE_GREEN_TIME | Minimum green signal duration | 5 s |
| MAX_GREEN_TIME | Maximum green signal duration | 60 s |
| YELLOW_TIME | Fixed yellow signal duration | 3 s |
| TOTAL_CYCLE_TIME | Total signal cycle duration | 60 s |

These defaults can also be modified directly in DBWSA.py. Run the algorithm with:

python -m src.DBWSA
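The core idea is to split the distributable green time in a cycle across roads in proportion to a weighted vehicle density. The sketch below is a simplified illustration of such an allocator, not the repository's exact implementation; the per-vehicle-type weights are assumptions, and the authoritative algorithm is in src/DBWSA.py:

# Simplified sketch of density-based weighted green-time allocation.
# The per-type weights are illustrative assumptions.
TYPE_WEIGHTS = {"Car": 1.0, "Bus": 2.5, "Truck": 2.5, "Motorcycle": 0.5, "Bicycle": 0.5}

def allocate(counts, base_green=5, max_green=60, yellow=3, cycle=60):
    # counts, e.g.: {"ROAD1": {"Car": 12, "Truck": 2}, "ROAD2": {"Car": 3}, ...}
    density = {
        road: sum(TYPE_WEIGHTS.get(v, 1.0) * n for v, n in per_type.items())
        for road, per_type in counts.items()
    }
    total = sum(density.values()) or 1.0
    # green time left to distribute after every road gets its base share
    spare = max(cycle - base_green * len(counts), 0)
    timings = {}
    for road, d in density.items():
        green = min(max_green, base_green + round(spare * d / total))
        timings[road] = {"green": green, "yellow": yellow, "red": cycle - green - yellow}
    return timings

print(allocate({
    "ROAD1": {"Car": 12, "Truck": 2},
    "ROAD2": {"Car": 3},
    "ROAD3": {"Car": 4, "Bus": 1},
    "ROAD4": {"Car": 4},
}))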

8. Output

  • Processed Video: The annotated video is saved in output/test-video/.
  • JSON Report: Vehicle tracking data is stored in output/test-video/test-video_traffic_data.json (see the loading snippet below).
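The JSON report can be consumed programmatically; a minimal example follows. The exact field layout follows schema/detections.py, so inspect the file for the authoritative structure:

import json

# Load the generated traffic report produced by the detector.
with open("output/test-video/test-video_traffic_data.json") as f:
    report = json.load(f)
print(json.dumps(report, indent=2))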

9. Benchmark Results

python -m benchmark.traffic_signal_benchmark

The benchmark compares the performance of two traffic signal control strategies:

  • DBWSA (Dynamic Balanced Weighted Signal Allocation): an adaptive traffic signal control algorithm.
  • Fixed Timing: a traditional fixed-duration traffic signal control baseline.

Overall Benchmark Summary

| Metric | Value |
| --- | --- |
| Average Weighted Green Time (DBWSA) | 16.44 seconds |
| Average Weighted Green Time (Fixed) | 14.00 seconds |
| Overall Improvement (DBWSA vs Fixed) | 17.45% |

The DBWSA approach achieves a notable increase in the average weighted green time compared to the fixed timing method, indicating more efficient signal allocation and improved traffic flow.

Peak Performance Metrics During Simulation

| Metric | Value | Iteration |
| --- | --- | --- |
| Max DBWSA Avg Green Time | 24.66 seconds | 217 |
| Max Fixed Avg Green Time | 14.00 seconds | 1 |
| Max Improvement Percentage | 76.16% | 217 |

At iteration 217, DBWSA reached its peak performance, improving green time by over 76% compared to the fixed signal timing baseline.

Example Output

Validation successful
Processed video saved in output\test-video
Traffic data saved to output\test-video\test-video_traffic_data.json
Total confirmed vehicles detected: 14
Bicycle: 0
Car: 12
Bus: 0
Truck: 2
Motorcycle: 0

Optimized Traffic Signal Timings:
🚦 Traffic Signal Allocation Results:

➡ ROAD1:
   🟢 Green  : 28 seconds
   🟡 Yellow :  3 seconds
   🔴 Red    : 29 seconds
   ⏱️ Total  : 60 seconds

➡ ROAD2:
   🟢 Green  :  9 seconds
   🟡 Yellow :  3 seconds
   🔴 Red    : 48 seconds
   ⏱️ Total  : 60 seconds

➡ ROAD3:
   🟢 Green  : 12 seconds
   🟡 Yellow :  3 seconds
   🔴 Red    : 45 seconds
   ⏱️ Total  : 60 seconds

➡ ROAD4:
   🟢 Green  : 11 seconds
   🟡 Yellow :  3 seconds
   🔴 Red    : 46 seconds
   ⏱️ Total  : 60 seconds

Notes

  • Adjust the detection confidence threshold with the --confidence_threshold flag.
  • Change the confirmation frame count with --confirmation_frame to tweak detection stability (example below).
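For example, to run the detector with a stricter confidence threshold and a longer confirmation window (the flag values here are illustrative):

python -m src.run_detector --confidence_threshold 0.5 --confirmation_frame 20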

Credits

Developed using the Ultralytics and Supervision libraries.
