ATLAS is an intelligent traffic analysis and signal control system that performs real-time vehicle detection, tracking, and classification using YOLOv8 and ByteTrack. It analyzes traffic videos to count vehicles within defined regions, generating actionable insights for urban mobility.
Powered by the Density-Based Weighted Signal Allocation (DBWSA) algorithm, ATLAS dynamically adjusts traffic light durations (green, yellow, red) based on real-time vehicle density and vehicle type, ensuring smoother flow, reduced congestion, and smarter signal control.
✅ Detects and tracks multiple vehicle types including cars, buses, trucks, motorcycles, and bicycles.
✅ Uses a mask to focus detection on areas of interest for precise analysis.
✅ Confirms vehicle presence over multiple frames to improve classification accuracy.
✅ Outputs annotated video with bounding boxes, labels, and live vehicle counts.
✅ Generates detailed JSON reports summarizing vehicle counts and detection data.
✅ Fully configurable via YAML for flexible model, input, and output management.
Ideal for smart traffic monitoring systems and adaptive signal control implementations.
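Under the hood, the pipeline pairs a YOLO detector with ByteTrack for multi-object tracking. The sketch below illustrates that loop using the Ultralytics and Supervision APIs; it is a simplified illustration, not the actual `src/run_detector.py` implementation:

```python
# Minimal detection + tracking loop: YOLO detects per frame,
# ByteTrack links detections into per-vehicle tracks.
import cv2
import supervision as sv
from ultralytics import YOLO

model = YOLO("models/VehicleDetectionYolov8Model.pt")
tracker = sv.ByteTrack()

cap = cv2.VideoCapture("traffic-videos/test-video.mp4")
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]                    # detect vehicles in this frame
    detections = sv.Detections.from_ultralytics(result)
    detections = tracker.update_with_detections(detections)   # assign stable track IDs
    # detections.tracker_id now identifies each vehicle across frames
cap.release()
```

Stable track IDs are what make per-vehicle confirmation and counting possible, rather than counting raw per-frame detections.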
```
project_root/
├── benchmark/
│   └── traffic_signal_benchmark.py       # Benchmark for DBWSA.py
├── core/
│   └── config.py                         # Pydantic-validated configuration
├── masks/
│   └── test-video_mask.jpg               # Place mask images here
├── models/
│   ├── VehicleDetectionYolov8Model.pt    # Pretrained YOLOv8 model
│   └── VehicleDetectionYolov11LModel.pt  # Pretrained YOLOv11 model
├── output/
│   └── ...                               # Results will be stored here
├── schema/
│   ├── detections.py                     # Schema for vehicle detection
│   └── intersections.py                  # Schema for traffic signal intersection data
├── src/
│   ├── DBWSA.py                          # Traffic signal optimization algorithm
│   ├── generate_mask.py                  # Script to generate mask images
│   ├── run_detector.py                   # CLI script for vehicle detection
│   └── get_predictions.py                # Script to generate detections (without mask)
├── traffic_data/
│   └── intersection_data.json            # Intersection data
├── traffic-videos/
│   └── test-video.mp4                    # Place input videos here
├── config.yaml                           # Config file
├── requirements.txt                      # Required Python packages
├── README.md
└── preview.gif
```
Ensure you have Python (preferably 3.8+) and Git installed.

First, clone the repository:

```bash
git clone https://github.com/saketjha34/ATLAS.git
cd ATLAS
```

Then create a virtual environment and install the required dependencies:

```bash
python -m venv venv
venv/Scripts/activate        # Windows; on Linux/macOS use: source venv/bin/activate
pip install -r requirements.txt
```

This project is configured via a `config.yaml` file. Below is a breakdown of the key configuration options and how to use them.
```yaml
detection:
  input_traffic_video_path: "traffic-videos/test-video.mp4"   # Path to the input traffic video
  output_folder_name: "output"                                # Directory to save output video and data
  mask_image_path: "masks/test-video_mask.jpg"                # Grayscale mask image to isolate the road area
  confirmation_frame: 15                                      # Number of consistent frames to confirm a vehicle
  confidence_threshold: 0.35                                  # Minimum confidence score for detections

yolo_model:
  yolo_model_path: "models/VehicleDetectionYolov11LModel.pt"  # Path to the YOLO model file

traffic_signal_allocator:
  traffic_intersection_data_path: "traffic_data/intersection_data.json"  # Path to the JSON file containing traffic data
  base_green_time: 5      # Minimum green signal time (in seconds)
  max_green_time: 60      # Maximum green signal time (in seconds)
  yellow_time: 3          # Fixed yellow light duration (in seconds)
  total_cycle_time: 60    # Total cycle time for one signal phase (in seconds)
```

- Update the paths for the video, model, mask, and traffic data to match your directory structure.
- Ensure the model and mask exist at the specified paths before running.
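To illustrate what `confirmation_frame` and `confidence_threshold` control, here is a minimal sketch of frame-based confirmation; the mechanism shown is an assumption for illustration, not the actual `src/run_detector.py` logic:

```python
# Assumed mechanism: a track counts as a confirmed vehicle only after it
# appears in CONFIRMATION_FRAME frames with sufficient confidence.
from __future__ import annotations
from collections import defaultdict

CONFIRMATION_FRAME = 15
CONFIDENCE_THRESHOLD = 0.35

seen_frames: dict[int, int] = defaultdict(int)  # tracker_id -> frames observed
confirmed: set[int] = set()

def update(tracker_ids: list[int], confidences: list[float]) -> None:
    """Update per-track counters for one video frame."""
    for tid, conf in zip(tracker_ids, confidences):
        if conf < CONFIDENCE_THRESHOLD:
            continue  # ignore weak detections
        seen_frames[tid] += 1
        if seen_frames[tid] >= CONFIRMATION_FRAME:
            confirmed.add(tid)  # each vehicle is counted exactly once
```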
Run the main script with `python -m src.run_detector`. This will:
- Detect and track vehicles from the video.
- Output an annotated video and JSON summary.
- Use the DBWSA algorithm to simulate adaptive signal allocation.
Place your traffic video inside the traffic-videos/ folder.
A mask is used to filter the region of interest in the video. To generate a mask automatically, run:

```bash
python -m src.generate_mask
```

This will create a grayscale mask image where white (255) represents the area to analyze, and black (0) is ignored.
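For reference, applying a grayscale mask typically looks like the following OpenCV sketch (an illustration of the masking idea, not the exact detector code):

```python
# Zero out every pixel outside the white (255) mask region before detection.
import cv2

cap = cv2.VideoCapture("traffic-videos/test-video.mp4")
ok, frame = cap.read()  # grab one frame for illustration
cap.release()

mask = cv2.imread("masks/test-video_mask.jpg", cv2.IMREAD_GRAYSCALE)
mask = cv2.resize(mask, (frame.shape[1], frame.shape[0]))  # match frame size
roi = cv2.bitwise_and(frame, frame, mask=mask)             # detection runs on roi
```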
Execute the script with the following command:

```bash
python -m src.run_detector
```

- Place the `intersection_data.json` file in the `traffic_data/` folder.
- Make sure to set up `config.yaml`, and ensure that `intersection_data.json` matches the schema defined in `schema/intersections.py`.
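For orientation, a hypothetical `intersection_data.json` could look like the snippet below. Every field name here is an assumption for illustration; the authoritative structure is whatever `schema/intersections.py` defines.

```json
{
  "intersection_id": "intersection-1",
  "roads": [
    { "road_name": "ROAD1", "vehicle_count": { "Car": 12, "Truck": 2 } },
    { "road_name": "ROAD2", "vehicle_count": { "Car": 4, "Motorcycle": 3 } }
  ]
}
```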
```yaml
traffic_signal_allocator:
  traffic_intersection_data_path: "traffic_data/intersection_data.json"
  base_green_time: 5
  max_green_time: 60
  yellow_time: 3
  total_cycle_time: 60
```

The algorithm uses the following default parameters, which can be modified:
| Parameter | Description | Default |
|---|---|---|
| `BASE_GREEN_TIME` | Minimum green signal duration | 5s |
| `MAX_GREEN_TIME` | Maximum green signal duration | 60s |
| `YELLOW_TIME` | Fixed yellow signal duration | 3s |
| `TOTAL_CYCLE_TIME` | Total signal cycle duration | 60s |
You can modify these values in `src/DBWSA.py`.
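To make the idea concrete, here is a rough sketch of density-based weighted allocation; the per-type weights, rounding, and clamping below are assumptions for illustration, not the actual logic in `src/DBWSA.py`:

```python
# Sketch: split the cycle's green time across roads in proportion to
# weighted vehicle density, clamped to [BASE_GREEN_TIME, MAX_GREEN_TIME].
from __future__ import annotations

BASE_GREEN_TIME, MAX_GREEN_TIME = 5, 60
YELLOW_TIME, TOTAL_CYCLE_TIME = 3, 60

# Per-type weights are assumptions: heavier vehicles get more influence.
VEHICLE_WEIGHTS = {"Car": 1.0, "Bus": 2.0, "Truck": 2.0, "Motorcycle": 0.5, "Bicycle": 0.5}

def allocate(road_counts: dict[str, dict[str, int]]) -> dict[str, dict[str, int]]:
    # Weighted vehicle density per road.
    density = {
        road: sum(VEHICLE_WEIGHTS.get(v, 1.0) * n for v, n in counts.items())
        for road, counts in road_counts.items()
    }
    total = sum(density.values()) or 1.0
    timings = {}
    for road, d in density.items():
        # Proportional share of the cycle, clamped to the configured bounds.
        green = max(BASE_GREEN_TIME, min(MAX_GREEN_TIME, round(TOTAL_CYCLE_TIME * d / total)))
        timings[road] = {"green": green, "yellow": YELLOW_TIME,
                         "red": TOTAL_CYCLE_TIME - green - YELLOW_TIME}
    return timings

print(allocate({"ROAD1": {"Car": 12, "Truck": 2}, "ROAD2": {"Car": 4}}))
```

In the sample output further below, the four per-road green times sum to the 60-second cycle, which is consistent with a proportional scheme of this kind.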
```bash
python -m src.DBWSA
```

- Processed Video: the annotated video is saved in `output/test-video/`.
- JSON Report: vehicle tracking data is stored in `output/test-video/test-video_traffic_data.json`.
```bash
python -m benchmark.traffic_signal_benchmark
```

The benchmark compares the performance of two traffic signal control strategies:

- DBWSA (Density-Based Weighted Signal Allocation): an adaptive traffic signal control algorithm.
- Fixed Timing: a traditional fixed-duration traffic signal control.
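The headline numbers relate as follows (a sketch of the improvement metric using the rounded averages from the table below):

```python
# Relative improvement of DBWSA's average weighted green time over fixed timing.
dbwsa_avg = 16.44  # seconds, from the results table
fixed_avg = 14.00  # seconds
improvement = (dbwsa_avg - fixed_avg) / fixed_avg * 100
print(f"{improvement:.2f}%")  # ~17.43%; the reported 17.45% presumably uses unrounded averages
```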
| Metric | Value |
|---|---|
| Average Weighted Green Time (DBWSA) | 16.44 seconds |
| Average Weighted Green Time (Fixed) | 14.00 seconds |
| Overall Improvement (DBWSA vs Fixed) | 17.45% |
The DBWSA approach achieves a notable increase in the average weighted green time compared to the fixed timing method, indicating more efficient signal allocation and improved traffic flow.
| Metric | Value | Iteration |
|---|---|---|
| Max DBWSA Avg Green Time | 24.66 seconds | 217 |
| Max Fixed Avg Green Time | 14.00 seconds | 1 |
| Max Improvement Percentage | 76.16% | 217 |
At iteration 217, DBWSA reached its peak performance, improving green time by over 76% compared to the fixed signal timing baseline.
```
Validation successful
Processed video saved in output\test-video
Traffic data saved to output\test-video\test-video_traffic_data.json
Total confirmed vehicles detected: 14
Bicycle: 0
Car: 12
Bus: 0
Truck: 2
Motorcycle: 0

Optimized Traffic Signal Timings:
🚦 Traffic Signal Allocation Results:
➡ ROAD1:
   🟢 Green : 28 seconds
   🟡 Yellow : 3 seconds
   🔴 Red : 29 seconds
   ⏱️ Total : 60 seconds
➡ ROAD2:
   🟢 Green : 9 seconds
   🟡 Yellow : 3 seconds
   🔴 Red : 48 seconds
   ⏱️ Total : 60 seconds
➡ ROAD3:
   🟢 Green : 12 seconds
   🟡 Yellow : 3 seconds
   🔴 Red : 45 seconds
   ⏱️ Total : 60 seconds
➡ ROAD4:
   🟢 Green : 11 seconds
   🟡 Yellow : 3 seconds
   🔴 Red : 46 seconds
   ⏱️ Total : 60 seconds
```
- You can adjust detection parameters like the confidence threshold using `--confidence_threshold`.
- Change the confirmation frame count using `--confirmation_frame` to tweak detection stability (see the example below).
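For example, combining both flags (the values shown are illustrative):

```bash
python -m src.run_detector --confidence_threshold 0.5 --confirmation_frame 20
```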
Developed using the Ultralytics and Supervision libraries.
