
🧠 Self-Driving Car: Behavioral Cloning + Hazard Detection

This project combines two powerful capabilities to simulate autonomous driving:

  1. Behavioral Cloning — A deep learning model that mimics human driving behavior by predicting steering angles from front-camera images.
  2. Hazard Detection — A real-time object detector (based on YOLOv8) that identifies road hazards such as pedestrians, vehicles, and traffic signs.

Together, these systems enable a basic autonomous driving agent to navigate a track while identifying and reacting to hazards in real time.


🧩 Project Structure

.
├── config.py                  # Hyperparameters and constants
├── model.py                   # Steering model (behavioral cloning)
├── hazard_detection.py        # YOLOv8-based hazard detection pipeline
├── drive.py                   # Integrates steering + hazard into simulator driving
├── load_data.py               # Data loader and augmentation functions
├── visualize_data.py          # Dataset visualization
├── visualize_activations.py   # Network activation map visualization
├── pretrained/                # Pretrained weights for both models
└── README.md

🚘 Behavioral Cloning

A Convolutional Neural Network (CNN) is trained to predict the car's steering angle from a center-camera image. Inspired by NVIDIA's End-to-End Learning for Self-Driving Cars paper, it takes preprocessed image frames and outputs a continuous steering value.

🎬 Demo Video

Below is a demo showcasing the behavioral cloning model controlling the simulated car based on real-time steering angle predictions:

(Video: behavioral cloning demo)

Preprocessing

  • Cropping top (sky) and bottom (hood)
  • Normalization to [-1, 1]
  • Conversion to HSV with random brightness
  • Random steering noise and horizontal flipping
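
The cropping and normalization steps above can be sketched in NumPy. The exact crop rows (60 from the top, 25 from the bottom) and the 160×320 simulator frame size are illustrative assumptions, not necessarily what load_data.py uses:

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Crop sky and hood, then normalize pixel values to [-1, 1].

    Assumes a 160x320x3 uint8 frame; the crop rows are illustrative.
    """
    cropped = image[60:-25, :, :]                    # drop sky (top) and hood (bottom)
    return cropped.astype(np.float32) / 127.5 - 1.0  # scale [0, 255] -> [-1, 1]

frame = np.random.randint(0, 256, (160, 320, 3), dtype=np.uint8)
out = preprocess(frame)  # shape (75, 320, 3), values in [-1, 1]
```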

Data Augmentation

  • Use of left/right camera images with adjusted steering angles
  • Brightness shifting (HSV V-channel)
  • Normal noise on steering
  • Biased data sampling to counter zero-steering skew

(Figure: steering-angle distribution, illustrating the zero-steering skew)
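
A minimal sketch of the augmentation steps above. The side-camera steering offset, noise scale, and zero-angle keep probability are illustrative assumptions; the repo's own values live in config.py and load_data.py:

```python
import numpy as np

rng = np.random.default_rng(0)
STEERING_OFFSET = 0.25  # illustrative correction for left/right cameras

def adjust_side_camera(angle: float, camera: str) -> float:
    """Shift the steering label when a side-camera frame is used as input."""
    if camera == "left":
        return angle + STEERING_OFFSET
    if camera == "right":
        return angle - STEERING_OFFSET
    return angle

def augment(image: np.ndarray, angle: float):
    """Random horizontal flip plus small Gaussian noise on the label."""
    if rng.random() < 0.5:
        image = image[:, ::-1, :]   # mirror the frame...
        angle = -angle              # ...and negate the steering angle
    angle += rng.normal(0.0, 0.02)  # illustrative noise scale
    return image, angle

def keep_sample(angle: float, zero_keep_prob: float = 0.3) -> bool:
    """Randomly drop near-zero angles to counter the straight-driving skew."""
    if abs(angle) < 0.05:
        return bool(rng.random() < zero_keep_prob)
    return True
```

Biased sampling via `keep_sample` flattens the distribution shown in the figure below without discarding all straight-road frames.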

Network Architecture

The network consists of:

  • Input normalization (Lambda)
  • 5 Conv2D layers with ELU activations and dropout
  • 3 Dense layers with ELU activations and dropout
  • 1 output layer with linear activation (steering)

(Figure: NVIDIA-style network architecture)
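
The layer list above can be sketched in Keras. The filter counts follow the NVIDIA end-to-end paper; the input shape, dropout rate, and exact dropout placement are illustrative assumptions rather than a copy of model.py:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model(input_shape=(75, 320, 3), drop=0.3):
    """NVIDIA-style steering network: 5 conv + 3 dense layers, ELU, dropout."""
    return keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Lambda(lambda x: x / 127.5 - 1.0),       # in-graph normalization
        layers.Conv2D(24, 5, strides=2, activation="elu"),
        layers.Conv2D(36, 5, strides=2, activation="elu"),
        layers.Conv2D(48, 5, strides=2, activation="elu"),
        layers.Conv2D(64, 3, activation="elu"),
        layers.Conv2D(64, 3, activation="elu"),
        layers.Dropout(drop),
        layers.Flatten(),
        layers.Dense(100, activation="elu"),
        layers.Dropout(drop),
        layers.Dense(50, activation="elu"),
        layers.Dense(10, activation="elu"),
        layers.Dense(1),                                 # linear steering output
    ])
```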


⚠️ Hazard Detection (YOLOv8)

This module detects objects on the road in real time, including:

  • 🚶‍♂️ Pedestrians
  • 🚗 Vehicles
  • 🚦 Traffic lights
  • 🛑 Road signs
  • 🐶 Animals (if present in training set)

The YOLOv8 model is trained (or fine-tuned) on the BDD100K dataset and optimized for real-time inference.

Key Features:

  • Runs in parallel with steering prediction
  • Lightweight (YOLOv8n variant)
  • Fast object tracking + class-wise suppression
  • Works with real-time camera or simulator feed
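
The class-wise suppression step can be sketched as a filter over YOLOv8-style detections. Each detection is assumed to be a `(class_name, confidence, box)` tuple already extracted from the model's results; the class set and thresholds are illustrative, not the actual values in hazard_detection.py:

```python
# Illustrative hazard classes and per-class confidence overrides.
HAZARD_CLASSES = {"person", "car", "truck", "bus", "traffic light", "stop sign"}
CONF_THRESHOLDS = {"person": 0.35, "traffic light": 0.50}
DEFAULT_CONF = 0.45

def filter_hazards(detections):
    """Keep only hazard classes whose confidence clears the per-class bar."""
    kept = []
    for name, conf, box in detections:
        if name not in HAZARD_CLASSES:
            continue  # drop non-hazard classes outright
        if conf >= CONF_THRESHOLDS.get(name, DEFAULT_CONF):
            kept.append((name, conf, box))
    return kept

dets = [("person", 0.40, (10, 10, 50, 120)),   # passes: 0.40 >= 0.35
        ("dog", 0.90, (0, 0, 5, 5)),           # dropped: not a hazard class
        ("car", 0.30, (100, 80, 220, 160))]    # dropped: below 0.45 default
hazards = filter_hazards(dets)
```

Lower thresholds for safety-critical classes like pedestrians trade a few false positives for fewer missed detections.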

📦 Dataset

  • Behavioral Cloning: Udacity simulator data (8036 samples with 3 camera views + steering angle)
  • Hazard Detection: BDD100K object detection dataset (YOLO format)

🏋️ Training Details

Behavioral Cloning:

  • Optimizer: Adam
  • Loss: MSE
  • Dropout: 0.2–0.5
  • Training time: ~2 hours (NVIDIA GPU)

Hazard Detection:

  • YOLOv8n trained for 50 epochs
  • Augmentations: grayscale, resize, crop, flip
  • Custom data preprocessed and validated

🚀 Running the System

To run the integrated system (steering + hazard detection):

python drive.py --steering_model pretrained/behavioral_model.h5 \
                --yolo_model pretrained/yolov8n.pt \
                --source simulator_feed
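
Inside drive.py, the two models plausibly meet in a per-frame control step: the steering net sets the angle and detected hazards gate the throttle. The braking classes, area heuristic, and throttle values below are illustrative assumptions, not the repo's actual logic:

```python
BRAKE_CLASSES = {"person", "stop sign"}  # illustrative braking triggers

def box_area(box):
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def control(steering_angle, hazards, frame_area, cruise_throttle=0.2):
    """Return (steering, throttle); cut throttle when a braking-class
    hazard fills enough of the frame to look close."""
    for name, conf, box in hazards:
        if name in BRAKE_CLASSES and box_area(box) / frame_area > 0.05:
            return steering_angle, 0.0  # coast/brake near the hazard
    return steering_angle, cruise_throttle
```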

📸 Hazard Detection Demo

Below are screenshots demonstrating hazard detection in action using the YOLOv8n model:

🌙 Nighttime Detection

(Screenshot: hazard detection at night)

🌞 Daytime Detection

(Screenshot: hazard detection in daylight)


📈 Evaluation & Results

  • ✅ The car drives smoothly on both training and test tracks
  • ⚠️ Hazards are correctly detected in real time without frame lag
  • 🔁 Pix2Pix-based style transfer can be used to simulate night driving (future scope)

🔭 Future Improvements

  • Predict throttle and braking along with steering
  • Integrate lane detection for safer navigation
  • Improve hazard priority ranking (e.g., pedestrian > traffic light)
  • Combine with generative models to simulate edge cases (e.g., fog, night)

🧠 Acknowledgements

  • Udacity for simulator and initial data
  • Berkeley DeepDrive (BDD100K)
  • Ultralytics for YOLOv8
  • NVIDIA End-to-End Learning for Self-Driving Cars
