Deep Vision Based Driving Behavior Analysis System for Roadside Restricted Area Traffic Control
Date of Award
8-1-2024
Degree Name
M.S. in Computer Engineering
Department
Department of Electrical and Computer Engineering
Advisor/Chair
Vijayan K. Asari
Abstract
Managing the behavior of drivers near roadside restricted areas, such as work zones, accident zones, or natural calamity zones, is necessary for safety. It helps steer vehicles clear of the blocked region, ensuring the safety of both the drivers and the people in that area. Vehicles need to be diverted to a lane away from the restricted area for smooth flow of traffic. A computer vision-based autonomous system could automatically monitor the movements of vehicles and predict their pathways from their direction and speed, so that appropriate lane-change signals can be provided to drivers. This thesis proposes the development of an artificial intelligence-based learning system for detecting and tracking vehicles on the road and predicting their future locations in real-time videos captured by a stationary camera.

Videos captured in outdoor environments are subject to several challenges due to varying lighting conditions and changes in orientation, viewing angle, and object size. Surrounding objects such as trees, buildings, or other vehicles can obscure a vehicle completely or partially, making reliable detection and tracking difficult. Stationary cameras may also capture background regions such as buildings, trees, and parking lots, and vehicles with darker textures can be difficult to detect under non-uniform lighting conditions. In this thesis research, a YOLOv8 neural network model is employed to detect vehicles in the video frames in real time. Training the model requires an extensive annotated dataset of vehicles in roadside environments, so a new annotated dataset named the Dayton Annotated Vehicle Image Set (DAVIS), suited to US road conditions, is built to train the vehicle detection model. An adaptive image enhancement technique, Contrast Limited Adaptive Histogram Equalization (CLAHE), is applied to the moving object regions in the video streams to improve vehicle detection accuracy.

A Kalman filter-based tracking system tracks the detected vehicles on the road based on their speed and direction of movement. The likely future location of a vehicle is determined by predicting the front bottom center point of the vehicle bounding box provided by the YOLO detector, and the system simultaneously checks whether this predicted location is likely to enter the restricted zone. A dynamic danger zone allocation method is introduced to evaluate the performance of the proposed vision-based traffic control system. Experiments conducted on several real-time videos captured by roadside stationary surveillance cameras show that the system provides accurate estimates of the future locations of the vehicles. Based on the predicted trajectories of the moving vehicles, warning signals will be generated to alert the drivers as well as the regulatory authorities and people around the restricted area. Future work includes making the system functional in nighttime environments and accounting for sharp road curvatures.
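The abstract describes a per-frame pipeline of motion-guided CLAHE enhancement, YOLO detection, Kalman filter tracking, future-location prediction, and a restricted-zone check. The Python/OpenCV sketch below is only an illustration of how such a pipeline could be wired together, not the thesis implementation: the MOG2 subtractor settings, CLAHE parameters, constant-velocity motion model, 15-frame prediction horizon, restricted-zone polygon coordinates, and all function names are assumptions made for this example. The detection step itself is omitted; a trained YOLOv8 model (such as one trained on DAVIS) is assumed to supply bounding boxes whose front bottom center points are fed to the tracker.

# Illustrative sketch only (not the thesis code): enhance moving regions with
# CLAHE guided by an MOG2 foreground mask, track a vehicle reference point with
# a constant-velocity Kalman filter, project it forward, and test whether the
# projected point falls inside a restricted-zone polygon.
import cv2
import numpy as np

# Restricted zone as a polygon in image coordinates (hypothetical values).
RESTRICTED_ZONE = np.array([[600, 400], [900, 400],
                            [900, 700], [600, 700]], np.int32).reshape(-1, 1, 2)

mog = cv2.createBackgroundSubtractorMOG2(history=300, detectShadows=True)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))   # illustrative settings

def enhance_moving_regions(frame):
    """Apply CLAHE only where the MOG foreground mask indicates motion."""
    fg = mog.apply(frame)
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)     # drop shadow pixels
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    enhanced = clahe.apply(gray)
    out = gray.copy()
    out[fg > 0] = enhanced[fg > 0]                              # enhance moving regions only
    return cv2.cvtColor(out, cv2.COLOR_GRAY2BGR), fg

def make_point_tracker():
    """Constant-velocity Kalman filter over the state (x, y, vx, vy)."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    kf.errorCovPost = np.eye(4, dtype=np.float32)
    return kf

def update_and_predict(kf, point, horizon=15):
    """Feed the observed reference point (front bottom center of a detected
    vehicle box) and return its projected location `horizon` frames ahead."""
    kf.predict()
    state = kf.correct(np.array([[point[0]], [point[1]]], np.float32))
    x, y, vx, vy = state.flatten()
    return float(x + horizon * vx), float(y + horizon * vy)

def enters_restricted_zone(predicted_point):
    """True if the projected point lies inside (or on) the zone polygon."""
    return cv2.pointPolygonTest(RESTRICTED_ZONE, predicted_point, False) >= 0

In a deployment of this kind, these steps would run inside a video loop: enhanced frames go to the trained detector, each detection's reference point updates its tracker, and a warning is raised whenever enters_restricted_zone() returns True for a tracked vehicle.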
Keywords
YOLO, DAVIS, R-CLAHE, Background Subtraction, MOG, Kalman Filter, Future Location Estimation, Dynamic Restricted Zone Allocation
Rights Statement
Copyright © 2024, author.
Recommended Citation
Mallik, Anurag, "Deep Vision Based Driving Behavior Analysis System for Roadside Restricted Area Traffic Control" (2024). Graduate Theses and Dissertations. 7418.
https://ecommons.udayton.edu/graduate_theses/7418