Towards autonomous depth perception for surveillance in real world environments
Date of Award: 2017
Degree: M.S. in Electrical and Computer Engineering
Department: Department of Electrical and Computer Engineering
Advisor: Vamsy Chodavarapu
The widespread emergence of human-interactive systems has driven the development of portable 3D depth perception cameras. In this thesis, we aim to expand the functionality of surveillance systems by combining autonomous object recognition with depth perception to identify an object and its distance from the camera. Specifically, we present an autonomous object detection method using depth information obtained from the Microsoft Kinect V2 sensor. We use the skeletal joint data provided by the Kinect V2 sensor to determine the hand positions of people in the scene. Various hand gestures can then be classified by training the system on the depth information generated by the sensor. Our algorithm then detects and identifies objects held in the human hand. The proposed system is compact, and the complete video processing can be performed by a low-cost single-board computer.
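The hand-localization step described in the abstract can be sketched as follows. This is a minimal illustrative example, assuming skeletal joints arrive as a name-to-(x, y, z) mapping in camera space (meters); the joint names follow the Kinect V2 convention, but the helper functions and data layout are hypothetical, not the thesis implementation:

```python
import math

def hand_positions(joints):
    """Return the 3D camera-space positions of tracked hand joints.

    `joints` is a hypothetical mapping from Kinect-style joint names
    (e.g. "HandLeft", "HandRight") to (x, y, z) tuples in meters.
    """
    return {name: joints[name]
            for name in ("HandLeft", "HandRight") if name in joints}

def distance_from_camera(position):
    """Euclidean distance (meters) from the camera origin to a joint."""
    x, y, z = position
    return math.sqrt(x * x + y * y + z * z)

if __name__ == "__main__":
    # One illustrative skeleton frame (values are made up).
    skeleton = {
        "Head": (0.0, 0.4, 2.0),
        "HandLeft": (-0.3, 0.0, 1.8),
        "HandRight": (0.3, 0.0, 1.8),
    }
    for name, pos in hand_positions(skeleton).items():
        print(f"{name}: {distance_from_camera(pos):.2f} m")
```

In a real deployment the skeleton frame would come from the Kinect SDK's body-tracking stream, and the recovered hand position would seed a crop of the depth image for gesture and held-object classification.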
Three-dimensional imaging, Optical pattern recognition, Video surveillance, Data processing, Electrical Engineering
Copyright © 2017, author
Behara, Gayatri Mayukha, "Towards autonomous depth perception for surveillance in real world environments" (2017). Graduate Theses and Dissertations. 1312.