Gayatri Mayukha Behara
The widespread emergence of interactive gaming and entertainment systems controlled by body gestures has driven the development of portable 3D depth-perception cameras, and many standalone systems capable of 3D depth perception are now commercially available. Examples include the Kinect motion-sensing input device developed by Microsoft for the Xbox 360 and Xbox One video game consoles, the Creative Labs Senz3D, and the ZED camera from Stereolabs, which combines a 3D camera for depth sensing with motion tracking. In the current work, we aim to expand the functionality of such systems by combining autonomous object recognition with depth perception, providing the ability to identify both an object and its distance from the camera. Such capability would prove invaluable in autonomous surveillance applications, where persons carrying forbidden or dangerous objects are detected in real time and appropriate warnings are signaled. We have selected the Microsoft Kinect V2, which includes built-in hardware algorithms to identify humans in complex real-world settings; the system can simultaneously track up to six people at a time and provide their skeletal joint diagrams. The current work uses these skeletal joint diagrams together with depth maps to create a focus area around each tracked person's hands. The next phase of our algorithm performs object detection on the segmented hand regions, using machine learning techniques trained on a dataset containing the library of objects we aim to detect. Finally, we believe this system could find uses in the autonomous navigation of robots, vehicles, and drones.
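The hand-segmentation step described above, combined with the skeletal joint diagram and the depth map, can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: it assumes the Kinect V2 depth frame has already been read into a NumPy array (at the sensor's 512x424 depth resolution) and that the tracked hand joint has already been projected into depth-image pixel coordinates; the function name `hand_focus_region` and the window size are hypothetical.

```python
import numpy as np

def hand_focus_region(depth_map, hand_xy, half_size=32):
    """Crop a square focus window around a tracked hand joint.

    depth_map : 2-D array of per-pixel depth values (e.g. a Kinect V2 frame)
    hand_xy   : (col, row) pixel coordinates of the hand joint in the depth image
    half_size : half the window side in pixels (illustrative value)
    """
    h, w = depth_map.shape
    x, y = hand_xy
    # Clamp the window to the frame bounds so edge cases stay valid.
    x0, x1 = max(0, x - half_size), min(w, x + half_size)
    y0, y1 = max(0, y - half_size), min(h, y + half_size)
    return depth_map[y0:y1, x0:x1]

# Example with a synthetic frame at the Kinect V2 depth resolution (512x424).
depth = np.zeros((424, 512), dtype=np.uint16)
roi = hand_focus_region(depth, hand_xy=(500, 10))
print(roi.shape)  # window is clipped where it overruns the frame border
```

The cropped region would then be passed to the object-detection stage, so the classifier only sees the area around the hands rather than the full depth frame.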
Graduate Research - Graduate
Primary Advisor's Department
Electrical and Computer Engineering
Stander Symposium project
"Autonomous Surveillance in Real World Environments" (2017). Stander Symposium Projects. 1047.