Presenter(s)

Ruixu Liu

Files

Download Project (1.6 MB)

Description

This research presents a new methodology for 3D scene reconstruction that supports effective robotic sensing and navigation in indoor environments using only a low-cost RGB-D sensor. The resulting 3D scene model can serve many applications, such as virtual reality visualization and robot navigation. Motivated by these applications, our goal is to create a system that takes a sequence of RGB and depth images captured with a hand-held camera as input and produces a globally consistent 3D probabilistic occupancy map as output. The system robustly estimates the camera pose across multiple RGB video frames based on a key-frame selection strategy. To reconstruct the 3D scene in real time, a direct method that minimizes the photometric error between frames is used. The camera pose is tracked against a ray-cast rendering of the accumulated model, i.e., frame-to-model tracking rather than frame-to-frame Iterative Closest Point (ICP) tracking. The point-to-plane ICP algorithm establishes geometric constraints between point clouds as they are aligned. To fill holes, the raw depth maps are refined using a Truncated Signed Distance Function (TSDF) that voxelizes the 3D space and accumulates depth measurements from nearby frames using the camera poses obtained above. Finally, OctoMap, an efficient high-resolution probabilistic 3D mapping framework based on octrees, stores the reconstructed indoor environment. The saved 3D occupancy map can help the robot avoid obstacles and display the robot's location in the 3D virtual scene in real time.
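
As background for the direct method mentioned above, dense RGB-D alignment commonly minimizes a photometric error over a rigid motion. The objective below is a standard generic formulation, not necessarily the exact one used in this project; the reference image I_ref, current image I, projection pi, and pose parameters xi are assumed notation:

```latex
% Generic direct photometric alignment objective (assumed standard form):
% each pixel p_i of the reference image, with depth d_i, is back-projected,
% moved by the rigid motion T(\xi), and reprojected into the current image;
% the pose is found by minimizing the intensity residuals.
E(\xi) = \sum_{i}
  \Bigl( I_{\mathrm{ref}}(\mathbf{p}_i)
       - I\bigl(\pi\bigl(T(\xi)\,\pi^{-1}(\mathbf{p}_i, d_i)\bigr)\bigr) \Bigr)^{2}
```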
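
The point-to-plane ICP objective named in the abstract has a well-known standard form: each source point p_i is paired with a model point q_i carrying surface normal n_i, and the rigid transform T minimizes the residual measured along the normal:

```latex
% Point-to-plane ICP error: distances are measured along the model
% surface normal, which typically converges faster than point-to-point
% ICP on the planar structures common in indoor scenes.
E(T) = \sum_{i} \bigl( (T\mathbf{p}_i - \mathbf{q}_i) \cdot \mathbf{n}_i \bigr)^{2}
```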
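
TSDF fusion as described (voxelize space, then accumulate depth from nearby frames using the tracked poses) is typically implemented as a per-voxel weighted running average. The sketch below is a minimal NumPy version under assumed conventions (world-frame voxel centers, a world-to-camera pose T_cam, pinhole intrinsics K); it is illustrative, not the project's implementation:

```python
import numpy as np

def fuse_depth_into_tsdf(tsdf, weight, voxel_centers, depth, K, T_cam,
                         trunc=0.05):
    """Fuse one depth frame into a flat TSDF volume via a weighted
    running average (hypothetical helper; names and layout are assumed).

    tsdf, weight  : (N,) signed-distance values and fusion weights
    voxel_centers : (N, 3) voxel centers in world coordinates
    depth         : (H, W) depth image in meters
    K             : (3, 3) pinhole intrinsics
    T_cam         : (4, 4) world-to-camera pose from tracking
    trunc         : TSDF truncation distance in meters
    """
    h, w = depth.shape
    # Transform voxel centers into the camera frame.
    pts = (T_cam[:3, :3] @ voxel_centers.T + T_cam[:3, 3:4]).T
    z = pts[:, 2]
    zs = np.maximum(z, 1e-6)  # guard the projective division
    u = np.round(K[0, 0] * pts[:, 0] / zs + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * pts[:, 1] / zs + K[1, 2]).astype(int)
    valid = (z > 1e-6) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.where(valid, depth[np.clip(v, 0, h - 1), np.clip(u, 0, w - 1)], 0.0)
    # Signed distance along the viewing ray, truncated to [-trunc, trunc];
    # voxels far behind the observed surface are skipped.
    sdf = np.clip(d - z, -trunc, trunc)
    fuse = valid & (d > 0) & (sdf > -trunc)
    # Weighted running average accumulates depth from nearby frames,
    # which smooths noise and fills holes in any single raw depth map.
    w_new = weight[fuse] + 1.0
    tsdf[fuse] = (tsdf[fuse] * weight[fuse] + sdf[fuse]) / w_new
    weight[fuse] = w_new
```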
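
OctoMap itself stores occupancy as per-voxel log-odds, updated along each sensor ray and clamped so the map can adapt to changes. The library is C++; the plain-Python sketch below only mirrors the update rule, and the parameter values are common illustrative defaults, not taken from this project:

```python
import math

# Log-odds occupancy update as used by octree occupancy maps such as
# OctoMap (hit/miss probabilities and clamping bounds are illustrative).
L_HIT, L_MISS = math.log(0.7 / 0.3), math.log(0.4 / 0.6)
L_MIN, L_MAX = -2.0, 3.5

def update_voxel(logodds, hit):
    """Update one voxel's occupancy log-odds after a ray observation."""
    logodds += L_HIT if hit else L_MISS
    return max(L_MIN, min(L_MAX, logodds))

def occupancy(logodds):
    """Convert log-odds back to a probability for obstacle queries."""
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

# A voxel repeatedly observed as occupied saturates near p ~ 0.97,
# which is what an obstacle-avoidance query would see.
l = 0.0
for _ in range(10):
    l = update_voxel(l, hit=True)
print(occupancy(l))
```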

Publication Date

4-5-2017

Project Designation

Graduate Research - Graduate

Primary Advisor

Vijayan K. Asari

Primary Advisor's Department

Electrical and Computer Engineering

Keywords

Stander Symposium project

3D Indoor Scene Reconstruction using RGB-D Sensor
