Document Type

Conference Paper

Publication Date

9-2013

Publication Source

IEEE International Conference on Image Processing

Abstract

With the increasing popularity of RGB-depth (RGB-D) sensors such as the Microsoft Kinect, there has been much research on capturing and reconstructing 3D environments using a movable RGB-D sensor. The key process behind these kinds of simultaneous localization and mapping (SLAM) systems is the iterative closest point (ICP) algorithm, which iteratively estimates the rigid movement of the camera from the captured 3D point clouds. While ICP is a well-studied algorithm, it is problematic when used to scan large planar regions such as wall surfaces in a room. The lack of depth variation on planar surfaces makes the global alignment an ill-conditioned problem. In this paper, we present a novel approach for registering 3D point clouds by combining both color and depth information. Instead of directly searching for point correspondences among 3D data, the proposed method first extracts features from the RGB images, and then back-projects the features to the 3D space to identify more reliable correspondences. These color correspondences form the initial input to the ICP procedure, which then proceeds to refine the alignment. Experimental results show that our proposed approach can achieve better accuracy than existing SLAM systems in reconstructing indoor environments with large planar surfaces.
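The back-projection step outlined in the abstract amounts to lifting matched 2D feature locations into 3D camera coordinates using the depth map and a pinhole camera model, and then using those 3D correspondences to seed the rigid alignment that ICP refines. The sketch below illustrates this idea only; the intrinsic parameters (fx, fy, cx, cy), function names, and the use of the standard Kabsch/SVD estimate for the initial transform are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def back_project(keypoints_px, depth_map, fx, fy, cx, cy):
    """Lift 2D image keypoints (u, v) to 3D camera-frame points using the
    depth map and a pinhole model.  Points with no valid depth become NaN rows.
    Intrinsics fx, fy, cx, cy are placeholders for the sensor calibration."""
    pts_3d = np.full((len(keypoints_px), 3), np.nan)
    for i, (u, v) in enumerate(keypoints_px):
        z = depth_map[int(round(v)), int(round(u))]   # depth in metres
        if z > 0:                                     # 0 indicates a missing reading
            pts_3d[i] = ((u - cx) * z / fx,           # X
                         (v - cy) * z / fy,           # Y
                         z)                           # Z
    return pts_3d

def initial_rigid_estimate(src_pts, dst_pts):
    """Least-squares rigid transform (R, t) aligning src_pts to dst_pts via
    the Kabsch/SVD method; this plays the role of the color-feature-based
    initialization that an ICP refinement would start from."""
    src_c, dst_c = src_pts.mean(axis=0), dst_pts.mean(axis=0)
    H = (src_pts - src_c).T @ (dst_pts - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # correct an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```

In use, one would match RGB features (e.g., SIFT or ORB) between two frames, back-project both sets of matched keypoints with their respective depth maps, compute the initial (R, t) from those 3D correspondences, and pass it to an ICP routine as the starting alignment.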

Inclusive pages

275-279

ISBN/ISSN

978-1-4799-2341-0

Document Version

Postprint

Comments

Document available for download is the authors' accepted manuscript, provided in compliance with publisher policy on self-archiving. Permission documentation is on file.

Publisher

IEEE

Place of Publication

Melbourne, Victoria, Australia

Peer Reviewed

yes
