Document Type

Article

Publication Date

March 2015

Publication Source

Journal of Electronic Imaging

Abstract

We present a three-dimensional (3-D) reconstruction system designed to support various autonomous navigation applications. The system focuses on the 3-D reconstruction of a scene using only a single moving camera. Video frames captured at different points in time allow the system to recover scene depth, and in this way it can construct a point-cloud model of its unknown surroundings.
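
The abstract describes a single-camera, multi-frame depth pipeline but includes no code. As a minimal illustrative sketch of the two-view geometry such a system typically relies on (not the paper's implementation), the Python/OpenCV fragment below estimates the relative camera pose between two frames; the intrinsic matrix K and all parameter values are hypothetical stand-ins.

    import cv2
    import numpy as np

    # Hypothetical intrinsics; real values come from camera calibration.
    K = np.array([[700.0, 0.0, 320.0],
                  [0.0, 700.0, 240.0],
                  [0.0, 0.0, 1.0]])

    def relative_pose(frame_a, frame_b, K):
        """Estimate camera motion (R, t) between two grayscale frames."""
        orb = cv2.ORB_create(nfeatures=2000)
        kp_a, des_a = orb.detectAndCompute(frame_a, None)
        kp_b, des_b = orb.detectAndCompute(frame_b, None)

        # Brute-force Hamming matching with cross-checking for reliability.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des_a, des_b)

        pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
        pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])

        # RANSAC on the essential matrix rejects gross mismatches.
        E, mask = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC,
                                       threshold=1.0)
        _, R, t, mask = cv2.recoverPose(E, pts_a, pts_b, K, mask=mask)
        inliers = mask.ravel() > 0
        return R, t, pts_a[inliers], pts_b[inliers]

The translation t recovered this way is defined only up to scale, which is the usual limitation of monocular reconstruction.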

We describe the step-by-step methodology and analysis used in developing the 3-D reconstruction technique.

We present a reconstruction framework that generates a primitive point cloud computed from feature matching and depth triangulation. To densify the reconstruction, we use optical-flow features to create an extremely dense representation model. As a third algorithmic modification, we introduce a preprocessing step of nonlinear single-image super-resolution. With this addition, the depth accuracy of the point cloud, which relies on precise disparity measurement, increases significantly.
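
As an illustrative sketch of the densification idea (again, not the paper's implementation), the fragment below replaces sparse matches with per-pixel Farneback optical flow and triangulates every correspondence. A cubic upsample stands in, crudely, for the paper's nonlinear single-image super-resolution, and (R, t) are assumed to come from a pose estimate such as the one sketched above; all parameter values are hypothetical.

    import cv2
    import numpy as np

    def dense_cloud(frame_a, frame_b, K, R, t, upscale=2):
        """Triangulate a dense point cloud from per-pixel optical flow
        between two grayscale frames."""
        # Crude stand-in for the paper's nonlinear super-resolution:
        # upsample both frames so disparity is measured on a finer grid.
        size = (frame_a.shape[1] * upscale, frame_a.shape[0] * upscale)
        a = cv2.resize(frame_a, size, interpolation=cv2.INTER_CUBIC)
        b = cv2.resize(frame_b, size, interpolation=cv2.INTER_CUBIC)
        K_up = K.copy()
        K_up[:2] *= upscale  # intrinsics scale with the image

        # Dense Farneback flow yields a correspondence at every pixel.
        flow = cv2.calcOpticalFlowFarneback(a, b, None, pyr_scale=0.5,
                                            levels=4, winsize=15,
                                            iterations=3, poly_n=5,
                                            poly_sigma=1.2, flags=0)

        h, w = a.shape
        xs, ys = np.meshgrid(np.arange(w), np.arange(h))
        pts_a = np.float32(np.stack([xs.ravel(), ys.ravel()], axis=1))
        pts_b = pts_a + flow.reshape(-1, 2)

        # Camera A at the origin; camera B displaced by (R, t).
        P_a = K_up @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P_b = K_up @ np.hstack([R, t])

        # Triangulate in homogeneous coordinates, then dehomogenize.
        X = cv2.triangulatePoints(P_a, P_b, pts_a.T, pts_b.T)
        return (X[:3] / X[3]).T  # N x 3 point cloud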

Our final contribution is an additional postprocessing step designed to filter noise points and mismatched features, unveiling the complete dense point-cloud representation (DPR) technique. We measure the success of DPR by evaluating its visual appeal, density, accuracy, and computational expense, and we compare it with two state-of-the-art techniques.
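
The abstract does not specify the filter used, so the fragment below sketches one common choice, statistical outlier removal: points whose mean distance to their k nearest neighbors is unusually large are dropped. The parameters k and std_ratio are hypothetical defaults, not values from the paper.

    import numpy as np
    from scipy.spatial import cKDTree

    def remove_outliers(points, k=8, std_ratio=2.0):
        """Drop points whose mean k-nearest-neighbor distance exceeds the
        global mean by more than std_ratio standard deviations."""
        tree = cKDTree(points)
        # Query k + 1 neighbors because each point's nearest is itself.
        dists, _ = tree.query(points, k=k + 1)
        mean_knn = dists[:, 1:].mean(axis=1)
        keep = mean_knn < mean_knn.mean() + std_ratio * mean_knn.std()
        return points[keep]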

Inclusive pages

023003-1 to 023003-25

ISBN/ISSN

1017-9909

Document Version

Published Version

Comments

This document is provided for download in compliance with the publisher's policy on self-archiving. Permission documentation is on file.

DOI: http://dx.doi.org/10.1117/1.JEI.24.2.023003

Publisher

Society of Photo-Optical Instrumentation Engineers (SPIE)

Volume

24

Peer Reviewed

yes

Issue

2

