Document Type
Article
Publication Date
3-2015
Publication Source
Journal of Electronic Imaging
Abstract
We present a three-dimensional (3-D) reconstruction system designed to support various autonomous navigation applications. The system focuses on 3-D reconstruction of a scene using only a single moving camera. Video frames captured at different points in time allow us to determine scene depth; in this way, the system can construct a point-cloud model of its unknown surroundings.
We present the step-by-step methodology and analysis used in developing the 3-D reconstruction technique.
We present a reconstruction framework that generates a primitive point cloud, computed through feature matching and depth triangulation. To densify the reconstruction, we utilize optical flow features to create an extremely dense representation model. As a third algorithmic modification, we introduce a preprocessing step of nonlinear single-image super-resolution. This addition significantly increases the depth accuracy of the point cloud, which relies on precise disparity measurement.
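The depth-triangulation step described above can be sketched with standard linear (DLT) two-view triangulation. This is a minimal illustration, not the authors' implementation: the projection matrices, point coordinates, and the toy camera motion below are all illustrative assumptions.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.

    P1, P2 : 3x4 camera projection matrices (assumed known, e.g. from tracking).
    x1, x2 : matched pixel coordinates (u, v) of the same feature in each frame.
    Returns the 3-D point in Euclidean coordinates.
    """
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Illustrative example: camera translated one unit along x between frames.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0)
x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0)
x2 = x2[:2] / x2[2]
print(triangulate_point(P1, P2, x1, x2))  # ≈ [0.5, 0.2, 4.0]
```

Running this per matched feature (or per tracked optical-flow point) and stacking the results yields a point cloud of the kind the abstract describes.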
Our final contribution is a postprocessing step designed to filter noise points and mismatched features, unveiling the complete dense point-cloud representation (DPR) technique. We measure the success of DPR by evaluating visual appeal, density, accuracy, and computational expense, and compare it with two state-of-the-art techniques.
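A common way to filter noise points from a reconstructed cloud is statistical outlier removal based on nearest-neighbor distances. The sketch below is a generic illustration of that idea, not the paper's filter; the neighborhood size `k` and the `std_ratio` threshold are assumed parameters.

```python
import numpy as np

def filter_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbors
    exceeds the global mean by `std_ratio` standard deviations.

    points : (N, 3) array of 3-D points.
    Returns the filtered (M, 3) array, M <= N.
    """
    # Brute-force pairwise distances (fine for small clouds;
    # a k-d tree would be used at scale).
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    dist.sort(axis=1)
    # Column 0 is the distance to self; average the k nearest neighbors.
    mean_knn = dist[:, 1:k + 1].mean(axis=1)
    threshold = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= threshold]

# Dense cluster plus one far-away noise point.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 3))
noisy = np.vstack([cloud, [[50.0, 50.0, 50.0]]])
print(len(filter_outliers(noisy)))  # the isolated point is removed
```

Isolated points far from any surface have large neighbor distances and fall above the threshold, while points on densely sampled surfaces are retained.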
Inclusive pages
023003-1 to 023003-25
ISBN/ISSN
1017-9909
Document Version
Published Version
Copyright
Copyright © 2015, Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper is prohibited.
Publisher
Society of Photo-Optical Instrumentation Engineers
Volume
24
Peer Reviewed
yes
Issue
2
eCommons Citation
Diskin, Yakov and Asari, Vijayan K., "Dense Point-Cloud Representation of a Scene using Monocular Vision" (2015). Electrical and Computer Engineering Faculty Publications. 389.
https://ecommons.udayton.edu/ece_fac_pub/389
Included in
Computer and Systems Architecture Commons, Electrical and Electronics Commons, Systems and Communications Commons
Comments
This document is provided for download in compliance with the publisher's policy on self-archiving. Permission documentation is on file.
DOI: http://dx.doi.org/10.1117/1.JEI.24.2.023003