Volumetric change detection using uncalibrated 3D reconstruction models

Date of Award


Degree Name

Ph.D. in Electrical Engineering


Department of Electrical and Computer Engineering


Advisor: Vijayan K. Asari


We present a 3D change detection technique designed to support various wide-area surveillance (WAS) applications under changing environmental conditions. The novelty of the work lies in our approach to creating an illumination-invariant system for detecting changes in a scene. Previous efforts have focused on image enhancement techniques that manipulate the intensity values of the image to create a more controlled, albeit unnatural, illumination. Since most applications require detecting changes in a scene irrespective of the time of day (i.e., the lighting and weather conditions present at the time of frame capture), image enhancement algorithms fail to suppress the illumination differences enough for Background Model (BM) subtraction to be effective. A more effective change detection technique utilizes the 3D scene reconstruction capabilities of structure from motion to create a 3D background model of the environment. By rotating the 3D model and computing its projection, previous work has shown that the background can be effectively eliminated by subtracting the newly captured dataset from the BM projection, leaving only the changes within the scene. Although such techniques have proven to work in some cases, they fail when the illumination changes significantly between the captures of the datasets. Our approach completely eliminates the illumination challenges from the change detection problem. The algorithm is based on our previous work, in which we demonstrated the capability to reconstruct a surrounding environment at near real-time speeds. The algorithm, namely Dense Point-Cloud Representation (DPR), produces a 3D reconstruction of a scene using only a single moving camera. Utilizing video frames captured at different points in time allows us to determine the relative depths in a scene. The reconstruction process, which results in a point cloud, is based on SURF feature matching and depth-triangulation analysis.
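The depth-triangulation step described above can be illustrated with a minimal linear (DLT) triangulation sketch. This is not the dissertation's implementation: the SURF feature matching is omitted, the camera projection matrices `P1` and `P2` are hypothetical, and the function name `triangulate_point` is our own.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched feature pair.

    P1, P2 : 3x4 camera projection matrices for the two frames (assumed known).
    x1, x2 : 2D coordinates of the same scene point in each frame.
    Returns the 3D point in Euclidean coordinates.
    """
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Example: two views of a known point, cameras separated along the x-axis
# (hypothetical calibration for illustration only).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
h1 = P1 @ np.append(X_true, 1.0)   # project into view 1
x1 = h1[:2] / h1[2]
h2 = P2 @ np.append(X_true, 1.0)   # project into view 2
x2 = h2[:2] / h2[2]
X_est = triangulate_point(P1, P2, x1, x2)
```

In a real pipeline the 2D correspondences would come from the matched features (e.g., SURF keypoints across video frames), and the camera matrices from the structure-from-motion estimate.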
We utilize optical flow features and a single-image super-resolution technique to create an extremely dense model. The accuracy of DPR is independent of any environmental changes present between the datasets, since DPR only operates on images within one dataset to create the 3D model for that dataset. Our change detection technique utilizes a unique scheme to register the two 3D models. The technique uses an opportunistic approach to select the optimal feature extraction and matching scheme for computing the fundamental matrix needed to transform the 3D point-cloud model from one dataset into alignment with the 3D model produced by another. Next, in order to eliminate any effects of the illumination change, we convert each point-cloud model into a 3D binary voxel grid. A 'one' is assigned to voxels containing points from the model, while a 'zero' is assigned to voxels with no points. In our final step, we detect the changes between the two environments by geometrically subtracting the registered 3D binary voxel models. This process is computationally efficient due to the logic-based operations available when handling binary models. We measure the success of our technique by evaluating the detection output, false alarm rate, and computational expense in comparison with state-of-the-art change detection techniques.
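The voxelization and logic-based subtraction steps can be sketched as follows. This is an illustrative toy example, not the dissertation's code: the point clouds are synthetic, the clouds are assumed already registered, and the grid origin, voxel size, and function name `voxelize` are our own choices.

```python
import numpy as np

def voxelize(points, origin, voxel_size, grid_shape):
    """Convert an Nx3 point cloud into a binary occupancy grid.

    A voxel is True ('one') if it contains at least one point,
    False ('zero') otherwise.
    """
    grid = np.zeros(grid_shape, dtype=bool)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    # Keep only indices that fall inside the grid bounds.
    in_bounds = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    idx = idx[in_bounds]
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

# Two small synthetic "scenes": the second contains one extra cluster of
# points, standing in for a change that appeared between captures.
rng = np.random.default_rng(0)
background = rng.uniform(0.0, 1.0, size=(500, 3))
new_object = rng.uniform(0.0, 0.1, size=(50, 3)) + np.array([2.0, 2.0, 2.0])
scene_a = background
scene_b = np.vstack([background, new_object])

shape = (32, 32, 32)
grid_a = voxelize(scene_a, origin=np.zeros(3), voxel_size=0.1, grid_shape=shape)
grid_b = voxelize(scene_b, origin=np.zeros(3), voxel_size=0.1, grid_shape=shape)

# Geometric subtraction as a pure logical operation:
# voxels occupied in scene B but not in scene A.
changes = grid_b & ~grid_a
```

Because both models are binary arrays, the subtraction reduces to elementwise AND/NOT operations, which is the source of the computational efficiency noted above; illumination plays no role once the geometry has been binarized.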


Surveillance detection--Mathematical models; Three-dimensional imaging--Data processing; Remote sensing; Detectors--Design and construction; Electrical Engineering; volumetric change detection; 3D reconstruction; aerial surveillance; point cloud registration; illumination invariant; noise suppression; Dense Point-cloud Representation

Rights Statement

Copyright © 2015, author