Robust feature-based reconstruction technique to remove rain from video

Date of Award

2013

Degree Name

Ph.D. in Electrical Engineering

Department

Department of Electrical and Computer Engineering

Advisor/Chair

Advisor: Vijayan K. Asari

Abstract

In the context of extracting information from video, especially surveillance video, bad weather conditions pose a significant challenge. They degrade feature extraction processes and hence the performance of subsequent post-processing algorithms. In general, bad weather conditions can be classified as static or dynamic. Static weather conditions such as haze, fog, and smoke cause blurring of features and saturation of intensities in the image; the temporal derivatives of the scene intensities are very low. Dynamic weather conditions such as rain and snow have effects that vary from frame to frame, so the temporal derivative of the scene intensity at any pixel will not be zero in the presence of rain. In essence, the actual scene content is not occluded by rain or snow at all instances in the video sequence.

In this research, a new framework is presented to achieve robust reconstruction of videos affected by rain. The main challenge is to model the location of rain streaks in a frame, since the location of rain streaks at any particular instant is completely random. However, the changes in scene intensity caused by rain streaks exhibit a generalized behavior, and the instances in which the actual scene is not occluded are sufficient to enable an efficient technique for robust reconstruction of the scene.

The first part of the proposed framework is a novel technique to detect rain streaks based on phase congruency features. These features capture the structural edges that are conspicuous to the human visual system, and their variation from frame to frame is used to estimate the candidate rain pixels in a frame. To reduce the number of false candidates due to global motion, frames are registered using phase correlation; local motion components are ignored in this part of the framework.

The second part of the proposed framework is a novel reconstruction technique that utilizes information from three sources: the intensity of the rain-affected pixel, its spatial neighbors, and its temporal neighbors. An optimal estimate of the actual intensity of the rain-affected pixel is made by minimizing the registration error between frames, with an optical flow technique based on local phase information adopted for registration. This part of the framework is modeled so that the presence of local motion does not distort the features in the reconstructed video.

The proposed framework is evaluated quantitatively and qualitatively on a variety of videos of varying complexity. Its effectiveness is quantitatively verified by computing a no-reference image quality measure on individual frames of the reconstructed video. Across a variety of experiments on output videos, the proposed technique is shown to perform better than state-of-the-art techniques. Its performance is also evaluated for removing snow from videos, and it is observed that the method is capable of removing light snow streaks. As part of ongoing research, attempts are being made toward making the algorithm run in real time.
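As a rough illustration of the detection stage, the Python sketch below computes a simplified multi-scale phase congruency measure from monogenic (Riesz) responses and flags pixels whose edge features appear abruptly between consecutive, globally registered frames (see the phase correlation sketch that follows). It omits the noise compensation of full phase congruency; the wavelengths and threshold are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

def monogenic(frame, wavelength):
    """Single-scale monogenic (Riesz) responses via a log-Gabor bandpass filter."""
    rows, cols = frame.shape
    U, V = np.meshgrid(np.fft.fftfreq(cols), np.fft.fftfreq(rows))
    radius = np.hypot(U, V)
    radius[0, 0] = 1.0                          # avoid divide-by-zero at DC
    log_gabor = np.exp(-np.log(radius * wavelength) ** 2
                       / (2 * np.log(0.55) ** 2))
    log_gabor[0, 0] = 0.0                       # zero DC response
    H1, H2 = 1j * U / radius, 1j * V / radius   # Riesz transform kernels
    F = np.fft.fft2(frame.astype(np.float64))
    even = np.fft.ifft2(F * log_gabor).real
    odd1 = np.fft.ifft2(F * log_gabor * H1).real
    odd2 = np.fft.ifft2(F * log_gabor * H2).real
    return even, odd1, odd2, np.sqrt(even**2 + odd1**2 + odd2**2)

def phase_congruency(frame, wavelengths=(4, 8, 16)):
    """Simplified phase congruency: |vector sum of responses| / sum of amplitudes."""
    se = so1 = so2 = sa = 0.0
    for w in wavelengths:
        even, o1, o2, amp = monogenic(frame, w)
        se, so1, so2, sa = se + even, so1 + o1, so2 + o2, sa + amp
    return np.sqrt(se**2 + so1**2 + so2**2) / (sa + 1e-6)

def rain_candidates(prev_frame, cur_frame, thresh=0.15):
    """Flag pixels whose edge features appear only in the current frame and
    whose intensity increased (rain streaks brighten the pixels they cover)."""
    rise = phase_congruency(cur_frame) - phase_congruency(prev_frame)
    return (rise > thresh) & (cur_frame > prev_frame)
```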
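Global motion must be compensated before the frames are differenced. A minimal sketch of phase correlation for estimating a whole-frame translation (integer-pixel only, no subpixel refinement) might look like:

```python
import numpy as np

def phase_correlation_shift(ref, cur):
    """Estimate the global (dy, dx) translation that aligns cur to ref."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(cur))
    cross /= np.abs(cross) + 1e-12            # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:                # wrap to signed offsets
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```

With the frames registered (e.g., `np.roll(cur, (dy, dx), axis=(0, 1))` for a pure translation), temporal differences of the phase congruency map are no longer dominated by camera motion.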
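For the reconstruction stage, the hedged sketch below replaces detected rain pixels using flow-registered temporal neighbors, keeping the neighbor whose value best agrees with a local spatial estimate and falling back to that estimate where no neighbor agrees. OpenCV's Farneback optical flow is used here as a stand-in for the dissertation's phase-based flow, the closest-to-spatial-median rule is only a proxy for the registration-error minimization described above, and `tol` is an illustrative parameter.

```python
import numpy as np
import cv2

def reconstruct_frame(frames, t, rain_mask, tol=12.0):
    """Replace rain-marked pixels in frames[t] (8-bit grayscale) using
    flow-registered temporal neighbors, with a spatial-median fallback.
    Assumes frame t has at least one temporal neighbor."""
    cur = frames[t]
    h, w = cur.shape
    gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    warped = []
    for s in (t - 1, t + 1):                  # temporal neighbors
        if 0 <= s < len(frames):
            # Farneback flow: stand-in for the phase-based optical flow
            flow = cv2.calcOpticalFlowFarneback(cur, frames[s], None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            warped.append(cv2.remap(frames[s], gx + flow[..., 0],
                                    gy + flow[..., 1], cv2.INTER_LINEAR))
    spatial = cv2.medianBlur(cur, 5).astype(np.float32)   # spatial estimate
    stack = np.stack(warped).astype(np.float32)
    errs = np.abs(stack - spatial)            # disagreement with spatial estimate
    best = np.take_along_axis(stack, np.argmin(errs, axis=0)[None], axis=0)[0]
    disagree = np.min(errs, axis=0) > tol     # no neighbor matches locally
    best[disagree] = spatial[disagree]
    out = cur.copy()
    out[rain_mask] = np.clip(best[rain_mask], 0, 255).astype(np.uint8)
    return out
```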

Keywords

Image reconstruction--Mathematical models; Precipitation (Meteorology)--Photography--Data processing; Video surveillance; Digital video--Editing--Data processing; Optics, Adaptive; Image processing--Digital techniques--Mathematical models; Computer Engineering; Electrical Engineering; rain removal; snow removal; phase congruency; monogenic signal; optical flow

Rights Statement

Copyright © 2013, author
