Proceedings of SPIE (SPIE Defense + Commercial Sensing, 2020)
In long-range imaging regimes, atmospheric turbulence degrades image quality. In addition to blurring, the turbulence causes geometric distortion effects that introduce apparent motion in acquired video. This is problematic for image processing tasks, including image enhancement and restoration (e.g., super-resolution) and aided target recognition (e.g., vehicle trackers). To mitigate these turbulence-induced warping effects, it is necessary to distinguish between actual in-scene motion and apparent motion caused by atmospheric turbulence. Previously, the current authors generated synthetic video by injecting moving objects into a static scene and then applying a well-validated anisoplanatic atmospheric optical turbulence simulator. With known per-pixel truth for all moving objects, a per-pixel Gaussian mixture model (GMM) was developed as a baseline technique. In this paper, the baseline technique is modified to improve performance while decreasing computational complexity. Additionally, the technique is extended to operate on patches so that spatial correlations are captured, which yields a further performance improvement.
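The per-pixel GMM baseline described above follows the familiar background-subtraction paradigm: each pixel maintains a small mixture of Gaussians, incoming intensities are matched against the modes, and pixels whose values are not explained by the dominant (background) modes are flagged as scene motion. The sketch below is a minimal, illustrative Stauffer-Grimson-style implementation, not the authors' actual algorithm; all class, parameter, and threshold names (`PixelGMM`, `alpha`, `match_sigma`, `bg_thresh`, etc.) are hypothetical choices for this example.

```python
import numpy as np

class PixelGMM:
    """Minimal per-pixel Gaussian mixture background model (illustrative
    sketch only; parameters and update rules are generic, not the paper's)."""

    def __init__(self, shape, k=3, alpha=0.01, var_init=15.0**2,
                 match_sigma=2.5, bg_thresh=0.7):
        h, w = shape
        self.k = k
        self.alpha = alpha                    # learning rate
        self.var_init = var_init              # variance for new modes
        self.match_sigma = match_sigma        # match gate in std deviations
        self.bg_thresh = bg_thresh            # cumulative-weight background cutoff
        self.weights = np.full((h, w, k), 1.0 / k)
        self.means = np.zeros((h, w, k))
        self.vars = np.full((h, w, k), var_init)

    def apply(self, frame):
        """Update the model with one grayscale frame; return a boolean
        foreground (scene-motion) mask."""
        f = frame[..., None].astype(np.float64)
        d2 = (f - self.means) ** 2
        matched = d2 < (self.match_sigma ** 2) * self.vars
        # Keep only the best-matching mode (highest weight among matches).
        best = np.argmax(matched * self.weights, axis=-1)
        idx = np.eye(self.k, dtype=bool)[best] & matched
        any_match = idx.any(axis=-1)

        # Update the matched mode; decay the weights of all others.
        rho = self.alpha
        self.weights = (1 - self.alpha) * self.weights + self.alpha * idx
        self.means = np.where(idx, (1 - rho) * self.means + rho * f, self.means)
        self.vars = np.where(idx, (1 - rho) * self.vars + rho * d2, self.vars)

        # Unmatched pixels: replace the weakest mode with a new wide Gaussian
        # centered on the observed intensity.
        weakest = np.argmin(self.weights, axis=-1)
        widx = np.eye(self.k, dtype=bool)[weakest] & ~any_match[..., None]
        self.means = np.where(widx, f, self.means)
        self.vars = np.where(widx, self.var_init, self.vars)
        self.weights /= self.weights.sum(axis=-1, keepdims=True)

        # Background = highest-weight modes whose cumulative weight reaches
        # bg_thresh; foreground = no match, or match to a non-background mode.
        order = np.argsort(-self.weights, axis=-1)
        cum = np.cumsum(np.take_along_axis(self.weights, order, axis=-1), axis=-1)
        n_bg = (cum < self.bg_thresh).sum(axis=-1) + 1
        rank = np.argsort(order, axis=-1)     # rank of each mode by weight
        is_bg = (idx & (rank < n_bg[..., None])).any(axis=-1)
        return ~is_bg
```

A usage pattern would be to train on turbulence-only frames and then apply the model to frames containing true scene motion; the paper's patch-based extension replaces the scalar per-pixel observation with a vector of intensities drawn from a local patch, so that spatially correlated turbulent warping is modeled jointly rather than independently at each pixel.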
Keywords: Anisoplanatic imaging, Atmospheric turbulence simulator, Long range imaging, Motion segmentation, Patch-based Gaussian mixture model, Turbulence mitigation, University of Dayton Electro-optics and Photonics
Richard L. Van Hook, Russell C. Hardie, "Patch-based Gaussian mixture model for scene motion detection in the presence of atmospheric optical turbulence," Proc. SPIE 11394, Automatic Target Recognition XXX, 1139414 (24 April 2020); https://doi.org/10.1117/12.2558318