Document Type
Article
Publication Date
1-1-2021
Publication Source
Computer Vision and Image Understanding
Abstract
We present a novel motion-based multiframe image super-resolution (SR) algorithm using a convolutional neural network (CNN) that fuses multiple interpolated input frames to produce an SR output. We refer to the proposed CNN and associated preprocessing as the Fusion of Interpolated Frames Network (FIFNET). We believe this is the first such CNN approach in the literature to perform motion-based multiframe SR by fusing multiple input frames in a single network. We study the FIFNET using translational interframe motion with both fixed and random frame shifts. The input to the network is a sequence of interpolated and aligned frames. One key innovation is that we compute subpixel interframe registration information for each interpolated pixel and feed this into the network as additional input channels. We demonstrate that this subpixel registration information is critical to network performance. We also employ a realistic camera-specific optical transfer function model that accounts for diffraction and detector integration when generating training data. We present a number of experimental results to demonstrate the efficacy of the proposed FIFNET using both simulated and real camera data. The real data come directly from a camera and are not artificially downsampled or degraded. In the quantitative results with simulated data, we show that the FIFNET performs favorably in comparison to the benchmark methods tested.
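The abstract's key input-construction idea — interpolating each low-resolution frame onto the super-resolution grid and appending per-pixel subpixel registration information as extra input channels — can be sketched as follows. This is an illustrative approximation only, not the authors' implementation: the function name `build_fifnet_input`, the use of nearest-neighbor replication as the interpolation stand-in, and the constant-per-frame fractional-shift channels are all assumptions made for clarity.

```python
import numpy as np

def build_fifnet_input(frames, shifts, scale=2):
    """Sketch of the multiframe input stack (hypothetical helper).

    frames: list of (H, W) low-resolution arrays.
    shifts: list of (dy, dx) translational offsets in low-res pixels.
    Returns an (H*scale, W*scale, 3*len(frames)) array: for each frame,
    one interpolated-intensity channel plus two subpixel-shift channels.
    """
    channels = []
    for frame, (dy, dx) in zip(frames, shifts):
        # Nearest-neighbor replication stands in for the interpolation step.
        up = np.kron(frame, np.ones((scale, scale)))
        channels.append(up)
        # Subpixel registration channels: here simply the fractional part
        # of the frame's shift, broadcast to every pixel (an assumption;
        # the paper computes registration information per interpolated pixel).
        channels.append(np.full_like(up, dy % 1.0))
        channels.append(np.full_like(up, dx % 1.0))
    return np.stack(channels, axis=-1)
```

A stack built this way would then be fed to the fusion CNN as a single multichannel input, which is the sense in which the network fuses all frames "in a single network."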
ISBN/ISSN
1077-3142
Document Version
Postprint
Publisher
Elsevier
Volume
202
Keywords
Convolutional neural network, Fusion of interpolated frames, Image restoration, Multiframe super-resolution, Subpixel registration, University of Dayton Electro-optics and Photonics
eCommons Citation
Elwarfalli, Hamed and Hardie, Russell C., "FIFNET: A convolutional neural network for motion-based multiframe super-resolution using fusion of interpolated frames" (2021). Electrical and Computer Engineering Faculty Publications. 423.
https://ecommons.udayton.edu/ece_fac_pub/423
Comments
The document available for download is the authors' accepted manuscript, provided in compliance with the publisher's policy on self-archiving. Permission documentation is on file.
To view the version of record, use the DOI: https://doi.org/10.1016/j.cviu.2020.103097