No-Reference Video Quality Assessment Based on Artifact Measurement and Statistical Analysis
Document Type
Article
Publication Date
4-2015
Publication Source
IEEE Transactions on Circuits and Systems for Video Technology
Abstract
A discrete cosine transform (DCT)-based no-reference video quality prediction model is proposed that measures artifacts and analyzes the statistics of compressed natural videos. The model has two stages:
- distortion measurement
- nonlinear mapping
In the first stage, an unsigned AC band, three frequency bands, and two orientation bands are generated from the DCT coefficients of each decoded frame in a video sequence. Six efficient frame-level features are then extracted to quantify the distortion of natural scenes. In the second stage, each frame-level feature is transformed into a corresponding video-level feature via temporal pooling over all frames; a trained multilayer neural network then takes the video-level features as inputs and outputs a score as the predicted quality of the video sequence. The proposed method was tested on videos with various compression types, contents, and resolutions in four databases. We compared our model with a linear model, a support-vector-regression-based model, a state-of-the-art training-based model, and four popular full-reference metrics. Detailed experimental results demonstrate that the predictions of the proposed method are highly correlated with subjective assessments.
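The two-stage pipeline described in the abstract can be sketched as below. This is a minimal illustration only: the block size, the band partitions, and the four stand-in band statistics are assumptions for demonstration, not the paper's actual six features, and a simple callable stands in for the trained multilayer neural network.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (n x n).
    k = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0, :] = 1.0 / np.sqrt(n)
    return m

def frame_features(frame, block=8):
    # Stage 1 (sketch): block DCT of one frame, then band statistics.
    # The paper derives an unsigned AC band plus frequency/orientation
    # bands; here we use four illustrative statistics instead of its six.
    h, w = frame.shape
    D = dct_matrix(block)
    coefs = np.array([
        np.abs(D @ frame[i:i + block, j:j + block] @ D.T)
        for i in range(0, h - block + 1, block)
        for j in range(0, w - block + 1, block)
    ])
    ac = coefs.copy()
    ac[:, 0, 0] = 0.0                    # drop DC: unsigned AC band
    low = coefs[:, :3, :3].mean()        # hypothetical low-frequency band
    high = coefs[:, 4:, 4:].mean()       # hypothetical high-frequency band
    return np.array([ac.mean(), ac.std(), low, high])

def video_score(frames, mlp):
    # Stage 2 (sketch): temporal pooling (here, a mean over frames) maps
    # each frame-level feature to a video-level feature; `mlp` maps the
    # pooled feature vector to a single quality score.
    F = np.array([frame_features(f) for f in frames])
    return mlp(F.mean(axis=0))
```

For example, `video_score(frames, trained_mlp)` would return one scalar quality prediction per sequence; in the paper the mapping is a trained multilayer neural network rather than this placeholder.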
Inclusive pages
533-546
ISBN/ISSN
1051-8215
Copyright
Copyright © 2015, IEEE
Publisher
IEEE
Volume
25
Peer Reviewed
yes
Issue
4
eCommons Citation
Zhu, Kongfeng; Li, Chengqing; Asari, Vijayan K.; and Saupe, Dietmar, "No-Reference Video Quality Assessment Based on Artifact Measurement and Statistical Analysis" (2015). Electrical and Computer Engineering Faculty Publications. 373.
https://ecommons.udayton.edu/ece_fac_pub/373
Comments
Permission documentation is on file.