No-Reference Video Quality Assessment Based on Artifact Measurement and Statistical Analysis
IEEE Transactions on Circuits and Systems for Video Technology
A discrete cosine transform (DCT)-based no-reference video quality prediction model is proposed that measures artifacts and analyzes the statistics of compressed natural videos. The model has two stages:
- distortion measurement
- nonlinear mapping
In the first stage, an unsigned AC band, three frequency bands, and two orientation bands are generated from the DCT coefficients of each decoded frame in a video sequence. Six efficient frame-level features are then extracted to quantify the distortion of natural scenes. In the second stage, each frame-level feature of all frames is transformed to a corresponding video-level feature via temporal pooling; a trained multilayer neural network then takes all video-level features as inputs and outputs a score as the predicted quality of the video sequence. The proposed method was tested on videos with various compression types, content, and resolutions in four databases. We compared our model with a linear model, a support-vector-regression-based model, a state-of-the-art training-based model, and four popular full-reference metrics. Detailed experimental results demonstrate that the predictions of the proposed method are highly correlated with the subjective assessments.
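The second stage described above can be illustrated with a minimal sketch of the temporal-pooling step. The use of mean pooling here is an assumption for illustration only; the abstract does not specify which pooling function the paper uses.

```python
# Hypothetical sketch: pool six frame-level features into one
# video-level feature vector by averaging over all frames.
# Mean pooling is an assumed choice, not necessarily the paper's method.

def temporal_pool(frame_features):
    """Average each frame-level feature across all frames.

    frame_features: list of per-frame feature vectors (one list of
    six floats per decoded frame).
    Returns a single video-level feature vector of the same length.
    """
    n_frames = len(frame_features)
    n_feats = len(frame_features[0])
    return [sum(f[i] for f in frame_features) / n_frames
            for i in range(n_feats)]

# Example: three frames, six (made-up) frame-level features each.
frames = [[1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
          [3.0, 2.0, 1.0, 4.0, 5.0, 6.0],
          [2.0, 2.0, 2.0, 4.0, 5.0, 6.0]]
video_features = temporal_pool(frames)
# video_features == [2.0, 2.0, 2.0, 4.0, 5.0, 6.0]
```

The resulting video-level feature vector would then be fed to the trained multilayer neural network, which maps it to a single predicted quality score.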
Copyright © 2015, IEEE
Zhu, Kongfeng; Li, Chengqing; Asari, Vijayan K.; and Saupe, Dietmar, "No-Reference Video Quality Assessment Based on Artifact Measurement and Statistical Analysis" (2015). Electrical and Computer Engineering Faculty Publications. 373.