Document Type

Conference Paper

Publication Date

10-2013

Publication Source

Proceedings of the 21st ACM International Conference on Multimedia

Abstract

Recently, visual saliency has attracted wide attention from researchers in the computer vision and multimedia fields. However, most visual saliency research has been conducted on still images, studying static saliency. In this paper, we present, for the first time, a comprehensive comparative study of dynamic saliency (video shots) and static saliency (key frames of the corresponding video shots), from which we draw two key observations: 1) video saliency often differs from, yet is closely related to, image saliency, and 2) camera motions, such as tilting, panning, and zooming, significantly affect dynamic saliency.

Motivated by these observations, we propose a novel camera-motion- and image-saliency-aware model for dynamic saliency prediction.

Extensive experiments on two static-vs.-dynamic saliency datasets we collected show that our proposed method outperforms state-of-the-art methods for dynamic saliency prediction. Finally, we introduce an application of dynamic saliency prediction to dynamic video captioning, helping people with hearing impairments better enjoy videos that contain only off-screen voices, e.g., documentary films, news videos, and sports videos.

Inclusive pages

987-996

ISBN/ISSN

978-1-4503-2404-5

Comments

The document available for download is the authors' accepted manuscript, provided in compliance with the publisher's policy on self-archiving. Permission documentation is on file.

Publisher

Association for Computing Machinery

Place of Publication

Barcelona, Spain
