Title

As-Similar-As-Possible Saliency Fusion

Document Type

Article

Publication Date

5-2016

Publication Source

Multimedia Tools and Applications

Abstract

Salient region detection has gradually become a popular topic in multimedia and computer vision research. However, existing techniques vary considerably in methodology, each with inherent pros and cons. In this paper, we propose fusing the saliency hypotheses, namely the saliency maps produced by different methods, by accentuating their advantages and attenuating their disadvantages. To this end, our algorithm consists of three basic steps. First, given the test image, our method finds similar images and their saliency hypotheses by comparing learned deep features. Second, error-aware coefficients are computed from these saliency hypotheses. Third, our method produces a pixel-accurate saliency map that covers the objects of interest and exploits the advantages of the state-of-the-art methods. We then evaluate the proposed framework on three challenging datasets, namely MSRA-1000, ECSSD and iCoSeg. Extensive experimental results show that our method outperforms all state-of-the-art approaches. In addition, we have applied our method to the SquareMe application, an autonomous image resizing system. The subjective user-study experiment demonstrates that humans prefer the image retargeting results obtained using the saliency maps from our proposed algorithm.
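
To make the three steps in the abstract concrete, the following is a minimal, hypothetical Python sketch of such a fusion pipeline. The cosine-similarity retrieval, the exponential weighting of per-method errors, and all variable names are illustrative assumptions for exposition, not the paper's actual formulation.

    import numpy as np

    def fuse_saliency(test_feat, neighbor_feats, neighbor_errors, hypotheses, k=5):
        """Hypothetical fusion sketch: weight each saliency hypothesis by how
        reliably its method performed on the k retrieved similar images.

        test_feat       : (d,)      deep feature of the test image
        neighbor_feats  : (n, d)    deep features of candidate images
        neighbor_errors : (n, m)    per-image error of each of the m methods
        hypotheses      : (m, H, W) saliency maps produced for the test image
        """
        # Step 1: retrieve the k most similar images by cosine similarity of deep features.
        sims = neighbor_feats @ test_feat / (
            np.linalg.norm(neighbor_feats, axis=1) * np.linalg.norm(test_feat) + 1e-8)
        nearest = np.argsort(-sims)[:k]

        # Step 2: error-aware coefficients; lower error on similar images gives higher weight.
        mean_err = neighbor_errors[nearest].mean(axis=0)   # shape (m,)
        weights = np.exp(-mean_err)
        weights /= weights.sum()

        # Step 3: fuse the hypotheses as a weighted combination, normalized to [0, 1].
        fused = np.tensordot(weights, hypotheses, axes=1)  # shape (H, W)
        fused = (fused - fused.min()) / (fused.max() - fused.min() + 1e-8)
        return fused
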

Inclusive pages

1-19

ISBN/ISSN

1380-7501

Comments

Permission documentation is on file.

Publisher

Springer

Peer Reviewed

yes

