Document Type
Conference Paper
Publication Date
12-2015
Publication Source
2015 IEEE International Symposium on Multimedia
Abstract
The explosive growth of multimedia data on the Internet has created huge opportunities for online video advertising. In this paper, we propose a novel advertising technique called SalAd, which utilizes textual information, visual content and webpage saliency to automatically associate the most suitable companion ads with online videos. Unlike most existing approaches that only focus on selecting the most relevant ads, SalAd further considers the saliency of the selected ads to reduce intentional ignorance. SalAd consists of three basic steps. Given an online video and a set of advertisements, we first roughly identify a set of relevant ads based on textual information matching. We then carefully select a subset of candidates based on visual content matching. In this regard, our selected ads are contextually relevant to the online video content in terms of both textual information and visual content. We finally select the most salient ad among the relevant ads as the most appropriate one. To demonstrate the effectiveness of our method, we have conducted a rigorous eye-tracking experiment on two ad datasets. The experimental results show that our method enhances user engagement with the ad content while maintaining the quality of users' video viewing experience.
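The three-step selection described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the scoring callables (text_score, visual_score, saliency_score) and the cutoff parameters are hypothetical placeholders standing in for the paper's textual matching, visual content matching, and webpage saliency components.

```python
from typing import Callable, List, Any

def select_companion_ad(
    video: Any,
    ads: List[Any],
    text_score: Callable[[Any, Any], float],      # textual relevance of an ad to the video
    visual_score: Callable[[Any, Any], float],    # visual similarity of an ad to the video
    saliency_score: Callable[[Any, Any], float],  # predicted saliency of the ad on the webpage
    text_top_k: int = 50,
    visual_top_k: int = 10,
) -> Any:
    """Sketch of SalAd's coarse-to-fine selection pipeline (hypothetical parameters)."""
    # Step 1: roughly identify relevant ads via textual information matching.
    candidates = sorted(ads, key=lambda ad: text_score(video, ad), reverse=True)[:text_top_k]

    # Step 2: refine the candidate set via visual content matching.
    candidates = sorted(candidates, key=lambda ad: visual_score(video, ad), reverse=True)[:visual_top_k]

    # Step 3: among the contextually relevant ads, pick the most salient one
    # to reduce intentional ignorance by viewers.
    return max(candidates, key=lambda ad: saliency_score(video, ad))
```

In this reading, steps 1 and 2 progressively narrow the pool on contextual relevance, and step 3 breaks the remaining ties by choosing the ad most likely to attract attention on the page.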
Inclusive pages
211-216
ISBN/ISSN
978-1-5090-0379-2
Document Version
Postprint
Copyright
Copyright © 2015, IEEE
Publisher
IEEE
Place of Publication
Miami, FL
eCommons Citation
Xiang, Chen; Nguyen, Tam; and Kankanhalli, Mohan, "SalAd: A Multimodal Approach for Contextual Video Advertising" (2015). Computer Science Faculty Publications. 70.
https://ecommons.udayton.edu/cps_fac_pub/70
Included in
Graphics and Human Computer Interfaces Commons, OS and Networks Commons, Other Computer Sciences Commons
Comments
This document is provided for download in compliance with the publisher's policy on self-archiving. Differences may exist between this document and the published version, which is available using the link provided. Permission documentation is on file.