Document Type

Conference Paper

Publication Date

October 2012

Publication Source

Proceedings of the 20th ACM International Conference on Multimedia

Abstract

Discovering the secret of beauty has been the pursuit of artists and philosophers for centuries. Nowadays, computational models for beauty estimation are actively explored in the computer science community, yet with a focus mainly on facial features. In this work, we perform a comprehensive study of female attractiveness conveyed by single or multiple modalities of cues, i.e., face, dressing, and/or voice, and aim to uncover how different modalities individually and collectively affect the human sense of beauty. To this end, we collect the first Multi-Modality Beauty (M2B) dataset for female attractiveness study, thoroughly annotated with attractiveness levels converted from manual k-wise ratings and with semantic attributes for each modality. A novel Dual-supervised Feature-Attribute-Task (DFAT) network is proposed to jointly learn the beauty estimation models for single and multiple modalities as well as the attribute estimation models; it is distinguished by supervision in both the attribute and task layers. Several interesting observations on the sense of beauty across single and multiple modalities are reported, and extensive experimental evaluations on the collected M2B dataset demonstrate the effectiveness of the proposed DFAT network for female attractiveness estimation.
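
The dual-supervision idea described in the abstract, a shared feature layer feeding an attribute layer and a task layer that each carry their own supervised loss, can be illustrated with a minimal sketch. This is not the authors' implementation: the layer sizes, loss functions, and weighting factor `lam` below are illustrative assumptions, and only the Feature-Attribute-Task layering with losses on both the attribute and task outputs follows the abstract.

```python
# Minimal sketch of a dual-supervised Feature-Attribute-Task layering.
# All dimensions, losses, and the loss weight `lam` are assumptions,
# not values from the paper.
import torch
import torch.nn as nn

class DualSupervisedNet(nn.Module):
    def __init__(self, in_dim=512, attr_dim=40):
        super().__init__()
        # Shared feature layer over (concatenated) multi-modal features.
        self.feature = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        # Attribute layer: predicts semantic attributes (supervised).
        self.attribute = nn.Linear(256, attr_dim)
        # Task layer: predicts the attractiveness score from the shared
        # features together with the predicted attributes (supervised).
        self.task = nn.Linear(256 + attr_dim, 1)

    def forward(self, x):
        h = self.feature(x)            # shared features
        a = self.attribute(h)          # attribute logits
        y = self.task(torch.cat([h, a], dim=1))  # attractiveness score
        return a, y

def joint_loss(a_pred, a_true, y_pred, y_true, lam=0.5):
    # Dual supervision: a loss on the attribute layer plus a loss on
    # the task layer, combined with an illustrative weight `lam`.
    attr_loss = nn.functional.binary_cross_entropy_with_logits(a_pred, a_true)
    task_loss = nn.functional.mse_loss(y_pred.squeeze(1), y_true)
    return task_loss + lam * attr_loss
```

Feeding the predicted attributes back into the task layer mirrors the Feature-Attribute-Task layering, and joint training lets the attribute supervision regularize the shared features used for attractiveness estimation.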

Inclusive pages

239-248

ISBN/ISSN

978-1-4503-1089-5

Document Version

Postprint

Comments

The document provided for download is the authors' accepted manuscript, provided in compliance with the publisher's policy on self-archiving. Permission documentation is on file.

Publisher

Association for Computing Machinery

Place of Publication

Nara, Japan
