Daily Arxiv

This page organizes papers related to artificial intelligence published around the world.
It is summarized using Google Gemini and operated on a non-profit basis.
The copyright of each paper belongs to its authors and their institutions; when sharing, please cite the source.

Leveraging Large Models to Evaluate Novel Content: A Case Study on Advertisement Creativity

Created by
  • Haebom

Author

Zhaoyi Joey Hou, Adriana Kovashka, Xiang Lorraine Li

Outline

This paper addresses the challenging problem of assessing the creativity of visual advertisements. Drawing on marketing research, the authors decompose visual advertising creativity into two dimensions, "unstructuredness" and "originality." Based on fine-grained human annotations along these dimensions, they propose tasks tailored to this inherently subjective problem. They then evaluate how state-of-the-art vision-language models (VLMs) perform on the proposed benchmark, demonstrating both the potential and the limitations of using VLMs for automated creativity assessment.

Takeaways, Limitations

Takeaways:
A new approach to evaluating visual advertising creativity by decomposing it into "unstructuredness" and "originality."
Presenting a new benchmark dataset and tasks for subjective creativity assessment.
Experimental verification of the creativity evaluation performance and limitations of state-of-the-art VLMs.
Presenting the possibility of automatic creativity assessment using VLMs.
Limitations:
The proposed benchmark is limited to visual advertising, which may limit its generalizability.
The subjectivity of human annotations may influence the results.
The performance of VLMs still falls short of human performance and requires further improvement.
The decomposition of creativity into "unstructuredness" and "originality" may not encompass all types of creativity.