Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

HAPI: A Model for Learning Robot Facial Expressions from Human Preferences

Created by
  • Haebom

Authors

Dongsheng Yang, Qianying Liu, Wataru Sato, Takashi Minato, Chaoran Liu, Shin'ya Nishida

Outline

This paper proposes a novel learning-to-rank framework to improve the realism and naturalness of generated robot facial expressions. Existing manually tuned methods are limited in expressive subtlety and realism; this study aims to overcome those limitations by leveraging human preference data. Specifically, the authors collect human preferences through pairwise comparison annotations and develop the HAPI (Human Affective Pairwise Impressions) model, based on a Siamese RankNet, to improve facial expression evaluation. Experiments combining Bayesian optimization with an online facial expression survey on a 35-DOF android platform show that the proposed method generates anger, happiness, and surprise expressions that are significantly more realistic and socially appropriate than those of existing methods. This confirms that the framework effectively bridges the gap between human preferences and model predictions, aligning robot facial expression generation with human emotional responses.
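The core idea, scoring expressions with a shared network trained on pairwise preference labels, can be sketched as follows. This is a minimal illustrative example, not the paper's actual HAPI implementation: it assumes a simple linear scorer over toy feature vectors and uses a numerical gradient for brevity, whereas the paper uses a Siamese RankNet over expression data.

```python
import numpy as np

def score(w, x):
    """Shared scoring branch (the 'Siamese' part): expression features -> scalar."""
    return float(np.dot(w, x))

def ranknet_loss(w, x_pref, x_other):
    """RankNet loss: cross-entropy on P(pref > other) = sigmoid(f(pref) - f(other))."""
    diff = score(w, x_pref) - score(w, x_other)
    p = 1.0 / (1.0 + np.exp(-diff))
    return -np.log(p + 1e-12)

# Toy annotated pair: the first expression was preferred by human raters.
w = np.zeros(3)
x_pref = np.array([1.0, 0.2, 0.0])
x_other = np.array([0.1, 0.9, 0.5])

# One gradient step on the pairwise loss (numerical gradient for simplicity).
lr, eps = 0.5, 1e-6
grad = np.array([
    (ranknet_loss(w + eps * np.eye(3)[i], x_pref, x_other)
     - ranknet_loss(w - eps * np.eye(3)[i], x_pref, x_other)) / (2 * eps)
    for i in range(3)
])
w -= lr * grad

# After the update, the preferred expression scores higher.
print(score(w, x_pref) > score(w, x_other))  # True
```

Once such a scorer is trained on many annotated pairs, it can serve as the objective for Bayesian optimization over the robot's actuator parameters, which is the role HAPI plays in the paper's pipeline.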

Takeaways, Limitations

Takeaways:
A novel approach to robot facial expression generation that incorporates human preferences
Improved accuracy of facial expression evaluation via the HAPI model based on a Siamese RankNet
Experiments on a 35-DOF android platform demonstrated more realistic and natural expressions than existing methods
Effective alignment between human emotional responses and robot facial expression generation
Limitations:
Only three emotional expressions (anger, happiness, and surprise) have been tested; extending the approach to a wider range of emotions requires further research.
Results are specific to the 35-DOF android platform; generalizability to other robot platforms needs to be verified.
Collecting human preference data is costly and time-consuming; more efficient data collection methods are needed.