Daily Arxiv

This page organizes papers related to artificial intelligence published around the world.
It is summarized using Google Gemini and operated on a non-profit basis.
Copyright of each paper belongs to its authors and their institutions; please cite the source when sharing.

Talk Less, Call Right: Enhancing Role-Play LLM Agents with Automatic Prompt Optimization and Role Prompting

Created by
  • Haebom

Author

Saksorn Ruangtanusak, Pittawat Taveekitworachai, Kunat Pipatanakul

Outline

This paper investigates approaches for prompting tool-augmented large language models (LLMs) to act as role-playing conversational agents in the API track of the Commonsense Persona-grounded Dialogue Challenge (CPDC) 2025. Specifically, it targets two failure modes: agents producing excessively long in-character responses (over-speaking) while failing to use tools effectively in line with their personas (under-acting). To address this, the authors explore four prompting approaches: 1) basic role prompting, 2) improved role prompting, 3) automatic prompt optimization (APO), and 4) rule-based role prompting (RRP). RRP performs best, raising the overall score from the zero-shot baseline of 0.519 to 0.571, thanks to two novel techniques: character-card/scene-contract design and strict enforcement of function calls.
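The paper's exact enforcement mechanism is not detailed in this summary. As a rough illustration only, "strict enforcement of function calls" can be thought of as a validation layer that accepts a model response only when it is a well-formed call to a permitted tool, and rejects free-form prose so the agent can be re-prompted. The tool names and JSON shape below are hypothetical, not taken from the paper:

```python
import json

# Hypothetical allow-list of tools this persona may call (names are illustrative).
ALLOWED_TOOLS = {"check_inventory", "get_item_price"}

def enforce_function_call(raw_response: str) -> dict:
    """Accept a response only if it is a well-formed call to an allowed tool;
    raise ValueError otherwise so the caller can retry or re-prompt."""
    try:
        call = json.loads(raw_response)
    except json.JSONDecodeError:
        # Verbose in-character prose fails JSON parsing and is rejected.
        raise ValueError("response is free-form text, not a function call")
    if call.get("name") not in ALLOWED_TOOLS:
        raise ValueError(f"tool {call.get('name')!r} is not permitted for this persona")
    if not isinstance(call.get("arguments"), dict):
        raise ValueError("missing or malformed arguments object")
    return call

# A valid call passes through unchanged...
ok = enforce_function_call('{"name": "check_inventory", "arguments": {"item": "sword"}}')

# ...while over-speaking (long in-character prose) is rejected, prompting a retry.
try:
    enforce_function_call("Ah, traveler! Let me regale you with tales of my wares...")
    rejected = ""
except ValueError as err:
    rejected = str(err)
```

This kind of hard gate, rather than relying on the prompt alone, is one plausible way to curb both over-speaking and under-acting at once.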

Takeaways, Limitations

Takeaways:
Rule-based role prompting (RRP) can make role-playing conversational agents substantially more effective and reliable than more complex methods such as APO.
Character-card/scene-contract design and strict enforcement of function calls are the key success factors behind RRP.
The authors release the source code for their best-performing prompts and APO tools to support future development of persona prompts.
Limitations:
The paper does not explicitly discuss its limitations.