This paper investigates approaches for prompting tool-augmented large language models (LLMs) to act as role-playing conversational agents in the API track of the Commonsense Persona-grounded Dialogue Challenge (CPDC) 2025. Specifically, we address two failure modes of such agents: generating excessively long responses (overspeech) and failing to use tools appropriately for their personas (underbehavior). To mitigate these issues, we explore four prompting approaches: (1) basic role prompting, (2) improved role prompting, (3) automatic prompt optimization (APO), and (4) rule-based role prompting (RRP). RRP performs best, achieving an overall score of 0.571 against the zero-shot baseline of 0.519, thanks to two novel techniques: character card/scene contract design and strict enforcement of function calls.
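The "strict enforcement of function calls" idea can be illustrated with a minimal sketch. All names below (`ALLOWED_TOOLS`, `enforce_function_call`, the JSON call format) are assumptions for illustration, not the paper's actual implementation: when the dialogue state requires a tool, any model output that is not a well-formed call to an allowed function is rejected and the model is re-prompted.

```python
import json

# Hypothetical set of tool names the persona is allowed to call.
ALLOWED_TOOLS = {"get_item_price", "check_inventory"}

def parse_function_call(raw: str):
    """Return (name, args) if raw is a valid JSON tool call, else None."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return None
    name, args = obj.get("name"), obj.get("arguments")
    if name in ALLOWED_TOOLS and isinstance(args, dict):
        return name, args
    return None

def enforce_function_call(generate, max_retries: int = 3):
    """Call `generate()` (an LLM stub) until it yields a valid tool call."""
    for _ in range(max_retries):
        call = parse_function_call(generate())
        if call is not None:
            return call
    raise RuntimeError("model failed to produce a valid function call")

# Usage with a stubbed model that first digresses (overspeech), then complies:
outputs = iter([
    "Ah, traveler, let me tell you a long story...",  # rejected: free-form prose
    '{"name": "get_item_price", "arguments": {"item": "sword"}}',
])
name, args = enforce_function_call(lambda: next(outputs))
```

The retry loop is the key design choice: rather than post-editing a non-compliant response, the constraint is enforced before any reply reaches the user.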