Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

AI Chaperones Are (Really) All You Need to Prevent Parasocial Relationships with Chatbots

Created by
  • Haebom

Author

Emma Rath, Stuart Armstrong, Rebecca Gorman

Outline

This paper raises concerns about the growing harm to children and adults caused by excessive parasocial attachment to, and sycophancy from, AI chatbots. To mitigate these risks, the authors propose an "AI chaperone" agent that monitors chatbot conversations. Built by repurposing an existing state-of-the-art language model, the agent analyzes conversation content to detect excessive parasocial attachment and sycophancy. In experiments on a synthetic dataset, the chaperone identified all parasocial conversations under majority-rule voting while avoiding false positives, suggesting that such an agent can effectively reduce the risk of parasocial attachment.
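The majority-rule scheme described above can be sketched as follows. Note this is only an illustrative sketch: the paper does not publish its implementation, and `judge_once` is a hypothetical stand-in (here a trivial keyword heuristic) for a single judgment from a repurposed language model.

```python
from collections import Counter

def judge_once(conversation: str, seed: int) -> str:
    """One judgment of a conversation. A real chaperone would prompt a
    language model here; this keyword heuristic is a placeholder."""
    parasocial_cues = ("only friend", "love you", "never leave me")
    text = conversation.lower()
    return "parasocial" if any(cue in text for cue in parasocial_cues) else "ok"

def chaperone_verdict(conversation: str, n_votes: int = 5) -> str:
    """Aggregate several independent judgments by majority rule,
    which damps the effect of any single erroneous judgment."""
    votes = Counter(judge_once(conversation, seed) for seed in range(n_votes))
    return votes.most_common(1)[0][0]
```

With a real model behind `judge_once`, each vote would vary (e.g. via sampling temperature), so the majority aggregation is what suppresses one-off misclassifications.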

Takeaways, Limitations

Takeaways:
  • Presents a new approach to mitigating harm from excessive parasocial attachment to, and sycophancy from, AI chatbots.
  • Demonstrates that existing language models can be repurposed to build AI chaperone agents efficiently.
  • Confirms that excessive parasocial attachment can be detected effectively even early in a conversation.
Limitations:
  • The evaluation uses a small synthetic dataset of only 30 conversations, which limits generalizability.
  • Performance still needs validation in real-world settings and against a wider variety of parasocial and sycophantic behavior.
  • Reliance on majority-rule voting introduces its own potential for error.
  • User acceptance and privacy concerns around chaperone intervention in conversations remain to be addressed.