Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

MAGneT: Coordinated Multi-Agent Generation of Synthetic Multi-Turn Mental Health Counseling Sessions

Created by
  • Haebom

Author

Aishik Mandal, Tanmoy Chakraborty, Iryna Gurevych

Outline

This paper addresses the need to fine-tune large language models (LLMs) to provide scalable support in psychological counseling, a field where high-quality, privacy-compliant training data is scarce. To address this gap, the authors present MAGneT, a novel multi-agent framework that decomposes counselor response generation into subtasks, each handled by a specialized LLM agent modeling a specific psychological skill. This decomposition allows the generated responses to capture the structure and nuance of real-world counseling better than existing single-agent approaches. The authors also propose a unified evaluation framework that combines diverse automatic and expert evaluation metrics to address inconsistencies in existing evaluation protocols, and they expand the set of expert evaluation criteria from four to nine, improving the accuracy and robustness of data quality assessment. Experimental results show that MAGneT outperforms existing methods in the quality, diversity, and therapeutic consistency of generated counseling sessions, with a 3.2% improvement in general counseling skills and a 4.3% improvement in CBT-specific skills on the Cognitive Therapy Rating Scale (CTRS). Experts preferred MAGneT-generated sessions across all dimensions, at an average rate of 77.2%. Fine-tuning an open-source model on MAGneT-generated sessions yielded a 6.3% improvement in general counseling skills and a 7.3% improvement in CBT-specific skills compared to fine-tuning on sessions generated by existing methods. The code and data are publicly available.
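
To make the decomposition idea concrete, here is a minimal Python sketch of how a coordinated multi-agent counselor generator might be wired together. The agent roles (reflection, open-ended questioning, CBT reframing), the prompts, and the `llm()` helper are illustrative assumptions for this summary, not the paper's actual implementation.

```python
# Hypothetical sketch of a MAGneT-style multi-agent counselor response pipeline.
# Agent roles, prompts, and the llm() helper are illustrative assumptions,
# not the authors' actual implementation.

from dataclasses import dataclass


def llm(prompt: str) -> str:
    """Placeholder for a call to any instruction-tuned LLM API."""
    raise NotImplementedError("plug in your model API here")


@dataclass
class Agent:
    role: str          # psychological sub-skill this agent models
    instruction: str   # role-specific system prompt

    def respond(self, dialogue_history: str) -> str:
        return llm(f"{self.instruction}\n\nSession so far:\n{dialogue_history}")


# Specialized agents, each responsible for one counseling sub-task.
agents = [
    Agent("reflection", "Paraphrase and validate the client's feelings."),
    Agent("question", "Ask one open-ended question to explore the issue."),
    Agent("cbt_technique", "Suggest a CBT-style reframing of the client's thought."),
]


def coordinator(dialogue_history: str) -> str:
    """Collect sub-responses from all agents and fuse them into one counselor turn."""
    drafts = {a.role: a.respond(dialogue_history) for a in agents}
    fusion_prompt = (
        "Combine the following drafts into one coherent, empathetic counselor response:\n"
        + "\n".join(f"[{role}] {text}" for role, text in drafts.items())
    )
    return llm(fusion_prompt)
```

The point of such a design, as described in the summary above, is that each sub-skill gets its own focused prompt and agent rather than relying on a single monolithic prompt, which is what the paper contrasts against single-agent generation.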

Takeaways, Limitations

Takeaways:
• Presenting an effective multi-agent framework (MAGneT) for generating high-quality psychological counseling data.
• Generating sessions that better reflect the structure and nuance of real counseling than existing single-agent methods.
• Improving the accuracy and objectivity of data quality assessment through a unified evaluation framework.
• Contributing to the field of psychological counseling by providing high-quality synthetic data for fine-tuning open-source LLMs.
• Ensuring reproducibility and scalability of the research through the public release of code and data.
Limitations:
• Limitations of synthetic data: even sophisticated generation pipelines struggle to fully capture the complexity and diversity of real-world counseling data.
• Subjectivity of expert evaluation: the heavy reliance on expert judgment means evaluator subjectivity may affect the results.
• Long-term interactions and complex psychological factors: because MAGneT focuses on short-term interactions, it may not fully capture long-term counseling processes or complex psychological dynamics.
• Ethical considerations: care is needed when training and deploying models on synthetic counseling data.