This paper highlights the need for fine-tuning large language models (LLMs) to provide scalable services in psychological counseling. To address the scarcity of high-quality, privacy-compliant training data, we present MAGneT, a novel multi-agent framework for synthesizing counseling sessions. MAGneT decomposes counselor response generation into subtasks, each handled by a specialized LLM agent that models a distinct psychological skill. This decomposition captures the structure and nuance of real-world counseling more faithfully than existing single-agent approaches. We further propose a unified evaluation framework that combines diverse automatic and expert evaluation metrics to address inconsistencies in existing evaluation protocols, and we expand the expert evaluation from four items to nine, improving the accuracy and robustness of data quality assessment. Experimental results show that MAGneT outperforms existing methods in the quality, diversity, and therapeutic consistency of generated counseling sessions, yielding a 3.2% improvement in general counseling skills and a 4.3% improvement in CBT-specific skills as measured by the Cognitive Therapy Rating Scale (CTRS). Experts preferred MAGneT-generated sessions across all dimensions, with an average preference rate of 77.2%. Moreover, fine-tuning an open-source model on MAGneT-generated sessions improved general counseling skills by 6.3% and CBT-specific skills by 7.3% compared to fine-tuning on sessions generated by existing methods. The code and data are publicly available.