In this paper, we propose CCMA, a hierarchical cooperative multi-agent framework that addresses key limitations of conventional reinforcement learning (RL): difficulty in replicating human-like behaviors, poor generalization in multi-agent environments, and limited interpretability. CCMA integrates RL for individual agent interactions, a fine-tuned large language model (LLM) for local cooperation, a reward function for global optimization, and a search-augmented generation mechanism for dynamic decision optimization in complex driving scenarios. Experimental results show that CCMA significantly outperforms conventional RL methods at both the micro and macro levels in complex driving environments.