This paper proposes the Collaborative Chain-of-Agents (CoCoA) framework to address a key limitation of Retrieval-Augmented Generation (RAG), a promising framework for improving the performance of large language models (LLMs) on knowledge-intensive tasks: existing RAG methods fail to fully exploit the synergy between the model's internal parametric knowledge and external retrieved knowledge. CoCoA overcomes this challenge through a multi-agent approach. We first present CoCoA-zero, which performs conditional knowledge induction and then reasons over the induced knowledge to produce an answer. Building on this, we develop CoCoA, which synthesizes extended multi-agent inference trajectories and uses them to fine-tune the LLM. Experimental results demonstrate that CoCoA-zero and CoCoA achieve superior performance on open-domain and multi-hop question-answering tasks.
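The two-stage induce-then-answer pipeline of CoCoA-zero can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the retriever, the prompt wording, and the stand-in `toy_llm` are all assumptions introduced here for clarity.

```python
# Illustrative sketch of a two-stage "induce knowledge, then answer"
# RAG pipeline in the spirit of CoCoA-zero. All names and the toy
# retriever/LLM below are hypothetical, not the paper's implementation.

def retrieve(question, corpus):
    """Toy retriever: return passages sharing any word with the question."""
    words = set(question.lower().split())
    return [p for p in corpus if words & set(p.lower().split())]

def induce_knowledge(llm, question, passages):
    """Stage 1: induce a consolidated knowledge statement that merges the
    model's internal (parametric) knowledge with the retrieved passages."""
    prompt = ("Summarize what is known about the question, combining your "
              "own knowledge with these passages:\n"
              + "\n".join(passages)
              + f"\nQuestion: {question}")
    return llm(prompt)

def answer(llm, question, knowledge):
    """Stage 2: answer conditioned on the induced knowledge."""
    prompt = f"Knowledge: {knowledge}\nQuestion: {question}\nAnswer:"
    return llm(prompt)

def cocoa_zero(llm, question, corpus):
    """Run retrieval, knowledge induction, and answering in sequence."""
    passages = retrieve(question, corpus)
    knowledge = induce_knowledge(llm, question, passages)
    return answer(llm, question, knowledge)

if __name__ == "__main__":
    corpus = [
        "Paris is the capital of France.",
        "The Seine flows through Paris.",
    ]
    # Stand-in for a real LLM: simply echoes the last line of the prompt.
    toy_llm = lambda prompt: prompt.splitlines()[-1]
    print(cocoa_zero(toy_llm, "What is the capital of France?", corpus))
```

In the full CoCoA framework, trajectories traced by such a multi-stage pipeline would additionally be synthesized into training data for fine-tuning.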