This paper presents an approach that simultaneously addresses three problems that arise when Retrieval-Augmented Generation (RAG) is used to improve the factuality of Large Language Model (LLM) agents: ambiguous user queries, conflicting information across retrieved documents, and inaccurate information. Unlike previous studies that tackled each problem in isolation, the paper proposes RAMDocs, a new dataset that mimics realistic retrieval scenarios containing ambiguity, misinformation, and noise. It then presents MADAM-RAG, a multi-agent approach in which LLM agents resolve ambiguity and filter out misinformation and noise through multi-round discussion. Experimental results show that MADAM-RAG significantly outperforms existing RAG baselines on the AmbigDocs and FaithEval datasets, while also revealing that considerable room for improvement remains, especially when the imbalance between supporting evidence and misinformation is severe.
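To make the mechanism concrete, below is a minimal Python sketch of a MADAM-RAG-style loop, assuming a generic `generate` wrapper around any LLM chat-completion call; the prompt wording, the convergence check, and the aggregation step are illustrative assumptions rather than the paper's exact implementation.

```python
# A minimal sketch of a MADAM-RAG-style multi-round, multi-agent debate.
# `generate` is a stand-in for any LLM call; all prompt text here is an
# illustrative assumption, not the paper's exact prompting.

from typing import Callable, List

def madam_rag(
    query: str,
    documents: List[str],
    generate: Callable[[str], str],  # wraps an LLM API client
    rounds: int = 3,
) -> str:
    """One agent per retrieved document; agents debate for several rounds,
    then an aggregator produces the final answer (or set of answers when
    the query is genuinely ambiguous)."""
    answers = ["" for _ in documents]

    for _ in range(rounds):
        new_answers = []
        for i, doc in enumerate(documents):
            # Each agent sees only its own document plus the other agents'
            # answers from the previous round, and may revise its response.
            peers = "\n".join(
                f"Agent {j}: {a}"
                for j, a in enumerate(answers)
                if j != i and a
            )
            prompt = (
                f"Question: {query}\n"
                f"Your document: {doc}\n"
                f"Other agents' current answers:\n{peers or '(none yet)'}\n"
                "Answer based on your document. If you conflict with other "
                "agents, defend your answer, or concede if your evidence "
                "appears unreliable."
            )
            new_answers.append(generate(prompt))
        if new_answers == answers:  # debate converged; stop early
            break
        answers = new_answers

    # Aggregator: reconcile the surviving answers, keeping multiple answers
    # for ambiguous queries and dropping claims flagged as misinformation.
    summary = "\n".join(f"Agent {i}: {a}" for i, a in enumerate(answers))
    return generate(
        f"Question: {query}\nAgent answers after debate:\n{summary}\n"
        "Produce the final answer; list all valid answers if the question "
        "is ambiguous, and ignore answers based on unreliable documents."
    )
```

The key design choice this sketch tries to capture is that each agent is grounded in a single document, so a piece of misinformation must survive cross-examination by agents holding other evidence before it can reach the aggregated answer.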