The Retrieval-Augmented Generation (RAG) framework improves the accuracy of LLMs by retrieving external documents, but it is vulnerable to adversarial attacks that manipulate the retrieval process. In this paper, we propose GRADA, a graph-based reranking framework that defends against adversarial document attacks. GRADA aims to maintain retrieval quality while mitigating the impact of adversarial documents. We conduct experiments on five LLMs (GPT-3.5-Turbo, GPT-4o, Llama3.1-8b, Llama3.1-70b, and Qwen2.5-7b) across three datasets, and achieve up to an 80% reduction in attack success rate on the Natural Questions dataset.
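To make the graph-based reranking idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual algorithm): retrieved documents are connected in a graph weighted by pairwise lexical similarity, and a PageRank-style propagation scores each document by how well it is supported by the rest of the pool. Adversarial documents, which are typically optimized toward the query rather than toward the genuine corpus, tend to be isolated in this graph and are demoted. The similarity measure (Jaccard) and all function names here are illustrative assumptions.

```python
def jaccard(a: str, b: str) -> float:
    # Illustrative similarity: token-set overlap between two documents.
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def graph_rerank(docs, damping=0.85, iters=50):
    """Hypothetical sketch of graph-based reranking: score documents by
    PageRank-style propagation over a similarity graph, so documents
    dissimilar to the rest of the retrieved pool sink to the bottom."""
    n = len(docs)
    # Similarity-weighted adjacency matrix (no self-loops).
    sim = [[jaccard(docs[i], docs[j]) if i != j else 0.0
            for j in range(n)] for i in range(n)]
    scores = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            incoming = 0.0
            for j in range(n):
                out = sum(sim[j])
                if out > 0:
                    # Each document distributes its score along its edges.
                    incoming += sim[j][i] * scores[j] / out
            new.append((1 - damping) / n + damping * incoming)
        scores = new
    # Return document indices, best-supported first.
    return sorted(range(n), key=lambda i: scores[i], reverse=True)
```

On a toy pool where three documents agree and one outlier shares no vocabulary with the others, the outlier receives only the uniform teleport mass and is ranked last, which is the behavior a reranking defense of this kind relies on.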