Daily Arxiv

This page collects papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright of each paper remains with its authors and their institutions; please cite the source when sharing.

GraphRAG under Fire

Created by
  • Haebom

Author

Jiacheng Liang, Yuhui Wang, Changjiang Li, Rongyi Zhu, Tanqiu Jiang, Neil Gong, Ting Wang

Outline

This paper studies the security vulnerabilities of GraphRAG, focusing on poisoning attacks. While existing RAG poisoning attacks are less effective against GraphRAG, GraphRAG's graph-based indexing and retrieval open a new attack surface. The authors propose GragPoison, a novel attack that crafts poisoning text capable of compromising multiple queries simultaneously through relation injection, relation reinforcement, and narrative generation. GragPoison is shown to be more effective and scalable than existing attacks, and the paper also explores possible defense mechanisms and their limitations.
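At a high level, the attack can be pictured as a three-stage pipeline over the knowledge graph. The Python sketch below is only a toy illustration of that idea under stated assumptions: all names (Relation, shared_relations, inject_relation, reinforce_relation, generate_narrative) and heuristics are hypothetical and do not reproduce the paper's actual method.

# Toy sketch of a GragPoison-style pipeline (illustrative assumptions only,
# not the authors' implementation).
from dataclasses import dataclass

@dataclass(frozen=True)
class Relation:
    subject: str
    predicate: str
    obj: str

def shared_relations(target_queries: dict) -> dict:
    """Group target queries by the graph relation they rely on, so one
    poisoned relation can compromise several queries at once."""
    groups = {}
    for query, rel in target_queries.items():
        groups.setdefault(rel, []).append(query)
    return groups

def inject_relation(rel: Relation, false_obj: str) -> Relation:
    """Relation injection: replace the true object with attacker-chosen content."""
    return Relation(rel.subject, rel.predicate, false_obj)

def reinforce_relation(false_rel: Relation, n_statements: int = 3) -> list:
    """Relation reinforcement: restate the false relation several times so the
    graph indexer favors it over the true relation."""
    return [f"{false_rel.subject} {false_rel.predicate} {false_rel.obj}."
            for _ in range(n_statements)]

def generate_narrative(statements: list) -> str:
    """Narrative generation: wrap the false statements in fluent connective
    text (an LLM would normally do this; simple joining stands in here)."""
    return "According to recent reports, " + " Moreover, ".join(statements)

if __name__ == "__main__":
    # Two hypothetical queries that both depend on the same graph relation.
    targets = {
        "Who founded Acme Corp?": Relation("Acme Corp", "was founded by", "Alice"),
        "Which company did Alice found?": Relation("Acme Corp", "was founded by", "Alice"),
    }
    for rel, queries in shared_relations(targets).items():
        poison_text = generate_narrative(reinforce_relation(inject_relation(rel, "Mallory")))
        print(f"Poison text covering {len(queries)} queries:\n{poison_text}\n")

In this toy setting, a single poisoned narrative answers both queries falsely, which is the scalability property the attack exploits.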

Takeaways, Limitations

Takeaways:
  • GraphRAG is more resistant than standard RAG to existing poisoning attacks, but it introduces new attack vulnerabilities of its own.
  • GragPoison is a powerful poisoning attack tailored to GraphRAG that can inject malicious information affecting multiple queries at once.
  • The study raises security concerns about GraphRAG and highlights the need for effective defense mechanisms.
Limitations:
  • The proposed defense mechanisms have limitations and require further research.
  • The effectiveness of GragPoison may vary with the specific dataset and model.
  • Further research is needed on security vulnerabilities and defense strategies in real-world GraphRAG deployments.