Daily Arxiv

This page organizes papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright in each paper belongs to its authors and their institutions; when sharing, simply cite the source.

Hallucination Detection in LLMs with Topological Divergence on Attention Graphs

Created by
  • Haebom

Author

Alexandra Bazarova, Aleksandr Yugay, Andrey Shulga, Alina Ermilova, Andrei Volodichev, Konstantin Polev, Julia Belikova, Rauf Parchiev, Dmitry Simakov, Maxim Savchenko, Andrey Savchenko, Serguei Barannikov, Alexey Zaytsev

Outline

To address hallucination, the generation of factually incorrect content by large language models (LLMs), this paper proposes TOHA (TOpology-based HAllucination detector), which measures the topological divergence of graphs induced by attention matrices in a RAG setting. By analyzing the topological divergence between prompt and response subgraphs, the authors find that higher divergence values in specific attention heads correlate with hallucinated outputs. In extensive experiments on question answering and summarization tasks, TOHA achieves state-of-the-art or competitive results on multiple benchmarks while using minimal annotated data and computational resources. This suggests that the topological structure of attention matrices can serve as an efficient and robust signal of factual reliability in LLMs.
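
Reduced to its simplest form, the idea is: treat one attention head's matrix as a weighted graph over tokens, summarize the prompt-only subgraph and the full prompt-plus-response graph topologically, and score the response by how far the two summaries diverge. The sketch below illustrates that pipeline only; it uses the total weight of a minimum spanning tree as a crude stand-in for the paper's actual topological divergence, and the function names, normalization, and toy data are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the idea behind TOHA (not the authors' code).
# Assumption: MST total weight is used as a simple proxy for a
# topological summary of an attention-induced graph.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_weight(attn: np.ndarray) -> float:
    """Total MST weight of the graph with edge costs 1 - symmetrized attention."""
    sym = (attn + attn.T) / 2.0   # make the attention matrix symmetric
    dist = 1.0 - sym              # strong attention -> short edge
    np.fill_diagonal(dist, 0.0)   # ignore self-loops
    return minimum_spanning_tree(dist).sum()

def divergence_score(attn: np.ndarray, prompt_len: int) -> float:
    """Compare the prompt-only subgraph with the full prompt+response graph."""
    prompt_only = mst_weight(attn[:prompt_len, :prompt_len])
    full = mst_weight(attn)
    # Normalize by edge count so graphs of different sizes are comparable.
    n_full, n_prompt = attn.shape[0], prompt_len
    return full / max(n_full - 1, 1) - prompt_only / max(n_prompt - 1, 1)

# Toy usage: random row-softmax "attention" over 12 tokens, first 8 = prompt.
rng = np.random.default_rng(0)
logits = rng.normal(size=(12, 12))
attn = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(f"divergence score: {divergence_score(attn, prompt_len=8):.4f}")
```

Per the summary above, such scores would be computed per attention head, and the heads whose divergence best separates grounded from hallucinated responses would be selected on a small annotated set before thresholding on new outputs.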

Takeaways, Limitations

Takeaways:
Analyzing the topological structure of the attention matrix makes it possible to detect hallucinations in LLMs effectively.
TOHA achieves strong performance while using minimal annotated data and computational resources.
It achieves state-of-the-art or competitive results on question answering and summarization benchmarks.
Limitations:
The paper does not explicitly state its own limitations.