This page collects papers related to artificial intelligence published around the world. Summaries are generated with Google Gemini, and the page is operated on a non-profit basis. Copyright of each paper belongs to its authors and their institutions; when sharing, simply cite the source.
Resisting Contextual Interference in RAG via Parametric-Knowledge Reinforcement
Created by
Haebom
Author
Chenyu Lin, Yilin Wen, Du Su, Hexiang Tan, Fei Sun, Muhan Chen, Chenfu Bao, Zhonghou Lyu
Knowledgeable-R1: Robust Retrieval-Augmented Generation with Parametric Knowledge
Outline
Knowledgeable-R1 is a framework proposed to address a key problem in retrieval-augmented generation (RAG): performance degradation caused by incorrect retrieval results. It uses reinforcement learning to train a large language model (LLM) to leverage external context when it is useful while falling back on its parametric knowledge (PK) and remaining robust to misleading context. The framework uses a joint sampling method to generate paired responses with and without retrieved context, and learns local and global advantages to decide when to ignore incorrect context and when to exploit helpful context. An asymmetric advantage transformation further encourages parametric-knowledge-seeking behavior. Experimental results show that Knowledgeable-R1 improves robustness and reasoning accuracy in both knowledge-conflict and general RAG scenarios, outperforming state-of-the-art (SOTA) baselines.
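The joint-sampling idea above can be sketched in a few lines. This is an illustrative sketch, not the paper's exact formulation: it assumes a GRPO-style group-relative advantage over the pooled rollouts and a hypothetical `boost` factor for the asymmetric transformation; function names are my own.

```python
import statistics

def group_advantages(rewards):
    """Group-relative advantage: reward minus group mean, scaled by std (GRPO-style)."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # avoid division by zero
    return [(r - mean) / std for r in rewards]

def asymmetric_transform(adv, boost=1.5):
    """Hypothetical asymmetric transform: amplify positive advantages of
    parametric-knowledge (no-context) rollouts to encourage PK-seeking."""
    return adv * boost if adv > 0 else adv

def joint_sampling_advantages(ctx_rewards, pk_rewards, boost=1.5):
    """Joint sampling: pool rollouts generated with and without retrieved
    context, compute a shared (global) baseline over the combined group,
    then asymmetrically boost the parametric-knowledge branch."""
    all_adv = group_advantages(ctx_rewards + pk_rewards)
    ctx_adv = all_adv[:len(ctx_rewards)]
    pk_adv = [asymmetric_transform(a, boost) for a in all_adv[len(ctx_rewards):]]
    return ctx_adv, pk_adv
```

When the no-context rollouts score higher (e.g. the retrieved context is wrong), their boosted positive advantages push the policy toward its parametric knowledge; when the context helps, the context branch dominates instead.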
Takeaways, Limitations
•
Takeaways:
◦
Improved robustness and reasoning accuracy of RAG models
◦
Robustness to incorrect contexts
◦
Enhanced ability to utilize parametric knowledge and selectively use external context
◦
Superior performance compared to the SOTA baseline (23% improvement in counterfactual scenarios)
•
Limitations:
◦
Specific limitations are not discussed in the paper (code is publicly released)