Daily Arxiv

This page collects papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
The copyright of each paper belongs to its authors and their institutions; when sharing, please cite the source.

Conflict-Aware Soft Prompting for Retrieval-Augmented Generation

Created by
  • Haebom

Author

Eunseong Choi, June Park, Hyeri Lee, Jongwuk Lee

Outline

Retrieval-augmented generation (RAG) enhances the capabilities of large language models (LLMs) by incorporating external knowledge into the input prompt. However, when the retrieved context conflicts with the model's parametric knowledge, LLMs often fail to resolve the conflict between incorrect external context and correct parametric knowledge. To address this, the authors propose Conflict-Aware REtrieval-Augmented Generation (CARE), which consists of a context evaluator and a base LLM. The context evaluator encodes compressed memory token embeddings from the raw context tokens and, through grounded/adversarial soft prompting, is trained to identify unreliable context and produce guidance signals that steer inference toward the more reliable knowledge source. Extensive experiments show that CARE effectively mitigates context-memory conflicts, achieving an average performance improvement of 5.0% on QA and fact-checking benchmarks and suggesting a promising direction for reliable and adaptable RAG systems.
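The core idea above can be sketched in code: a context evaluator compresses a variable-length retrieved passage into a small, fixed set of "memory token" embeddings, which are prepended to the prompt so the base LLM conditions on that compressed, evaluator-filtered signal rather than the raw tokens. The sketch below is illustrative only; all function names are assumptions, and mean-pooling stands in for the paper's learned evaluator trained with grounded/adversarial soft prompting.

```python
from typing import List

Vector = List[float]

def compress_to_memory_tokens(context_embs: List[Vector], num_memory: int) -> List[Vector]:
    """Compress variable-length context embeddings into `num_memory` embeddings
    by mean-pooling contiguous chunks. (Stand-in for CARE's learned context
    evaluator, which is trained rather than hand-crafted.)"""
    dim = len(context_embs[0])
    chunk = max(1, len(context_embs) // num_memory)
    memory = []
    for i in range(num_memory):
        block = context_embs[i * chunk:(i + 1) * chunk] or context_embs[-1:]
        memory.append([sum(v[d] for v in block) / len(block) for d in range(dim)])
    return memory

def build_llm_input(prompt_embs: List[Vector], context_embs: List[Vector],
                    num_memory: int = 2) -> List[Vector]:
    """Prepend the compressed memory tokens to the prompt embeddings,
    forming the soft-prompted input sequence for the base LLM."""
    return compress_to_memory_tokens(context_embs, num_memory) + prompt_embs

# Example: six 2-d context token embeddings compressed to 2 memory tokens.
ctx = [[1.0, 0.0], [3.0, 0.0], [0.0, 2.0], [0.0, 4.0], [5.0, 5.0], [7.0, 7.0]]
prompt = [[9.0, 9.0]]
seq = build_llm_input(prompt, ctx, num_memory=2)
print(seq)  # 2 memory-token embeddings followed by the prompt embedding
```

In the actual method, the compression step is learned end-to-end so the memory tokens also carry a reliability signal about the retrieved context; the fixed pooling here only illustrates the shape of the pipeline.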

Takeaways, Limitations

Takeaways:
CARE improves the reliability of RAG systems by resolving the context-memory conflict problem.
It achieved an average performance improvement of 5.0% on QA and fact-checking benchmarks.
It presents a new direction for developing reliable RAG systems.
Limitations:
No specific limitations are stated in the paper.