This page curates AI-related papers published worldwide. All content is summarized using Google Gemini, and the site is operated on a non-profit basis. Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.
Investigating Context-Faithfulness in Large Language Models: The Roles of Memory Strength and Evidence Style
Created by
Haebom
Author
Yuepei Li, Kang Zhou, Qiao Qiao, Bach Nguyen, Qing Wang, Qi Li
Outline
This paper studies the context-faithfulness of large language models (LLMs) in retrieval-augmented generation (RAG). We analyze two factors that previous studies have not addressed: the LLM's memory strength (measured by how much its responses vary across different wordings of the same question) and the style in which evidence is presented. We find that questions with high memory strength lead LLMs to rely more on internal memory, and that presenting paraphrased evidence, rather than simply repeating it or adding details, increases LLMs' acceptance of external evidence. These results offer important takeaways for improving RAG and context-aware LLMs. The code is available at https://github.com/liyp0095/ContextFaithful .
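As a rough illustration of the memory-strength idea, here is a minimal Python sketch that scores a question by how consistently a model answers its paraphrases in a closed-book setting. The `model_answer` callable and the consistency ratio are illustrative assumptions, not the paper's exact metric.

```python
from collections import Counter

def memory_strength(model_answer, question_paraphrases):
    """Estimate memory strength as answer consistency across paraphrases.

    model_answer: callable mapping a question string to the model's
    closed-book answer (no retrieved context supplied).
    question_paraphrases: list of rewordings of the same question.

    Returns the fraction of paraphrases that yield the modal answer:
    1.0 means the model answers identically however the question is
    phrased (strong memory); values near 1/len(question_paraphrases)
    indicate weak memory.
    """
    answers = [model_answer(q).strip().lower() for q in question_paraphrases]
    modal_count = Counter(answers).most_common(1)[0][1]
    return modal_count / len(answers)

# Hypothetical usage with any chat-completion client `llm`:
# strength = memory_strength(lambda q: llm(q), [
#     "Who wrote 'The Old Man and the Sea'?",
#     "'The Old Man and the Sea' was written by whom?",
#     "Name the author of 'The Old Man and the Sea'.",
# ])
```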
•
Takeaways:
◦
We found that the LLM's memory strength and the way evidence is presented significantly influence context-faithfulness.
◦
We suggest that presenting paraphrased evidence is an effective way to increase LLMs' acceptance of external evidence (see the sketch after this list).
◦
The results provide important guidelines for improving RAG and context-aware LLMs.
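Below is a minimal sketch of the evidence-presentation idea from the Takeaways above: a RAG-style prompt builder that optionally rewords retrieved evidence before showing it to the model. The `build_context_prompt` helper, its prompt template, and the `llm`-backed paraphraser are hypothetical illustrations, not the paper's implementation.

```python
def build_context_prompt(question, evidence, paraphrase=None):
    """Assemble a RAG-style prompt.

    If `paraphrase` (a callable, e.g. another LLM call) is given, the
    evidence is reworded before being shown to the model -- the
    presentation style the paper finds more readily accepted than
    verbatim repetition or added detail.
    """
    shown = paraphrase(evidence) if paraphrase else evidence
    return (
        f"Context: {shown}\n"
        "Answer the question based on the context above.\n"
        f"Question: {question}\nAnswer:"
    )

# Hypothetical usage, paraphrasing evidence with the same client `llm`:
# prompt = build_context_prompt(
#     "Who wrote 'The Old Man and the Sea'?",
#     "Hemingway's 1952 novella 'The Old Man and the Sea' won the Pulitzer.",
#     paraphrase=lambda t: llm(f"Rewrite in your own words: {t}"),
# )
```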
•
Limitations:
◦
Further research is needed to examine the generality and limitations of the memory strength measurement method presented in this study.
◦
Additional experiments on different LLMs and datasets are needed to verify the generalizability of our results.
◦
Further research is needed on how factors beyond evidence style (e.g., the credibility and relevance of the evidence) affect the context-faithfulness of LLMs.