Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

KCR: Resolving Long-Context Knowledge Conflicts via Reasoning in LLMs

Created by
  • Haebom

Authors

Xianda Zheng, Zijian Huang, Meng-Fen Chiang, Michael J. Witbrock, Kaiqi Zhao

Outline

This paper proposes the Knowledge Conflict Reasoning (KCR) framework to address the difficulty large language models (LLMs) have in resolving conflicting knowledge from multiple sources, particularly conflicts between contradictory contexts in long texts. KCR uses reinforcement learning to train LLMs to identify and adhere to the context with stronger logical consistency when presented with conflicting contexts. It first extracts reasoning paths, expressed either as text or as local knowledge graphs, from the conflicting long-text contexts; the model is then rewarded for following the correct reasoning path, strengthening its ability to resolve knowledge conflicts in long contexts. Experimental results show that the framework significantly improves the knowledge conflict resolution capabilities of various LLMs.
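
Since the core idea is rewarding adherence to the logically stronger reasoning path, a minimal sketch may help make it concrete. The snippet below is purely illustrative and assumes a simplified setup: `path_adherence_reward`, `policy_sample`, and the moving-baseline update are hypothetical stand-ins, not the authors' actual implementation or reward design.

```python
# Minimal sketch of a KCR-style reward signal (assumption-laden, not the paper's code):
# the trainer rewards sampled reasoning that follows the path extracted from the
# logically stronger context, and penalizes reasoning that follows the weaker one.

import random

def path_adherence_reward(answer_steps, correct_path):
    """Fraction of the correct reasoning path reproduced, in order, by the answer."""
    matched = 0
    remaining = iter(answer_steps)
    for step in correct_path:
        for candidate in remaining:
            if candidate == step:
                matched += 1
                break
    return matched / len(correct_path)

# Toy example: two conflicting contexts yield two candidate reasoning paths;
# the more logically consistent one serves as the target.
correct_path = ["A causes B", "B causes C", "therefore C"]
conflicting_path = ["A causes D", "therefore D"]

def policy_sample():
    # Hypothetical stand-in for sampling a chain of reasoning steps from the LLM.
    return random.choice([correct_path, conflicting_path, correct_path[:2]])

baseline = 0.0
for _ in range(100):
    sample = policy_sample()
    reward = path_adherence_reward(sample, correct_path)
    advantage = reward - baseline          # simple moving-baseline policy gradient
    baseline = 0.9 * baseline + 0.1 * reward
    # A real implementation would backpropagate `advantage` through the policy's
    # log-probabilities (e.g., with PPO or a similar RL algorithm); omitted here.
```

The sketch only shows the shape of the reward: a scalar that grows as the sampled reasoning tracks the stronger path, which is what the reinforcement learning loop would optimize.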

Takeaways, Limitations

Takeaways:
Contributes to improving LLMs' ability to process long contexts.
Presents a novel approach to handling conflicting information.
Effectively improves LLM reasoning ability by leveraging reinforcement learning; a toy illustration of the reward idea appears after the Outline above.
Provides a general framework applicable to a variety of LLMs.
Limitations:
The performance of the proposed framework may depend on the reinforcement learning algorithm and reward function used.
The accuracy of the reasoning path extraction process can affect overall performance.
The framework may be effective only for certain types of knowledge conflicts.
Performance may vary depending on the quality and quantity of training data.