This page curates AI-related papers published worldwide. All content here is summarized using Google Gemini and operated on a non-profit basis. Copyright for each paper belongs to the authors and their institutions; please credit the source when sharing.
This paper proposes Long Question Coreference Adaptation (LQCA), a method centered on coreference resolution tailored to long contexts, to address the difficulty large language models (LLMs) face in understanding long contexts and answering questions effectively. LQCA comprises four main steps: resolving coreferences within subdocuments, calculating distances between mentions, defining representative mentions for coreference chains, and answering questions using mention replacement. By systematically processing information into manageable chunks, LQCA improves comprehension. Experimental results across various LLMs and datasets show significant performance improvements on the OpenAI-o1-mini and GPT-4o models, highlighting the effectiveness of coreference resolution in bridging context gaps in question answering. The code is available at https://github.com/OceannTwT/LQCA .
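The four steps above can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: all helper names are hypothetical, and the toy dictionary-based coreference and longest-surface-form heuristics stand in for the learned coreference model the paper actually uses.

```python
# Toy sketch of the LQCA pipeline (hypothetical helpers, not the paper's code).

def split_into_subdocs(text, size=50):
    """Split a long context into manageable word-level chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def resolve_coref(subdoc, entity, pronouns=("he", "she", "it", "they")):
    """Step 1 (toy): collect token positions mentioning `entity`,
    treating pronouns as coreferent with it."""
    mentions = []
    for idx, tok in enumerate(subdoc.split()):
        stripped = tok.strip(".,").lower()
        if stripped == entity.lower() or stripped in pronouns:
            mentions.append(idx)
    return mentions

def mention_distances(mentions):
    """Step 2: distances between consecutive mentions in a chain."""
    return [b - a for a, b in zip(mentions, mentions[1:])]

def representative_mention(chain_surface_forms):
    """Step 3 (toy heuristic): pick the longest surface form as the
    representative mention for the chain."""
    return max(chain_surface_forms, key=len)

def replace_mentions(subdoc, rep, pronouns=("he", "she", "it", "they")):
    """Step 4: rewrite pronouns with the representative mention so the
    question-answering model sees an unambiguous context."""
    out = []
    for tok in subdoc.split():
        core = tok.strip(".,")
        out.append(tok.replace(core, rep) if core.lower() in pronouns else tok)
    return " ".join(out)

rep = representative_mention(["Marie Curie", "She", "Curie"])
rewritten = replace_mentions("She founded a lab.", rep)
```

In this toy run, `rep` is "Marie Curie" and `rewritten` becomes "Marie Curie founded a lab.", illustrating how mention replacement removes the ambiguity that long-range pronouns create for a QA model.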
Takeaways, Limitations
•
Takeaways:
◦
A novel methodology is presented for improving LLM performance in understanding long contexts and answering questions.
◦
Experimentally shown to be effective, particularly on the OpenAI-o1-mini and GPT-4o models.
◦
Publicly released code supports reproducibility and extensibility.
•
Limitations:
◦
Experimental results are reported only for specific LLMs and datasets; generalizability requires further study.
◦
No analysis of the computational cost and efficiency of the LQCA method.
◦
No robustness assessment across different types of questions and contexts.