Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

MAC-Tuning: LLM Multi-Compositional Problem Reasoning with Enhanced Knowledge Boundary Awareness

Created by
  • Haebom

Author

Junsheng Huang, Zhitao He, Yucheng Huang, Sandeep Polisetty, Qingyun Wang, Yi R. Fung

Outline

This paper addresses hallucination, the generation of non-existent facts by large language models (LLMs), which are now widely used across applications. Previous work on confidence estimation has examined the model's awareness of its internal parameterized knowledge boundary, but has been limited to the single-problem setting. This paper proposes MAC-Tuning (Multiple Answers and Confidence Stepwise Tuning), a method for the more challenging multi-problem setting, in which multiple questions must be answered accurately at the same time. MAC-Tuning decouples the learning of answer prediction from the learning of confidence estimation during fine-tuning. Extensive experiments show that the proposed method improves average precision by up to 25% over existing methods.
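
To make the decoupling concrete, below is a minimal sketch of how the two fine-tuning stages could be set up as separate supervised examples: one stage targets the answers to a batch of questions, the other targets a confidence statement conditioned on the model's own answers. The function names, labels ("I am sure."), and exact-match checking are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (not the authors' code) of the two-step data construction
# implied by MAC-Tuning's decoupling: one set of examples supervises answer
# prediction, a second set supervises confidence estimation.

from dataclasses import dataclass
from typing import List


@dataclass
class MultiProblem:
    questions: List[str]      # several questions posed in one prompt
    gold_answers: List[str]   # reference answers, one per question


def build_answer_example(item: MultiProblem) -> dict:
    """Step 1: supervise answer prediction for all questions at once."""
    prompt = "Answer each question:\n" + "\n".join(
        f"Q{i + 1}: {q}" for i, q in enumerate(item.questions)
    )
    target = "\n".join(
        f"A{i + 1}: {a}" for i, a in enumerate(item.gold_answers)
    )
    return {"prompt": prompt, "target": target}


def build_confidence_example(item: MultiProblem, model_answers: List[str]) -> dict:
    """Step 2: supervise confidence estimation on the model's own answers.

    The confidence label is derived by comparing each model answer to the
    reference answer (exact string match here, purely for illustration).
    """
    prompt = "For each of your answers, state whether you are sure:\n" + "\n".join(
        f"Q{i + 1}: {q}\nYour answer: {a}"
        for i, (q, a) in enumerate(zip(item.questions, model_answers))
    )
    target = "\n".join(
        f"C{i + 1}: " + ("I am sure." if a.strip() == g.strip() else "I am unsure.")
        for i, (a, g) in enumerate(zip(model_answers, item.gold_answers))
    )
    return {"prompt": prompt, "target": target}


if __name__ == "__main__":
    item = MultiProblem(
        questions=["Who wrote Hamlet?", "What is the capital of Australia?"],
        gold_answers=["William Shakespeare", "Canberra"],
    )
    print(build_answer_example(item))
    print(build_confidence_example(item, ["William Shakespeare", "Sydney"]))
```

In this sketch the two example types would be used in successive fine-tuning stages, so that the loss for confidence estimation never interferes with the loss for answer prediction.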

Takeaways, Limitations

Takeaways: MAC-Tuning is an effective method for mitigating LLM hallucination in multi-problem settings. It significantly improves average precision over existing methods and demonstrates the value of separating answer prediction from confidence estimation.
Limitations: The reported gains may be limited to specific datasets or problem types. Additional experiments across more LLM architectures and datasets are needed, as is further analysis of the accuracy of confidence estimation in multi-problem settings.