This paper addresses the problem of large language models (LLMs), which are widely deployed across applications, generating non-existent facts, a phenomenon known as hallucination. Prior research estimates a model's confidence by probing the knowledge boundary of its internal parameterized knowledge, but has been limited to the single-problem setting, where each question is answered in isolation. We propose MAC-Tuning (Multiple Answers and Confidence Stepwise Tuning), a method for the more challenging multi-problem setting, in which multiple questions must be answered correctly at once. MAC-Tuning decouples the learning of answer prediction from the learning of confidence estimation during fine-tuning on reference data. Extensive experiments demonstrate that the proposed method improves average accuracy by up to 25% over existing methods.
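The decoupling described above can be illustrated with a minimal sketch of how the two training stages might construct their supervision data: one stage pairs a multi-question prompt with gold answers only, and the other pairs the prompt plus the model's own answers with per-question certainty labels. All function names, prompt templates, and label strings below are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch: two-step data construction for decoupled fine-tuning.
# Stage 1 supervises answer prediction; stage 2 supervises confidence
# estimation. Names and formats are hypothetical, for illustration only.

def build_answer_example(questions, gold_answers):
    """Stage 1: multi-question prompt -> gold answers (no confidence)."""
    prompt = "\n".join(f"Q{i}: {q}" for i, q in enumerate(questions, 1))
    target = "\n".join(f"A{i}: {a}" for i, a in enumerate(gold_answers, 1))
    return {"input": prompt, "output": target}


def build_confidence_example(questions, model_answers, gold_answers):
    """Stage 2: prompt with the model's own answers -> certainty labels.

    Labels are derived by comparing each model answer to the gold answer,
    so the model learns to flag answers it is likely to get wrong.
    """
    qa = "\n".join(
        f"Q{i}: {q}\nA{i}: {m}"
        for i, (q, m) in enumerate(zip(questions, model_answers), 1)
    )
    labels = "\n".join(
        f"C{i}: " + ("certain" if m.strip().lower() == g.strip().lower()
                     else "uncertain")
        for i, (m, g) in enumerate(zip(model_answers, gold_answers), 1)
    )
    return {"input": qa, "output": labels}


questions = ["Capital of France?", "Largest planet?"]
gold = ["Paris", "Jupiter"]
model = ["Paris", "Saturn"]  # second answer is wrong

step1 = build_answer_example(questions, gold)
step2 = build_confidence_example(questions, model, gold)
print(step2["output"])  # -> C1: certain / C2: uncertain
```

Because the two stages use disjoint supervision targets, gradients for answer quality and gradients for calibration never compete within a single training example, which is one plausible reading of why the decoupling helps in the multi-problem setting.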