This paper proposes GenKI, a framework for improving open-ended question answering (OpenQA) with large language models (LLMs). GenKI jointly explores knowledge integration and controllable generation, addressing two key challenges of LLM-based OpenQA: integrating external knowledge effectively and adapting the generated answer format to diverse task settings. To this end, we introduce a knowledge integration model that retrieves relevant knowledge and fine-tunes on it, and we enable controllable generation through an ensemble of models guided by consistency, fluency, and answer-format criteria. Experiments on the TriviaQA, MSMARCO, and CMRC2018 datasets demonstrate GenKI's effectiveness against state-of-the-art baselines, and further analysis reveals a linear relationship between how frequently a piece of knowledge is retrieved and the model's ability to recall it accurately.