This paper aims to improve the quality of Query Building, the process of converting natural language questions into SPARQL queries, in a knowledge graph question answering (KGQA) system that uses a large language model (LLM). Existing LLM-based KGQA systems share a limitation: it is generally unknown whether the benchmark datasets or the underlying knowledge graphs were included in the LLM's training data. We therefore present a new methodology for evaluating the quality of an LLM's SPARQL query generation under several conditions: (1) zero-shot SPARQL generation, (2) knowledge injection, and (3) anonymized knowledge injection. With this methodology, we provide a first estimate of how much the LLM's training data contributes to QA quality, and we assess the method's generalizability by distinguishing the LLM's actual capability from the effect of training-data memorization. The proposed method is portable and robust, can be applied to a variety of knowledge graphs, and yields consistent insights.
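The three evaluation conditions can be read as three prompt-construction strategies. The following minimal sketch in Python is only illustrative; all function names, parameters, and prompt wording here are assumptions, not the authors' implementation.

```python
# Illustrative sketch of the three evaluation conditions as prompt builders.
# All names (zero_shot_prompt, triples, mapping, ...) are hypothetical.

def zero_shot_prompt(question: str) -> str:
    # (1) Zero-shot: the LLM sees only the question and must rely on
    # whatever it memorized about the knowledge graph during training.
    return f"Write a SPARQL query that answers: {question}"

def knowledge_injection_prompt(question: str, triples: list[str]) -> str:
    # (2) Knowledge injection: relevant subgraph triples, with their real
    # IRIs and labels, are included in the prompt as context.
    context = "\n".join(triples)
    return (
        "Given the following knowledge graph triples:\n"
        f"{context}\n"
        f"Write a SPARQL query that answers: {question}"
    )

def anonymized_injection_prompt(question: str, triples: list[str],
                                mapping: dict[str, str]) -> str:
    # (3) Anonymized knowledge injection: the same triples, but entity and
    # relation identifiers are replaced with opaque placeholders, so the
    # LLM cannot fall back on names memorized from its training data.
    anonymized = [
        " ".join(mapping.get(token, token) for token in triple.split())
        for triple in triples
    ]
    context = "\n".join(anonymized)
    return (
        "Given the following knowledge graph triples:\n"
        f"{context}\n"
        f"Write a SPARQL query that answers: {question}"
    )
```

Comparing generated queries across these three prompts is what allows the effect of training-data memorization (condition 1) to be separated from the model's ability to use explicitly supplied graph knowledge (conditions 2 and 3).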