As psychometric surveys designed to evaluate the characteristics of large language models (LLMs) proliferate, this paper highlights the need for scalable questionnaire generation suited to LLMs. In particular, ensuring construct validity, i.e., verifying that generated items truly measure the intended characteristic, is crucial. Whereas this has traditionally required large-scale, costly human data collection, this study presents a framework that simulates virtual respondents with LLMs. The framework introduces respondent parameters that account for the factors behind diverse responses to survey items measuring the same characteristic. By simulating respondents with different parameters, it identifies questionnaire items that effectively measure the intended characteristic. Experimental results on three psychological trait theories (Big5, Schwartz, and VIA) demonstrate that the proposed parameter generation method and simulation framework effectively identify items with high validity, and that LLMs can generate plausible parameters from characteristic definitions and simulate respondent behavior to verify item validity. The problem formulation, metrics, methodology, and dataset of this study suggest new directions for cost-effective questionnaire development and a deeper understanding of how LLMs simulate human survey responses.
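To make the setup concrete, the following is a minimal sketch, not the paper's implementation, of how parameterized virtual respondents could be used to score item validity. All names (`sample_respondent`, `answer_item`, `item_validity`), the specific nuisance parameters, and the noise model are illustrative assumptions; the LLM that would act as the respondent is replaced here by a simple stand-in so the example runs on its own.

```python
import random
import statistics

# Hypothetical respondent parameters (illustrative, not from the paper):
# each virtual respondent has a latent trait level plus nuisance factors
# (e.g., acquiescence, response noise) that make answers diverge even when
# the underlying characteristic is identical.
def sample_respondent():
    return {
        "trait_level": random.uniform(0.0, 1.0),    # intended characteristic
        "acquiescence": random.uniform(-0.5, 0.5),  # tendency to agree regardless of content
        "noise": random.uniform(0.05, 0.3),         # response inconsistency
    }

# Stand-in for the LLM acting as a virtual respondent: given the respondent's
# parameters and an item's loading on the target characteristic, produce a
# 1-5 Likert answer. A real implementation would prompt an LLM instead.
def answer_item(respondent, item_loading):
    latent = item_loading * respondent["trait_level"] + respondent["acquiescence"]
    latent += random.gauss(0.0, respondent["noise"])
    return max(1, min(5, round(1 + 4 * latent)))

# Validity proxy: correlation between simulated answers and the trait levels
# of the simulated respondents; items with a low correlation would be flagged.
def item_validity(item_loading, n_respondents=200):
    respondents = [sample_respondent() for _ in range(n_respondents)]
    answers = [answer_item(r, item_loading) for r in respondents]
    traits = [r["trait_level"] for r in respondents]
    return statistics.correlation(traits, answers)

if __name__ == "__main__":
    for loading in (0.9, 0.5, 0.1):  # strongly / weakly / barely related items
        print(f"loading={loading:.1f} -> validity proxy={item_validity(loading):.2f}")
```

Under these assumptions, items that load strongly on the target characteristic yield answers that track the simulated trait levels despite the nuisance parameters, while weakly related items do not, which mirrors the screening role the simulation framework plays in the paper.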