In this paper, we present an adaptive questioning strategy that gathers information to reduce uncertainty about potential entities. Leveraging the generalization ability and world knowledge of large language models (LLMs), we quantify uncertainty with a meta-learned language model that simulates future observations. Through autoregressive forward simulations, we estimate how much a candidate question would reduce epistemic uncertainty, and use this estimate to select the most informative question to ask next. Experiments on a 20-questions game, dynamic polling, and adaptive student assessment show that our approach outperforms existing methods.
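As a rough illustration of the selection rule only (not the paper's implementation), the sketch below scores candidate questions by their expected reduction in uncertainty, simulating both possible answers under the current belief and averaging the resulting entropy drop. The toy entity set, the attribute-style questions, and the helper names (`expected_information_gain`, `next_question`) are illustrative stand-ins: in the approach described above, answer simulation comes from autoregressive rollouts of a meta-learned LLM rather than an explicit posterior over a small entity table.

```python
import math

# Toy 20-questions setup: a small set of candidate entities, each described
# by binary attributes. An explicit posterior over this table stands in for
# the meta-learned LLM's simulated answers so the sketch is self-contained.
ENTITIES = {
    "dog":   {"is_animal": 1, "can_fly": 0, "is_metal": 0},
    "eagle": {"is_animal": 1, "can_fly": 1, "is_metal": 0},
    "plane": {"is_animal": 0, "can_fly": 1, "is_metal": 1},
    "spoon": {"is_animal": 0, "can_fly": 0, "is_metal": 1},
}
QUESTIONS = ["is_animal", "can_fly", "is_metal"]


def entropy(posterior):
    """Shannon entropy (in nats) of a distribution over entities."""
    return -sum(p * math.log(p) for p in posterior.values() if p > 0)


def update(posterior, question, answer):
    """Condition the posterior on an observed yes/no answer."""
    new = {e: p for e, p in posterior.items() if ENTITIES[e][question] == answer}
    z = sum(new.values())
    return {e: p / z for e, p in new.items()} if z > 0 else posterior


def expected_information_gain(posterior, question):
    """Simulate both possible answers and average the entropy reduction."""
    prior_h = entropy(posterior)
    gain = 0.0
    for answer in (0, 1):
        p_answer = sum(p for e, p in posterior.items()
                       if ENTITIES[e][question] == answer)
        if p_answer > 0:
            gain += p_answer * (prior_h - entropy(update(posterior, question, answer)))
    return gain


def next_question(posterior, asked):
    """Pick the unasked question with the largest expected uncertainty reduction."""
    candidates = [q for q in QUESTIONS if q not in asked]
    return max(candidates, key=lambda q: expected_information_gain(posterior, q))


if __name__ == "__main__":
    uniform = {e: 1 / len(ENTITIES) for e in ENTITIES}
    print(next_question(uniform, asked=set()))  # most informative first question
```

In this toy version the expectation over answers is exact because the answer space is binary and the posterior is explicit; with an LLM simulator, the same quantity would instead be approximated by averaging over sampled forward rollouts.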