As large language models (LLMs) see increasingly wide use, accurately assessing epistemic uncertainty, which reflects a model's knowledge deficits, has become crucial. Quantifying this uncertainty is challenging, however, because aleatoric uncertainty arising from multiple valid answers confounds the measurement. We find that mitigating the bias introduced by prompts in a visual question answering (VQA) task improves GPT-4o's uncertainty quantification. Furthermore, exploiting the tendency of LLMs to copy input information when their confidence is low, we analyze how this prompt bias affects measured epistemic and aleatoric uncertainty at various unbiased confidence levels in GPT-4o and Qwen2-VL.
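For context, a common information-theoretic formulation (a standard decomposition offered here for illustration, not necessarily the estimator used in this work) separates total predictive uncertainty into aleatoric and epistemic parts. Writing $x$ for the input (image and question), $y$ for the answer, and $\theta$ for sampled model states, the total entropy of the marginal predictive distribution splits as
\[
\underbrace{\mathcal{H}\!\left[\,\mathbb{E}_{\theta}\, p(y \mid x, \theta)\,\right]}_{\text{total}}
\;=\;
\underbrace{\mathbb{E}_{\theta}\,\mathcal{H}\!\left[\,p(y \mid x, \theta)\,\right]}_{\text{aleatoric}}
\;+\;
\underbrace{\mathcal{I}\!\left[\,y;\, \theta \mid x\,\right]}_{\text{epistemic}},
\]
where the expected conditional entropy captures ambiguity from multiple valid answers and the mutual information term captures the model's knowledge deficits.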