This study conducted an online experiment (N = 560) to investigate whether web search results can help users identify inaccurate content, or "hallucinations," generated by large language models (LLMs). We compared two conditions in which search results for LLM-generated content were provided, static (fixed search results supplied by the LLM) or dynamic (participant-driven search), against a control condition with no search results. We analyzed participants' perceptions of the accuracy of LLM-generated content across three types (genuine, minor hallucination, severe hallucination), their confidence in their accuracy assessments, and their overall evaluations of the LLM. Participants in both the static and dynamic search conditions rated hallucinated content as less accurate and perceived the LLM more negatively than those in the control condition. However, participants in the dynamic search condition evaluated genuine content more accurately and reported higher overall confidence, highlighting the practical implications of integrating web search capabilities into LLMs in real-world settings.