This paper examines the hallucinations and related performance limitations that arise in large language models (LLMs) and LLM-based agents from the perspective of computational complexity. We show that, beyond a certain complexity, LLMs can neither perform computational and agentic tasks nor verify their correctness.