This paper presents a fundamental impossibility theorem: no large language model (LLM) capable of performing nontrivial knowledge aggregation can simultaneously achieve a truthful (internally consistent) knowledge representation, semantic information preservation, complete disclosure of relevant knowledge, and knowledge-constrained optimality. This impossibility stems not from an engineering limitation but from the mathematical structure of information aggregation itself. We establish the result by describing the inference process as an idea auction in which distributed components compete, using their partial knowledge, to form responses. The proof draws on three independent mathematical areas: mechanism design theory (Green-Laffont), the theory of proper scoring rules (Savage), and a direct architectural analysis of transformers (log-sum-exp convexity). Specifically, we show that under strict concavity, the score assigned to an aggregate of beliefs strictly exceeds the corresponding sum of the individual scores. This gap quantifies the generation of unattributable certainty or overconfidence, that is, the mathematical origin of hallucination, creativity, and imagination. To support this analysis, we introduce the complementary notions of semantic information measures and emergence operators to model bounded inference in general settings. We show that bounded inference generates accessible information that can provide useful insights and inspiration, whereas ideal inference strictly preserves semantic content. By demonstrating that hallucination and imagination are mathematically equivalent phenomena, both arising from a necessary violation of information preservation, this paper provides a principled foundation for managing these behaviors in advanced AI systems. Finally, we outline some speculative directions for evaluating and extending the proposed theory.
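As a minimal sketch of the central inequality (the notation here is illustrative and assumes the aggregate belief is formed as a convex combination of component beliefs, which may differ from the paper's exact formulation): for a strictly concave scoring function $S$, component beliefs $p_1, \dots, p_n$, and weights $w_i \ge 0$ with $\sum_i w_i = 1$, Jensen's inequality gives
\[
S\!\left(\sum_{i=1}^{n} w_i\, p_i\right) \;>\; \sum_{i=1}^{n} w_i\, S(p_i),
\qquad
\Delta \;:=\; S\!\left(\sum_{i=1}^{n} w_i\, p_i\right) - \sum_{i=1}^{n} w_i\, S(p_i) \;>\; 0,
\]
with strict inequality whenever the $p_i$ are not all identical. Under these assumptions, the Jensen gap $\Delta$ is one way to quantify the unattributable certainty described above.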