The Unified Cognitive Consciousness Theory (UCCT) holds that the intelligence of large language models (LLMs) does not reside in the models themselves; rather, an LLM is a vast, unconscious repository of patterns. Reasoning occurs only when external anchoring mechanisms (such as few-shot prompts, retrieval-augmented context, fine-tuning, or multi-agent argumentation) activate task-relevant patterns. UCCT formalizes this process as a Bayesian competition between the statistical priors learned during pre-training and context-supplied target patterns, providing a single quantitative account that unifies existing adaptation techniques. The theory rests on three principles (threshold overshoot, modality universality, and density-distance predictive power) and is validated through cross-domain demonstrations in text QA, image captioning, and multi-agent argumentation, together with in-depth experiments on arithmetic in bases 8, 9, and 10 and layer-by-layer path analysis. The experimental results support UCCT's predictions by exhibiting threshold behavior, asymmetric interference, and memory hysteresis. By showing that an LLM's "intelligence" is not inherent in the model but is generated through semantic anchoring, UCCT offers practical guidance for prompt engineering, interpretable diagnostics, model selection, and alignment-driven system design.
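The Bayesian competition at the heart of UCCT can be made concrete with a minimal sketch; the notation below (candidate pattern $h$, anchoring context $c$, threshold $\tau$) is illustrative and assumed here, not necessarily the paper's own. The pre-trained prior $P(h)$ competes with the anchoring evidence $P(c \mid h)$ supplied by prompts, retrieved context, fine-tuning, or agent interaction:
\[
P(h \mid c) \;=\; \frac{P(c \mid h)\, P(h)}{\sum_{h'} P(c \mid h')\, P(h')}.
\]
On this reading, threshold overshoot corresponds to the target pattern $h^{*}$ becoming active only once its log-posterior margin over the strongest competing pattern exceeds some $\tau > 0$:
\[
\log \frac{P(h^{*} \mid c)}{\max_{h \neq h^{*}} P(h \mid c)} \;>\; \tau.
\]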