Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

The Unified Cognitive Consciousness Theory for Language Models: Anchoring Semantics, Thresholds of Activation, and Emergent Reasoning

Created by
  • Haebom

Author

Edward Y. Chang, Zeyneb N. Kaya, Ethan Chang

Outline

The Unified Cognitive Consciousness Theory (UCCT) holds that the intelligence of large language models (LLMs) does not reside inside the model; rather, the model is a vast, unconscious repository of patterns, and reasoning emerges only when external anchoring mechanisms (few-shot prompts, retrieval-augmented context, fine-tuning, or multi-agent debate) activate task-relevant patterns. UCCT formalizes this process as a Bayesian competition between statistical priors learned during pre-training and context-grounded target patterns, providing a single quantitative account that unifies existing adaptation techniques. The theory rests on three principles (threshold overshoot, modality universality, and density-distance predictive power) and is validated through cross-domain demonstrations in text QA, image captioning, and multi-agent debate, as well as in-depth experiments on arithmetic in bases 8, 9, and 10 and layer-wise pathway analysis. The experimental results support UCCT's predictions by exhibiting threshold behavior, asymmetric interference, and memory hysteresis. By showing that an LLM's "intelligence" is not inherent in the model but generated through semantic anchoring, UCCT offers practical guidance for interpretable diagnostics, prompt engineering, model selection, and alignment-driven system design.
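The Bayesian competition described above can be sketched as a toy model. This is an illustrative simplification, not the paper's formulation: the prior probability of the context-anchored target pattern, the per-example log-likelihood ratio, and the decision threshold are all assumed values chosen to make the threshold behavior visible.

```python
import math

def posterior_target_prob(n_examples: int,
                          prior_target: float = 0.2,
                          llr_per_example: float = 0.8) -> float:
    """Toy Bayesian competition between a pre-training prior and a
    context-anchored target pattern. Each in-context example is assumed
    to contribute a fixed log-likelihood ratio in favor of the target
    pattern (an illustrative assumption)."""
    log_odds = (math.log(prior_target / (1.0 - prior_target))
                + n_examples * llr_per_example)
    return 1.0 / (1.0 + math.exp(-log_odds))  # logistic of the log-odds

def anchored(n_examples: int, threshold: float = 0.5) -> bool:
    # Threshold behavior: the target pattern "wins" the competition
    # only once its posterior overshoots the decision threshold.
    return posterior_target_prob(n_examples) > threshold

# With a weak prior (0.2), zero-shot stays below threshold; adding
# few-shot anchors accumulates evidence and flips the competition.
print([round(posterior_target_prob(n), 3) for n in range(4)])
print([anchored(n) for n in range(4)])
```

Under these assumed numbers, zero- and one-shot prompts remain below threshold while two or more examples tip the posterior past it, mirroring the sharp few-shot transitions the theory predicts.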

Takeaways, Limitations

Takeaways:
  • Presents a new theoretical framework (UCCT) for intelligence in LLMs.
  • Provides practical guidance on prompt engineering, model selection, and alignment-driven system design.
  • Contributes to improving the interpretability of LLMs.
  • Provides a single quantitative account that unifies existing adaptation techniques.
  • Validates the theory through diverse experiments.

Limitations:
  • Further research is needed on the generality and scope of UCCT.
  • The scope of the presented experiments may be limited.
  • Applicability to more complex LLM architectures needs verification.
  • Further performance evaluation in real-world applications is needed.