This paper explores the root causes of hallucination in large language models (LLMs). To this end, we 1) propose the Distributional Semantics Tracing (DST) framework, which generates causal maps grounded in distributional semantics, treating meaning as a function of context; 2) identify the specific layer, termed the commitment layer, at which a hallucination becomes inevitable; and 3) elucidate predictable failure modes, such as Reasoning Shortcut Hijacks, that arise from the conflict between System 1 (fast, associative) and System 2 (slow, deliberate) reasoning. Measuring the consistency of contextual paths with DST yields a strong negative correlation (-0.863) with the incidence of hallucinations, suggesting that hallucinations are the predictable outcome of inherent semantic weaknesses.
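
The final claim amounts to a rank-style correlation between a per-prompt consistency score and observed hallucinations. The sketch below illustrates that evaluation step only; `trace_consistency` is a hypothetical stand-in for the DST measurement, and the use of Spearman correlation is an assumption rather than the paper's stated choice.

```python
# Minimal sketch (assumptions: a hypothetical per-prompt consistency scorer and
# Spearman correlation as the metric) of correlating contextual-path consistency
# with hallucination incidence.
from typing import Callable, Sequence
from scipy.stats import spearmanr


def correlate_consistency_with_hallucination(
    prompts: Sequence[str],
    hallucinated: Sequence[int],                # 1 if the model hallucinated on the prompt, else 0
    trace_consistency: Callable[[str], float],  # hypothetical DST-style consistency score
) -> float:
    """Return the rank correlation between consistency scores and hallucination labels."""
    scores = [trace_consistency(p) for p in prompts]
    rho, _p_value = spearmanr(scores, hallucinated)
    # A strongly negative rho (e.g., around -0.86) would indicate that low
    # contextual-path consistency predicts hallucination.
    return rho
```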