This paper analyzes the causes of inference errors in Reasoning Large Language Models (RLLMs) trained with the Chain-of-Thought (CoT) strategy. We evaluate the o1-mini, o3-mini, DeepSeek-R1, Claude 3.7 Sonnet, Gemini 2.5 Pro Preview, and Grok 3 Mini Beta models on the graph coloring problem, a constraint satisfaction problem whose complexity can be varied. We find that a significant share of errors across all models stems from hallucinating graph edges that are not explicitly specified in the prompt. This hallucination phenomenon persists regardless of problem complexity and semantic framing, and we confirm that it generalizes in small-scale experiments on stable matching problems. The study thus identifies a failure mode in which RLLMs misrepresent the features of the problem instance and proposes a design strategy to mitigate it.
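As an illustration of the kind of check implied by this failure mode, the sketch below flags edges that a model's reasoning relies on but that never appear in the prompt's edge list. It is a minimal, hypothetical example: the function name, edge lists, and the assumption that cited edges have already been extracted from the model's output are not part of the paper's actual evaluation pipeline.

```python
# Minimal sketch (hypothetical, not the paper's evaluation code):
# flag edges cited in a model's reasoning that were never given in the prompt.

def hallucinated_edges(prompt_edges, cited_edges):
    """Return edges the model relied on that are absent from the prompt."""
    given = {frozenset(e) for e in prompt_edges}  # undirected edges from the prompt
    return [e for e in cited_edges if frozenset(e) not in given]

# Example: the prompt specifies a 4-node graph with three edges,
# but the model's chain of thought also treats (2, 3) as adjacent.
prompt_edges = [(0, 1), (1, 2), (0, 3)]
cited_edges = [(0, 1), (2, 3)]
print(hallucinated_edges(prompt_edges, cited_edges))  # -> [(2, 3)]
```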