Hallucinations in large language models (LLMs) are a critical obstacle to reliability, and token-level hallucination detection has recently become an active research focus. This paper analyzes how hallucination signals are distributed within sequences of hallucinated tokens. Using token-level annotations from the RAGTruth corpus, we find that the first hallucinated token is significantly easier to detect than subsequent tokens. This structural property holds consistently across models, suggesting that the first hallucinated token plays a crucial role in token-level hallucination detection.