Detecting hallucinations in large language models (LLMs) is crucial for building trust. Token-level detection enables more granular interventions, yet how hallucination signals are distributed across the tokens of a hallucinated span has not been studied. Using token-level annotations from the RAGTruth corpus, we find that the first hallucination token is substantially easier to detect than subsequent tokens. This structural pattern holds across models, suggesting that the first hallucination token plays a key role in token-level hallucination detection.
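
As a minimal sketch (not the authors' code), the position-wise comparison described above can be illustrated as follows: given per-token hallucination labels (as in RAGTruth) and per-token scores from some detector, compare the detector's scores at the first token of each hallucinated span versus the subsequent tokens. The field names and the toy scores are illustrative assumptions.

```python
from statistics import mean

def split_span_positions(labels):
    """Return indices of first vs. subsequent tokens of hallucinated spans.

    `labels` is a per-token binary sequence: 1 = hallucinated, 0 = supported.
    """
    first, subsequent = [], []
    for i, lab in enumerate(labels):
        if lab == 1:
            if i == 0 or labels[i - 1] == 0:
                first.append(i)       # span-initial hallucination token
            else:
                subsequent.append(i)  # continuation of a hallucinated span
    return first, subsequent

def position_wise_scores(examples):
    """Average detector score at first vs. subsequent hallucination tokens.

    Each example is a dict with `labels` (per-token 0/1 hallucination
    annotations) and `scores` (per-token scores from a hypothetical detector).
    """
    first_scores, subsequent_scores = [], []
    for ex in examples:
        first_idx, subs_idx = split_span_positions(ex["labels"])
        first_scores += [ex["scores"][i] for i in first_idx]
        subsequent_scores += [ex["scores"][i] for i in subs_idx]
    return mean(first_scores), mean(subsequent_scores)

# Toy usage with made-up scores: a higher average at span-initial tokens would
# reflect the pattern reported above.
examples = [
    {"labels": [0, 1, 1, 1, 0], "scores": [0.1, 0.9, 0.6, 0.5, 0.2]},
    {"labels": [0, 0, 1, 1, 0], "scores": [0.2, 0.1, 0.8, 0.4, 0.1]},
]
print(position_wise_scores(examples))  # e.g. (0.85, 0.5)
```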