This paper addresses the problem of detecting hallucinated spans in the outputs of large language models. While hallucination detection at the global output level has been studied extensively, span-level detection has received comparatively little attention, even though it is crucial in practice. Previous research has shown that attention exhibits anomalous patterns when hallucinations occur. Building on this observation, we extract features from the attention matrix that provide complementary signals about (a) whether specific tokens are influential or ignored, (b) whether attention is biased toward a particular subset of tokens, and (c) whether tokens are generated from a narrow or broad context. These features are fed into a Transformer-based classifier that identifies hallucinated spans via sequence labeling. Experimental results show that the proposed method outperforms strong baselines at detecting hallucinated spans in tasks with long inputs, such as text generation and summarization.
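To make the pipeline concrete, the sketch below illustrates one plausible way to realize it; it is not the authors' implementation. The three features (received attention mass, attention mass on a source subset, and outgoing attention entropy) are assumed stand-ins for (a), (b), and (c), and the tiny Transformer tagger stands in for the classifier described above.

```python
# Illustrative sketch only: attention-derived per-token features + a small
# Transformer encoder for token-level (sequence) labeling of hallucinated spans.
import torch
import torch.nn as nn

def attention_features(attn: torch.Tensor, source_mask: torch.Tensor) -> torch.Tensor:
    """attn: (heads, seq, seq) attention weights; source_mask: (seq,) bool, True for source/prompt tokens.
    Returns (seq, 3) features per token: received mass, source bias, outgoing entropy."""
    a = attn.mean(dim=0)                                   # average over heads -> (seq, seq)
    received = a.sum(dim=0)                                # (a) how much each token is attended to
    source_bias = (a * source_mask.float()).sum(dim=1)     # (b) mass each token places on the source subset
    p = a.clamp_min(1e-12)
    entropy = -(p * p.log()).sum(dim=1)                    # (c) narrow (low) vs. broad (high) context
    return torch.stack([received, source_bias, entropy], dim=1)

class SpanTagger(nn.Module):
    """Transformer encoder over per-token attention features, predicting a binary tag per token."""
    def __init__(self, n_feats: int = 3, d_model: int = 64, n_layers: int = 2):
        super().__init__()
        self.proj = nn.Linear(n_feats, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 2)                  # hallucinated vs. not, per token

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(self.proj(feats)))

# Toy usage with random attention weights (hypothetical shapes).
heads, seq = 8, 16
attn = torch.softmax(torch.randn(heads, seq, seq), dim=-1)
source_mask = torch.zeros(seq, dtype=torch.bool)
source_mask[:6] = True                                     # first 6 tokens treated as the source/prompt
feats = attention_features(attn, source_mask).unsqueeze(0) # add batch dimension
logits = SpanTagger()(feats)                               # (1, seq, 2) per-token tag logits
```

In practice the features would be computed from the attention maps recorded while the language model generates its output, and the tagger would be trained on span-annotated data; the exact feature definitions and classifier configuration used in the paper may differ from this sketch.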