This paper proposes a novel approach, Attention-Guided SElf-Reflection (AGSER), to address the hallucination problem that hinders the effective application of large language models (LLMs). AGSER leverages attention contributions to split the input query into attentive and non-attentive queries. Each query is then processed separately through the LLM, and a consistency score is computed between the newly generated response and the original answer. The difference between the two consistency scores serves as the hallucination measure. AGSER not only improves the effectiveness of hallucination detection but also substantially reduces computational overhead, requiring only three passes through the LLM and two sets of tokens. Extensive experiments with four widely used LLMs across three hallucination benchmarks demonstrate that the proposed approach significantly outperforms existing methods.
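To make the described pipeline concrete, the following is a minimal Python sketch of the AGSER procedure. The helper callables (`attention_contribution`, `generate`, `consistency`) and the `top_fraction` split ratio are illustrative assumptions rather than details taken from the paper: they stand in for an attention-score extractor, an LLM generation call, and a response-consistency metric (e.g., an embedding-similarity or NLI score), respectively.

```python
from typing import Callable, List

def agser_score(
    query_tokens: List[str],
    original_answer: str,
    attention_contribution: Callable[[List[str]], List[float]],  # assumed helper
    generate: Callable[[List[str]], str],                        # assumed helper
    consistency: Callable[[str, str], float],                    # assumed helper
    top_fraction: float = 0.5,                                   # assumed split ratio
) -> float:
    """Sketch of AGSER: difference of consistency scores between the
    attentive-query response and the non-attentive-query response."""
    # 1. Rank query tokens by their attention contribution.
    scores = attention_contribution(query_tokens)
    ranked = sorted(range(len(query_tokens)), key=lambda i: scores[i], reverse=True)
    k = max(1, int(len(query_tokens) * top_fraction))

    # 2. Split into an attentive query (high-attention tokens) and a
    #    non-attentive query (the remaining tokens), preserving token order.
    attentive_query = [query_tokens[i] for i in sorted(ranked[:k])]
    non_attentive_query = [query_tokens[i] for i in sorted(ranked[k:])]

    # 3. Run each query through the LLM. Together with the initial pass
    #    that produced the original answer and its attention map, this
    #    accounts for the three LLM passes mentioned in the abstract.
    attentive_answer = generate(attentive_query)
    non_attentive_answer = generate(non_attentive_query)

    # 4. Consistency of each new response with the original answer;
    #    their difference is used as the hallucination measure.
    c_attentive = consistency(attentive_answer, original_answer)
    c_non_attentive = consistency(non_attentive_answer, original_answer)
    return c_attentive - c_non_attentive
```

In this sketch, only the two reduced token sets (attentive and non-attentive) are re-processed, which is the source of the computational savings the abstract claims relative to methods that require many full sampled generations.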