This paper proposes RePPL, a novel method for detecting and explaining hallucinations in large language models (LLMs). Existing hallucination detection methods focus on uncertainty measurement but fail to explain why hallucinations arise. To overcome this limitation, we compute a token-level uncertainty score that accounts for both the uncertainty arising during semantic propagation and the uncertainty arising during language generation. These token-level scores are then aggregated as a perplexity-style logarithmic mean to produce an overall hallucination score. Our method achieves strong performance, with an average AUC of 0.833 across various QA datasets and state-of-the-art models, and the token-level scores provide an interpretable account of where and why hallucinations occur.
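As a rough illustration of the perplexity-style aggregation, one schematic form (an assumption for exposition, not the paper's exact formulation) treats $u_t$ as the probability-like per-token score after recalibration by the two uncertainty sources and takes the exponentiated negative log mean over the $T$ generated tokens:

\[
\mathrm{RePPL}(x, y) \;=\; \exp\!\left( -\frac{1}{T} \sum_{t=1}^{T} \log u_t \right),
\]

so that, analogously to perplexity, lower recalibrated per-token scores yield a higher overall hallucination score, while the individual $\log u_t$ terms indicate which tokens contribute most to it.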