This paper proposes GOAT (GFlOwNet-guided distribution Alignment), a novel method for addressing hallucinations in language model (LM)-based text-to-speech (TTS) systems. Unlike existing methods, GOAT is a post-training framework that mitigates hallucinations without requiring excessive training resources or adding inference latency. Motivated by an analysis revealing a strong correlation between model uncertainty and hallucinations, we reformulate TTS generation as a trajectory flow optimization problem, introducing enhanced sub-trajectory balance objectives with sharpened internal rewards as target distributions. We further integrate reward temperature reduction and learning rate optimization to balance training stability and performance. Experiments demonstrate strong generalization and effectiveness: GOAT reduces character error rates by over 50% and uncertainty by up to 58% on challenging test cases.
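To make the trajectory-flow view concrete, the following is a minimal sketch of a standard sub-trajectory balance (SubTB) residual with a temperature-sharpened terminal reward, the generic GFlowNet objective the abstract builds on. It is an illustrative assumption, not the paper's exact "enhanced" objective: the function `subtb_loss`, its arguments, and the reward temperature `tau` are hypothetical names introduced here for exposition.

```python
import math

def subtb_loss(log_F, log_pf, log_pb, log_reward, tau=0.5):
    """Sum of squared SubTB residuals over all sub-trajectories (i, j), i < j.

    log_F:      log state flows for states s_0..s_n (the terminal entry is
                overwritten by the sharpened reward below)
    log_pf:     log P_F(s_{t+1} | s_t) for t = 0..n-1 (forward policy)
    log_pb:     log P_B(s_t | s_{t+1}) for t = 0..n-1 (backward policy)
    log_reward: log R(x) for the terminal state
    tau:        reward temperature; tau < 1 sharpens the target distribution,
                since R(x)^(1/tau) concentrates mass on high-reward outputs
    """
    n = len(log_pf)
    log_F = list(log_F)
    log_F[n] = log_reward / tau  # tie terminal flow to the sharpened reward
    loss = 0.0
    # A SubTB residual requires flow entering a sub-trajectory (via F(s_i)
    # and forward probabilities) to match flow leaving it (via F(s_j) and
    # backward probabilities), for every sub-trajectory (s_i, ..., s_j).
    for i in range(n):
        for j in range(i + 1, n + 1):
            lhs = log_F[i] + sum(log_pf[i:j])
            rhs = log_F[j] + sum(log_pb[i:j])
            loss += (lhs - rhs) ** 2
    return loss
```

When the flows and policies are mutually consistent (every residual is zero), the loss vanishes; any mismatch between a state's flow and the sharpened terminal reward contributes a positive penalty, which is the training signal a GFlowNet-style post-training step would backpropagate.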