This paper proposes a novel method to address the hallucination problem in large vision-language models (LVLMs). LVLMs generate contextually consistent text but often produce hallucinations that are inconsistent with the visual input, which hinders their practical application. Existing research has focused on improving the features or outputs of a specific modality (visual or textual) but has not explicitly and systematically strengthened visual dependency. This paper comprehensively investigates the factors that reduce visual dependency during LVLM text generation from a Bayesian perspective. Based on this analysis, we propose three approaches to mitigate hallucinations. First, because not all visual tokens are beneficial for generating meaningful text, we remove uninformative visual tokens so that they do not interfere with generation. Second, because LVLMs can generate unexpected words by encoding irrelevant prior information, we revise the prior from a Bayesian perspective. Third, because the posterior probability of token predictions conditioned on visual tokens can collapse to a prior distribution that no longer depends on any beneficial visual tokens, we stop generating additional text to avoid hallucinations. Through extensive experiments on three benchmarks, POPE, CHAIR, and MME, we demonstrate that the proposed method consistently mitigates the hallucination problem of LVLMs and outperforms existing state-of-the-art techniques.
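
To make the Bayesian framing concrete, the following is a minimal illustrative sketch in notation of our own choosing (the symbols $y_t$, $y_{<t}$, and $v$ are assumptions for exposition and are not taken from the paper's formal development): $y_t$ denotes the token predicted at step $t$, $y_{<t}$ the previously generated tokens, and $v$ the visual tokens.

```latex
% A minimal sketch of the Bayesian view of next-token prediction in an LVLM.
% Notation is illustrative (our assumption, not the paper's): y_t is the token
% at step t, y_{<t} the preceding tokens, and v the visual tokens.
\begin{align}
  \underbrace{p(y_t \mid y_{<t}, v)}_{\text{posterior}}
    \;\propto\;
  \underbrace{p(v \mid y_t, y_{<t})}_{\text{visual likelihood}}\,
  \underbrace{p(y_t \mid y_{<t})}_{\text{language prior}}
\end{align}
% Visual dependency is lost when the likelihood term is (nearly) constant in
% y_t, so the posterior collapses to the language prior:
\begin{align}
  p(y_t \mid y_{<t}, v) \;\approx\; p(y_t \mid y_{<t})
\end{align}
```

Under this reading, the three approaches roughly correspond to pruning components of $v$ that only add noise to the likelihood term, correcting a miscalibrated language prior $p(y_t \mid y_{<t})$, and halting generation once the posterior has effectively collapsed to that prior.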