Existing vision-language models (VLMs) suffer from visual hallucination, a phenomenon in which generated responses contain content that is not grounded in the visual input. Prior attempts to address this issue without model fine-tuning mitigate hallucination mainly through contrastive decoding that suppresses linguistic biases, or by amplifying the weights of visual embeddings during decoding. However, these approaches are limited in their ability to capture fine visual details. In this study, we propose Perception Magnifier (PM), a novel visual decoding method that iteratively isolates relevant visual tokens based on attention and magnifies the corresponding image regions, thereby guiding the model to scrutinize fine visual details during decoding. By magnifying critical regions while preserving structural and contextual information at each decoding step, PM enhances the VLM's scrutiny of the visual input, enabling it to generate more accurate and faithful responses. Extensive experimental results demonstrate that PM not only mitigates hallucination but also enhances language generation while maintaining robust inference capabilities.
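To make the attention-guided magnification idea concrete, the following is a minimal, self-contained sketch (not the authors' implementation): it assumes a hypothetical helper `magnify_attended_region` that takes cross-attention weights over a grid of visual patch tokens, selects the most-attended patches, and crops and upscales the corresponding image region back to the original resolution. The patch size, the `top_frac` selection fraction, and the toy inputs are all illustrative assumptions.

```python
import numpy as np

def magnify_attended_region(image, attn, patch=16, top_frac=0.1):
    """Crop the most-attended image region and upscale it back to the
    full resolution (nearest-neighbour), approximating a "magnified"
    view of the patches the model attends to.

    image : (H, W, 3) uint8 array
    attn  : (H // patch, W // patch) attention weights over visual tokens
    """
    gh, gw = attn.shape
    k = max(1, int(top_frac * gh * gw))
    # Indices of the k most-attended patch tokens.
    flat = np.argsort(attn, axis=None)[-k:]
    rows, cols = np.unravel_index(flat, attn.shape)
    # Pixel-space bounding box around the selected patches.
    y0, y1 = rows.min() * patch, (rows.max() + 1) * patch
    x0, x1 = cols.min() * patch, (cols.max() + 1) * patch
    crop = image[y0:y1, x0:x1]
    # Magnify the crop back to the original resolution.
    H, W = image.shape[:2]
    yi = (np.arange(H) * crop.shape[0] // H).clip(0, crop.shape[0] - 1)
    xi = (np.arange(W) * crop.shape[1] // W).clip(0, crop.shape[1] - 1)
    return crop[yi[:, None], xi[None, :]]

# Toy usage: a random image and an attention map peaked on one region.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(384, 384, 3), dtype=np.uint8)
attn = rng.random((24, 24))
attn[10:14, 8:12] += 2.0          # pretend the model attends here
magnified = magnify_attended_region(img, attn)
print(magnified.shape)            # (384, 384, 3)
```

In a full decoding loop, the magnified view would be re-encoded and fed back to the VLM at each step; the sketch above only illustrates the attention-to-region selection and magnification component.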