This paper proposes PruneCD, a novel contrastive decoding method for mitigating hallucination in large language models (LLMs). We identify the limitations of DoLa, an existing contrastive decoding method that contrasts against early-exit logits, and propose PruneCD to address them. DoLa is limited by the fact that its early-exit logits are low in magnitude and carry insufficient information, which weakens the contrast during decoding. PruneCD instead constructs an amateur model via layer pruning, yielding more informative and better-aligned logits and thereby making contrastive decoding more effective. Experimental results demonstrate that PruneCD improves factuality while incurring minimal additional inference overhead.
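For context, contrastive decoding generally scores the next token by the gap between the expert (full) model's log-probabilities and those of an amateur model, restricted to tokens the expert already deems plausible. A standard formulation of this rule is sketched below; the abstract does not specify PruneCD's exact scoring function or plausibility threshold $\alpha$, so this should be read as illustrative rather than as the paper's definition:

$$
\mathcal{F}(x_t \mid x_{<t}) =
\begin{cases}
\log p_{\text{expert}}(x_t \mid x_{<t}) - \log p_{\text{amateur}}(x_t \mid x_{<t}), & x_t \in \mathcal{V}_{\text{head}}(x_{<t}),\\[4pt]
-\infty, & \text{otherwise},
\end{cases}
$$

where $\mathcal{V}_{\text{head}}(x_{<t}) = \{x : p_{\text{expert}}(x \mid x_{<t}) \ge \alpha \max_{x'} p_{\text{expert}}(x' \mid x_{<t})\}$. In PruneCD, the amateur distribution $p_{\text{amateur}}$ would come from the layer-pruned copy of the model rather than from an early-exit layer as in DoLa.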