This paper proposes Avoidance Decoding, a novel decoding strategy that addresses the tendency of large language models (LLMs) to generate repetitive, monotonous outputs for the same input prompt due to limited creative diversity, a problem especially pronounced in tasks such as story generation. Avoidance Decoding modifies token logits by penalizing similarity to previously generated outputs, thereby encouraging more diverse multi-branch narratives. The penalty is adaptively balanced: concept-level similarity penalties dominate in early decoding stages to diversify initial story concepts, while narrative-level similarity penalties are emphasized in later stages to ensure natural yet diverse plot development. The proposed method achieves up to 2.6x higher output diversity than existing approaches, reduces repetition by an average of 30%, and effectively mitigates text degeneration. Furthermore, we show that the method activates a broader range of neurons, suggesting that it leverages the model's intrinsic creativity.
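
To make the mechanism concrete, the sketch below illustrates one plausible form of the logit adjustment described above: a similarity penalty against previously generated outputs, with a progress-dependent trade-off between concept-level and narrative-level terms. The abstract does not specify the exact formulation, so the function name, the linear schedule, the cosine-similarity measure, and the penalty strength `lam` are all illustrative assumptions, not the paper's definitive method.

```python
import torch
import torch.nn.functional as F

def avoidance_adjusted_logits(
    logits: torch.Tensor,           # (vocab_size,) raw logits at the current step
    token_embeddings: torch.Tensor, # (vocab_size, d) token embedding matrix
    concept_vecs: torch.Tensor,     # (n_prev, d) embeddings of prior story concepts
    narrative_vecs: torch.Tensor,   # (n_prev, d) embeddings of prior narrative spans
    step: int,
    max_steps: int,
    lam: float = 2.0,               # hypothetical overall penalty strength
) -> torch.Tensor:
    """Penalize tokens similar to previously generated outputs.

    Early steps weight concept-level similarity more heavily; later
    steps shift the weight to narrative-level similarity. A linear
    schedule is assumed here purely for illustration.
    """
    progress = step / max(max_steps, 1)
    w_concept, w_narrative = 1.0 - progress, progress

    emb = F.normalize(token_embeddings, dim=-1)

    def max_sim(refs: torch.Tensor) -> torch.Tensor:
        # Max cosine similarity of each vocabulary token to any prior output.
        if refs.numel() == 0:
            return torch.zeros(emb.size(0), device=emb.device)
        refs = F.normalize(refs, dim=-1)
        return (emb @ refs.T).amax(dim=-1).clamp(min=0.0)

    penalty = w_concept * max_sim(concept_vecs) + w_narrative * max_sim(narrative_vecs)
    return logits - lam * penalty
```

In use, the adjusted logits would simply replace the raw logits before sampling (e.g., `torch.multinomial(F.softmax(adjusted, dim=-1), 1)`), so the approach composes with standard samplers such as top-p without modifying the model itself.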