Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Avoidance Decoding for Diverse Multi-Branch Story Generation

Created by
  • Haebom

Author

Kyeongman Park, Nakyeong Yang, Kyomin Jung

Outline

This paper proposes Avoidance Decoding, a novel decoding strategy that addresses the tendency of large language models (LLMs) to generate repetitive, monotonous outputs for the same input prompt due to limited creative diversity, especially in tasks such as story generation. Avoidance Decoding modifies token logits by penalizing similarity to previously generated outputs, encouraging more diverse multi-branch narratives. The penalty is adaptively balanced: concept-level similarity penalties dominate in the early stages to diversify initial story concepts, while narrative-level similarity penalties are gradually emphasized in later stages to ensure natural yet varied plot development. The proposed method achieves up to 2.6x higher output diversity than existing methods, reduces repetition by an average of 30%, and effectively mitigates text degradation. The authors further show that the method activates a wider range of neurons, suggesting it leverages the model's inherent creativity.
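To make the core mechanism concrete, here is a minimal sketch of the logit-penalty idea in PyTorch. This is not the authors' implementation: the function name, the embedding-based cosine similarity, and the linear concept-to-narrative weighting schedule are all illustrative assumptions drawn from the summary above.

```python
# Hedged sketch of an avoidance-style logit penalty. Illustrative only;
# the paper's actual similarity measures and schedule may differ.
import torch
import torch.nn.functional as F

def avoidance_logits(
    logits: torch.Tensor,            # (vocab_size,) next-token logits
    token_embeddings: torch.Tensor,  # (vocab_size, d) static token embeddings
    concept_tokens: list[int],       # token ids from earlier branches' concepts (assumed bookkeeping)
    narrative_tokens: list[int],     # token ids from earlier branches' narratives (assumed bookkeeping)
    step: int,                       # current generation step
    max_steps: int,                  # total generation budget
    alpha: float = 1.0,              # penalty strength (hypothetical hyperparameter)
) -> torch.Tensor:
    """Penalize tokens similar to previously generated outputs.

    Early in generation the concept-level penalty dominates; later the
    narrative-level penalty takes over (linear schedule assumed here).
    """
    def similarity_penalty(prev_ids: list[int]) -> torch.Tensor:
        if not prev_ids:
            return torch.zeros_like(logits)
        prev = token_embeddings[prev_ids]  # (k, d)
        # Cosine similarity of every vocab token to each previous token.
        # Unoptimized (V x k x d broadcast) for clarity.
        sims = F.cosine_similarity(
            token_embeddings.unsqueeze(1),  # (V, 1, d)
            prev.unsqueeze(0),              # (1, k, d)
            dim=-1,
        )                                   # (V, k)
        # Penalize by the closest previous token, ignoring dissimilar ones.
        return sims.max(dim=1).values.clamp(min=0.0)  # (V,)

    # Adaptive balance: weight shifts from concept- to narrative-level.
    w = step / max(max_steps, 1)
    penalty = (1 - w) * similarity_penalty(concept_tokens) \
              + w * similarity_penalty(narrative_tokens)
    return logits - alpha * penalty
```

In a sampling loop, one would apply this function to the model's next-token logits before softmax and sampling, accumulating concept- and narrative-level token ids as branches are generated.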

Takeaways, Limitations

Takeaways:
Presents Avoidance Decoding, a novel decoding strategy that improves the output diversity of LLMs.
Achieves significantly higher output diversity (up to 2.6x) and lower repetition (about 30% on average) than existing methods.
Alleviates text degradation.
Demonstrates the inherent creativity of LLMs by activating a wider range of neurons.
Limitations:
Further experiments are needed to evaluate the generalization performance of the proposed method.
Applicability to different types of LLMs and tasks needs to be verified.
Further research is needed to determine the optimal balance between concept-level and narrative-level similarity penalties.