Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is run on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

A Survey on Parallel Text Generation: From Parallel Decoding to Diffusion Language Models

Created by
  • Haebom

Authors

Lingzhe Zhang, Liancheng Fang, Chiming Duan, Minghua He, Leyi Pan, Pei Xiao, Shiyu Huang, Yunpeng Zhai, Xuming Hu, Philip S. Yu, Aiwei Liu

Outline

This paper presents a systematic survey of parallel text generation methods for large language models (LLMs). To overcome the speed limitations of conventional autoregressive (AR) text generation, the survey categorizes parallel text generation methods into AR-based and non-AR-based approaches and analyzes the core techniques underlying each. It evaluates their theoretical trade-offs in terms of speed, quality, and efficiency, and examines how they can be combined with, and compared against, other acceleration strategies. Finally, it highlights recent advances, open challenges, and future research directions, and the authors publish a GitHub repository collecting related papers and resources.
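
To make the AR vs. non-AR distinction concrete, below is a minimal, self-contained Python sketch (not from the survey; `sample_next_token` and `denoise_step` are hypothetical stand-ins for real model calls) contrasting sequential AR decoding, which pays one forward pass per token, with a diffusion-style non-AR loop, which refines all positions in parallel over a fixed number of passes.

```python
# Minimal sketch of the two decoding regimes the survey categorizes.
# The "model" here is random sampling over a toy vocabulary; only the
# control flow of each regime is illustrative.
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]
MASK = "<mask>"

def sample_next_token(prefix):
    # Hypothetical AR model call: one forward pass yields one token.
    return random.choice(VOCAB)

def autoregressive_decode(max_len=8):
    # AR generation: strictly sequential; cost grows with sequence length.
    out = []
    for _ in range(max_len):
        tok = sample_next_token(out)
        if tok == "<eos>":
            break
        out.append(tok)
    return out

def denoise_step(tokens):
    # Hypothetical non-AR (diffusion-style) model call: fills every
    # masked position in a single parallel forward pass.
    return [random.choice(VOCAB[:-1]) if t == MASK else t for t in tokens]

def parallel_decode(length=8, steps=3):
    # Non-AR generation: a fixed number of refinement passes over the
    # whole sequence, so cost scales with `steps`, not with length.
    tokens = [MASK] * length
    for _ in range(steps):
        tokens = denoise_step(tokens)
    return tokens

print("AR:      ", autoregressive_decode())
print("parallel:", parallel_decode())
```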

Takeaways, Limitations

Takeaways:
  • Provides a systematic classification and analysis of parallel text generation methods and suggests future research directions.
  • The comparative analysis of AR-based and non-AR-based methods in terms of speed, quality, and efficiency helps practitioners select the most suitable method.
  • Improves accessibility through a GitHub repository that organizes related research materials.
Limitations:
  • The proposed classification scheme may not cover all parallel text generation methods.
  • Without an experimental comparison of the surveyed methods, actual performance differences may remain unclear.
  • As new parallel text generation methods continue to emerge, the paper's contents may quickly become outdated.