Daily Arxiv

This page curates AI-related papers published around the world.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

Towards Reasoning Era: A Survey of Long Chain-of-Thought for Reasoning Large Language Models

Created by
  • Haebom

Authors

Qiguang Chen, Libo Qin, Jinhao Liu, Dengyun Peng, Jiannan Guan, Peng Wang, Mengkang Hu, Yuhang Zhou, Te Gao, Wanxiang Che

Outline

This paper provides a comprehensive survey of long chain-of-thought (Long CoT) reasoning, which has recently played a crucial role in improving the reasoning ability of large language models (LLMs). It clarifies how Long CoT differs from conventional short chain-of-thought (Short CoT) and analyzes Long CoT's core characteristics: deep reasoning, extensive exploration, and feasible reflection. It also examines phenomena such as overthinking and inference-time (test-time) scaling, and suggests future research directions including multimodal reasoning integration, efficiency improvements, and enhanced knowledge frameworks.
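To make the Short CoT / Long CoT contrast concrete, below is a minimal Python sketch of the two prompting styles. The prompt wording and the generate() stub are illustrative assumptions for this page, not taken from the paper.

# Minimal sketch: Short CoT vs. Long CoT prompting styles (illustrative only).

SHORT_COT_PROMPT = (
    "Q: {question}\n"
    "Think step by step, then give the final answer.\n"
)

LONG_COT_PROMPT = (
    "Q: {question}\n"
    "Reason in depth before answering:\n"
    "1. Deep reasoning: derive intermediate results carefully.\n"
    "2. Extensive exploration: consider alternative solution paths.\n"
    "3. Reflection: re-check each step and revise anything that looks wrong.\n"
    "Only then state the final answer.\n"
)

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; a real implementation would query a model API."""
    return f"[model output for: {prompt[:40]}...]"

if __name__ == "__main__":
    question = "If a train travels 120 km in 1.5 hours, what is its average speed?"
    print(generate(SHORT_COT_PROMPT.format(question=question)))
    print(generate(LONG_COT_PROMPT.format(question=question)))

In practice the Long CoT style produces much longer outputs, which is exactly the compute/accuracy trade-off the survey discusses under overthinking and inference-time scaling.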

Takeaways, Limitations

Takeaways:
Lays a foundation for LLM reasoning research by clarifying the differences between long chain-of-thought (Long CoT) and short chain-of-thought (Short CoT).
Identifies the key characteristics of Long CoT and shows how they enable complex problem solving.
Provides insights into phenomena such as overthinking and inference-time scaling (see the self-consistency sketch at the end of this section).
Suggests future research directions that can contribute to improving LLM reasoning ability.
Limitations:
Comprehensive research on Long CoT is still lacking.
Whether the proposed future research directions are actually effective must be verified through further study.
Concrete methodologies for improving the efficiency of Long CoT require additional research.
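As a concrete example of inference-time scaling, below is a minimal sketch of self-consistency: sampling several reasoning chains and majority-voting over their final answers. The sample_chain() stub is a hypothetical stand-in for a stochastic LLM call, not a method proposed in the paper.

import random
from collections import Counter

def sample_chain(question: str, rng: random.Random) -> str:
    """Hypothetical stand-in for one sampled reasoning chain; returns only its final answer."""
    # Pretend the model answers "80" (the correct speed) 60% of the time.
    return "80" if rng.random() < 0.6 else str(rng.choice([60, 90, 120]))

def self_consistency(question: str, n_samples: int, seed: int = 0) -> str:
    """Majority-vote over n sampled answers; more samples cost more inference-time compute."""
    rng = random.Random(seed)
    answers = [sample_chain(question, rng) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

if __name__ == "__main__":
    q = "A train travels 120 km in 1.5 hours; what is its average speed in km/h?"
    for n in (1, 5, 25):
        print(f"{n:>2} samples -> {self_consistency(q, n)}")

A single sampled chain can land on a wrong answer, while the vote over many chains tends to stabilize on the correct one; the cost is proportionally more generated tokens, which is the efficiency concern noted in the Limitations above.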