This paper provides a comprehensive survey of long chain-of-thought (Long CoT) reasoning, which has recently played a crucial role in improving the reasoning abilities of large language models (LLMs). We clarify how Long CoT differs from traditional short chain-of-thought (Short CoT) reasoning and analyze its core characteristics: deep reasoning, extensive exploration, and feasible reflection. In addition, we examine phenomena such as overthinking and test-time scaling, and suggest future research directions, including multimodal reasoning integration, efficiency improvements, and enhanced knowledge frameworks.