This paper presents a comprehensive survey of Long Chain-of-Thought (Long CoT), which plays a crucial role in improving the reasoning abilities of large language models (LLMs). We clarify how Long CoT differs from traditional Short CoT and analyze its core characteristics: deep reasoning, extensive exploration, and actionable reflection. We further examine phenomena such as overthinking and inference-time scaling, and outline future research directions, including multimodal reasoning integration, efficiency improvements, and enhanced knowledge frameworks.