This paper provides a comprehensive analysis of the Theory of Mind (ToM) capabilities of large language models (LLMs), i.e., their ability to attribute mental states to themselves and others. We review methods for assessing ToM in LLMs, focusing on recently proposed and widely used story-based benchmarks, and provide an in-depth analysis of cutting-edge methods for enhancing ToM in LLMs. Finally, we suggest future research directions for further developing ToM in LLMs and making it more adaptable to realistic and diverse situations.