This paper addresses the limitations of the logical reasoning abilities of large language models (LLMs). LLMs struggle with complex logical problems, and these struggles manifest in two ways: in logical question answering, where tasks require complex deductive, inductive, or analogical reasoning, and in logical inconsistencies, where contradictions arise across responses to different questions (e.g., an LLM may assert that a magpie is a bird and that birds have wings, yet also claim that a magpie does not have wings). The paper comprehensively surveys existing work, categorizing methods into those based on external solvers, prompting, and fine-tuning, and discusses notions of logical consistency, such as implication, negation, transitivity, and factual consistency, together with solutions proposed for each. In addition, it reviews commonly used benchmark datasets and evaluation metrics, and suggests promising research directions, such as extending modal logic to account for uncertainty and developing efficient algorithms that simultaneously satisfy multiple logical consistency properties.
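For concreteness, the magpie example above can be formalized as a first-order contradiction; the sketch below is our own illustration (the predicate names Bird and HasWings are not taken from the surveyed works):

\[
\mathit{Bird}(\mathit{magpie}), \qquad
\forall x\, \bigl(\mathit{Bird}(x) \rightarrow \mathit{HasWings}(x)\bigr), \qquad
\neg\,\mathit{HasWings}(\mathit{magpie})
\]

The first two statements jointly entail \(\mathit{HasWings}(\mathit{magpie})\), which directly contradicts the third; an LLM asserting all three is therefore logically inconsistent in the sense studied by this survey.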