This paper proposes an efficient Task and Motion Planning (TAMP) approach for performing complex manipulation tasks in dynamic environments. It leverages large language models (LLMs), such as GPT-4, to describe tasks in natural language and to generate and reason over symbolic plans. The proposed Onto-LLM-TAMP framework enhances and extends user prompts through knowledge-based reasoning, providing task-related contextual inference and knowledge-based descriptions of the environmental state. This improves adaptability to dynamic environments and yields semantically accurate task plans. The effectiveness of the proposed framework is validated through both simulations and real-world scenarios.