This paper examines how large language models (LLMs), such as ChatGPT, have revolutionized the field of natural language processing (NLP) while also introducing new security vulnerabilities. We categorize threats into several key areas, including inference-time attacks via prompt manipulation, training-time attacks, exploitation by malicious actors, and the inherent risks of autonomous LLM agents. We summarize recent academic and industry research from 2022 to 2025, illustrate each threat with representative cases, analyze existing defense mechanisms and their limitations, and highlight outstanding challenges in securing LLM-based applications. Finally, we emphasize the importance of developing robust, multi-layered security strategies to ensure that LLMs remain secure and beneficial.