This paper examines how large language models (LLMs), such as ChatGPT, have transformed the field of natural language processing (NLP) while also introducing new security vulnerabilities. We categorize these threats into several key areas: prompt injection and jailbreaking; adversarial attacks, including input perturbation and data poisoning; misuse by malicious actors, including disinformation, phishing emails, and malware generation; and the inherent risks of autonomous LLM agents, including goal misalignment, emergent deception, self-preservation behavior, and "planning" behaviors in which agents develop and covertly pursue goals inconsistent with their instructions. We survey recent academic and industry research from 2022 to 2025 and present illustrative examples of each threat. We also analyze proposed defenses and their limitations, identify open challenges in securing LLM-based applications, and emphasize the importance of a robust, multi-layered security strategy.