Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Security Concerns for Large Language Models: A Survey

Created by
  • Haebom

Author

Miles Q. Li, Benjamin CM Fung

Outline

This paper addresses how large-scale language models (LLMs), such as ChatGPT, have revolutionized the field of natural language processing (NLP), but also introduce new security vulnerabilities. We categorize threats into several key areas, including inference-time attacks via prompt manipulation, training-time attacks, exploitation by malicious actors, and the inherent risks of autonomous LLM agents. We summarize recent academic and industry research from 2022 to 2025, illustrate each threat, analyze existing defense mechanisms and their limitations, and highlight outstanding challenges in securing LLM-based applications. Finally, we emphasize the importance of developing robust, multi-layered security strategies to ensure secure and beneficial LLMs.

Takeaways, Limitations

Takeaways:
Provides a comprehensive overview of LLM's security vulnerabilities.
In-depth analysis of various types of LLM attacks and defense mechanisms.
Unsolved challenges to strengthen LLM security
Emphasize the importance of a multi-layered security strategy for developing a safe and beneficial LLM program.
Limitations:
The research covered in this paper is limited to research conducted between 2022 and 2025. Research trends after that date are not reflected.
Empirical evaluation of the effectiveness and limitations of defense mechanisms may be lacking.
Because research on LLM security is rapidly evolving, new threats and defense techniques may emerge after the paper is published.
Security vulnerability analysis for specific LLM architectures or applications may not be detailed.
👍