Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

The Information Security Awareness of Large Language Models

Created by
  • Haebom

Author

Ofir Cohen, Gil Ari Agmon, Asaf Shabtai, Rami Puzis

Outline

This paper presents an automated method for measuring the Information Security Awareness (ISA) level of large-scale language model (LLM)-based assistants. ISA encompasses not only the security knowledge of LLMs addressed in previous research, but also the attitudes and behaviors crucial for understanding implicit security context and rejecting unsafe requests, using 30 mobile ISA taxonomies. Using real-world scenarios to explore the tension between implicit security risks and user satisfaction, we evaluate the ISA levels of leading LLMs. We find that most models exhibit medium or low ISA levels. In particular, smaller variants within the same model family are even more dangerous, and the lack of consistent ISA improvements in recent versions suggests that providers are not actively addressing this issue. This demonstrates the widespread vulnerability of current LLM deployments, implying that most popular models, including smaller variants, systematically expose users to risks. This paper proposes a practical mitigation strategy that integrates security awareness guidelines into model system prompts to help LLMs better detect and reject unsafe requests.

Takeaways, Limitations

Takeaways:
We warn users of cybersecurity risks by revealing that the Information Security Awareness (ISA) level of LLM-based assistants is generally low.
It shows that smaller variants of LLM may pose a higher risk.
It points out that LLM providers are not making active efforts to improve ISA.
We present a practical mitigation approach that integrates security-aware guidance into model system prompts.
Limitations:
Further research is needed to determine the generalizability of the ISA measurement method presented in this paper.
Further analysis of the impact of ISA on various LLM architectures and training data is needed.
Empirical research is needed to evaluate the effectiveness of the proposed mitigation measures.
👍