Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Logic layer Prompt Control Injection (LPCI): A Novel Security Vulnerability Class in Agentic Systems

Created by
  • Haebom

Author

Hammad Atta, Ken Huang, Manish Bhatt, Kamal Ahmed, Muhammad Aziz Ul Haq, Yasir Mehmood

Outline

This paper addresses a new type of hidden security vulnerability arising from the integration of large-scale language models (LLMs) into enterprise systems, specifically vulnerabilities within the logic execution layer and persistent memory context. We introduce Logic Layer Prompt Control Injection (LPCI), a novel attack type that involves encoded, delayed, and conditionally triggered payloads within memory, vector storage, or tool output. These payloads can bypass existing input filters and trigger unauthorized actions across sessions.

Takeaways, Limitations

Takeaways: By increasing understanding of LPCI attacks, a new security threat to LLM-based systems, we can contribute to strengthening the security of enterprise systems. By emphasizing that existing input filters alone are insufficient to defend against LPCI attacks, we highlight the need for developing new security defense strategies.
Limitations: Currently, no specific defense techniques or mitigation strategies have been proposed against LPCI attacks. There is a lack of quantitative analysis of the actual threat level and likelihood of LPCI attacks. Further research is needed to determine the generalizability of LPCI attacks to various LLM and system environments.
👍