This paper addresses a new type of hidden security vulnerability arising from the integration of large-scale language models (LLMs) into enterprise systems, specifically vulnerabilities within the logic execution layer and persistent memory context. We introduce Logic Layer Prompt Control Injection (LPCI), a novel attack type that involves encoded, delayed, and conditionally triggered payloads within memory, vector storage, or tool output. These payloads can bypass existing input filters and trigger unauthorized actions across sessions.