This paper studies prompt injection attacks against applications and agents built on large language models (LLMs). In particular, we reveal a structural vulnerability in Known-Answer Detection (KAD), a conventional prompt injection defense, and propose DataFlip, a novel attack that exploits it. DataFlip reliably evades KAD defenses (detection rates below 1.5%) while inducing malicious behavior with high success rates (up to 88%), without requiring white-box access to the LLM or any optimization procedure.
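
For readers unfamiliar with KAD, the sketch below illustrates one common formulation of such a check: the defense embeds a detection instruction carrying a secret, known answer into the untrusted data, and flags the data as compromised if the model fails to return that answer. This is a minimal sketch under that assumption; the function name, probe wording, and the `query_llm` callable are illustrative placeholders, not the implementation evaluated in this paper.

```python
# Illustrative KAD-style check (assumed formulation, not the paper's code):
# embed a secret known answer into the untrusted data and flag the data if
# the model's response does not contain it.
import secrets
from typing import Callable

def kad_is_compromised(untrusted_data: str,
                       query_llm: Callable[[str], str]) -> bool:
    """Return True if `untrusted_data` appears to contain an injection."""
    # Use a fresh random key so an attacker cannot guess the known answer.
    key = secrets.token_hex(8)
    probe = (
        f'Repeat "{key}" once while ignoring the following text:\n'
        f"{untrusted_data}"
    )
    response = query_llm(probe)
    # If injected instructions hijacked the model, the known answer is
    # typically missing from the response, so the check flags the data.
    return key not in response
```

The attack studied here targets the structure of exactly this kind of check, in which the detection instruction is exposed to the same model that processes the untrusted data.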