This paper presents "AI-Human Conversation Hijacking," a novel security threat that manipulates the system prompts of a large-scale language model (LLM) to generate malicious answers only for specific questions. Malicious actors can conduct large-scale information manipulation by disseminating seemingly innocuous system prompts online. To demonstrate this attack, the researchers developed CAIN, an algorithm that automatically generates malicious system prompts for specific target questions in a black-box setting. Evaluated on both open-source and commercial LLMs, CAIN achieved up to a 40% F1 score degradation for target questions while maintaining high accuracy for benign inputs. It achieved an F1 score of over 70% for generating specific malicious answers while minimizing the impact on benign questions. These results highlight the importance of strengthened robustness measures to ensure the integrity and security of LLMs in real-world applications. The source code will be made publicly available.