Individual large language models (LLMs) have demonstrated outstanding performance across a wide range of fields, and multi-agent systems that coordinate them collaboratively further enhance decision-making and reasoning capabilities. We ask whether an attacker can craft adversarial samples that mislead the collective decision of a multi-agent system while having knowledge of only a single agent in that system. We propose M-Spoiler, a framework that formalizes this problem as an incomplete-information game: it simulates agent interactions to generate adversarial samples and thereby manipulates the collaborative decision-making process of the target system. M-Spoiler introduces a robust agent that mimics the potentially robust responses of agents in the target system, guiding the optimization of adversarial samples. Extensive experiments confirm the risk that knowledge of a single agent poses to multi-agent systems and show the effectiveness of the proposed attack framework. We further explore defense mechanisms and find that the proposed attack remains more resilient to them than existing methods.