While Large Language Model (LLM)-based multi-agent systems (LLM-MAS) excel at solving collaborative problems, they also pose new security risks. This paper systematically studies intent concealment attacks on LLM-MAS, designing four representative attack paradigms and evaluating them across centralized, decentralized, and hierarchical communication architectures. Experimental results demonstrate that these attacks are highly destructive and readily evade existing defense mechanisms. To address this threat, we propose AgentXposed, a psychology-based detection framework that combines the HEXACO personality model with the Reid interrogation technique to proactively identify malicious agents before they act. Experiments on six datasets show that AgentXposed effectively detects diverse forms of malicious behavior and remains robust across different communication settings.