This paper proposes Chimera, a large language model (LLM)-based multi-agent framework that addresses the data shortage in insider threat detection (ITD). Chimera automatically simulates both benign and malicious insider activities across diverse corporate environments and collects the resulting logs into a new dataset, ChimeraLog. Chimera models each employee as an agent with role-specific behavior and incorporates group meetings, pairwise interactions, and autonomous scheduling modules to capture realistic organizational dynamics. ChimeraLog covers 15 types of insider attacks and was generated by simulating activities in three sensitive domains: technology companies, financial firms, and healthcare institutions. Human studies and quantitative analyses validate ChimeraLog's diversity, realism, and explainable threat patterns. Existing ITD methods achieve an average F1 score of 0.83 on ChimeraLog, significantly lower than the 0.99 they reach on the CERT dataset, demonstrating ChimeraLog's greater difficulty and its utility for advancing ITD research.