Social simulations using Large Language Models (LLMs) require more than plausible behavior generation; they also demand structured, modifiable, and traceable cognitive reasoning. LLM-based agents are often used to simulate individual and collective behaviors through prompting and guided fine-tuning, yet they lack internal consistency, causal inference, and belief traceability, making them unreliable for simulating how people reason, deliberate, and respond to interventions. To address this gap, this paper presents Generative Minds (GenMinds), a conceptual modeling paradigm inspired by cognitive science that supports structured belief representations in generative agents. To evaluate such agents, we further introduce the REconstructing CAusal Paths (RECAP) framework, which assesses inference fidelity through causal traceability, demographic evidence, and intervention consistency. Together, these contributions mark a broader shift from simple imitation toward generative agents that simulate not only language but also thought in social simulation.