This paper presents a novel approach to multi-agent cooperation by implementing Theory of Mind (ToM) within an active inference framework. Unlike existing active inference approaches to multi-agent cooperation, this approach relies on neither task-specific shared generative models nor explicit communication. ToM agents maintain distinct representations of their own and other agents' beliefs and goals, and systematically explore a common policy space through recursive inference, using an extended and modified version of a sophisticated-inference tree-based planning algorithm. We evaluate the approach in conflict-avoidance and foraging simulations, showing that ToM agents cooperate better than non-ToM agents by avoiding conflicts and reducing redundant effort. Crucially, ToM agents infer other agents' beliefs solely from observable behavior and take those beliefs into account when planning their own actions. The approach demonstrates the potential for generalizable and scalable multi-agent systems, while also providing computational insights into the ToM mechanism.
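The recursive ToM planning described above can be illustrated, in highly simplified form, as a tree search in which each agent predicts the other agent's next action by planning from the other's perspective one recursion level shallower. The sketch below is a toy on a 1D grid, not the paper's sophisticated-inference implementation: the cost function is a crude stand-in for expected free energy, and all names (`plan`, `cost`, the grid setup) are invented for illustration.

```python
ACTIONS = (-1, 0, 1)   # move left, stay, move right
GRID = 7               # cells 0..6

def clamp(p):
    return min(max(p, 0), GRID - 1)

def cost(pos, goal, other_pos):
    # Crude surrogate for expected free energy: distance to the
    # preferred (goal) cell plus a large penalty for a conflict
    # (both agents occupying the same cell).
    return abs(pos - goal) + (10 if pos == other_pos else 0)

def plan(pos, goal, other_pos, other_goal, depth):
    """Return (action, expected_cost) for the focal agent.

    ToM step: the other agent's next move is predicted by planning
    from *its* perspective (roles swapped) one level shallower, so
    each planning level adds one order of recursive inference over
    the other agent's inferred beliefs and goals."""
    if depth == 0:
        return 0, cost(pos, goal, other_pos)
    # Predict the other's move from its inferred goal (recursive ToM).
    b, _ = plan(other_pos, other_goal, pos, goal, depth - 1)
    other_nxt = clamp(other_pos + b)
    best = None
    for a in ACTIONS:
        nxt = clamp(pos + a)
        c = cost(nxt, goal, other_nxt) \
            + plan(nxt, goal, other_nxt, other_goal, depth - 1)[1]
        if best is None or c < best[1]:
            best = (a, c)
    return best
```

In this toy, an agent at cell 3 heading for cell 4 while another agent (predicted to move to 4) approaches will choose to wait rather than step into the predicted conflict cell, illustrating how recursive perspective-taking over the other agent's goals yields conflict avoidance without communication.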