In this paper, we propose a hybrid approach to machine theory of mind (ToM) that combines a Bayesian inverse planning model with a large language model (LLM): the Bayesian inverse planning model computes posterior probabilities over an agent's possible mental states given its actions, while the LLM generates the hypotheses and likelihood functions. Bayesian inverse planning models accurately predict human reasoning on a variety of ToM tasks, but they scale poorly to scenarios with large numbers of possible hypotheses and actions. LLM-based approaches, by contrast, show promise on ToM benchmarks but remain prone to errors and failures on inference tasks. Our hybrid approach exploits the strengths of each component: it achieves near-optimal results on tasks inspired by existing inverse planning models and outperforms models that use the LLM alone or with chain-of-thought prompting. It also shows potential for predicting mental states in open-ended tasks, suggesting promising directions for the future development of ToM models and the creation of socially intelligent generative agents.
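
To make the division of labor concrete, the following is a minimal, hypothetical sketch (not the paper's implementation) of how LLM-generated hypotheses and likelihoods could feed a Bayesian inverse planning posterior. Here `propose_goals` and `action_likelihood` are assumed placeholders standing in for LLM calls, and the candidate goals and likelihood values are invented purely for illustration.

```python
from math import prod
from typing import Dict, List


def propose_goals(context: str) -> List[str]:
    """Hypothetical stand-in for an LLM call that proposes candidate goals."""
    return ["get coffee", "get tea", "leave the building"]


def action_likelihood(action: str, goal: str) -> float:
    """Hypothetical stand-in for an LLM-scored likelihood P(action | goal)."""
    table = {
        ("walk to kitchen", "get coffee"): 0.8,
        ("walk to kitchen", "get tea"): 0.7,
        ("walk to kitchen", "leave the building"): 0.1,
    }
    return table.get((action, goal), 0.1)


def posterior_over_goals(context: str, actions: List[str]) -> Dict[str, float]:
    """Bayesian inverse planning: P(goal | actions) ∝ P(goal) * Π P(action | goal)."""
    goals = propose_goals(context)
    prior = 1.0 / len(goals)  # uniform prior over the LLM-proposed hypotheses
    unnormalized = {
        g: prior * prod(action_likelihood(a, g) for a in actions) for g in goals
    }
    z = sum(unnormalized.values())
    return {g: p / z for g, p in unnormalized.items()}


# Example: infer the agent's goal after observing one action.
print(posterior_over_goals("An agent is in an office.", ["walk to kitchen"]))
```

In this sketch the LLM only proposes the hypothesis space and scores likelihoods; the posterior itself is computed by exact Bayesian updating, which is the separation of roles the hybrid approach described above relies on.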