A central aim of cognitive science is to provide a computationally explicit account of the structure and development of human thought, and a long-standing debate in the field concerns whether artificial neural networks are adequate models of abstract cognitive functions such as language and logic. We argue that the development of large language models (LLMs) has substantially changed the terms of this debate. We test several LLMs using established experimental paradigms from rule-induction studies of logical concepts and find that the LLMs fit human behavior as well as the Bayesian probabilistic language of thought (pLoT) model does. Moreover, the LLMs make qualitatively different predictions than the pLoT about the nature of the underlying rules, suggesting that they are not simply implementations of a pLoT. We argue that LLMs can therefore ground a new theoretical account of the primitive representations and computations required to explain human logical concepts, one that future cognitive science research should address.