Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

LLMs model how humans induce logically structured rules

Created by
  • Haebom

Author

Alyssa Loo, Ellie Pavlick, Roman Feiman

Outline

This paper takes up a long-standing debate in cognitive science about whether artificial neural networks are adequate models of abstract cognitive functions such as language and logic, with the goal of providing a computationally explicit account of the structure and development of human thought. The authors argue that the development of large language models (LLMs) has substantially changed the terms of this debate. They test several LLMs on established experimental paradigms from rule-induction studies of logical concepts and find that the LLMs fit human behavior as well as the Bayesian probabilistic language of thought (pLoT) model. Moreover, the LLMs make qualitatively different predictions about the nature of the induced rules than the pLoT, suggesting that they are not simply implementations of a pLoT. The authors conclude that LLMs can therefore offer a new theoretical account of the primitive representations and computations required to explain human logical concepts, one that future cognitive science research should evaluate.
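As a rough illustration of this kind of model comparison (not the authors' actual procedure), the sketch below scores two sets of hypothetical per-trial predictions, one standing in for an LLM and one for a pLoT model, against binary human rule-induction judgments using a Bernoulli log-likelihood. The data, variable names, and scoring choice are illustrative assumptions only.

```python
# Minimal sketch (assumed setup, not the paper's code): compare how well two
# models' trial-by-trial predictions fit human rule-induction responses.
import numpy as np

def log_likelihood(human_yes, p_yes, eps=1e-6):
    """Bernoulli log-likelihood of binary human judgments under model-predicted
    probabilities of answering 'yes' (i.e., that the rule applies to the item)."""
    p = np.clip(p_yes, eps, 1 - eps)
    return np.sum(human_yes * np.log(p) + (1 - human_yes) * np.log(1 - p))

# Hypothetical per-trial data: 1 = participant judged the item rule-consistent.
human_yes = np.array([1, 1, 0, 1, 0, 0, 1, 0])

# Hypothetical model predictions for the same trials.
p_llm  = np.array([0.9, 0.8, 0.2, 0.7, 0.3, 0.1, 0.6, 0.4])  # stand-in for LLM judgments
p_plot = np.array([0.8, 0.9, 0.1, 0.6, 0.4, 0.2, 0.7, 0.3])  # stand-in for a Bayesian pLoT model

print("LLM  log-likelihood:", log_likelihood(human_yes, p_llm))
print("pLoT log-likelihood:", log_likelihood(human_yes, p_plot))
```

A higher (less negative) log-likelihood would indicate a closer fit to the human responses; the paper's actual analysis may use different fit measures and data.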

Takeaways, Limitations

Takeaways:
Large language models (LLMs) are shown to fit human behavior in logical rule induction as well as the conventional Bayesian probabilistic language of thought (pLoT) model.
LLMs may provide a new theoretical account of the primitive representations and computations underlying human logical concepts.
New directions are proposed for using LLMs in cognitive science research.
Limitations:
Further research is needed to determine whether LLMs fully explain human cognitive processes.
The inner workings of LLMs are not yet fully understood.
Further research is needed on the generalizability and limitations of the new theoretical account offered by LLMs.