This paper provides an overview of recent deep learning research on compositionality, a core property of human intelligence, for readers in philosophy, cognitive science, and neuroscience. Focusing on large language models (LLMs), we discuss two approaches to achieving combinatorial generalization, the capacity that yields infinite expressive power from limited learning experience: (1) structural inductive bias and (2) meta-learning. We argue that the pre-training of LLMs can be understood as a form of meta-learning that enables deep neural networks (DNNs) to achieve combinatorial generalization. We then discuss the implications of these findings for the study of compositionality in human cognition and outline future research directions.