In this paper, we argue that language models based on logical formalisms (LFLMs) are more data-efficient than text-based language models. To demonstrate this claim, we present a prototype of a pre-trained language model built on a graph representation of logical formalisms, which we call Graph-based Formal-Logical Distributional Semantics (GFoLDS). Our experimental results show that LFLMs can exploit the linguistic knowledge inherent in their underlying representations to learn complex patterns more quickly. On downstream tasks, GFoLDS significantly outperforms a text-based Transformer LM (BERT) pre-trained on the same data, suggesting that LFLMs can learn from substantially less data. Furthermore, the model's performance is likely to scale with additional parameters and pre-training data, demonstrating the viability of LFLMs for real-world applications.