
Daily Arxiv

This page curates AI-related papers published around the world.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

Exploring Graph Representations of Logical Forms for Language Modeling

Created by
  • Haebom

Author

Michael Sullivan

Outline

In this paper, the author argues that language models over logical forms (LFLMs) are more data-efficient than their textual counterparts. To demonstrate this, the paper presents GFoLDS (Graph-based Formal-Logical Distributional Semantics), a prototype pretrained language model over graph representations of logical forms. Experiments show that LFLMs can exploit the linguistic knowledge built into logical forms to learn complex patterns more quickly: GFoLDS substantially outperforms a text-based Transformer LM (BERT) pretrained on the same data across downstream tasks, indicating that LFLMs can learn from far less data. The results further suggest that performance is likely to scale with additional parameters and pre-training data, pointing to the viability of LFLMs in real-world applications.
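To make the idea concrete, below is a minimal, hypothetical sketch (not the authors' GFoLDS implementation) of the general recipe described above: a sentence's logical form is represented as a predicate-argument graph with labeled nodes and edges, and a small graph encoder is pre-trained with a masked-node objective analogous to BERT's masked-token objective. All identifiers here (the toy vocabulary, `MaskedNodeEncoder`, the single neighbor-mixing layer) are illustrative assumptions, not the paper's actual architecture.

```python
# Illustrative sketch only: a DMRS-style logical form as a graph,
# encoded with one round of neighbor mixing and trained to recover
# a masked predicate node (a masked-node analogue of masked-token LM).
import torch
import torch.nn as nn

# Toy vocabulary of predicate symbols; index 0 is reserved for [MASK].
VOCAB = ["[MASK]", "_the_q", "_dog_n_1", "_bark_v_1", "_loud_a_1"]
PRED2ID = {p: i for i, p in enumerate(VOCAB)}

# "The dog barks loudly" as a predicate-argument graph:
# nodes are predicates, edges are labeled argument links.
nodes = ["_the_q", "_dog_n_1", "_bark_v_1", "_loud_a_1"]
edges = [(0, 1, "RSTR"),   # _the_q   --RSTR--> _dog_n_1
         (2, 1, "ARG1"),   # _bark_v  --ARG1--> _dog_n_1
         (3, 2, "ARG1")]   # _loud_a  --ARG1--> _bark_v

class MaskedNodeEncoder(nn.Module):
    """Embeds predicate nodes, mixes in neighbor information once,
    and predicts the identity of each (possibly masked) node."""
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.update = nn.Linear(2 * dim, dim)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, node_ids, adj):
        h = self.embed(node_ids)                            # (N, dim)
        agg = adj @ h                                       # sum of neighbor embeddings
        h = torch.relu(self.update(torch.cat([h, agg], -1)))
        return self.out(h)                                  # (N, vocab_size)

# Build a symmetric adjacency matrix (edge labels ignored for brevity).
node_ids = torch.tensor([PRED2ID[p] for p in nodes])
adj = torch.zeros(len(nodes), len(nodes))
for src, dst, _ in edges:
    adj[src, dst] = adj[dst, src] = 1.0

# Masked-node pre-training step: hide the verb and predict it from the graph.
targets = node_ids.clone()
masked = node_ids.clone()
masked[2] = PRED2ID["[MASK]"]

model = MaskedNodeEncoder(len(VOCAB))
logits = model(masked, adj)
loss = nn.functional.cross_entropy(logits[2:3], targets[2:3])
loss.backward()
print(f"masked-node loss: {loss.item():.3f}")
```

The point of the toy setup is that the masked node's neighbors carry explicit argument structure ("something barks", "barking is loud"), so the model receives linguistic structure for free instead of having to induce it from raw token co-occurrence, which is the intuition behind the claimed data efficiency.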

Takeaways, Limitations

Takeaways:
Experimentally demonstrates that logical-form language models (LFLMs) are more data-efficient than text-based models.
Shows that LFLMs can learn efficiently by exploiting the linguistic knowledge built into logical forms.
Points to the practical viability of LFLMs in real-world applications.
Limitations:
GFoLDS is a prototype; further research with larger models and more pre-training data is needed.
Broader evaluation across a wider range of downstream tasks is required.
The complexity and general applicability of logical-form representations need further study.