
Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

Filling the Gap: Is Commonsense Knowledge Generation useful for Natural Language Inference?

Created by
  • Haebom

Author

Chathuri Jayaweera, Brianna Yanqui, Bonnie Dorr

Outline

This paper studies the role of commonsense knowledge in natural language inference (NLI). To address the scarcity of existing commonsense knowledge resources, the authors explore using large language models (LLMs) as commonsense knowledge generators. They analyze two main aspects: the reliability of the LLMs' commonsense knowledge generation, and the effect of the generated knowledge on NLI prediction accuracy, adapting existing metrics to evaluate the factuality and consistency of the generations. Although explicitly incorporating commonsense knowledge does not consistently improve overall results, it is effective for distinguishing entailment relations and moderately helps in separating contradiction and neutral inferences.
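To make the setup concrete, the sketch below shows one plausible way to inject LLM-generated commonsense knowledge into an NLI classifier: generate statements for a premise-hypothesis pair, prepend them to the premise, and classify. This is a minimal illustration under stated assumptions, not the authors' pipeline; the model name, the knowledge-generation stand-in, and the injection scheme are all hypothetical.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative NLI model; the paper's actual models are not named in this summary.
MODEL = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def generate_commonsense(premise: str, hypothesis: str) -> str:
    # Stand-in for an LLM call (e.g., prompting an instruction-tuned model
    # for facts linking the pair); hard-coded here to keep the sketch runnable.
    return "A guitar is a musical instrument, and strumming one produces music."

def predict_with_knowledge(premise: str, hypothesis: str) -> str:
    # Prepend the generated knowledge to the premise so the classifier
    # sees it as extra context alongside the original pair.
    knowledge = generate_commonsense(premise, hypothesis)
    inputs = tokenizer(f"{knowledge} {premise}", hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return model.config.id2label[int(logits.argmax())]  # e.g. ENTAILMENT

print(predict_with_knowledge("A man strums a guitar on stage.",
                             "A man is making music."))
```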

Takeaways, Limitations

Takeaways:
LLMs show potential as commonsense knowledge generators for NLI.
LLM-generated commonsense knowledge is effective for distinguishing entailment relations.
Provides insights for developing and applying new metrics to assess the factuality and consistency of LLMs (a hypothetical sketch follows below the Limitations list).
Limitations:
Explicitly integrating commonsense knowledge did not consistently improve overall NLI performance.
Further research is needed on the reliability and completeness of the commonsense knowledge generated by LLMs.
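The adapted metrics themselves are not spelled out in this summary. As a rough illustration of a consistency check, one could sample the knowledge generator several times for the same pair and measure agreement among the outputs; the helper below is a hypothetical sketch under that assumption, not the authors' metric.

```python
from collections import Counter
from typing import Callable

def consistency_score(generate: Callable[[str, str], str],
                      premise: str, hypothesis: str, n: int = 5) -> float:
    """Hypothetical consistency check: sample the generator n times for the
    same pair and return the relative frequency of its most common output."""
    samples = [generate(premise, hypothesis) for _ in range(n)]
    return Counter(samples).most_common(1)[0][1] / n
```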