
Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Temporal Reasoning for Timeline Summarization in Social Media

Created by
  • Haebom

Authors

Jiayu Song, Mahmud Elahi Akhter, Dana Atzil Slonim, Maria Liakata

Outline

This paper explores whether improving the temporal reasoning ability of large language models (LLMs) can improve timeline summarization, the task of summarizing long texts that contain sequences of events, such as social media threads. We present NarrativeReason, a new dataset focused on temporal relationships among sequential events within narratives; unlike existing temporal reasoning datasets, which mainly cover pairwise event relationships, it targets relations across whole event sequences. We then combine temporal reasoning with timeline summarization through a knowledge distillation framework: a teacher model is first fine-tuned on the temporal reasoning task, and its knowledge is distilled into a student model that is simultaneously trained on the timeline summarization task. Experimental results show that the proposed model achieves strong performance on a domain-specific, mental-health-related timeline summarization task involving long social media threads with repeated events and mixed emotions, highlighting both the importance and the generalizability of leveraging temporal reasoning to improve timeline summarization.
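To make the framework concrete, below is a minimal sketch of a teacher-student distillation step of the kind described above: a student summarizer is trained on gold timeline summaries while also matching the softened output distribution of a teacher that was first fine-tuned on temporal reasoning. The checkpoint names, the loss weight `alpha`, the temperature, and the toy batch are all illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of the distillation setup described above, assuming a
# seq2seq student and a teacher already fine-tuned on temporal reasoning.
# The checkpoint names, alpha, temperature, and the toy batch are
# illustrative assumptions, not details taken from the paper.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
teacher = AutoModelForSeq2SeqLM.from_pretrained("t5-small")  # stand-in for the temporal-reasoning teacher
student = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
teacher.eval()

def distill_step(batch, alpha=0.5, temperature=2.0):
    """One training step: summarization cross-entropy plus KL distillation from the teacher."""
    inputs = tokenizer(batch["thread"], return_tensors="pt", padding=True, truncation=True)
    labels = tokenizer(batch["summary"], return_tensors="pt", padding=True, truncation=True).input_ids

    # Supervised timeline-summarization loss on gold summaries
    # (pad-token masking in labels is omitted for brevity).
    out = student(**inputs, labels=labels)
    task_loss = out.loss

    # Distillation loss: match the teacher's softened token distributions.
    with torch.no_grad():
        teacher_logits = teacher(**inputs, labels=labels).logits
    kd_loss = F.kl_div(
        F.log_softmax(out.logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # Single combined objective: alpha trades off absorbing the teacher's
    # temporal-reasoning behavior against fitting the summarization labels.
    return alpha * kd_loss + (1 - alpha) * task_loss

batch = {"thread": ["post 1 ... post 2 ... post 3"], "summary": ["a short timeline summary"]}
loss = distill_step(batch)
loss.backward()
```

The key design choice is the single combined loss: `alpha` controls how strongly the student absorbs the teacher's temporal-reasoning behavior versus fitting the summarization labels directly.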

Takeaways, Limitations

Takeaways:
We experimentally demonstrate that enhancing LLMs' temporal reasoning ability improves timeline summarization performance.
We contribute to future research by presenting NarrativeReason, a new temporal reasoning dataset.
We present a method for effectively combining temporal reasoning and timeline summarization through a knowledge distillation framework.
The model's strong performance on domain-specific, mental-health-related data supports the generalizability of the approach.
Limitations:
The size and diversity of the NarrativeReason dataset require further examination.
The proposed model still needs to be evaluated on other types of timeline summarization tasks.
The paper lacks a detailed explanation of hyperparameter tuning within the knowledge distillation framework.
Since the evaluation focused on a single domain (mental health), further research is needed on generalizability to other domains.