This paper explores whether improving the temporal reasoning ability of large language models (LLMs) can improve the quality of timeline summarization, the task of summarizing long texts that contain sequential events, such as social media threads. We introduce a new dataset, NarrativeReason, which focuses on temporal relationships among sequential events within narratives, unlike existing temporal reasoning datasets that mainly deal with pairwise event relationships. We then combine temporal reasoning with timeline summarization through a knowledge distillation framework: we first fine-tune a teacher model on the temporal reasoning task and then distill this knowledge into a student model while simultaneously training it on the timeline summarization task. Experimental results show that the proposed model achieves strong performance on a domain-specific, mental health-related timeline summarization task involving long social media threads with repeated events and mixed emotions, highlighting the importance and generalizability of leveraging temporal reasoning to improve timeline summarization.
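To make the distillation setup concrete, the following is a minimal sketch of how a combined training objective of this kind is commonly implemented, assuming a standard soft-label distillation scheme in PyTorch; the weighting coefficient `alpha` and temperature `T` are illustrative hyperparameters, not values reported in the paper.

```python
# Hypothetical sketch: combine a timeline-summarization task loss with a
# distillation loss that transfers the temporal-reasoning teacher's output
# distribution to the student. Hyperparameters here are illustrative only.
import torch
import torch.nn.functional as F

def distillation_step(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    # Hard-label loss on the summarization task (token-level cross-entropy).
    task_loss = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )
    # Soft-label loss: KL divergence between temperature-scaled teacher and
    # student distributions, scaled by T^2 as is conventional in distillation.
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * kd_loss + (1.0 - alpha) * task_loss

# Example usage with random tensors (batch of 2, sequence length 4, vocab of 10):
if __name__ == "__main__":
    student = torch.randn(2, 4, 10, requires_grad=True)
    teacher = torch.randn(2, 4, 10)
    labels = torch.randint(0, 10, (2, 4))
    loss = distillation_step(student, teacher, labels)
    loss.backward()
    print(f"combined loss: {loss.item():.4f}")
```

In such a setup, the summarization term keeps the student on task while the distillation term injects the teacher's temporal-reasoning behavior; how the two terms are balanced in the paper's actual framework is not specified here.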