Large language models (LLMs) achieve impressive results in text summarization, but their performance tends to degrade when they are applied to specialized domains outside the pretraining distribution. Fine-tuning can improve summarization quality, but it requires high-quality labeled data. In this study, we explore continuous pretraining, a scalable and self-supervised learning approach, to adapt LLMs to downstream summarization tasks involving noisy real-world conversations. Using a large-scale, unlabeled business conversation dataset, we conduct extensive experiments to determine whether continuous pretraining improves the model's ability to summarize conversations. Our results demonstrate that continuous pretraining yields significant gains on both in-domain and out-of-domain summarization benchmarks while maintaining strong generalization and robustness. We also analyze the effectiveness of data selection strategies, providing practical guidance for applying continuous pretraining to summarization-centric industrial applications.