This paper focuses on leveraging multilingual parallel data to improve the performance of large language models (LLMs) on low-resource languages. We highlight the limitations of existing pre-training and instruction-tuning approaches that rely on unaligned multilingual data, and we present TED2025, a large-scale, high-quality multilingual parallel corpus spanning 113 languages, built from TED Talks. Using TED2025, we study how strategies such as continual pre-training and instruction tuning can exploit parallel data to improve LLMs. Our experiments demonstrate that models trained on multilingual parallel data outperform those trained on unaligned multilingual data across six multilingual evaluation benchmarks.