This paper demonstrates the utility of multi-parallel data for improving the performance of multilingual large language models (LLMs), including in low-resource settings. We highlight the limitations of existing pre-training and instruction tuning approaches that rely on unaligned multilingual data, and introduce TED2025, a large-scale, high-quality multi-parallel corpus based on TED Talks that spans 113 languages, with up to 50 languages aligned in parallel. Using TED2025, we explore strategies for leveraging multi-parallel data, including continued pre-training and instruction tuning. Experiments on six multilingual benchmarks show that models trained on multi-parallel data outperform those trained on unaligned multilingual data.