In this paper, we propose TrainVerify, a system for formally verifying the correctness of distributed training of large language models (LLMs). Distributed training of LLMs across thousands of devices is extremely expensive, and silent errors in unverified training can waste millions of GPU hours. TrainVerify formally verifies that a distributed parallel execution plan is mathematically equivalent to the model's logical specification. Because the sheer scale of LLMs makes direct verification intractable, TrainVerify introduces shape-reduction techniques and a staged parallel verification algorithm that significantly reduce complexity while preserving formal correctness. It has been successfully applied to verify the training plans of state-of-the-art LLMs, including Llama3 (405B) and DeepSeek-V3 (671B).
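
To make the core idea concrete, the following is a minimal sketch, not TrainVerify's actual implementation: it uses the Z3 SMT solver to check that a hypothetical two-way column-parallel sharding of a matrix multiplication is equivalent to its logical specification, with shapes reduced to tiny 2x2 symbolic matrices so the query stays small. The sharding scheme, variable names, and shape sizes here are illustrative assumptions.

```python
# Sketch: verify that a 2-way column-parallel matmul equals the logical matmul
# on reduced symbolic shapes (2x2), in the spirit of equivalence checking
# between a parallel execution plan and a logical model specification.
from z3 import Real, Solver, Or, unsat

def symbolic_matrix(name, rows, cols):
    """Build a matrix of fresh symbolic reals."""
    return [[Real(f"{name}_{i}_{j}") for j in range(cols)] for i in range(rows)]

def matmul(A, B):
    """Symbolic matrix multiplication over Z3 expressions."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

# Logical specification: Y = X @ W with fully symbolic X and W.
X = symbolic_matrix("x", 2, 2)
W = symbolic_matrix("w", 2, 2)
Y_logical = matmul(X, W)

# Parallel plan: split W column-wise across two "devices"; each computes
# X @ W_shard, and the outputs are concatenated along the column dimension.
W0 = [[row[0]] for row in W]   # shard 0: first column of W
W1 = [[row[1]] for row in W]   # shard 1: second column of W
Y0, Y1 = matmul(X, W0), matmul(X, W1)
Y_parallel = [y0 + y1 for y0, y1 in zip(Y0, Y1)]  # concatenate columns

# Ask Z3 whether any input makes the two plans differ; unsat means equivalent.
solver = Solver()
solver.add(Or([Y_logical[i][j] != Y_parallel[i][j]
               for i in range(2) for j in range(2)]))
print("equivalent" if solver.check() == unsat else "NOT equivalent")
```

Reducing the symbolic shapes (here to 2x2) is what keeps the solver query tractable; the contribution of a system like TrainVerify is to justify that such reductions preserve the verification result for the full-scale model.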