Advances in large language models (LLMs) have led to substantial improvements across service areas such as chatbots and medical pre-consultation applications. Supervised Fine-Tuning (SFT) is the most common method for adapting LLMs to multi-turn dialogue generation in the medical domain. However, SFT datasets for tasks such as medical pre-consultation typically exhibit an imbalanced turn distribution. Training on such data induces a failure mechanism we term "Format Inertia," in which the model generates repetitive, formally correct, but diagnostically uninformative questions in long medical conversations. To mitigate this failure, we adopt a simple, data-driven method that rebalances the turn distribution of the training dataset. Experimental results demonstrate that our method substantially mitigates Format Inertia in medical pre-consultation.
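The abstract does not specify the exact rebalancing procedure; one minimal sketch, assuming a simple per-turn-count resampling scheme (the function name, data layout, and cap/upsample policy are illustrative assumptions, not the paper's method), is:

```python
import random
from collections import defaultdict

def rebalance_by_turns(dialogues, target_per_bin=None, seed=0):
    """Resample dialogues so every turn-count bin is equally represented.

    `dialogues` is assumed to be a list of dicts whose "turns" key holds
    the list of conversation turns. Over-represented bins are downsampled
    and under-represented bins are upsampled with replacement; this is an
    illustrative scheme, not necessarily the paper's exact procedure.
    """
    rng = random.Random(seed)
    bins = defaultdict(list)
    for d in dialogues:
        bins[len(d["turns"])].append(d)
    if target_per_bin is None:
        # Default: bring every bin up to the size of the largest bin.
        target_per_bin = max(len(group) for group in bins.values())
    balanced = []
    for _, group in sorted(bins.items()):
        if len(group) >= target_per_bin:
            balanced.extend(rng.sample(group, target_per_bin))  # downsample
        else:
            balanced.extend(group)
            # Upsample with replacement to reach the target size.
            balanced.extend(rng.choices(group, k=target_per_bin - len(group)))
    return balanced
```

For example, a dataset dominated by short two-turn exchanges would have its longer conversations duplicated (and its short ones subsampled) until each turn length contributes equally to training.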