Manual completion of the "Impression" section of radiology reports is a major contributor to radiologist burnout. To address this, the authors propose a coarse-to-fine framework that automatically generates and personalizes impressions from clinical findings using open-source large language models (LLMs). The system first generates a draft impression, then refines it with reinforcement learning from human feedback (RLHF) to ensure factual accuracy and adapt the output to each radiologist's reporting style. The LLaMA and Mistral models were fine-tuned on a large dataset of reports from the University of Chicago Medicine. The approach is designed to substantially reduce administrative workload and improve reporting efficiency while maintaining high standards of clinical accuracy.
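The two-stage control flow described above can be sketched as follows. This is a minimal illustrative outline, not the authors' implementation: `generate_draft`, `refine_draft`, and the `style_profile` dictionary are hypothetical placeholders standing in for the fine-tuned LLaMA/Mistral draft model and the RLHF-tuned personalization stage.

```python
# Hypothetical sketch of the coarse-to-fine pipeline (placeholders only;
# a real system would replace these functions with LLM inference calls).

def generate_draft(findings: str) -> str:
    """Coarse stage: produce a draft impression from the findings text."""
    # Placeholder for a call to the fine-tuned draft model.
    return f"Draft impression summarizing: {findings}"

def refine_draft(draft: str, style_profile: dict) -> str:
    """Fine stage: adapt the draft to one radiologist's style.

    In the described framework this role is played by an RLHF-tuned
    refinement model; here a simple prefix stands in for style adaptation.
    """
    prefix = style_profile.get("preferred_prefix", "IMPRESSION:")
    return f"{prefix} {draft}"

def generate_impression(findings: str, style_profile: dict) -> str:
    """End-to-end coarse-to-fine generation: draft, then personalize."""
    return refine_draft(generate_draft(findings), style_profile)

if __name__ == "__main__":
    style = {"preferred_prefix": "IMPRESSION:"}
    print(generate_impression("Mild cardiomegaly. No acute infiltrate.", style))
```

The separation into two callable stages mirrors the framework's design: the draft stage optimizes for factual coverage of the findings, while the refinement stage can be retrained per radiologist without touching the draft model.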