This study proposes a framework for fine-tuning a large-scale language model (LLM) with differential privacy (DP) to detect multiple anomalies in radiology report text. By injecting calibrated noise during fine-tuning, we aim to mitigate the privacy risks associated with sensitive patient data and prevent data leakage while maintaining classification performance. Using the MIMIC-CXR and CT-RATE datasets (50,232 reports collected from 2011 to 2019), we fine-tuned three model architectures (BERT-medium, BERT-small, and ALBERT-base) with differentially private low-rank adaptation (DP-LoRA). We evaluated model performance under privacy budgets of ε = 0.01, 0.1, 1.0, and 10.0, using the weighted F1 score to quantitatively analyze the privacy-utility tradeoff.
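As a concrete illustration of how DP-LoRA fine-tuning of this kind can be set up, the sketch below combines LoRA adapters with DP-SGD training. It is a minimal sketch, assuming the Hugging Face transformers and peft libraries together with the Opacus privacy engine; the checkpoint name, label count, LoRA hyperparameters, and privacy parameters (delta, epochs, clipping norm) are illustrative assumptions rather than the study's exact configuration.

```python
# Minimal DP-LoRA sketch: LoRA adapters trained with DP-SGD via Opacus.
# All names and hyperparameters below are illustrative assumptions.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model
from opacus import PrivacyEngine

NUM_LABELS = 14          # assumed number of anomaly labels
TARGET_EPSILON = 1.0     # one of the privacy budgets evaluated (0.01, 0.1, 1.0, 10.0)
TARGET_DELTA = 1e-5      # assumed delta

# Base encoder; "prajjwal1/bert-small" is an assumed checkpoint standing in for BERT-small.
model = AutoModelForSequenceClassification.from_pretrained(
    "prajjwal1/bert-small",
    num_labels=NUM_LABELS,
    problem_type="multi_label_classification",
)

# Freeze the base model and attach low-rank adapters; only LoRA weights
# (and the classification head) are updated during fine-tuning.
lora_cfg = LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16,
                      target_modules=["query", "value"], lora_dropout=0.1)
model = get_peft_model(model, lora_cfg)

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=5e-4
)

# Dummy tokenized batch standing in for the MIMIC-CXR / CT-RATE reports.
input_ids = torch.randint(0, 30000, (64, 128))
attention_mask = torch.ones_like(input_ids)
labels = torch.randint(0, 2, (64, NUM_LABELS)).float()
train_loader = DataLoader(TensorDataset(input_ids, attention_mask, labels), batch_size=16)

# The PrivacyEngine clips per-sample gradients and adds calibrated Gaussian noise
# (DP-SGD), choosing a noise multiplier that meets the target (epsilon, delta)
# over the stated number of epochs.
privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private_with_epsilon(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    target_epsilon=TARGET_EPSILON,
    target_delta=TARGET_DELTA,
    epochs=3,
    max_grad_norm=1.0,
)

model.train()
for ids, mask, y in train_loader:
    optimizer.zero_grad()
    out = model(input_ids=ids, attention_mask=mask, labels=y)
    out.loss.backward()
    optimizer.step()

print(f"spent epsilon: {privacy_engine.get_epsilon(TARGET_DELTA):.2f}")
```

At smaller target budgets (ε = 0.1 or 0.01), the engine must select a larger noise multiplier, which is where the privacy-utility tradeoff measured by the weighted F1 score becomes visible.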