This study proposes a framework for fine-tuning a large language model (LLM) with differential privacy (DP) for multi-label disease classification from medical imaging report text. By injecting calibrated noise during fine-tuning, we aim to mitigate the privacy risks associated with sensitive patient data and prevent data leakage while maintaining classification performance. Using 50,232 reports from the publicly available MIMIC-CXR chest radiograph and CT-RATE computed tomography datasets, collected from 2011 to 2019, we fine-tuned three model architectures (BERT-medium, BERT-small, and ALBERT-base) with differentially private low-rank adaptation (DP-LoRA) to classify 14 labels from the MIMIC-CXR dataset and 18 labels from the CT-RATE dataset. We evaluated each model with the weighted F1 score at several privacy levels (privacy budget ε ∈ {0.01, 0.1, 1.0, 10.0}) to quantify the privacy-utility tradeoff. The experiments revealed a clear privacy-utility tradeoff across both datasets and all three models. Under moderate privacy guarantees, the DP fine-tuned models achieved weighted F1 scores of 0.88 on MIMIC-CXR and 0.59 on CT-RATE, remaining relatively close to the non-private LoRA baselines (0.90 and 0.78, respectively). In conclusion, differentially private fine-tuning with LoRA enables effective, privacy-preserving multi-disease classification, addressing key challenges of fine-tuning LLMs on sensitive medical data.
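To make the mechanism concrete, the sketch below illustrates the core idea behind DP fine-tuning of LoRA adapters: only the low-rank adapter weights are trainable, per-example gradients are clipped to a norm bound, and Gaussian noise is added before the parameter update. This is a minimal, hypothetical illustration in plain PyTorch, not the pipeline used in the study; the layer sizes, rank, clipping norm, and noise multiplier are placeholder values, and in practice the noise multiplier would be calibrated to the target privacy budget ε with a privacy accountant (for example, the one provided by a DP library such as Opacus).

```python
# Illustrative DP-SGD step on LoRA adapter weights only (not the authors' code).
# All hyperparameters and shapes below are assumptions for demonstration.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update x @ A @ B."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)          # base weights stay frozen
        self.lora_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.lora_a @ self.lora_b) * self.scale

def dp_sgd_step(model, loss_fn, xb, yb, lr=1e-3, clip_norm=1.0, noise_mult=1.0):
    """One DP-SGD update: clip each per-example gradient to clip_norm, sum,
    add Gaussian noise, then apply the averaged noisy gradient to the
    trainable (LoRA) parameters only."""
    params = [p for p in model.parameters() if p.requires_grad]
    grad_sums = [torch.zeros_like(p) for p in params]

    for x, y in zip(xb, yb):                       # per-example gradients
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        clip = (clip_norm / (total_norm + 1e-6)).clamp(max=1.0)
        for s, g in zip(grad_sums, grads):
            s += g * clip                          # clip to norm <= clip_norm

    batch = xb.shape[0]
    with torch.no_grad():
        for p, s in zip(params, grad_sums):
            noise = torch.normal(0.0, noise_mult * clip_norm, size=p.shape)
            p -= lr * (s + noise) / batch          # noisy, averaged gradient

# Toy usage on random features standing in for encoded report text,
# with a 14-dimensional multi-label head (as in the MIMIC-CXR label set).
model = nn.Sequential(LoRALinear(nn.Linear(128, 64)), nn.ReLU(),
                      LoRALinear(nn.Linear(64, 14)))
loss_fn = nn.BCEWithLogitsLoss()                   # multi-label objective
xb, yb = torch.randn(16, 128), torch.randint(0, 2, (16, 14)).float()
dp_sgd_step(model, loss_fn, xb, yb)
```

Because the base weights are frozen, only the small LoRA matrices receive clipped, noised gradients, which is what keeps the utility loss manageable at moderate privacy budgets.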