This paper presents a method for detecting specialized terms and personalizing their explanations so that readers with diverse backgrounds can understand specialized documents. Because existing user-specific fine-tuning approaches require substantial annotation effort and computational resources, we explore efficient and scalable personalization strategies. Specifically, we investigate two strategies: lightweight fine-tuning of open-source models with Low-Rank Adaptation (LoRA), and personalized prompting, which adjusts model behavior at inference time. We also study a hybrid approach that combines limited annotated data with user background signals obtained through unsupervised learning. Experimental results show that the personalized LoRA model outperforms GPT-4 by 21.4% in F1 score and the best-performing oracle baseline by 8.3%. Moreover, it achieves comparable performance using only 10% of the annotated training data, demonstrating its practicality in resource-constrained settings.
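
As a rough illustration of the lightweight fine-tuning strategy mentioned above, the sketch below shows how LoRA adapters might be attached to an open-source encoder for token-level specialized-term detection using the Hugging Face peft library. The backbone model, adapter hyperparameters, and label set here are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch: attaching LoRA adapters to an open-source model for
# token-level specialized-term detection. Model name, hyperparameters,
# and label set are illustrative assumptions, not the paper's setup.
from transformers import AutoModelForTokenClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model_name = "roberta-base"      # hypothetical open-source backbone
labels = ["O", "B-TERM", "I-TERM"]    # BIO tags for specialized terms

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForTokenClassification.from_pretrained(
    base_model_name, num_labels=len(labels)
)

# LoRA keeps the backbone frozen and trains only small low-rank updates,
# so a separate lightweight adapter could be stored per user.
lora_config = LoraConfig(
    task_type=TaskType.TOKEN_CLS,
    r=8,                                # low-rank dimension (assumed)
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query", "value"],  # attention projections in RoBERTa
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()      # only a small fraction is trainable
```

Because only the low-rank adapter weights are trained, per-user personalization would reduce to swapping in a small adapter rather than maintaining a full model copy per reader.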