This study investigates memorization mechanisms and contributing factors in fine-tuned large language models (LLMs), focusing on privacy concerns in the healthcare domain. Using the PHEE dataset of pharmacovigilance events, we examine how different aspects of the fine-tuning process affect a model's tendency to memorize training data. We detect memorized data using two main methods: membership inference attacks and a generation task with prompt prefixes. We analyze the roles of different weight matrices in the transformer architecture, the relationship between perplexity and memorization, and the effect of increasing the rank in Low-Rank Adaptation (LoRA) fine-tuning. Key findings include: (1) the value and output matrices contribute more to memorization than the query and key matrices; (2) lower perplexity in the fine-tuned model corresponds to increased memorization; and (3) higher LoRA ranks increase memorization, but with diminishing returns. These results illuminate the trade-off between model performance and privacy risk in fine-tuned LLMs, and offer guidance for developing more effective and responsible strategies for applying LLMs while managing data privacy concerns.
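The sketch below illustrates, under stated assumptions, the two ingredients the abstract refers to: LoRA fine-tuning restricted to particular attention weight matrices, and a prefix-based generation check for memorized training text. It is not the authors' exact setup; the base model name, the module names (`v_proj`, `o_proj` vs. `q_proj`, `k_proj`), the rank, and the generation settings are illustrative placeholders that depend on the specific architecture used.

```python
# Minimal sketch (assumed setup, not the paper's exact code) of
# (a) LoRA applied only to chosen attention projections and
# (b) a prompt-prefix generation test for verbatim memorization.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # hypothetical base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Restrict LoRA updates to the value and output projections, the matrices
# the study finds contribute more to memorization than query/key.
lora_cfg = LoraConfig(
    r=16,                                  # LoRA rank; higher ranks tend to memorize more
    lora_alpha=32,
    target_modules=["v_proj", "o_proj"],   # compare against ["q_proj", "k_proj"]
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
# ... fine-tune on PHEE training examples here ...

def appears_memorized(prefix: str, true_suffix: str, max_new_tokens: int = 50) -> bool:
    """Greedy-decode a continuation of a training-set prefix and check
    whether it reproduces the original suffix verbatim."""
    inputs = tokenizer(prefix, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    continuation = tokenizer.decode(
        output_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return continuation.strip().startswith(true_suffix.strip())
```

Counting how often `appears_memorized` returns true over held-in versus held-out examples gives a simple memorization rate that can be compared across target matrices and LoRA ranks.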