This paper presents the first comprehensive evaluation of data memorization in large language models (LLMs) adapted for healthcare. We systematically analyzed three common adaptation scenarios to assess the frequency, nature, amount, and potential impact of memorization: continuous pretraining on a medical corpus, fine-tuning on a standard medical benchmark, and fine-tuning on real clinical data, including over 13,000 hospitalization records from the Yale New Haven Health System. Results show that memorization occurs at a significantly higher frequency in all adaptation scenarios than in the general domain, which carries implications for the development and adoption of LLMs in healthcare. We categorize memorized content into three types: informative (e.g., accurate reproduction of clinical guidelines and biomedical references), uninformative (e.g., repetitive disclaimers or formulaic language from medical documents), and detrimental (e.g., reproduction of dataset-specific or sensitive clinical content). We close with practical recommendations to promote beneficial memorization, minimize uninformative memorization, and mitigate detrimental memorization.