To address the limitations of small language models in specialized domains for low-resource languages such as Persian, this study introduces a new dataset consisting of 20,000 doctor-patient question-and-answer pairs and a 90-million-token corpus crawled from medical journals. Using this dataset, we improved the medical knowledge of the baseline model, aya-expanse-8b, through parameter-efficient fine-tuning.
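As a minimal sketch of what such parameter-efficient fine-tuning could look like, the snippet below uses LoRA via the Hugging Face `peft` library. The paper does not specify the PEFT method or hyperparameters, so the rank, scaling factor, and target modules here are illustrative assumptions rather than the study's reported configuration.

```python
# Illustrative LoRA setup for aya-expanse-8b; hyperparameters are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "CohereForAI/aya-expanse-8b"  # baseline model named in the text
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA trains small low-rank adapter matrices instead of all 8B weights.
lora_config = LoraConfig(
    r=16,                                 # low-rank dimension (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of parameters
```

The wrapped model can then be trained on the question-answer pairs with a standard causal language modeling objective, updating only the adapter weights.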