This paper presents contextual fine-tuning, a novel method for fine-tuning large language models (LLMs). Extending existing prompting techniques, we guide the learning process with directive prompts that mimic human cognitive strategies. This guidance helps the model better understand and interpret domain-specific knowledge, enhancing its ability to adapt rapidly to novel datasets, such as those in healthcare and finance. Experimental results demonstrate that the proposed method improves both the speed and the resulting performance of fine-tuning.
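To make the idea concrete, the following is a minimal sketch, assuming contextual fine-tuning amounts to prepending a directive prompt to each training document and masking the prompt tokens from the loss, so that gradient updates on the document content are shaped by the instructional context. The specific prompt text and the helper `contextual_batch` are illustrative assumptions, not the paper's implementation.

```python
# Sketch of contextual fine-tuning: each document is prefixed with a
# directive "contextual prompt"; the loss is computed only on the
# document tokens, so the model learns the content under that guidance.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper targets large-scale LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# One illustrative directive prompt mimicking a human learning strategy.
contextual_prompt = (
    "Focus on the key concepts in the following text and relate them "
    "to what you already know:\n"
)

def contextual_batch(document: str) -> dict:
    """Tokenize prompt + document, masking the prompt out of the loss."""
    prompt_ids = tokenizer(contextual_prompt, return_tensors="pt").input_ids
    doc_ids = tokenizer(document, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, doc_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100  # ignore prompt tokens in the loss
    return {"input_ids": input_ids, "labels": labels}

# One optimization step on a single document (full training loop elided).
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
batch = contextual_batch("Atrial fibrillation is a common cardiac arrhythmia.")
loss = model(**batch).loss
loss.backward()
optimizer.step()
```

Masking the prompt with the label value -100 (the ignore index of PyTorch's cross-entropy loss) keeps the directive as context without training the model to reproduce it.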