This paper investigates the effectiveness of prior prompt engineering (pPE) in reinforcement fine-tuning (RFT). While previous RFT research has focused primarily on algorithms, reward design, and data curation, the design of the pPE—the instruction prepended to queries during training (e.g., step-by-step reasoning guidance)—remains understudied. We investigate whether different pPE approaches can induce distinct behaviors in language models (LMs) after RFT. We convert five strategies used in inference-time prompt engineering (iPE)—reasoning, planning, code-based reasoning, knowledge recall, and null-example exploitation—into pPE and apply them to the Qwen2.5-7B model, evaluating performance on benchmarks such as AIME2024, HumanEval+, and GPQA-Diamond. Experimental results show that all pPE-trained models outperform their iPE-prompted counterparts, with the null-example pPE approach achieving the largest overall gain and the highest improvements on AIME2024 and GPQA-Diamond. Furthermore, using a behavior-classification framework, we show that different pPE strategies instill distinct behavioral styles in the resulting models.
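
As a minimal sketch of the core idea only (not the authors' code, and the prefix strings are illustrative placeholders rather than the paper's exact prompts), pPE amounts to prepending a fixed instruction to every training query before RFT, one prefix per strategy:

```python
# Illustrative sketch: prepending a pPE instruction to training queries for RFT.
# Prefix wordings are hypothetical stand-ins for the five strategy families.
PPE_PREFIXES = {
    "reasoning": "Think through the problem step by step before giving a final answer.",
    "planning": "First outline a plan, then carry it out to solve the problem.",
    "code_based_reasoning": "Write and reason through code to derive the answer.",
    "knowledge_recall": "Recall relevant facts and definitions before answering.",
    "null_example": "Consider simple or degenerate (null) examples to guide the solution.",
}

def build_training_prompt(query: str, strategy: str) -> str:
    """Prepend the chosen pPE instruction to a query used during RFT training."""
    return f"{PPE_PREFIXES[strategy]}\n\nQuestion: {query}"

if __name__ == "__main__":
    print(build_training_prompt(
        "What is the sum of the first 100 positive integers?", "reasoning"))
```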