In this paper, we propose the ReCode framework to address a key limitation of large language models (LLMs): their code generation cannot adapt to frequent updates of external library APIs. Mimicking how human programmers adapt to API changes, ReCode trains LLMs to perform version migration on roughly 2,000 training examples and uses a modified string-similarity metric as the reward for reinforcement learning. Experimental results show that ReCode significantly improves the code generation performance of LLMs, especially on the unseen CodeUpdateArena task, while degrading general code generation ability less than supervised fine-tuning. We apply ReCode to various LLMs and reinforcement learning algorithms (GRPO and DAPO) and achieve consistent performance improvements; notably, the trained Qwen2.5-Coder-7B outperforms a 32B-parameter code instruction-tuned model and a reasoning model with the same architecture. The source code is available on GitHub.
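
As an illustration only, the sketch below shows what a string-similarity reward for code generation could look like. The abstract does not specify the exact modification used in ReCode, so this is a hypothetical stand-in based on difflib's SequenceMatcher ratio, scoring a generated snippet against a reference that uses the updated API; the example API names are likewise assumed for demonstration.

```python
import difflib


def similarity_reward(generated_code: str, reference_code: str) -> float:
    """Illustrative reward: string similarity between generated and reference code.

    The exact "modified" similarity metric used by ReCode is not given in the
    abstract; this sketch uses difflib.SequenceMatcher's ratio as a stand-in.
    """
    return difflib.SequenceMatcher(None, generated_code, reference_code).ratio()


# Hypothetical example: a candidate that migrates to the updated API should
# receive a higher reward than one that keeps a deprecated call.
reference = "df.map(lambda x: x + 1)"           # reference using the updated API
candidate_new = "df.map(lambda v: v + 1)"       # close to the reference
candidate_old = "df.applymap(lambda x: x + 1)"  # still uses the deprecated API

print(similarity_reward(candidate_new, reference))  # higher reward
print(similarity_reward(candidate_old, reference))  # lower reward
```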