Meta processes tens of thousands of code review comments every week. This paper presents the development process and results of MetaMate for Code Review (MetaMateCR), a system that provides AI-assisted fixes for code reviewer comments at scale. We fine-tuned the Llama model on 64,000 data points and deployed it to a production environment once offline results reached a satisfactory level. In comparisons with GPT-4o, the resulting LargeLSFT model generated accurate patches in 68% of cases, 9 percentage points higher than GPT-4o, and used more recent Hack functions. Through safety tests, we evaluated the impact of AI patch suggestions on review time and addressed the observed delays through UX improvements. When deployed to production, the LargeLSFT model achieved an ActionableToApplied rate of 19.7%, 9.2 percentage points higher than GPT-4o.