This paper presents Finetuning-activated Adversarial Behaviors (FAB), a novel attack that exploits fine-tuning to make a compromised large language model (LLM) exhibit malicious behavior. The attack uses meta-learning to embed behaviors that are activated only once a user fine-tunes the model. Before fine-tuning, the target LLM is designed to maintain normal performance and show no malicious behavior, making the attack difficult for users to detect in advance. Experiments demonstrate that FAB is effective against multiple LLMs and diverse attack goals (advertising injection, jailbreaking, and over-refusal), and is robust to a wide range of user-side fine-tuning settings.
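To illustrate the kind of meta-learning objective such an attack could rely on, the following is a minimal MAML-style sketch: an inner loop differentiably simulates a user's benign fine-tuning run, and the outer loss rewards normal behavior before that run but malicious behavior after it. All specifics here (the toy classifier standing in for an LLM, the datasets, inner learning rate, step counts, and equal loss weighting) are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a finetuning-activated objective via bilevel meta-learning.
# Assumption: a tiny classifier stands in for the LLM so the example stays runnable.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

def simulate_finetune(params, x, y, inner_lr=0.1, steps=3):
    """Differentiably simulate the user's fine-tuning run (inner loop)."""
    for _ in range(steps):
        logits = functional_call(model, params, (x,))
        loss = F.cross_entropy(logits, y)
        grads = torch.autograd.grad(loss, tuple(params.values()), create_graph=True)
        params = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}
    return params

# Assumed toy data: a benign task, the user's fine-tuning data, and the
# attacker's target behavior on trigger inputs (here: always predict class 0).
x_benign, y_benign = torch.randn(64, 16), torch.randint(0, 4, (64,))
x_user, y_user = torch.randn(64, 16), torch.randint(0, 4, (64,))
x_trigger, y_malicious = torch.randn(64, 16), torch.zeros(64, dtype=torch.long)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    params = dict(model.named_parameters())

    # (1) Before fine-tuning: the released model must behave normally.
    clean_loss = F.cross_entropy(functional_call(model, params, (x_benign,)), y_benign)

    # (2) After simulated fine-tuning: the malicious behavior should emerge.
    tuned = simulate_finetune(params, x_user, y_user)
    attack_loss = F.cross_entropy(functional_call(model, tuned, (x_trigger,)), y_malicious)

    loss = clean_loss + attack_loss  # outer (meta) objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch the attacker never trains directly on the malicious behavior in the released weights; the gradient flows through the simulated fine-tuning steps, so the behavior only surfaces after a similar update is performed by the victim.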