This paper presents a novel backdoor attack that overcomes a key limitation of existing attacks based on data poisoning or model-structure modification. Whereas prior structural-modification attacks require visible triggers, the proposed method embeds the backdoor directly in the model architecture and activates it with a stealthy, imperceptible trigger. An attacker can therefore modify a pre-trained model and redistribute it, posing a threat to downstream users. We verify the attack's effectiveness and the trigger's stealthiness through experiments on standard computer vision benchmarks, and we emphasize that the attack evades both manual inspection and advanced detection tools.
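To make the idea of an architecture-level backdoor concrete, the following is a minimal sketch assuming a PyTorch image classifier. The names (`BackdoorGate`, `BackdooredClassifier`, `target_class`) and the corner-patch trigger test are illustrative assumptions, not the paper's actual construction.

```python
# Illustrative sketch of a structure-level backdoor (NOT the paper's method):
# a parameter-free gate is wired into the forward pass so that inputs
# carrying a trigger pattern are rerouted to an attacker-chosen class.
import torch
import torch.nn as nn

class BackdoorGate(nn.Module):
    """Hypothetical parameter-free gate: fires when a high-variance
    patch appears in the top-left corner of the input image."""
    def forward(self, x):
        # x: (N, C, H, W) with values in [0, 1]; inspect a 4x4 corner patch
        patch = x[:, :, :4, :4]
        score = patch.flatten(1).var(dim=1)     # crude trigger statistic
        return (score > 0.15).float()           # 1.0 if trigger present, else 0.0

class BackdooredClassifier(nn.Module):
    """Wraps a benign backbone; the backdoor lives in the architecture,
    not in any learned weights (threshold and target class are assumed)."""
    def __init__(self, backbone: nn.Module, target_class: int = 0):
        super().__init__()
        self.backbone = backbone
        self.gate = BackdoorGate()              # structural addition, no parameters
        self.target_class = target_class

    def forward(self, x):
        logits = self.backbone(x)
        g = self.gate(x).unsqueeze(1)           # (N, 1)
        # When the gate fires, overwrite logits so target_class dominates;
        # clean inputs pass through the backbone unchanged.
        forced = torch.full_like(logits, -10.0)
        forced[:, self.target_class] = 10.0
        return (1 - g) * logits + g * forced
```

On clean inputs the wrapped model behaves identically to the original backbone, which is what makes such a modification plausible to redistribute undetected; the malicious behavior surfaces only when the structural gate fires.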