This paper observes that existing backdoor attacks pursue stealthiness only in the input space or the feature space, leaving them exposed to a range of defense techniques. By analyzing 12 common backdoor attacks against 17 defenses, we find that even attacks that are stealthy in the input and feature spaces remain vulnerable to defenses that inspect the parameter space. Investigating the cause of this vulnerability, we find that such attacks leave prominent backdoor-related neurons in the parameter space. We therefore propose Grond, a new supply-chain attack that additionally accounts for stealthiness in the parameter space. Grond improves parameter-space stealthiness by restricting parameter changes during backdoor injection through its Adversarial Backdoor Injection (ABI) module. Experimental results show that Grond outperforms the 12 existing backdoor attacks against state-of-the-art defenses (including adaptive defenses) on CIFAR-10, GTSRB, and an ImageNet subset. We also show that ABI consistently improves the effectiveness of common backdoor attacks.
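To make the ABI idea concrete, below is a minimal PyTorch sketch of one plausible reading of "restricting parameter changes during backdoor injection": at each training step, the channels of a chosen layer that react most differently to triggered versus clean inputs are scored and temporarily masked, so the backdoor cannot concentrate in a few prominent neurons. The layer choice, `prune_ratio`, hook-based masking, and all function names here are illustrative assumptions, not the paper's exact algorithm.

```python
# Illustrative sketch only; hyperparameters and masking strategy are assumptions.
import torch
import torch.nn.functional as F

def backdoor_prominence(model, layer, clean_x, triggered_x):
    """Score each channel of `layer` by how differently it activates on
    triggered vs. clean inputs (higher = more backdoor-related).
    Assumes 4D conv activations of shape (N, C, H, W)."""
    acts = {}
    handle = layer.register_forward_hook(lambda m, i, out: acts.update(out=out))
    with torch.no_grad():
        model(clean_x)
        clean_act = acts["out"].mean(dim=(0, 2, 3))     # per-channel mean
        model(triggered_x)
        trig_act = acts["out"].mean(dim=(0, 2, 3))
    handle.remove()
    return (trig_act - clean_act).abs()

def abi_step(model, layer, optimizer, clean_x, clean_y,
             triggered_x, target_y, prune_ratio=0.05):
    """One training step: temporarily zero the most backdoor-prominent
    channels so the backdoor is forced to spread across many neurons."""
    scores = backdoor_prominence(model, layer, clean_x, triggered_x)
    k = max(1, int(prune_ratio * scores.numel()))
    top = scores.topk(k).indices

    # Mask the prominent channels during the forward/backward pass
    # (out * mask keeps the operation differentiable).
    def mask_hook(module, inputs, out):
        mask = torch.ones_like(out)
        mask[:, top] = 0
        return out * mask

    handle = layer.register_forward_hook(mask_hook)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(clean_x), clean_y) + \
           F.cross_entropy(model(triggered_x), target_y)
    loss.backward()
    optimizer.step()
    handle.remove()
    return loss.item()
```

Under this reading, masking the prominent channels during the poisoned update is what keeps per-neuron parameter changes small, which is the property that parameter-space defenses (e.g., pruning-based ones) exploit.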