This paper proposes a novel attack technique, the Spectral Masking and Interpolation Attack (SMIA), which exposes serious vulnerabilities in voice authentication systems (VAS). SMIA strategically manipulates frequency ranges inaudible to the human ear to modulate AI-generated voices, producing adversarial samples that bypass existing anti-spoofing countermeasures (CMs). Through experiments simulating real-world conditions, we evaluate the effectiveness of SMIA against state-of-the-art (SOTA) models, achieving attack success rates of at least 82% against combined VAS/CM systems, at least 97.5% against standalone speaker authentication systems, and 100% against standalone countermeasures. These results demonstrate that current security systems are inadequate against adaptive adversarial attacks.
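The abstract does not specify SMIA's algorithm, so the following is only a minimal conceptual sketch of the general idea it describes: confining a perturbation to spectral bins above a nominal audibility cutoff while leaving audible bands untouched. All names and parameters here (`cutoff_hz`, `eps`, the random perturbation) are illustrative assumptions, not values or steps from the paper.

```python
import numpy as np
from scipy.signal import stft, istft

def inaudible_band_perturbation(wave, sr, cutoff_hz=16000.0, eps=0.05, seed=0):
    """Sketch: add a perturbation only to STFT bins at or above a
    nominal audibility cutoff. cutoff_hz and eps are hypothetical;
    the input must be sampled high enough (e.g., 44.1 or 48 kHz)
    for content above the cutoff to exist at all."""
    f, t, Z = stft(wave, fs=sr, nperseg=1024)
    rng = np.random.default_rng(seed)
    mask = f >= cutoff_hz  # bins treated as inaudible to human listeners
    noise = rng.standard_normal(Z[mask].shape) + 1j * rng.standard_normal(Z[mask].shape)
    Z[mask] += eps * np.abs(Z).max() * noise  # perturb only the masked bins
    _, adv = istft(Z, fs=sr, nperseg=1024)
    return adv[: len(wave)]
```

In an actual adversarial attack, the perturbation in those bins would be optimized against the target verification or countermeasure model rather than drawn at random; this sketch only illustrates where in the spectrum such a perturbation would be placed.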