This paper proposes a novel framework for improving the transferability of generative adversarial attacks. Existing generative attacks suffer from the limited representational power of their generators, which causes perturbations to be misaligned with semantically meaningful object regions. We present a Mean Teacher-based, semantic structure-aware attack framework that generates perturbations by leveraging semantic information extracted from the generator's intermediate activations. Specifically, we apply feature distillation, enforcing consistency between the initial-layer activations of the student model and those of a semantically rich teacher model, so that adversarial perturbations target semantically significant regions. Experiments across a range of models, domains, and tasks show that our method outperforms existing state-of-the-art approaches. We also introduce a new evaluation metric, the Accidental Correction Rate (ACR).
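The core mechanism named above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: NumPy arrays stand in for feature maps, the function names (`ema_update`, `feature_distillation_loss`) and the momentum value are hypothetical, and the consistency term is assumed to be a mean-squared error between student and teacher activations, with the teacher maintained as an exponential moving average of the student as in the standard Mean Teacher scheme.

```python
import numpy as np

def ema_update(teacher_w, student_w, momentum=0.99):
    # Mean Teacher: teacher parameters track an exponential moving
    # average of the student parameters (momentum value is illustrative).
    return momentum * teacher_w + (1.0 - momentum) * student_w

def feature_distillation_loss(student_feats, teacher_feats):
    # Consistency between intermediate activations: mean squared error
    # over all elements of the feature maps.
    return float(np.mean((student_feats - teacher_feats) ** 2))

rng = np.random.default_rng(0)
# Stand-ins for initial-layer activations: (batch, channels, H, W).
student = rng.normal(size=(2, 8, 4, 4))
teacher = rng.normal(size=(2, 8, 4, 4))

loss = feature_distillation_loss(student, teacher)
teacher = ema_update(teacher, student)
```

In training, `loss` would be added to the attack objective so the generator's early activations stay aligned with the teacher's semantically rich features; identical activations drive the loss to zero.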