This paper addresses generative adversarial attacks, in which a perturbation generator is trained on a white-box surrogate model and then applied to a black-box victim model. Unlike conventional iterative attacks, generative attacks offer excellent inference-time efficiency, scalability, and transferability; however, prior work has not fully exploited the representational capacity of generative models to preserve and leverage semantic information. We point out that the generator's intermediate activations contain rich semantic features, such as object boundaries and coarse shapes, yet remain underutilized, which limits how well perturbations align with object-relevant regions. To address this issue, we propose a semantic structure-aware attack framework based on a Mean Teacher, which serves as a temporally smoothed feature reference. Feature distillation against this reference strengthens the semantic consistency between the student generator's early-layer activations and the teacher's semantically rich activations. Guided by our empirical findings, we anchor perturbation generation to the semantically important early intermediate blocks of the generator, inducing progressive adversarial perturbations in semantically salient regions and thereby substantially improving adversarial transferability. Extensive experiments across diverse models, domains, and tasks demonstrate consistent improvements over existing state-of-the-art generative attacks, evaluated comprehensively with existing metrics and the newly proposed Accidental Correction Rate (ACR).
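To make the Mean Teacher feature-distillation idea above concrete, the following is a minimal PyTorch-style sketch of how a temporally smoothed teacher and a feature-consistency loss might be wired together; the names (`update_teacher`, `feature_distillation_loss`, `forward_features`, the block indices, and the coefficients) are illustrative assumptions and not the authors' actual implementation.

```python
# Hypothetical sketch: EMA teacher update + feature distillation for a
# perturbation generator. Names and hyperparameters are assumptions.
import copy
import torch
import torch.nn.functional as F

def update_teacher(teacher, student, ema_decay=0.999):
    """EMA update: the teacher is a temporally smoothed copy of the student."""
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(ema_decay).add_(s_p, alpha=1.0 - ema_decay)

def feature_distillation_loss(student_feats, teacher_feats):
    """Align the student's early-block activations with the teacher's
    semantically rich activations (MSE formulation assumed here)."""
    return sum(F.mse_loss(s, t.detach())
               for s, t in zip(student_feats, teacher_feats)) / len(student_feats)

# Sketch of one training step (assumes the generator exposes intermediate
# activations via a hypothetical forward_features(x, blocks=...) method):
#   teacher = copy.deepcopy(student).eval()
#   s_feats = student.forward_features(x, blocks=(1, 2))        # early blocks
#   with torch.no_grad():
#       t_feats = teacher.forward_features(x, blocks=(1, 2))
#   loss = adversarial_loss + lambda_fd * feature_distillation_loss(s_feats, t_feats)
#   loss.backward(); optimizer.step(); optimizer.zero_grad()
#   update_teacher(teacher, student)
```

In this reading, the EMA teacher provides a stable semantic reference that the student's early blocks are pulled toward, while the adversarial objective drives the perturbation itself; the relative weighting (here `lambda_fd`) is an assumed hyperparameter.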