This paper analyzes how invariant regularization can resolve the tradeoff between robustness and accuracy in adversarial training, and proposes a novel method, Asymmetric Representation-regularized Adversarial Training (ARAT), to overcome this tradeoff. We identify two problems with existing invariant regularization methods: gradient conflict between the invariance and classification objectives, and a mixed-distribution problem caused by the distributional difference between clean and adversarial inputs. ARAT addresses the gradient conflict using an asymmetric invariance loss with a stop-gradient operation and a predictor, and addresses the mixed-distribution problem with a split-BatchNorm architecture. Experimental results show that ARAT outperforms existing methods, offering a new perspective on knowledge distillation-based defenses.
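To make the asymmetric design concrete, the sketch below shows one plausible form of such a loss: the adversarial-branch representation passes through a predictor head, while the clean-branch representation is treated as a constant target via stop-gradient. All names, shapes, and the cosine-based loss are illustrative assumptions, not the paper's exact formulation; `stop_grad` is a no-op here because plain NumPy has no autodiff (in PyTorch it would be `tensor.detach()`).

```python
import numpy as np

def stop_grad(x):
    # Hypothetical stand-in: in an autodiff framework this would block
    # gradients to the clean branch; in NumPy it is just a copy.
    return np.asarray(x, dtype=float).copy()

def cosine(a, b):
    # Cosine similarity between two representation vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def asymmetric_invariant_loss(z_clean, z_adv, predictor):
    # Asymmetry: only the adversarial representation goes through the
    # predictor, and the clean representation is a fixed target, so the
    # invariance term does not drag clean features toward the adversarial
    # distribution (illustrative sketch, not the paper's exact loss).
    p_adv = predictor(z_adv)
    target = stop_grad(z_clean)
    return 1.0 - cosine(p_adv, target)  # 0 when perfectly aligned, up to 2

# Toy usage with a random linear predictor (hypothetical values).
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
predictor = lambda z: W @ z
z_clean = rng.normal(size=8)
z_adv = z_clean + 0.1 * rng.normal(size=8)  # feature-space perturbation
loss = asymmetric_invariant_loss(z_clean, z_adv, predictor)
```

In this sketch, minimizing the loss updates only the adversarial branch and the predictor, which is how stop-gradient/predictor designs avoid conflicting gradients on the clean branch.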