This paper presents a simple approach to integrating safety into imitation learning (IL) in settings where ensuring constraint compliance is challenging, such as operation near a system's physical limits. Existing IL methods, such as behavioral cloning (BC), struggle to enforce constraints, often resulting in suboptimal performance on high-precision tasks. We experimentally validate the proposed approach in simulations of an autonomous racing task using both full-state and image feedback, demonstrating improved constraint satisfaction and more consistent task performance compared to BC.