This paper presents a method for improving the robustness of Behavior Cloning (BC). While BC is an effective imitation learning technique that trains policies using only expert state-action pairs, the resulting policies are susceptible to measurement errors and adversarial perturbations at deployment time, which can drive the agent toward suboptimal actions. We show that applying global Lipschitz regularization to the learned policy network ensures robustness against bounded-norm perturbations. Furthermore, we propose a method for constructing a Lipschitz neural network that guarantees this robustness, and we validate the approach experimentally across several Gymnasium environments.
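To make the idea concrete, the sketch below shows one common way to obtain a globally Lipschitz policy network: spectrally normalizing each linear layer and composing with 1-Lipschitz activations, so the whole network is 1-Lipschitz in the l2 norm and the action deviation is bounded by the size of the state perturbation. This is a minimal illustration only; the class name `LipschitzPolicy`, the layer sizes, and the use of PyTorch's spectral-norm parametrization are assumptions and may differ from the construction proposed in the paper.

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm


class LipschitzPolicy(nn.Module):
    """1-Lipschitz MLP policy (illustrative): each linear layer is spectrally
    normalized (spectral norm <= 1) and combined with 1-Lipschitz tanh
    activations, so ||pi(s + delta) - pi(s)||_2 <= ||delta||_2."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            spectral_norm(nn.Linear(state_dim, hidden)),
            nn.Tanh(),
            spectral_norm(nn.Linear(hidden, hidden)),
            nn.Tanh(),
            spectral_norm(nn.Linear(hidden, action_dim)),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def bc_loss(policy: LipschitzPolicy,
            states: torch.Tensor,
            actions: torch.Tensor) -> torch.Tensor:
    """Standard behavior cloning regression loss on expert (state, action) pairs."""
    return nn.functional.mse_loss(policy(states), actions)
```

With such a construction, a bounded state perturbation ||delta|| <= epsilon changes the selected action by at most epsilon, which is the robustness property the abstract refers to.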