In this paper, we propose an object-centric recovery (OCR) framework to address the challenge of out-of-distribution (OOD) situations in visuomotor policy learning. While existing behavior cloning (BC) methods rely heavily on large amounts of labeled data and fail under unfamiliar spatial conditions, OCR learns a recovery policy, consisting of an inverse policy inferred from the object keypoint manifold gradient of the original training data, without collecting additional data. This recovery policy acts as a simple add-on to any baseline visuomotor BC policy, agnostic to the specific method, and guides the system back to the training distribution to ensure task success even in OOD situations. In both simulation and real-robot experiments, we demonstrate up to a 77.7% improvement over the baseline policy under OOD conditions, and we also demonstrate OCR's ability to autonomously collect demonstrations for continual learning. We argue that this framework represents a step toward improving the robustness of visuomotor policies in real-world environments.
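Read as a mechanism, the recovery described above can be pictured as gradient ascent on a density over object keypoints estimated from the training demonstrations, paired with an inverse policy that maps the desired keypoint displacement to a robot action. The sketch below illustrates that reading only; the Gaussian KDE, the `inverse_policy` callable, and the step-size handling are assumptions introduced for illustration, not the paper's actual implementation.

```python
import numpy as np

def kde_log_density_grad(x, train_kps, bandwidth=1.0):
    """Gradient of a Gaussian-KDE log-density over training keypoints.

    x:         (D,)  current flattened object-keypoint vector
    train_kps: (N,D) keypoint vectors extracted from the training demos
    """
    diffs = train_kps - x                                  # (N, D)
    logw = -0.5 * np.sum(diffs**2, axis=1) / bandwidth**2  # log kernel weights
    w = np.exp(logw - logw.max())                          # stabilized weights
    w /= w.sum()
    # grad log p(x) for an isotropic Gaussian KDE points toward high density
    return (w[:, None] * diffs).sum(axis=0) / bandwidth**2

def recovery_action(x, train_kps, inverse_policy, step=0.05):
    """One recovery step: move the keypoints back toward the training
    distribution, then convert that displacement into a robot action via
    the learned inverse policy (hypothetical interface)."""
    g = kde_log_density_grad(x, train_kps)
    delta_kp = step * g / (np.linalg.norm(g) + 1e-12)      # unit step along gradient
    return inverse_policy(x, x + delta_kp)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_kps = rng.normal(size=(500, 6))   # e.g. 3 keypoints in 2D, in-distribution
    x_ood = np.full(6, 3.0)                 # a keypoint state far from training data
    print(kde_log_density_grad(x_ood, train_kps))  # points back toward the manifold
```

In this toy setup, the gradient at an OOD keypoint configuration points back toward the training keypoint cloud, so repeatedly applying the recovery action steers the scene into states where the baseline BC policy is reliable again.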