Unlike humans, who naturally adapt their actions to environmental changes during routine tasks, autonomous robots often overlook subtle but significant scene changes, causing their planned actions to fail. Existing replanning methods react only after a failure has occurred, which makes recovery inefficient or even impossible; this motivates proactive replanning. This study presents a proactive replanning framework that detects and corrects potential failures at subtask boundaries by comparing the current scene graph, generated from RGB-D observations, with a reference graph extracted from successful demonstrations. When the current scene deviates from the reference trajectory, a lightweight inference module diagnoses the mismatch and adjusts the plan accordingly. Experiments in the AI2-THOR simulator show that the proposed method substantially improves task success rates and robustness by detecting semantic and spatial mismatches before execution failures occur.
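The core check described above, comparing a current scene graph against a reference graph at a subtask boundary, can be sketched as set operations over relation triples. This is a minimal illustrative sketch: the triple representation, the object and relation names, and the `needs_replanning` helper are assumptions for illustration, not the paper's actual data structures or interface.

```python
# Hypothetical sketch of scene-graph mismatch detection at a subtask
# boundary. Each graph is modeled as a set of (subject, relation, object)
# triples; the real framework's graph format may differ.

def graph_diff(reference, current):
    """Return relations expected by the reference but absent from the
    current scene, and unexpected relations present only in the current
    scene."""
    missing = reference - current      # e.g., an object was moved or removed
    unexpected = current - reference   # e.g., a new obstacle appeared
    return missing, unexpected

def needs_replanning(reference, current):
    """Proactive check run before executing the next subtask."""
    missing, unexpected = graph_diff(reference, current)
    return bool(missing or unexpected), missing, unexpected

# Reference graph extracted from a successful demonstration (illustrative).
reference = {("mug", "on", "table"), ("knife", "in", "drawer")}
# Current scene graph built from RGB-D observations (illustrative).
current = {("mug", "on", "counter"), ("knife", "in", "drawer")}

mismatch, missing, unexpected = needs_replanning(reference, current)
# mismatch is True here: the mug is no longer on the table.
```

A detected mismatch would then be handed to the inference module, which diagnoses the discrepancy (here, a spatial one) and adjusts the remaining plan before the subtask is executed.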