This paper presents a novel application that leverages model validation to achieve real-time multi-step planning and obstacle avoidance in a real-world autonomous robot. We develop a compact, custom model validation algorithm that generates plans in the field, inspired by the "core" knowledge and attention found in biological agents. Planning is performed in real time, on low-power devices, without pre-computed data. The algorithm relies on linking ad hoc control systems that are generated to counteract local environmental disturbances preventing the autonomous agent from maintaining its preferred behavior (or resting state). We also introduce a novel discretization technique for 2D LiDAR data that is sensitive to small changes in the local environment. We apply multi-step planning to dead-end and playground scenarios using model validation via forward depth-first search. Empirical results, together with informal demonstrations of two fundamental properties of the approach, show that model validation can generate efficient multi-step plans, improving on the performance of reactive agents that can plan only a single step. The approach also serves as an educational case study for developing safe, reliable, and explainable plans in the context of autonomous vehicles.
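To make the planning idea concrete, the following is a minimal Python sketch of multi-step planning by forward depth-first search over candidate action sequences, where each simulated step is kept only if it passes a validation check against a forward model. It is not the authors' implementation: the grid world, the `simulate`, `valid`, and `plan_dfs` names, the four-action set, and the depth bound are all illustrative assumptions standing in for the paper's LiDAR-based representation and control-system linking.

```python
# Minimal sketch (illustrative, not the authors' implementation): plan several
# steps ahead by depth-first search, validating each predicted state against a
# simple forward model before extending the plan.
from typing import List, Optional, Set, Tuple

Cell = Tuple[int, int]
ACTIONS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}  # assumed action set


def simulate(state: Cell, action: str) -> Cell:
    """Forward model: predict the next cell reached by an action."""
    dx, dy = ACTIONS[action]
    return (state[0] + dx, state[1] + dy)


def valid(state: Cell, obstacles: Set[Cell]) -> bool:
    """Model validation step: reject predicted states that collide with obstacles."""
    return state not in obstacles


def plan_dfs(state: Cell, goal: Cell, obstacles: Set[Cell],
             depth: int, visited: Optional[Set[Cell]] = None) -> Optional[List[str]]:
    """Forward depth-first search over action sequences up to a depth bound."""
    if visited is None:
        visited = {state}
    if state == goal:
        return []                      # empty plan: already at the goal
    if depth == 0:
        return None                    # depth bound reached without a valid plan
    for action in ACTIONS:
        nxt = simulate(state, action)
        if nxt in visited or not valid(nxt, obstacles):
            continue                   # prune invalidated or revisited states
        tail = plan_dfs(nxt, goal, obstacles, depth - 1, visited | {nxt})
        if tail is not None:
            return [action] + tail     # prepend the validated step
    return None                        # no validated multi-step plan within the bound


if __name__ == "__main__":
    # Dead-end-like scenario: a short wall forces a plan longer than one step.
    obstacles = {(1, 0), (1, 1)}
    print(plan_dfs((0, 0), (2, 0), obstacles, depth=6))
    # -> ['N', 'N', 'E', 'E', 'S', 'S']
```

A purely reactive agent choosing one validated step at a time could stall at the wall; searching over validated sequences is what yields the detour, which is the behavior the abstract attributes to multi-step planning via model validation.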