In this paper, we propose OSCAR (Object Status Context Awareness for Recipes), a cooking-process tracking technique that uses object status recognition, with the goal of developing a cooking assistance system for visually impaired people. OSCAR supports real-time cooking step tracking by integrating recipe parsing, object state extraction, visual alignment with cooking steps, and temporal causal modeling. We evaluate OSCAR on 173 cooking videos and a real-world cooking dataset recorded in the homes of 12 visually impaired people, and find that object status recognition improves the step-prediction accuracy of a vision-language model. We also analyze how real-world factors such as implicit tasks, camera placement, and lighting affect performance. This paper contributes a context-aware cooking-process tracking pipeline, an annotated real-world non-visual cooking dataset, and design insights for future context-aware cooking assistance systems.
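To make the four pipeline stages concrete, the following is a minimal Python sketch of how such a tracker could be wired together; it is an illustration of the general structure, not OSCAR's actual implementation, and every name (`Step`, `parse_recipe`, `extract_object_states`, `track_step`, the `expected_states` field) is hypothetical. The state-extraction call is left as a stub where a vision-language model would be queried, and the temporal/causal constraint is reduced to its simplest form: steps complete in order and the tracker never moves backward.

```python
from dataclasses import dataclass

@dataclass
class Step:
    """One recipe step plus the object states expected once it is done."""
    instruction: str
    expected_states: dict[str, str]  # e.g. {"onion": "diced"} (hypothetical schema)

def parse_recipe(recipe_text: str) -> list[Step]:
    """Split a free-text recipe into steps. A real system would also
    extract per-step expected object states (e.g., with an LLM)."""
    return [Step(line.strip(), {}) for line in recipe_text.splitlines() if line.strip()]

def extract_object_states(frame) -> dict[str, str]:
    """Stub for a vision-language model call that returns the
    {object: status} pairs observed in the current camera frame."""
    raise NotImplementedError("query a vision-language model here")

def track_step(steps: list[Step], observed: dict[str, str], current_idx: int) -> int:
    """Advance the step pointer only when the observed object states match
    the current step's expected states; never move backward. This is a
    stand-in for a fuller temporal causal model over step transitions."""
    step = steps[current_idx]
    done = all(observed.get(obj) == status
               for obj, status in step.expected_states.items())
    if done and current_idx + 1 < len(steps):
        return current_idx + 1
    return current_idx
```

In a real-time loop, `extract_object_states` would run on each incoming frame and `track_step` would update the current step, which the assistance system could then announce to the user.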