This paper clarifies the distinction between control and oversight, two concepts of supervision that are frequently cited as key elements for meeting the accountability, trustworthiness, and governance requirements of AI systems, and presents a framework for their practical application. After critically reviewing research on supervision outside of AI and briefly surveying related work within AI, we characterize control as an ex ante or real-time, operational mechanism, and oversight as a policy and governance function taking an ex post perspective. We argue that control aims to prevent failures, whereas oversight focuses on detection, correction, or incentives for future prevention, and that every preventive supervision strategy therefore requires control. Building on this distinction, we present a framework that specifies the conditions under which each mechanism is feasible, its limitations, and the requirements for applying it in practice. We further propose a maturity model for AI oversight and identify where oversight mechanisms apply, where they fail, and which needs existing methods cannot meet.