This paper addresses the debate on the auditability of artificial intelligence (AI) systems, where auditability refers to the ability to independently assess a system's compliance with ethical, legal, and technical standards throughout its lifecycle. The paper examines how emerging regulatory frameworks, such as the EU AI Act, are formalizing auditability by mandating documentation, risk assessment, and governance structures. It then analyzes the principal challenges facing AI auditability: technical opacity, inconsistent documentation practices, the lack of standardized audit tools and metrics, and conflicting principles within existing responsible AI frameworks. To implement auditability at scale, the paper calls for clear guidelines, harmonized international regulations, and robust socio-technical methodologies, and it stresses that multi-stakeholder collaboration and auditor capacity building are essential to an effective AI audit ecosystem. Finally, it argues that auditability should be integrated into AI development practices and governance infrastructure to ensure that AI systems are not only functional but also ethically and legally compliant.