This paper addresses emerging risks, such as algorithmic bias and unpredictable system behavior, that arise from the integration of artificial intelligence (AI) into telecommunications infrastructure and fall outside the scope of existing cybersecurity and data protection frameworks. It presents a precise definition and detailed typology of telecommunications AI incidents, arguing that they constitute a distinct risk category, beyond cybersecurity and data protection breaches, that warrants separate regulatory treatment. Using India as a case study of a jurisdiction without horizontal AI legislation, the paper analyzes the country's key digital regulations. The analysis reveals that existing laws, including the Telecommunications Act, 2023, the CERT-In Rules, and the Digital Personal Data Protection Act, 2023, focus on cybersecurity and data breaches, leaving significant regulatory gaps for AI-specific operational incidents such as performance degradation and algorithmic bias. The paper also examines structural barriers to information disclosure and the limitations of existing AI incident repositories. Based on these findings, it proposes targeted policy recommendations centered on integrating AI incident reporting into India's existing telecommunications governance: mandating the reporting of high-risk AI failures, designating existing government agencies as nodal agencies for incident data management, and developing a standardized reporting framework. These recommendations offer a practical and replicable blueprint for other countries seeking to manage AI risks within sector-specific frameworks, increasing regulatory clarity and strengthening long-term resilience.