This paper argues that responsibly deploying artificial intelligence (AI)/machine learning (ML) systems in high-stakes settings requires not only proof of system reliability but also continual post-deployment monitoring to quickly detect and address unsafe behavior. Methods for nonparametric sequential testing, especially conformal test martingales (CTMs) and anytime-valid inference, offer promising tools for this monitoring task. However, existing approaches are restricted to a narrow set of hypotheses or "alarm criteria" (e.g., detecting data shifts that violate certain exchangeability or IID assumptions), do not allow online adaptation in response to shifts, or cannot diagnose the cause of degradation or alarms. In this paper, we propose weighted generalizations of conformal test martingales (WCTMs), which lay a theoretical foundation for online monitoring of unexpected changes in the data distribution while controlling false alarms. For practical applications, we propose specific WCTM algorithms that adapt online to mild covariate shifts (in the marginal input distribution), quickly detect harmful shifts, and diagnose those harmful shifts as either concept shifts (in the conditional label distribution) or extreme (out-of-support) covariate shifts that are difficult to adapt to. On real-world datasets, the proposed methods demonstrate improved performance over state-of-the-art baselines.
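For intuition about the monitoring setup the abstract describes, the following is a minimal sketch of a plain (unweighted) conformal test martingale, not the paper's WCTM algorithm: conformal p-values are computed sequentially and combined with a fixed power betting function, and an alarm is raised when the martingale's wealth exceeds a threshold (by Ville's inequality, a threshold of 1/alpha controls the false-alarm probability under exchangeability). All function names, the raw-value nonconformity score, and parameter choices (epsilon, threshold) here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def conformal_p_value(scores, rng):
    """Smoothed conformal p-value of the latest score against all scores so far.
    scores[-1] is the new observation's nonconformity score."""
    s_new = scores[-1]
    greater = np.sum(scores > s_new)
    ties = np.sum(scores == s_new)  # includes the new point itself
    return (greater + rng.uniform() * ties) / len(scores)

def ctm_monitor(stream, alarm_threshold=100.0, epsilon=0.5, seed=0):
    """Plain conformal test martingale with betting function f(p) = epsilon * p**(epsilon - 1).
    Wealth is a nonnegative martingale under exchangeability and tends to grow after a shift."""
    rng = np.random.default_rng(seed)
    scores, wealth, path = [], 1.0, []
    for i, x in enumerate(stream):
        scores.append(x)  # here the nonconformity score is simply the raw observed value
        if i == 0:
            path.append(wealth)
            continue
        p = max(conformal_p_value(np.asarray(scores), rng), 1e-12)  # guard against p == 0
        wealth *= epsilon * p ** (epsilon - 1.0)  # bet against small p-values
        path.append(wealth)
        if wealth >= alarm_threshold:  # e.g., 1/alpha for a level-alpha anytime-valid test
            return i, path  # alarm time and martingale path
    return None, path

# Toy example: exchangeable data followed by a mean shift in the stream
rng = np.random.default_rng(1)
stream = np.concatenate([rng.normal(0, 1, 500), rng.normal(3, 1, 200)])
alarm_time, path = ctm_monitor(stream)
print("alarm raised at step:", alarm_time)
```

The paper's WCTM construction generalizes this idea with weighting so that mild covariate shifts can be adapted to online rather than triggering alarms; that adaptation and the shift-diagnosis step are not reflected in this simple sketch.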