In this paper, we present a method for quantifying the uncertainty of AI-based systems whose performance is itself uncertain, motivated by the growing trend of integrating AI-based subsystems into automated software pipelines. Although uncertainty is well recognized in existing risk analysis, no study has attempted to estimate the uncertainty of AI-augmented systems while accounting for error propagation through the pipeline. This study provides a formal foundation for capturing uncertainty propagation, develops a simulator to quantify that uncertainty, and evaluates the simulation of error propagation through a case study. We also discuss the generalizability and limitations of the approach and make recommendations for evaluation policies for AI systems. Future work includes relaxing the remaining assumptions and experimenting with real systems.
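As a minimal illustration of the kind of error propagation the simulator targets (this sketch is not the paper's method; the function name, accuracy values, and the independence assumption between stages are all hypothetical), the end-to-end success rate of a pipeline of AI components with uncertain accuracies can be estimated by Monte Carlo sampling:

```python
import random

def simulate_pipeline(stage_accuracies, n_trials=10000, seed=0):
    """Monte Carlo estimate of the end-to-end success rate of a
    pipeline of AI components with independent per-stage error rates.
    Hypothetical sketch: stage independence is an assumption."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_trials):
        # The pipeline succeeds only if every stage succeeds:
        # an upstream error propagates and corrupts the final output.
        if all(rng.random() < acc for acc in stage_accuracies):
            successes += 1
    return successes / n_trials

# Two stages at 95% and 90% accuracy compose to roughly
# 0.95 * 0.90 = 0.855 end-to-end under independence.
rate = simulate_pipeline([0.95, 0.90])
```

Even this toy model shows why per-component accuracy overstates pipeline-level reliability: errors compound multiplicatively across stages.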