
Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Risks of ignoring uncertainty propagation in AI-augmented security pipelines

Created by
  • Haebom

Author

Emanuele Mezzi, Aurora Papotti, Fabio Massacci, Katja Tuma

Outline

In this paper, we present a method to quantify the uncertainty of AI-based systems whose performance is itself uncertain, motivated by the growing trend of integrating AI-based subsystems into automated software development pipelines. Although uncertainty is a well-known concern in risk analysis, no prior study has attempted to estimate the uncertainty of AI-augmented systems while accounting for error propagation through the pipeline. This study provides a formal foundation for capturing uncertainty propagation, develops a simulator to quantify it, and evaluates the simulated error propagation through a case study. We also discuss the generalizability and limitations of the approach and make recommendations for AI system evaluation policies. Future work includes extending the approach by relaxing the remaining assumptions and experimenting with real systems.
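To make the idea of error propagation concrete, here is a minimal Monte Carlo sketch, not the authors' simulator. It assumes a hypothetical two-stage security pipeline (a `detector` stage followed by a `triager` stage) whose per-stage recall is uncertain; each recall is modeled with a Beta posterior fitted to invented validation counts, and the sampled recalls are propagated serially to get a distribution over end-to-end recall. All names and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical validation counts per AI stage (true positives / false negatives).
# These figures are invented for illustration, not taken from the paper.
stage_stats = [
    {"name": "detector", "tp": 180, "fn": 20},  # stage 1: flags vulnerable code
    {"name": "triager",  "tp": 85,  "fn": 15},  # stage 2: confirms flagged findings
]

def simulate_pipeline(n_draws=10_000, n_vulns=1_000):
    """Monte Carlo estimate of end-to-end recall for a serial AI pipeline.

    Each stage's recall is uncertain: it is sampled from a Beta posterior
    (uniform prior) fitted to that stage's validation counts. A vulnerability
    survives to the end only if every stage catches it, so per-stage
    uncertainty compounds through the pipeline.
    """
    end_to_end = np.empty(n_draws)
    for i in range(n_draws):
        caught = n_vulns
        for s in stage_stats:
            # Draw this stage's recall from Beta(tp + 1, fn + 1).
            recall = rng.beta(s["tp"] + 1, s["fn"] + 1)
            # Each still-caught vulnerability passes the stage independently.
            caught = rng.binomial(caught, recall)
        end_to_end[i] = caught / n_vulns
    return end_to_end

rates = simulate_pipeline()
lo, hi = np.percentile(rates, [2.5, 97.5])
print(f"end-to-end recall: mean={rates.mean():.3f}, 95% interval=[{lo:.3f}, {hi:.3f}]")
```

Even with each stage looking strong in isolation (~90% and ~85% mean recall here), the sampled end-to-end recall centers near their product, with a wider interval than either stage alone, which is the compounding effect the paper's formal treatment aims to capture.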

Takeaways, Limitations

Takeaways:
Presents a novel methodology for quantitatively analyzing uncertainty propagation in AI-based systems.
Offers a new perspective on the safety assessment of AI-augmented software development pipelines.
Provides recommendations for establishing AI system evaluation policies.
Limitations:
Only a single case study has been conducted; further experiments and applications to a wider range of systems are needed.
The approach still relies on several assumptions, which should be relaxed in future studies.
Experiments on real systems have not yet been performed.