
Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Single- to multi-fidelity history-dependent learning with uncertainty quantification and disentanglement: application to data-driven constitutive modeling

Created by
  • Haebom

Author

Jiaxiang Yi, Bernardo P. Ferreira, Miguel A. Bessa

Outline

This paper extends data-driven learning to history-dependent, multi-fidelity data while quantifying epistemic uncertainty and disentangling it from data noise (aleatoric uncertainty). The method has a hierarchical structure that accommodates a range of learning scenarios, from training a simple single-fidelity deterministic neural network to the proposed multi-fidelity Bayesian recurrent neural network with variance estimation. Its versatility and generality are demonstrated on several data-driven constitutive modeling scenarios using data of different fidelities, with and without noise. The method accurately predicts responses, quantifies model error, and recovers the noise distribution when one is present. This opens up practical applications across science and engineering, including the challenging setting of design and analysis under uncertainty.
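The core idea of separating epistemic from aleatoric uncertainty can be illustrated with a minimal sketch. The paper's actual model is a Bayesian recurrent neural network; the toy below substitutes a bootstrap ensemble of linear models on windowed strain histories (all data and names are hypothetical), where disagreement between ensemble members stands in for epistemic uncertainty and the average residual variance stands in for aleatoric noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic history-dependent data: the response depends on the current
# strain and the previous step (a toy stand-in for path dependence),
# corrupted by aleatoric noise with std = 0.1.
T, window = 400, 3
x = np.cumsum(rng.normal(0, 0.05, T))          # strain path (random walk)
y_clean = np.sin(x) + 0.3 * np.roll(x, 1)      # history-dependent response
y = y_clean + rng.normal(0, 0.1, T)            # add measurement noise

# Windowed features so each sample sees a short strain history.
X = np.stack([x[i - window:i] for i in range(window, T)])
Y = y[window:]

# Bootstrap ensemble: variance across member predictions approximates
# epistemic uncertainty; mean residual variance approximates the
# aleatoric (noise) variance.
n_models = 20
preds, noise_vars = [], []
for _ in range(n_models):
    idx = rng.integers(0, len(X), len(X))      # resample with replacement
    A = np.hstack([X[idx], np.ones((len(idx), 1))])
    coef, *_ = np.linalg.lstsq(A, Y[idx], rcond=None)
    A_full = np.hstack([X, np.ones((len(X), 1))])
    p = A_full @ coef
    preds.append(p)
    noise_vars.append(np.var(Y - p))

preds = np.array(preds)
mean_pred = preds.mean(axis=0)
epistemic_var = preds.var(axis=0)              # spread across members
aleatoric_var = np.mean(noise_vars)            # average residual variance

print(f"estimated noise std: {np.sqrt(aleatoric_var):.3f}")
print(f"mean epistemic std:  {np.sqrt(epistemic_var).mean():.3f}")
```

With abundant data and a well-specified model, the epistemic term shrinks while the aleatoric term converges to the true noise level; the paper's variance-estimating Bayesian recurrent network achieves the same disentanglement for sequential, multi-fidelity data.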

Takeaways, Limitations

Takeaways:
Generalizes data-driven learning to history-dependent, multi-fidelity data
Separates and quantifies epistemic and aleatoric uncertainty
Flexible methodology applicable to various learning scenarios (single-/multi-fidelity, deterministic/Bayesian)
Accurate response prediction with model-error quantification and noise-distribution identification
Potential applications across scientific and engineering fields, including design and analysis under uncertainty
Limitations:
Limited analysis of practical, real-world applications of the proposed methodology
Generalization performance needs further validation on diverse datasets
Computational cost and complexity are not evaluated
Possible bias toward certain data types or problem settings