Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Enhancing material behavior discovery using embedding-oriented Physically-Guided Neural Networks with Internal Variables

Created by
  • Haebom

Author

Rubén Muñoz-Sierra, Manuel Doblaré, Jacobo Ayensa-Jiménez

Outline

This paper proposes an improved PGNNIV framework that applies reduced-order modeling techniques to address the scalability challenges of Physically Guided Neural Networks with Internal Variables (PGNNIV) when applied to high-dimensional data, such as fine-grid spatial fields or time-dependent systems. The authors propose alternative decoder architectures based on spectral decomposition, Proper Orthogonal Decomposition (POD), and pre-trained autoencoder mappings. These decoders offer different trade-offs between computational efficiency, accuracy, noise tolerance, and generalization, and significantly improve scalability. In addition, model reuse through transfer learning and fine-tuning enables efficient adaptation to new materials or configurations, substantially reducing training time while maintaining or improving performance. The effectiveness of the proposed techniques is validated on a representative case study governed by a nonlinear diffusion equation: the improved PGNNIV framework successfully identifies the underlying constitutive state equation, maintains high prediction accuracy, improves robustness to noise, mitigates overfitting, and reduces computational cost. The techniques can be adapted to different scenarios depending on data availability, resources, and modeling goals, and overcome scalability issues in all of them.
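To illustrate the idea behind one of the reduced-order decoders, here is a minimal sketch of a POD-based decoder built from a snapshot matrix via the thin SVD. All names, dimensions, and the random data are illustrative assumptions, not taken from the authors' implementation; in the paper the latent coefficients would come from the network rather than a direct projection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Snapshot matrix: each column is one high-dimensional field sample
# (e.g. a fine-grid solution of the diffusion problem). Sizes are made up.
n_dof, n_snapshots = 400, 50
snapshots = rng.standard_normal((n_dof, n_snapshots))

# POD basis from the thin SVD of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

# Truncate to r dominant modes.
r = 10
basis = U[:, :r]                      # (n_dof, r), orthonormal columns

def pod_decode(latent):
    """'Decoder': map r latent coefficients back to the full field."""
    return basis @ latent             # (n_dof,)

# Encoding a field and decoding it back gives a low-rank approximation.
field = snapshots[:, 0]
latent = basis.T @ field              # project onto the POD subspace
reconstruction = pod_decode(latent)
rel_error = np.linalg.norm(field - reconstruction) / np.linalg.norm(field)
print(rel_error)
```

The design point is that the network only has to predict `r` coefficients instead of `n_dof` nodal values; the fixed POD basis handles the lift back to the full field, which is where the scalability gain comes from.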

Takeaways, Limitations

Takeaways:
Addresses the scalability of PGNNIV models to high-dimensional data.
Presents efficient alternative decoder architectures based on spectral decomposition, POD, and pre-trained autoencoders.
Enables model reuse and reduced training time through transfer learning and fine-tuning.
Improves robustness to noise and mitigates overfitting.
Applicable to a variety of scenarios.
Limitations:
Further analysis is needed of the trade-offs between computational efficiency, accuracy, noise tolerance, and generalization performance of the proposed alternative decoder architectures.
Further research is needed on the generalizability of the results to other types of physical systems.
No clear guidelines are given for selecting the optimal alternative decoder architecture for a particular problem.