MMET: A Multi-Input and Multi-Scale Transformer for Efficient PDEs Solving
Created by
Haebom
Author
Yichen Luo, Jia Wang, Dapeng Lan, Yu Liu, Zhibo Pang
Outline
This paper proposes the Multi-Input and Multi-Scale Efficient Transformer (MMET), a novel framework that addresses the limited generalization and high computational cost of existing machine-learning-based partial differential equation (PDE) solvers. MMET feeds mesh points to the encoder and query points to the decoder, and employs a Gated Condition Embedding (GCE) layer to efficiently handle input variables or functions of varying dimensions. By shortening the input sequence through Hilbert-curve-based re-serialization and a patch embedding mechanism, the framework significantly reduces the cost of processing large-scale geometric models.
These innovations enable efficient representation of large-scale, multi-input PDE problems and support queries at multiple resolutions. Benchmark experiments across diverse physics domains show that MMET outperforms state-of-the-art (SOTA) methods in both accuracy and computational efficiency.
The study positions MMET as a robust and scalable solution for real-time PDE solving in engineering and physics-based applications, and paves the way for future research on domain-specific pre-trained large-scale models. The source code is available at https://github.com/YichenLuo-0/MMET.
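For the exact architecture, see the repository above. As a rough illustration only, the minimal PyTorch sketch below shows the two sequence-shortening ideas the summary mentions: re-serializing mesh points along a Hilbert curve so that spatially nearby points become neighbors in the sequence, and folding consecutive points into patches before they reach the transformer encoder. All names here (`hilbert_index`, `reserialize`, `PatchEmbedding`) and the specific sizes are assumptions for the sketch, not taken from the MMET code.

```python
import torch
import torch.nn as nn

def hilbert_index(x: int, y: int, order: int) -> int:
    """Map 2-D grid coordinates to their position along a Hilbert curve
    of the given order (classic xy-to-d conversion)."""
    side = 1 << order
    d = 0
    s = side // 2
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/reflect the quadrant so the curve remains continuous.
        if ry == 0:
            if rx == 1:
                x = side - 1 - x
                y = side - 1 - y
            x, y = y, x
        s //= 2
    return d

def reserialize(points: torch.Tensor, order: int = 8) -> torch.Tensor:
    """Sort unordered mesh points (N, 2) in [0, 1]^2 by Hilbert-curve index,
    so spatially close points end up adjacent in the sequence."""
    side = 1 << order
    grid = (points.clamp(0, 1) * (side - 1)).long()
    keys = torch.tensor([hilbert_index(int(px), int(py), order) for px, py in grid])
    return points[keys.argsort()]

class PatchEmbedding(nn.Module):
    """Fold `patch_size` consecutive (Hilbert-ordered) points into one token,
    cutting the encoder sequence length by a factor of `patch_size`."""
    def __init__(self, in_dim: int, d_model: int, patch_size: int):
        super().__init__()
        self.patch_size = patch_size
        self.proj = nn.Linear(in_dim * patch_size, d_model)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:  # seq: (N, in_dim)
        n = seq.shape[0] // self.patch_size * self.patch_size
        patches = seq[:n].reshape(-1, self.patch_size * seq.shape[1])
        return self.proj(patches)  # (N / patch_size, d_model)

# Usage: 4,096 mesh points become 4,096 / 16 = 256 encoder tokens.
mesh = torch.rand(4096, 2)
tokens = PatchEmbedding(in_dim=2, d_model=128, patch_size=16)(reserialize(mesh))
print(tokens.shape)  # torch.Size([256, 128])
```

The design intuition, as the summary describes it: because the Hilbert ordering preserves spatial locality, each patch groups physically nearby mesh points, so shortening the sequence this way loses far less geometric structure than patching an arbitrary point ordering would.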