Daily Arxiv

This page organizes papers related to artificial intelligence published around the world.
It is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; when sharing, simply cite the source.

Learning Inter-Atomic Potentials without Explicit Equivariance

Created by
  • Haebom

Authors

Ahmed A. Elhag, Arun Raja, Alex Morehead, Samuel M. Blau, Garrett M. Morris, Michael M. Bronstein

Outline

This paper addresses the development of accurate and scalable machine-learned interatomic potentials (MLIPs), which are essential for molecular simulation. Unlike existing models that explicitly enforce rotation and translation symmetry in their architecture, this study proposes TransIP, a novel training paradigm that induces SO(3)-equivariance in non-equivariant Transformer-based models by optimizing their representations in the embedding space. Trained on the Open Molecules (OMol25) dataset, TransIP effectively learns the symmetry in latent space and outperforms data-augmentation-based models by 40% to 60%.
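The summary describes the core idea only at a high level: symmetry is encouraged through the training objective on the model's internal representations rather than built into the architecture. The paper's exact objective is not given here, so the following is a minimal sketch of one plausible form of such a latent-symmetry loss; the `model.embed` method, the tensor shapes, and the MSE form are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def random_rotations(n: int) -> torch.Tensor:
    """Sample n random 3x3 rotation matrices from SO(3) via QR decomposition."""
    Q, _ = torch.linalg.qr(torch.randn(n, 3, 3))
    det = torch.linalg.det(Q)
    Q[det < 0, :, 0] *= -1  # negating one column flips the determinant's sign
    return Q

def latent_symmetry_loss(model, positions: torch.Tensor,
                         atom_types: torch.Tensor) -> torch.Tensor:
    """Penalize changes in the latent embedding under random rigid rotations.

    positions:  (B, N, 3) atomic coordinates
    atom_types: (B, N)    atomic numbers
    `model.embed` is a hypothetical method returning per-molecule embeddings
    of shape (B, d); the paper's actual objective may differ.
    """
    R = random_rotations(positions.shape[0])              # (B, 3, 3)
    rotated = torch.einsum('bij,bnj->bni', R, positions)  # rotate every atom
    h = model.embed(positions, atom_types)
    h_rot = model.embed(rotated, atom_types)
    # An invariant scalar output (energy) built on rotation-stable embeddings
    # yields equivariant forces via autograd, without architectural constraints.
    return torch.mean((h - h_rot) ** 2)
```

In practice a term like this would be added to the usual energy and force regression losses; the key design point, per the summary, is that symmetry is imposed in the embedding space during training rather than hard-coded into the network.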

Takeaways, Limitations

Takeaways:
Presents a novel MLIP training method that learns symmetry without explicit architectural constraints.
Demonstrates that Transformer-based models are applicable as MLIPs.
Achieves performance gains more efficiently than data augmentation methods.
Suggests that learned equivariance can be an effective alternative to built-in equivariance for MLIP models.
Limitations:
Specific limitations are not explicitly mentioned in the abstract; the full text of the paper should be consulted.