Daily Arxiv

This page curates AI-related papers published worldwide.
All content here is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

EXAONE Path 2.0: Pathology Foundation Model with End-to-End Supervision

Created by
  • Haebom

Author

Myeongjang Pyeon, Janghyeon Lee, Minsoo Lee, Juseung Yun, Hwanil Choi, Jonghyun Kim, Jiwon Kim, Yi Hu, Jongseong Jang, Soonyoung Lee

Outline

This paper proposes EXAONE Path 2.0, a pathology foundation model that addresses the challenges of processing gigapixel-scale whole-slide images (WSIs) in digital pathology. Existing patch-based self-supervised learning (SSL) and multiple-instance learning (MIL) methods rely on generic image augmentations over small patch regions, overlook important domain-specific features, and suffer from low data efficiency. In contrast, EXAONE Path 2.0 learns patch-level representations under direct slide-level supervision. Trained on only 37,000 WSIs, it achieves state-of-the-art performance on 10 biomarker prediction tasks, demonstrating superior data efficiency.
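To illustrate the core idea described above, here is a minimal sketch (not the authors' implementation) of training patch-level features end-to-end under a slide-level label, rather than with patch-wise self-supervision. The encoder architecture, the mean-pooling aggregator, and the toy data are all assumptions for illustration only.

```python
import torch
import torch.nn as nn

class PatchEncoder(nn.Module):
    """Tiny stand-in for a patch-level feature extractor (assumption)."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, patches):          # (N, 3, H, W) -> (N, dim)
        return self.net(patches)

class SlideModel(nn.Module):
    """Encodes every patch, pools to one slide embedding, predicts a slide-level label."""
    def __init__(self, dim=128, num_classes=2):
        super().__init__()
        self.encoder = PatchEncoder(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, patches):          # patches: (N, 3, H, W) from one WSI
        feats = self.encoder(patches)    # patch-level representations
        slide_feat = feats.mean(dim=0)   # simple aggregation (assumption)
        return self.head(slide_feat)     # slide-level prediction

# Toy training step: the gradient from the slide-level loss flows back
# into the patch encoder, which is the "end-to-end supervision" idea.
model = SlideModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
patches = torch.randn(16, 3, 224, 224)   # 16 synthetic patches from one slide
label = torch.tensor(1)                   # toy slide-level biomarker label

logits = model(patches)
loss = nn.functional.cross_entropy(logits.unsqueeze(0), label.unsqueeze(0))
loss.backward()
optimizer.step()
```

The key contrast with patch-based SSL is that the supervisory signal here comes from the slide label, so the patch encoder is shaped directly by the clinically relevant target rather than by generic augmentation-based objectives.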

Takeaways, Limitations

Takeaways:
Direct slide-level supervision overcomes the limitations of conventional patch-based self-supervised learning and significantly improves data efficiency.
By achieving state-of-the-art performance on 10 biomarker prediction tasks with limited data (37,000 WSIs), the model opens new possibilities for pathology image analysis.
EXAONE Path 2.0 is shown to be a powerful pathology foundation model that can be applied to a variety of biomarker prediction tasks.
Limitations:
Performance on tasks beyond the 10 biomarker prediction tasks presented in the paper has not been verified.
Generalization may be limited by the characteristics of the training dataset; additional experiments on diverse datasets are required.
Further research is needed on the interpretability of the model.