Daily Arxiv

This page organizes papers related to artificial intelligence published around the world.
This page is summarized using Google Gemini and is operated on a non-profit basis.
The copyright of each paper belongs to its authors and their institutions; when sharing, please cite the source.

Revisiting semi-supervised learning in the era of foundation models

Created by
  • Haebom

Author

Ping Zhang, Zheda Mai, Quang-Huy Nguyen, Wei-Lun Chao

Outline

This paper revisits semi-supervised learning (SSL) in the context of Vision Foundation Models (VFMs). To test whether SSL still adds value when abundant unlabeled data accompanies limited labeled data, the authors build new benchmark datasets and evaluate several SSL techniques. Notably, they find that Parameter-Efficient Fine-Tuning (PEFT) using labeled data alone often matches SSL performance. Motivated by this, they revisit self-training, a simple SSL technique, and use PEFT-tuned models to generate pseudo-labels for the unlabeled data. To counter noisy pseudo-labels, they ensemble multiple PEFT methods and VFM backbones to produce more robust pseudo-labels, and demonstrate the effectiveness of this approach.
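The pseudo-label ensembling idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: `ensemble_pseudo_labels` and the confidence threshold are assumptions for demonstration, standing in for predictions from multiple PEFT-tuned VFMs.

```python
import numpy as np

def ensemble_pseudo_labels(prob_list, threshold=0.7):
    """Average class probabilities from several PEFT-tuned models and
    keep only the confident pseudo-labels; the rest are discarded as noisy."""
    avg = np.mean(prob_list, axis=0)      # (n_samples, n_classes)
    labels = avg.argmax(axis=1)           # ensemble pseudo-label per sample
    confidence = avg.max(axis=1)          # ensemble confidence per sample
    mask = confidence >= threshold        # True where the label is kept
    return labels, mask

# Toy example: two "models" scoring 3 unlabeled samples over 2 classes.
m1 = np.array([[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]])
m2 = np.array([[0.8, 0.2], [0.4, 0.6], [0.1, 0.9]])
labels, mask = ensemble_pseudo_labels([m1, m2], threshold=0.7)
# Sample 2, where the models disagree, falls below the threshold and is dropped.
```

Averaging suppresses labels on which the ensemble members disagree, which is the mechanism behind the more robust pseudo-labels described above.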

Takeaways, Limitations

Takeaways:
In VFM-based SSL, PEFT with labeled data alone can match the performance of SSL methods.
Re-examining self-training yields a simple yet powerful SSL approach.
The noise problem of pseudo-labels is mitigated with ensemble techniques.
Provides practical insights into VFM-based SSL and presents a scalable SSL method.
Limitations:
Further research is needed to determine the generalizability of the proposed method.
Extensive evaluation of various VFM architectures and SSL algorithms is needed.
The ensemble techniques may increase computational cost and complexity.