Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Nested Graph Pseudo-Label Refinement for Noisy Label Domain Adaptation Learning

Created by
  • Haebom

Author

Yingxu Wang, Mengzhu Wang, Zhichao Huang, Suyu Liu, Nan Yin

Outline

This paper addresses Graph Domain Adaptation (GDA) when the source graph labels are noisy. Unlike existing GDA methods, which assume clean source labels, the authors propose a novel framework, Nested Graph Pseudo-Label Refinement (NeGPR). NeGPR first pretrains dual branches, one semantic and one topological, that enforce neighborhood consistency in the feature space to mitigate the effect of noise. A nested refinement mechanism then lets each branch select high-confidence target samples to guide adaptation in the other branch, while a noise-aware regularization strategy mitigates overfitting and the negative impact of pseudo-label noise from the source domain. Experiments show that NeGPR outperforms state-of-the-art methods under severe label noise, achieving up to a 12.7% accuracy improvement.
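The core of the nested refinement step, selecting high-confidence target pseudo-labels from one branch to guide the other, can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the `select_confident` helper and the 0.9 confidence threshold are invented for the example.

```python
import numpy as np

def softmax(logits):
    # Numerically stable row-wise softmax.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def select_confident(logits, threshold=0.9):
    """Pick target samples whose top-class probability exceeds `threshold`.

    Returns (indices, pseudo_labels) for the selected samples; the rest
    stay unlabeled until a later refinement round.
    """
    probs = softmax(logits)
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    idx = np.where(conf >= threshold)[0]
    return idx, labels[idx]

# Cross-branch guidance: confident picks from the semantic branch would
# supervise the topological branch (and vice versa).
semantic_logits = np.array([[4.0, 0.1],   # confident -> class 0
                            [0.2, 0.3],   # uncertain -> skipped
                            [0.0, 5.0]])  # confident -> class 1
idx, pseudo = select_confident(semantic_logits, threshold=0.9)
```

Restricting supervision to high-confidence samples is what keeps source-label noise from propagating into the target domain during adaptation.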

Takeaways, Limitations

Takeaways:
  • Presents an effective solution to GDA when the source graph labels are noisy.
  • Nested refinement and noise-aware regularization yield robust domain adaptation under label noise.
  • Achieves state-of-the-art performance on multiple benchmark datasets (up to 12.7% accuracy improvement).
Limitations:
  • The computational complexity of the proposed method is not analyzed.
  • Generalization to other types of label noise needs further study.
  • Applicability and limitations in real-world settings require further verification.