Daily Arxiv

This page collects papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
The copyright of each paper belongs to its authors and their institutions; when sharing, please cite the source.

Prompt Optimization Meets Subspace Representation Learning for Few-shot Out-of-Distribution Detection

Created by
  • Haebom

Author

Faizul Rakib Sayem, Shahana Ibrahim

Outline

This paper proposes a novel out-of-distribution (OOD) detection framework that leverages a large-scale vision-language model (VLM) to enhance the reliability of AI systems in open-world environments. To overcome the limitation of existing prompt-learning-based OOD detection methods, which rely solely on softmax probabilities, the authors build on the context optimization (CoOp) framework and exploit the discriminative power of feature embeddings: in-distribution (ID) features are projected onto the subspace spanned by the learned prompt vectors, while ID-irrelevant features are pushed into the orthogonal null space, thereby strengthening ID-OOD separation. Furthermore, an end-to-end training criterion is designed that delivers both robust OOD detection performance and high ID classification accuracy. The effectiveness of the proposed method is demonstrated through experiments on real-world datasets.
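The subspace idea above can be illustrated with a minimal sketch (not the authors' code): assuming normalized CLIP-style image and prompt text embeddings, we build an orthonormal basis for the span of the prompt embeddings and use the norm of the component of an image feature that falls outside that subspace as an OOD score. All tensor names and shapes here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def null_space_residual_score(image_feature: torch.Tensor,
                              prompt_text_features: torch.Tensor) -> torch.Tensor:
    # image_feature: (d,) L2-normalized image embedding from the VLM.
    # prompt_text_features: (K, d) text embeddings of the K ID class prompts.
    # Orthonormal basis Q of the subspace spanned by the prompt embeddings.
    Q, _ = torch.linalg.qr(prompt_text_features.T)   # Q: (d, K)
    inside = Q @ (Q.T @ image_feature)               # component inside the prompt subspace
    residual = image_feature - inside                # component in the orthogonal null space
    return residual.norm()                           # larger residual -> more OOD-like

# Example usage with random stand-in features (d = 512 embedding dim, K = 10 ID classes).
img = F.normalize(torch.randn(512), dim=0)
txt = F.normalize(torch.randn(10, 512), dim=1)
score = null_space_residual_score(img, txt)
```

A large residual means the feature lies mostly in the null space of the prompt subspace, which by the paper's intuition suggests an OOD input; in practice such a score would likely be combined with the usual softmax-based score rather than replace it.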

Takeaways, Limitations

Takeaways:
Improves OOD detection performance by leveraging VLM feature embeddings.
Strengthens ID-OOD separation through a new CoOp-based subspace framework.
Designs a simple training criterion that can be learned end-to-end (a hedged sketch appears after this list).
Demonstrates the effectiveness of the proposed method on real-world datasets.
Limitations:
The paper does not explicitly state its limitations.
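As a companion to the end-to-end training criterion mentioned above, the following is a hedged sketch of one way such an objective could look: CoOp-style cross-entropy over the ID classes plus a penalty on the null-space residual of ID features. The weight `lam` and the exact form of the regularizer are assumptions, not the paper's specification.

```python
import torch
import torch.nn.functional as F

def training_loss(image_features, labels, prompt_text_features, logit_scale, lam=0.1):
    # image_features: (B, d) normalized ID image embeddings.
    # labels: (B,) ground-truth ID class indices.
    # prompt_text_features: (K, d) normalized text embeddings from the learned prompts.

    # CoOp-style ID classification loss.
    logits = logit_scale * image_features @ prompt_text_features.T   # (B, K)
    ce = F.cross_entropy(logits, labels)

    # Subspace alignment: penalize the part of each ID feature that falls into
    # the null space of the prompt subspace (assumed regularizer, weight lam).
    Q, _ = torch.linalg.qr(prompt_text_features.T)                   # (d, K) orthonormal basis
    residual = image_features - image_features @ Q @ Q.T             # (B, d) null-space part
    align = residual.pow(2).sum(dim=1).mean()

    return ce + lam * align
```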