Daily Arxiv

This page collects papers related to artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper remains with its authors and their institutions; when sharing, please cite the source.

The Role of Model Confidence on Bias Effects in Measured Uncertainties for Vision-Language Models

Created by
  • Haebom

Author

Xinyi Liu, Weiguang Wang, Hangfeng He

Outline

As large language models (LLMs) see wider use, accurately assessing epistemic uncertainty, which reflects a model's knowledge deficits, has become crucial. Quantifying it is difficult, however, because aleatoric uncertainty arising from multiple valid answers confounds the measurement. This study finds that mitigating the bias introduced by prompts in a visual question answering (VQA) task improves GPT-4o's uncertainty quantification. Building on the tendency of models to copy input information when their confidence is low, the authors analyze how this prompt bias affects measured epistemic and aleatoric uncertainty at various unbiased confidence levels in GPT-4o and Qwen2-VL.
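The summary does not disclose the paper's exact estimator, but a common entropy-based scheme separates the two kinds of uncertainty by sampling answers under several clarified rewordings of the same question: the answer entropy that remains within each clarification approximates epistemic uncertainty (knowledge deficit), while the extra disagreement across clarifications approximates aleatoric uncertainty (multiple valid readings). A minimal sketch of that scheme, with hypothetical sampled data, not the authors' method:

```python
from collections import Counter
import math

def entropy(answers):
    """Shannon entropy (nats) of the empirical answer distribution."""
    counts = Counter(answers)
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def decompose_uncertainty(answers_per_clarification):
    """answers_per_clarification: one list of sampled answers per
    clarified rewording of the same VQA question.

    total     = entropy of all answers pooled together
    epistemic = mean entropy within each clarification (uncertainty that
                persists once the question is unambiguous)
    aleatoric = total - epistemic (disagreement induced by the question's
                multiple valid readings)
    """
    pooled = [a for group in answers_per_clarification for a in group]
    total = entropy(pooled)
    epistemic = sum(entropy(g) for g in answers_per_clarification) / len(
        answers_per_clarification
    )
    # Pooled entropy >= mean within-group entropy (Jensen); clamp for safety.
    aleatoric = max(total - epistemic, 0.0)
    return total, epistemic, aleatoric

# Hypothetical usage: three clarified phrasings, five sampled answers each.
samples = [
    ["cat", "cat", "cat", "dog", "cat"],
    ["cat", "cat", "cat", "cat", "cat"],
    ["dog", "dog", "cat", "dog", "dog"],
]
print(decompose_uncertainty(samples))
```

Under this framing, a biased prompt that nudges the model toward a particular answer would artificially lower the within-clarification entropy, which is consistent with the underestimation of epistemic uncertainty reported below.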

Takeaways, Limitations

Takeaways:
Prompt bias mitigation improves uncertainty quantification.
Low unbiased model confidence is associated with bias-induced underestimation of epistemic uncertainty.
Low unbiased model confidence does not significantly change the direction of the bias effect on estimated aleatoric uncertainty.
Limitations:
Details of the specific methodology, datasets, and models used in the study are not disclosed.
The study only suggests the possibility of more advanced uncertainty-quantification techniques without providing a concrete direction.