
Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Exploiting Primacy Effect To Improve Large Language Models

Created by
  • Haebom

Authors

Bianca Raimondi, Maurizio Gabbrielli

Outline

This paper focuses on positional bias in large language models (LLMs), particularly the primacy effect in multiple-choice question answering (MCQA). The authors find that the primacy effect is amplified when LLMs are exposed to human-like patterns during fine-tuning, and they exploit this bias by reordering the answer options according to their semantic similarity to the question. Experimental results show that this approach significantly improves MCQA performance, offering a new perspective on bias as both a problem and an opportunity.
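The reordering idea can be sketched in a few lines: embed the question and each option, then sort the options by descending similarity so the most question-like option lands in the first position, where the primacy effect favors it. The sketch below is a minimal illustration under assumed choices (a sentence-transformers encoder and cosine similarity); the paper's actual similarity metric and pipeline may differ.

```python
# Illustrative sketch: reorder MCQA options by semantic similarity to the
# question. The embedding model and cosine-similarity metric are assumptions,
# not necessarily the paper's exact setup.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works

def reorder_options(question: str, options: list[str]) -> list[str]:
    """Return options sorted by descending cosine similarity to the question."""
    q_emb = model.encode(question, convert_to_tensor=True)
    opt_embs = model.encode(options, convert_to_tensor=True)
    sims = util.cos_sim(q_emb, opt_embs)[0]  # shape: (len(options),)
    ranked = sorted(zip(options, sims.tolist()), key=lambda x: x[1], reverse=True)
    return [opt for opt, _ in ranked]

question = "Which planet is known as the Red Planet?"
options = ["Venus", "Mars", "Jupiter", "Saturn"]
print(reorder_options(question, options))  # most question-similar option first
```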

Takeaways, Limitations

Takeaways:
Elucidates how positional bias is amplified during LLM fine-tuning
Proposes a novel method to improve MCQA performance by leveraging the primacy effect
Suggests ways to exploit bias in model design and NLP applications
Presents a dual perspective on bias as both a problem and an opportunity
Limitations:
The proposed method is specialized for MCQA, so its generalizability to other NLP tasks may be limited.
Further research is needed on the accuracy and generalizability of the semantic similarity measurement.
Other types of positional bias (e.g., the recency effect) are not considered.