This paper studies the positional bias of large language models (LLMs), focusing on the primacy effect in multiple-choice question answering (MCQA). We find that the primacy effect is amplified when LLMs are fine-tuned on human-like response patterns, and we exploit this effect by reordering answer options according to their semantic similarity to the question. Experimental results show that this approach significantly improves MCQA performance and offers a new perspective on bias as both a problem and an opportunity.
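
The reordering strategy can be illustrated with a minimal sketch: score each answer option by its semantic similarity to the question and place the most similar options first, so that the likely answer benefits from the primacy effect. The sketch below assumes a sentence-transformer encoder (`all-MiniLM-L6-v2`) and cosine similarity; the paper's exact similarity measure and reordering procedure may differ.

```python
from sentence_transformers import SentenceTransformer
import numpy as np

# Assumed encoder for illustration; any sentence embedding model could be substituted.
model = SentenceTransformer("all-MiniLM-L6-v2")

def reorder_options(question: str, options: list[str]) -> list[str]:
    """Sort answer options by cosine similarity to the question, most similar first."""
    q_emb = model.encode([question])[0]
    o_embs = model.encode(options)
    # Cosine similarity between each option embedding and the question embedding.
    sims = o_embs @ q_emb / (
        np.linalg.norm(o_embs, axis=1) * np.linalg.norm(q_emb) + 1e-12
    )
    order = np.argsort(-sims)  # descending similarity
    return [options[i] for i in order]

# Usage: the reordered options are then presented to the LLM in the MCQA prompt.
reordered = reorder_options(
    "What is the capital of France?",
    ["Berlin", "Paris", "Madrid", "Rome"],
)
```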