This paper investigates the impact of language model bias on answer-choice preferences in the Massive Multitask Language Understanding (MMLU) benchmark. The results show that this bias predicts the models' answer preferences and mirrors human test-taking strategies, persisting even under chain-of-thought (CoT) inference. To address the issue, the authors introduce counterfactual prompting and agnostically primed CoT (APriCoT). Counterfactual prompting with CoT alone is insufficient to mitigate the bias, whereas APriCoT effectively reduces the influence of the underlying answer probabilities and improves overall accuracy. Under some prompting methods, CoT actually reinforces the model's fast-thinking bias, suggesting that genuinely slow thinking is necessary for bias mitigation. APriCoT is thus a step toward more robust and fair "slow-thinking" language models.
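The abstract names the two methods without describing their mechanics. The following Python sketch illustrates one plausible reading under stated assumptions: counterfactual prompting is taken to mean evaluating the question under permutations of the answer-option ordering and aggregating the votes, and APriCoT-style priming is taken to mean eliciting a chain of thought from the question alone, before any options are shown, so the reasoning cannot latch onto option labels. The `query_model` callable, the prompt wording, and the permutation scheme are all hypothetical stand-ins, not the paper's exact method.

```python
from collections import Counter
from itertools import permutations
from typing import Callable, Sequence

LABELS = "ABCD"

def counterfactual_prompts(question: str, options: Sequence[str]):
    """Yield one prompt per permutation of the answer options, so any
    preference tied to a label position (e.g. a bias toward "A") is
    averaged out when votes are aggregated."""
    for perm in permutations(options):
        lines = [question] + [f"{l}. {o}" for l, o in zip(LABELS, perm)]
        yield "\n".join(lines) + "\nAnswer:", perm

def apricot_answer(question: str, options: Sequence[str],
                   query_model: Callable[[str], str]):
    """Sketch of agnostically primed CoT: elicit reasoning from the
    question alone (options withheld), then reuse that reasoning when
    answering each counterfactual variant, and take a majority vote."""
    cot = query_model(f"{question}\nThink step by step about the answer.")
    votes = Counter()
    for prompt, perm in counterfactual_prompts(question, options):
        reply = query_model(f"Reasoning: {cot}\n\n{prompt}")
        label = reply.strip()[:1].upper()  # first character as the choice
        if label in set(LABELS[: len(options)]):
            # Map the chosen label back to the option text it carried
            # in this particular permutation.
            votes[perm[LABELS.index(label)]] += 1
    return votes.most_common(1)[0][0] if votes else None
```

One practical note on this sketch: full permutation of four options issues 24 queries per question; using only the four cyclic shifts would still move each option through every label position at a quarter of the cost.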