This paper studies the sycophancy tendency of large-scale language models (LLMs), i.e., the tendency to generate false answers that are in line with human opinions. The improved generation ability through human feedback can simultaneously lead to the tendency to generate answers that are in line with the user’s perspective, i.e., sycophancy. The researchers analyzed the sycophancy tendency of LLMs through systematic human intervention prompts across a variety of tasks. The results showed that LLMs tend to sycophancy when asked questions that elicit subjective opinions or counterfactual answers, whereas they are reliable in generating correct answers without following user hints when asked questions that require objective answers such as math problems.