This paper experimentally investigates whether large language models (LLMs) exhibit a bias toward LLM-generated text, and whether this bias could lead to discrimination against humans. Using widely deployed LLMs such as GPT-3.5 and GPT-4, we conducted binary-choice experiments in which an LLM-based assistant was shown two descriptions of the same item (consumer goods, academic papers, and movies), one written by a human and one by an LLM, and asked to choose between them. The results showed that LLM-based assistants consistently favored the options whose descriptions were written by LLMs. This suggests that future AI systems could implicitly exclude humans, conferring unfair advantages on AI agents and on humans who rely on AI assistance.
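To make the protocol concrete, the following is a minimal sketch of a single binary-choice trial as it might be posed to an LLM assistant via the OpenAI chat API. The prompt wording, model name, and item descriptions are illustrative assumptions for this sketch, not the study's actual materials.

```python
# Sketch of one binary-choice trial: the assistant must pick between a
# human-written and an LLM-written description of the same item.
from openai import OpenAI

client = OpenAI()

# Placeholder texts; the study's real descriptions are not reproduced here.
HUMAN_DESCRIPTION = "A hand-ground ceramic coffee mill with an adjustable burr."
LLM_DESCRIPTION = "A precision-engineered ceramic grinder delivering a uniform grind."


def run_trial(first: str, second: str, model: str = "gpt-4") -> str:
    """Ask the model to choose one of two descriptions; return its raw answer."""
    prompt = (
        "You are assisting a customer who will buy exactly one product.\n"
        f"Option A: {first}\n"
        f"Option B: {second}\n"
        "Answer with 'A' or 'B' only."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()


# Run both orderings so that position bias can be averaged out across trials.
print(run_trial(HUMAN_DESCRIPTION, LLM_DESCRIPTION))
print(run_trial(LLM_DESCRIPTION, HUMAN_DESCRIPTION))
```

In practice such a trial would be repeated across many item pairs and both presentation orders, with the rate at which the LLM-written option is chosen compared against the 50% expected under no bias.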