This paper analyzes how discourse generated by large language models (LLMs) is limited in its depiction of marginalized social groups, particularly queer people. We hypothesize, and experimentally test, that LLMs exhibit problems such as harmful expressions, narrow representations, and discursive othering when depicting queer people. The results show that LLMs have significant limitations in depicting queer characters: LLM-generated depictions of queer groups tend to center on a narrow range of stereotypical topics, unlike depictions of mainstream groups.