
Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Unequal Voices: How LLMs Construct Constrained Queer Narratives

Created by
  • Haebom

Author

Atreya Ghosal, Ashim Gupta, Vivek Srikumar

Outline

This paper analyzes how discourse generated by large language models (LLMs) is constrained in its depiction of marginalized social groups, particularly queer people. The authors hypothesize, and experimentally test, that LLMs exhibit harmful expressions, narrow representation, and discursive othering when depicting queer people. The results show that LLMs have significant limitations in portraying queer characters: unlike their portrayals of majority groups, their depictions of queer groups tend to center on a narrow set of stereotypical topics.

Takeaways, Limitations

Takeaways: Demonstrates that LLMs generate biased representations of socially marginalized groups and suggests directions for improvement. Shows that LLM biases can reproduce social inequalities. Offers an important basis for understanding and addressing bias in discourse about queer people.
Limitations: The study may be limited to specific LLMs and datasets; further research across a broader range of models and data is needed. The definitions and measurement of concepts such as "harmful expression," "narrow representation," and "discursive othering" require more clarity. No concrete technical solutions for mitigating LLM bias are provided.