
Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Fairness Is Not Enough: Auditing Competence and Intersectional Bias in AI-powered Resume Screening

Created by
  • Haebom

Author

Kevin T Webster

Outline

This paper examines bias in generative-AI-based resume screening. While such systems are increasingly deployed on the assumption that they can replace biased human judgment, the authors question whether these systems can evaluate candidates at all. Across two experiments on eight major AI platforms, they find that several models exhibit complex, context-dependent racial and gender biases, disadvantaging applicants solely on the basis of demographic signals. They also find that some models that appear unbiased achieve this not by evaluating fairly but by failing to evaluate substantively, relying instead on superficial keyword matching, a phenomenon the authors term the "Illusion of Neutrality." They therefore recommend a dual-validation framework that audits both demographic bias and substantive competence to ensure AI hiring tools are both fair and effective.
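The audit logic described above can be sketched in miniature. The snippet below is a hypothetical illustration, not the paper's actual experimental setup: a toy keyword-based scorer is probed with a paired name-swap test (the demographic-bias check) and with a strong-vs-weak resume comparison (the substantive-competence check). Notably, a purely keyword-matching scorer passes the bias check while doing no real evaluation, which is exactly the "Illusion of Neutrality" the paper warns about.

```python
# Hypothetical sketch of a dual-validation audit; the scorer and resumes are toy examples.
JOB_KEYWORDS = {"python", "sql", "etl"}  # assumed job requirements for illustration

def keyword_score(resume_text: str) -> float:
    """Toy scorer that only counts keyword overlap (superficial screening)."""
    words = set(resume_text.lower().split())
    return len(words & JOB_KEYWORDS) / len(JOB_KEYWORDS)

def name_swap_gap(template: str, names: list[str]) -> float:
    """Check 1 (bias): max score gap across demographically coded names
    on an otherwise identical resume. A fair screener should yield ~0."""
    scores = [keyword_score(template.format(name=n)) for n in names]
    return max(scores) - min(scores)

def competence_gap(strong: str, weak: str) -> float:
    """Check 2 (competence): a substantive screener should clearly
    separate a strong resume from a weak one."""
    return keyword_score(strong) - keyword_score(weak)

# Toy resumes; names drawn from the style of classic audit studies.
strong = "{name} built Python ETL pipelines and optimized SQL queries"
weak = "{name} answered phones and filed paperwork"
names = ["Emily", "Lakisha", "Brad", "Jamal"]

bias = name_swap_gap(strong, names)                               # expect 0.0 here
skill = competence_gap(strong.format(name="X"), weak.format(name="X"))
```

A model must pass both checks: a zero bias gap alone (as this keyword scorer shows) can coexist with an inability to evaluate substance, so auditing fairness without auditing competence is insufficient.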

Takeaways, Limitations

Takeaways:
Provides an in-depth analysis of bias in generative-AI resume screening and introduces the concept of the "Illusion of Neutrality."
Proposes a dual-validation framework (auditing both demographic bias and substantive competence) to ensure the fairness and effectiveness of AI hiring tools.
Emphasizes that audits must assess not only bias but also a system's actual evaluation capability.
Limitations:
The analysis covers only eight AI platforms.
The generalizability of the "Illusion of Neutrality" concept requires further study.
The paper offers little discussion of how to concretely implement the proposed dual-validation framework.