Identifying and Improving Disability Bias in GPT-Based Resume Screening
arXiv (2024)
Abstract
As Generative AI rises in adoption, its use has expanded to include domains
such as hiring and recruiting. However, without examining the potential of
bias, this may negatively impact marginalized populations, including people
with disabilities. To address this important concern, we present a resume audit
study, in which we ask ChatGPT (specifically, GPT-4) to rank a resume against
the same resume enhanced with an additional leadership award, scholarship,
panel presentation, and membership that are disability related. We find that
GPT-4 exhibits prejudice towards these enhanced resumes. Further, we show that this
prejudice can be quantifiably reduced by training a custom GPT on principles
of DEI and disability justice. Our study also includes a unique qualitative
analysis of the types of direct and indirect ableism GPT-4 uses to justify its
biased decisions, and it suggests directions for additional bias mitigation work.
Additionally, since these justifications are presumably drawn from training
data containing real-world biased statements made by humans, our analysis
suggests additional avenues for understanding and addressing human bias.