Tinker, Tailor, Configure, Customize: The Articulation Work of Contextualizing an AI Fairness Checklist

Proceedings of the ACM on Human-Computer Interaction (2024)

Abstract
Many responsible AI resources, such as toolkits, playbooks, and checklists, have been developed to support AI practitioners in identifying, measuring, and mitigating potential fairness-related harms. These resources are often designed to be general purpose so that they apply to a variety of use cases, domains, and deployment contexts. However, this can lead to decontextualization, where such resources lack the relevance or specificity needed to put them to use. To understand how AI practitioners might contextualize one such resource, an AI fairness checklist, for their particular use cases, domains, and deployment contexts, we conducted a retrospective contextual inquiry with 13 AI practitioners from seven organizations. We identify how contextualizing this checklist introduces new forms of work for AI practitioners and other stakeholders and opens up new sites for negotiation and contestation of values in AI. We also identify how the contextualization process may help AI practitioners develop a shared language around AI fairness, and we surface tensions related to ownership over this process that point to larger issues of accountability in responsible AI work.