Interpretability Gone Bad: The Role of Bounded Rationality in How Practitioners Understand Machine Learning

Harmanpreet Kaur, Matthew R. Conrad, Davis Rule, Cliff Lampe, Eric Gilbert

Proceedings of the ACM on Human-Computer Interaction (2024)

Abstract
While interpretability tools are intended to help people better understand machine learning (ML), we find that they can, in fact, impair understanding. This paper presents a pre-registered, controlled experiment showing that ML practitioners (N=119) spent 5x less time on task, and were 17% less accurate about the data and model, when given access to interpretability tools. We present bounded rationality as the theoretical reason behind these findings. Bounded rationality presumes human departures from perfect rationality, and it is often effectuated by satisficing, i.e., an inclination towards "good enough" understanding. Adding interactive elements (a strategy often employed to promote deliberative thinking and engagement, and tested in our experiment) also does not help. We discuss implications for interpretability designers and researchers related to how cognitive and contextual factors can affect the effectiveness of interpretability tool use.
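The abstract reports two between-condition effects (time on task and accuracy) from a two-group controlled experiment. The paper's actual data and analysis are not reproduced here; as a purely illustrative sketch, a comparison of this shape could be run as below, where all group sizes, means, and variable names are hypothetical stand-ins.

```python
# Illustrative sketch of a two-condition comparison (not the study's data
# or analysis). Group sizes and distributions are invented for the example.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated measurements for a control group and a tool-access group.
time_control = rng.normal(loc=25.0, scale=5.0, size=60)  # minutes on task
time_tool = rng.normal(loc=5.0, scale=2.0, size=59)      # roughly 5x less time
acc_control = rng.normal(loc=0.80, scale=0.10, size=60)  # proportion correct
acc_tool = rng.normal(loc=0.66, scale=0.10, size=59)     # lower accuracy

# Welch's t-test: compares group means without assuming equal variances.
t_time, p_time = stats.ttest_ind(time_control, time_tool, equal_var=False)
t_acc, p_acc = stats.ttest_ind(acc_control, acc_tool, equal_var=False)

print(f"time ratio: {time_control.mean() / time_tool.mean():.1f}x (p={p_time:.3g})")
print(f"accuracy gap: {acc_control.mean() - acc_tool.mean():.2f} (p={p_acc:.3g})")
```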