Bias in AI Autocomplete Suggestions Leads to Attitude Shift on Societal Issues

crossref(2024)

Abstract
AI technologies such as Large Language Models (LLMs) are increasingly used to make “autocomplete” suggestions when people write text. Can these suggestions impact people’s writing and attitudes? In two preregistered experiments (N=3,024), we expose participants writing about important societal issues to biased AI-generated suggestions. The attitudes participants expressed in their writing and in a post-task survey converged towards the AI’s position. Yet, a majority of participants were unaware of the AI suggestions’ bias and their influence. Awareness of the task or of the AI’s bias, e.g. warning participants about potential bias before or after exposure to the treatment, did not mitigate the influence effect. Moreover, the AI’s influence is not fully explained by the additional information provided by the suggestions.