Social Choice for AI Alignment: Dealing with Diverse Human Feedback

Vincent Conitzer, Rachel Freedman, Jobst Heitzig, Wesley H. Holliday, Bob M. Jacobs, Nathan Lambert, Milan Mossé, Eric Pacuit, Stuart Russell, Hailey Schoelkopf, Emanuel Tewolde, William S. Zwicker

arXiv (2024)

Abstract
Foundation models such as GPT-4 are fine-tuned to avoid unsafe or otherwise problematic behavior, so that, for example, they refuse to comply with requests for help with committing crimes or with producing racist text. One approach to fine-tuning, called reinforcement learning from human feedback, learns from humans' expressed preferences over multiple outputs. Another approach is constitutional AI, in which the input from humans is a list of high-level principles. But how do we deal with potentially diverging input from humans? How can we aggregate the input into consistent data about "collective" preferences or otherwise use it to make collective choices about model behavior? In this paper, we argue that the field of social choice is well positioned to address these questions, and we discuss ways forward for this agenda, drawing on discussions in a recent workshop on Social Choice for AI Ethics and Safety held in Berkeley, CA, USA in December 2023.