Rethinking Machine Unlearning for Large Language Models
CoRR (2024)
Abstract
We explore machine unlearning (MU) in the domain of large language models
(LLMs), referred to as LLM unlearning. This initiative aims to eliminate
undesirable data influence (e.g., sensitive or illegal information) and the
associated model capabilities, while maintaining the integrity of essential
knowledge generation and not affecting causally unrelated information. We
envision LLM unlearning becoming a pivotal element in the life-cycle management
of LLMs, potentially standing as an essential foundation for developing
generative AI that is not only safe, secure, and trustworthy, but also
resource-efficient without the need for full retraining. We navigate the
unlearning landscape in LLMs across conceptual formulation, methodologies,
metrics, and applications. In particular, we highlight the often-overlooked
aspects of existing LLM unlearning research, e.g., unlearning scope, data-model
interaction, and multifaceted efficacy assessment. We also draw connections
between LLM unlearning and related areas such as model editing, influence
functions, model explanation, adversarial training, and reinforcement learning.
Furthermore, we outline an effective assessment framework for LLM unlearning
and explore its applications in copyright and privacy safeguards and
sociotechnical harm reduction.