InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning
arXiv (2024)
Abstract
Continual learning requires the model to learn multiple tasks sequentially.
In continual learning, the model should possess the ability to maintain its
performance on old tasks (stability) and the ability to adapt to new tasks
continuously (plasticity). Recently, parameter-efficient fine-tuning (PEFT),
which involves freezing a pre-trained model and injecting a small number of
learnable parameters to adapt to downstream tasks, has gained increasing
popularity in continual learning. Although existing continual learning methods
based on PEFT have demonstrated superior performance compared to those not
based on PEFT, most of them do not consider how to eliminate the interference
of the new task on the old tasks, which inhibits the model from making a good
trade-off between stability and plasticity. In this work, we propose a new PEFT
method, called interference-free low-rank adaptation (InfLoRA), for continual
learning. InfLoRA injects a small number of parameters to reparameterize the
pre-trained weights and shows that fine-tuning these injected parameters is
equivalent to fine-tuning the pre-trained weights within a subspace.
Furthermore, InfLoRA designs this subspace to eliminate the interference of the
new task on the old tasks, making a good trade-off between stability and
plasticity. Experimental results show that InfLoRA outperforms existing
state-of-the-art continual learning methods on multiple datasets.
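To make the reparameterization idea in the abstract concrete, below is a minimal sketch of a LoRA-style injected branch on a frozen linear layer, assuming PyTorch. The names (LowRankAdaptedLinear, rank) are illustrative, and the random choice of the fixed matrix B is a placeholder: InfLoRA's actual contribution is how that subspace is designed to avoid interference with old tasks, which is not reproduced here.

```python
import torch
import torch.nn as nn

class LowRankAdaptedLinear(nn.Module):
    """Sketch: frozen pre-trained weight plus a low-rank branch A @ B.

    Training only A (with B fixed) confines the weight update to the
    subspace spanned by the rows of B; this is the "fine-tuning within a
    subspace" equivalence the abstract refers to.
    """
    def __init__(self, linear: nn.Linear, rank: int = 8):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad = False  # stability: pre-trained weights stay frozen

        d_out, d_in = linear.weight.shape
        # B: fixed dimensionality-reduction matrix (random here for illustration;
        # InfLoRA designs it per task to eliminate interference with old tasks).
        self.B = nn.Parameter(torch.randn(rank, d_in) / d_in ** 0.5,
                              requires_grad=False)
        # A: the only trainable parameters injected for the new task.
        self.A = nn.Parameter(torch.zeros(d_out, rank))

    def forward(self, x):
        # Equivalent to applying the reparameterized weight W + A @ B.
        return self.linear(x) + x @ self.B.t() @ self.A.t()


# Usage: wrap a pre-trained layer and fine-tune only the injected parameters.
layer = LowRankAdaptedLinear(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(4, 768))
```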