PRILoRA: Pruned and Rank-Increasing Low-Rank Adaptation
Conference of the European Chapter of the Association for Computational Linguistics (2024)
Abstract
With the proliferation of large pre-trained language models (PLMs),
fine-tuning all model parameters becomes increasingly inefficient, particularly
when dealing with numerous downstream tasks that entail substantial training
and storage costs. Several approaches aimed at achieving parameter-efficient
fine-tuning (PEFT) have been proposed. Among them, Low-Rank Adaptation (LoRA)
stands out as an archetypal method, incorporating trainable rank decomposition
matrices into each target module. Nevertheless, LoRA does not consider the
varying importance of each layer. To address this limitation, we introduce
PRILoRA, which linearly allocates a different rank for each layer, in an
increasing manner, and performs pruning throughout the training process,
considering both the temporary magnitude of weights and the accumulated
statistics of the input to any given layer. We validate the effectiveness of
PRILoRA through extensive experiments on eight GLUE benchmarks, setting a new
state of the art.
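The two ideas in the abstract can be illustrated with a minimal sketch: a linearly increasing rank schedule across layers, a standard LoRA update (W + (alpha/r) * B @ A), and a pruning step driven by weight magnitude combined with accumulated input statistics. All names, the rank range (4 to 12), the `alpha` value, and the exact pruning criterion below are illustrative assumptions, not the paper's reported configuration.

```python
import numpy as np

def linear_rank_schedule(num_layers, r_min, r_max):
    """Allocate a linearly increasing rank per layer (r_min/r_max are assumed values)."""
    return [round(r_min + (r_max - r_min) * i / (num_layers - 1))
            for i in range(num_layers)]

class LoRALayer:
    """Minimal LoRA adapter: the frozen weight W gets the update (alpha/r) * B @ A."""
    def __init__(self, d_in, d_out, r, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.normal(scale=0.01, size=(r, d_in))  # trainable, random init
        self.B = np.zeros((d_out, r))                    # trainable, zero init
        self.scale = alpha / r

    def delta(self):
        # Low-rank update added to the frozen pre-trained weight.
        return self.scale * (self.B @ self.A)

def prune_lowest(A, input_col_stats, prune_frac=0.1):
    """Zero out the lowest-importance entries of A.

    Hypothetical criterion: |A_ij| weighted by the accumulated (e.g. running
    mean-square) statistic of input feature j, a rough stand-in for the
    magnitude-plus-input-statistics criterion described in the abstract.
    """
    importance = np.abs(A) * np.sqrt(input_col_stats)[None, :]
    k = int(importance.size * prune_frac)
    if k == 0:
        return A
    threshold = np.partition(importance.ravel(), k)[k]
    return np.where(importance < threshold, 0.0, A)

# With 12 layers and ranks 4..12, the average rank works out to 8.
ranks = linear_rank_schedule(num_layers=12, r_min=4, r_max=12)
print(ranks)  # [4, 5, 5, 6, 7, 8, 8, 9, 10, 11, 11, 12]
```

Because B is zero-initialized, the adapter's update is zero at the start of training, so fine-tuning begins from the pre-trained model's behavior; pruning would then be applied to A periodically during training.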