AdaIR: Exploiting Underlying Similarities of Image Restoration Tasks with Adapters
arXiv (2024)
Abstract
Existing image restoration approaches typically employ extensive networks
specifically trained for designated degradations. Despite being effective, such
methods inevitably entail considerable storage costs and computational
overheads due to the reliance on task-specific networks. In this work, we go
beyond this well-established framework and exploit the inherent commonalities
among image restoration tasks. The primary objective is to identify components
that are shareable across restoration tasks and augment the shared components
with modules specifically trained for individual tasks. Towards this goal, we
propose AdaIR, a novel framework that enables low storage cost and efficient
training without sacrificing performance. Specifically, a generic restoration
network is first constructed through self-supervised pre-training using
synthetic degradations. Subsequent to the pre-training phase, adapters are
trained to adapt the pre-trained network to specific degradations. AdaIR
requires solely the training of lightweight, task-specific modules, ensuring a
more efficient storage and training regimen. We have conducted extensive
experiments to validate the effectiveness of AdaIR and analyze the influence of
the pre-training strategy on discovering shareable components. Extensive
experimental results show that AdaIR achieves outstanding results on multi-task
restoration while utilizing significantly fewer parameters (1.9 MB) and less
training time (7 hours) for each restoration task. The source codes and trained
models will be released.
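The core idea, freezing a shared pre-trained network and training only small task-specific modules, can be illustrated with a minimal sketch. This is not the authors' code: the bottleneck shape, zero initialization, and NumPy stand-in for a network layer are all illustrative assumptions; it only shows how an adapter adds few trainable parameters on top of frozen shared weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_linear(d_in, d_out):
    # Random frozen weights standing in for a pre-trained layer.
    return rng.standard_normal((d_in, d_out)) * 0.02

class BottleneckAdapter:
    """Illustrative task-specific adapter: down-project -> ReLU ->
    up-project, added residually to the frozen block's output."""

    def __init__(self, dim, bottleneck):
        self.down = make_linear(dim, bottleneck)
        # Zero-init up-projection: the adapter starts as an identity map,
        # so training begins from the pre-trained network's behavior.
        self.up = np.zeros((bottleneck, dim))

    def __call__(self, x):
        h = np.maximum(x @ self.down, 0.0)  # ReLU
        return x + h @ self.up              # residual connection

    def num_params(self):
        return self.down.size + self.up.size

dim, bottleneck = 256, 16
frozen_block = make_linear(dim, dim)          # shared across all tasks
adapter = BottleneckAdapter(dim, bottleneck)  # trained per degradation task

x = rng.standard_normal((4, dim))             # a batch of feature vectors
y = adapter(x @ frozen_block)
print(y.shape)               # (4, 256)
print(adapter.num_params())  # 8192, vs. 65536 in the frozen layer
```

Only the adapter's 8,192 parameters would be updated per task, while the 65,536-parameter base layer is stored once and shared, which is the storage/training saving the abstract describes.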