Dynamic Pre-training: Towards Efficient and Scalable All-in-One Image Restoration
arXiv (2024)
Abstract
All-in-one image restoration tackles different types of degradations with a
unified model instead of having task-specific, non-generic models for each
degradation. The requirement to tackle multiple degradations using the same
model can lead to high-complexity designs with a fixed configuration that lack
the adaptability to more efficient alternatives. We propose DyNet, a dynamic
family of networks designed in an encoder-decoder style for all-in-one image
restoration tasks. Our DyNet can seamlessly switch between its bulkier and
lightweight variants, thereby offering flexibility for efficient model
deployment with a single round of training. This seamless switching is enabled
by our weights-sharing mechanism, forming the core of our architecture and
facilitating the reuse of initialized module weights. Further, to establish
robust weights initialization, we introduce a dynamic pre-training strategy
that trains variants of the proposed DyNet concurrently, thereby achieving a
50% reduction in GPU hours. To tackle the unavailability of the large-scale dataset
required in pre-training, we curate a high-quality, high-resolution image
dataset named Million-IRD having 2M image samples. We validate our DyNet for
image denoising, deraining, and dehazing in the all-in-one setting, achieving
state-of-the-art results with a 31.34% reduction in GFLOPs and a 56.75% reduction
in parameters compared to baseline models. The source code and trained models
are available at https://github.com/akshaydudhane16/DyNet.
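The weight-sharing and concurrent-variant-training ideas described above can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, not the authors' released code (see the repository above for that): a single block's parameter set is reused a variable number of times, so the same trained weights serve both a bulkier (deeper) and a lightweight (shallower) variant, and the pre-training loop samples a variant depth per step so all variants train concurrently. The module names, the `depth` argument, and the depth range are assumptions for illustration.

```python
# Minimal sketch of a weight-shared dynamic network (illustrative, not DyNet itself).
import torch
import torch.nn as nn

class SharedBlock(nn.Module):
    """One residual conv block whose weights are reused across repeated applications."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class DynamicStage(nn.Module):
    """Applies the same SharedBlock `depth` times. Changing `depth` at inference
    switches between bulkier and lighter variants without retraining, since every
    pass reuses one shared parameter set."""
    def __init__(self, channels, max_depth=4):
        super().__init__()
        self.block = SharedBlock(channels)  # single parameter set, reused
        self.max_depth = max_depth

    def forward(self, x, depth=None):
        depth = self.max_depth if depth is None else min(depth, self.max_depth)
        for _ in range(depth):
            x = self.block(x)  # weight reuse: identical parameters each pass
        return x

# Dynamic pre-training loop (hypothetical): sample a variant depth each step so
# all variants are optimized concurrently through the shared weights.
stage = DynamicStage(channels=48, max_depth=4)
opt = torch.optim.Adam(stage.parameters(), lr=2e-4)
for step in range(2):  # toy loop with random tensors standing in for image pairs
    degraded = torch.randn(1, 48, 64, 64)
    clean = torch.randn(1, 48, 64, 64)
    d = int(torch.randint(1, 5, (1,)))  # randomly pick a variant depth in [1, 4]
    loss = nn.functional.l1_loss(stage(degraded, depth=d), clean)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because gradients from every sampled depth flow into the same shared block, one round of training initializes all variants at once, which is the mechanism behind the seamless bulky-to-lightweight switching the abstract claims.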