Mutual Retinex: Combining Transformer and CNN for Image Enhancement

IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE (2024)

Abstract
Images captured in low-light or underwater environments often suffer significant degradation, which harms both visual quality and the performance of downstream tasks. While convolutional neural networks (CNNs) and Transformer architectures have made substantial progress in computer vision, few efforts harmonize them into a concise framework for enhancing such images. To this end, this study proposes to aggregate the individual strengths of self-attention (SA) and CNNs for accurate perturbation removal while preserving background content. Building on this, we put forward a Retinex-based framework, dubbed Mutual Retinex, in which a two-branch structure characterizes the specific knowledge of the reflectance and illumination components while removing perturbations. To maximize its potential, Mutual Retinex is equipped with a new mutual learning mechanism built around an elaborately designed mutual representation module (MRM). In the MRM, the complementary information between the reflectance and illumination components is encoded and used to refine each branch. Through this complementary learning via the mutual representation, the enhanced results generated by our model exhibit superior color consistency and naturalness. Extensive experiments demonstrate the clear superiority of our mutual learning based method over thirteen competitors on the low-light enhancement task and ten methods on the underwater image enhancement task. In particular, the proposed Mutual Retinex surpasses the state-of-the-art method MIRNet-v2 by 0.90 dB and 2.46 dB in PSNR on the LOL 1000 and FIVEK datasets, respectively, while using only 19.8% of its model parameters.
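The abstract rests on the classical Retinex image model, in which an observed image I decomposes into a reflectance component R and an illumination component L with I = R ⊙ L (element-wise product), and the two components then exchange complementary cues. The sketch below is a minimal, non-learned illustration of that idea: the decomposition uses a common channel-wise-maximum heuristic for illumination, and `mutual_refine` is a hypothetical stand-in for the paper's learned MRM, showing only the shape of the cross-branch exchange, not the actual module.

```python
import numpy as np

def retinex_decompose(image, eps=1e-6):
    """Toy Retinex split: illumination as the channel-wise maximum,
    reflectance as the residual so that image ≈ reflectance * illumination.
    This heuristic initialization is NOT the paper's learned decomposition."""
    illumination = image.max(axis=-1, keepdims=True)   # H x W x 1
    reflectance = image / (illumination + eps)         # H x W x 3
    return reflectance, illumination

def mutual_refine(reflectance, illumination, alpha=0.1):
    """Hypothetical mutual-refinement step: each component is nudged by a
    simple statistic of the other, mimicking the cross-branch information
    exchange the MRM performs (the real module is learned end-to-end)."""
    r_cue = reflectance.mean(axis=-1, keepdims=True)   # cue from reflectance
    l_cue = illumination                               # cue from illumination
    refined_r = reflectance * (1.0 + alpha * (l_cue - l_cue.mean()))
    refined_l = illumination * (1.0 + alpha * (r_cue - r_cue.mean()))
    return refined_r, refined_l

# Round-trip check: the decomposition reconstructs the input almost exactly.
img = np.random.rand(4, 4, 3)
R, L = retinex_decompose(img)
recon = R * L
```

Because reflectance is defined as the image divided by (illumination + eps), the product R * L reconstructs the input up to the eps regularizer, which is what makes the two-branch split lossless enough to enhance each component separately.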
Keywords
Image enhancement, mutual learning, Retinex, self-attention