Existing video restoration (VR) methods have made promising progress in improving the quality of videos degraded by adverse weather. However, these approaches can only restore videos corrupted by one specific type of degradation and ignore the diversity of degradations in the real world, which limits their applicability to realistic scenes with diverse adverse weather conditions. To address this issue, in this paper we propose a Cross-consistent Deep Unfolding Network (CDUN) that adaptively restores frames corrupted by different degradations under the guidance of degradation features. Specifically, the proposed CDUN incorporates (1) a flexible iterative optimization framework, capable of restoring frames corrupted by arbitrary degradations according to degradation features supplied in advance. To enable the framework to eliminate diverse degradations, we devise (2) a Sequence-wise Adaptive Degradation Estimator (SADE) that estimates degradation features for the corrupted video. By orchestrating these two cascaded procedures, the proposed CDUN achieves end-to-end restoration of videos in diverse-degradation scenarios. In addition, we propose a window-based inter-frame fusion strategy to exploit information from more adjacent frames. This strategy progressively stacks temporal windows over multiple iterations, effectively enlarging the temporal receptive field and enabling each frame's restoration to draw on information from distant frames. This work establishes the first explicit model for videos with diverse degradations and is among the earliest studies of video restoration in diverse-degradation scenarios. Extensive experiments show that our method achieves state-of-the-art performance.
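The abstract's window-based fusion idea can be illustrated with a toy dependency-tracking sketch (the function name and parameters below are illustrative, not from the paper): if each iteration fuses each frame's estimate with a window of its neighbors' estimates, then after k iterations with window radius r, frame i effectively depends on frames within roughly i ± k·r.

```python
def effective_receptive_field(num_frames, radius, iterations):
    """Toy sketch: track which input frames each frame's estimate depends on
    after repeatedly fusing estimates over a temporal window of given radius."""
    # Initially, each frame's estimate depends only on itself.
    deps = [{i} for i in range(num_frames)]
    for _ in range(iterations):
        new_deps = []
        for i in range(num_frames):
            # Fuse over the temporal window [i - radius, i + radius].
            window = range(max(0, i - radius), min(num_frames, i + radius + 1))
            merged = set()
            for j in window:
                merged |= deps[j]
            new_deps.append(merged)
        deps = new_deps
    return deps

# With radius 1 and 3 iterations, frame 10 draws on frames 7..13:
field = effective_receptive_field(num_frames=20, radius=1, iterations=3)
print(sorted(field[10]))  # → [7, 8, 9, 10, 11, 12, 13]
```

This is only a combinatorial illustration of the enlarged temporal receptive field; the actual fusion in CDUN operates on frame features inside the unfolding iterations.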
Keywords: Deep unfolding network; Degradation-adaptive model; Diverse adverse weather; Video restoration.
Copyright © 2025 Elsevier Ltd. All rights reserved.