CCD-Net: Color-Correction Network Based on Dual-Branch Fusion of Different Color Spaces for Image Dehazing
Automation Department, School of Information Science and Engineering, East China University of Science
and Technology, Shanghai 200237, China; y30231011@[Link]
* Correspondence: haitaozhao@[Link]
Abstract: Image dehazing is a crucial task in computer vision, aimed at restoring the clarity
of images impacted by atmospheric conditions like fog, haze, or smog, which degrade
image quality by reducing contrast, color fidelity, and detail. Recent advancements in
deep learning, particularly convolutional neural networks (CNNs), have shown significant
improvements by directly learning features from hazy images to produce clear outputs.
However, color distortion remains an issue, as many methods focus on contrast and clarity
without adequately addressing color restoration. To overcome this, we propose a Color-Correction Network (CCD-Net) based on dual-branch fusion of different color spaces for image dehazing, which simultaneously handles haze removal and color correction. The
dehazing branch utilizes an encoder–decoder structure aimed at restoring haze-affected
images. Unlike conventional methods that primarily focus on haze removal, our approach
explicitly incorporates a dedicated color-correction branch in the Lab color space, ensur-
ing both clarity enhancement and accurate color restoration. Additionally, we integrate
attention mechanisms to enhance feature extraction and introduce a novel fusion loss
function that combines loss in both RGB and Lab spaces, achieving a balance between
structural preservation and color fidelity. The experimental results demonstrate that CCD-
Net outperforms existing methods in both dehazing performance and color accuracy, with
CIEDE reduced by 40.81% on RESIDE-indoor and 45.57% on RESIDE-6K compared to the
second-best-performing model, showcasing its superior color-restoration capability.
Keywords: image dehazing; color correction; image restoration; deep learning; attention mechanism

Academic Editor: Pedro Couto

Received: 14 February 2025; Revised: 10 March 2025; Accepted: 11 March 2025; Published: 14 March 2025
1. Introduction
Image dehazing plays a pivotal role in computer vision tasks by restoring the clarity of images degraded by atmospheric phenomena such as fog, haze, or smog [1]. The presence of these environmental factors significantly hampers the visibility and perceptibility of important image details, often leading to severe reductions in contrast, color fidelity, and overall image quality. This degradation poses substantial challenges for visual recognition systems, particularly in critical applications such as autonomous driving [2], target tracking [3–5], and outdoor object detection [6–8], where accurate and detailed scene understanding is paramount.

The atmospheric scattering model [9,10] provides insights into the formation of hazy images by explaining how haze affects an image through two main components: the scene radiance that is attenuated by the medium and the atmospheric light that is scattered into the camera. The observed hazy image is a combination of these two factors. The transmission map represents the extent to which the scene radiance is preserved after passing
through the haze, with denser haze leading to lower transmission values. Meanwhile,
the atmospheric light refers to the ambient light scattered by particles in the air, which
contributes to the overall brightness and color shift in the hazy image.
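For reference, the commonly used form of this model [9,10] is

$$I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)},$$

where $I(x)$ is the observed hazy image, $J(x)$ the scene radiance, $A$ the global atmospheric light, $t(x)$ the transmission map, $\beta$ the scattering coefficient, and $d(x)$ the scene depth.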
Most of the previous prior-based studies realized image dehazing by predicting
transmission maps and atmospheric light. For instance, He et al. [11] developed the Dark
Channel Prior (DCP) as a technique for estimating the transmission map. Fattal et al. [12]
proposed a method for estimating the transmission map by leveraging prior information
on object surface-shading and transmission characteristics. These methods based on prior
knowledge can produce dehazed images to some extent. However, if the assumed priors
do not align well with a particular scene or the parameter estimation is imprecise, the
resulting images may exhibit artifacts, color distortion, or other undesired effects.
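To illustrate how such priors are typically computed, the sketch below estimates a dark channel and a coarse transmission map in the spirit of DCP [11]. The patch size, the ω parameter, and the top-0.1% airlight estimate are common defaults rather than values taken from this paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over R, G, B followed by a local minimum filter."""
    return minimum_filter(img.min(axis=2), size=patch)

def dcp_transmission(hazy, omega=0.95, patch=15):
    """Coarse DCP-style transmission and airlight estimate for an RGB float image in [0, 1]."""
    dark = dark_channel(hazy, patch)
    # Airlight A: mean color of the brightest 0.1% of pixels in the dark channel.
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = hazy[idx].mean(axis=0)
    # Transmission prior: t(x) = 1 - omega * dark_channel(I(x) / A).
    t = 1.0 - omega * dark_channel(hazy / A, patch)
    return np.clip(t, 0.1, 1.0), A
```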
In contrast, deep learning algorithms leverage the powerful feature extraction and
nonlinear mapping capabilities of convolutional neural networks (CNNs) to enhance the
accuracy of parameter estimation, enabling dehazing models to produce clear, haze-free
images autonomously, eliminating the reliance on explicit physical models, thereby driving
the rapid advancement of image-dehazing technologies. In recent years, the rapid devel-
opment of deep learning has led to significant breakthroughs in computer vision [13–15].
Many dehazing approaches based on these methods have shown remarkable success. Some
approaches employ convolutional neural networks to predict the transmission map and
atmospheric light [16–18], while others directly learn various features to generate haze-free
images [19–22]. Because these learning-based methods can handle large training datasets,
they effectively extract useful information, resulting in high-quality, clear outputs even in
hazy conditions.
Many dehazing methods focus on enhancing image contrast and clarity while over-
looking the precise restoration of color information. As a result, dehazed images may exhibit
color distortion and bias, leading to suboptimal visual quality [23–25]. Figure 1 shows ex-
amples of color difference in image dehazing. Color plays a pivotal role in image processing;
accurate color representation not only influences the visual experience but also significantly
impacts subsequent image analysis and computer vision tasks. Ancuti et al. [26] proposed
a local adaptive-fusion strategy that constructs a light attenuation map through the red
channel to quantify local attenuation, then fuses the original image with a globally adjusted
color map to dynamically correct the intensity. Zhang et al. [27] first preprocessed the
images using a white balance algorithm and a DCP dehazing technique, and applied a mul-
tiscale fusion strategy to correct color deviations. However, the computational efficiency of
this method for high-resolution images needs to be improved.
Kong et al. [29] introduced a two-stage progressive network, which adjusts the pixel values of the three color
channels independently and employs a channel attention mechanism for color restoration.
Deep learning-based methods have significantly advanced the field of image dehazing.
These methods offer robust performance in tasks such as color restoration and image
enhancement. Huang et al. [30] proposed a multi-scale cascaded Transformer network with
adaptive group attention to dynamically select complementary channels, addressing image
color degradation caused by light absorption and scattering. Similarly, Khan et al. [31]
introduced a multi-domain query cascade Transformer network, which integrates local
transmission and global illumination features using a multi-domain query cascade attention
mechanism. However, the large number of parameters in these models often requires
substantial storage space and computational resources, which limits their practicality.
Therefore, it is crucial to design lightweight models for dehazing applications to address
these efficiency challenges.
It is difficult to strike a balance between preserving image color and achieving a high-quality dehazing effect. To improve the overall performance of the dehazing algorithm, we propose a dual-branch network that addresses image dehazing together with color-distortion correction. The two branches are the image-dehazing branch and the color-correction branch.
The dehazing branch enhances the image through multi-scale feature extraction and fusion: different levels of feature representation progressively restore image details, while up- and downsampling operations enrich the image's multi-scale information. The branch uses attention mechanisms (channel attention, CA, and pixel attention, PA) to dynamically reweight the important parts of the feature map and further refine the result. These attention mechanisms help the network focus on the regions that are most important to the task during processing, thus enhancing the effectiveness of the model.
The color-correction branch aims to enhance the color information and details of an
image. We introduce Lab color space, attention mechanism, and multilayer convolutional
neural network structure into the model design. In order to better process the color
information of the image, we convert the image from RGB color space to Lab color space.
A multilayer convolutional network is used to extract image features and generate an attention map of the same size as the input image. This attention map is concatenated with the original AB channels and fed into the forward processing module to further enhance the image. The features acquired from the two branches are then effectively merged, producing high-quality, visually satisfying images.
In summary, our main contributions are as follows:
1. An effective end-to-end two-branch image-dehazing network CCD-Net is proposed.
It focuses on image clarity and color features to obtain high-quality images.
2. A color-correction branch is proposed to utilize features from different color spaces
to avoid the difficulty of feature extraction in ordinary RGB color space. In addition,
we introduce the Convolutional Block Attention Module (CBAM) into the Color-
Correction Network to improve the feature-extraction capability of the network and
effectively recover the missing color and detail information.
3. We propose a loss function based on Lab space and form a fused loss function
with the loss function in RGB space, which more comprehensively considers image
color recovery.
4. Extensive experiments show that the proposed CCD-Net achieves a better dehazing effect, enhances the color quality of images, and consumes fewer computational resources than other competing methods.
The remaining sections of this paper are structured as follows: Section 2 reviews related
work, covering both prior-based and deep learning-based dehazing methods. Section 3
introduces the proposed CCD-Net, detailing its overall framework, dehazing branch, and
color-correction branch, along with the design of the loss function. Section 4 describes the
experimental setup, and presents a comparison with state-of-the-art dehazing methods and
an ablation study to evaluate the effectiveness of different components in our approach.
Finally, Section 5 provides the conclusion of the paper.
2. Related Work
In recent years, advances in image dehazing can be divided into two main phases: early techniques based on prior assumptions about the image, and deep learning-based dehazing methods that use neural network models for end-to-end mapping.
For example, Cai et al. [16] proposed DehazeNet to estimate the transmission map and atmospheric coefficients to obtain the dehazed image. Similarly, Ren et al. [17] proposed multi-scale convolutional neural networks (MSCNNs) and Zhang et al. [36] proposed a joint dehazing network for transmission maps. In response to the fact that previous networks mostly estimated the atmospheric light value and the transmittance separately, Li et al. [37] proposed the All-in-One Dehazing Network (AOD-Net), which unifies the two unknown parameters into a single parameter, reducing both computation and error.
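Concretely, the reformulation in [37] replaces both unknowns with a single variable $K(x)$:

$$J(x) = K(x)\,I(x) - K(x) + b, \qquad K(x) = \frac{\frac{1}{t(x)}\bigl(I(x) - A\bigr) + (A - b)}{I(x) - 1},$$

where $b$ is a constant bias, so that only $K(x)$ needs to be learned.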
The end-to-end algorithm does not require consideration of the foggy-image-
degradation causes or the imaging process. The network takes foggy images as input
and, after learning, outputs a clear image. Qu et al. [38] proposed an enhanced pix2pix
dehazing network, which treats image dehazing as an image-to-image conversion problem.
Chen et al. [39] introduced an end-to-end gated context-aggregation network for image de-
hazing. Qin et al. [40] proposed a feature-fusion attention (FFA) network, which processes
different types of feature information in order to realize image dehazing. Wu et al. [41]
proposed a compact image-dehazing method based on contrastive learning, which further constrains the dehazing problem by introducing negative samples and fully exploiting their information to bound the solution space from above and below. These end-to-end dehazing networks, which do not rely on atmospheric degradation models, can remove fog from hazy images to a certain extent, but they over-extract global multiscale features at the input resolution and lack the ability to focus on local texture details (Table 1).
Dehazed images often suffer from color distortion due to a focus on contrast en-
hancement over color accuracy. To address this issue, we introduce a novel dual-branch
framework that integrates dehazing and color correction into a unified model, ensuring
effective haze removal while enhancing chromatic fidelity.
3. Our Method
This section first provides an overview of the proposed CCD-Net's overall structure, offering a comprehensive understanding of the network architecture. It then introduces the individual components, the dehazing branch and the color-correction branch, in detail, analyzing their respective functions and design objectives. Finally, the section discusses the design of the loss function and its role in the training process.
3.1. Overall Framework
Restoring a blurred image to a clear one typically involves addressing several degra-
dation issues, such as blurring, color bias, and uneven brightness. These degradations can
result in the loss of image details and degrade both the quality and utility of the image. To
address these challenges, a dual-branch dehazing network, CCD-Net, has been proposed.
It primarily focuses on learning image clarity and color features, effectively mitigating the
aforementioned degradation issues to produce clear and naturally colored images.
CCD-Net is composed of two main components: a dehazing branch and a color-feature learning branch, as illustrated in Figure 2. The input hazy image $I_{in} \in \mathbb{R}^{H \times W \times 3}$ is processed simultaneously by both branches. The dehazing branch extracts the feature map $F_{dehazing}$, while the color-feature learning branch generates the feature map $F_{color} \in \mathbb{R}^{H \times W \times 3}$. These two feature maps are then combined to reconstruct a clear image.
The first branch focuses on restoring image clarity by employing advanced dehazing
techniques to enhance blurred regions and recover fine details and edges. The second
branch is dedicated to learning color features in the Lab color space, aiming to correct color
bias caused by atmospheric scattering and other factors, ensuring that the restored image
exhibits more natural and accurate colors.
By jointly optimizing these two objectives—clarity and color features—CCD-Net is
able to effectively retain the original color and detail of the image while removing haze,
avoiding the common problems of color distortion and edge blurring encountered in
traditional dehazing methods. This ensures high-quality image restoration across a wide
range of complex environments. Detailed descriptions of the dehazing and color-feature
learning branches are provided below.
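A minimal sketch of this dual-branch layout is given below. The module names and the simple concatenate-and-convolve fusion are placeholders for illustration; the paper describes the fusion only at a high level.

```python
import torch
import torch.nn as nn

class DualBranchDehaze(nn.Module):
    """Illustrative two-branch layout: dehazing branch plus color branch, fused at the end.
    The branch internals are stand-ins; only the overall data flow follows the paper."""
    def __init__(self, dehaze_branch: nn.Module, color_branch: nn.Module):
        super().__init__()
        self.dehaze_branch = dehaze_branch      # RGB encoder-decoder branch
        self.color_branch = color_branch        # Lab-space color-correction branch
        self.fuse = nn.Conv2d(6, 3, kernel_size=3, padding=1)  # placeholder fusion layer

    def forward(self, x_rgb: torch.Tensor) -> torch.Tensor:
        f_dehaze = self.dehaze_branch(x_rgb)    # F_dehazing
        f_color = self.color_branch(x_rgb)      # F_color
        # Combine both feature maps to reconstruct the clear image.
        return self.fuse(torch.cat([f_dehaze, f_color], dim=1))
```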
In the decoder, each layer upsamples the output of the previous layer $De_{out}^{i+1}$, while the number of channels is reduced by half. The main process can be described as follows:

$$De_{out}^{i} = Up\left(Group\left(De_{out}^{i+1}\right)\right), \quad i = \{0, 1, 2, 3\} \tag{2}$$
The fundamental unit of the model is the Block Module, consisting of two convolu-
tional layers, a channel attention layer, and a pixel attention layer. Details of pixel attention
and channel attention are shown in Figure 4. This module is designed to process image
features at multiple levels, and by using residual connections, it fuses low-level features
with higher-level ones, preventing the loss of information. The output from each Block
module is further enhanced in subsequent layers, enabling the model to capture more
complex feature representations and improving the dehazing result.
$$F_{Block} = PA\left(CA\left(Conv\left(\delta\left(Conv\left(F_{in}\right)\right)\right)\right)\right) \tag{3}$$
In the Group module, multiple Block modules are stacked together to form a more
complex unit responsible for multi-level feature fusion. Within each Group module, features
are processed through several layers to transform the input feature map into higher-level
representations, allowing the model to capture information from different scales and further
enhancing the dehazing effect.
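A sketch of the basic Block of Equation (3), with FFA-style channel and pixel attention [40] and a residual connection, might look as follows. The layer widths, reduction ratio, and number of Blocks per Group are illustrative choices, not values confirmed by the paper.

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    """CA: global average pooling followed by a small bottleneck, as in FFA-Net [40]."""
    def __init__(self, dim, reduction=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(dim, dim // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(dim // reduction, dim, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.net(x)

class PixelAttention(nn.Module):
    """PA: per-pixel attention map produced by two 1x1 convolutions."""
    def __init__(self, dim, reduction=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(dim, dim // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(dim // reduction, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.net(x)

class Block(nn.Module):
    """F_Block = PA(CA(Conv(ReLU(Conv(F_in))))) with a residual connection (Eq. 3)."""
    def __init__(self, dim):
        super().__init__()
        self.conv1 = nn.Conv2d(dim, dim, 3, padding=1)
        self.act = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(dim, dim, 3, padding=1)
        self.ca = ChannelAttention(dim)
        self.pa = PixelAttention(dim)

    def forward(self, x):
        out = self.pa(self.ca(self.conv2(self.act(self.conv1(x)))))
        return out + x  # residual connection fusing low- and higher-level features

class Group(nn.Module):
    """A Group stacks several Blocks for multi-level feature fusion."""
    def __init__(self, dim, n_blocks=4):
        super().__init__()
        self.blocks = nn.Sequential(*[Block(dim) for _ in range(n_blocks)])

    def forward(self, x):
        return self.blocks(x)
```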
To better handle the color information, we first convert the image from the RGB color space to the Lab
color space, then integrate attention mechanisms and multilayer convolutional neural net-
work structures into the model design for enhanced performance. Unlike previous methods
that adjust color distribution based on the mean values of RGB channels [45] or compen-
sate for weak color channels using attenuation matrices to correct color distortion [46],
our approach operates in the Lab color space. Through this color-space transformation,
the model can independently process lightness and color information, allowing for more
precise adjustments to the color details of the image.
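The color-space step can be illustrated with an off-the-shelf conversion. The snippet below uses scikit-image and simply separates the lightness channel from the a/b chroma channels, which is the decoupling this branch relies on; the helper names are hypothetical.

```python
import numpy as np
from skimage import color

def split_lab(rgb: np.ndarray):
    """Convert an RGB image in [0, 1] to Lab and return (L, ab) separately."""
    lab = color.rgb2lab(rgb)   # L in [0, 100], a/b roughly in [-128, 127]
    L = lab[..., :1]           # lightness channel
    ab = lab[..., 1:]          # chromatic a/b channels (the X_AB input in the paper)
    return L, ab

def merge_lab(L: np.ndarray, ab: np.ndarray) -> np.ndarray:
    """Recombine corrected a/b channels with lightness and convert back to RGB."""
    return color.lab2rgb(np.concatenate([L, ab], axis=-1))
```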
The AB channels $X_{AB}$ of the image are taken as input. By combining the attention-map-generation module and the forward processing module, the branch effectively enhances the AB channels of the image. Additionally, to help the model focus more precisely on important regions of the image, we introduce an attention mechanism. This mechanism generates an attention map that highlights key regions of the image, aiding the network in focusing on critical parts of the image during the training process.
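The attention module named in contribution 2 for this branch is CBAM [47]; a compact version is sketched below, with the reduction ratio and 7×7 spatial kernel being the defaults from the CBAM paper rather than values confirmed here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CBAM(nn.Module):
    """Convolutional Block Attention Module [47]: channel attention then spatial attention."""
    def __init__(self, dim, reduction=16, spatial_kernel=7):
        super().__init__()
        # Channel attention: shared MLP over average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(dim, dim // reduction, 1, bias=False), nn.ReLU(inplace=True),
            nn.Conv2d(dim // reduction, dim, 1, bias=False))
        # Spatial attention: convolution over channel-wise average and max maps.
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        ca = torch.sigmoid(self.mlp(F.adaptive_avg_pool2d(x, 1)) +
                           self.mlp(F.adaptive_max_pool2d(x, 1)))
        x = x * ca
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(dim=1, keepdim=True), x.max(dim=1, keepdim=True).values], dim=1)))
        return x * sa
```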
to image dehazing. This approach not only recovers the lightness information but also
reconstructs more of the fine details during the dehazing process.
To further refine color correction, we introduce an L1 loss in the Lab color space,
specifically focusing on the a (red-green) and b (yellow-blue) chromatic channels. While
pixel-level L1 reconstruction loss enhances structural clarity, it remains insufficient for
precise color restoration, particularly in maintaining color fidelity. In our approach, the
a and b chromatic components of both the dehazed output and the ground truth are
extracted, and the L1 loss is computed between them. Given that the Lab color space
inherently decouples luminance (L) from chromaticity (a, b), this loss formulation enables
CCD-Net to more effectively restore natural colors and mitigate hue shifts introduced
by atmospheric scattering. By incorporating this constraint, the loss function not only
ensures accurate brightness restoration but also enhances color consistency, significantly
reducing potential chromatic distortions in the dehazing process. Lcolor can be expressed
mathematically as follows:
$$\mathcal{L}_{color} = \left\| \hat{X}_{AB} - I_{GT}^{ab} \right\|_1 \tag{5}$$
Combining the two terms encourages the restored image to be both clear and color-accurate. The final objective function is a weighted combination of both loss components:
$$\mathcal{L} = \mathcal{L}_1 \times 0.5 + \mathcal{L}_{color} \times 0.5 \tag{6}$$

where $\mathcal{L}_1$ denotes the pixel-based L1 loss and $\mathcal{L}_{color}$ represents the L1 loss in the Lab color space.
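A sketch of the fused objective of Equations (5) and (6) is given below, assuming the network output and ground truth are RGB tensors in [0, 1] and using kornia's RGB-to-Lab converter; the equal 0.5/0.5 weights follow Equation (6).

```python
import torch
import torch.nn.functional as F
from kornia.color import rgb_to_lab

def fused_loss(pred_rgb: torch.Tensor, gt_rgb: torch.Tensor) -> torch.Tensor:
    """L = 0.5 * L1(RGB) + 0.5 * L1(a/b in Lab), following Eqs. (5) and (6).
    Inputs are (N, 3, H, W) tensors with values in [0, 1]."""
    l1_rgb = F.l1_loss(pred_rgb, gt_rgb)          # pixel-level L1 in RGB space
    pred_ab = rgb_to_lab(pred_rgb)[:, 1:, :, :]   # chromatic a/b channels of the prediction
    gt_ab = rgb_to_lab(gt_rgb)[:, 1:, :, :]       # chromatic a/b channels of the ground truth
    l_color = F.l1_loss(pred_ab, gt_ab)           # Eq. (5): L1 on the a/b channels
    return 0.5 * l1_rgb + 0.5 * l_color           # Eq. (6): equal weighting
```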
4. Experiments
This section presents the experimental evaluation of CCD-Net, including the datasets,
evaluation metrics, and implementation details. We conduct a comparative analysis against
state-of-the-art dehazing methods, providing both quantitative and qualitative assessments
to demonstrate the superiority of our approach. Additionally, we analyze the impact of the
Lab color space and perform an ablation study to evaluate the contributions of individual
components within the network.
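For reference, the standard CIEDE2000 color-difference formula [49], used here to measure color accuracy, has the form

$$\Delta E_{00} = \sqrt{\left(\frac{\Delta L'}{k_L S_L}\right)^2 + \left(\frac{\Delta C'}{k_C S_C}\right)^2 + \left(\frac{\Delta H'}{k_H S_H}\right)^2 + R_T\,\frac{\Delta C'}{k_C S_C}\,\frac{\Delta H'}{k_H S_H}},$$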
where ∆E00 indicates the similarity between two colors, with a smaller value signifying
a closer color match. ∆L′ represents the difference in lightness, while ∆C ′ denotes the
chroma difference, and ∆H ′ corresponds to the hue difference. The weighting coefficients
k L , k C , and k H are typically set to 1 to balance the contributions of different components.
The scaling factors S L , SC , and S H are used to normalize the respective differences in
lightness, chroma, and hue. Additionally, R T is a rotation term specifically designed to
adjust color differences in the blue region, ensuring better perceptual accuracy.
A higher PSNR, or an SSIM value closer to 1, suggests that the dehazed image preserves more structural details of the ground-truth image, while a lower CIEDE value indicates better color preservation.
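In practice, these three metrics can be computed with scikit-image. The helper below is a sketch: deltaE_ciede2000 is averaged over pixels as one reasonable way to report a single CIEDE value per image, since the paper's exact aggregation is not specified here.

```python
import numpy as np
from skimage import color
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(dehazed: np.ndarray, gt: np.ndarray):
    """PSNR, SSIM, and per-image mean CIEDE2000 for RGB float images in [0, 1]."""
    psnr = peak_signal_noise_ratio(gt, dehazed, data_range=1.0)
    ssim = structural_similarity(gt, dehazed, channel_axis=-1, data_range=1.0)
    ciede = color.deltaE_ciede2000(color.rgb2lab(gt), color.rgb2lab(dehazed)).mean()
    return psnr, ssim, ciede
```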
Table 2. Training parameter settings.

| Parameter | Value |
|---|---|
| Patch size | 128 × 128 |
| Batch size | 12 |
| Learning rate | 0.0003 |
| Iterations | 300,000 |
| Optimizer | AdamW |
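A matching optimizer setup under these settings might look as follows; the learning-rate schedule is not stated here, so none is assumed, and the model is a placeholder.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 3, kernel_size=3, padding=1)            # placeholder standing in for CCD-Net
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)    # AdamW with lr = 0.0003
batch_size, patch_size, total_iterations = 12, 128, 300_000   # remaining values from the table
```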
Table 3. Quantitative results of different methods on RESIDE-indoor and RESIDE-6K datasets. Best
results are bolded, and the second-best results are underlined.
Figure 9. Visual comparison between our method and other approaches on the RESIDE-indoor dataset.
Figure 10. Visual comparison of various methods on haze-free images. The first row shows the error
maps between the resulting images and the ground truth.
Figure 11. Visualization of dehazed image and corresponding ground-truth image in the Lab
color space.
| Method | PSNR (dB) (RESIDE-Indoor) | SSIM (RESIDE-Indoor) | CIEDE (RESIDE-Indoor) | PSNR (dB) (RESIDE-6K) | SSIM (RESIDE-6K) | CIEDE (RESIDE-6K) |
|---|---|---|---|---|---|---|
| (I) | 34.20 | 0.981 | 3.83 | 29.57 | 0.973 | 4.87 |
| (II) | 35.12 | 0.982 | 2.78 | 30.38 | 0.975 | 3.96 |
| (III) | 35.90 | 0.991 | 1.90 | 30.71 | 0.979 | 2.25 |
5. Conclusions
In this work, we propose a two-branch dehazing network CCD-Net. In the image-
dehazing branch, multi-scale feature extraction and fusion are used to enhance the pro-
cessing capability of the image, and the image details are gradually enhanced by different
levels of feature representation. In addition, up- and downsampling operations are utilized
to further enhance the multi-scale information of the image. This branch combines the
channel attention (CA) and pixel attention (PA) mechanisms to dynamically adjust the
important parts of the feature maps so as to optimize the image-processing effect. In the
color-correction branch, the color information and details of the image are mainly enhanced
by introducing Lab color space, the attention mechanism, and the multilayer convolutional
neural network structure. Extensive experimental results demonstrate that CCD-Net outperforms five other methods in dehazing and effectively mitigates the impact of haze on
image brightness and color. Most notably, CIEDE is reduced by 40.81% on RESIDE-indoor
and 45.57% on RESIDE-6K compared to the second-best-performing model, highlighting
CCD-Net’s superior color-restoration capabilities. Additionally, the ablation study confirms
the effectiveness of our network components.
Author Contributions: Conceptualization, D.C. and H.Z.; methodology, D.C. and H.Z.; software,
D.C.; validation, D.C.; formal analysis, D.C. and H.Z.; investigation, D.C. and H.Z.; resources,
D.C. and H.Z.; data curation, D.C.; writing—original draft preparation, D.C.; writing—review and
editing, D.C. and H.Z.; visualization, D.C.; supervision, H.Z.; project administration, H.Z.; funding
acquisition, H.Z. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the National Natural Science Foundation of China (NSFC)
under Grant 62173143.
Data Availability Statement: A publicly available dataset is analyzed in this study. These data can
be found here: [Link] (accessed on
15 November 2024).
References
1. Lin, C.; Rong, X.; Yu, X. MSAFF-Net: Multiscale attention feature fusion networks for single image dehazing and beyond. IEEE
Trans. Multimed. 2022, 25, 3089–3100. [CrossRef]
2. Mehra, A.; Mandal, M.; Narang, P.; Chamola, V. ReViewNet: A fast and resource optimized network for enabling safe autonomous
driving in hazy weather conditions. IEEE Trans. Intell. Transp. Syst. 2020, 22, 4256–4266. [CrossRef]
3. Xu, Y.; Osep, A.; Ban, Y.; Horaud, R.; Leal-Taixé, L.; Alameda-Pineda, X. How to train your deep multi-object tracker. In
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020;
pp. 6787–6796.
4. Cao, Z.; Huang, Z.; Pan, L.; Zhang, S.; Liu, Z.; Fu, C. TCTrack: Temporal contexts for aerial tracking. In Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022; pp. 14798–14808.
5. Yin, J.; Wang, W.; Meng, Q.; Yang, R.; Shen, J. A unified object motion and affinity model for online multi-object tracking. In
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020;
pp. 6768–6777.
6. Pang, Y.; Xie, J.; Khan, M.H.; Anwer, R.M.; Khan, F.S.; Shao, L. Mask-Guided Attention Network for Occluded Pedestrian
Detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea,
27 October–2 November 2019.
7. Nie, J.; Anwer, R.M.; Cholakkal, H.; Khan, F.S.; Pang, Y.; Shao, L. Enriched feature guided refinement network for object
detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9537–9546.
8. Li, Y.; Pang, Y.; Shen, J.; Cao, J.; Shao, L. NETNet: Neighbor erasing and transferring network for better single shot object detection.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020;
pp. 13349–13358.
9. Nayar, S.K.; Narasimhan, S.G. Vision in bad weather. In Proceedings of the Seventh IEEE International Conference on Computer
Vision, Kerkyra, Greece, 20–25 September 1999; Volume 2, pp. 820–827.
10. Narasimhan, S.G.; Nayar, S.K. Vision and the atmosphere. Int. J. Comput. Vis. 2002, 48, 233–254. [CrossRef]
11. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010,
33, 2341–2353.
12. Fattal, R. Single Image Dehazing. ACM Trans. Graph. (TOG) 2008, 27, 1–9. [CrossRef]
13. Liu, J.; Li, S.; Liu, H.; Dian, R.; Wei, X. A lightweight pixel-level unified image fusion network. IEEE Trans. Neural Netw. Learn.
Syst. 2023, 35, 18120–18132. [CrossRef]
14. Jain, J.; Li, J.; Chiu, M.T.; Hassani, A.; Orlov, N.; Shi, H. Oneformer: One transformer to rule universal image segmentation. In
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Waikoloa, HI, USA, 2–8 January 2023;
pp. 2989–2998.
15. Zhou, J.; Li, B.; Zhang, D.; Yuan, J.; Zhang, W.; Cai, Z.; Shi, J. UGIF-Net: An efficient fully guided information flow network for
underwater image enhancement. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–17. [CrossRef]
16. Cai, B.; Xu, X.; Jia, K.; Qing, C.; Tao, D. Dehazenet: An end-to-end system for single image haze removal. IEEE Trans. Image
Process. 2016, 25, 5187–5198. [CrossRef]
17. Ren, W.; Liu, S.; Zhang, H.; Pan, J.; Cao, X.; Yang, M.H. Single image dehazing via multi-scale convolutional neural networks. In
Proceedings of the Computer Vision—ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016;
Proceedings, Part II 14; Springer: Berlin/Heidelberg, Germany, 2016; pp. 154–169.
18. Deng, Z.; Zhu, L.; Hu, X.; Fu, C.W.; Xu, X.; Zhang, Q.; Qin, J.; Heng, P.A. Deep multi-model fusion for single-image dehazing. In
Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019;
pp. 2453–2462.
19. Zheng, L.; Li, Y.; Zhang, K.; Luo, W. T-net: Deep stacked scale-iteration network for image dehazing. IEEE Trans. Multimed. 2022,
25, 6794–6807. [CrossRef]
20. Zheng, C.; Zhang, J.; Hwang, J.N.; Huang, B. Double-branch dehazing network based on self-calibrated attentional convolution.
Knowl. Based Syst. 2022, 240, 108148. [CrossRef]
21. Yi, Q.; Li, J.; Fang, F.; Jiang, A.; Zhang, G. Efficient and accurate multi-scale topological network for single image dehazing. IEEE
Trans. Multimed. 2021, 24, 3114–3128. [CrossRef]
22. Zhou, Y.; Chen, Z.; Li, P.; Song, H.; Chen, C.P.; Sheng, B. FSAD-Net: Feedback spatial attention dehazing network. IEEE Trans.
Neural Netw. Learn. Syst. 2022, 34, 7719–7733. [CrossRef]
23. Han, M.; Lyu, Z.; Qiu, T.; Xu, M. A review on intelligence dehazing and color restoration for underwater images. IEEE Trans.
Syst. Man. Cybern. Syst. 2018, 50, 1820–1832. [CrossRef]
24. Li, C.; Guo, J. Underwater image enhancement by dehazing and color correction. J. Electron. Imaging 2015, 24, 033023. [CrossRef]
25. Deng, X.; Wang, H.; Liu, X. Underwater image enhancement based on removing light source color and dehazing. IEEE Access
2019, 7, 114297–114309. [CrossRef]
26. Ancuti, C.O.; Ancuti, C.; De Vleeschouwer, C.; Garcia, R. Locally adaptive color correction for underwater image dehazing
and matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Venice, Italy,
22–29 October 2017; pp. 1–9.
27. Zhang, Y.; Yang, F.; He, W. An approach for underwater image enhancement based on color correction and dehazing. Int. J. Adv.
Robot. Syst. 2020, 17, 1729881420961643. [CrossRef]
28. Espinosa, A.R.; McIntosh, D.; Albu, A.B. An efficient approach for underwater image improvement: Deblurring, dehazing, and
color correction. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA,
2–8 January 2023; pp. 206–215.
29. Kong, L.; Feng, Y.; Yang, S.; Gao, X. A Two-stage Progressive Network for Underwater Image Enhancement. In Proceedings of
the 2024 5th International Conference on Computer Vision, Image and Deep Learning (CVIDL), Zhuhai, China, 19–21 April 2024;
pp. 1013–1017.
30. Huang, Z.; Li, J.; Hua, Z.; Fan, L. Underwater image enhancement via adaptive group attention-based multiscale cascade
transformer. IEEE Trans. Instrum. Meas. 2022, 71, 1–18. [CrossRef]
31. Khan, R.; Mishra, P.; Mehta, N.; Phutke, S.S.; Vipparthi, S.K.; Nandi, S.; Murala, S. Spectroformer: Multi-domain query cascaded
transformer network for underwater image enhancement. In Proceedings of the IEEE/CVF Winter Conference on Applications
of Computer Vision, Waikoloa, HI, USA, 2–8 January 2023; pp. 1454–1463.
32. Zhu, Q.; Mai, J.; Shao, L. A fast single image haze removal algorithm using color attenuation prior. IEEE Trans. Image Process.
2015, 24, 3522–3533.
33. Berman, D.; Treibitz, T.; Avidan, S. Non-local image dehazing. In Proceedings of the IEEE Conference on Computer Vision And
Pattern Recognition, Las Vegas, NV, USA, 27 June–2 July 2016; pp. 1674–1682.
34. Meng, G.; Wang, Y.; Duan, J.; Xiang, S.; Pan, C. Efficient image dehazing with boundary constraint and contextual regularization.
In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 617–624.
35. Ancuti, C.O.; Ancuti, C.; De Vleeschouwer, C.; Sbert, M. Color channel compensation (3C): A fundamental pre-processing step
for image enhancement. IEEE Trans. Image Process. 2019, 29, 2653–2665. [CrossRef]
36. Zhang, H.; Patel, V.M. Densely connected pyramid dehazing network. In Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3194–3203.
37. Li, B.; Peng, X.; Wang, Z.; Xu, J.; Feng, D. Aod-net: All-in-one dehazing network. In Proceedings of the IEEE International
Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4770–4778.
38. Qu, Y.; Chen, Y.; Huang, J.; Xie, Y. Enhanced Pix2pix Dehazing Network. In Proceedings of the 2019 IEEE/CVF Conference on
Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 8152–8160.
39. Chen, D.; He, M.; Fan, Q.; Liao, J.; Zhang, L.; Hou, D.; Yuan, L.; Hua, G. Gated context aggregation network for image
dehazing and deraining. In Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV),
Waikoloa, HI, USA, 7–11 January 2019; pp. 1375–1383.
40. Qin, X.; Wang, Z.; Bai, Y.; Xie, X.; Jia, H. FFA-Net: Feature fusion attention network for single image dehazing. In Proceedings of
the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 11908–11915.
41. Wu, H.; Qu, Y.; Lin, S.; Zhou, J.; Qiao, R.; Zhang, Z.; Xie, Y.; Ma, L. Contrastive learning for compact single image dehazing. In
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 19–25 June 2021; pp. 10551–10560.
42. Li, C.; Anwar, S.; Hou, J.; Cong, R.; Guo, C.; Ren, W. Underwater image enhancement via medium transmission-guided
multi-color space embedding. IEEE Trans. Image Process. 2021, 30, 4985–5000. [CrossRef]
43. Suny, A.H.; Mithila, N.H. A shadow detection and removal from a single image using LAB color space. Int. J. Comput. Sci. Issues
2013, 10, 270.
44. Chung, Y.S.; Kim, N.H. Saturation-based airlight color restoration of hazy images. Appl. Sci. 2023, 13, 12186. [CrossRef]
45. Alsaeedi, A.H.; Hadi, S.M.; Alazzawi, Y. Adaptive Gamma and Color Correction for Enhancing Low-Light Images. Int. J. Intell.
Eng. Syst. 2024, 17, 188.
46. Zhang, W.; Wang, Y.; Li, C. Underwater image enhancement by attenuated color channel correction and detail preserved contrast
enhancement. IEEE J. Ocean. Eng. 2022, 47, 718–735. [CrossRef]
47. Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Proceedings of the European Conference
on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
48. Li, B.; Ren, W.; Fu, D.; Tao, D.; Feng, D.; Zeng, W.; Wang, Z. Benchmarking single-image dehazing and beyond. IEEE Trans. Image
Process. 2018, 28, 492–505. [CrossRef]
49. CIE. Commission Internationale de l'Eclairage. Colorimetry; Bureau Central de la CIE: Vienna, Austria, 1976.
50. Liu, X.; Ma, Y.; Shi, Z.; Chen, J. Griddehazenet: Attention-based multi-scale network for image dehazing. In Proceedings of the
IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 7314–7323.
51. Song, Y.; He, Z.; Qian, H.; Du, X. Vision transformers for single image dehazing. IEEE Trans. Image Process. 2023, 32, 1927–1941.
[CrossRef]
52. Yang, Y.; Wang, C.; Liu, R.; Zhang, L.; Guo, X.; Tao, D. Self-augmented unpaired image dehazing via density and depth decom-
position. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA,
19–24 June 2022; pp. 2037–2046.
53. Wang, Z.; Zhao, H.; Peng, J.; Yao, L.; Zhao, K. ODCR: Orthogonal Decoupling Contrastive Regularization for Unpaired Image
Dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada,
16–20 June 2024; pp. 25479–25489.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.