
A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior

Qingsong Zhu, Member, IEEE, Jiaming Mai, Ling Shao, Senior Member, IEEE

Q. Zhu and J. Mai are with the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China, and with the Chinese University of Hong Kong, Hong Kong, China (E-mail: [email protected]). L. Shao is with the Department of Computer Science and Digital Technologies, Northumbria University, Newcastle upon Tyne NE1 8ST, UK (Corresponding author, E-mail: [email protected]).


Abstract—Single image haze removal has been a challenging problem due to its ill-posed nature. In this paper, we propose a simple but powerful color attenuation prior for haze removal from a single input hazy image. By creating a linear model for the scene depth of the hazy image under this novel prior and learning the parameters of the model with a supervised learning method, the depth information can be well recovered. With the depth map of the hazy image, we can easily estimate the transmission and restore the scene radiance via the atmospheric scattering model, and thus effectively remove the haze from a single image. Experimental results show that the proposed approach outperforms state-of-the-art haze removal algorithms in terms of both efficiency and the dehazing effect.

Index Terms—Dehazing, defog, image restoration, depth restoration

I. INTRODUCTION

Outdoor images taken in bad weather (e.g., foggy or hazy) usually lose contrast and fidelity, resulting from the fact that light is absorbed and scattered by the turbid medium, such as particles and water droplets in the atmosphere, during the process of propagation. Moreover, most automatic systems, which strongly depend on the definition of the input images, fail to work normally when given such degraded images. Therefore, improving the technique of image haze removal will benefit many image understanding and computer vision applications such as aerial imagery [1], image classification [2-5], image/video retrieval [6-8], remote sensing [9-11] and video analysis and recognition [12-14].

Fig. 1. An overview of the proposed dehazing method. Top-left: Input hazy image. Top-right: Restored depth map. Bottom-left: Restored transmission map. Bottom-right: Dehazed image.

Since the concentration of the haze is different from place to place and it is hard to detect in a hazy image, image dehazing is thus a challenging task. Early researchers use the traditional techniques of image processing to remove the haze from a single image (for instance, histogram-based dehazing methods [15-17]). However, the dehazing effect is limited, because a single hazy image can hardly provide much information. Later, researchers try to improve the dehazing performance with multiple images. In [18-20], polarization-based methods are used for dehazing with multiple images which are taken with different degrees of polarization. In [21-23], Narasimhan et al. propose haze removal approaches with multiple images of the same scene under different weather conditions. In [24, 25], dehazing is conducted based on the given depth information.

Recently, significant progress has been made in single image dehazing based on the physical model. Under the assumption that the local contrast of the haze-free image is much higher than that in the hazy image, Tan [26] proposes a novel haze removal method by maximizing the local contrast of the image based on the Markov Random Field (MRF). Although Tan's approach is able to achieve impressive results, it tends to produce over-saturated images. Fattal [27] proposes to remove the haze from color images based on Independent Component Analysis (ICA), but the approach is time-consuming and cannot be used for grayscale image dehazing. Furthermore, it has some


difficulties in dealing with dense-haze images. Inspired by the widely used dark-object subtraction technique [28] and based on a large number of experiments on haze-free images, He et al. [29] discover the dark channel prior (DCP): in most of the non-sky patches, at least one color channel has some pixels whose intensities are very low and close to zero. With this prior, they estimate the thickness of the haze and restore the haze-free image by the atmospheric scattering model. The DCP approach is simple and effective in most cases. However, it cannot handle sky images well and is computationally intensive. Some improved algorithms [30-36, 45-51] are proposed to overcome the weaknesses of the DCP approach. For efficiency, Gibson et al. [31], Tarel et al. [45, 46], Yu et al. [32], and He et al. [43] replace the time-consuming soft matting [44] with standard median filtering, the "median of median filter", guided joint bilateral filtering [37-42] and guided image filtering, respectively. In terms of dehazing quality, Nishino et al. [48, 49] model the image with a factorial Markov random field to estimate the scene radiance more accurately; Meng et al. [50] propose an effective regularization dehazing method to restore the haze-free image by exploring the inherent boundary constraint; Tang et al. [51] combine four types of haze-relevant features with Random Forest [52] to estimate the transmission. Despite the remarkable progress, the limitation of the state-of-the-art methods lies in the fact that the haze-relevant priors or heuristic cues they use are not effective or efficient enough.

In this paper, we propose a novel color attenuation prior for single image dehazing. This simple and powerful prior can help to create a linear model for the scene depth of the hazy image. By learning the parameters of the linear model with a supervised learning method, the bridge between the hazy image and its corresponding depth map is built effectively. With the recovered depth information, we can easily remove the haze from a single hazy image. An overview of the proposed dehazing method is shown in Figure 1. The efficiency of this dehazing method is dramatically high, and its dehazing effectiveness is also superior to that of prevailing dehazing algorithms, as we will show in Section VI. A conference version of our work has been presented in [53].

The remainder of this paper is organized as follows: In Section II, we review the atmospheric scattering model which is widely used for image dehazing and give a concise analysis of the parameters of this model. In Section III, we present the novel color attenuation prior. In Section IV, we discuss the approach of recovering the scene depth with the proposed color attenuation prior. In Section V, the method of image dehazing with the depth information is described. In Section VI, we present and analyze the experimental results. Finally, we summarize this paper in Section VII.

II. ATMOSPHERIC SCATTERING MODEL

To describe the formation of a hazy image, the atmospheric scattering model, which was proposed by McCartney in 1976 [54], is widely used in computer vision and image processing. Narasimhan and Nayar [22, 23, 55, 56] further derived the model later, and it can be expressed as follows:

I(x) = J(x)t(x) + A(1 − t(x)),    (1)

t(x) = e^(−βd(x)),    (2)

where x is the position of the pixel within the image, I is the hazy image, J is the scene radiance representing the haze-free image, A is the atmospheric light, t is the medium transmission, β is the scattering coefficient of the atmosphere and d is the depth of the scene. I, J and A are all three-dimensional vectors in RGB space. Since I is known, the goal of dehazing is to estimate A and t, and then restore J according to Equation (1).

It is worth noting that the depth of the scene d is the most important information. Since the scattering coefficient β can be regarded as a constant in the homogeneous atmosphere condition [55], the medium transmission t can be estimated easily according to Equation (2) if the depth of the scene is given. Moreover, in the ideal case, the range of d(x) is [0, +∞) as the scenery objects that appear in the image can be very far from the observer, and we have:

I(x) = A,  d(x) → ∞.    (3)

Equation (3) shows that the intensity of the pixel whose depth tends to infinity can stand for the value of the atmospheric light A. Note that, if d(x) is large enough, t(x) tends to be very small according to Equation (2), and I(x) approximately equals A. Therefore, instead of calculating the atmospheric light A by Equation (3), we can estimate A by the following equation given a threshold dthreshold:

I(x) = A,  d(x) ≥ dthreshold.    (4)

We also notice that it is not hard to satisfy the constraint d(x) > dthreshold. In most cases, a hazy image taken outdoors has a distant view that is kilometres away from the observer. In other words, the pixel belonging to the region with a distant view in the image should have a very large depth dthreshold. Assuming that every hazy image has a distant view, we have:

d(x) ≥ dthreshold,  ∀x ∈ {x | ∀y: d(y) ≤ d(x)}.    (5)

Based on this assumption, the atmospheric light A is given by:

A = I(x),  x ∈ {x | ∀y: d(y) ≤ d(x)}.    (6)

On this condition, the task of dehazing can be further converted into depth information restoration. However, it is also a challenging task to obtain the depth map from a single hazy image.
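To make the role of Equations (1) and (2) concrete, a minimal Python/NumPy sketch of the forward model is given below. It is an illustration of ours rather than code from the paper, and the array names (J, d, A) simply mirror the symbols defined above.

import numpy as np

def synthesize_haze(J, d, A, beta=1.0):
    # J: haze-free image, float array in [0, 1], shape (H, W, 3)
    # d: scene depth map, shape (H, W); A: atmospheric light, length-3 vector
    t = np.exp(-beta * d)                              # Eq. (2): medium transmission
    t3 = t[..., np.newaxis]                            # broadcast over the RGB channels
    I = J * t3 + np.asarray(A, dtype=float) * (1.0 - t3)   # Eq. (1): attenuation + airlight
    return I, t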


Fig. 3. The process of imaging under different weather conditions. (a) The process of imaging in sunny weather. (b) The process of imaging in hazy weather.

In the next section, we present a novel color attenuation prior which is useful for restoring the depth information from a single hazy image directly.

III. COLOR ATTENUATION PRIOR

To detect or remove the haze from a single image is a challenging task in computer vision, because little information about the scene structure is available. In spite of this, the human brain can quickly identify the hazy area of a natural scene without any additional information. This inspired us to conduct a large number of experiments on various hazy images to find the statistics and seek a new prior for single image dehazing. Interestingly, we find that the brightness and the saturation of pixels in a hazy image vary sharply along with the change of the haze concentration.

Figure 2 gives an example with a natural scene to show how the brightness and the saturation of pixels vary within a hazy image. As illustrated in Figure 2(d), in a haze-free region, the saturation of the scene is pretty high, the brightness is moderate and the difference between the brightness and the saturation is close to zero. But it is observed from Figure 2(c) that the saturation of the patch decreases sharply while the color of the scene fades under the influence of the haze, and the brightness increases at the same time, producing a high value of the difference. Furthermore, Figure 2(b) shows that in a dense-haze region, it is more difficult for us to recognize the inherent color of the scene, and the difference is even higher than that in Figure 2(c). According to this observation, the three properties (the brightness, the saturation and the difference) are prone to vary regularly in a single hazy image.

Fig. 2. The concentration of the haze is positively correlated with the difference between the brightness and the saturation. (a) A hazy image. (b) The close-up patch of a dense-haze region and its histogram. (c) The close-up patch of a moderately hazy region and its histogram. (d) The close-up patch of a haze-free region and its histogram.

Is this coincidence, or is there a fundamental reason behind it? To answer this question, we first review the process of imaging. Figure 3 illustrates the imaging process. In the haze-free condition, the scene element reflects the energy that comes from the illumination source (e.g., direct sunlight, diffuse skylight and light reflected by the ground), and little energy is lost before it reaches the imaging system. The imaging system collects the incoming energy reflected from the scene element and focuses it onto the image plane. Without the influence of the haze, outdoor images usually have vivid colors (see Figure 3(a)). In hazy weather, in contrast, the situation becomes more complex (see Figure 3(b)). There are two mechanisms (the direct attenuation and the airlight) in imaging under hazy weather [23]. On one hand, the direct attenuation caused by the reduction in reflected energy leads to low intensity of the brightness. To understand this, we review the atmospheric scattering model. The term J(x)t(x) in Equation (1) describes the direct attenuation. It reveals the fact that the intensity of the pixels within the image will decrease in a multiplicative manner, so the brightness tends to decrease under the influence of the direct attenuation. On the other hand, the white or gray airlight, which is formed by the scattering of the environmental illumination, enhances the brightness and reduces the saturation. We can also explain this by the atmospheric scattering model. The rightmost term A(1 − t(x)) in Equation (1) represents the effect of the airlight. It can be deduced from this term that the effect of the white or gray airlight on the observed values is additive. Thus, caused by the airlight, the brightness is increased while the saturation is decreased. Since the airlight plays a more important role in most cases, hazy regions in the image are characterized by high brightness and low saturation.
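As a rough illustration of this observation (our own sketch, not part of the original paper), the per-pixel difference between brightness and saturation that the next paragraph exploits can be read directly off the HSV representation of a hazy image; the function name and the use of OpenCV are assumptions.

import cv2
import numpy as np

def brightness_saturation_difference(bgr):
    # Return the v - s map that roughly tracks haze concentration (cf. Fig. 4).
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float64) / 255.0
    s, v = hsv[..., 1], hsv[..., 2]    # saturation and value (brightness) channels
    return v - s                       # larger values suggest denser haze

# usage (assumed file name): diff = brightness_saturation_difference(cv2.imread("hazy.png"))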


What's more, the denser the haze is, the stronger the influence of the airlight would be. This allows us to utilize the difference between the brightness and the saturation to estimate the concentration of the haze. In Figure 4, we show that the difference increases along with the concentration of the haze in a hazy image, as we expected.

Fig. 4. Difference between brightness and saturation increases along with the concentration of the haze. (a) A hazy image. (b) Difference between brightness and saturation.

Since the concentration of the haze increases along with the change of the scene depth in general, we can make the assumption that the depth of the scene is positively correlated with the concentration of the haze, and we have:

d(x) ∝ c(x) ∝ v(x) − s(x),    (7)

where d is the scene depth, c is the concentration of the haze, v is the brightness of the scene and s is the saturation. We regard this statistic as the color attenuation prior. Figure 5 gives a geometric description of the color attenuation prior through the HSV color model. Figure 5(a) is the HSV color model, and Figures 5(b-d) correspond to the near, moderate-distance and far scene depth conditions, respectively. The vector I indicates the hazy image, passing through the origin, and we consider the projection of the vector I onto the horizontal plane. Setting the angle between vector I and its projection as α, according to the HSV color model, when α varies between 0 and 90 degrees, the higher the value of α, the higher the value of tan α, which indicates a greater difference between the component of I in the direction of V and the component of I in the direction of S. As the depth increases, the value v increases and the saturation s decreases, and therefore α increases. In other words, the angle α is positively correlated with the depth.

Fig. 5. The geometric description of the color attenuation prior. (a) The HSV color model. (b) The near scene depth condition. (c) The moderate-distance condition. (d) The far scene depth condition.

It is worth pointing out that Equation (7) is just an intuitive result of the observation and cannot be an accurate expression of the links among d, v and s. We will find a way to create a more robust expression in the following sections.

IV. SCENE DEPTH RESTORATION

A. The Linear Model Definition

As the difference between the brightness and the saturation can approximately represent the concentration of the haze, we can create a linear model, i.e., a more accurate expression, as follows:

d(x) = θ0 + θ1v(x) + θ2s(x) + ε(x),    (8)

where x is the position within the image, d is the scene depth, v is the brightness component of the hazy image, s is the saturation component, θ0, θ1 and θ2 are the unknown linear coefficients, ε(x) is a random variable representing the random error of the model, and ε can be regarded as a random image. We use a Gaussian density for ε with zero mean and variance σ² (i.e., ε(x) ~ N(0, σ²)). According to the property of the Gaussian distribution, we have:

d(x) ~ p(d(x) | x, θ0, θ1, θ2, σ²) = N(θ0 + θ1v + θ2s, σ²).    (9)

One of the most important advantages of this model is that it has the edge-preserving property. To illustrate this, we calculate the gradient of d in Equation (8) and obtain:

∇d = θ1∇v + θ2∇s + ∇ε.    (10)

Since σ can never be too large in practice, the value of ε(x) tends to be very low and close to zero. In this case, the value of ∇ε is low enough to be ignored. A 600×450 random image ε with σ = 0.05 and its corresponding gradient image ∇ε are


shown in Figure 6(e) and Figure 6(d), respectively. As can be seen, both the gradient image ∇ε and the random image ε are very dark. It turns out that the edge distribution of d is independent of ε given a small σ. In addition, since v and s are actually the two single-channel images (the value channel and the saturation channel of the HSV color space) into which the hazy image I splits, Equation (10) ensures that d has an edge only if I has an edge. We give an example to illustrate this in Figure 6. Figure 6(a) is the hazy image. Figure 6(b) shows the edge distribution of the hazy image. Figure 6(c) shows the Sobel image ∇d = θ1∇v + θ2∇s + ∇ε, where θ1 is simply set to 1.0, θ2 is set to -1.0, and ε is a random image as mentioned. As we can see, Figure 6(b) is similar to Figure 6(c), which indicates that I and d have similar edge distributions. This further ensures that the depth information can be well recovered even near the depth discontinuities in the scene. The linear model works well, as we will show later.

Fig. 6. Illustration of the edge-preserving property of the linear model. (a) The hazy image. (b) The Sobel image of (a). (c) The Sobel image ∇d = ∇v − ∇s + ∇ε. (d) The Sobel image of (e). (e) The random image ε with σ = 0.05.

In the following sections, we use a simple and efficient supervised learning method to determine the coefficients θ0, θ1, θ2 and the variance σ².

B. Training Data Collection

In order to learn the coefficients θ0, θ1 and θ2 accurately, training data are necessary. In our case, a training sample consists of a hazy image and its corresponding ground truth depth map. Unfortunately, the depth map is very difficult to obtain due to the fact that there is no reliable means to measure the depths in outdoor scenes. Current depth cameras such as Kinect are not able to acquire accurate depth information.

Inspired by Tang et al.'s method for preparing the training data [51], we collect haze-free images from Google Images and Flickr and use them to produce the synthetic depth maps and the corresponding hazy images for obtaining enough training samples. The process of generating the training samples is illustrated in Figure 7. Firstly, for each haze-free image, we generate a random depth map of the same size. The values of the pixels within the synthetic depth map are drawn from the standard uniform distribution on the open interval (0, 1). Secondly, we generate the random atmospheric light A = (k, k, k), where the value of k is between 0.85 and 1.0. Finally, we generate the hazy image I with the random depth map d and the random atmospheric light A according to Equation (1) and Equation (2). In our case, 500 haze-free images are used for generating the training samples (500 random depth maps and 500 synthetic hazy images).

Fig. 7. The process of generating the training samples with the haze-free images. (a) The haze-free images. (b) The generated random depth maps. (c) The generated hazy images.
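A minimal sketch of this data-generation step is shown below. It is our own illustration that simply reuses Equations (1) and (2); the function name and the fixed random seed are assumptions, and J is expected as a float RGB image in [0, 1].

import numpy as np

rng = np.random.default_rng(0)

def make_training_sample(J, beta=1.0):
    # Turn one haze-free image J into a (hazy image, ground-truth depth) pair.
    h, w = J.shape[:2]
    d = rng.uniform(0.0, 1.0, size=(h, w))    # random depth map drawn on (0, 1)
    k = rng.uniform(0.85, 1.0)                # random atmospheric light A = (k, k, k)
    t = np.exp(-beta * d)[..., None]          # Eq. (2)
    I = J * t + k * (1.0 - t)                 # Eq. (1)
    return I, d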


C. Learning Strategy

What we are interested in is the joint conditional probability:

L = p(d(x1), …, d(xn) | x1, …, xn, θ0, θ1, θ2, σ²),    (11)

where n is the total number of pixels within the training hazy images, d(xn) is the depth of the nth scene point, and L is the likelihood. Assuming that the random error at each scene point is independent (i.e., p(ε1, …, εn) = Π_{i=1,…,n} p(εi)), we can rewrite Equation (11) as:

L = Π_{i=1}^{n} p(d(xi) | xi, θ0, θ1, θ2, σ²).    (12)

According to Equation (9) and Equation (12), we have:

L = Π_{i=1}^{n} (1/(√(2π)σ)) exp(−(dgi − (θ0 + θ1v(xi) + θ2s(xi)))² / (2σ²)),    (13)

where dgi represents the ground truth depth of the ith scene point. So the problem is to find the optimal values of θ0, θ1, θ2 and σ that maximize L. For convenience, instead of maximizing the likelihood directly, we maximize the natural logarithm of the likelihood ln L. Therefore, the problem can be expressed as follows:

arg max_{θ0,θ1,θ2,σ} ln L = arg max_{θ0,θ1,θ2,σ} Σ_{i=1}^{n} ln[(1/(√(2π)σ)) exp(−(dgi − (θ0 + θ1v(xi) + θ2s(xi)))² / (2σ²))].    (14)

To solve the problem, we first calculate the partial derivative of ln L with respect to σ and set it equal to zero:

∂lnL/∂σ = −n/σ + (1/σ³) Σ_{i=1}^{n} (dgi − (θ0 + θ1v(xi) + θ2s(xi)))² = 0.    (15)

According to Equation (15), the maximum likelihood estimate of the variance σ² is:

σ² = (1/n) Σ_{i=1}^{n} (dgi − (θ0 + θ1v(xi) + θ2s(xi)))².    (16)

As for the linear coefficients θ0, θ1 and θ2, we use the gradient descent algorithm to estimate their values. By taking the partial derivatives of ln L with respect to θ0, θ1 and θ2 respectively, we obtain the following expressions:

∂lnL/∂θ0 = (1/σ²) Σ_{i=1}^{n} (dgi − (θ0 + θ1v(xi) + θ2s(xi))),    (17)

∂lnL/∂θ1 = (1/σ²) Σ_{i=1}^{n} v(xi)(dgi − (θ0 + θ1v(xi) + θ2s(xi))),    (18)

∂lnL/∂θ2 = (1/σ²) Σ_{i=1}^{n} s(xi)(dgi − (θ0 + θ1v(xi) + θ2s(xi))).    (19)

The expression for updating the linear coefficients can be concisely written as:

θi := θi + ∂lnL/∂θi,  i ∈ {0, 1, 2}.    (20)

It is worth noting that the expression above is used for iterating dynamically, and the notation := does not express mathematical equality, but means that the value of θi on the left is set to the value of the expression on the right. The procedure for learning the linear coefficients θ0, θ1, θ2 and the variance σ² is shown in Algorithm 1.

Algorithm 1 Parameters Estimation
Input: the training brightness vector v, the training saturation vector s, the training depth vector d, and the number of iterations t
Output: linear coefficients θ0, θ1, θ2 and the variance σ²
Auxiliary functions: n = size(in) returns the length of a vector; out = square(in) returns the square of a scalar
Begin
1: n = size(v);
2: θ0 = 0; θ1 = 1; θ2 = -1;
3: for iteration from 1 to t do
4:   sum = 0; wSum = 0; vSum = 0; sSum = 0;
5:   for index from 1 to n do
6:     temp = d[index] - θ0 - θ1 * v[index] - θ2 * s[index];
7:     wSum = wSum + temp;
8:     vSum = vSum + v[index] * temp;
9:     sSum = sSum + s[index] * temp;
10:    sum = sum + square(temp);
11:  end for
12:  σ² = sum / n;
13:  θ0 = θ0 + wSum; θ1 = θ1 + vSum; θ2 = θ2 + sSum;
14: end for
End

We used 500 training samples containing 120 million scene points to train our linear model. There are 517 epochs in our case, and the best learning result is θ0 = 0.121779, θ1 = 0.959710, θ2 = -0.780245 and σ = 0.041337. Once the values of the coefficients have been determined, they can be used for any single hazy image. These parameters will be used for restoring the scene depths of the hazy images in this paper.
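For readers who prefer a runnable form, the following NumPy sketch mirrors Algorithm 1 (our own translation, not the authors' code). The small step size lr is our addition to keep the ascent on ln L numerically stable for large training sets; Algorithm 1 itself adds the raw sums.

import numpy as np

def fit_linear_depth_model(v, s, d, iterations=517, lr=1e-7):
    # v, s, d: 1-D arrays of training brightness, saturation and ground-truth depth.
    theta0, theta1, theta2 = 0.0, 1.0, -1.0     # same initialization as Algorithm 1
    sigma2 = 0.0
    for _ in range(iterations):
        residual = d - theta0 - theta1 * v - theta2 * s
        sigma2 = np.mean(residual ** 2)         # Eq. (16)
        # Updates proportional to the gradients of Eqs. (17)-(19).
        theta0 += lr * residual.sum()
        theta1 += lr * (v * residual).sum()
        theta2 += lr * (s * residual).sum()
    return theta0, theta1, theta2, sigma2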
Fig. 8. Refinement of the depth map. (a) The hazy image. (b) The raw depth map. (c) The depth map with scale r=15. (d) The refined depth map.

D. Estimation of the Depth Information

As the relationship among the scene depth d, the brightness v and the saturation s has been established and the coefficients have been estimated, we can restore the depth map of a given input hazy image according to Equation (8). However, this model may fail to work in some particular situations. For instance, the white objects in an image usually have high values of the brightness and low values of the saturation. Therefore, the proposed model tends to consider the scene objects with white color as being distant. Unfortunately, this misclassification results in an inaccurate estimation of the depth in some cases. As shown in Figure 8, the white geese in the first image are regions that the model can hardly handle, and these regions are wrongly estimated with high depth values in the depth map (see Figure 8(b)).

To overcome this problem, we need to consider each pixel in the neighborhood. Based on the assumption that the scene depth is locally constant, we process the raw depth map by:

dr(x) = min_{y ∈ Ωr(x)} d(y),    (21)

where Ωr(x) is an r × r neighborhood centered at x, and dr is the depth map with scale r. As shown in Figure 8(c), the new depth map d15 can well handle the geese regions. However, it is also obvious that blocking artifacts appear in the image. To refine the depth map, we use guided image filtering [43] to smooth the image. Figure 8(d) shows the final restored depth map of the hazy image. As can be seen, the blocking artifacts are suppressed effectively.

In order to check the validity of the assumption, we collected a large database of outdoor hazy images from several well-known photo websites (e.g., Google Images, Photosig,


Picasaweb and Flickr) and computed the scene depth map of each hazy image with its brightness and saturation components according to Equation (8) and Equation (21). Some of the results are shown in Figure 9. Figure 9(a) displays several outdoor hazy images, Figure 9(b) shows the corresponding estimated depth maps, and Figure 9(c) gives example haze-free images and their estimated depth maps. As can be seen, the restored depth maps have darker color in haze-free regions while having lighter color in dense-haze regions, as expected. With the estimated depth map, the task of dehazing is no longer difficult.

Fig. 9. Example images and the calculated depth maps. (a) Outdoor hazy images. (b) The corresponding calculated depth maps. (c) Haze-free images and their calculated depth maps.
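A compact sketch of the whole depth-restoration pipeline of this section is given below. It is our own illustration: the guided-filter helper is a simple gray-guidance variant in the spirit of [43] rather than the exact implementation, and the function and parameter names are assumptions.

import cv2
import numpy as np
from scipy.ndimage import minimum_filter, uniform_filter

def guided_filter(guide, src, radius=15, eps=1e-3):
    # Simple gray-guidance guided image filter (after He et al. [43]).
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    a = (uniform_filter(guide * src, size) - mean_g * mean_s) / \
        (uniform_filter(guide * guide, size) - mean_g * mean_g + eps)
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def estimate_depth(bgr, theta=(0.121779, 0.959710, -0.780245), r=15):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float64) / 255.0
    s, v = hsv[..., 1], hsv[..., 2]
    d_raw = theta[0] + theta[1] * v + theta[2] * s      # Eq. (8), without the error term
    d_min = minimum_filter(d_raw, size=r)               # Eq. (21): local minimum in an r x r window
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float64) / 255.0
    return guided_filter(gray, d_min, radius=r)         # smooth away the blocking artifacts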
V. SCENE RADIANCE RECOVERY

A. Estimation of the Atmospheric Light

We have explained the main idea of estimating the atmospheric light in Section II. In this section, we describe the method in more detail. As the depth map of the input hazy image has been recovered, the distribution of the scene depth is known. Figure 10(a) shows the estimated depth map of a hazy image. Bright regions in the map stand for distant places. According to Equation (6), we pick the top 0.1 percent brightest pixels in the depth map, and select the pixel with the highest intensity in the corresponding hazy image I among these brightest pixels as the atmospheric light A (see Figure 10(b) and Figure 10(c)).

Fig. 10. Estimation of the atmospheric light. (a) Our recovered depth map and the brightest region. (b) Input hazy image. (c) The patch from which our method obtains the atmospheric light.
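In code, this selection rule can be sketched as follows (our illustration; names are assumptions, I is the hazy image as a float array and depth is the refined depth map).

import numpy as np

def estimate_atmospheric_light(I, depth, percent=0.001):
    # Pick A from the hazy image I among the top 0.1% deepest pixels (cf. Eq. (6)).
    flat_depth = depth.ravel()
    k = max(1, int(percent * flat_depth.size))
    candidates = np.argpartition(flat_depth, -k)[-k:]     # indices of the k largest depths
    pixels = I.reshape(-1, I.shape[-1])
    brightest = candidates[np.argmax(pixels[candidates].sum(axis=1))]
    return pixels[brightest]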
B. Scene Radiance Recovery

Now that the depth of the scene d and the atmospheric light A are known, we can estimate the medium transmission t easily according to Equation (2) and recover the scene radiance J in Equation (1). For convenience, we rewrite Equation (1) as follows:

J(x) = (I(x) − A)/t(x) + A = (I(x) − A)/e^(−βd(x)) + A.    (22)

To avoid producing too much noise, we restrict the value of the transmission t(x) to between 0.1 and 0.9. So the final function used for restoring the scene radiance J in the proposed method can be expressed by:

J(x) = (I(x) − A) / min{max{e^(−βd(x)), 0.1}, 0.9} + A,    (23)

where J is the haze-free image we want. Figures 12-13 show some final dehazing results of the proposed method.

Note that the scattering coefficient β, which can be regarded as a constant [55] in homogeneous regions, represents the ability of a unit volume of atmosphere to scatter light in all directions. In other words, β determines the intensity of dehazing indirectly. We illustrate this in Figure 11. Figure 11(e-g) shows the restored transmission maps with different β, and Figure 11(b-d) shows the corresponding dehazing results. As can be seen, on the one hand, a small β leads to a small transmission, and the corresponding result remains hazy in the distant regions (see Figure 11(b) and Figure 11(e)). On the other hand, a too large β may result in overestimation of the transmission (see Figure 11(d) and Figure 11(g)). Therefore, a moderate β is required when dealing with images with dense-haze regions. In most cases, β = 1.0 is more than enough.
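Putting Equation (23) into code, with the same 0.1-0.9 clamp on the transmission, might look as follows (a sketch under our own naming assumptions).

import numpy as np

def recover_radiance(I, depth, A, beta=1.0):
    # Restore the scene radiance J from the hazy image I per Eqs. (2) and (23).
    t = np.clip(np.exp(-beta * depth), 0.1, 0.9)[..., None]   # clamped transmission
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)    # keep the result in the valid intensity range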


Fig. 11. Results with a different scattering coefficient β. (a) The hazy image. (b) The final result with β =0.5. (c) The final result with β = 0.8. (d) The final result with
β = 1.2. (e) The restored transmission map with β = 0.5. (f) The restored transmission map with β = 0.8. (g) The restored transmission map with β = 1.2.



Fig. 12. Qualitative comparison of different methods on real-world images. (a) The hazy images. (b) Tarel et al.’s results. (c) Nishino et al.’s results. (d) He et al.’s
results. (e) Meng et al.’s results. (f) Our results.
VI. EXPERIMENTS

In order to verify the effectiveness of the proposed dehazing method, we test it on various hazy images and compare it with He et al.'s [29], Tarel et al.'s [46], Nishino et al.'s [49] and Meng et al.'s [50] methods. All the algorithms are implemented in the Matlab R2013a environment on a P4-3.3GHz PC with 6GB RAM. The parameters used in the proposed method are initialized as follows: r = 15, β = 1.0, θ0 = 0.121779, θ1 = 0.959710, θ2 = -0.780245 and σ = 0.041337. For fair comparison, the parameters used in the four popular dehazing methods are set to be optimal according to [29], [46], [49] and [50].
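For reference, these published settings can be collected in one place and wired into the illustrative sketches given in Sections II-V above; the function names below are our own hypothetical ones, not an official implementation.

# Settings reported by the authors.
PARAMS = dict(r=15, beta=1.0, theta=(0.121779, 0.959710, -0.780245), sigma=0.041337)

# Hypothetical composition of the earlier sketches (names are ours):
# depth = estimate_depth(bgr, theta=PARAMS["theta"], r=PARAMS["r"])
# A = estimate_atmospheric_light(bgr.astype(np.float64) / 255.0, depth)
# J = recover_radiance(bgr.astype(np.float64) / 255.0, depth, A, beta=PARAMS["beta"])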
A. Qualitative Comparison on Real-World Images

As all the dehazing algorithms are able to get really good results by dehazing general outdoor images, it is difficult to rank them visually. In order to compare them, we carry out the algorithms on some challenging images with large white or gray regions, since most existing dehazing algorithms are not sensitive to the white color.



Fig. 13. Results on stereo images where the ground truth solutions are known. (a) The hazy images. (b) Tarel et al.’s results. (c) Nishino et al.’s results. (d) He et al.’s
results. (e) Meng et al.’s results. (f) Our results. (g) Ground truth.

Figure 12 shows the qualitative comparison of results with the four state-of-the-art dehazing algorithms [29, 46, 49, 50] on challenging real-world images. Figure 12(a) depicts the hazy images to be dehazed. Figure 12(b-e) shows the results of Tarel et al. [46], Nishino et al. [49], He et al. [29], and Meng et al. [50], respectively. The results of the proposed algorithm are given in Figure 12(f). As shown in Figure 12(b), most of the haze is removed in Tarel's results, and the details of the scenes and objects are well restored. However, the results significantly suffer from over-enhancement (for instance, the sky region of the first image is much darker than it should be, and the faces of the women in the last image become brown). This is because Tarel's algorithm is based on He et al.'s algorithm, which has an inherent problem of overestimating the transmission as discussed in [29]. Moreover, halo artifacts appear near the discontinuities in Figure 12(b) (see the mountain in the first image and the leaves of the plant in the second image), because the "median of the median filter" used in [46] is not an edge-preserving filter. The results of Nishino et al. have a similar problem, as Nishino et al.'s algorithm tends to over-enhance the local contrast of the image. As we can observe in Figure 12(c), the restored images are oversaturated and distorted, especially in the third image (the color of the shirt is changed to dark).

In contrast, the results of He et al. are much better visually (see Figure 12(d)). The dense haze in the distance can be well removed, and there are no halo artifacts. Nevertheless, color distortion still appears in the regions with white objects such as the shirt in the third image. The reason can be explained as follows: As the method of recovering the transmission used in [29] is based on the dark channel prior, the accuracy of the estimation strongly depends on the validity of the dark channel prior. Unfortunately, this prior is invalid when the scene brightness is similar to the atmospheric light, and the estimated transmission is thus not reliable enough in some cases. In addition, the atmospheric light is also an important factor for calculating the transmission in [29]. Therefore, in order to obtain the correct transmission, an accurate estimation of the atmospheric light is required. However, the approach for estimating the atmospheric light proposed by He et al. has its limitation, and the estimated result is an approximate value as discussed in [29]. For this reason, He et al.'s algorithm is prone to overestimating the transmission.

Meng et al.'s results are close to those obtained by He et al., as displayed in Figure 12(e). This is due to the fact that, although Meng et al. improve the DCP approach [29] by adding a boundary constraint, it does not address the problem of ambiguity between the image color and haze.

Compared with the results of the four algorithms, our results are free from oversaturation. As displayed in Figure 12(f), the sky and the clouds in the images are clear and the details of the mountains are enhanced moderately.

B. Qualitative Comparison on Synthetic Images

In Figure 13, the five algorithms including the proposed one are tested on stereo images where the ground truth images are known. Figure 13(a) shows the hazy images which are synthesized from the haze-free images with known depth maps. The results of the five algorithms are shown in Figure 13(b-f). Figure 13(g) gives the ground truth images for comparison. These haze-free images and their corresponding ground truth depth maps are taken from the Middlebury stereo datasets [57-61]. It is obvious that Tarel et al.'s results are quite different from the ground truth images as the results are much darker (see the toy with red hair in the dolls image and the books in the books image in Figure 13(b)). By observing the images in Figure 13(c), we can find that Nishino et al.'s results have a similar problem. For example, the color of the toys in the dolls image is changed into yellow, and the color of the background in the moebius image is darker. He et al.'s results are more similar to the ground truth images but still show some inaccuracies (see Figure 13(d)). Note that the background in the books image is darker than it should be. Similarly, Meng et al.'s results also suffer from over-enhancement, as shown in Figure 13(e). It is obvious that the color of the mask in the cones image is far from that in Figure 13(g).


TABLE I
TIME CONSUMPTION COMPARISON WITH HE [29], TAREL [46], NISHINO [49] AND MENG [50].

Image Resolution | He et al. [29] | Tarel et al. [46] | Nishino et al. [49] | Meng et al. [50] | Our method
441 × 450    |  9.866s |   4.141s |  91.661s |  6.171s |  1.420s
600 × 450    | 12.228s |   8.229s | 104.670s |  4.468s |  2.219s
1024 × 768   | 36.896s |  69.294s | 317.386s | 10.231s |  4.278s
1536 × 1024  | 73.571s | 218.033s | 649.722s | 17.334s |  9.636s
1803 × 1080  | 90.717s | 351.139s | 861.360s | 21.567s | 12.314s

Fig. 14. Mean squared error (MSE) of different algorithms.

Fig. 15. Structural similarity (SSIM) of different algorithms.

In contrast, our results do not have the problem of oversaturation and maintain the original colors of the objects (see Figure 13(f)).

C. Quantitative Comparison

In order to quantitatively assess and rate the algorithms, we calculate the mean squared error (MSE) and the structural similarity (SSIM) [62] of the results in Figure 13 for comparison. The MSE of each result can be calculated by the following equation:

e = (1/(3N)) Σ_{c ∈ {r,g,b}} ‖Jc − Gc‖²,    (24)

where J is the dehazed image, G is the ground truth image, Jc represents a color channel of J, Gc represents a color channel of G, N is the number of pixels within the image G, and e is the MSE measuring the difference between the dehazed image J and the ground truth image G. Note that J and G have the same size since they both correspond to the hazy image I. Given J and G, a low MSE indicates that the dehazed result is satisfying, while a high MSE means that the dehazing effect is not acceptable.
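The MSE of Equation (24) is straightforward to compute; a short sketch with assumed variable names (J and G as float RGB arrays of the same size) is given below. SSIM can be obtained from standard implementations such as scikit-image.

import numpy as np

def mse(J, G):
    # Eq. (24): mean squared error between dehazed image J and ground truth G.
    N = G.shape[0] * G.shape[1]
    return float(np.sum((J - G) ** 2) / (3.0 * N))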
We further show the MSEs of the results produced by the different algorithms in Figure 14. As can be seen, Nishino et al.'s results produce the highest MSEs overall. The high MSEs are mainly because of the over-enhancement as mentioned earlier. Tarel et al.'s results outperform Nishino et al.'s in the first three images while they perform worse in the books image. Meng et al.'s results rank third in terms of the total performance. The total MSE of He et al.'s results is 0.0167, which is less than half of those of the other three. In contrast, our method achieves the lowest MSEs in all cases.

The structural similarity (SSIM) image quality assessment index [62] is introduced to evaluate the ability of the algorithms to preserve structural information. A high SSIM represents high similarity between the dehazed image and the ground truth image, while a low SSIM conveys the opposite meaning. Figure 15 shows the SSIM of the results in Figure 13. The SSIMs of Nishino et al.'s results are all lower than 0.7, indicating that much structural information in the images has been lost. Tarel et al.'s SSIMs are similar to those of Meng et al., but neither of them reaches 0.8. It is obvious that the SSIMs of He et al. are much higher than those of the other three in the four images. Our results achieve the highest SSIMs, outperforming the four algorithms.

D. Complexity

Given an image of size m×n and radius r, the complexity of the proposed dehazing algorithm is only O(m×n×r) once the linear coefficients θ0, θ1, θ2 in Equation (8) are obtained. In Table I, we give the time consumption comparison with He et al. [29] (accelerated by the guided image filtering [43]), Tarel et al. [46], Nishino et al. [49] and Meng et al. [50]. As we can see, our approach is much faster than the others and achieves efficient processing even when the given hazy image is large. The high efficiency of the proposed approach mainly benefits from the fact that the linear model based on the color attenuation prior simplifies the estimation of the scene depth and the transmission.


VII. DISCUSSIONS AND CONCLUSION

In this paper, we have proposed a novel linear color attenuation prior, based on the difference between the brightness and the saturation of the pixels within the hazy image. By creating a linear model for the scene depth of the hazy image with this simple but powerful prior and learning the parameters of the model using a supervised learning method, the depth information can be well recovered. By means of the depth map obtained by the proposed method, the scene radiance of the hazy image can be recovered easily. Experimental results show that the proposed approach achieves dramatically high efficiency and outstanding dehazing effects as well.

Although we have found a way to model the scene depth with the brightness and the saturation of the hazy image, there is still a common problem to be solved. That is, the scattering coefficient β in the atmospheric scattering model cannot be regarded as a constant in inhomogeneous atmosphere conditions [55]. For example, a region which is kilometers away from the observer should have a very low value of β. Therefore, the dehazing algorithms which are based on the atmospheric scattering model are prone to underestimating the transmission in some cases. As almost all the existing single image dehazing algorithms are based on the constant-β assumption, a more flexible model is highly desired. To overcome this challenge, some more advanced physical models [63] can be taken into account. We leave this problem for our future research.

ACKNOWLEDGMENTS

This study has been financed partially by the Projects of National Natural Science Foundation of China (Grant No. 61303166, 50635030, 60932001, 61072031, 61375041), the National Basic Research (973) Program of China (Sub-grant 6 of Grant No. 2010CB732606) and the Knowledge Innovation Program of the Chinese Academy of Sciences, and was also supported by the grants of Introduced Innovative R&D Team of Guangdong Province: Image-Guided Therapy Technology.

REFERENCES

[1] G. A. Woodell, D. J. Jobson, N. Rahman and G. D. Hines, "Advanced image processing of aerial imagery," Vis. Inf. Process. XV, 6246 (2006) 62460E.
[2] L. Shao, L. Liu and X. Li, "Feature Learning for Image Classification via Multiobjective Genetic Programming," IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 7, pp. 1359-1371, Jul. 2014.
[3] F. Zhu and L. Shao, "Weakly-Supervised Cross-Domain Dictionary Learning for Visual Recognition," International Journal of Computer Vision, vol. 109, no. 1-2, pp. 42-59, Aug. 2014.
[4] Y. Luo, T. Liu, D. Tao, and C. Xu, "Decomposition based Transfer Distance Metric Learning for Image Classification," IEEE Transactions on Image Processing, vol. 23, no. 9, pp. 3789-3801, September 2014.
[5] D. Tao, X. Li, X. Wu, and S. J. Maybank, "Geometric Mean for Subspace Selection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 260-274, February 2009.
[6] J. Han, X. Ji, X. Hu, D. Zhu, K. Li, X. Jiang, G. Cui, L. Guo, and T. Liu, "Representing and Retrieving Video Shots in Human-Centric Brain Imaging Space," IEEE Transactions on Image Processing, vol. 22, no. 7, pp. 2723-2736, 2013.
[7] J. Han, K. Ngan, M. Li, and H. Zhang, "A memory learning framework for effective image retrieval," IEEE Transactions on Image Processing, vol. 14, no. 4, pp. 511-524, 2005.
[8] D. Tao, X. Tang, X. Li, and X. Wu, "Asymmetric Bagging and Random Subspace for Support Vector Machines-based Relevance Feedback in Image Retrieval," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 7, pp. 1088-1099, July 2006.
[9] J. Han, D. Zhang, G. Cheng, L. Guo, and J. Ren, "Object Detection in Optical Remote Sensing Images Based on Weakly Supervised Learning and High-Level Feature Learning," IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 6, pp. 3325-3337, Jun. 2015.
[10] G. Cheng, J. Han, L. Guo, X. Qian, P. Zhou, X. Yao, and X. Hu, "Object detection in remote sensing images using a discriminatively trained mixture model," ISPRS Journal of Photogrammetry and Remote Sensing, 11, 2013.
[11] J. Han, P. Zhou, D. Zhang, G. Cheng, L. Guo, Z. Liu, S. Bu, and J. Wu, "Efficient, simultaneous detection of multi-class geospatial targets based on visual saliency modeling and discriminative learning of sparse coding," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 89, pp. 37-48, 2014.
[12] L. Liu and L. Shao, "Learning Discriminative Representations from RGB-D Video Data," in Proc. International Joint Conference on Artificial Intelligence, Beijing, China, 2013.
[13] D. Tao, X. Li, X. Wu, and S. J. Maybank, "General Tensor Discriminant Analysis and Gabor Features for Gait Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 10, pp. 1700-1715, October 2007.
[14] Z. Zhang and D. Tao, "Slow Feature Analysis for Human Action Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 3, pp. 436-450, March 2012.
[15] T. K. Kim, J. K. Paik and B. S. Kang, "Contrast enhancement system using spatially adaptive histogram equalization with temporal filtering," IEEE Transactions on Consumer Electronics, vol. 44, no. 1, pp. 82-87, 1998.
[16] J. A. Stark, "Adaptive image contrast enhancement using generalizations of histogram equalization," IEEE Transactions on Image Processing (TIP), vol. 9, no. 5, pp. 889-896, 2000.
[17] J. Y. Kim, L. S. Kim and S. H. Hwang, "An advanced contrast enhancement using partially overlapped sub-block histogram equalization," IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), vol. 11, no. 4, pp. 475-484, 2001.
[18] Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, "Instant dehazing of images using polarization," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2001.
[19] S. Shwartz, E. Namer, and Y. Y. Schechner, "Blind haze separation," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2, 2006, pp. 1984-1991.
[20] Y. Schechner, S. Narasimhan and S. Nayar, "Polarization-based vision through haze," Applied Optics, vol. 42, no. 3, pp. 511-525, 2003.
[21] S. G. Narasimhan and S. K. Nayar, "Chromatic framework for vision in bad weather," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2000.
[22] S. K. Nayar and S. G. Narasimhan, "Vision in bad weather," in Proc. IEEE International Conference on Computer Vision (ICCV), vol. 2, 1999.

[23] S. G. Narasimhan and S. K. Nayar, "Contrast restoration of weather degraded images," IEEE Trans. Pattern Analysis and Machine Intelligence (TPAMI), vol. 25, no. 6, pp. 713-724, 2003.
[24] S. G. Narasimhan and S. K. Nayar, "Interactive (de)weathering of an image using physical models," in Proc. IEEE Workshop on Color and Photometric Methods in Computer Vision, vol. 6, no. 4, France, 2003.
[25] J. Kopf, B. Neubert, B. Chen, M. Cohen and D. Cohen-Or, "Deep photo: Model-based photograph enhancement and viewing," ACM Transactions on Graphics (TOG), vol. 27, no. 5, pp. 116, 2008.
[26] R. T. Tan, "Visibility in bad weather from a single image," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008, pp. 1-8.
[27] R. Fattal, "Single image dehazing," ACM Transactions on Graphics (TOG), vol. 27, no. 3, pp. 72, 2008.
[28] P. Chavez, "An Improved Dark-Object Subtraction Technique for Atmospheric Scattering Correction of Multispectral Data," Remote Sensing of Environment, vol. 24, pp. 450-479, 1988.
[29] K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," IEEE Trans. Pattern Analysis and Machine Intelligence (TPAMI), vol. 33, no. 12, pp. 2341-2353, 2011.
[30] S. C. Pei and T. Y. Lee, "Nighttime haze removal using color transfer pre-processing and dark channel prior," in Proc. IEEE Conference on Image Processing (ICIP), 2012, pp. 957-960.
[31] K. B. Gibson, D. T. Vo and T. Q. Nguyen, "An investigation of dehazing effects on image and video coding," IEEE Trans. Image Processing (TIP), vol. 21, no. 2, 2012.
[32] J. Yu, C. Xiao, and D. Li, "Physics-based fast single image fog removal," in Proc. IEEE Conference on Signal Processing (ICSP), 2010.
[33] B. Xie, F. Guo and Z. Cai, "Improved single image dehazing using dark channel prior and multi-scale retinex," in Proc. International Conference on Intelligent Systems Design and Engineering Application, 2010, pp. 848-851.
[34] Q. Zhu, S. Yang, P. A. Heng and X. Li, "An adaptive and effective single image dehazing algorithm based on dark channel prior," in Proc. IEEE Conference on Robotics and Biomimetics (ROBIO), 2013, pp. 1796-1800.
[35] C. Xiao and J. Gan, "Fast image dehazing using guided joint bilateral filter," Visual Computer, vol. 28, no. 6-8, pp. 713-721, 2012.
[36] Y. Xiang, R. R. Sahay and M. S. Kankanhalli, "Hazy image enhancement based on the full-saturation assumption," in Proc. IEEE Conference on Multimedia and Expo Workshops (ICMEW), 2013.
[37] C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," in Proc. IEEE Conference on Computer Vision (ICCV), 1998, pp. 839-846.
[38] S. Paris and F. Durand, "A Fast Approximation of the Bilateral Filter Using a Signal Processing Approach," in Proc. European Conference on Computer Vision (ECCV), 2006.
[39] F. Porikli, "Constant Time O(1) Bilateral Filtering," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008.
[40] Q. Yang, K. Tan and N. Ahuja, "Real-time O(1) bilateral filtering," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009, pp. 557-564.
[41] A. Adams, N. Gelfand, J. Dolson and M. Levoy, "Gaussian KD-Trees for Fast High-Dimensional Filtering," in Proc. ACM SIGGRAPH, 2009, pp. 21:1-21:12.
[42] A. Adams, J. Baek, and M. A. Davis, "Fast High-Dimensional Filtering Using the Permutohedral Lattice," Computer Graphics Forum, vol. 29, no. 2, pp. 753-762, 2010.
[43] K. He, J. Sun and X. Tang, "Guided image filtering," IEEE Trans. Pattern Analysis and Machine Intelligence (TPAMI), vol. 35, no. 6, pp. 1397-1409, 2013.
[44] A. Levin, D. Lischinski, and Y. Weiss, "A closed-form solution to natural image matting," IEEE Trans. Pattern Analysis and Machine Intelligence (TPAMI), vol. 30, no. 2, pp. 228-242, 2008.
[45] J. P. Tarel and N. Hautière, "Fast visibility restoration from a single color or gray level image," in Proc. IEEE Conference on Computer Vision (ICCV), 2009, pp. 2201-2208.
[46] J. P. Tarel, N. Hautière, L. Caraffa, A. Cord, H. Halmaoui and D. Gruyer, "Vision enhancement in homogeneous and heterogeneous fog," IEEE Intelligent Transportation Systems Magazine, vol. 4, no. 2, pp. 6-20, 2012.
[47] C. O. Ancuti, C. Ancuti, C. Hermans and P. Bekaert, "A fast semi-inverse approach to detect and remove the haze from a single image," in Proc. Asian Conference on Computer Vision (ACCV), 2010, pp. 501-514.
[48] L. Kratz and K. Nishino, "Factorizing scene albedo and depth from a single foggy image," in Proc. IEEE Conference on Computer Vision (ICCV), 2009, pp. 1701-1708.
[49] K. Nishino, L. Kratz, and S. Lombardi, "Bayesian defogging," International Journal of Computer Vision (IJCV), vol. 98, no. 3, pp. 263-278, 2012.
[50] G. F. Meng, Y. Wang, J. Y. Duan, S. M. Xiang and C. J. Pan, "Efficient Image Dehazing with Boundary Constraint and Contextual Regularization," in Proc. IEEE Conference on Computer Vision (ICCV), 2013.
[51] K. Tang, J. Yang, and J. Wang, "Investigating Haze-relevant Features in a Learning Framework for Image Dehazing," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
[52] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5-32, 2001.
[53] Q. Zhu, J. Mai and L. Shao, "Single Image Dehazing Using Color Attenuation Prior," in Proc. British Machine Vision Conference (BMVC), Nottingham, UK, 2014.
[54] E. J. McCartney, "Optics of the atmosphere: scattering by molecules and particles," New York: John Wiley and Sons, Inc., 1976.
[55] S. G. Narasimhan and S. K. Nayar, "Vision and the atmosphere," International Journal of Computer Vision (IJCV), vol. 48, no. 3, pp. 233-254, 2002.
[56] S. G. Narasimhan and S. K. Nayar, "Removing weather effects from monochrome images," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2001.
[57] D. Scharstein and R. Szeliski, "A taxonomy and evaluation of dense two-frame stereo correspondence algorithms," International Journal of Computer Vision (IJCV), vol. 47, no. 1-3, pp. 7-42, 2002.
[58] D. Scharstein and R. Szeliski, "High-accuracy stereo depth maps using structured light," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2003.
[59] D. Scharstein and C. Pal, "Learning conditional random fields for stereo," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2007.
[60] H. Hirschmüller and D. Scharstein, "Evaluation of cost functions for stereo matching," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2007.
[61] D. Scharstein, H. Hirschmüller, Y. Kitajima, G. Krathwohl, N. Nesic, X. Wang, and P. Westling, "High-resolution stereo datasets with subpixel-accurate ground truth," in Proc. German Conference on Pattern Recognition (GCPR), 2014.
[62] Z. Wang, A. C. Bovik, H. R. Sheikh and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Trans. Image Processing (TIP), vol. 13, no. 4, pp. 600-612, 2004.
[63] A. J. Preetham, P. Shirley and B. Smits, "A practical analytic model for daylight," in Proc. ACM Special Interest Group for Computer Graphics (SIGGRAPH), 1999, pp. 91-100.

Qingsong Zhu (M'12) received the B.Eng. and M.Sc. degrees in Computer Science from the University of Science and Technology of China (USTC), Hefei, China. He is currently an Associate Professor and Principal Investigator with the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China. His research interests include Computer Vision, Image/Video Signal Processing, Statistical Pattern Recognition, Machine Learning, Brain and Cognitive Sciences, and Image-guided Minimally Invasive Neurosurgery Robotic Systems. He has authored or co-authored over 100 academic papers in well-known journals and conference proceedings, and holds over 50 invention patents. Mr. Zhu was a recipient of the Best Paper Award at the IEEE International Conference on Robotics and Biomimetics in 2013. He was an Area Chair of the International Conference on Intelligent Computing in 2012, and a Session Chair of the International Conference on Acoustics, Speech and Signal Processing and the International Conference on Robotics and Biomimetics, both in 2013.

Jiaming Mai received the B.Eng. degree in Software Engineering from the South China Agricultural University. He is currently a visiting student at the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and is pursuing the M.Eng. degree with the South China Agricultural University. His research interests include Computer Vision, Image Processing, Pattern Recognition and Machine Learning.

Ling Shao (M'09-SM'10) is currently a Full Professor with the Department of Computer Science and Digital Technologies, Northumbria University, Newcastle upon Tyne, U.K., and an Advanced Visiting Fellow with the University of Sheffield, U.K. He has authored or co-authored over 160 academic papers in well-known journals/conferences. His current research interests include computer vision, image/video processing, pattern recognition, and machine learning. Prof. Shao is an Associate Editor of the IEEE TRANSACTIONS ON IMAGE PROCESSING, the IEEE TRANSACTIONS ON CYBERNETICS, and several other journals. He is also a Fellow of the British Computer Society and the Institution of Engineering and Technology.