

Computers and Electrical Engineering 72 (2018) 670–681

Contents lists available at ScienceDirect

Computers and Electrical Engineering


journal homepage: www.elsevier.com/locate/compeleceng

Partial differential equation-based hazy image contrast enhancement

Uche A. Nnolim
Department of Electronic Engineering, University of Nigeria, Nsukka, Enugu, Nigeria

Article info

Article history: Received 16 June 2017; Revised 29 January 2018; Accepted 29 January 2018; Available online 21 February 2018

Keywords: Local-global contrast enhancement; Single image de-hazing; Illumination correction; Single- and multi-scale retinex-based operators; Gradient-guided enhancement; Partial differential equations

Abstract

This study describes a modified partial differential equation (PDE)-based algorithm for contrast enhancement of hazy images. The formulation is supported by similarities between haze and low light conditions. The proposed approach employs multi-scale, local-global enhancement of inverted log reflectance and illumination components. These multiple processes are combined in a suitable PDE framework for improved effectiveness. Furthermore, the de-hazing process is automated via gradient-based optimization, eliminating the manual determination of PDE stopping time. Solutions are also devised for visual halo effect, over-enhancement and colour distortion. Based on subjective and quantitative analysis, the proposed technique consistently outperforms several de-hazing algorithms from the literature.

© 2018 Elsevier Ltd. All rights reserved.

1. Introduction

Weather affects image scenes in the form of haze, resulting in low visibility caused by light scattering and absorption due to particles in the atmosphere [1]. De-hazing techniques are generally grouped as single or multiple image-based methods [1]. Multiple image de-hazing is not feasible in most instances, since it is impossible to obtain all the required information of a particular image scene under varying conditions. Single image-based de-hazing methods are preferred due to their low cost, speed, relative simplicity and effectiveness in the absence of reference image data. Thus, the scope of this work is limited to single image de-hazing models, which can be restoration- or enhancement-based schemes.
The restoration-based methods utilize the hazy image formation model [1] and the earlier works include those by Lee et al. [1]. The most popular restoration-based single image de-hazing method is the dark channel prior (DCP) method proposed by He et al. [2]. Detailed information and an excellent review of DCP-based methods can be found in the work by Lee et al. [1]. Recently, there has been interesting work on using segmentation [3], fusion [4], variational [4] and regularization approaches [3] using sparse priors [5] and other boundary constraints [6] to improve image de-hazing results. Alternative methods include a biological retina-based model [7] and de-hazing using multi-scale convolutional neural networks [8]. However, these methods generally result in images with halos, drastic colour distortion and over-enhancement, in addition to requiring constant adjustment of parameters to obtain good results for different images. Furthermore, some incur extensive run-time due to transmission map refinement using soft matting.

Reviews processed and recommended for publication to the Editor-in-Chief by Area Editor Dr. E. Cabal-Yepez.
E-mail address: uche.nnolim@unn.edu.ng

https://doi.org/10.1016/j.compeleceng.2018.01.041
0045-7906/© 2018 Elsevier Ltd. All rights reserved.

Contrast enhancement is a feature of image de-hazing and some earlier works tackled the de-hazing problem from this perspective [9]. Additionally, some authors used variational contrast enhancement for de-hazing [10], while others utilized histogram equalization, stretching and Retinex-based methods. The Retinex and contrast limited adaptive histogram equalization (CLAHE) are effective due to their multi-scale and localized nature. However, these methods yield images which suffer from problems similar to those of the restoration-based approaches.
The visual shortcomings of restoration and contrast enhancement-based de-hazing approaches are due to the lack of
adaptive and gradual control or regulation of enhancement. Furthermore, several of these algorithms respond poorly to a
wide variety of images. Thus, we incorporate selected enhancement operators into a partial differential equation (PDE)-based
formulation to improve results. The algorithm is updated and improved by introduction and optimization of a reliable image
metric for automated hazy image enhancement.
The rest of the paper is organized as follows: the second section provides key novel contributions and features of the proposed
algorithm. The third section presents relevant background, which forms the basis of the proposed modified algorithm. The
fourth section presents the details and formulation of the proposed algorithm and solutions to encountered problems. The
fifth section presents the results of experiments of the proposed approach and comparisons with established works from
the literature, while the final section contains the conclusions.

2. Key contributions and features of proposed approach

The key contributions and features of the proposed method include:


• PDE-based framework with local-global contrast enhancement of inverted log reflectance component.
• Optimization/maximization of reliable image gradient-based metrics to automatically determine stopping time of PDE.
• Processing image in the hue-saturation-intensity (HSI) colour space and utilization of hue-based metric to automatically
determine saturation channel tuning parameter.
• Schemes for processing hazy images with even and uneven haze density and sky regions.
• Complete absence of all stages of the DCP process.
• Stable PDE solutions for de-hazing using enhancement approaches.

3. Background

In this section, we present relevant background that forms the foundation of the proposed algorithm; this includes illumination correction and partial differential equations.

3.1. Illumination correction

The illumination-reflectance framework is mathematically similar to the hazy image restoration model, based on the observation that both contain multiplicative expressions. Furthermore, the DCP de-hazing algorithm has been employed in the enhancement of dark images and video frames with uneven illumination [11]. This was achieved by inverting the dark images prior to processing. In this inversion, the estimated and refined transmission map resembles the illumination component extracted using the Gaussian surround functions of the Multi-Scale Retinex (MSR). Thus, it follows that illumination correction algorithms could be utilized to process hazy images via a similar inversion operation. This duality is theoretically valid since illumination and haze are both smooth and slowly varying. The general scheme for utilizing illumination correction operators for hazy image contrast enhancement is shown in Fig. 1.
The diagrams in Fig. 2 show the visual similarities between de-hazing and contrast enhancement.
Based on visual results in Fig. 2, conventional de-hazing algorithms do not perform optimally for all dark images by
default. Similarly, illumination correction algorithms are not expected to be optimal for image de-hazing without some
modifications.
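As a concrete illustration of the scheme in Fig. 1, the following sketch (Python with scikit-image is assumed; the paper's own implementation is in MATLAB) applies an arbitrary illumination-correction operator to the inverted intensity channel. CLAHE, the [0, 1] intensity range and the clip limit are stand-ins chosen for illustration, not the paper's final operator chain.

```python
import numpy as np
from skimage import exposure

def dehaze_by_inversion(hazy_intensity, clip_limit=0.01):
    """Fig. 1 scheme: invert the hazy intensity, enhance it as if it were a
    low-light image, then invert back to obtain a de-hazed estimate.
    `hazy_intensity` is a 2-D float array scaled to [0, 1]."""
    inverted = 1.0 - hazy_intensity              # a hazy image resembles a dark image when inverted
    enhanced = exposure.equalize_adapthist(      # any illumination-correction operator works here;
        inverted, clip_limit=clip_limit)         # CLAHE is used purely as an example
    return 1.0 - enhanced                        # invert once more: de-hazed intensity estimate
```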

3.2. Partial differential equation (PDE)-based contrast enhancement

PDE-based image processing approaches have been explored in depth and are suited to a wide range of inverse problems
where initial conditions or priors are unknown. Additionally, their key strengths lie in the ability to allow simultaneous
controlled aggregation of several processes into one continuous flow equation. The PDE-based methods also enable gradual
control of the weighted contribution of each individual process. We also utilize gradient-based metrics to adaptively control
the PDE-based process.

Fig. 1. Generalized hazy image contrast enhancement scheme: input hazy image → invert image → enhancement operator → invert image → output de-hazed image.



Fig. 2. (a) dark channel (b) estimated transmission map (c) refined transmission map (d) inverted hazy image (e) inverted illumination of hazy image, (f)
processed inverted illumination of hazy image.

4. Proposed algorithm

In order to handle hazy images, we invert the input image and process the inverted image with the proposed algorithm. This processed image, which visually resembles the improved transmission map, is inverted once more after processing to obtain the de-hazed image. We define the proposed approach using a logarithmic representation of the illumination/reflectance model with modifications for convenience in notation. We denote the initial red-green-blue (RGB) colour hazy image as URGB(x, y), from which we obtain the HSI version, UHSI(x, y) = RGB2HSI{URGB(x, y)}, and the decomposition of the image in terms of hue, saturation and intensity as {H, S, U} = split{UHSI(x, y)}. We then select the intensity channel, U(x, y), and obtain its maximum pixel intensity value as Umax, from which the inverted intensity image, I(x, y), and its logarithmic equivalent are evaluated as shown in (1) and (2):

I (x, y ) = Umax − U (x, y ) (1)

log [I (x, y )] = log [L(x, y )] + log [R(x, y )] (2)

Substituting the terms for intensity, i = log[I(x, y)], illumination, l = log[L(x, y)], and reflectance, r = log[R(x, y)], leads to the expression i = l + r. The illumination is then estimated using Retinex methods. We subsequently extract the log reflectance from the log illumination using the expression r = i − l. Furthermore, we process the log reflectance, r, using gain offset correction (GOC) [12] for global contrast (as shown in (3)) and local contrast enhancement using CLAHE: rCLAHE = CLAHE(rGOC). In (3), rmin and rmax are the minimum and maximum values of the log reflectance, while D is the number of levels (set to D = 256 for display purposes).
rGOC = GOC(r) = [D − 1] · (r − rmin)/(rmax − rmin)    (3)
With the obtained enhanced log reflectance, we minimize the energy function E{i(x, y, t)} given as:

E{i(x, y, t)} = ∫∫Ω α(f{r(x, y, t)} − i(x, y, t)) dx dy    (4)

In (4), the symbol Ω is the image domain, t is the time parameter, r(x, y, t) and i(x, y, t) are the continuous reflectance and intensity images, while α is the control parameter for the enhancement term. Thus, the PDE governing the contrast enhancement process of the hazy image is given as:

∂i(x, y, t)/∂t = α(f{r(x, y, t)} − i(x, y, t))    (5)

In (5), the term f{r(x, y, t)} = CLAHE(GOC(r(x, y, t))) = rCLAHE. Further realization using the finite difference method (FDM) in the discrete domain yields:

i^(t+1)(x, y) = i^t(x, y) + [α(f{r(x, y, t)} − i(x, y, t))]Δt    (6)


For the expression in Eq. (6), is grey level of pixel (x, y) at time t and t is the time step and after processing,
it (x,y)
I (x, y) ≈ iN (x,y), where iN (x,y) is the image obtained for the last (Nth) iteration. The modified inverted image, I (x, y) and its
maximum value, Imax  are then used to obtain the de-hazed intensity image, U (x, y) using the expression in (7).

U′(x, y) = I′max − I′(x, y)    (7)
We then obtain the modified HSI image as U′HSI(x, y) = merge{H, S, U′}, while the de-hazed RGB image is obtained by converting the HSI image back to RGB space: U′RGB(x, y) = HSI2RGB{U′HSI(x, y)}. The detailed system for the proposed PDE-based formulation of the algorithm is shown in Fig. 3.
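A condensed sketch of Eqs. (1)–(7) on the intensity channel follows, written in Python with NumPy/SciPy/scikit-image (the original implementation is in MATLAB). The Gaussian-surround width sigma, the CLAHE clip limit, the rescaling of the evolving image to [0, D − 1] and the fixed iteration count are illustrative assumptions; in the actual algorithm the stopping time is determined automatically as described in Section 4.1.1, and the hue and (tuned) saturation channels are restored afterwards.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import exposure

def pde_goc_clahe_dehaze(U, alpha=1.0, dt=0.25, n_iter=100, sigma=5.0, D=256, eps=1e-6):
    """Sketch of Eqs. (1)-(7) applied to the intensity channel U (float, [0, 1])."""
    I = U.max() - U                                   # Eq. (1): inverted intensity
    i = np.log(I + eps)                               # Eq. (2): log inverted intensity
    l = np.log(gaussian_filter(I, sigma) + eps)       # log illumination via a Gaussian surround (SSR-style estimate)
    r = i - l                                         # log reflectance, r = i - l
    r_goc = (D - 1) * (r - r.min()) / (r.max() - r.min() + eps)   # Eq. (3): gain-offset correction
    f_r = (D - 1) * exposure.equalize_adapthist(r_goc / (D - 1))  # f{r} = CLAHE(GOC(r))
    i_t = (D - 1) * (I - I.min()) / (I.max() - I.min() + eps)     # initial condition for the evolution
    for _ in range(n_iter):                           # Eq. (6): explicit finite-difference update
        i_t = i_t + alpha * (f_r - i_t) * dt
    I_prime = i_t / (D - 1)                           # modified inverted image, back in [0, 1]
    return I_prime.max() - I_prime                    # Eq. (7): de-hazed intensity U'
```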
We compare the PDE-based piece-wise linear-CLAHE (PDE-PWL-CLAHE) from previous work [12] and the single- and multi-scale variants of the proposed approach against the DCP method by He et al. [2]. The results are shown in Fig. 4, and we observe striking results when compared with He's method. The halo effect is clearly observed in three of the images, though reduced in image (c). However, the image in (c) appears flattened, with less highlighted detail and contrast. This halo effect is one of the issues to be addressed to further improve the proposed approach.

Fig. 3. Detailed flowchart of the PDE-based multi/single scale Retinex GOC-CLAHE-boosted algorithm.

Fig. 4. (a) PDE-PWL-CLAHE (b) PDE-GOC-CLAHE-MSR (c) PDE-GOC-CLAHE-SSR (d) He's method.

4.1. Problems and solutions

The multi-scale variant of the proposed approach yielded good de-hazing but with visible halos, poor colour and grey
world violation for some images. Thus, problems to solve included the automatic determination of stopping time, colour
distortion, halo effect and the dark image problem.

4.1.1. Optimized image metric guided evolution

We need to automate the algorithm in order to obtain the best results adaptively, without manual adjustment of parameters. Thus, we require a metric whose increase closely mirrors the progress made in the de-hazing process. Initial experiments with standard contrast measures and image formation metrics did not yield consistent results, so measures focusing on image edge features were studied. We finally settled on the average gradient (AG) measure due to its consistency, and we use the following optimization scheme to maximize the AG value: while ∂AG(I)/∂t ≥ 0, compute the evolving image I^(t+1)(x, y).
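A minimal sketch of this stopping rule is given below (Python/NumPy assumed). Interpreting AG as the mean gradient magnitude and testing ∂AG/∂t ≥ 0 by comparing successive iterates are our reading of the criterion, not the exact implementation.

```python
import numpy as np

def average_gradient(img):
    """Average gradient (AG): mean magnitude of the local intensity gradient."""
    gy, gx = np.gradient(img.astype(float))
    return np.mean(np.sqrt(gx ** 2 + gy ** 2))

def evolve_until_ag_peaks(i_0, f_r, alpha=1.0, dt=0.25, max_iter=500):
    """Apply the update of Eq. (6) while AG keeps increasing; stop once it drops."""
    i_t, ag_prev = i_0.copy(), average_gradient(i_0)
    for n in range(1, max_iter + 1):
        i_next = i_t + alpha * (f_r - i_t) * dt       # one explicit PDE step
        ag_next = average_gradient(i_next)
        if ag_next < ag_prev:                         # AG has started to decrease: stop, keep previous iterate
            return i_t, n - 1
        i_t, ag_prev = i_next, ag_next
    return i_t, max_iter
```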

4.1.2. Colour distortion problem and saturation tuning


In RGB space, there were mixed results, as several images exhibited colour distortion due to the nonlinear processing of each channel. Some images yielded rich colours while others exhibited faded or distorted colours with minimal enhancement. We subsequently tested the scheme in the HSI/HSV colour space and the results showed improvement with minimal colour distortion. However, the issue of colour fading or bleeding persisted in some cases. This necessitated adaptive control of the saturation component for consistent colour results. Thus, we devised the following scheme for saturation tuning after processing in the HSI colour space:

• Compute the hue deviation index, HDI [13], from the hue of the processed image and derive the value of the saturation tuning parameter as ksat = 1/√(2·HDI).
• Perform the following computation: S′ = ksat · S, where S′ and S are the tuned and initial saturation channels respectively.
• Convert to RGB using the modified saturation and intensity channels.

This yielded good results for most images but also over-enhanced a few. Thus, a fixed value of 1.5 was assigned to
ksat , after additional experimental evaluation. This gave a more balanced and consistent result for all images. This modified
algorithm is termed the proposed algorithm version one (PA-1).
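The saturation tuning step can be sketched as follows, using HSV from scikit-image as a stand-in for HSI (scikit-image provides no HSI conversion). Only the fixed ksat = 1.5 path is shown; the adaptive rule ksat = 1/√(2·HDI) would additionally require the hue deviation index of [13], which is not reproduced here.

```python
import numpy as np
from skimage.color import hsv2rgb

def tune_saturation(hsv_image, dehazed_intensity, ksat=1.5):
    """Rebuild the colour image from the original hue, a scaled saturation
    channel (S' = ksat * S) and the de-hazed intensity/value channel.
    `hsv_image` and `dehazed_intensity` are floats in [0, 1]."""
    out = hsv_image.copy()
    out[..., 1] = np.clip(ksat * hsv_image[..., 1], 0.0, 1.0)   # S' = ksat * S, clipped to a valid range
    out[..., 2] = dehazed_intensity                             # replace the value/intensity channel
    return hsv2rgb(out)
```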

4.1.3. Solving the halo problem using optimum width selection


For improved speed and effectiveness, we focus on the single-scale variant of the algorithm. We also ensure that we set the best values for the width, cs, of the surround function, with α = 1. A summary of results is shown in Fig. 5 for the transmission maps, de-hazed images and AG optimization plots. As the surround narrows, details become clearer and the halo effect is reduced.
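For reference, a surround function of the assumed standard SSR form F(x, y) = K·exp(−(x² + y²)/cs²) can be generated as below (the paper does not spell out its exact surround). Convolving the inverted intensity with this kernel (e.g. via scipy.ndimage.convolve) gives the illumination estimate; narrowing cs, as in Fig. 5, localizes that estimate and suppresses halos.

```python
import numpy as np

def gaussian_surround(radius, cs):
    """Normalized Gaussian surround F(x, y) = K * exp(-(x^2 + y^2) / cs^2),
    the usual single-scale Retinex form, sampled on a (2*radius+1)^2 grid."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    F = np.exp(-(x ** 2 + y ** 2) / float(cs) ** 2)
    return F / F.sum()   # normalization constant K chosen so the surround sums to 1
```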

4.1.4. Solving the dark image problem via selective inverted illumination/reflectance channel enhancement
The darkening of the images, especially those with huge disparity between sky and non-sky regions, was addressed by
processing the inverted illumination component for images with such features. The inverted reflectance channel is utilized
for processing images with minimal sky regions. The results shown in Fig. 6 indicate that the proposed approach yields a
better result than He’s method for such images. This improved single scale version of the proposed algorithm is denoted
proposed algorithm version 2 (PA-2).

4.2. Computational complexity

The proposed algorithm (PA-2) is analysed for structural and computational complexity. The main contributors to this aspect are the CLAHE and single scale Retinex (SSR) algorithms, which utilize tile-based operations and large Gaussian surrounds, respectively. We modify a previous complexity calculation by Mukherjee and Mitra [14] as aA + mM + eE + dD, in which a denotes the number of additions, m the number of multiplications, d the number of divisions and e the number of exponential operations per pixel. Considering the HSI forward and reverse conversion, image inversion, global GOC and local CLAHE operations in addition to the SSR processing, we estimate the complexity as 1359490A + 311080M + 4E + 5D.

5. Experiments and results

Extensive experiments were performed to assess the performance of the developed algorithm compared to de-hazing
techniques proposed by various authors. Benchmark images from the literature and relevant quality metrics for image
de-hazing were also employed in the experiments. The specifications of the computing platform are: Intel® Core i7-6500U x64-based processor at 2.59 gigahertz (GHz) with 12 gigabytes (GB) of random access memory (RAM). The graphics processor is an NVIDIA® GeForce™ 940M with compute capability 5.0. All algorithms were implemented and executed in MATLAB®.

Fig. 5. Transmission maps with corresponding de-hazed images and AG plots for (a) cs = 20 (b) cs = 5 (c) cs = 1 (d) cs = 0.5.

5.1. Relevant quality measures

Reference images are usually non-existent in image de-hazing, and such cases invalidate the use of full reference measures. However, relevant no-reference image quality metrics for de-hazing assessment exist. The most popular metric according to Lee et al. [1] is the ratio of visible edges of the de-hazed and hazy images, defined as Qe. This is based on the idea that a de-hazed image would have sharper edges and enhanced details compared to the hazy image [1]. The ratio of visible edge gradients is also used and is represented as Qg [1]. For Qe and Qg, values greater than unity imply improvement while those less than unity indicate degradation [1]. Other metrics include the percentage of dark/white pixels (the saturation parameter) [1], image entropy and the Q metric. The saturation parameter should be low, while entropy and the Q metric should be high for improved results. Based on experiments performed in the course of this study, high or low image entropy does not give a reliable indication of a visually pleasing de-hazed image, so it was omitted. Also, due to the absence of reference, ground-truth images, we utilize only Qe, Qg and the percentage of dark/white pixels, in addition to the RAG value (which is similar to Qe and Qg).
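As an illustration of how such a no-reference score can be computed, the sketch below implements one plausible reading of the RAG measure, namely the average gradient of the de-hazed image divided by that of the hazy image (in the spirit of Qg); this interpretation and the Python/NumPy form are assumptions rather than the exact definition used in the paper.

```python
import numpy as np

def average_gradient(img):
    """Mean gradient magnitude of a 2-D intensity image."""
    gy, gx = np.gradient(img.astype(float))
    return np.mean(np.sqrt(gx ** 2 + gy ** 2))

def rag(dehazed, hazy, eps=1e-12):
    """Ratio of average gradients: values above 1 suggest sharper edges after de-hazing."""
    return average_gradient(dehazed) / (average_gradient(hazy) + eps)
```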

5.2. Performance comparison

The proposed approaches (PA-1 and PA-2) are visually compared against algorithms proposed by Fattal, Galdran et al. [4,15], Tan, He et al. [2], Zhang et al. [7], Kim et al. [10], Dai et al. [9], Tarel et al. [9], Hsieh et al. [16], Dong et al. [5], Guo et al. [17], Ancuti et al. [18], Yeh et al. [19], Zhu et al. [20], Kratz and Nishino [21], Nishino et al. [22], Meng et al. [6], Gibson et al. [23], Wang and He [24], Yang et al. [25] and Ren et al. [8]. Additionally, PA-2 is objectively compared against available implementations of He et al., Zhu et al. and Ren et al.
In Fig. 7, first row (Tiananmen image), the first three images are from Zhu et al., amended with the DCP result of He et al., PA-2 and Ren et al. The first six images in the eighth row are from Galdran et al. [4], amended with the DCP method of He et al. and PA-2. The first four images in the fourth row (City1 image) are from Yang [25], amended with Nishino et al., Zhu et al., He et al. and PA-2. The first six images from the left-hand side in rows 2, 5, 3 and 7 are from Wang and He [24] (with Tarel et al., He et al., Nishino et al., Galdran et al. and Wang & He), amended with Dai et al., Dong et al., Zhu et al. and PA-2. The tenth row shows several algorithms for de-hazing the brickhouse image, while the sixth row shows several algorithms for de-hazing the pumpkins image. In the ninth row (train image), the first six images are from Galdran et al. [15], amended with results from PA-1 and PA-2, Fattal et al. and Lu et al. All visual results are presented as obtained by the original authors from their papers, except for the works by He et al., Zhu et al. and Ren et al., for which MATLAB implementations are available for both visual and quantitative comparison.

Fig. 6. (a) Hazy image (b) He's method (c) proposed method using reflectance channel and (d) illumination channel.
Based on visual analysis, the proposed approach shows improved contrast, colour and edge enhancement in addition to haze removal. For example, in Rows 1 and 4, PA-2 avoids over-enhancement of the sky regions, unlike the methods of He et al. and Tarel (which produce dark sky regions). Also, there is more brightness enhancement in regions with detail (which are dark in the results of Zhu, He and Ren) for the Tiananmen image in Row 1. In Row 8, only PA-2 and the method of He et al. show sufficient haze removal at the upper ends of the cones image. However, He et al. yields a darker image, unlike the method by Ren et al.
For Row 4, PA-2 shows enhanced contrast in both sky and non-sky regions for the City1 image. Also, there is increased detail in non-sky regions (see the tallest skyscraper) with uniform enhancement of sky regions. The result of Nishino et al. is over-enhanced, while the methods of Galdran et al. and Dai et al. retain hazy features in both sky and non-sky regions, resulting in minimal contrast enhancement. Conversely, the methods by Zhu et al., Ren et al. and He et al. yield dark images.
In Row 2, PA-2 shows enhanced contrast in both sky and non-sky regions for the canyon image. In this case, the sky region is highly detailed and properly enhanced with PA-2, followed by Yang et al. and He et al. Once more, Nishino et al. over-enhances the entire image, while Ren et al. yields an image with some hazy regions. PA-2 yields balanced enhancement in both sky and non-sky regions by brightening regions which are dark in images obtained by most of the other schemes.
In Row 5, images produced with the methods of He et al., Wang and He, Dong et al. and PA-2 show the most enhanced results. Conversely, the methods of Tarel and Galdran et al. perform poorly for the canon image. Additionally, the methods of Zhu et al. and Ren et al. fail to sufficiently remove the haze in the upper section of the image.
In Row 3, the method by Wang and He gives the most detailed result for the mountain image (though sky regions are over-enhanced), followed by Tarel, PA-2 and He et al. Images produced by Ren et al. and Zhu et al. appear slightly hazy. For Row 10, only Guo et al., PA-2, Hsieh et al. and Fattal yield acceptable results for the brickhouse image.
In Row 6, Fattal, Dong et al., He et al., PA-2 and Nishino et al. yield good results for the pumpkins image. In Row 9, PA-1, PA-2, Meng et al., Dai et al. and Lu et al. yield images with the most visually perceptible contrast enhancement for the train image. In contrast, Fattal's method produces an extremely dark image. In Row 7, PA-2 has the most enhanced result, followed by the methods of He et al. and Wang and He.

Fig. 7. (a) Original and processed image results using various algorithms and PA-2 (b) key to figures.

Table 1
Obtained values for images processed with algorithms by He et al. [2], Zhu et al. [20], Ren et al. [8] and PA-2.

Algorithm parameters: He et al. [2]: ω = 0.95, w = 15, A = 240, r = 24. Zhu et al. [20]: β = 0.95 or 1; θ0 = 0.1893; θ1 = 1.0267; θ2 = −1.2966; guided filter r = 60; t0 = 0.05; t1 = 1; ε = 0.001. Ren et al. [8]: γ = 1.3 (canyon image), 0.8 ≤ γ ≤ 1.5 (others). PA-2: Δt = 0.25; ksat = 1.5.

Images        He et al. [2]           Zhu et al. [20]         Ren et al. [8]          PA-2
Tiananmen     1.8455/0.9606/0.1879    1.1866/1.0041/0.0814    1.5649/0.8734/0.1288    2.8225/1.0386/0.0625
Cones         1.4977/1.1478/0.3878    0.9704/1.0873/0.2499    1.3818/1.1042/0.2956    2.7516/1.1999/0.2733
City1         1.1914/1.0332/0.1336    0.9303/1.0075/0.2002    1.2989/1.0232/0.2002    1.7762/1.1164/0.0562
Canyon        1.7481/1.1057/0.3796    1.2880/1.0679/0.2412    1.4564/1.0319/0.0446    2.5408/1.2070/0.3103
Canon         3.2903/1.0857/0.3947    1.7127/0.9089/0.3198    2.6871/1.0832/0.3831    2.8059/1.1188/0.3947
Mountain      1.7105/0.9348/0.0787    1.2092/0.9307/0.0984    1.6005/0.9784/0.0074    2.7275/1.0202/0.0074
Brickhouse    1.2006/0.9747/0.1172    0.8597/1.1395/0.0730    1.2118/1.0030/0.1288    1.0836/1.1135/0.1021
Pumpkins      1.5927/0.9501/0.1581    0.9311/0.6726/0.1333    1.4753/0.9511/0.1764    2.4539/1.0361/0.1516
Train         1.5206/1.0090/0.1664    0.9797/1.0509/0.3265    1.2036/1.0203/0.2412    1.5190/1.1106/0.3005
Toys          2.2566/0.9712/0.3840    1.6711/1.0117/0.2865    2.1568/0.9576/0.2827    2.9813/1.1095/0.3379

Table 2
Runtimes for images processed with algorithms by He et al. [2], Zhu et al. [20], Ren et al. [8] and PA-2.

Images                  He et al. [2]            Zhu et al. [20]   Ren et al. [8]   PA-2
                        TH (s) (standard/fast)   TZ (s)            TR (s)           TPA (s)     No. of iterations (N)
Tiananmen (450 × 600)   30.364180/1.253494       0.991586          2.362754         3.200073    118
Cones (384 × 465)       21.205725/0.850155       0.661314          1.651447         2.169653    119
City1 (600 × 400)       28.019656/1.094910       0.875287          2.070620         3.011386    117
Canyon (600 × 450)      30.861073/1.237655       0.972741          2.529734         3.413434    119
Canon (525 × 600)       35.503302/1.431257       1.135376          2.890541         3.899115    118
Mountain (400 × 600)    27.063493/1.129231       0.880835          2.358143         3.074064    119
Brickhouse (711 × 693)  57.417363/2.230871       1.667610          5.234674         5.826943    116
Pumpkins (400 × 600)    27.23115/1.125475        0.901815          2.253179         2.967405    120
Train (400 × 600)       27.443427/1.105757       0.849072          2.075004         3.064349    118
Toys (360 × 500)        20.462751/0.844945       0.657376          1.578068         2.282819    118

We utilize the ratio of average gradients (RAG) (similar to Qg), the ratio of edges (Qe) and the saturation parameter (σ), with numerical results shown in Table 1. The available de-hazing implementations include:

• DCP (standard and fast versions) by He et al. [2], using the following optimal parameters: constant coefficient, ω = 0.95; patch size, w = 15; regularization parameter, λ = 0.0001 (standard version); and radius of guided filter, r = 24 (fast version).
• Colour Attenuation Prior (CAP) by Zhu et al. [20], with the following optimal parameters from their paper and MATLAB implementation: scattering coefficient, β = 0.95 or 1; linear coefficients: θ0 = 0.1893, θ1 = 1.0267, θ2 = −1.2966; transmission lower and upper bounds: t0 = 0.05, t1 = 1; and regularization parameter, ε = 0.001.
• Multi-Scale Convolutional Neural Network by Ren et al. [8], with parameter γ = 1.3 for the canyon image and 0.8 ≤ γ ≤ 1.5 for the other images.
• PA-2, set to the default parameters for minimal run-time (Δt = 0.25; ksat = 1.5).

Based on the results in Table 1, PA-2 almost always yields images with the highest ratio of average gradients (RAG) values compared with the other algorithms. Additionally, all results with PA-2 are obtained without adjustment of parameters, unlike the others, which may have to be tuned for different images. Thus, we can conclude that the RAG measure is a reliable metric for image de-hazing, and its optimization yields the results for PA-1 and PA-2.
We also present the run-times of the available algorithms evaluated on the same computing platform in Table 2. Results indicate that PA-2 has the longest running time among the fast implementations. However, this is not an equal comparison, since PA-2 is an iterative algorithm (with the number of iterations depending on the input image) while the rest are not. The available implementations of He's method include the fast version (using the guided filter) and the standard version (utilizing soft matting). Consequently, the standard DCP variant incurs a much higher execution time than PA-2 due to the soft matting. The multi-scale convolutional neural network by Ren et al. is highly optimized to work with graphics processing units (GPUs). Furthermore, key implementation details are hidden in the MATLAB p-code format and are thus unavailable for analysis. The method by Zhu et al. is the fastest but yields the least visually acceptable images and the lowest numerical results. PA-2 gives the most consistent degree of hazy image contrast enhancement in terms of both subjective and objective evaluation. Future objectives are to completely eliminate halo effects and reduce computational complexity and run-time, while preserving the method's effectiveness and consistency.

6. Conclusion

An adaptive partial differential equation-based scheme for the enhancement of hazy images has been presented. The scheme avoids the extensive stages of the standard DCP-based approaches while yielding comparable visual and quantitative results. The algorithm performs relatively well in the absence of the priors, assumptions and physical models normally utilized for image de-hazing. The proposed approach is also fully automated, avoiding the manual tuning of parameters that hinders the optimal operation of conventional enhancement-based de-hazing algorithms. This automated process is due to the utilization of a reliable haziness metric, which guides the de-hazing process and solves the problem of the PDE stopping time. The algorithm also solved the problem of colour distortion by utilizing adaptive and fixed saturation control parameters. Furthermore, the problem of visual halos was mitigated by employing a fixed optimum width of the surround function for processing all images. The problem of dark images was solved by processing the logarithm of either the inverted illumination or reflectance component obtained from the hazy image. Future work will explore the possibilities of further improving the de-hazing results via improved skylight discrimination, haze density detection, and processing by automating the selection of the illumination or reflectance component.

Acknowledgments

The author would like to thank the editors and reviewers for helpful suggestions, and Prof. (Mrs) D. A. Nnolim for proof-reading the final drafts and useful comments.

Supplementary materials

Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.compeleceng.2018.01.041.

References

[1] Lee Sungmin, Yun Seokmin, Nam Ju-Hun, Won Chee Sun, Jung Seung-Won. A review on dark channel prior based image dehazing algorithms. EURASIP
J Image Video Process 2016;2016(4):1–23.
[2] He Kaiming, Sun Jian, Tang Xiaoou. Single image haze removal using dark channel prior. IEEE Trans Pattern Anal Mach Intell (PAMI) 2010;33(12):2341–53.
[3] Cui Tong, Tian Jiandong, Wang Ende, Tang Yandong. Single image dehazing by latent region-segmentation based transmission estimation and weighted L1-norm regularisation. IET Image Process 2017;11(January (2)):145–54.
[4] Galdran Adrian, Vazquez-Corral Javier, Pardo David, Bertalmío Marcelo. Fusion-based variational image dehazing. IEEE Signal Process Lett 2017;24(February (2)):151–5.
[5] Dong Xue-Mei, Hu Xi-Yuan, Peng Si-Long, Wang Duo-Chao. Single color image dehazing using sparse priors. 17th IEEE international conference on
image processing (ICIP); 2010. 26-29 Sept.
[6] Meng Gaofeng, Wang Ying, Duan Jiangyong, Xiang Shiming, Pan Chunhong. Efficient Image dehazing with boundary constraint and contextual regu-
larization. In: IEEE international conference on computer vision (ICCV-2013); 2013. p. 617–24.
[7] Zhang Xian-Shi, Gao Shao-Bing, Li Chao-Yi, Li Yong-Jie. A retina inspired model for enhancing visibility of hazy images. Front Comput Sci 22nd De-
cember 2015;9(151):1–13.
[8] Ren Wenqi, Liu Si, Zhang Hua, Pan Jinshan, Cao Xiaochun, Yang Ming-Hsuan. Single image dehazing via multi-scale convolutional neural networks. In:
European conference on computer vision. Springer International Publishing; Oct. 8, 2016. p. 154–69.
[9] Dai Sheng-kui, Tarel Jean-Philippe. Adaptive sky detection and preservation in dehazing algorithm. In: IEEE international symposium on intelligent
signal processing and communication systems (ISPACS); November 2015. p. 634–9.
[10] Kim Jin-Hwan, Sim Jae-Young, Kim Chang-Su. Single image dehazing based on contrast enhancement. In: IEEE International conference on acoustics,
speech and signal processing (ICASSP); May 22-27 2011. p. 1273–6.
[11] Jiang Xuesong, Yao Hongxun, Zhang Shengping, Lu Xiusheng, Zeng Wei. Night video enhancement using improved dark channel prior. In: 20th IEEE
international conference on image processing (ICIP); Sep 15 2013. p. 553–7.
[12] Nnolim Uche Afam. Improved partial differential equation (PDE)-based enhancement for underwater images using local-global contrast operators and
fuzzy homomorphic processes. IET Image Process 2017;11(November (11)):1059–67.
[13] Shen Xiaole, Li Qingquan, Tan Yingjie, Shen Linlin. An uneven illumination correction algorithm for optical remote sensing images covered with thin
clouds. Remote Sens 2015;7(September (9)):11848–62.
[14] Mukherjee Jayanta, Mitra Sanjit K. Enhancement of colour images by scaling the DCT coefficients. IEEE Trans Image Process 2008;17(19):1783–94.
[15] Galdran Adrian, Vazquez-Corral Javier, Pardo David, Bertalmio Marcelo. Enhanced variational image dehazing. SIAM J Imaging Sci September
2015:1–26.
[16] Hsieh Cheng-Hsiung, Weng Zhen-Ming, Lin Yu-Sheng. Single Image haze removal with pixel-based transmission map estimation. In: WSEAS recent
advances in information science; 2016. p. 121–6.
[17] Guo Fan, Cai Zixing, Xie Bin, Tang Jin. Automatic image haze removal based on luminance component. In: 6th international conference on wireless
communications networking and mobile computing (WiCOM); 23-25 Sept. 2010. p. 1–4.
[18] Ancuti Cosmin, Ancuti Codruta Orniana, Haber Tom, Bekaert Philippe. Enhancing underwater images and videos by fusion. In: IEEE conference on
computer vision and pattern recognition. Providence, RI, USA; Jun 16 2012. p. 81–8.
[19] Yeh Chia-Hung, Kang Li-Wei, Lee Ming-Sui, Lin Cheng-Yang. Haze effect removal from image via haze density estimation in optical model. Opt Express
2013;21(November (22)):27127–41.
[20] Zhu Qingsong, Mai Jiaming, Shao Ling. A fast single image haze removal algorithm using color attenuation prior. IEEE Trans Image Process 2015;24(November (11)):3522–33.
[21] Kratz Louis, Nishino Ko. Factorizing scene albedo and depth from a single foggy image. In: IEEE international conference on computer vision (ICCV).
Kyoto, Japan; 2009. p. 1701–8. September.
[22] Nishino Ko, Kratz Louis, Lombardi Stephen. Bayesian defogging. Int J Comput Vision 2012;98(July (3)):263–78.
[23] Gibson Kristofor B, Nguyen Truong Q. Fast single image fog removal using the adaptive Wiener Filter. In: 2013 20th IEEE international conference on
image processing (ICIP); 2013. p. 714–18. September.

[24] Wang Wei, He Chuanjiang. Depth and reflection total variation for single image dehazing. College of Mathematics and Statistics, Chongqing University,
Chongqing, China; 2016. Technical report 1601.05994, 13 October.
[25] Yang Shuai, Xie Qing Yaoqin. An improved single image haze removal algorithm based on dark channel prior and histogram specification. In: 3rd
international conference on multimedia technology (ICMT-13); 2013. p. 279–92. Nov. 22.

Uche A. Nnolim received the B.E.E. degree in electrical engineering from the University of Minnesota, Minneapolis, USA, in 2003, and the M.Sc. (Research) and Ph.D. degrees in electronic engineering from the University of Kent, Canterbury, UK, in 2007 and 2010 respectively. His research interests include, but are not limited to, image processing, fuzzy logic, partial differential equations, fractals, chaos theory and embedded systems.
