An Automated Multi Scale RETINEX With Color Restoration For Image Enhancement
Abstract—The dynamic range of a camera is much smaller than that of the human visual system. This causes images taken by the camera to look different from how the scene would have looked to the naked eye. The Multi Scale Retinex with Color Restoration (MSRCR) algorithm enhances images taken under a wide range of nonlinear illumination conditions to the level at which a user would have perceived the scene in real time. But there are parameters used in this enhancement method that are image dependent and have to be varied based on the images under consideration. In this paper we propose a completely automated approach for MSRCR by obtaining the parameter values from the image being enhanced.

I. INTRODUCTION

Standard image enhancement techniques modify the image using methods such as histogram equalization, histogram specification etc. [1] so that the enhanced image is more pleasing to the visual system of the user than the original image. There is a difference between the way our visual system perceives a scene when observed directly and the way a digital camera captures the scene. Our eyes can perceive the color of an object irrespective of the illuminant source, but the color of the captured image depends on the lighting conditions at the scene. Our aim is to enhance the quality of the recorded image so that it matches how a human being would have perceived the scene. This property that we aim to achieve is called 'color constancy'. It cannot be achieved using standard image enhancement techniques. Histogram equalization is basically a contrast enhancement technique that works well on images that are uni-modal (i.e. very dark or very bright images). Advanced variants of histogram equalisation like Adaptive Histogram Equalisation (AHE) [2], Contrast Limited Adaptive Histogram Equalisation (CLAHE) [3] and Multi Scale Adaptive Histogram Equalisation (MAHE) [4] give strong contrast enhancement. But these methods are not used in color image processing, as strong contrast enhancement might make the image look unnatural. One of the enhancement techniques that tries to achieve color constancy is Retinex (Retina + Cortex) [5].

A color constancy algorithm must be able to simultaneously achieve the three properties given below [6]:
i. dynamic range compression,
ii. color independence from the spectral distribution of the scene illuminant, and
iii. color and lightness rendition.

The first property can be achieved by applying logarithmic transformations on the image [1]. The second property can be achieved by eliminating the illuminance component in the image. Every pixel in an image can be represented as a product of illuminance and reflectance, i.e.

S(x, y) = R(x, y) ∗ L(x, y)   (1)

where L represents illuminance, R represents reflectance and S represents the image pixel. Our aim is to eliminate L(x, y). Illumination varies slowly across the image, unlike reflectance, so the illuminance of an image can be obtained by low pass filtering the image.

Instead of obtaining R = S/L directly, we use a logarithmic approach to achieve the same, since applying a logarithm on an image gives us dynamic range compression. Let s = log(S), r1 = log(R), l = log(L). Equation (1) can now be written as

r1(x, y) = s(x, y) − l(x, y)   (2)

L can be obtained by convolving a low pass filter F with the image S. Initially Land proposed [5] 1/r² as the low pass filter F, where r = √(x² + y²) and x, y are the pixel locations. By using this function, the first two properties of color constancy were achieved but not the third one. Jobson et al. [6] proposed a new method, popularly known as Single Scale Retinex (SSR), to overcome this problem.

In this paper we will look at the SSR algorithm in Section II, a modified version of SSR named Multi Scale Retinex (MSR) in Section III, a variant of MSR known as Multi Scale Retinex with Color Restoration (MSRCR) in Section IV, and finally we present our proposed automated approach to MSRCR in Section V.
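To make the log-domain decomposition of equations (1) and (2) concrete, a minimal sketch is given below: the illuminance is estimated with a Gaussian low pass filter and subtracted in the log domain, which is essentially a single scale retinex applied to one color band. The Gaussian surround, the scale value of 80 and the function name are our own illustrative choices, not values prescribed by this paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(channel, sigma=80.0):
    """Log-domain reflectance estimate r1 = log(S) - log(L), where L is a
    low pass (Gaussian surround) version of the channel (illustrative sketch)."""
    x = channel.astype(np.float64)
    s = np.log1p(x)                           # log of the image; log1p keeps log(0) finite
    l = np.log1p(gaussian_filter(x, sigma))   # log of the low pass illuminance estimate
    return s - l                              # r1(x, y) = s(x, y) - l(x, y)
```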
III. MULTI SCALE RETINEX (MSR)

In the multi scale retinex method [7], we find the single scale retinex output for various values of the surround constant and add all the retinex outputs by giving them equal weight, as

R_MSRi = Σ_{n=1}^{N} wn Rni   (5)

where wn are the (equal) weights and Rni is the single scale retinex output of the i-th color band computed with the n-th surround constant.

IV. MULTI SCALE RETINEX WITH COLOR RESTORATION (MSRCR)

In the color restoration function, β is a gain constant, α controls the strength of the nonlinearity, and G and b are the final gain and offset values. The values specified for these constants by Jobson et al. [6] are β = 46, α = 125, b = −30 and G = 192.

After performing color restoration it was found that the restored images were still "greyed-out". Though the final gain and offset values, G and b, are used for the adjustment from the logarithmic domain to the display domain, the "greying-out" is not completely removed.
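A minimal sketch of this weighted combination, followed by a color restoration and gain/offset step using the constants quoted above, is given below. The three surround scales, the equal weights 1/N and the functional form of the color restoration factor, β log(α Ii / Σj Ij), are assumptions on our part, following the formulation commonly reported in [6], [7].

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(channel, sigmas=(15.0, 80.0, 250.0)):
    """Equation (5): R_MSR = sum_n wn * Rn with equal weights wn = 1/N."""
    x = channel.astype(np.float64)
    out = np.zeros_like(x)
    for sigma in sigmas:
        out += np.log1p(x) - np.log1p(gaussian_filter(x, sigma))
    return out / len(sigmas)

def msrcr(image, beta=46.0, alpha=125.0, b=-30.0, G=192.0):
    """MSR on each band, then a color restoration factor and the final
    gain/offset adjustment with the constants quoted in the text (sketch)."""
    img = image.astype(np.float64) + 1.0      # keep the logarithms finite
    total = img.sum(axis=2)
    out = np.zeros_like(img)
    for i in range(img.shape[2]):
        # color restoration factor: beta * log(alpha * I_i / sum_j I_j)
        crf = beta * np.log(alpha * img[:, :, i] / total)
        out[:, :, i] = G * (crf * multi_scale_retinex(img[:, :, i]) + b)
    return out
```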
Fig. 2. Histogram of a single scale retinex enhanced image.
Fig. 3. Histogram explaining clipping points using the variance.
V. PROPOSED METHOD

Though many modified versions of MSRCR are available in the literature [9]-[11], none of these propose an automated approach for Retinex. We propose an automated (image-independent) method to choose the upper and lower clipping points. The upper and lower clipping points can be chosen using two methods:
i. by using the variance of the histogram as a control measure, or
ii. by using the frequency of occurrence of pixel values in the histogram as a control measure.

Our initial approach was to use variance as the control measure. A particular test image was taken. After performing single scale retinex on that image, the histogram of the enhanced (single scale retinexed) image was plotted and its variance was found. The clipping point was chosen as 'x' times the variance, where 'x' can take any value from 1 to 5, as shown in Fig. 3. The output image was obtained after clipping the histogram and rescaling the clipped region to 0 to 255, as shown in Fig. 2. But after testing this method across various images, we came to the conclusion that a single 'x' value would not work for all images. So the procedure of finding the clipping points cannot be automated if variance is chosen as the control measure.

In the second method, the control measure depends on the frequency of occurrence of pixels. For most images, the histogram of the enhanced image is similar to a Gaussian (but not exactly). The frequency of occurrence of the pixel value '0' in the enhanced image is found. Let this value be 'max', as shown in Fig. 4. The lower and upper clipping points are then obtained as shown in Fig. 4. After testing across many images, y = 0.05 was found to be an optimum value that can be used for many types of images, meaning that 5 percent of the pixels on either side of the histogram are discarded. This approach removes the image dependency of the previous method, which is a great advantage in real time applications [12], [13] where the user would not have time to choose the optimum clipping points for a particular image.
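The sketch below shows one plausible reading of this rule: the count of the histogram bin at pixel value '0' is taken as 'max', the lower and upper clipping points are the outermost bins whose counts still exceed y · max with y = 0.05, and the clipped range is rescaled to 0 to 255. The number of bins and the exact way the clipping points are located are our interpretation of Fig. 4 rather than an exact reproduction of it.

```python
import numpy as np

def automated_clip_rescale(retinex_output, y=0.05, bins=256):
    """Clip the retinex output using the histogram rule described above and
    rescale the clipped range to 0-255 (one possible interpretation)."""
    counts, edges = np.histogram(retinex_output.ravel(), bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])

    peak = counts[np.argmin(np.abs(centers))]        # frequency of the bin at value 0 ('max')
    keep = np.where(counts >= y * peak)[0]           # bins whose counts exceed y * max
    low, high = centers[keep[0]], centers[keep[-1]]  # lower and upper clipping points

    clipped = np.clip(retinex_output, low, high)
    return 255.0 * (clipped - low) / (high - low)
```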
A. Results
We use the standard Retinex test images that are given on the NASA Retinex website for comparison. The enhanced outputs provided for each test image on the website have been obtained by adjusting the parameters depending on the image. The Retinex algorithm is also commercially provided by True View Imaging Company as part of the Photoflair software, and a demo version of this software is available for free. This software also uses an image-independent (automated) approach for Retinex. Their method gives very good results for some images but poor results for others (especially dark or shadowed images). But our automated approach gives good enhancement for all types of images.
We present here three images to compare the software output with our automated approach. The first image, Fig. 5 a), is obtained from the NASA Retinex website. Fig. 5 shows clearly that our method performs better than the software: the red colour uniform and the subjects are better visible in our enhanced image than in the software enhanced image. The second image, Fig. 6 a), was taken by us under poor lighting conditions. For this image too, our enhanced image is much better than the software's. The third image, Fig. 7 a), is also taken from NASA's website. For this image, both the software and our proposed method perform equally well.

Fig. 6. a) Input image b) Our output c) Software output.
Fig. 7. a) Input image b) Our output c) Software output.
Fig. 8. a) Input image b) Original MSRCR output c) Luma based MSRCR output.

B. Luma based approach

Though automated MSRCR performs well in most cases, it fails to render exact colour when there are large areas of constant colour. Applying the MSRCR algorithm on the luma component of the image turned out to be a good solution for this problem. PCA is the best transform from RGB space to luma and chroma space, since the luma and chroma components are made to be completely independent (orthogonal) of each other. But PCA is not widely used since the transform is image dependent. Meylan et al. have compared in their work [14] the difference in the outputs obtained when MSRCR is applied only on the luma components obtained from different colour transforms. According to this paper, the outputs obtained after applying MSRCR on the luma components of the YUV and Lab transforms are very close to the outputs obtained after applying MSRCR on the luma component of the PCA transform. Figure 8 shows the disadvantage of using the original MSRCR and how luma based MSRCR overcomes it. Even colour rendition is better with luma based MSRCR for this particular example. Another advantage of luma based MSRCR is that the simulation time is reduced to approximately one third of the time taken by the original MSRCR, since the MSRCR algorithm is applied only on the luma component. One of the reasons for luma based MSRCR to give better results than the original MSRCR is the automation step. Since the optimal value used to automate the MSRCR algorithm was found by trial and error, a uniform optimal value of y = 0.05 was used across all three RGB colour bands. This might work for some images but need not necessarily work for all images. In the luma based MSRCR approach, since the automated MSRCR algorithm is applied only on the luma channel, our automation algorithm is found to be more consistent across different images.
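A rough sketch of this variant is given below, assuming a YUV luma/chroma split (which [14] reports to behave almost as well as PCA) and reusing the hypothetical multi_scale_retinex and automated_clip_rescale routines from the earlier sketches on the luma channel only.

```python
import numpy as np

# Assumes multi_scale_retinex() and automated_clip_rescale() from the earlier sketches.

# BT.601 RGB <-> YUV matrices; using YUV rather than Lab or PCA is our choice, following [14].
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def luma_msrcr(rgb):
    """Apply the automated retinex chain to the luma (Y) channel only,
    leaving the chroma (U, V) channels untouched."""
    yuv = rgb.astype(np.float64) @ RGB2YUV.T
    yuv[:, :, 0] = automated_clip_rescale(multi_scale_retinex(yuv[:, :, 0]))
    back = yuv @ YUV2RGB.T
    return np.clip(back, 0, 255).astype(np.uint8)
```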
VI. FUTURE WORK

The automation process proposed in this paper has been achieved by testing across various images. Future work will concentrate on deriving this automated value through mathematical analysis.

REFERENCES
[1] R. C. Gonzalez, R. E. Woods, "Digital Image Processing," Third Edition, Pearson Publications.
[2] S. M. Pizer, J. B. Zimmerman, E. Stab, "Adaptive grey level assignment in CT scan display," Journal of Computer Assisted Tomography, vol. 8, pp. 300-305, 1984.
[3] S. M. Pizer, E. P. Amburn, "Adaptive histogram equalization and its variations," Computer Vision, Graphics and Image Processing, vol. 39, pp. 355-368, 1987.
[4] Y. Jin, L. M. Fayad, A. F. Laine, "Contrast enhancement by multiscale adaptive histogram equalization," Proceedings of SPIE, vol. 4478, pp. 206-213, 2001.
[5] E. Land, "An alternative technique for the computation of the designator in the Retinex theory of color vision," Proceedings of the National Academy of Sciences, vol. 83, pp. 3078-3080, 1986.
[6] D. J. Jobson, Zia-ur-Rahman, G. A. Woodell, "Properties and performance of a Center/Surround Retinex," IEEE Transactions on Image Processing, vol. 6, no. 3, March 1997.
[7] D. J. Jobson, Zia-ur-Rahman, G. A. Woodell, "A Multiscale Retinex for bridging the gap between color images and the human observation of scenes," IEEE Transactions on Image Processing, vol. 6, no. 7, July 1997.
[8] A. C. Hulbert, "Formal connections between lightness algorithms," Journal of the Optical Society of America, vol. 3, issue 10, pp. 1684-1693, 1986.
[9] L. Yong, Y. Xian, "Multi dimensional multi scale image enhancement algorithm," Second International Conference on Signal Processing Systems, 2010.
[10] Li Tao, R. Tompkins, V. K. Ansari, "An illuminance reflectance model for non linear enhancement of color images," IEEE Conference on Computer Vision and Pattern Recognition, 2005.
[11] H. Hu, G. Ni, "Color Image Enhancement based on the Improved Retinex," International Conference on Multimedia Technology, October 2010.
[12] W. J. Kyung, Y. H. Ha, D. C. Kim, "Real time Multi Scale Retinex to enhance night scene of vehicular camera," 17th Korea-Japan Joint Workshop on Frontiers of Computer Vision, 2011.
[13] G. D. Hines, Zia-ur-Rahman, D. J. Jobson, G. A. Woodell, S. D. Harrah, "Real time enhanced vision system," Proceedings of the SPIE, vol. 5802, pp. 127-134, 2005.
[14] L. Meylan, S. Susstrunk, "The influence of luminance on local tone mapping," IS&T/SID Color Imaging Conference, 2005.