LOW LIGHT ENHANCEMENT WITH ZERO DCE MODE
B.Tech. in
By
BATTULA MANU(621121)
EC399
MINI PROJECT II
NATIONAL INSTITUTE OF TECHNOLOGY
ANDHRA PRADESH-534101
APRIL 2024
BONAFIDE CERTIFICATE
This is to certify that the project titled LOW LIGHT ENHANCEMENT WITH ZERO DCE MODE is a bonafide record of the work done by BATTULA MANU (621121), submitted in partial fulfillment of the requirements for the award of the degree of Bachelor of Technology.
ABSTRACT
ACKNOWLEDGEMENT
We would like to thank the following people for their support and guidance, without whom this project would not have been possible:
Dr. A. ArunKumar, our project in-charge, for helping and guiding us over the course of this project.
Dhanashree for their insight and advice provided during the review sessions.
We would also like to thank our individual parents and friends for their constant support.
TABLE OF CONTENTS
ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii
ACKNOWLEDGEMENT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
TABLE OF CONTENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
LIST OF FIGURES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 Review Of Literature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2.1 Challenges in low-light Imaging . . . . . . . . . . . . . . . . . . . . . 2
2.2 Existing System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
4 Loss Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
4.4 Illumination Smoothness Loss . . . . . . . . . . . . . . . . . . . . . . 10
5 Experimental procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
5.1 Dataset Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
5.2 Code For Low Light Enhancement with Zero DCE mode . . . . . . . . 11
6 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
6.1 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
6.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
List of Figures
Chapter 1
Introduction
1.1 Introduction
In this work, we study Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates low-light enhancement as the estimation of image-specific pixel-wise and high-order tonal curves for dynamic range adjustment of a given image.
Zero-DCE takes a low-light image as input and produces high-order tonal curves as its
output. These curves are then used for pixel-wise adjustment on the dynamic range of
the input to obtain an enhanced image. The curve estimation process is done in such
a way that it maintains the range of the enhanced image and preserves the contrast
of neighbouring pixels. This curve estimation is inspired by curves adjustment used in
photo editing software such as Adobe Photoshop where users can adjust points through-
out an image’s tonal range. Zero-DCE is appealing because of its relaxed assumptions
with regard to reference images: it does not require any input/output image pairs dur-
ing training. This is achieved through a set of carefully formulated non-reference loss
functions, which implicitly measure the enhancement quality and guide the training of
the network.
Chapter 2
Review Of Literature
2.1 Challenges in low-light Imaging
We discuss the inherent challenges associated with capturing and processing images in low-light conditions, setting the stage for exploring potential solutions. These challenges include limited photon count, high noise levels, and reduced contrast, all of which degrade the visual quality of captured images and complicate subsequent processing.
2.2 Existing System
A critical analysis of current methods and algorithms used for low-light image enhancement is presented, highlighting their strengths and limitations. Existing techniques often rely on deep curve estimation (DCE), which can be computationally expensive and may not always produce accurate results. Additionally, traditional techniques such as histogram equalization and dehazing algorithms may result in a loss of detail and color accuracy. We discuss the need for innovative approaches that overcome these limitations and provide robust enhancement capabilities.
This review serves as a basis for identifying gaps in existing techniques and informing the development of our proposed approach.
Here, we define the scope of our project, including the specific goals and objectives we
aim to achieve in the context of low-light image enhancement. Our primary objective
is to develop a novel enhancement technique that improves brightness and visibility in
low-light conditions while preserving image details and minimizing artifacts. We also
aim to provide a zero DCE mode, ensuring enhancement without overexposure or loss
of quality. We outline the key aspects of our approach, including the integration of
traditional image processing techniques and deep learning methodologies.
Chapter 3
Zero DCE Framework
3.1 Light-Enhancement Curve
A light-enhancement curve is a kind of curve that can map a low-light image to its
enhanced version automatically, where the self-adaptive curve parameters are solely
dependent on the input image. When designing such a curve, three objectives should be
taken into account:
• Each pixel value of the enhanced image should be in the normalized range [0,1],
in order to avoid information loss induced by overflow truncation.
• The shape of this curve should be as simple as possible, and the curve should be differentiable in the process of gradient backpropagation.
• The curve should be monotonous so as to preserve the differences (contrast) of neighbouring pixels.
We apply the light-enhancement curve separately to the three RGB channels instead of solely on the illumination channel. The three-channel adjustment can better preserve the inherent color and reduce the risk of over-saturation.
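For concreteness, the quadratic light-enhancement (LE) curve that satisfies these objectives can be written, using the same sign convention as the get_enhanced_image code in Chapter 5, as
$$\mathrm{LE}\big(I(\mathbf{x}); \mathcal{A}\big) = I(\mathbf{x}) + \mathcal{A}(\mathbf{x})\,\big(I(\mathbf{x})^2 - I(\mathbf{x})\big),$$
and it is applied iteratively,
$$\mathrm{LE}_n(\mathbf{x}) = \mathrm{LE}_{n-1}(\mathbf{x}) + \mathcal{A}_n(\mathbf{x})\,\big(\mathrm{LE}_{n-1}(\mathbf{x})^2 - \mathrm{LE}_{n-1}(\mathbf{x})\big), \qquad n = 1, \ldots, 8,$$
where $I(\mathbf{x}) \in [0,1]$ is the input pixel value and $\mathcal{A}_n(\mathbf{x}) \in [-1,1]$ is the pixel-wise curve parameter map estimated by the network for iteration $n$.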
Figure 3.1: Zero DCE Framework
3.2 DCE-Net
The DCE-Net is a lightweight deep neural network that learns the mapping between
an input image and its best-fitting curve parameter maps. The input to the DCE-Net
is a low-light image while the outputs are a set of pixel-wise curve parameter maps
for corresponding higher-order curves. It is a plain CNN of seven convolutional layers with symmetrical skip concatenations. Each layer consists of 32 convolutional kernels of size 3 x 3 with stride 1, followed by the ReLU activation function, and the last convolutional layer is followed by the Tanh activation function, which produces 24 parameter maps for 8 iterations, where each iteration requires three curve parameter maps for the three channels.
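As a quick check of how lightweight the network is, one can build the model from the build_dce_net() definition given in Chapter 5 and print its summary; the seven 32-filter convolutional layers plus the 24-filter output layer amount to roughly 79,000 trainable parameters.

dce_net = build_dce_net()  # definition given in Chapter 5
dce_net.summary()          # roughly 79k trainable parameters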
Figure 3.2: Architecture of DCE-Net
The network is trained by taking a set of carefully defined non-reference loss functions into account. This strategy allows the output image quality to be implicitly evaluated, and the evaluation results are fed back for network learning. In addition, our method is highly efficient and cost-effective: its efficiency exceeds that of current deep models by a large margin. These advantages benefit from our zero-reference learning framework, lightweight network structure, and effective non-reference loss functions.
Chapter 4
Loss Functions
4.1 Spatial Consistency Loss
The spatial consistency loss $L_{spa}$ encourages spatial coherence of the enhanced image by preserving the differences between neighbouring regions of the input image and its enhanced version:
$$L_{spa} = \frac{1}{K}\sum_{i=1}^{K}\sum_{j\in\Omega(i)}\big(\,|Y_i - Y_j| - |I_i - I_j|\,\big)^2,$$
where $K$ is the number of local regions, and $\Omega(i)$ is the set of four neighbouring regions (top, down, left, right) centred at region $i$. We denote $Y$ and $I$ as the average intensity values of the local region in the enhanced version and the input image, respectively. We empirically set the size of the local region to 4 x 4; the loss is stable for other region sizes. The process of computing the spatial consistency loss is illustrated in Figure 4.1.
Figure 4.1: Spatial Consistency Loss
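As a quick, hedged sanity check using the SpatialConsistencyLoss class defined in Chapter 5: when the enhanced image equals the input, every neighbouring-region difference is preserved exactly and the loss is zero.

spa_loss = SpatialConsistencyLoss()
img = tf.random.uniform([1, 64, 64, 3])
print(float(tf.reduce_mean(spa_loss(img, img))))  # 0.0 when input == enhanced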
4.2 Exposure Control Loss
To restrain under- and over-exposed regions, the exposure control loss $L_{exp}$ measures the distance between the average intensity value of a local region and the well-exposedness level $E$. Following existing practice, $E$ is set as the gray level in the RGB colour space, and we empirically set $E$ to 0.6 in our experiments. The loss can be expressed as:
$$L_{exp} = \frac{1}{M}\sum_{k=1}^{M}\big|\,\overline{Y}_k - E\,\big|,$$
where $M$ represents the number of non-overlapping local regions of size 16 x 16 and $\overline{Y}_k$ is the average intensity value of the $k$-th local region in the enhanced image.
Figure 4.2: Exposure Control Loss
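A similarly hedged sanity check of the exposure_loss function from Chapter 5: a constant image at the well-exposedness level E = 0.6 incurs (near) zero exposure loss, while a darker image is penalised.

well_exposed = tf.fill([1, 64, 64, 3], 0.6)
dark = tf.fill([1, 64, 64, 3], 0.1)
print(float(exposure_loss(well_exposed)))  # ~0.0
print(float(exposure_loss(dark)))          # ~0.25, i.e. (0.1 - 0.6)^2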
4.3 Color Constancy Loss
Following the Gray-World color constancy hypothesis that color in each sensor channel averages to gray over the entire image, we design a color constancy loss to correct potential color deviations in the enhanced image and to build relations among the three adjusted channels. The color constancy loss $L_{col}$ can be expressed as:
$$L_{col} = \sum_{(p,q)\in\varepsilon}\big(J^p - J^q\big)^2, \qquad \varepsilon = \{(R,G),(R,B),(G,B)\},$$
where $J^p$ denotes the average intensity value of channel $p$ in the enhanced image and $(p,q)$ denotes a pair of channels.
Figure 4.3: Color Constancy Loss
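A hedged sanity check of the color_constancy_loss function from Chapter 5: a neutral gray image has equal channel means and zero loss, while a strongly red-tinted image is penalised.

gray = tf.fill([1, 32, 32, 3], 0.5)
red = tf.concat([tf.fill([1, 32, 32, 1], 0.9), tf.fill([1, 32, 32, 2], 0.1)], axis=-1)
print(float(tf.reduce_mean(color_constancy_loss(gray))))  # 0.0
print(float(tf.reduce_mean(color_constancy_loss(red))))   # > 0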
4.4 Illumination Smoothness Loss
To preserve the monotonicity relations between neighbouring pixels, an illumination smoothness loss is applied to each curve parameter map $\mathcal{A}_n$:
$$L_{tv_{\mathcal{A}}} = \frac{1}{N}\sum_{n=1}^{N}\sum_{c\in\{R,G,B\}}\big(\,|\nabla_x \mathcal{A}_n^c| + |\nabla_y \mathcal{A}_n^c|\,\big)^2,$$
where $N$ is the number of iterations and $\nabla_x$, $\nabla_y$ denote the horizontal and vertical gradient operations, respectively. The total training loss is a weighted sum of the four losses; in our implementation (see compute_losses in Chapter 5) the illumination smoothness, spatial consistency, color constancy, and exposure losses are weighted by 200, 1, 5, and 10, respectively.
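As a quick, hedged sanity check using the illumination_smoothness_loss function from Chapter 5, a spatially constant parameter map has zero total variation:

flat_map = tf.ones([1, 8, 8, 3])
print(float(illumination_smoothness_loss(flat_map)))  # 0.0 for a constant map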
Chapter 5
Experimental procedure
5.1 Dataset Creation
We use 300 low-light images from the LoL Dataset training set for training, and we use the remaining 185 low-light images for validation. We resize the images to 256 x 256 for both training and validation. Note that in order to train the DCE-Net, we only require the low-light images; no paired ground-truth images are needed.
5.2 Code For Low Light Enhancement with Zero DCE mode

import numpy as np
import tensorflow as tf
from glob import glob
from PIL import Image

import keras
from keras import layers

# Training configuration (300 training images and 256 x 256 inputs follow the
# description above; the batch size follows the reference Keras Zero-DCE example).
IMAGE_SIZE = 256
BATCH_SIZE = 16
MAX_TRAIN_IMAGES = 300

# Loading Images

def load_data(image_path):
    image = tf.io.read_file(image_path)
    image = tf.image.decode_png(image, channels=3)
    image = tf.image.resize(images=image, size=[IMAGE_SIZE, IMAGE_SIZE])
    image = image / 255.0
    return image


def data_generator(low_light_images):
    dataset = tf.data.Dataset.from_tensor_slices((low_light_images))
    dataset = dataset.map(load_data, num_parallel_calls=tf.data.AUTOTUNE)
    dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
    return dataset


train_low_light_images = sorted(glob("./lol_dataset/our485/low/*"))[:MAX_TRAIN_IMAGES]
val_low_light_images = sorted(glob("./lol_dataset/our485/low/*"))[MAX_TRAIN_IMAGES:]
test_low_light_images = sorted(glob("./lol_dataset/eval15/low/*"))

train_dataset = data_generator(train_low_light_images)
val_dataset = data_generator(val_low_light_images)

print("Train Dataset:", train_dataset)
print("Validation Dataset:", val_dataset)
# Building the DCE-Net

def build_dce_net():
    input_img = keras.Input(shape=[None, None, 3])
    conv1 = layers.Conv2D(
        32, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(input_img)
    conv2 = layers.Conv2D(
        32, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(conv1)
    conv3 = layers.Conv2D(
        32, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(conv2)
    conv4 = layers.Conv2D(
        32, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(conv3)
    int_con1 = layers.Concatenate(axis=-1)([conv4, conv3])
    conv5 = layers.Conv2D(
        32, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(int_con1)
    int_con2 = layers.Concatenate(axis=-1)([conv5, conv2])
    conv6 = layers.Conv2D(
        32, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(int_con2)
    int_con3 = layers.Concatenate(axis=-1)([conv6, conv1])
    x_r = layers.Conv2D(
        24, (3, 3), strides=(1, 1), activation="tanh", padding="same"
    )(int_con3)
    return keras.Model(inputs=input_img, outputs=x_r)
# Loss Functions

def color_constancy_loss(x):
    mean_rgb = tf.reduce_mean(x, axis=(1, 2), keepdims=True)
    mr, mg, mb = (
        mean_rgb[:, :, :, 0],
        mean_rgb[:, :, :, 1],
        mean_rgb[:, :, :, 2],
    )
    d_rg = tf.square(mr - mg)
    d_rb = tf.square(mr - mb)
    d_gb = tf.square(mb - mg)
    return tf.sqrt(tf.square(d_rg) + tf.square(d_rb) + tf.square(d_gb))


def exposure_loss(x, mean_val=0.6):
    x = tf.reduce_mean(x, axis=3, keepdims=True)
    mean = tf.nn.avg_pool2d(x, ksize=16, strides=16, padding="VALID")
    return tf.reduce_mean(tf.square(mean - mean_val))


def illumination_smoothness_loss(x):
    batch_size = tf.shape(x)[0]
    h_x = tf.shape(x)[1]
    w_x = tf.shape(x)[2]
    count_h = (tf.shape(x)[2] - 1) * tf.shape(x)[3]
    count_w = tf.shape(x)[2] * (tf.shape(x)[3] - 1)
    h_tv = tf.reduce_sum(tf.square((x[:, 1:, :, :] - x[:, : h_x - 1, :, :])))
    w_tv = tf.reduce_sum(tf.square((x[:, :, 1:, :] - x[:, :, : w_x - 1, :])))
    batch_size = tf.cast(batch_size, dtype=tf.float32)
    count_h = tf.cast(count_h, dtype=tf.float32)
    count_w = tf.cast(count_w, dtype=tf.float32)
    return 2 * (h_tv / count_h + w_tv / count_w) / batch_size
class SpatialConsistencyLoss(keras.losses.Loss):
    def __init__(self, **kwargs):
        super().__init__(reduction="none")

        self.left_kernel = tf.constant(
            [[[[0, 0, 0]], [[-1, 1, 0]], [[0, 0, 0]]]], dtype=tf.float32
        )
        self.right_kernel = tf.constant(
            [[[[0, 0, 0]], [[0, 1, -1]], [[0, 0, 0]]]], dtype=tf.float32
        )
        self.up_kernel = tf.constant(
            [[[[0, -1, 0]], [[0, 1, 0]], [[0, 0, 0]]]], dtype=tf.float32
        )
        self.down_kernel = tf.constant(
            [[[[0, 0, 0]], [[0, 1, 0]], [[0, -1, 0]]]], dtype=tf.float32
        )
    def call(self, y_true, y_pred):
        original_mean = tf.reduce_mean(y_true, 3, keepdims=True)
        enhanced_mean = tf.reduce_mean(y_pred, 3, keepdims=True)
        original_pool = tf.nn.avg_pool2d(
            original_mean, ksize=4, strides=4, padding="VALID"
        )
        enhanced_pool = tf.nn.avg_pool2d(
            enhanced_mean, ksize=4, strides=4, padding="VALID"
        )

        d_original_left = tf.nn.conv2d(
            original_pool, self.left_kernel, strides=[1, 1, 1, 1], padding="SAME"
        )
        d_original_right = tf.nn.conv2d(
            original_pool, self.right_kernel, strides=[1, 1, 1, 1], padding="SAME"
        )
        d_original_up = tf.nn.conv2d(
            original_pool, self.up_kernel, strides=[1, 1, 1, 1], padding="SAME"
        )
        d_original_down = tf.nn.conv2d(
            original_pool, self.down_kernel, strides=[1, 1, 1, 1], padding="SAME"
        )

        d_enhanced_left = tf.nn.conv2d(
            enhanced_pool, self.left_kernel, strides=[1, 1, 1, 1], padding="SAME"
        )
        d_enhanced_right = tf.nn.conv2d(
            enhanced_pool, self.right_kernel, strides=[1, 1, 1, 1], padding="SAME"
        )
        d_enhanced_up = tf.nn.conv2d(
            enhanced_pool, self.up_kernel, strides=[1, 1, 1, 1], padding="SAME"
        )
        d_enhanced_down = tf.nn.conv2d(
            enhanced_pool, self.down_kernel, strides=[1, 1, 1, 1], padding="SAME"
        )

        d_left = tf.square(d_original_left - d_enhanced_left)
        d_right = tf.square(d_original_right - d_enhanced_right)
        d_up = tf.square(d_original_up - d_enhanced_up)
        d_down = tf.square(d_original_down - d_enhanced_down)
        return d_left + d_right + d_up + d_down
# Deep Curve Estimation Model

class ZeroDCE(keras.Model):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.dce_model = build_dce_net()

    def compile(self, learning_rate, **kwargs):
        super().compile(**kwargs)
        self.optimizer = keras.optimizers.Adam(learning_rate=learning_rate)
        self.spatial_constancy_loss = SpatialConsistencyLoss(reduction="none")
        self.total_loss_tracker = keras.metrics.Mean(name="total_loss")
        self.illumination_smoothness_loss_tracker = keras.metrics.Mean(
            name="illumination_smoothness_loss"
        )
        self.spatial_constancy_loss_tracker = keras.metrics.Mean(
            name="spatial_constancy_loss"
        )
        self.color_constancy_loss_tracker = keras.metrics.Mean(
            name="color_constancy_loss"
        )
        self.exposure_loss_tracker = keras.metrics.Mean(name="exposure_loss")
    @property
    def metrics(self):
        return [
            self.total_loss_tracker,
            self.illumination_smoothness_loss_tracker,
            self.spatial_constancy_loss_tracker,
            self.color_constancy_loss_tracker,
            self.exposure_loss_tracker,
        ]

    def get_enhanced_image(self, data, output):
        r1 = output[:, :, :, :3]
        r2 = output[:, :, :, 3:6]
        r3 = output[:, :, :, 6:9]
        r4 = output[:, :, :, 9:12]
        r5 = output[:, :, :, 12:15]
        r6 = output[:, :, :, 15:18]
        r7 = output[:, :, :, 18:21]
        r8 = output[:, :, :, 21:24]
        x = data + r1 * (tf.square(data) - data)
        x = x + r2 * (tf.square(x) - x)
        x = x + r3 * (tf.square(x) - x)
        enhanced_image = x + r4 * (tf.square(x) - x)
        x = enhanced_image + r5 * (tf.square(enhanced_image) - enhanced_image)
        x = x + r6 * (tf.square(x) - x)
        x = x + r7 * (tf.square(x) - x)
        enhanced_image = x + r8 * (tf.square(x) - x)
        return enhanced_image

    def call(self, data):
        dce_net_output = self.dce_model(data)
        return self.get_enhanced_image(data, dce_net_output)

    def compute_losses(self, data, output):
        enhanced_image = self.get_enhanced_image(data, output)
        loss_illumination = 200 * illumination_smoothness_loss(output)
        loss_spatial_constancy = tf.reduce_mean(
            self.spatial_constancy_loss(enhanced_image, data)
        )
        loss_color_constancy = 5 * tf.reduce_mean(color_constancy_loss(enhanced_image))
        loss_exposure = 10 * tf.reduce_mean(exposure_loss(enhanced_image))
        total_loss = (
            loss_illumination
            + loss_spatial_constancy
            + loss_color_constancy
            + loss_exposure
        )

        return {
            "total_loss": total_loss,
            "illumination_smoothness_loss": loss_illumination,
            "spatial_constancy_loss": loss_spatial_constancy,
            "color_constancy_loss": loss_color_constancy,
            "exposure_loss": loss_exposure,
        }
    def train_step(self, data):
        with tf.GradientTape() as tape:
            output = self.dce_model(data)
            losses = self.compute_losses(data, output)

        gradients = tape.gradient(
            losses["total_loss"], self.dce_model.trainable_weights
        )
        self.optimizer.apply_gradients(
            zip(gradients, self.dce_model.trainable_weights)
        )

        self.total_loss_tracker.update_state(losses["total_loss"])
        self.illumination_smoothness_loss_tracker.update_state(
            losses["illumination_smoothness_loss"]
        )
        self.spatial_constancy_loss_tracker.update_state(
            losses["spatial_constancy_loss"]
        )
        self.color_constancy_loss_tracker.update_state(losses["color_constancy_loss"])
        self.exposure_loss_tracker.update_state(losses["exposure_loss"])

        return {metric.name: metric.result() for metric in self.metrics}

    def test_step(self, data):
        output = self.dce_model(data)
        losses = self.compute_losses(data, output)
        self.total_loss_tracker.update_state(losses["total_loss"])
        self.illumination_smoothness_loss_tracker.update_state(
            losses["illumination_smoothness_loss"]
        )
        self.spatial_constancy_loss_tracker.update_state(
            losses["spatial_constancy_loss"]
        )
        self.color_constancy_loss_tracker.update_state(losses["color_constancy_loss"])
        self.exposure_loss_tracker.update_state(losses["exposure_loss"])

        return {metric.name: metric.result() for metric in self.metrics}
    def save_weights(self, filepath, overwrite=True, save_format=None, options=None):
        """While saving the weights, we simply save the weights of the DCE-Net"""
        self.dce_model.save_weights(
            filepath,
            overwrite=overwrite,
            save_format=save_format,
            options=options,
        )

    def load_weights(self, filepath, by_name=False, skip_mismatch=False, options=None):
        """While loading the weights, we simply load the weights of the DCE-Net"""
        self.dce_model.load_weights(
            filepath=filepath,
            by_name=by_name,
            skip_mismatch=skip_mismatch,
            options=options,
        )


## Training ##
zero_dce_model = ZeroDCE()
zero_dce_model.compile(learning_rate=1e-4)
# The loss-vs-epoch curves in Chapter 6 are produced from this training history
# (100 epochs and a learning rate of 1e-4 follow the reference Keras example).
history = zero_dce_model.fit(train_dataset, validation_data=val_dataset, epochs=100)
# Inference
def infer(original_image):
    image = keras.utils.img_to_array(original_image)
    image = image.astype("float32") / 255.0
    image = np.expand_dims(image, axis=0)
    output_image = zero_dce_model(image)
    output_image = tf.cast((output_image[0, :, :, :] * 255), dtype=np.uint8)
    output_image = Image.fromarray(output_image.numpy())
    return output_image
Chapter 6
Results
6.1 Evaluation
Figure 6.2: Illumination smoothness loss vs Epochs
Figure 6.4: Color Constancy loss vs Epochs
6.2 Results
Chapter 7
Conclusion
We proposed a deep network for low-light image enhancement. It can be trained end-
to-end with zero reference images. This is achieved by formulating the low-light image
enhancement task as an image-specific curve estimation problem, and devising a set of differentiable non-reference loss functions, which makes the approach suitable for practical applications. Our method excels in both enhancement performance and efficiency. Experiments demonstrate the superiority of our method over existing light enhancement methods [1] [2] [3].
Bibliography
[1] Zhen Tian, Peixin Qu, Jielin Li, Yukun Sun, Guohou Li, Zheng Liang, and Weidong
[2] Chunle Guo, Chongyi Li, Jichang Guo, Chen Change Loy, Junhui Hou, Sam Kwong, and Runmin Cong. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[3] Jiawei Guo, Jieming Ma, Ángel F. García-Fernández, Yungang Zhang, and Haining
Liang. A survey on image enhancement for low-light images. Heliyon, 2023.