
LOW LIGHT ENHANCEMENT WITH ZERO DCE MODE

A project report submitted in partial fulfillment of the requirements for


the award of the degree of

B.Tech. in

Electronics and Communication Engineering

By

BOYAPATI VISHNU (621128)

AMGOTH KARTHIK NAIK (621110)

BATTULA MANU (621121)

KASSE ASWAN KUMAR (621162)

KONDRU SRIVARDHAN REDDY (621168)

EC399
MINI PROJECT II
NATIONAL INSTITUTE OF TECHNOLOGY
ANDHRA PRADESH-534101
APRIL 2024
BONAFIDE CERTIFICATE

This is to certify that the project titled LOW LIGHT ENHANCEMENT WITH

ZERO DCE MODE is a bonafide record of the work done by

BOYAPATI VISHNU (621128)

AMGOTH KARTHIK NAIK (621110)

BATTULA MANU (621121)

KASSE ASWAN KUMAR (621162)

KONDRU SRIVARDHAN REDDY (621168)

in partial fulfillment of the requirements for the award of the degree of Bachelor of

Technology in ECE of the NATIONAL INSTITUTE OF TECHNOLOGY, ANDHRA


PRADESH, during the year 2023-2024.

Dr. A. Arun Kumar                                        Dr. S. Yuvaraj

Project Incharge                                         Head of the Department

ABSTRACT

We present Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network. Our method trains a lightweight deep network, DCE-Net, to estimate pixel-wise and high-order curves for dynamic range adjustment of a given image. The curve estimation is specially designed, considering pixel value range, monotonicity, and differentiability. Zero-DCE is appealing in its relaxed assumption on reference images, i.e., it does not require any paired or unpaired data during training. This is achieved through a set of carefully formulated non-reference loss functions, which implicitly measure the enhancement quality and drive the learning of the network.

Our method is efficient, as image enhancement can be achieved by an intuitive and simple nonlinear curve mapping. Despite its simplicity, we show that it generalizes well to diverse lighting conditions. Extensive experiments on various benchmarks demonstrate the advantages of our method over state-of-the-art methods qualitatively and quantitatively. Furthermore, the potential benefits of our Zero-DCE to face detection in the dark are discussed.

ACKNOWLEDGEMENT

We would like to thank the following people for their support and guidance, without whom this project would not have come to fruition.

Dr. A. Arun Kumar, our project incharge, for helping and guiding us over the course of this project.

Dr. S. Yuvaraj, the Head of the Department, Department of ECE.

Our internal reviewers, Ch. Chaitanya Krishna, Mr. J. Kondalarao, and Mrs. J. Dhanashree, for the insight and advice provided during the review sessions.

We would also like to thank our parents and friends for their constant support.

TABLE OF CONTENTS

Title Page No.

ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii

ACKNOWLEDGEMENT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii

TABLE OF CONTENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv

LIST OF FIGURES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

2 Review Of Literature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2.1 Challenges in low-light Imaging . . . . . . . . . . . . . . . . . . . . . 2
2.2 Existing System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

2.3 Comparative Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 2


2.4 Scope of the project . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

3 Zero DCE Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

3.1 Understanding light-enhancement curves . . . . . . . . . . . . . . . . . 4


3.2 DCE-Net . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

4 Loss Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

4.1 Spatial Consistency Loss . . . . . . . . . . . . . . . . . . . . . . . . . 7


4.2 Exposure Control Loss . . . . . . . . . . . . . . . . . . . . . . . . . . 8
4.3 Color Constancy Loss . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

4.4 Illumination Smoothness Loss . . . . . . . . . . . . . . . . . . . . . . 10

5 Experimental procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
5.1 Dataset Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
5.2 Code For Low Light Enhancement with Zero DCE mode . . . . . . . . 11

6 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
6.1 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
6.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

List of Figures

3.1 Zero DCE Framework . . . . . . . . . . . . . . . . . . . . . . . . . . 5


3.2 Architecture of DCE-Net . . . . . . . . . . . . . . . . . . . . . . . . . 6

4.1 Spatial Consistency Loss . . . . . . . . . . . . . . . . . . . . . . . . . 8


4.2 Exposure Control Loss . . . . . . . . . . . . . . . . . . . . . . . . . . 9

4.3 Color Constancy Loss . . . . . . . . . . . . . . . . . . . . . . . . . . . 10


4.4 Illumination Smoothness Loss . . . . . . . . . . . . . . . . . . . . . . 10

6.1 Total Loss vs Epochs . . . . . . . . . . . . . . . . . . . . . . . . . . . 19


6.2 Illumination smoothness loss vs Epochs . . . . . . . . . . . . . . . . . 20

6.3 Spatial Constancy loss vs Epochs . . . . . . . . . . . . . . . . . . . . . 20


6.4 Color Constancy loss vs Epochs . . . . . . . . . . . . . . . . . . . . . 21
6.5 Exposure loss vs Epochs . . . . . . . . . . . . . . . . . . . . . . . . . 21

6.6 Original and Enhanced Images(a) . . . . . . . . . . . . . . . . . . . . . 22


6.7 Original and Enhanced Images(b) . . . . . . . . . . . . . . . . . . . . . 22

Chapter 1

Introduction

1.1 Introduction

Zero-Reference Deep Curve Estimation, or Zero-DCE, formulates low-light image enhancement as the task of estimating an image-specific tonal curve with a deep neural network. In this project, we train a lightweight deep network, DCE-Net, to estimate pixel-wise and high-order tonal curves for dynamic range adjustment of a given image. Zero-DCE takes a low-light image as input and produces high-order tonal curves as its output. These curves are then used for pixel-wise adjustment of the dynamic range of the input to obtain an enhanced image. The curve estimation process is designed so that it maintains the range of the enhanced image and preserves the contrast of neighbouring pixels. This curve estimation is inspired by the curves adjustment used in photo editing software such as Adobe Photoshop, where users can adjust points throughout an image's tonal range. Zero-DCE is appealing because of its relaxed assumptions with regard to reference images: it does not require any input/output image pairs during training. This is achieved through a set of carefully formulated non-reference loss functions, which implicitly measure the enhancement quality and guide the training of the network.

Chapter 2

Review Of Literature

2.1 Challenges in low-light Imaging

We discuss the inherent challenges associated with capturing and processing images in low-light conditions, setting the stage for exploring potential solutions. These challenges include limited photon count, high noise levels, and reduced contrast, all of which contribute to degraded image quality. We highlight the importance of addressing these challenges to improve the performance of low-light image enhancement algorithms.

2.2 Existing System

A critical analysis of current methods and algorithms used for low-light image enhancement is presented, highlighting their strengths and limitations. Existing techniques often rely on deep curve estimation (DCE), which can be computationally expensive and may not always produce accurate results. Additionally, traditional techniques such as histogram equalization and dehazing algorithms may result in loss of details and color accuracy. We discuss the need for innovative approaches that overcome these limitations and provide robust enhancement capabilities.

2.3 Comparative Analysis

We provide a comparative analysis of different approaches, including their effectiveness in addressing various aspects of low-light image enhancement. This analysis serves as a basis for identifying gaps in existing techniques and informing the development of our proposed approach. We compare the performance of different algorithms based on criteria such as image quality, computational efficiency, and robustness to varying lighting conditions.

2.4 Scope of the project

Here, we define the scope of our project, including the specific goals and objectives we aim to achieve in the context of low-light image enhancement. Our primary objective is to develop a novel enhancement technique that improves brightness and visibility in low-light conditions while preserving image details and minimizing artifacts. We also aim to provide a Zero-DCE mode that delivers enhancement without overexposure or loss of quality. We outline the key aspects of our approach, including the integration of traditional image processing techniques and deep learning methodologies.

Chapter 3

Zero DCE Framework

The goal of DCE-Net is to estimate a set of best-fitting light-enhancement curves (LE-curves) given an input image. The framework then maps all pixels of the input's RGB channels by applying the curves iteratively to obtain the final enhanced image.

3.1 Understanding light-enhancement curves

A light-enhancement curve is a curve that can automatically map a low-light image to its enhanced version, where the self-adaptive curve parameters depend solely on the input image. When designing such a curve, three objectives should be taken into account:

• Each pixel value of the enhanced image should lie in the normalized range [0, 1], in order to avoid information loss induced by overflow truncation.

• It should be monotonic, to preserve the contrast between neighbouring pixels.

• The shape of the curve should be as simple as possible, and the curve should be differentiable to allow backpropagation.

The light-enhancement curve is applied separately to the three RGB channels instead of solely to the illumination channel. The three-channel adjustment better preserves the inherent color and reduces the risk of over-saturation.
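For concreteness, the basic LE-curve in the Zero-DCE formulation [2] is the quadratic mapping LE(I(x); α) = I(x) + α I(x)(1 − I(x)), with the parameter α ∈ [−1, 1], and the higher-order curve is obtained by applying it iteratively with a separate parameter map per iteration. The sketch below is only a minimal illustration of this idea; the names le_curve, apply_higher_order_curve, and alpha_maps are placeholders standing in for the parameter maps that DCE-Net (Section 3.2) actually predicts.

def le_curve(image, alpha):
    # image: NumPy array (or TF tensor) with pixel values in [0, 1]
    # alpha: per-pixel curve parameter map with values in [-1, 1]
    return image + alpha * image * (1.0 - image)

def apply_higher_order_curve(image, alpha_maps):
    # alpha_maps: list of parameter maps, one per iteration
    # (DCE-Net in Section 3.2 predicts 8 such maps for each RGB channel)
    x = image
    for alpha in alpha_maps:
        x = le_curve(x, alpha)
    return x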

Figure 3.1: Zero DCE Framework

3.2 DCE-Net

The DCE-Net is a lightweight deep neural network that learns the mapping between an input image and its best-fitting curve parameter maps. The input to the DCE-Net is a low-light image, while the outputs are a set of pixel-wise curve parameter maps for the corresponding higher-order curves. It is a plain CNN of seven convolutional layers with symmetrical concatenation. Each layer consists of 32 convolutional kernels of size 3×3 and stride 1, followed by the ReLU activation function. The last convolutional layer is followed by the Tanh activation function, which produces 24 parameter maps for 8 iterations, where each iteration requires three curve parameter maps for the three channels.
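A quick parameter count makes the lightweight claim concrete: counting 3×3 kernels plus biases, the first layer (3→32 channels) has 3·3·3·32 + 32 = 896 parameters, layers 2–4 (32→32) have 3·3·32·32 + 32 = 9,248 each, layers 5 and 6 operate on 64-channel concatenated inputs (64→32) and have 3·3·64·32 + 32 = 18,464 each, and the output layer (64→24) has 3·3·64·24 + 24 = 13,848, for a total of 79,416 trainable parameters.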

Figure 3.2: Architecture of DCE-Net

The proposed method is superior to existing data-driven methods in three aspects. First, it explores a new learning strategy, i.e., one that requires zero reference images, hence eliminating the need for paired and unpaired data. Second, the network is trained by taking carefully defined non-reference loss functions into account. This strategy allows the output image quality to be evaluated implicitly, and the result is fed back into network learning. Third, our method is highly efficient and cost-effective, surpassing current deep models in efficiency by a large margin. These advantages benefit from our zero-reference learning framework, lightweight network structure, and effective non-reference loss functions.

Chapter 4

Loss Functions

To enable zero-reference learning in DCE-Net, we propose a set of differentiable non-reference losses that allow us to evaluate the quality of enhanced images. The following four types of losses are adopted to train our DCE-Net.

4.1 Spatial Consistency Loss

The spatial consistency loss Lspa encourages spatial coherence of the enhanced image by preserving the differences between neighbouring regions of the input image and its enhanced version:

L_spa = (1/K) Σ_i Σ_{j ∈ Ω(i)} ( |Y_i − Y_j| − |I_i − I_j| )²,

where K is the number of local regions and Ω(i) is the set of four regions (top, down, left, right) neighbouring region i. Y and I denote the average intensity values of a local region in the enhanced version and in the input image, respectively. We empirically set the size of the local region to 4 × 4; the loss is stable for other region sizes. The process of computing the spatial consistency loss is illustrated in Figure 4.1.

Figure 4.1: Spatial Consistency Loss

4.2 Exposure Control Loss

To restrain under-/over-exposed regions, we design an exposure control loss Lexp to control the exposure level. The exposure control loss measures the distance between the average intensity value of a local region and the well-exposedness level E. Following existing practice, we set E as the gray level in the RGB color space and empirically set E to 0.6 in our experiments. The loss Lexp can be expressed as

L_exp = (1/M) Σ_{k=1}^{M} |Y_k − E|,

where M represents the number of non-overlapping local regions of size 16 × 16, and Y_k is the average intensity value of the k-th local region in the enhanced image.

Figure 4.2: Exposure Control Loss

4.3 Color Constancy Loss

Following the Gray-World color constancy hypothesis, which states that the color in each sensor channel averages to gray over the entire image, we design a color constancy loss to correct potential color deviations in the enhanced image and to build relations among the three adjusted channels. The color constancy loss Lcol can be expressed as

L_col = Σ_{(p,q) ∈ ε} (J_p − J_q)²,   ε = {(R,G), (R,B), (G,B)},

where J_p denotes the average intensity value of channel p in the enhanced image, and (p, q) denotes a pair of channels.

Figure 4.3: Color Constancy Loss

4.4 Illumination Smoothness Loss

To preserve the monotonicity relations between neighbouring pixels, we add an illumination smoothness loss to each curve parameter map A. The illumination smoothness loss Ltv,A is defined as

L_tv,A = (1/N) Σ_{n=1}^{N} Σ_{c ∈ {R,G,B}} ( |∇_x A_n^c| + |∇_y A_n^c| )²,

where N is the number of iterations, and ∇_x and ∇_y denote the horizontal and vertical gradient operations, respectively.

Figure 4.4: Illumination Smoothness Loss
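Taken together, the four non-reference losses are combined into a single training objective. In the implementation listed in Chapter 5, the weighted sum used is L_total = L_spa + 10·L_exp + 5·L_col + 200·L_tv,A, where the weights balance the very different magnitudes of the individual terms.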

Chapter 5

Experimental procedure

5.1 Dataset Creation

We use 300 low-light images from the LoL Dataset training set for training, and we use the remaining 185 low-light images for validation. We resize the images to 256 × 256 for both training and validation. Note that in order to train the DCE-Net, we do not require the corresponding enhanced images.

5.2 Code For Low Light Enhancement with Zero DCE mode

# Imports required by this listing
import numpy as np
from glob import glob
from PIL import Image, ImageOps
import matplotlib.pyplot as plt

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

IMAGE_SIZE = 256
BATCH_SIZE = 16
MAX_TRAIN_IMAGES = 400


# Loading Images

def load_data(image_path):
    image = tf.io.read_file(image_path)
    image = tf.image.decode_png(image, channels=3)
    image = tf.image.resize(images=image, size=[IMAGE_SIZE, IMAGE_SIZE])
    image = image / 255.0
    return image


def data_generator(low_light_images):
    dataset = tf.data.Dataset.from_tensor_slices((low_light_images))
    dataset = dataset.map(load_data, num_parallel_calls=tf.data.AUTOTUNE)
    dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
    return dataset


train_low_light_images = sorted(glob("./lol_dataset/our485/low/*"))[:MAX_TRAIN_IMAGES]
val_low_light_images = sorted(glob("./lol_dataset/our485/low/*"))[MAX_TRAIN_IMAGES:]
test_low_light_images = sorted(glob("./lol_dataset/eval15/low/*"))

train_dataset = data_generator(train_low_light_images)
val_dataset = data_generator(val_low_light_images)

print("Train Dataset:", train_dataset)
print("Validation Dataset:", val_dataset)

# Building the DCE-Net

def build_dce_net():
    input_img = keras.Input(shape=[None, None, 3])
    conv1 = layers.Conv2D(
        32, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(input_img)
    conv2 = layers.Conv2D(
        32, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(conv1)
    conv3 = layers.Conv2D(
        32, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(conv2)
    conv4 = layers.Conv2D(
        32, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(conv3)
    # Symmetrical skip concatenations
    int_con1 = layers.Concatenate(axis=-1)([conv4, conv3])
    conv5 = layers.Conv2D(
        32, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(int_con1)
    int_con2 = layers.Concatenate(axis=-1)([conv5, conv2])
    conv6 = layers.Conv2D(
        32, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(int_con2)
    int_con3 = layers.Concatenate(axis=-1)([conv6, conv1])
    # Final Tanh layer outputs 24 curve parameter maps (8 iterations x 3 channels)
    x_r = layers.Conv2D(24, (3, 3), strides=(1, 1), activation="tanh", padding="same")(
        int_con3
    )
    return keras.Model(inputs=input_img, outputs=x_r)

# Loss Functions

def color_constancy_loss(x):
    # Gray-World assumption: penalise deviations between channel means
    mean_rgb = tf.reduce_mean(x, axis=(1, 2), keepdims=True)
    mr, mg, mb = (
        mean_rgb[:, :, :, 0],
        mean_rgb[:, :, :, 1],
        mean_rgb[:, :, :, 2],
    )
    d_rg = tf.square(mr - mg)
    d_rb = tf.square(mr - mb)
    d_gb = tf.square(mb - mg)
    return tf.sqrt(tf.square(d_rg) + tf.square(d_rb) + tf.square(d_gb))


def exposure_loss(x, mean_val=0.6):
    # Distance of 16x16 local average intensities from the well-exposedness level E = 0.6
    x = tf.reduce_mean(x, axis=3, keepdims=True)
    mean = tf.nn.avg_pool2d(x, ksize=16, strides=16, padding="VALID")
    return tf.reduce_mean(tf.square(mean - mean_val))


def illumination_smoothness_loss(x):
    # Total-variation penalty on the curve parameter maps
    batch_size = tf.shape(x)[0]
    h_x = tf.shape(x)[1]
    w_x = tf.shape(x)[2]
    count_h = (tf.shape(x)[2] - 1) * tf.shape(x)[3]
    count_w = tf.shape(x)[2] * (tf.shape(x)[3] - 1)
    h_tv = tf.reduce_sum(tf.square(x[:, 1:, :, :] - x[:, : h_x - 1, :, :]))
    w_tv = tf.reduce_sum(tf.square(x[:, :, 1:, :] - x[:, :, : w_x - 1, :]))
    batch_size = tf.cast(batch_size, dtype=tf.float32)
    count_h = tf.cast(count_h, dtype=tf.float32)
    count_w = tf.cast(count_w, dtype=tf.float32)
    return 2 * (h_tv / count_h + w_tv / count_w) / batch_size


class SpatialConsistencyLoss(keras.losses.Loss):
    def __init__(self, **kwargs):
        super().__init__(reduction="none")

        # Difference kernels for the four neighbouring directions
        self.left_kernel = tf.constant(
            [[[[0, 0, 0]], [[-1, 1, 0]], [[0, 0, 0]]]], dtype=tf.float32
        )
        self.right_kernel = tf.constant(
            [[[[0, 0, 0]], [[0, 1, -1]], [[0, 0, 0]]]], dtype=tf.float32
        )
        self.up_kernel = tf.constant(
            [[[[0, -1, 0]], [[0, 1, 0]], [[0, 0, 0]]]], dtype=tf.float32
        )
        self.down_kernel = tf.constant(
            [[[[0, 0, 0]], [[0, 1, 0]], [[0, -1, 0]]]], dtype=tf.float32
        )

    def call(self, y_true, y_pred):
        # Average 4x4 local regions of the input (y_true) and enhanced (y_pred) images
        original_mean = tf.reduce_mean(y_true, 3, keepdims=True)
        enhanced_mean = tf.reduce_mean(y_pred, 3, keepdims=True)
        original_pool = tf.nn.avg_pool2d(
            original_mean, ksize=4, strides=4, padding="VALID"
        )
        enhanced_pool = tf.nn.avg_pool2d(
            enhanced_mean, ksize=4, strides=4, padding="VALID"
        )

        d_original_left = tf.nn.conv2d(
            original_pool, self.left_kernel, strides=[1, 1, 1, 1], padding="SAME"
        )
        d_original_right = tf.nn.conv2d(
            original_pool, self.right_kernel, strides=[1, 1, 1, 1], padding="SAME"
        )
        d_original_up = tf.nn.conv2d(
            original_pool, self.up_kernel, strides=[1, 1, 1, 1], padding="SAME"
        )
        d_original_down = tf.nn.conv2d(
            original_pool, self.down_kernel, strides=[1, 1, 1, 1], padding="SAME"
        )

        d_enhanced_left = tf.nn.conv2d(
            enhanced_pool, self.left_kernel, strides=[1, 1, 1, 1], padding="SAME"
        )
        d_enhanced_right = tf.nn.conv2d(
            enhanced_pool, self.right_kernel, strides=[1, 1, 1, 1], padding="SAME"
        )
        d_enhanced_up = tf.nn.conv2d(
            enhanced_pool, self.up_kernel, strides=[1, 1, 1, 1], padding="SAME"
        )
        d_enhanced_down = tf.nn.conv2d(
            enhanced_pool, self.down_kernel, strides=[1, 1, 1, 1], padding="SAME"
        )

        d_left = tf.square(d_original_left - d_enhanced_left)
        d_right = tf.square(d_original_right - d_enhanced_right)
        d_up = tf.square(d_original_up - d_enhanced_up)
        d_down = tf.square(d_original_down - d_enhanced_down)
        return d_left + d_right + d_up + d_down

# Deep Curve Estimation Model

class ZeroDCE(keras.Model):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.dce_model = build_dce_net()

    def compile(self, learning_rate, **kwargs):
        super().compile(**kwargs)
        self.optimizer = keras.optimizers.Adam(learning_rate=learning_rate)
        self.spatial_constancy_loss = SpatialConsistencyLoss(reduction="none")
        self.total_loss_tracker = keras.metrics.Mean(name="total_loss")
        self.illumination_smoothness_loss_tracker = keras.metrics.Mean(
            name="illumination_smoothness_loss"
        )
        self.spatial_constancy_loss_tracker = keras.metrics.Mean(
            name="spatial_constancy_loss"
        )
        self.color_constancy_loss_tracker = keras.metrics.Mean(
            name="color_constancy_loss"
        )
        self.exposure_loss_tracker = keras.metrics.Mean(name="exposure_loss")

    @property
    def metrics(self):
        return [
            self.total_loss_tracker,
            self.illumination_smoothness_loss_tracker,
            self.spatial_constancy_loss_tracker,
            self.color_constancy_loss_tracker,
            self.exposure_loss_tracker,
        ]

    def get_enhanced_image(self, data, output):
        # Split the 24 parameter maps into 8 per-iteration maps of 3 channels each
        r1 = output[:, :, :, :3]
        r2 = output[:, :, :, 3:6]
        r3 = output[:, :, :, 6:9]
        r4 = output[:, :, :, 9:12]
        r5 = output[:, :, :, 12:15]
        r6 = output[:, :, :, 15:18]
        r7 = output[:, :, :, 18:21]
        r8 = output[:, :, :, 21:24]
        # Apply the quadratic LE-curve iteratively
        x = data + r1 * (tf.square(data) - data)
        x = x + r2 * (tf.square(x) - x)
        x = x + r3 * (tf.square(x) - x)
        enhanced_image = x + r4 * (tf.square(x) - x)
        x = enhanced_image + r5 * (tf.square(enhanced_image) - enhanced_image)
        x = x + r6 * (tf.square(x) - x)
        x = x + r7 * (tf.square(x) - x)
        enhanced_image = x + r8 * (tf.square(x) - x)
        return enhanced_image

    def call(self, data):
        dce_net_output = self.dce_model(data)
        return self.get_enhanced_image(data, dce_net_output)

    def compute_losses(self, data, output):
        # Weighted sum of the four non-reference losses
        enhanced_image = self.get_enhanced_image(data, output)
        loss_illumination = 200 * illumination_smoothness_loss(output)
        loss_spatial_constancy = tf.reduce_mean(
            self.spatial_constancy_loss(enhanced_image, data)
        )
        loss_color_constancy = 5 * tf.reduce_mean(color_constancy_loss(enhanced_image))
        loss_exposure = 10 * tf.reduce_mean(exposure_loss(enhanced_image))
        total_loss = (
            loss_illumination
            + loss_spatial_constancy
            + loss_color_constancy
            + loss_exposure
        )

        return {
            "total_loss": total_loss,
            "illumination_smoothness_loss": loss_illumination,
            "spatial_constancy_loss": loss_spatial_constancy,
            "color_constancy_loss": loss_color_constancy,
            "exposure_loss": loss_exposure,
        }

    def train_step(self, data):
        with tf.GradientTape() as tape:
            output = self.dce_model(data)
            losses = self.compute_losses(data, output)

        gradients = tape.gradient(
            losses["total_loss"], self.dce_model.trainable_weights
        )
        self.optimizer.apply_gradients(zip(gradients, self.dce_model.trainable_weights))

        self.total_loss_tracker.update_state(losses["total_loss"])
        self.illumination_smoothness_loss_tracker.update_state(
            losses["illumination_smoothness_loss"]
        )
        self.spatial_constancy_loss_tracker.update_state(
            losses["spatial_constancy_loss"]
        )
        self.color_constancy_loss_tracker.update_state(losses["color_constancy_loss"])
        self.exposure_loss_tracker.update_state(losses["exposure_loss"])

        return {metric.name: metric.result() for metric in self.metrics}

    def test_step(self, data):
        output = self.dce_model(data)
        losses = self.compute_losses(data, output)

        self.total_loss_tracker.update_state(losses["total_loss"])
        self.illumination_smoothness_loss_tracker.update_state(
            losses["illumination_smoothness_loss"]
        )
        self.spatial_constancy_loss_tracker.update_state(
            losses["spatial_constancy_loss"]
        )
        self.color_constancy_loss_tracker.update_state(losses["color_constancy_loss"])
        self.exposure_loss_tracker.update_state(losses["exposure_loss"])

        return {metric.name: metric.result() for metric in self.metrics}

    def save_weights(self, filepath, overwrite=True, save_format=None, options=None):
        """While saving the weights, we simply save the weights of the DCE-Net."""
        self.dce_model.save_weights(
            filepath,
            overwrite=overwrite,
            save_format=save_format,
            options=options,
        )

    def load_weights(self, filepath, by_name=False, skip_mismatch=False, options=None):
        """While loading the weights, we simply load the weights of the DCE-Net."""
        self.dce_model.load_weights(
            filepath=filepath,
            by_name=by_name,
            skip_mismatch=skip_mismatch,
            options=options,
        )

## Training ##

zero_dce_model = ZeroDCE()
zero_dce_model.compile(learning_rate=1e-4)
history = zero_dce_model.fit(train_dataset, validation_data=val_dataset, epochs=100)


def plot_result(item):
    plt.plot(history.history[item], label=item)
    plt.plot(history.history["val_" + item], label="val_" + item)
    plt.xlabel("Epochs")
    plt.ylabel(item)
    plt.title("Train and Validation {} Over Epochs".format(item), fontsize=14)
    plt.legend()
    plt.grid()
    plt.show()


plot_result("total_loss")
plot_result("illumination_smoothness_loss")
plot_result("spatial_constancy_loss")
plot_result("color_constancy_loss")
plot_result("exposure_loss")


# Plotting Results

def plot_results(images, titles, figure_size=(12, 12)):
    fig = plt.figure(figsize=figure_size)
    for i in range(len(images)):
        fig.add_subplot(1, len(images), i + 1).set_title(titles[i])
        _ = plt.imshow(images[i])
        plt.axis("off")
    plt.show()


# Inference

def infer(original_image):
    image = keras.utils.img_to_array(original_image)
    image = image.astype("float32") / 255.0
    image = np.expand_dims(image, axis=0)
    output_image = zero_dce_model(image)
    output_image = tf.cast((output_image[0, :, :, :] * 255), dtype=np.uint8)
    output_image = Image.fromarray(output_image.numpy())
    return output_image


for val_image_file in test_low_light_images:
    original_image = Image.open(val_image_file)
    enhanced_image = infer(original_image)
    plot_results(
        [original_image, ImageOps.autocontrast(original_image), enhanced_image],
        ["Original", "PIL Autocontrast", "Enhanced"],
        (20, 12),
    )
Listing 5.1: Low light enhancement with Zero DCE mode: data loading, model definition, training, and inference
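Since ZeroDCE overrides save_weights and load_weights to act on the underlying DCE-Net only, the trained curve estimator can be saved after training and restored later for inference. A minimal usage sketch, assuming the same TensorFlow/Keras setup as the listing above (the file name zero_dce.weights.h5 is only an example):

zero_dce_model.save_weights("zero_dce.weights.h5")  # persists only the DCE-Net weights

restored_model = ZeroDCE()
restored_model.load_weights("zero_dce.weights.h5")
# restored_model(image_batch) now enhances images just as zero_dce_model does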

Chapter 6

Results

6.1 Evaluation

Figure 6.1: Total Loss vs Epochs

Figure 6.2: Illumination smoothness loss vs Epochs

Figure 6.3: Spatial Constancy loss vs Epochs

Figure 6.4: Color Constancy loss vs Epochs

Figure 6.5: Exposure loss vs Epochs

6.2 Results

Figure 6.6: Original and Enhanced Images(a)

Figure 6.7: Original and Enhanced Images(b)

Chapter 7

Conclusion

We proposed a deep network for low-light image enhancement that can be trained end-to-end with zero reference images. This is achieved by formulating the low-light image enhancement task as an image-specific curve estimation problem and by devising a set of differentiable non-reference losses. By redesigning the network structure, reformulating the curve estimation, and controlling the size of the input image, the proposed Zero-DCE could be made even more lightweight and faster for practical applications. Our method excels in both enhancement performance and efficiency. Experiments demonstrate the superiority of our method against existing light enhancement methods [1] [2] [3].

Bibliography

[1] Zhen Tian, Peixin Qu, Jielin Li, Yukun Sun, Guohou Li, Zheng Liang, and Weidong Zhang. A survey of deep learning-based low-light image enhancement. Sensors, 23(18):7763, 2023.

[2] Chunle Guo, Chongyi Li, Jichang Guo, Chen Change Loy, Junhui Hou, Sam Kwong, and Runmin Cong. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1780–1789, 2020.

[3] Jiawei Guo, Jieming Ma, Ángel F. García-Fernández, Yungang Zhang, and Haining Liang. A survey on image enhancement for low-light images. Heliyon, 2023.
