DEMOSAICKING IMAGES WITH MOTION BLUR
Shay Har-Noy, Stanley H. Chan, and Truong Q. Nguyen
UC - San Diego, ECE Dept.
http://videoprocessing.ucsd.edu
harnoy@alumni.rice.edu, h5chan@ucsd.edu, nguyent@ece.ucsd.edu
ABSTRACT
In standard digital color imaging, each pixel position acquires
data for only one color plane and the remaining two color
planes must be inferred through a process known as demosaicking. Furthermore, the image is susceptible to blurring artifacts due to a moving camera or fast-moving subject. In this
work we develop a robust framework to demosaick the color
filter array (CFA) image while reducing the blur corrupting
the image. We begin by defining a color motion blur model
that describes the motion blur artifacts affecting color images.
We then integrate the motion blur model in the demosaicking
algorithm to obtain a computationally efficient framework for
deblurring while demosaicking.
Index Terms— Cameras, image restoration, image sensors.
1. INTRODUCTION
The typical digital image acquisition pipeline uses a single
sensor along with a color filter array (CFA) that captures only
one component at each spatial location (e.g. the Bayer Pattern [1]). Demosaicking algorithms that leverage the spatial
and spectral redundancies in the image are then applied to the
M × N output of the CFA to produce an M × N × 3 color
image (see [2] for a review).
Traditional image demosaicking methods usually do not
consider blurring artifacts and assume that the "known"
M × N pixels are accurate. Conversely, image restoration [3] techniques typically assume that no demosaicking
took place and that all M × N × 3 pixels are accurate. In
this paper we consider the problem of joint deblurring and
demosaicking where both of these assumptions are broken.
The fundamental tradeoff between temporal and spatial resolution [4] causes the problem of motion blur to be particularly
relevant. As the density of pixels on a CCD sensor increases,
the size of each pixel gets smaller and therefore so does the
amount of light hitting the sensor at each pixel location. In
This work is supported by Broadcom Corporation and UC Discovery.
The authors would like to thank Quang Dam Le and Sunkwang Hong for their
assistance and support, and Dr. Keigo Hirakawa for his insight and for
providing assistance in gathering relevant code and sample images.
978-1-4244-4296-6/10/$25.00 ©2010 IEEE
order to maintain a high SNR at each pixel location, the exposure time must therefore be increased leading to motion
blur in the case of camera or subject motion. Indeed, demosaicking and deblurring have been previously discussed in a
few prior works [5–8], to which we will compare our results.
The proposed deblurring-demosaicking framework uses
recently developed methods of CFA signal decomposition [9,
10] to extract the M × N luminance image. The estimate of
the luminance image is then sharpened using any image deblurring technique with the result combined with an estimate
of the demosaicked color image to form a final color image
consistent with the deblurred M × N luminance image. Due
to the flexibility in choosing the demosaicking and deblurring
methods, the algorithm is able to utilize both the state of the
art demosaicking techniques and image restoration techniques
based on the computational constraints of the application and
desired perceptual performance.
In section 2 we introduce a model for motion blur in color
images acquired with a digital camera using a CFA. Section 3
discusses demosaicking in the frequency domain along with
a method to extract the full resolution luminance component.
Section 4 introduces an algorithm that combats blurring artifacts during the demosaicking process. We apply our method
to natural and simulated images to show its effectiveness in
section 5.
2. COLOR MOTION BLUR MODEL
Having an accurate motion blur model allows us to quantify improvements provided by algorithms aimed at reducing
motion blur. As the exposure time on a digital imaging device becomes longer, a scene with significant motion will
be more likely to blur across multiple sensor pixels. Consider a filter bank operating on a row of the "true" color
channel of an image, C(z), that suffers from motion blur
(Fig. 1). The color channel is first downsampled by 2,
at which point a blurring kernel, Hc(z), is applied. The
blurred and downsampled color channel is then upsampled
and interpolated/demosaicked using some filter F(z).
Although most modern demosaicking algorithms rely on
the inter-channel redundancies in a CFA image to improve
demosaicking, for our purposes the simple model will suffice.
ICASSP 2010
Fig. 1. Two equivalent filter banks depicting horizontal motion blur on a row captured using a Bayer color filter: (a) C(z) → ↓2 → Hc(z) → ↑2 → F(z) → Cmb(z); (b) C(z) → ↓2 → ↑2 → Hc(z²) → F(z) → Cmb(z).
This filter model allows us to write an expression for the
final blurred color channel:
    Cmb(z) = (1/2) [C(z) + C(−z)] Hc(z²) F(z)    (1)
where Hc (z) is the blurring kernel that depends on the motion in the scene and may depend on the color channel. Consequently, to model motion blur in a given image we apply
the blurring kernels to each of the channels in the downsampled CFA domain or equivalently apply Hc (z 2 ) to each of the
color channels after demosaicking.
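The equivalence of the two filter banks in Fig. 1, where the blur Hc(z) applied after the downsampler becomes Hc(z²) applied after the downsample/upsample cascade, is an instance of the noble identities and can be checked numerically. A minimal 1-D sketch (the row and kernel below are arbitrary placeholders, and the demosaicking filter F(z) is omitted since it acts identically on either output):

```python
import numpy as np

rng = np.random.default_rng(0)
c = rng.standard_normal(64)      # one row of the "true" color channel C(z)
h = np.array([0.2, 0.5, 0.3])    # blur kernel Hc(z)

# Filter bank (a): downsample by 2, blur with Hc(z), upsample by 2.
d = c[::2]
blurred = np.convolve(d, h)
path_a = np.zeros(2 * len(blurred))
path_a[::2] = blurred

# Filter bank (b): downsample-then-upsample (zero the odd samples),
# then blur with the upsampled kernel Hc(z^2).
u = c.copy()
u[1::2] = 0.0
h2 = np.zeros(2 * len(h) - 1)
h2[::2] = h                      # Hc(z^2): kernel taps with zeros interleaved
path_b = np.convolve(u, h2)

assert np.allclose(path_a, path_b)   # the two structures agree exactly
```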
3. CFA SIGNAL DECOMPOSITION
The M × N multiplexed Bayer CFA image can be interpreted
in the frequency domain as a luminance component at baseband and two chrominance components modulated to a higher
frequency [9]. Letting fCFA[n1, n2] be the M × N Bayer CFA
output, fL[n1, n2] the luminance component, and fC1[n1, n2]
and fC2[n1, n2] the two chroma components, we can write

    fCFA[n1, n2] = fL[n1, n2] + fC1[n1, n2](−1)^(n1+n2)
                 + fC2[n1, n2]((−1)^(n1) − (−1)^(n2))    (2)
where

    ⎛ fL  ⎞   ⎛  1/4   1/2   1/4 ⎞ ⎛ fR ⎞
    ⎜ fC1 ⎟ = ⎜ −1/4   1/2  −1/4 ⎟ ⎜ fG ⎟    (3)
    ⎝ fC2 ⎠   ⎝ −1/4    0    1/4 ⎠ ⎝ fB ⎠

and fR, fG, and fB are the full resolution RGB color planes.

Fig. 2. FFT of a Bayer CFA image with approximate locations
of the luminance and chrominance components.
Fig. 2 shows the approximate locations of the modulated
chroma and baseband luminance components in the frequency domain. The demosaicking algorithm presented
in [10] operates by extracting the luminance component, fL ,
and demodulating the chroma components (i.e. extract fC1
and fC2 ) allowing it to recover the full resolution M × N × 3
RGB image using the inverse of the matrix in eq. (3). The
optimal filter coefficients for extracting these signals can be
obtained by minimizing the total squared error between the
true components fL, fC1, fC2 and the estimated ones f̂L,
f̂C1, f̂C2 over a training set. In our algorithm, we use the optimal coefficients computed in [10] to extract a high quality
estimate of the luminance image.
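Equations (2) and (3) can be sanity-checked numerically: forming fL, fC1, and fC2 from full-resolution color planes via eq. (3) and re-multiplexing them via eq. (2) reproduces exact Bayer sampling (under the convention that G sits on the checkerboard where n1 + n2 is even). A small sketch with synthetic planes:

```python
import numpy as np

rng = np.random.default_rng(1)
fR, fG, fB = rng.random((3, 8, 8))   # full-resolution color planes

# Eq. (3): luminance and two chrominance components.
fL  =  fR / 4 + fG / 2 + fB / 4
fC1 = -fR / 4 + fG / 2 - fB / 4
fC2 = -fR / 4 + fB / 4

# Eq. (2): multiplex the components onto the CFA lattice.
n1, n2 = np.indices(fR.shape)
fCFA = fL + fC1 * (-1.0) ** (n1 + n2) + fC2 * ((-1.0) ** n1 - (-1.0) ** n2)

# The result is exactly the Bayer-sampled image under this convention:
bayer = np.where((n1 + n2) % 2 == 0, fG,          # G on the checkerboard
                 np.where(n1 % 2 == 1, fR, fB))   # R / B on the quincunx
assert np.allclose(fCFA, bayer)
```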
4. DEBLURRING DURING DEMOSAICKING
The proposed deblurring and demosaicking framework sharpens the luminance image extracted using the method described in section 3 and combines it with an estimate of the
standard demosaicked image. The combination is performed
by choosing the final sharp color planes, f̂R^sh, f̂G^sh, f̂B^sh, such
that they are close to the demosaicked result, i.e. f̂R^sh ≈ f̂R,
f̂G^sh ≈ f̂G, f̂B^sh ≈ f̂B, while still being consistent with
the estimate for the sharpened luminance component. More
precisely:

    argmin over {f̂R^sh, f̂G^sh, f̂B^sh} of
        ‖f̂R^sh − f̂R‖₂² + ‖f̂G^sh − f̂G‖₂² + ‖f̂B^sh − f̂B‖₂²
    subject to (f̂R^sh + 2 f̂G^sh + f̂B^sh) / 4 = f̂L^sh.    (4)
Notice that there are no spatial dependencies between the
pixels in eq. (4), and thus we can solve the problem element
by element through a non-iterative solution using Lagrange
multipliers:
    Λ(f̂R^sh, f̂G^sh, f̂B^sh, λ) = (f̂R^sh − f̂R)² + (f̂G^sh − f̂G)² + (f̂B^sh − f̂B)²
        + λ ( f̂L^sh − (f̂R^sh + 2 f̂G^sh + f̂B^sh) / 4 ).
To find the minimum of the above equation we set its scalar
partial derivatives equal to 0 and solve the system of equations
to arrive at the optimal result:
    ⎛ f̂R^sh ⎞   ⎛  5/6  −1/3  −1/6   2/3 ⎞ ⎛ f̂R    ⎞
    ⎜ f̂G^sh ⎟ = ⎜ −1/3   1/3  −1/3   4/3 ⎟ ⎜ f̂G    ⎟    (5)
    ⎜ f̂B^sh ⎟   ⎜ −1/6  −1/3   5/6   2/3 ⎟ ⎜ f̂B    ⎟
    ⎝   λ    ⎠   ⎝ −4/3  −8/3  −4/3  16/3 ⎠ ⎝ f̂L^sh ⎠
Thus the optimal final image can be obtained through a
linear combination of the estimates of the RGB image {f̂R,
f̂G, f̂B} and the deblurred luminance image f̂L^sh using eq. (5)
(Fig. 3).
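The closed form in eq. (5) is the orthogonal projection of the demosaicked pixel onto the constraint plane of eq. (4). A per-pixel sketch applying the 4 × 4 matrix and verifying both the constraint and the projection view:

```python
import numpy as np

# Eq. (5): linear combination of the demosaicked planes and sharp luminance.
M = np.array([
    [ 5/6, -1/3, -1/6,  2/3],
    [-1/3,  1/3, -1/3,  4/3],
    [-1/6, -1/3,  5/6,  2/3],
    [-4/3, -8/3, -4/3, 16/3],
])

rng = np.random.default_rng(2)
fR, fG, fB, fLsh = rng.random(4)          # one pixel of each estimate
fRsh, fGsh, fBsh, lam = M @ np.array([fR, fG, fB, fLsh])

# The sharp planes satisfy the luminance constraint of eq. (4) ...
assert np.isclose((fRsh + 2 * fGsh + fBsh) / 4, fLsh)

# ... and equal the orthogonal projection x + a (fLsh - a.x) / ||a||^2
a = np.array([1, 2, 1]) / 4
x = np.array([fR, fG, fB])
proj = x + a * (fLsh - a @ x) / (a @ a)
assert np.allclose([fRsh, fGsh, fBsh], proj)
```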
Algorithm 1 Demosaick-Deblurring Algorithm

Given a blurry M × N Bayer CFA image and the blurring
PSF:

1. Obtain an estimate of the luminance component f̂L using
the approach detailed in section 3.

2. Sharpen the estimate of the luminance component to
form f̂L^sh using any image deblurring algorithm.

3. Form an estimate of the demosaicked M × N × 3 RGB
image {f̂R, f̂G, f̂B} using any fast demosaicking algorithm.

4. Combine the sharp luminance component f̂L^sh and the
blurred demosaicked RGB color planes according to
eq. (5).

Fig. 3. Block diagram of the proposed algorithm. The block
labelled G corresponds to the procedure detailed in section 3
to extract an estimate of the luminance component.

Fig. 3 shows a block diagram of Alg. 1, where we are able
to choose both the demosaicking and deblurring algorithms
based on our complexity constraints and desired perceptual
quality.

5. SIMULATION RESULTS

5.1. Natural Motion Blur

To assess the effectiveness of Alg. 1 in a real-world application, we acquired images using a commercial digital camera,
where we introduced significant panning camera motion and
chose an appropriate exposure time to create the motion blur
artifact. Alg. 1 was then applied to the acquired blurry mosaicked image. For deblurring-demosaicking purposes, the
length of the motion blur filter was chosen based on the amount
of perceived blur in the image and the direction was known a
priori.

The demosaicking procedure used in Fig. 4 was the procedure described in [10] with the optimal filter coefficients
derived based on all of the images in the Kodak dataset¹.
These optimal coefficients were also used for extracting the
f̂L estimate, to which a total variation deblurring procedure
was applied [11].

5.2. Simulated Results

To quantify the perceptual improvements seen in Fig. 4 we
simulate motion blur on the standard Kodak set of images [12],
applying a known blurring kernel to the images as described in
section 2 to simulate a Bayer ICFA acquired with a moving
camera. The blurring kernel had a length of 5 and a nonzero response time. We found that applying the perfect box
blurring kernel had little effect on the objective or subjective
results.
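Since Alg. 1 leaves the deblurring and demosaicking stages open, the whole pipeline can be sketched with those stages as pluggable callables. The identity stand-ins below are hypothetical placeholders, not real deblurring or demosaicking algorithms; they only exercise the eq. (5) combination step:

```python
import numpy as np

def combine(fR, fG, fB, fL_sh):
    """Step 4 of Alg. 1: eq. (5) applied plane-wise (no spatial coupling)."""
    fR_sh = (5 * fR - 2 * fG - fB) / 6 + 2 * fL_sh / 3
    fG_sh = (-fR + fG - fB) / 3 + 4 * fL_sh / 3
    fB_sh = (-fR - 2 * fG + 5 * fB) / 6 + 2 * fL_sh / 3
    return fR_sh, fG_sh, fB_sh

def demosaick_deblur(cfa, extract_luma, deblur, demosaick):
    """Alg. 1 skeleton; the three callables are interchangeable stages."""
    fL = extract_luma(cfa)               # step 1 (e.g. the filters of [10])
    fL_sh = deblur(fL)                   # step 2 (e.g. TV deblurring [11])
    fR, fG, fB = demosaick(cfa)          # step 3 (any fast demosaicker)
    return combine(fR, fG, fB, fL_sh)    # step 4

# Toy stand-ins, for illustration only:
cfa = np.random.default_rng(3).random((16, 16))
fR_sh, fG_sh, fB_sh = demosaick_deblur(
    cfa,
    extract_luma=lambda x: x,            # hypothetical luma extractor
    deblur=lambda x: x,                  # hypothetical sharpener
    demosaick=lambda x: (x, x, x),       # hypothetical demosaicker
)
# The output satisfies the luminance constraint of eq. (4):
assert np.allclose((fR_sh + 2 * fG_sh + fB_sh) / 4, cfa)
```

Because eq. (5) has no spatial coupling, combine() costs only four multiply-adds per pixel regardless of which deblurring and demosaicking stages are plugged in.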
To obtain an objective measure of quality we calculate the
PSNR of each of the demosaicked RGB color planes, as well
as the SCIELAB ΔE*ab distance [13], the Structural SIMilarity
(SSIM) index [14] averaged across all three RGB color
planes (Q̄), and the color extension to SSIM (Qcolor) [15]. A
lower ΔE*ab indicates better quality, whereas a higher value
for all of the other metrics indicates better quality.
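The per-plane PSNR values reported here follow the standard definition from the mean squared error; a minimal sketch, assuming 8-bit data with peak 255 (the function is illustrative, not from the paper):

```python
import numpy as np

def psnr(plane, reference, peak=255.0):
    """PSNR in dB between a demosaicked plane and its ground truth."""
    mse = np.mean((np.asarray(plane, float) - np.asarray(reference, float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

# Example: planes differing by one gray level everywhere.
a = np.zeros((8, 8))
print(round(psnr(a + 254, a + 255), 2))   # prints 48.13
```

Identical planes give zero MSE, so the guard returns infinity rather than dividing by zero.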
Table 1 shows the performance of Alg. 1 implemented
with the TV deblurring method and the demosaicking procedure described in [10]² compared to demosaicking alone
and the joint demosaicking-deblurring method of [8]. The
optimal filter coefficients used to extract the {f̂L, f̂C1, f̂C2}
images from ICFA were rederived using the first twelve images in the Kodak dataset. The results indicate that Alg. 1
offers improvements over simple demosaicking and the joint
deblurring-demosaicking method described in [8].
6. CONCLUSION
In this work we developed a framework for joint demosaicking and deblurring of images acquired using a color filter array (CFA). The framework leverages models of color motion
blur and recent advances in CFA signal decomposition and is
able to leverage state-of-the-art image restoration and demosaicking techniques. Objective and subjective results both indicate that the presented algorithm indeed reduces the amount
of blur visible in the image and improves the image quality.
¹ The authors would like to express their gratitude to E. Dubois for making his code available and offering suggestions.
² Additional full resolution results and a sample implementation are available at: http://videoprocessing.ucsd.edu/~sharnoy/IEEETranDemosaick/

Fig. 4. Natural image acquired with a moving digital camera comparing the output of Alg. 1 (right) and the demosaicking
procedure described in [10] (left).

Table 1. A sample of the objective results showing the performance of Alg. 1 on the Kodak data set.

Image       Method   ΔE*ab   Q̄     Qcolor   PSNR_R    PSNR_G    PSNR_B
Airplane    Alg. 1   2.02    0.85   0.83    27.7 dB   28.2 dB   27.9 dB
            [8]      2.74    0.81   0.77    25.8 dB   26.8 dB   25.4 dB
            [10]     2.29    0.83   0.79    26.9 dB   26.9 dB   27.3 dB
Lighthouse  Alg. 1   2.63    0.83   1.21    26.5 dB   27.0 dB   26.9 dB
            [8]      3.69    0.75   1.03    24.3 dB   26.0 dB   24.2 dB
            [10]     3.01    0.80   1.15    25.8 dB   25.8 dB   26.2 dB
House       Alg. 1   4.01    0.70   1.14    24.4 dB   24.1 dB   23.3 dB
            [8]      4.85    0.63   0.99    23.2 dB   23.6 dB   21.5 dB
            [10]     4.60    0.64   1.03    23.7 dB   23.0 dB   22.7 dB

7. REFERENCES

[1] B.E. Bayer, Color imaging array, U.S. Patent 3971065,
1976.

[2] B.K. Gunturk, J. Glotzbach, Y. Altunbasak, R.W.
Schafer, and R.M. Mersereau, “Demosaicking: color
filter array interpolation,” Signal Processing Magazine,
IEEE, 2005.
[3] Peter A. Jansson, Ed., Deconvolution of Images and
Spectra, Academic Press, 2nd edition, 1997.
[4] S.K. Nayar and M. Ben-Ezra, “Motion-based motion
deblurring,” Pattern Analysis and Machine Intelligence,
IEEE Transactions on, 2004.
[5] Mejdi Trimeche, Dmitry Paliy, Markku Vehvilainen,
and Vladimir Katkovnik, “Multichannel image deblurring of raw color components,” in Proc. SPIE, San Jose,
CA, USA, Mar. 2005, vol. 5674, pp. 169–178.
[6] Dmytro Paliy, Alessandro Foi, Radu Bilcu, Vladimir
Katkovnik, and Karen Egiazarian, “Joint deblurring
and demosaicing of poissonian bayer-data based on local adaptivity,” in Signal Processing, 2008. EUSIPCO.
European Conference on, 2008.
[7] J. Portilla, D. Otaduy, and C. Dorronsoro, “Lowcomplexity linear demosaicing using joint spatialchromatic image statistics,” in Image Processing, 2005.
ICIP 2005. IEEE International Conference on, 2005.
[8] Takashi Komatsu and Takahiro Saito, “Demosaicking
for a color image sensor with removal of blur due to an
optical low-pass filter,” in Proceedings of SPIE, 2004.
[9] D. Alleysson, S. Susstrunk, and J. Herault, “Linear demosaicing inspired by the human visual system,” Image
Processing, IEEE Transactions on, 2005.
[10] E. Dubois, “Filter design for adaptive frequency-domain
bayer demosaicking,” in Image Processing, 2006 IEEE
International Conference on, 2006.
[11] Stanley H. Chan and Truong Q. Nguyen, “Lcd motion
blur: Model, analysis and algorithm,” Image Processing, IEEE Transactions on, (Submitted).
[12] Eastman Kodak Company, “Kodak photographic color
image database (40 scanned images),” 1993.
[13] Xuemei Zhang and Brian A. Wandell, “A spatial extension of CIELAB for digital color image reproduction,” in Proc. of the SID Symposium, 1996.
[14] Zhou Wang, A.C. Bovik, H.R. Sheikh, and E.P. Simoncelli, “Image quality assessment: from error visibility
to structural similarity,” Image Processing, IEEE Transactions on, 2004.
[15] Alexander Toet and Marcel P. Lucassen, “A new universal colour image fidelity metric,” Elsevier Displays,
Dec. 2003.