The Flutter Shutter Paradox
Yohann Tendero
Jean-Michel Morel
Bernard Rougé
November 4, 2011
1 Abstract
Acquiring good quality images of moving objects with a digital camera remains
an open question: if the velocity of the photographed object is only partially
known, it is virtually impossible to tune an optimal exposure time. The same
question arises when a solid object is being photographed by a moving camera.
This is not only frequent, but even necessary when the camera is mounted on a
terrestrial vehicle, a plane, or an Earth observation satellite. In the latter
cases, the camera velocity is constrained by physical laws. For this reason the
recent flutter shutter of Raskar et al. and the motion-invariant photography of
Levin et al. have generated much interest. This paper proposes a mathematical
analysis of moving photography which treats the flutter shutter and the
motion-invariant photography within the same formalism. A general flutter
shutter method is defined and explored, which allows for nonbinary shutter
sequences and permits the formalization of the question of an optimal flutter
shutter code. The first result of this analysis is actually negative, and raises
what we call the flutter shutter paradox. We prove that neither the flutter
shutter nor the motion-invariant photography method can increase the signal to
noise ratio (SNR) indefinitely, compared to an exposure short enough to avoid
motion blur, despite the use of a possibly infinite exposure time. Bounds are
given. We also resolve the flutter shutter paradox under the assumption that the
relative velocity of object and camera is not known a priori. A significant
increase of the average SNR can then be expected, provided there is an a priori
knowledge of the probability density of the observed velocities. In this setup
we show how to compute analytically an optimal flutter shutter strategy.
Key words: Motion blur, flutter shutter, optimization, motion invariant
photography, SNR.
2 Introduction
If the scene being photographed moves during the exposure, or if the scene is
steady and the camera moves, the resulting images are degraded by motion blur.
Motion deblurring combines two dependent problems: a) the kind of kernel applied
to the images, which here depends on the motion; b) the actual deblurring
method, where the kernel may or may not have to be estimated a posteriori.
Motion blur arises from multiple causes and is very common even in consumer
level photography, where it is partly compensated by optical, mechanical, or
digital stabilizers. Lens based optical stabilizers use a floating lens element
moving orthogonally to the optical axis. Vibrations are detected using
gyroscopic sensors estimating the camera motion along the x and y axes of the
focal plane; hence such a stabilizer cannot compensate for camera rotations.
Sensor shift stabilizers work similarly but do not use any specific lens; hence
they tend to be cheaper. These techniques cannot compensate for motion blur of
arbitrary length (support), since they are limited by mechanical and technical
issues. In most cases the size of the blur support increases proportionally to
the exposure time. Thus a "small" exposure time is required despite the
stabilization device.
The difficulty of motion blur is illustrated by its simplest example, the one
dimensional uniform motion blur. The result of a too long exposure during the
motion is nothing but a convolution of the image with a one dimensional, window
shaped kernel. The support of the kernel increases linearly with the exposure
time and with the velocity of the motion. As soon as the exposure time is too
long, this blur is no longer invertible and makes the restoration an ill posed
problem.
An exciting alternative to classic photography was proposed in [1, 21] where
the authors suggest modifications in the acquisition process to get invertible
blur kernels by using a flutter shutter. If the shutter sequence is well chosen,
invertibility is guaranteed for blurs with arbitrary size support. Hence,
replacing the classic camera shutter by a flutter shutter, it becomes possible
to use any integration time. This also means that the exposure time on a
given scene can be much longer: many more photons are therefore sensed.
This problem is the core problem of photography. The first photographs
taken by Nicéphore Niépce required several hours, a time incompatible with
live subjects or even with outdoor static scenes exposed to the sun. Ever
since, photography has been subject to the problem of finding the right
compromise between a short exposure time, which avoids the effects of motion blur, and a longer exposure time, which permits many more photons to
reach the sensor and therefore increases the SNR.
At first sight the flutter shutter looks like a magic solution that should
equip all cameras: it could increase the SNR indefinitely by increasing the
exposure, at no cost on the motion blur side. We shall see that this hasty
conclusion is actually wrong, and that the SNR increase is limited, no matter
how long the exposure time is. It is this surprising fact that we call the
"flutter shutter paradox". However, we shall also see that the method can make
the use of certain cameras more flexible, particularly when their motion, or
the motion of the scene, can be modeled.
2.1 Related work
Blind deconvolution techniques [7, 19, 31, 13, 22] aim at estimating the blur
and recovering the sharp image directly from the blurred one. Deconvolution
algorithms have been intensively developed [15, 25]. For example, in [38] the
authors suggest a modification of the Richardson-Lucy method [28, 23] to control
the artifacts of the restored image. In [20] Fergus et al. use natural image
statistics to estimate the blur. In [18, 24, 10, 14, 35, 17] good results are
shown on blur estimation and/or deblurring. However, the power spectrum of
images acquired with a blur of more than two pixels contains several zeros; thus
information useful for image quality is lost. Hence, no matter how sophisticated
the image reconstruction is, it is virtually impossible to recover a deblurred
image without very strong hypotheses on the underlying landscape. Such strong
hypotheses are unrealistic for most images, and the results are therefore poor
in practice [29].
In an attempt to transform the blur problem into a well posed problem the
authors of [6, 8, 30, 27, 9] proposed to use two photographs with different
blurs instead of one. In [37] the authors use a long exposure image
and another one, sharp but noisy. In [33] the authors use a full multi-image
framework acquiring a bunch of sharp but noisy images and recovering a
sharp image with increased SNR.
In [4, 32, 36, 12, 34] the authors use hybrid or complex camera systems.
Unfortunately this kind of scheme may lead to other problems, such as a high
computational cost or hardware issues.
A solution was proposed by Raskar et al. in [1, 3, 2]. The new acquisition
process modulates the photon flux into the camera by opening and closing the
camera shutter according to certain well chosen pseudo random binary codes. In
the case of a uniform motion in front of the camera, the resulting blur kernel
becomes invertible (there is no zero in its Fourier transform), no matter how
large the velocity is. The visual result of an image acquired by flutter shutter
is close to a stroboscopic image, which can nonetheless give back a sharp image
by deconvolution.
Similarly, in [21] Levin et al. suggested moving the camera in the direction of
the motion during the exposure time. The authors use a constant acceleration
motion in order to make the resulting kernel invertible and spatially invariant
to the velocity; hence an a priori knowledge of the motion direction is
required. Moreover, this approach causes blur in the static parts of the scene.
Yet, thanks to the invertibility (well posedness of the recovery problem), in
both cases the sharp image can be recovered by a deconvolution. Notice that only
one image is acquired and recovered at the end of the process.
2.2 Overview
Section 3 proposes a general mathematical framework for image acquisition using
a physical Poisson model for the photon capture process, including the dark
(obscurity) noise. This model suits our context well, since all noise terms
inherent to image sensing are taken into account without any approximation. The
model is detailed in a static context in section 3.1 and applied in 3.1.3. The
non stationary, motion induced case is introduced in section 3.2 and applied to
standard image acquisition: snapshots (3.3) and multi-images (3.2.2). In section
4 the mathematical model is used to analyze the flutter shutter methods and to
verify that they actually work. Generalizations for digital (4.1) and analog
(4.2) implementations of the flutter shutter are proposed and analyzed. Section
5 shows that the numerical flutter shutter beats the analog flutter shutter in
terms of image quality and is more flexible to use. Nevertheless, section 6
shows that the use of the flutter shutter does not indefinitely improve the SNR
of the sharp recovered image compared to a traditional snapshot with a ("quick
enough to be sharp") aperture time. This is the flutter shutter paradox. In
section 4.3 the motion-invariant photography is shown to be a particular case of
our generalized flutter shutter. Results are illustrated in section 7. Section 8
proposes a solution to the flutter shutter paradox, provided the probability
density of the observed velocities is known. Section 9.1 constructs optimized
codes for centered Gaussian and uniform (with different standard deviations σ
and ranges) velocity distributions. Simulations, predictive results and
evaluations on simulations are shown and discussed in section 9.
3 Mathematical modeling
This section presents a continuous stochastic model of photons captured by
a sensor array. The model applies to a standard image acquisition on still or
moving landscapes, provided the motion is uniform and stationary. Without
loss of generality the formalization will be done in the case where the sensor array is 1D and the landscape moves in the direction given by the sensor.
Let $P_l : \mathbb{R}^+ \times \mathbb{R}$ be a bi-dimensional Poisson process of intensity $l(t,x)$, $\forall (t,x) \in \mathbb{R}^+ \times \mathbb{R}$ (here $l$ is called the landscape, and $t$, resp. $x$, is the temporal, resp. spatial, position). This means that for every $a, b, t_1, t_2$ (with $a < b$ and $0 \le t_1 < t_2$), $P_l([t_1,t_2] \times [a,b])$ is a Poisson random variable with intensity $\int_{t_1}^{t_2}\int_a^b l(t,x)\,dx\,dt$. The theoretical observation of a pixel sensor (photon counter) of unit length centered at $x$ during the time span $[0,\Delta t]$ is a Poisson random variable
$$P_l\left([0,\Delta t] \times [x-\tfrac{1}{2}, x+\tfrac{1}{2}]\right) \sim \mathcal{P}\left(\int_0^{\Delta t}\int_{x-\frac{1}{2}}^{x+\frac{1}{2}} l(t,y)\,dy\,dt\right)$$
where $[x-\tfrac{1}{2}, x+\tfrac{1}{2}]$ represents the normalized sensor unit, and by $X \sim P$ we mean that the random variable $X$ has law $P$. In other terms, the probability of observing $k$ photons coming from the landscape $l$, seen at position $x$ on the time interval $[0,\Delta t]$ with a normalized sensor, is equal to
$$\mathbb{P}\left(P_l\left([0,\Delta t] \times [x-\tfrac{1}{2}, x+\tfrac{1}{2}]\right) = k\right) = \frac{\left(\int_0^{\Delta t}\int_{x-\frac{1}{2}}^{x+\frac{1}{2}} l(t,y)\,dy\,dt\right)^k e^{-\int_0^{\Delta t}\int_{x-\frac{1}{2}}^{x+\frac{1}{2}} l(t,y)\,dy\,dt}}{k!}.$$

3.1 The still photography case
In the classic still photography setup, $l(t,x) = l(x)$, which makes $P_l : \mathbb{R}^+ \times \mathbb{R}$ a bi-dimensional, time stationary and spatially inhomogeneous Poisson process of intensity $l(x)$, $\forall (t,x) \in \mathbb{R}^+ \times \mathbb{R}$. Thus for every $a, b, t_1, t_2$ (s.t. $a < b$ and $0 \le t_1 < t_2$), $P_l([t_1,t_2] \times [a,b])$ is a Poisson random variable with intensity $\int_{t_1}^{t_2}\int_a^b l(x)\,dx\,dt = |t_2 - t_1|\int_a^b l(x)\,dx$ (by stationarity of the process). Then the theoretical observation of a pixel sensor (photon counter) of unit length centered at $x$ using an exposure time $\Delta t$ is a Poisson random variable
$$P_l\left([0,\Delta t] \times [x-\tfrac{1}{2}, x+\tfrac{1}{2}]\right) \sim \mathcal{P}\left(\int_0^{\Delta t}\int_{x-\frac{1}{2}}^{x+\frac{1}{2}} l(y)\,dy\,dt\right) \sim \mathcal{P}\left(\Delta t\int_{x-\frac{1}{2}}^{x+\frac{1}{2}} l(y)\,dy\right) \sim \mathcal{P}\left(\Delta t\,(\mathbb{1}_{[-\frac{1}{2},\frac{1}{2}]} * l)(x)\right)$$
where $\Delta t$ is the exposure time, the sensor is of normalized unit length, and $*$ denotes the convolution (viii). (Here and in the rest of the text, Latin numerals refer to the formulas in the final glossary, page 68.) For sampling purposes we assume that the theoretical landscape $l$ is seen through an optical system with point spread function $g$.

Definition We call ideal landscape the deterministic function
$$u = \mathbb{1}_{[-\frac{1}{2},\frac{1}{2}]} * g * l \tag{1}$$
where $g$ is the point spread function of the optical system, providing a cut-off frequency.

In other words, thanks to the convolution with $g$, the acquisition system is able to sample $u$. From the imaging point of view, if the center of the sensor is $x$ then $u(x)$ represents the ideal (noiseless) pixel landscape value, which unfortunately can only be obtained after an infinite exposure.
Definition (Ideal acquisition system.) The image acquired by the ideal acquisition system, before sampling, corresponds to samples of the Poisson process $P_l$. The intensity $u$ (ideal landscape value) is related to the landscape $l$ by (1) and is band limited:
$$P_l\left([t_1,t_2] \times [x-\tfrac{1}{2}, x+\tfrac{1}{2}]\right) \sim \mathcal{P}\left(\int_{t_1}^{t_2}\int_{x-\frac{1}{2}}^{x+\frac{1}{2}} u(y)\,dy\,dt\right) \sim \mathcal{P}\left(|t_2-t_1|\,u(x)\right). \tag{2}$$
Definition (Real acquisition system, with noise included in the landscape.) A more realistic acquisition system adds a landscape independent noise, also known as dark noise (or obscurity noise, or thermal noise). Assuming that this noise has variance $\eta$, (2) entails
$$P_{l+\eta}\left([t_1,t_2] \times [x-\tfrac{1}{2}, x+\tfrac{1}{2}]\right) \sim \mathcal{P}\left(|t_2-t_1|\,(u(x)+\eta)\right). \tag{3}$$
Since all computations using the "noisy" landscape $u+\eta$ remain formally the same as with a noiseless ideal system, we will assume that $u$ already contains the obscurity noise. Notice that $\eta$ being a constant, $u+\eta$ and $u$ have the same cut-off frequency. We will also always assume in the sequel that $u \in L^1 \cap L^2(\mathbb{R})$. This assumption is needed to apply some of the mathematical formulas, but represents no artificial restriction on the physical acquisition model. Indeed, first, the average photon emission is always bounded. Second, taking a large enough support, we can always suppose w.l.o.g. that the landscape has bounded support and that the acquisition time is large but not infinite. Thus we can assume that the noise is zero at infinity. Under these conditions $u \in L^1 \cap L^2$.
3.1.1 Sampling, interpolation
In the following $u$ is assumed band limited, namely $\hat u$ (see the definition (xxiii) of the Fourier transform in the glossary) is supported in $[-\pi,\pi]$, and can therefore be sampled at a unit rate. This hypothesis is again no restriction, being justified by the frequency cut-off provided by the optical kernel $g$. The discrete sensor observations, or samples, will be denoted by $e(n)$ for $n \in \mathbb{Z}$. (Under a very long exposure $T$ of a static scene, by the law of large numbers we have $\frac{e(n)}{T} \to u(n)$, and $u(n) = \mathbb{E}e(n)$.)

Definition Given a discrete array observation $e(n)$, $n \in \mathbb{Z}$, its band limited interpolate $e(x)$, $x \in \mathbb{R}$, can be defined by Shannon-Whittaker interpolation as
$$e(x) = \sum_{n\in\mathbb{Z}} e(n)\,\mathrm{sinc}(x-n) \tag{4}$$
(see in (xxviii) the definition of sinc).

Notice that in the deterministic ideal case $e(n) = u(n)$ and, $u$ being band limited, we would deduce $e(x) = u(x)$ from $e(n) = u(n)$. By (4) we also have
$$\hat e(\xi) = \sum_{n\in\mathbb{Z}} e(n)\,e^{-in\xi}\,\mathbb{1}_{[-\pi,\pi]}(\xi). \tag{5}$$
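The Shannon-Whittaker formula (4) is easy to exercise numerically. Below is a minimal sketch (assuming numpy; the truncation of the series to a finite window is ours, the sum in (4) runs over all of $\mathbb{Z}$):

```python
import numpy as np

def sinc_interpolate(samples, n, x):
    """Shannon-Whittaker interpolation (4): e(x) = sum_n e(n) sinc(x - n).

    np.sinc(t) = sin(pi t) / (pi t) is exactly the unit-rate sinc needed
    for signals whose spectrum is supported in [-pi, pi].
    """
    return float(np.sum(samples * np.sinc(x - n)))

# A band limited test signal: u(x) = sinc(x - 0.3) has spectrum in [-pi, pi].
u = lambda x: np.sinc(x - 0.3)
n = np.arange(-500, 501)                  # truncated sampling window
e_half = sinc_interpolate(u(n), n, 0.5)   # recover u off the integer grid
```

The truncation to $|n| \le 500$ introduces a small error (the series tail decays like $1/n^2$ here); on the full sum the recovery would be exact.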
3.1.2 Noise measurement
Definition In the following we call signal to noise ratio (SNR) of a random variable $X$ the ratio
$$SNR(X) := \frac{|\mathbb{E}X|}{\sqrt{\mathrm{var}(X)}}. \tag{6}$$
For example, if $u_{est}(x)$ (resp. $\hat u_{est}(\xi)$) is an estimate of the landscape $u(x)$ (resp. $\hat u(\xi)$) based on a noisy observation of $u$,
$$SNR(u_{est}(x)) := \frac{|\mathbb{E}u_{est}(x)|}{\sqrt{\mathrm{var}(u_{est}(x))}}. \tag{7}$$
Likewise, we call "spectral SNR" of $u_{est}$ the frequency dependent ratio
$$SNR_{spectral}(\hat u_{est}(\xi)) := \frac{|\mathbb{E}\hat u_{est}(\xi)|}{\sqrt{\mathrm{var}(\hat u_{est}(\xi))}} \quad \text{for } \xi \in [-\pi,\pi], \tag{8}$$
and
$$SNR_{spectral\text{-}averaged}(\hat u_{est}) := \frac{\frac{1}{2\pi}\int |\mathbb{E}\hat u_{est}(\xi)|\,\mathbb{1}_{[-\pi,\pi]}(\xi)\,d\xi}{\sqrt{\frac{1}{2\pi}\int \mathrm{var}\left(\hat u_{est}(\xi)\,\mathbb{1}_{[-\pi,\pi]}(\xi)\right)\,d\xi}}. \tag{9}$$
The purpose of these definitions is to compare, in terms of the SNRs (6), (7), (8), (9), the flutter shutter method to a standard "quick enough to be sharp" snapshot acquisition strategy.
3.1.3 Standard acquisition, the SNR with no blur
From the acquisition system definition (3), the observed image at position $x$ using the standard acquisition strategy with an exposure time $N\Delta t$ can be any realization of
$$P_l\left([0,N\Delta t] \times [x-\tfrac{1}{2}, x+\tfrac{1}{2}]\right) \sim \mathcal{P}\left(\int_{[0,N\Delta t]} u(x)\,dt\right) \sim \mathcal{P}\left(N\Delta t\,u(x)\right). \tag{10}$$
Notice that in this case the expected value and the variance of a pixel sensor at position $x$ are both equal to $N\Delta t\,u(x)$.
Theorem 3.1. (Fundamental theorem of photography) In the case of a still image, the SNR satisfies $SNR(x) = \sqrt{u(x)\,N\Delta t}$, where $N\Delta t$ is the total exposure time. It is therefore proportional to the square root of both the exposure time and the light intensity.
Remark In a passive optical system we have no control over the landscape
light emission u(x). Thus the only secure way to increase the SNR is to
increase the exposure time N ∆t.
Proof. Let $u_{est}(x) \sim \frac{P_l([0,N\Delta t] \times [x-\frac{1}{2}, x+\frac{1}{2}])}{N\Delta t}$ be the estimate of the ideal landscape $u$. The relation (10) entails
$$u_{est}(x) \sim \frac{\mathcal{P}(N\Delta t\,u(x))}{N\Delta t}.$$
Then $\mathbb{E}(u_{est}(x)) = u(x)$ and the variance of $u_{est}(x)$ is
$$\mathrm{var}(u_{est}(x)) = \frac{N\Delta t\,u(x)}{(N\Delta t)^2} = \frac{u(x)}{N\Delta t}.$$
It follows from (7) that $SNR(u_{est})(x) = \sqrt{u(x)\,N\Delta t}$.
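Theorem 3.1 can be checked by a small Monte Carlo experiment. A sketch assuming numpy (the pixel intensity and trial count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
u_x, trials = 50.0, 200_000     # photons per time unit at this pixel (arbitrary)

def empirical_snr(exposure):
    """Empirical |E X| / sqrt(var X) of Poisson photon counts, as in (6)."""
    counts = rng.poisson(u_x * exposure, size=trials)
    return counts.mean() / counts.std()

snr_1 = empirical_snr(1.0)      # theory: sqrt(50 * 1)
snr_4 = empirical_snr(4.0)      # theory: sqrt(50 * 4)
```

Quadrupling the exposure should double the SNR, matching the square root law.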
3.2 Image acquisition with a moving landscape
In the following the camera focal plane is always taken as reference frame.
The camera is always assumed to move parallel to its focal plane in a straight
trajectory in the x direction (Fig. 1). Hence the acquisition process is made
in the x direction only. The camera moves straight at a speed v(t) (counted
in pixels per second) and the landscape is static. These assumptions may
seem simplistic, but are actually technologically relevant, as they include
the scanning of planar scenes like documents, or of scenes seen at a long
distance, like aerial video and push broom satellites.
Thus from now on we assume $l(t,x) = l(x - tv(t))$, and mainly $v(t) \equiv v$, where $l$ is the underlying stationary landscape model (section 3.1). Hence all the former discussion on the acquisition system, sampling and interpolation still holds. This section focuses on the mathematical model for standard
photography, where there is either a motion blur in a single continuous exposure, or the acquisition of an image burst (several consecutive snapshots
with shorter exposure). We shall give the conditions under which the exposure is short enough to get a quality photograph, and evaluate the SNR
in all three cases: snapshots, longer exposure with motion blur, and image
burst made of short enough snapshots.
3.2.1 Standard motion blur
In the following we fix an exposure time unit $\Delta t$, and measure exposure times as its multiples $N\Delta t$.
Figure 1: Acquisition system: the camera plane moves parallel to the focal plane in a fixed direction, with scalar speed v(t). The exposure time N ∆t is assumed short enough, or the camera elevation large enough, to ensure that there is no apparent deformation of the landscape due to its varying height.
Theorem 3.2. The standard motion blur is equivalent to an image obtained by a convolution of the ideal landscape $u$ with the fixed (window shaped) kernel $\frac{1}{v}\mathbb{1}_{[0,b]}$, where $b = Nv\Delta t$ is the blur length.

Proof. The ideal landscape $u$ is moving in the camera frame at speed $v$ (counted in pixels per second), and using (3) we get that the acquired image at position $x$ is (a realization of)
$$P_l\left([0,N\Delta t] \times [x-\tfrac{1}{2}, x+\tfrac{1}{2}]\right) \sim \mathcal{P}\left(\int_0^{N\Delta t} u(x-vt)\,dt\right) \sim \mathcal{P}\left(\frac{1}{v}\int_0^{Nv\Delta t} u(x-t)\,dt\right) \sim \mathcal{P}\left(\left(\frac{1}{v}\mathbb{1}_{[0,Nv\Delta t]} * u\right)(x)\right).$$
In this case the expected value and the variance of a pixel sensor at position $x$ are equal to $\left(\frac{1}{v}\mathbb{1}_{[0,Nv\Delta t]} * u\right)(x)$. The quantity $Nv\Delta t$ is nothing but the length $b$ of the blur (in pixels).
Remark The convolution with the (standard blur) function $h = \mathbb{1}_{[0,Nv\Delta t]}$ is a non-invertible transformation as soon as the first zero of the Fourier transform (FT) of $h$ lies in the support of $\hat u$. This makes any restoration of $u$ an ill posed process. The purpose of the flutter shutter method will be to replace $\mathbb{1}_{[0,Nv\Delta t]}$ by a function whose convolution remains invertible for arbitrary $Nv\Delta t = b$. If instead, as considered in the next paragraph, the first zero of $\hat h$ is outside the support of $\hat u$, then the motion blur is said to be negligible, and it is actually invertible.
Snapshot The acquired image at position $x$ for a short snapshot is (a realization of)
$$P_l\left([0,\Delta t] \times [x-\tfrac{1}{2}, x+\tfrac{1}{2}]\right) \sim \mathcal{P}\left(\int_0^{\Delta t} u(x-vt)\,dt\right) \sim \mathcal{P}\left(\frac{1}{v}\int_0^{v\Delta t} u(x-t)\,dt\right) \sim \mathcal{P}\left(\left(\frac{1}{v}\mathbb{1}_{[0,v\Delta t]} * u\right)(x)\right) \sim obs(x) \tag{11}$$
where (11) is known only for $x \in \mathbb{Z}$. Taking the Fourier transform (xxiii) of the expected value in (11) yields
$$\frac{1}{v}\,\mathcal{F}(\mathbb{1}_{[0,v\Delta t]})(\xi)\,\hat u(\xi). \tag{12}$$
Moreover we have (xxiii)
$$\mathcal{F}(\mathbb{1}_{[0,v\Delta t]})(\xi) = \int_0^{v\Delta t} e^{-ix\xi}\,dx = 2\,\frac{\sin\left(\frac{\xi v\Delta t}{2}\right)}{\xi}\,e^{-i\xi\frac{v\Delta t}{2}}. \tag{13}$$
From (13) we see that we must have $|v|\Delta t < 2$ to guarantee the invertibility of the blur kernel on $[-\pi,\pi]$.

Definition Given $v$, we call "standard snapshot" the use of an integration time $\Delta t$ such that $|v|\Delta t < 2$.
We call snapshot samples at position $n$ of the landscape $u$ at velocity $v$ the random variables
$$obs(n) \sim \mathcal{P}\left(\left(\frac{1}{v}\mathbb{1}_{[0,v\Delta t]} * u\right)(n)\right).$$
We call band limited interpolated snapshot its band limited interpolate (5)
$$obs(x) \sim \sum_{n\in\mathbb{Z}} obs(n)\,\mathrm{sinc}(x-n).$$
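The invertibility threshold $|v|\Delta t < 2$ can be visualized by evaluating the modulus of (13) on $[-\pi,\pi]$. A sketch assuming numpy (note that $2\sin(\xi v\Delta t/2)/\xi = v\Delta t\,\mathrm{sinc}(\xi v\Delta t/2\pi)$ with numpy's normalized sinc):

```python
import numpy as np

def box_ft_modulus(v_dt, xi):
    """|F(1_[0, v dt])(xi)| = |2 sin(xi v dt / 2) / xi|, from (13)."""
    return np.abs(v_dt * np.sinc(xi * v_dt / (2 * np.pi)))

xi = np.linspace(-np.pi, np.pi, 4001)
m_snapshot = box_ft_modulus(1.5, xi).min()  # |v| dt = 1.5 < 2: no zero, invertible
m_critical = box_ft_modulus(2.0, xi).min()  # |v| dt = 2: a zero appears at xi = ±pi
```

For $|v|\Delta t \ge 2$ the first zero of the kernel's Fourier transform enters $[-\pi,\pi]$ and the deconvolution becomes ill posed.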
By definition of the standard snapshot, $\Delta t$ is small enough so that (13) has no zero on $[-\pi,\pi]$ (the support of $\hat u$). Thus (12) and (13) lead to the definition of the inverse filter $\gamma$ satisfying $\hat u(\xi) = \hat\gamma(\xi)\,\mathcal{F}(\mathbb{E}(obs))(\xi)$, implying
$$\hat\gamma(\xi) = \frac{v\,\mathbb{1}_{[-\pi,\pi]}(\xi)}{2\,\frac{\sin\left(\frac{\xi v\Delta t}{2}\right)}{\xi}\,e^{-i\xi\frac{v\Delta t}{2}}}. \tag{14}$$

Remark The inverse filter $\gamma$ would give back the ideal observable landscape $u$ if we knew $\mathbb{E}(obs)(x)$. In fact we can only apply it to the band limited interpolate of the samples $obs(n)$, $n \in \mathbb{Z}$, which contain noise. It is nevertheless interesting to explore the noiseless ideal case first.
The noiseless discrete case This is the ideal case where, from (11),
$$e(n) = \mathbb{E}\left(\mathcal{P}\left(\left(\tfrac{1}{v}\mathbb{1}_{[0,v\Delta t]} * u\right)(n)\right)\right) = \left(\tfrac{1}{v}\mathbb{1}_{[0,v\Delta t]} * u\right)(n).$$
We now need to compute $\hat e(\xi)$ from the samples $(e(n))_{n\in\mathbb{Z}}$ in order to perform the deconvolution. Since $u$ is band limited (see section 3.1), $e(x)$ is also band limited and we can interpolate it using the sinc interpolation (4). Thus the band limited interpolated ideal observation is
$$e(x) = \sum_{n\in\mathbb{Z}} e(n)\,\mathrm{sinc}(x-n). \tag{15}$$
Then from (15) we have
$$\hat e(\xi) = \sum_{n\in\mathbb{Z}} e(n)\,e^{-in\xi}\,\mathbb{1}_{[-\pi,\pi]}(\xi). \tag{16}$$
So the ideal deconvolved image is obtained by combining (14) and (16):
$$\hat d(\xi) = \hat\gamma(\xi)\sum_{n\in\mathbb{Z}} e(n)\,e^{-in\xi}\,\mathbb{1}_{[-\pi,\pi]}(\xi). \tag{17}$$
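In this noiseless case the whole chain (blur, then inverse filter (14)) can be sketched on a discrete periodic signal. Assumptions: numpy, a periodic FFT model standing in for the Shannon interpolation, and an arbitrary smooth test landscape:

```python
import numpy as np

n_pix, v, dt = 256, 1.0, 1.0                 # |v| dt = 1 < 2: invertible blur
x = np.arange(n_pix)
u = np.exp(-0.5 * ((x - 128) / 12.0) ** 2)   # smooth test landscape

# (1/v) F(1_[0, v dt])(xi) = dt sinc(xi v dt / 2 pi) exp(-i xi v dt / 2), cf. (13)
xi = 2 * np.pi * np.fft.fftfreq(n_pix)
hat_h = dt * np.sinc(xi * v * dt / (2 * np.pi)) * np.exp(-1j * xi * v * dt / 2)

blurred = np.fft.ifft(np.fft.fft(u) * hat_h).real        # simulated samples e(n)
restored = np.fft.ifft(np.fft.fft(blurred) / hat_h).real  # inverse filter (14)
err = np.abs(restored - u).max()
```

Without noise the restoration is exact up to rounding; Theorem 3.3 below quantifies what happens once the Poisson noise of $obs(n)$ passes through the same division.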
Step 2: the real, noisy case
Definition We call estimated landscape $u_{est,sna}$ of the standard snapshot the function defined by using the observed samples $obs(n)$ of (11) instead of the ideal $e(n)$ in (17):
$$\mathcal{F}(u_{est,sna})(\xi) = \hat\gamma(\xi)\sum_{n\in\mathbb{Z}} obs(n)\,e^{-in\xi}\,\mathbb{1}_{[-\pi,\pi]}(\xi). \tag{18}$$
Theorem 3.3. For a standard snapshot using an exposure time $\Delta t$, the spectral SNR (8) is
$$SNR_{spectral}(\xi) = \mathbb{1}_{[-\pi,\pi]}(\xi)\,|\hat u(\xi)|\,\sqrt{\frac{\Delta t}{\|u\|_{L^1}}}\,\left|2\,\frac{\sin\left(\frac{\xi v\Delta t}{2}\right)}{\xi v\Delta t}\right|;$$
the expected value of the estimated landscape $\mathcal{F}(u_{est,sna})(\xi)$ from the observed samples is
$$\mathbb{E}(\mathcal{F}(u_{est,sna})(\xi)) = \hat u(\xi)\,\mathbb{1}_{[-\pi,\pi]}(\xi); \tag{19}$$
and its variance is
$$\mathrm{var}(\mathcal{F}(u_{est,sna})(\xi)) = \frac{\|u\|_{L^1}\,\mathbb{1}_{[-\pi,\pi]}(\xi)}{\Delta t\left|2\,\frac{\sin\left(\frac{\xi v\Delta t}{2}\right)}{\xi v\Delta t}\right|^2}. \tag{20}$$
Proof. We need to compute $\mathrm{var}(\mathcal{F}(u_{est,sna})(\xi))$ in order to estimate the effect of the deconvolution on the final restored image. From (18) we have
$$\mathrm{var}(\mathcal{F}(u_{est,sna})(\xi)) = |\hat\gamma(\xi)|^2\sum_{n\in\mathbb{Z}} \mathrm{var}(obs(n))\,\mathbb{1}_{[-\pi,\pi]}(\xi). \tag{21}$$
The above formula holds under the realistic assumption that the $obs(n)$ are independent random variables. Indeed, they correspond to disjoint sensors in the CCD and receive separate sets of photons; they are therefore independent Poisson variables. By (11), the $obs(n)$ being Poisson random variables, we have $\mathrm{var}(obs(n)) = \left(\frac{1}{v}\mathbb{1}_{[0,v\Delta t]} * u\right)(n)$. Combining this with (21) and (14), we obtain
$$\mathrm{var}(\mathcal{F}(u_{est,sna})(\xi)) = \frac{v\sum_{n\in\mathbb{Z}}\left(\mathbb{1}_{[0,v\Delta t]} * u\right)(n)\,\mathbb{1}_{[-\pi,\pi]}(\xi)}{\left|e^{-i\xi\frac{v\Delta t}{2}}\,2\,\frac{\sin\left(\frac{\xi v\Delta t}{2}\right)}{\xi}\right|^2} = \frac{v\,\|\mathbb{1}_{[0,v\Delta t]} * u\|_{L^1}\,\mathbb{1}_{[-\pi,\pi]}(\xi)}{\left|e^{-i\xi\frac{v\Delta t}{2}}\,2\,\frac{\sin\left(\frac{\xi v\Delta t}{2}\right)}{\xi}\right|^2} \quad \text{(using the Poisson summation formula (xxix))}$$
$$= \frac{v\,\|\mathbb{1}_{[0,v\Delta t]}\|_{L^1}\,\|u\|_{L^1}\,\mathbb{1}_{[-\pi,\pi]}(\xi)}{\left|2\,\frac{\sin\left(\frac{\xi v\Delta t}{2}\right)}{\xi}\right|^2} = \frac{\|u\|_{L^1}\,\mathbb{1}_{[-\pi,\pi]}(\xi)}{\Delta t\left|2\,\frac{\sin\left(\frac{\xi v\Delta t}{2}\right)}{\xi v\Delta t}\right|^2}. \tag{22}$$
The crucial point here is the use of the Poisson summation formula in (22). From (11), the $obs(n)$ being Poisson random variables, $\mathbb{E}(obs(n)) = \left(\frac{1}{v}\mathbb{1}_{[0,v\Delta t]} * u\right)(n)$, and following the same scheme we can compute $\mathbb{E}(\mathcal{F}(u_{est,sna}))(\xi)$:
$$\mathbb{E}(\mathcal{F}(u_{est,sna})(\xi)) = \frac{\sum_{n\in\mathbb{Z}}\left(\mathbb{1}_{[0,v\Delta t]} * u\right)(n)\,e^{-in\xi}\,\mathbb{1}_{[-\pi,\pi]}(\xi)}{\mathcal{F}(\mathbb{1}_{[0,v\Delta t]})(\xi)} = \frac{\sum_{m\in\mathbb{Z}}\mathcal{F}(\mathbb{1}_{[0,v\Delta t]} * u)(\xi+2\pi m)\,\mathbb{1}_{[-\pi,\pi]}(\xi)}{\mathcal{F}(\mathbb{1}_{[0,v\Delta t]})(\xi)}$$
(using the first form of (xxix))
$$= \frac{\mathcal{F}(\mathbb{1}_{[0,v\Delta t]})(\xi)\,\hat u(\xi)\,\mathbb{1}_{[-\pi,\pi]}(\xi)}{\mathcal{F}(\mathbb{1}_{[0,v\Delta t]})(\xi)} = \hat u(\xi)\,\mathbb{1}_{[-\pi,\pi]}(\xi).$$
Combining these two results with the definition (8) of the spectral SNR, we obtain $SNR_{spectral}(\xi) = \mathbb{1}_{[-\pi,\pi]}(\xi)\,|\hat u(\xi)|\,\sqrt{\frac{\Delta t}{\|u\|_{L^1}}}\,\left|2\,\frac{\sin\left(\frac{\xi v\Delta t}{2}\right)}{\xi v\Delta t}\right|$.
The only remaining task is the computation of the best exposure time $\Delta t$ for a known $v$, providing the best SNR without the use of a flutter shutter.
Theorem 3.4. (Best exposure time for landscape recovery) Consider a landscape $u(x-vt)$ moving at velocity $v$. Then for a snapshot the $SNR_{spectral\text{-}averaged}$ (9) is maximized when $|v|\Delta t^* \approx 1.0909$, and is equal to
$$SNR_{spectral\text{-}averaged} = \frac{\sqrt{\Delta t^*}\;\frac{1}{2\pi}\int_{-\pi}^{\pi}|\hat u(\xi)|\,d\xi}{\sqrt{\frac{\|u\|_{L^1}}{2\pi}\int_{-\pi}^{\pi}\frac{d\xi}{\left|2\,\frac{\sin\left(\frac{\xi v\Delta t^*}{2}\right)}{\xi v\Delta t^*}\right|^2}}} \approx \frac{0.1359}{\sqrt{|v|}}\;\frac{\int_{-\pi}^{\pi}|\hat u(\xi)|\,d\xi}{\sqrt{\|u\|_{L^1}}}.$$
Proof. From (20), the energy to be minimized in order to guarantee the best $SNR_{spectral\text{-}averaged}$ after deconvolution is
$$E(\Delta t) = \int_{-\pi}^{\pi}\frac{d\xi}{\Delta t\left|2\,\frac{\sin\left(\frac{\xi v\Delta t}{2}\right)}{\xi v\Delta t}\right|^2} = \frac{v^2\Delta t}{4}\int_{-\pi}^{\pi}\frac{\xi^2}{\sin^2\left(\frac{\xi v\Delta t}{2}\right)}\,d\xi.$$
Then
$$E'(\Delta t) = \frac{v^2}{4}\left(\int_{-\pi}^{\pi}\frac{\xi^2}{\sin^2\left(\frac{\xi v\Delta t}{2}\right)}\,d\xi + \Delta t\int_{-\pi}^{\pi}\frac{-\xi^3 v\cos\left(\frac{\xi v\Delta t}{2}\right)}{\sin^3\left(\frac{\xi v\Delta t}{2}\right)}\,d\xi\right) = \frac{v^2}{4}\int_{-\pi}^{\pi}\xi^2\,\frac{\sin\left(\frac{\xi v\Delta t}{2}\right) - \xi v\Delta t\,\cos\left(\frac{\xi v\Delta t}{2}\right)}{\sin^3\left(\frac{\xi v\Delta t}{2}\right)}\,d\xi,$$
which vanishes when $b^* = v\Delta t^* \approx 1.0909$. Then (9), (19), (20) entail
$$SNR_{spectral\text{-}averaged} = \frac{\sqrt{\Delta t^*}\;\frac{1}{2\pi}\int_{-\pi}^{\pi}|\hat u(\xi)|\,d\xi}{\sqrt{\frac{\|u\|_{L^1}}{2\pi}\,J}}, \qquad J = \int_{-\pi}^{\pi}\frac{d\xi}{\left|2\,\frac{\sin\left(\frac{\xi b^*}{2}\right)}{\xi b^*}\right|^2},$$
and a numerical evaluation of $J$ with $b^* = 1.0909$ and $\Delta t^* = \frac{b^*}{|v|}$ gives
$$SNR_{spectral\text{-}averaged} \approx \frac{0.1359}{\sqrt{|v|}}\;\frac{\int_{-\pi}^{\pi}|\hat u(\xi)|\,d\xi}{\sqrt{\|u\|_{L^1}}}.$$
Definition (Best snapshot.) Given a landscape $u(x-vt)$ moving at velocity $v$, we call best snapshot strategy the use of the exposure time $\Delta t^* \approx \frac{1.0909}{|v|}$.
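The optimal blur length $b^* = |v|\Delta t^* \approx 1.0909$ can be retrieved numerically by minimizing the deconvolution noise energy, which for fixed $v$ is proportional to $b\int \xi^2/\sin^2(\xi b/2)\,d\xi$. A sketch assuming numpy (grid resolutions are arbitrary; a simple Riemann sum replaces the exact integral):

```python
import numpy as np

xi = np.linspace(1e-6, np.pi, 20_000)   # integrand is even: integrate one half
dxi = xi[1] - xi[0]

def energy(b):
    """b * integral of xi^2 / sin^2(xi b / 2), proportional to E(dt) in the proof."""
    return b * np.sum(xi ** 2 / np.sin(xi * b / 2) ** 2) * dxi

bs = np.linspace(0.5, 1.9, 1401)        # candidate blur lengths b = |v| dt, all < 2
b_star = bs[np.argmin([energy(b) for b in bs])]
```

The minimizer should land close to the 1.0909 of Theorem 3.4.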
3.2.2 The multi-image fusion to improve the SNR
The multi-image acquisition strategy consists in taking a burst of N images using an exposure time ∆t. ∆t is taken small enough that the elementary blur of each of the N images is negligible (see section 3.2.1). Each image
is registered and added to the former ones to obtain a single accumulated
image. See [5] for an analysis of the feasibility of this technique. We shall assume here that the registration operation is error-free, so that the operation
amounts to an increased exposure time.
Theorem 3.5. The multi-image acquisition strategy using $N$ images increases the SNR by a factor $\sqrt{N}$ compared to a standard snapshot.
Proof. Before the registration operation, the $k$-th observed image (for $k \in \{0,\dots,N-1\}$) at position $x$ is (a realization of)
$$P_l\left([k\Delta t,(k+1)\Delta t] \times [x-\tfrac{1}{2}, x+\tfrac{1}{2}]\right) \sim \mathcal{P}\left(\int_{k\Delta t}^{(k+1)\Delta t} u(x-vt)\,dt\right) \sim \mathcal{P}\left(\int_0^{\Delta t} u(x-vk\Delta t-vt)\,dt\right)$$
$$\sim \mathcal{P}\left(\frac{1}{v}\int_0^{v\Delta t} u(x-vk\Delta t-t)\,dt\right) \sim \mathcal{P}\left(\left(\frac{1}{v}\mathbb{1}_{[0,v\Delta t]} * u\right)(x-vk\Delta t)\right).$$
The registration operation consists in a perfect translation $\tau(x) = x + vk\Delta t$ of the previous observation, hence simulating a stationary landscape. After registration, the $k$-th observed image for $k \in \{0,\dots,N-1\}$ at position $x$ is (a realization of)
$$\mathcal{P}\left(\left(\frac{1}{v}\mathbb{1}_{[0,v\Delta t]} * u\right)(x)\right).$$
Finally the image is constructed by adding all the observations, so it is (a realization of)
$$\sum_{k=0}^{N-1}\mathcal{P}\left(\left(\frac{1}{v}\mathbb{1}_{[0,v\Delta t]} * u\right)(x)\right) \sim \mathcal{P}\left(N\left(\frac{1}{v}\mathbb{1}_{[0,v\Delta t]} * u\right)(x)\right). \tag{23}$$
(This last equivalence uses the independence of the Poisson random variables, justified by the disjoint time intervals.) Following the scheme used in Theorem 3.3, from (23) we get the estimate
$$u_{est,mul} \sim \frac{1}{N}\,\mathcal{P}\left(N\left(\frac{1}{v}\mathbb{1}_{[0,v\Delta t]} * u\right)(x)\right),$$
whose variance is $N$ times smaller. Thus the factor $N$ remains in all equations, and from (11) and (20) we deduce that the SNR is increased by a factor $\sqrt{N}$. In other words, multi-image fusion with a sufficient $N$ permits to achieve any SNR.
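Theorem 3.5 lends itself to the same kind of Monte Carlo check (a sketch assuming numpy and an error-free registration, as in the proof; the photon level and burst size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, trials, N = 30.0, 100_000, 16      # expected photons per short snapshot

single = rng.poisson(lam, size=trials)                   # one snapshot
fused = rng.poisson(lam, size=(trials, N)).sum(axis=1)   # N registered snapshots added
gain = (fused.mean() / fused.std()) / (single.mean() / single.std())
```

The measured gain should be close to $\sqrt{16} = 4$.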
4 The flutter shutter
After having treated the classic image acquisition strategies, we are now in a position to treat the various flutter shutter strategies and to compare them to the classic ones. Two things are at stake: first, to prove that the various flutter shutters actually work, and second, to evaluate the SNR of the resulting image and to compare it to the SNR of the classic strategies. The hope would be that the flutter shutter retains the very interesting feature of multi-image denoising, namely an increase of the SNR proportional to $\sqrt{N\Delta t}$, the square root of the total exposure time. We shall see that this is not so.
4.1 The numerical flutter shutter
The numerical flutter shutter method consists in a numerical sensor gain modification taking place after the acquisition by the sensor. Roughly speaking, the camera takes a burst of $N$ images using an exposure time $\Delta t$. The $k$-th image is multiplied, for $k \in \{0,\dots,N-1\}$, by a gain $\alpha_k \in \mathbb{R}$. Then all images are added to obtain a single observed image, the flutter shutter. $\Delta t$ must be small enough so that the blur of each image is negligible (definition in section 3.2.1). This technique is similar to the multi-image acquisition strategy but does not use any registration. If it is realizable, its interest may seem limited: why not keep all the images instead of adding them? One obvious reason is compression, particularly for Earth observation satellites. In that case the motion blur due to a drift in the satellite trajectory estimate could be eliminated by a flutter shutter, without any additional transmission burden if only the flutter shutter image (the sum) is transmitted.
The $k$-th acquired elementary image at a pixel at position $n$ is a realization of
$$\mathcal{P}\left(\int_{k\Delta t}^{(k+1)\Delta t} u(n-vt)\,dt\right).$$
The flutter shutter observation is obtained by combining the $k$-th outputs with weights $\alpha_k$. Thus the flutter shutter output at a pixel centered at $n$ is
$$obs(n) \sim \sum_{k=0}^{N-1}\alpha_k\,\mathcal{P}\left(\int_{k\Delta t}^{(k+1)\Delta t} u(n-vt)\,dt\right) \tag{24}$$
where by construction the $obs(n)$ are obtained for $n \in \mathbb{Z}$ and are independent. Indeed, the sensors are disjoint and do not receive the same photons. In the following it will be useful to associate with the flutter shutter its code, defined as the vector $(\alpha_k)_{k=0,\dots,N-1}$, and its flutter shutter function, defined by $\alpha(t) = \alpha_k$ for $t \in [k,k+1[$.
Definition Let $(\alpha_0,\dots,\alpha_{N-1}) \in \mathbb{R}^N$ be a flutter shutter code. We call numerical flutter shutter samples at position $n$ of the landscape $u$ at velocity $v$ the random variables
$$obs(n) \sim \sum_{k=0}^{N-1}\alpha_k\,\mathcal{P}\left(\int_{k\Delta t}^{(k+1)\Delta t} u(n-vt)\,dt\right). \tag{25}$$
We call numerical flutter shutter its band limited interpolate
$$obs(x) \sim \sum_{n\in\mathbb{Z}} obs(n)\,\mathrm{sinc}(x-n).$$
We call flutter shutter function the function
$$\alpha(t) = \sum_{k=0}^{N-1}\alpha_k\,\mathbb{1}_{[k,k+1[}(t). \tag{26}$$
Remark It is good to keep in mind the following trivial and less trivial examples:
1. $\alpha_k = 1$, $\forall k \in \{0,\dots,N-1\}$ (pure accumulation, prone to motion blur);
2. $\alpha_k \in \{0,1\}$, $\forall k \in \{0,\dots,N-1\}$, with $\sum_k \alpha_k = \frac{N}{2}$ (Raskar's flutter shutter has this generic form);
3. $\alpha_0 = 1$ and $\alpha_k = 0$, $\forall k \in \{1,\dots,N-1\}$ (standard snapshot).
The motionless v = 0 case
In order to check what this means, we shall first consider the ideal motionless case $v = 0$. In this situation the acquired image has simply lost some photons with respect to an accumulation image, because some of the $\alpha_k$ are null. Thus in the motionless case, the flutter shutter intuitively cannot but decrease the SNR. This is easily confirmed by the following calculation. We have
$$obs(x) \sim \sum_{k=0}^{N-1}\alpha_k\,\mathcal{P}(\Delta t\,u(x)).$$
Thus
$$\mathbb{E}(obs(x)) = \sum_{k=0}^{N-1}\alpha_k\,\Delta t\,u(x) \tag{27}$$
and
$$\mathrm{var}(obs(x)) = \sum_{k=0}^{N-1}\alpha_k^2\,\Delta t\,u(x). \tag{28}$$
From (27) we deduce that the estimated landscape $u_{est}$ is obtained using
$$u_{est}(x) \sim \frac{obs(x)}{\sum_{k=0}^{N-1}\alpha_k\,\Delta t}. \tag{29}$$
Let $r \sim u_{est} - u$ be the residual error; then from (27)-(28) we get
$$\mathbb{E}(r(x)) = \frac{\mathbb{E}(obs(x))}{\sum_{k=0}^{N-1}\alpha_k\,\Delta t} - u(x) = 0.$$
Hence $\mathcal{F}(u_{est})$ is an unbiased estimator of $\hat u$. Moreover, from (28) and (29) we deduce
$$\mathrm{var}(r(x)) = \frac{\mathrm{var}(obs(x))}{\left(\sum_{k=0}^{N-1}\alpha_k\right)^2\Delta t^2} = \frac{u(x)\sum_{k=0}^{N-1}\alpha_k^2}{\left(\sum_{k=0}^{N-1}\alpha_k\right)^2\Delta t}.$$
Thus, by (7), $SNR(u_{est}) = \sqrt{u(x)\,\Delta t\,\frac{\left(\sum_{k=0}^{N-1}\alpha_k\right)^2}{\sum_{k=0}^{N-1}\alpha_k^2}}$. Since by the Cauchy-Schwarz inequality $\left(\sum_k \alpha_k\right)^2 \le N\sum_k \alpha_k^2$, the flutter shutter is less favorable than a mere accumulation.
To give an example, consider a Raskar code where αk = 0 or 1 and
the number of non null αk is N/2. Then the SNR is sqrt( u(x)Δt (N/2)²/(N/2) ) =
sqrt( u(x)Δt N/2 ), to be compared to the pure accumulation case where αk ≡ 1
and the SNR is sqrt( u(x)Δt N²/N ) = sqrt( u(x)Δt N ). Thus the Raskar flutter shutter
entails in the motionless case a √2 loss factor for the SNR. We now pass to
the more interesting general case, namely when there is a motion.
4.1 Flutter shutter in the presence of motion
Theorem 4.1. The observed samples of the numerical flutter shutter are
such that, for n ∈ Z,

E(obs(n)) = ( (1/v) α(·/(vΔt)) ∗ u )(n)   (30)

and

var(obs(n)) = ( (1/v) α²(·/(vΔt)) ∗ u )(n).   (31)
Proof. From the numerical flutter shutter samples definition (25),

E(obs(x)) = Σ_{k=0}^{N−1} αk ∫_{kΔt}^{(k+1)Δt} u(x − vt) dt = Σ_{k=0}^{N−1} αk ∫_{k}^{k+1} Δt u(x − vΔt s) ds
= ∫_{0}^{N} Δt α(s) u(x − vΔt s) ds,   (32)

where we recall that

α = Σ_{k=0}^{N−1} αk 1_{[k,k+1[}.
Thus,

E(obs(x)) = ∫_{0}^{NvΔt} (1/v) α(y/(vΔt)) u(x − y) dy = ∫_{−∞}^{+∞} (1/v) α(y/(vΔt)) u(x − y) dy
= ( (1/v) α(·/(vΔt)) ∗ u )(x).   (33)
Similarly, from (24),

var(obs(x)) = Σ_{k=0}^{N−1} αk² ∫_{kΔt}^{(k+1)Δt} u(x − vt) dt = ( (1/v) α²(·/(vΔt)) ∗ u )(x).
Notice that obs(x) is not necessarily a Poisson random variable. However, if all αk are equal to 0 or 1, then by (32)

obs(x) ∼ P( ( (1/v) α(·/(vΔt)) ∗ u )(x) ),

because we are adding independent Poisson random variables. On the other
hand, if αk ∉ {0, 1} for some k, and X ∼ P(λ), then αk X is not a Poisson
random variable.
4.1.1 Computing the inverse filter
Step 1: the noiseless case. Let us examine first the discrete noiseless
case, when obs(n) = ( (1/v) α(·/(vΔt)) ∗ u )(n); obs(n) is obtained for n ∈ Z but,
being band limited, can be interpolated to obs(x) for any x ∈ R. Then

F( (1/v) α(·/(vΔt)) ∗ u )(ξ) = Δt û(ξ) α̂(ξvΔt).

By hypothesis (3.1) we assumed that û(ξ) = 0 for |ξ| > π. Hence for the
invertibility we must only require that |α̂(ξvΔt)| > 0 for ξ ∈ [−π, π].
Definition We say that the flutter shutter with code (αk)k=0,...,N−1 is invertible (for velocities v smaller than v0) if |α̂(ξ)| > 0 for ξ ∈ [−πv0Δt, πv0Δt].
If the flutter shutter is invertible, we can consider the inverse filter γ defined
by

γ̂(ξ) = 1_{[−π,π]}(ξ) / ( Δt α̂(ξvΔt) ).   (34)

Since α ∈ L¹(R), ξ ↦ α̂(ξ) is bounded, continuous and nonzero on [−π, π],
and therefore γ̂ is bounded and supported on [−π, π]. Therefore γ is C^∞,
bounded, and band limited.
We shall as usual define the recovered landscape from noisy data by
the formulae that would be valid for noiseless data. Assume we observe
e(n) = E(obs(n)) for n ∈ Z and wish to compute ê(ξ) from the discrete
observed samples (e(n))n∈Z. Since e(x) is band limited, we can interpolate it using
the band limited interpolation (4). The band limited interpolate of the ideal observation is

e(x) = Σ_{n∈Z} e(n) sinc(x − n).   (35)

Then from (35) we have

ê(ξ) = Σ_{n∈Z} e(n) e^{−inξ} 1_{[−π,π]}(ξ).   (36)

So the ideal deconvolved landscape d(x), obtained by combining (34) and
(36), is

d̂(ξ) = Σ_{n∈Z} e(n) e^{−inξ} 1_{[−π,π]}(ξ) / ( Δt α̂(ξvΔt) ).   (37)

We shall now adopt the same formulae for the noisy case.
Flutter shutter landscape recovery in the real noisy case
Definition Assume that the flutter shutter with code (αk)k=0,...,N−1 is invertible. We call estimated landscape uest,num of the numerical flutter shutter the function defined by (using the obtained samples obs(n) of (24) instead
of the ideal e(n) in (37))

F(uest,num)(ξ) = Σ_{n∈Z} obs(n) e^{−inξ} 1_{[−π,π]}(ξ) / ( Δt α̂(ξvΔt) ).   (38)
Theorem 4.2. The numerical flutter shutter has a spectral SNR (8) equal
to

SNR(ξ) = 1_{[−π,π]}(ξ) √Δt |û(ξ)| |α̂(ξvΔt)| / ( sqrt(||u||_{L¹}) ||α||_{L²} );

the expected value of the estimated landscape F(uest,num)(ξ) from the observed samples is

E(F(uest,num)(ξ)) = û(ξ) 1_{[−π,π]}(ξ);

and the variance is

var(F(uest,num)(ξ)) = ||α||²_{L²} ||u||_{L¹} 1_{[−π,π]}(ξ) / ( Δt |α̂(ξvΔt)|² ).   (39)
Proof. By (31, 38),

var(F(uest,num)(ξ)) = var( Σ_{n∈Z} obs(n) e^{−inξ} 1_{[−π,π]}(ξ) ) / ( Δt² |α̂(ξvΔt)|² )
= Σ_{n∈Z} var(obs(n)) |e^{−inξ} 1_{[−π,π]}(ξ)|² / ( Δt² |α̂(ξvΔt)|² )
= Σ_{n∈Z} ( (1/v) α²(·/(vΔt)) ∗ u )(n) 1_{[−π,π]}(ξ) / ( Δt² |α̂(ξvΔt)|² )
= || (1/v) α²(·/(vΔt)) ∗ u ||_{L¹} 1_{[−π,π]}(ξ) / ( Δt² |α̂(ξvΔt)|² )   (by the Poisson summation formula)   (40)
= (1/v) ||α²(·/(vΔt))||_{L¹} ||u||_{L¹} 1_{[−π,π]}(ξ) / ( Δt² |α̂(ξvΔt)|² )
= (vΔt/v) ||α||²_{L²} ||u||_{L¹} 1_{[−π,π]}(ξ) / ( Δt² |α̂(ξvΔt)|² )
= ||α||²_{L²} ||u||_{L¹} 1_{[−π,π]}(ξ) / ( Δt |α̂(ξvΔt)|² ).
Here again the crucial point is the use of the Poisson summation formula
(xxix) in equation (40). Following the same scheme and starting from (33),
E(F(uest,num)(ξ)) can be computed by the Poisson formula, using the fact
that u is band limited in [−π, π]:

E(F(uest,num)(ξ)) = E( Σ_{n∈Z} obs(n) e^{−inξ} ) 1_{[−π,π]}(ξ) / ( Δt α̂(ξvΔt) )
= Σ_{m∈Z} F( (1/v) α(·/(vΔt)) ∗ u )(ξ + 2πm) 1_{[−π,π]}(ξ) / ( Δt α̂(ξvΔt) )
= F( (1/v) α(·/(vΔt)) ∗ u )(ξ) 1_{[−π,π]}(ξ) / ( Δt α̂(ξvΔt) )
= (vΔt/v) α̂(ξvΔt) û(ξ) 1_{[−π,π]}(ξ) / ( Δt α̂(ξvΔt) ) = û(ξ) 1_{[−π,π]}(ξ).
From (39) and using the definition of the spectral SNR (8), we obtain

SNR^{spectral}(uest,num)(ξ) = 1_{[−π,π]}(ξ) √Δt |û(ξ)| |α̂(ξvΔt)| / ( sqrt(||u||_{L¹}) ||α||_{L²} ).
Remark From (39) we deduce that var(F(uest,num)(ξ)) is invariant under
changing α into λα for λ ≠ 0 (rescaling): as could be expected, the flutter
shutter code is defined up to a multiplicative constant.
Remark From (26) it follows that

α̂(ξ) = Σ_{k=0}^{N−1} αk F(1_{[k,k+1[})(ξ) = (2 sin(ξ/2)/ξ) Σ_{k=0}^{N−1} αk e^{−iξ(2k+1)/2}
= sinc(ξ/(2π)) e^{−iξ/2} Σ_{k=0}^{N−1} αk e^{−ikξ}.   (41)
Notice that this is not the DFT of the vector α, which means that in the
literature on the flutter shutter the simulations neglect the motion blur on
the intervals of length Δt.
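To make this remark concrete, here is a small Python sketch (ours; the three-tap code is purely illustrative) evaluating α̂(ξ) through formula (41). The sinc(ξ/2π) factor, which accounts for the blur inside each unit interval, is exactly what a plain DFT-like sum of the code misses:

```python
import cmath
import math

def alpha_hat(code, xi):
    # Continuous Fourier transform (41) of the piecewise constant flutter
    # shutter function: a sinc modulation and a half-sample phase shift
    # multiplying the DFT-like sum of the code.
    dft_like = sum(a * cmath.exp(-1j * k * xi) for k, a in enumerate(code))
    sinc = 1.0 if xi == 0.0 else math.sin(xi / 2.0) / (xi / 2.0)
    return sinc * cmath.exp(-1j * xi / 2.0) * dft_like

code = [1.0, 0.0, 1.0]   # illustrative binary code
xi = 2.0
exact = alpha_hat(code, xi)
plain = sum(a * cmath.exp(-1j * k * xi) for k, a in enumerate(code))
# |exact| < |plain|: the intra-interval blur attenuates high frequencies.
```

At ξ = 0 the formula reduces to α̂(0) = Σ αk, the total gain, as it should.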
Remark There are two different acquisition tools implementing a flutter
shutter with a moving sensor (or landscape). The first one has been discussed previously and consists in a mere computational device, using the
maximal sensor capability. In that case obs(x) is given by (24) and is not a
Poisson random variable in general. The other technical possibility is to implement the flutter shutter function on the sensor as an optical (temporally
changing) filter. This case is discussed below as the analog flutter shutter.
4.2 The analog flutter shutter
Raskar’s flutter shutter method consists in a (binary) temporal mask in front
of the sensor. From a practical point of view, the shutter of the camera opens
and closes during the acquisition process. The proposed generalization uses
temporal sunglasses allowing smoother (nonbinary, non piecewise constant)
gain modifications than a simple shutter could perform. The gain α(t) at time
t is here defined as the proportion of photons coming from the noisy
landscape u that are allowed to travel to the pixel sensor, meaning that
only positive (actually [0, 1]-valued) kernels are feasible. The device (roughly
speaking a generalized shutter) controlling the percentage of photons from
the landscape allowed to travel to the sensor obviously takes place before
the sensor. Hence the observation is always a Poisson random variable. The
analog flutter shutter method consists in the design of an invertible flutter
shutter function α(t).
Definition (Analog flutter shutter function.)
Let α(t) ∈ [0, 1] be the gain used at time t. We call analog flutter shutter
function a positive function α ∈ L¹(R) ∩ L²(R).
Let α be an (analog) flutter shutter function; then the acquired image at
position n is (a realization of)

obs(n) ∼ P( ∫_{−∞}^{∞} α(t) u(n − vt) dt ) ∼ P( (1/v) ∫_{−∞}^{∞} α(s/v) u(n − s) ds ) ∼ P( ( (1/v) α(·/v) ∗ u )(n) ),

where obs(n) is known only for n ∈ Z.

Definition Let α be an analog flutter shutter function. We call analog
flutter shutter samples at position n of the landscape u at velocity v the
random variable

obs(n) ∼ P( ( (1/v) α(·/v) ∗ u )(n) ).   (42)

We call analog flutter shutter its band limited interpolate

obs(x) ∼ Σ_{n∈Z} obs(n) sinc(x − n).
Theorem 4.3. The observed samples of the analog flutter shutter are such
that, for n ∈ Z,

E(obs(x)) = ( (1/v) α(·/v) ∗ u )(x)   (43)

and

var(obs(x)) = ( (1/v) α(·/v) ∗ u )(x).   (44)

Proof. Directly from the analog flutter shutter samples definition (42).
Remark The main difference with the numerical flutter shutter is that the
observed image is always a Poisson random variable.
The calculations on the analog flutter shutter are almost identical to those
of the numerical flutter shutter.
Theorem 4.4. The analog flutter shutter method has a spectral SNR equal
to

SNR(F(uest,ana)(ξ)) = 1_{[−π,π]}(ξ) |û(ξ)| |α̂(ξv)| / sqrt( ||u||_{L¹} ||α||_{L¹} );

the expected value of the estimated landscape F(uest,ana)(ξ) from the observed samples is

E(F(uest,ana)(ξ)) = û(ξ) 1_{[−π,π]}(ξ);

and the variance is

var(F(uest,ana)(ξ)) = ||α||_{L¹} ||u||_{L¹} 1_{[−π,π]}(ξ) / |α̂(ξv)|².   (45)
Proof. Similarly to the numerical flutter shutter, the inverse filter is defined by 1/α̂(ξv); then

var(F(uest,ana)(ξ)) = var( Σ_{n∈Z} obs(n) e^{−inξ} / α̂(ξv) ) 1_{[−π,π]}(ξ)
= Σ_{n∈Z} ( (1/v) α(·/v) ∗ u )(n) 1_{[−π,π]}(ξ) / |α̂(ξv)|²
= (1/v) ||α(·/v) ∗ u||_{L¹} 1_{[−π,π]}(ξ) / |α̂(ξv)|²   (by (xxix))
= (1/v) ||α(·/v)||_{L¹} ||u||_{L¹} 1_{[−π,π]}(ξ) / |α̂(ξv)|²
= ||α||_{L¹} ||u||_{L¹} 1_{[−π,π]}(ξ) / |α̂(ξv)|².

Moreover, by the same calculations as for the numerical flutter shutter,

E(F(uest,ana)(ξ)) = E( Σ_{n∈Z} obs(n) e^{−inξ} ) 1_{[−π,π]}(ξ) / α̂(ξv) = û(ξ) 1_{[−π,π]}(ξ).

Therefore,

SNR^{spectral}(F(uest,ana))(ξ) = 1_{[−π,π]}(ξ) |û(ξ)| |α̂(ξv)| / sqrt( ||u||_{L¹} ||α||_{L¹} ).
4.3 Motion Invariant Photography
We shall see that the motion-invariant photography method proposed in
[21] is equivalent to a flutter shutter method using the specific (non piecewise
constant) gain function α_MIP(t) = 1_{[0,a(NΔt)²]}(t) · 1/(2√(at)), where a is the acceleration
and NΔt is the exposure time. In the motion-invariant photography method the gain remains constant, equal to one. The landscape is
moving at a constant speed v and the camera is moving in the opposite direction with a constant
acceleration a, so that the apparent induced velocity is v(t) = at + v. Thus, with the
formalism proposed in the former sections, the observed value is a realization of

obs(x) ∼ P( ∫_{0}^{NΔt} u(x − at²) dt ) ∼ P( ∫_{0}^{a(NΔt)²} (1/(2√(at))) u(x − t) dt )   (46)

∼ P( ( 1_{[0,a(NΔt)²]}(t) (1/(2√(at))) ∗ u )(x) ).   (47)

Thus the motion-invariant photography method is equivalent to the use of
an analog flutter shutter method.

Corollary 4.5. The Motion Invariant Photography (MIP) inherits the
same limitation as the analog flutter shutter concerning the SNR.

Proof. A consequence of (46–47) and of section 4.2 is that the motion-invariant
photography is an analog flutter shutter with a flutter function of the form
α_MIP(t) = 1_{[0,a(NΔt)²]}(t) · 1/(2√(at)).
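The change of variables s = at² behind (46)–(47) can be checked numerically. The Python sketch below (our illustration; the smooth test landscape u and the parameters a = T = 1 are arbitrary choices) compares the two integrals with a midpoint rule, the second one having an integrable 1/√s singularity at 0:

```python
import math

def u(y):
    # arbitrary smooth, positive test landscape
    return 2.0 + math.cos(y)

def time_integral(x, a, T, n=200000):
    # midpoint rule for the time-domain integral: int_0^T u(x - a t^2) dt
    h = T / n
    return sum(u(x - a * ((i + 0.5) * h) ** 2) for i in range(n)) * h

def kernel_integral(x, a, T, n=200000):
    # midpoint rule for the convolution with the MIP kernel:
    # int_0^{a T^2} u(x - s) / (2 sqrt(a s)) ds
    S = a * T * T
    h = S / n
    return sum(u(x - s) / (2.0 * math.sqrt(a * s))
               for s in ((i + 0.5) * h for i in range(n))) * h

lhs = time_integral(0.3, 1.0, 1.0)
rhs = kernel_integral(0.3, 1.0, 1.0)
```

Both quadratures agree to a few decimals, illustrating that the MIP acquisition is indeed the analog flutter shutter with gain α_MIP.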
4.4 Piecewise constant analog flutter shutter
The goal of this subsection is to give an easy comparison between the analog
and numerical flutter shutters in terms of SNR. The comparison is
done for positive piecewise constant flutter shutter functions, which satisfy the
requirements of both the numerical (section 4.1) and analog (section 4.2) flutter
shutters.

Theorem 4.6. (Piecewise constant analog flutter shutter.) Let α(t)
be a piecewise constant analog flutter shutter function. Then the analog
flutter shutter samples are

obs(n) ∼ P( ( (1/v) α(·/(vΔt)) ∗ u )(n) ).   (48)
Proof. Similarly to (26), let α(t) = Σ_{k=0}^{N−1} αk 1_{[k,k+1[}(t) be a piecewise constant analog flutter shutter function, where αk ∈ [0, 1] is the gain used on
the time interval [k, k + 1[. Then the acquired image at position n is (a
realization of)

obs(n) ∼ P( Σ_{k=0}^{N−1} αk ∫_{kΔt}^{(k+1)Δt} u(n − vt) dt ) ∼ P( Σ_{k=0}^{N−1} αk ∫_{k}^{k+1} Δt u(n − vΔt s) ds )
∼ P( ∫_{0}^{N} Δt α(s) u(n − vΔt s) ds ) ∼ P( ∫_{0}^{NvΔt} (1/v) α(s/(vΔt)) u(n − s) ds )
∼ P( ( (1/v) α(·/(vΔt)) ∗ u )(n) ).
Corollary 4.7. The observed samples of the analog flutter shutter are such
that, for n ∈ Z,

E(obs(x)) = ( (1/v) α(·/(vΔt)) ∗ u )(x)

and

var(obs(x)) = ( (1/v) α(·/(vΔt)) ∗ u )(x).

Proof. Directly from (48).
The only significant difference is illustrated in the next lemma.

Lemma 4.8. The variance of the analog flutter shutter observation (44) is
larger than or equal to the variance of the numerical flutter shutter observation
(31), with equality when αk ∈ {0, 1} for all k.

Proof. Since 0 ≤ α(t) ≤ 1 (because it is a proportion of incoming photons
allowed to travel to the sensor) we have α(t/v) ≥ α²(t/v). Hence (α(·/v) ∗
u)(x) ≥ (α²(·/v) ∗ u)(x) (because u ≥ 0). Using (44) and (31) concludes the
proof.

Since the expected value (43) of the analog flutter shutter is equal to the expected value (30) of the
numerical flutter shutter, the inverse filter is equal to the inverse filter (34)
of the numerical flutter shutter.
Theorem 4.9. Let α be a positive piecewise constant flutter shutter function;
then the analog flutter shutter method has a spectral SNR equal to

SNR(F(uest,ana)(ξ)) = 1_{[−π,π]}(ξ) √Δt |û(ξ)| |α̂(ξvΔt)| / sqrt( ||u||_{L¹} ||α||_{L¹} ),

which is smaller than or equal to the spectral SNR of the numerical flutter shutter,

SNR(F(uest,num)(ξ)) = 1_{[−π,π]}(ξ) √Δt |û(ξ)| |α̂(ξvΔt)| / ( sqrt(||u||_{L¹}) ||α||_{L²} ).
Proof. Since the inverse filter is the inverse filter defined by (34), we have

var(F(uest,ana)(ξ)) = var( Σ_{n∈Z} obs(n) e^{−inξ} / ( Δt α̂(ξvΔt) ) ) 1_{[−π,π]}(ξ)
= Σ_{n∈Z} ( (1/v) α(·/(vΔt)) ∗ u )(n) 1_{[−π,π]}(ξ) / ( Δt² |α̂(ξvΔt)|² )
= (1/v) ||α(·/(vΔt)) ∗ u||_{L¹} 1_{[−π,π]}(ξ) / ( Δt² |α̂(ξvΔt)|² )   (by (xxix))
= (1/v) ||α(·/(vΔt))||_{L¹} ||u||_{L¹} 1_{[−π,π]}(ξ) / ( Δt² |α̂(ξvΔt)|² )
= ||α||_{L¹} ||u||_{L¹} 1_{[−π,π]}(ξ) / ( Δt |α̂(ξvΔt)|² ).
Moreover, by the same calculations as for the numerical flutter shutter,

E(F(uest,ana)(ξ)) = E( Σ_{n∈Z} obs(n) e^{−inξ} ) 1_{[−π,π]}(ξ) / ( Δt α̂(ξvΔt) ) = û(ξ) 1_{[−π,π]}(ξ).

Therefore,

SNR^{spectral}(F(uest,ana))(ξ) = 1_{[−π,π]}(ξ) √Δt |û(ξ)| |α̂(ξvΔt)| / sqrt( ||u||_{L¹} ||α||_{L¹} ).
From (45) and using Lemma 4.8 we obtain

var(F(uest,ana)(ξ)) ≥ Σ_{n∈Z} ( (1/v) α²(·/(vΔt)) ∗ u )(n) 1_{[−π,π]}(ξ) / ( Δt² |α̂(ξvΔt)|² ) = var(F(uest,num)(ξ)).

Taking into account that the expected values of the numerical and analog flutter
shutters are equal (30, 43) yields

SNR^{spectral}(F(uest,ana))(ξ) ≤ SNR^{spectral}(F(uest,num))(ξ).

This also means that the variance (45) of the estimated landscape using
an analog flutter shutter method, var(F(uest,ana)), is larger than or equal to the
variance (39) var(F(uest,num)) using a numerical flutter shutter method
when α is positive and piecewise constant. Notice that (45) is no longer invariant
under changing α into tα for t ≠ 0 (rescaling). This means that the
optimization of α will be easier with the numerical flutter shutter.
5 Equivalence of piecewise constant and continuous flutter shutter functions
As stated in section 4, the numerical flutter shutter can deal with piecewise constant positive or negative gains. An analog flutter shutter can be
used with continuous (at least theoretically) but positive gains. It may seem
at first sight that neither is more flexible than the other. Indeed, piecewise
constant functions are easier to implement practically on the camera. On
the other hand, there is no reason for the optimal gain function to be either piecewise constant or positive. However, we shall see a way to deal
with continuous functions with a numerical flutter shutter and compare the
resulting SNR.
Definition (Equivalent flutter shutter function.)
Consider a numerical flutter shutter code (αk)k=0,...,N−1 and an analog flutter shutter function β(t). We say that the numerical flutter shutter
using α(t) = Σ_{k=0}^{N−1} αk 1_{[k,k+1[}(t) is equivalent to the analog flutter shutter
using β(t) if their observed samples have the same expected value,
namely if E(obs_α(n)) = E(obs_β(n)) for n ∈ Z.
In other words, two flutter shutter functions are said to be equivalent if the observed values are equal up to the noise. This is a necessary definition:
the landscape u being band limited, from (30, 43) we deduce that
there are many different flutter shutter functions leading to the “same” observation. This does not mean that the acquisition strategies are equivalent,
as their variances may differ.
Theorem 5.1. (Bridge between coded and continuous flutter shutter functions)
Let α(t) be a piecewise constant gain function; then there exists a continuous gain function β(t) such that the two flutter shutter strategies using α(t)
and β(t) are equivalent.
Similarly, from a continuous gain function β(t) we are able to construct a
piecewise constant gain function α(t) such that the two flutter shutter strategies
are equivalent.

Proof. Step 1. Consider a landscape u(x − vt) moving at velocity v and a
numerical flutter shutter function α = Σ_{k∈Z} ak 1_{[k−1/2,k+1/2[}, a piecewise
constant function (with α̂(ξ) ≠ 0 for ξ ∈ [−πvΔt, πvΔt] for obvious reasons);
then from (30) we have

E(ôbs(ξ)) = Δt α̂(ξvΔt) û(ξ) = Δt α̂(ξvΔt) 1_{[−π,π]}(ξ) û(ξ) = β̂(ξv) û(ξ).

The previous relation emphasizes the fact that the observable landscape
u is band limited, with û supported by [−π, π]. Consequently, no matter what α̂(ξvΔt) is equal to outside [−π, π], the observed samples remain unchanged. In other words, the apparent gain control function is equal to
β = Δt F^{−1}( α̂(ξΔt) 1_{[−π,π]}(ξ/v) ), which is C^∞.
Step 2. Reciprocally, consider a landscape u(x − vt) moving at velocity
v and let β(t) be a C⁰ (or C^∞) function. The goal is here to construct a
piecewise constant function α(t) such that (from (30, 43)) α̂(ξvΔt) = β̂(vξ)/Δt
for all ξ ∈ [−π, π]. Let

Δt = 1/|v|   and   ak = (1/Δt) ∫_{−π}^{π} ( β̂(vs) / ( sinc(svΔt/(2π)) e^{−is/2} ) ) e^{iks} ds/(2π),   (49)

and moreover let α = Σ_{k∈Z} ak 1_{[k−1/2,k+1/2[}.
Let ξ ∈ [−π, π]; then from (41) we have

α̂(ξvΔt) = sinc(ξ/(2π)) e^{−iξ/2} Σ_{k∈Z} ak e^{−ikξ}   (from (49), vΔt = 1)   (50)
= sinc(ξ/(2π)) e^{−iξ/2} Σ_{k∈Z} ( (1/(2π)) ∫_{−π}^{π} ( β̂(vs) / ( Δt sinc(s/(2π)) e^{−is/2} ) ) e^{iks} ds ) e^{−ikξ}
= sinc(ξ/(2π)) e^{−iξ/2} ∫_{−π}^{π} ( β̂(vs) / ( Δt sinc(s/(2π)) e^{−is/2} ) ) ( (1/(2π)) Σ_{k∈Z} e^{ik(s−ξ)} ) ds
= sinc(ξ/(2π)) e^{−iξ/2} ∫_{−π}^{π} ( β̂(vs) / ( Δt sinc(s/(2π)) e^{−is/2} ) ) δ₀(s − ξ) ds = β̂(vξ)/Δt.   (51)
Such an α(t), being piecewise constant by construction, can be implemented with
a numerical flutter shutter. Then (50–51) implies from (30, 43) that the
analog flutter shutter using β is equivalent to the numerical flutter shutter
using the constructed α. In other words, for any continuous flutter shutter
function there exists an implementable equivalent numerical flutter shutter function.
Notice that since we chose Δt = 1/|v| in (49), the requirement |v|Δt ≤ 2 of (41)
is met. No relation between the spatial sampling of u and the
temporal sampling of α can be stated except |v|Δt ≤ 2. Thus, in the case of
an implementation using the lines of a time delay integration device where
Δt is imposed, the drift between any line and the next one must be smaller
than two pixels.
In Thm. 4.9 we compared the analog and numerical flutter shutters for
positive piecewise constant functions. To conclude, it remains to
compare them when α(t) is an analog flutter shutter function. Being an analog
flutter shutter function, α(t) is not piecewise constant in general. Hence we
shall compare both strategies using the framework of Thm. 5.1.
Theorem 5.2. (Comparative efficiency of the numerical and analog
flutter shutters)
Let β be an analog flutter shutter function and α its equivalent numerical
flutter shutter function. Then the spectral SNR of the analog flutter shutter
using β,

SNR^{spectral}(F(uest,ana))(ξ) = 1_{[−π,π]}(ξ) |û(ξ)| |β̂(ξv)| / sqrt( ||u||_{L¹} ||β||_{L¹} ),

is smaller than or equal to the spectral SNR of the numerical flutter shutter using
α,

SNR^{spectral}(F(uest,num))(ξ) = 1_{[−π,π]}(ξ) √Δt |û(ξ)| |α̂(ξvΔt)| / ( sqrt(||u||_{L¹}) ||α||_{L²} ).
Proof. From Thm. 5.1, let

ak = (1/Δt) ∫_{−π}^{π} ( β̂(vs) / ( sinc(svΔt/(2π)) e^{−is/2} ) ) e^{iks} ds/(2π)   and   α = Σ_{k∈Z} ak 1_{[k−1/2,k+1/2[}.

Thus, by construction, the expected value E(ôbs_ana(ξ)) of the analog flutter shutter
using β is equal to the expected value E(ôbs_num(ξ)) of its equivalent numerical flutter shutter, namely the numerical flutter shutter using α. This
implies that the filters used to deconvolve and get uest are equal in both cases, and so
are E(ûest,ana(ξ)) and E(ûest,num(ξ)). Thus, to estimate the SNR, it only
remains to compare the variances, namely (45)

var(ûest,ana(ξ)) = ||β||_{L¹} ||u||_{L¹} 1_{[−π,π]}(ξ) / |β̂|²(ξv)

and (39)

var(ûest,num(ξ)) = ||α||²_{L²} ||u||_{L¹} 1_{[−π,π]}(ξ) / ( Δt |α̂(ξvΔt)|² ).   (52)

By construction of α we have (50, 51)

Δt |α̂(ξvΔt)|² = |β̂(ξv)|² / Δt,

which, with (52), yields

var(ûest,num(ξ)) = Δt ||α||²_{L²} ||u||_{L¹} 1_{[−π,π]}(ξ) / |β̂(ξv)|².
We shall now compare ||β||_{L¹} and Δt ||α||²_{L²}; thanks to the construction
of α we have Δt = 1/v. Then (49) entails

Δt ||α||²_{L²} = (1/(2πv)) ||α̂||²_{L²}
= (1/(2πv)) ∫ |α̂(ξ)|² dξ = (1/(2πv)) ∫ |α̂(ξvΔt)|² dξ   (by (49))
= (1/(2πv)) ∫ |β̂(ξv)/Δt|² dξ   (by construction of α (50–51))
= (v²/(2πv)) ∫ |β̂(ξv)|² dξ   (by (49) again)
= (1/(2π)) ∫ |β̂(ξ)|² dξ = ||β||²_{L²}.

It only remains to compare ||β||_{L¹} and ||β||²_{L²}. β being an analog
flutter shutter function we have β(t) ∈ [0, 1], implying β(t) = |β(t)| ≥ |β(t)|²,
and thus ||β||_{L¹} ≥ ||β||²_{L²}. Therefore

var(ûest,num(ξ)) ≤ var(ûest,ana(ξ)),

which concludes the proof (8).
6 The flutter shutter paradox theorem

Theorem 6.1. (The ideal flutter shutter function)
Consider a landscape u(x − vt) moving at velocity v. Then the ideal continuous gain control function is equal to α∗(t) = sinc(tvΔt).

Proof. Among all gain control functions α(t), one giving the best SNR^{spectral−averaged}
(9) is given by minimizing the averaged variance of ûest (39),

F(α) = ( ||α||²_{L²}/(2πΔt) ) ∫_{−π}^{π} dξ/|α̂(ξvΔt)|²   (dropping the irrelevant constant in u)
= ( ||α||²_{L²}/Δt ) (1/(2πvΔt)) ∫_{−πvΔt}^{πvΔt} dξ/|α̂(ξ)|²
≥ ( ||α||²_{L²}/Δt ) · 1 / ( (1/(2πvΔt)) ∫_{−πvΔt}^{πvΔt} |α̂(ξ)|² dξ ).

The crucial point here is the use of the Jensen inequality. Since x ↦ 1/x is
strictly convex on (0, ∞), equality occurs when |α̂(ξ)|² = 1_{[−πvΔt,πvΔt]}(ξ),
up to a meaningless constant renormalization for a numerical flutter shutter implementation (see 4.1.1). This leads to α∗(t) = sinc(tvΔt) (up to a
normalization constant).
Being non positive, this ideal gain control function is not implementable
using an analog flutter shutter, and being non piecewise constant it is not
directly implementable using a numerical flutter shutter strategy. However,
a piecewise constant approximation can be used in a numerical flutter shutter strategy via Thm. 5.1.
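The Jensen lower bound F(α) ≥ v can be probed numerically. In the sketch below (ours; the codes, the grid size and the choice v = Δt = 1 are illustrative) we evaluate F(α) by a midpoint rule, using formula (41) for α̂:

```python
import cmath
import math

def alpha_hat(code, xi):
    # Fourier transform (41) of the piecewise constant flutter function
    s = sum(a * cmath.exp(-1j * k * xi) for k, a in enumerate(code))
    sinc = 1.0 if xi == 0.0 else math.sin(xi / 2.0) / (xi / 2.0)
    return sinc * cmath.exp(-1j * xi / 2.0) * s

def averaged_variance_factor(code, v=1.0, dt=1.0, n=4000):
    # F(alpha) = ||alpha||_{L2}^2 / (2 pi dt) * int_{-pi}^{pi} dxi / |alpha_hat(xi v dt)|^2
    energy = sum(a * a for a in code) * dt          # ||alpha||_{L2}^2
    h = 2.0 * math.pi / n
    integral = sum(h / abs(alpha_hat(code, (-math.pi + (i + 0.5) * h) * v * dt)) ** 2
                   for i in range(n))
    return energy / (2.0 * math.pi * dt) * integral

snapshot = [1.0]
two_tap = [1.0, 0.5]    # illustrative invertible code
# Thm 6.1: F(alpha) >= v for every code; equality needs the (infinite) sinc.
```

For the standard snapshot one finds F ≈ 1.39 > 1 = v, so even the snapshot stays strictly above the sinc bound.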
Corollary 6.2. (Upper bound on the SNR)
Consider a landscape u(x − vt) moving at velocity v. The ideal numerical
flutter shutter strategy using α∗(t) = sinc(tvΔt) has a spectral SNR (8)
equal to

SNR^{spectral}(ξ) = 1_{[−π,π]}(ξ) |û(ξ)| / ( √v sqrt(||u||_{L¹}) ).

Moreover, the averaged spectral SNR (9) is equal to

SNR^{spectral−averaged} = (1/(2π√v)) ∫_{−π}^{π} |û(ξ)| dξ / sqrt(||u||_{L¹}).
Proof. From Thm. 6.1 and using the Parseval formula we deduce that
||α∗||²_{L²} = vΔt. Then from Thm. 4.2 we deduce that

SNR^{spectral}(ξ) = 1_{[−π,π]}(ξ) |û(ξ)| / ( √v sqrt(||u||_{L¹}) ).

Moreover, from (39) we have

∫_{−π}^{π} var(F(uest,num)(ξ)) dξ = ∫_{−π}^{π} ||α∗||²_{L²} ||u||_{L¹} dξ / ( Δt |α̂∗(ξvΔt)|² )
= ∫_{−πvΔt}^{πvΔt} vΔt ||u||_{L¹} dξ / ( vΔt² |α̂∗(ξ)|² ) = ∫_{−πvΔt}^{πvΔt} vΔt ||u||_{L¹} dξ / ( vΔt² ) = 2πv ||u||_{L¹}.

Then (1/(2π)) ∫_{−π}^{π} var(F(uest,num)(ξ)) dξ = v ||u||_{L¹} and

SNR^{spectral−averaged} = (1/(2π√v)) ∫_{−π}^{π} |û(ξ)| dξ / sqrt(||u||_{L¹}).
Corollary 6.3. (The flutter shutter paradox)
The use of a flutter shutter strategy increasing the total exposure time does
not permit to achieve an arbitrary SNR.
Consider a landscape u(x − vt) moving at velocity v. Then the SNR^{spectral−averaged}
of the flutter shutter strategy is bounded independently of the total exposure
time NΔt. In other words, increasing the exposure time has a limited effect
on the SNR.

Proof. This is a direct consequence of Cor. 6.2: the measure of the
support of α∗, which is the exposure time, is infinite, but the SNR of the
restored image is nevertheless finite. Notice that both the analog and numerical flutter
shutters inherit this behavior.
Corollary 6.4. (Efficiency of the numerical flutter shutter)
Consider a landscape u(x − vt) moving at velocity v. Then the ratio R of
SNR^{spectral−averaged} between the ideal flutter shutter and the best snapshot
with exposure time equal to Δt∗ is equal to

R = SNR^{spectral−averaged}(flutter, ideal) / SNR^{spectral−averaged}(snapshot)
≈ sqrt( (1/(1.0909 × 2π)) ∫_{−π}^{π} dξ / |sin(ξvΔt∗/2)/(ξvΔt∗/2)|² ) ≈ 1.1715.

Proof. This is a direct consequence of Thm. 3.4 and Cor. 6.2.

These negative results constitute the flutter shutter paradox: none of these
flutter shutter strategies is truly efficient.
7 Experiments
The purpose of this section is to compare experimentally different acquisition strategies: snapshot, flutter shutter using Raskar’s code [1], a code
drawn uniformly at random over [−1, 1], and the sinc code. All strategies are compared
using the RMSE, the contrast invariant RMSE (RMSE^{CI}) and the visual
image quality. A benchmark of acquisition strategies is given (Fig. 12) to
illustrate the flutter shutter paradox (Cor. 6.3).
7.1 Contrast invariant image quality measure

Flutter function α(t) and SNR of the different strategies (Figure 2):
  Accumulation:                  α(t) = 1_{[0,T[}(t), v = 0 — SNR (spatial): sqrt(u(x)T)
  Best snapshot:                 α(t) = 1_{[0,1.0909/v[}(t) — SNR (spectral): 1_{[−π,π]}(ξ) |û(ξ)| |sin(1.0909ξ/2)/(1.0909ξ/2)| sqrt(1.0909/v) / sqrt(||u||_{L¹})
  Raskar:                        α(t) = Σ_{k=0}^{N−1} αk 1_{[k,k+1[}(t), αk ∈ {0, 1} — SNR (spectral): 1_{[−π,π]}(ξ) √Δt |α̂(ξvΔt)| |û(ξ)| / sqrt(||α||_{L¹} ||u||_{L¹})
  Motion invariant photography:  α(t) = (1/√t) 1_{[0,T[}(t) — SNR (spectral): 1_{[−π,π]}(ξ) |α̂(ξv)| |û(ξ)| / sqrt(||α||_{L¹} ||u||_{L¹})
  Analog flutter:                α(t) ≥ 0 — SNR (spectral): 1_{[−π,π]}(ξ) |α̂(ξv)| |û(ξ)| / sqrt(||α||_{L¹} ||u||_{L¹})
  Numerical flutter:             α(t) = Σ_{k=0}^{N−1} αk 1_{[k,k+1[}(t), αk ∈ R — SNR (spectral): 1_{[−π,π]}(ξ) √Δt |α̂(ξvΔt)| |û(ξ)| / ( ||α||_{L²} sqrt(||u||_{L¹}) )
  Ideal flutter shutter:         α(t) = sinc(tvΔt) — SNR (spectral): 1_{[−π,π]}(ξ) |û(ξ)| / ( √v sqrt(||u||_{L¹}) )
Figure 2: This summarizes the results on the different flutter shutter strategies and their SNR. The best flutter shutter is a sinc and has a finite SNR
but uses an infinite exposure time. This is the flutter shutter paradox.

In many cases reconstruction errors inherent to a method can be quantified using the Root-Mean-Squared-Error (RMSE) (xxx). However, when a
small contrast change occurs between the original and the processed image,
the RMSE can become substantial while the images remain perceptually
indistinguishable. For example, take an image I(m, n) defined over a subdomain of Z², and another one J = I + 10. Then RMSE(I, J) = 10 is
large but does not reflect the quality of the reconstruction J. Comparatively, a convolution with, for example, a Gaussian can give a smaller RMSE
while causing considerable damage. This bias is avoided by normalizing the
images before computing the RMSE. The principle of the normalization is
that two images related to each other by a contrast change are perceptually
equivalent; their distance should reflect this fact and be zero. The midway
equalization [11] is best suited for that purpose, because it equalizes the
image histograms to a “midway” histogram depending on both images. By
the midway operation both images undergo a minimal distortion and adopt
exactly the same histogram. Thus we shall define the contrast invariant
RMSE (RMSE^{CI}) by

RMSE^{CI} = RMSE(I_{mid(I,J)}, J_{mid(I,J)}),

where mid(I, J) (= mid(J, I)) is the midway histogram between I and J,
I_{mid(I,J)} is the image I specified on the mid(I, J) histogram (i.e., having a
histogram equal to mid(I, J)), and J_{mid(I,J)} is J specified on mid(I, J). In
all experiments, results will be compared by their visual quality
and quantitatively using the RMSE (xxx).
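A minimal version of this contrast invariant distance can be sketched as follows (our Python illustration for two equal-size gray images, using the sort-based midway specification; tie handling is deliberately simplified):

```python
import math

def rmse(a, b):
    # plain root-mean-squared error between two flat image lists
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def midway_pair(img_a, img_b):
    # Sort-based midway specification for two equal-size images (flat lists):
    # the k-th smallest value of each image is mapped to the average of the
    # two k-th smallest values, so both outputs share the same histogram.
    order_a = sorted(range(len(img_a)), key=img_a.__getitem__)
    order_b = sorted(range(len(img_b)), key=img_b.__getitem__)
    mid = [(img_a[i] + img_b[j]) / 2.0 for i, j in zip(order_a, order_b)]
    out_a = [0.0] * len(img_a)
    out_b = [0.0] * len(img_b)
    for rank, (i, j) in enumerate(zip(order_a, order_b)):
        out_a[i] = mid[rank]
        out_b[j] = mid[rank]
    return out_a, out_b

def rmse_ci(img_a, img_b):
    # Contrast invariant RMSE: RMSE after midway equalization of both images.
    ma, mb = midway_pair(img_a, img_b)
    return rmse(ma, mb)

I = [10.0, 50.0, 200.0, 30.0]
J = [x + 10.0 for x in I]       # pure contrast (offset) change
```

On the example of the text, RMSE(I, J) = 10 while RMSE^{CI}(I, J) = 0, as desired.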
7.2 Numerical flutter shutter simulations
7.2.1 Numerical implementation
From (33, 31) the acquired image can be simulated using the following
scheme (without loss of generality, for simplicity we choose Δt = 1 and
v = 1 in the experiments):
1. Simulation of the observed image
(a) Take a sharp image.
(b) Convolve with the gain control function α (26). This image I1
is a simulation of the expected value of the acquired one. Notice
this can be done in the Fourier domain using (41).
(c) Convolve with the squared gain control function α² (31). As
previously, this can be done using (41). This image (map) I2 is a
simulation of the noise variance.
(d) The deterministic part of the observed image is (1/v)(u ∗ α(·/v))(x) − (1/v)(u ∗ α²(·/v))(x).
The stochastic part is a Poisson random variable with intensity
λ(x) = ( (1/v) u ∗ α²(·/v) )(x).
(e) The simulated observed value at pixel x is obtained by simulating
the random variable obs(x) ∼ (1/v)(u ∗ α(·/v))(x) − (1/v)(u ∗ α²(·/v))(x) +
Poisson( ( (1/v) u ∗ α²(·/v) )(x) ) with the desired SNR.
(f) Crop the result, omitting the last N columns, to avoid unrealistic
periodization effects.
2. Restoration
(a) Use a classic mirror extension in the direction of the blur.
(b) Divide the Fourier transform of I by the Fourier transform of the
gain control function α (26).
(c) Crop the result to avoid border effects.
Notice that the sharp snapshot can be simulated using a “degenerate” gain
control function defined from (26) with α0 = 1 and αk = 0 ∀k ≥ 1.
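The scheme above can be sketched in one dimension. The Python toy below (our illustration; it uses the discrete approximation criticized in the remark after (41), i.e. it neglects the intra-interval blur, and a small circular landscape instead of a real image with mirror extension) acquires obs(n) = Σ_k αk u(n − k) in the noiseless limit and restores u by dividing the DFTs:

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * m * k / n) for k in range(n))
            for m in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[m] * cmath.exp(2j * math.pi * m * k / n) for m in range(n)).real / n
            for k in range(n)]

def acquire(u, code):
    # Expected observed samples: circular discrete convolution of the
    # landscape with the code (v = 1, dt = 1, intra-interval blur neglected).
    n = len(u)
    return [sum(a * u[(m - k) % n] for k, a in enumerate(code)) for m in range(n)]

def restore(obs, code, n):
    # Divide DFTs; the code must be invertible (no vanishing DFT coefficient).
    code_padded = code + [0.0] * (n - len(code))
    O, C = dft(obs), dft(code_padded)
    return idft([o / c for o, c in zip(O, C)])

u = [100.0 + 50.0 * math.sin(2.0 * math.pi * k / 16.0) for k in range(32)]
code = [1.0, 0.0, 0.0, 0.5]      # illustrative code, |DFT| >= 0.5 everywhere
u_rec = restore(acquire(u, code), code, len(u))
```

In the noiseless limit the landscape is recovered exactly; inserting the Poisson part of steps 1(d)–(e) before restoration reproduces the noisy experiments.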
7.3 Analog flutter shutter simulations
7.3.1 Numerical implementation
From (43, 44) the acquired image can be simulated using the following scheme
(without loss of generality and for simplicity we choose again Δt = 1 and
v = 1):
1. Simulation of the observed image
(a) Take a sharp image.
(b) Simulate the observed value at pixel x by simulating a Poisson
random variable with intensity λ(x) = ( (1/v) u ∗ α(·/v) )(x) with
the desired SNR.
(c) Crop the result, omitting the last N columns, to avoid unrealistic
periodization effects.
2. Restoration
(a) Use a classic mirror extension in the direction of the blur.
(b) Divide the Fourier transform of I by the Fourier transform of the
gain control function (26).
(c) Crop the result to avoid border effects.
The snapshot is simulated using the scheme described in (7.2.1).
7.4 Examples
The test image is the “boat” image (Fig. 3). Four acquisition strategies are
simulated and evaluated: snapshot, flutter shutter using Raskar’s code [1],
a code drawn uniformly at random over [−1, 1], and the sinc code. For the four strategies
the observed image is obtained by simulating the underlying Poisson random
variable (Figs. 4, 6, 8, 10), for a normalized velocity v = 1 and an SNR of 100
at gray level 100. In other words, each elementary image of the numerical
flutter shutter acquisition process (and therefore each elementary image of
an analog flutter shutter with gain 1) has an SNR equal to 100 at gray level
100. Details of the flutter shutter codes, including the modulus of their Fourier
transforms, are given in Fig. 13. The RMSE is calculated on truncated images
(omitting the last N columns) to avoid border effects (Figs. 5, 7, 9, 11), which
are negligible for big enough images (depending on the ratio of the blur
support b over the image support). A benchmark of acquisition strategies is
given (Fig. 12) to illustrate the flutter shutter paradox (Cor. 6.3).
7.4.1 Snapshot
7.4.2 Using Raskar’s code
Raskar’s code given in [1] is a binary sequence of length 52, namely (1 0
1 0 0 0 0 1 1 1 0 0 0 0 0 1 0 1 0 0 0 0 1 1 0 0 1 1 1 1 0 1 1 1 0 1 0 1 1 1 0
0 1 0 0 1 1 0 0 1 1 1), found by extensive binary search. Since it is binary,
the numerical and analog flutter shutter methods are here equivalent. The
acquired image (Fig. 6) presents the stroboscope-like pattern resulting from
both the motion and the code. Notice that the restored image (Fig. 7) has
a bigger RMSE (2.30) than the restored image (Fig. 5) using a snapshot
(1.38). This illustrates the flutter shutter paradox (Cor. 6.3).
Figure 3: Test image.
Figure 4: Acquired image (snapshot) and the (log) modulus of its Fourier transform.
Figure 5: Restored image (snapshot) (RMSE = 1.38), and the residual
noise (contrast adjusted to [0, 255]). Notice that there is almost no boundary
effect, the support of the code being small.

7.4.3 Using a random code
The following code of length 52 was generated randomly (we picked the
first one thus generated) from a uniform distribution over [−1, 1]. The
obtained code was (0.5491, -0.4903, -0.6919, -0.5125, -0.9079, 0.3560,
-0.6357, 0.0275, 0.9568, 0.8670, 0.4087, 0.4097, 0.8659, 0.3344, 0.9198,
-0.8389, 0.7476, 0.9808, 0.1366, -0.7247, 0.9320, 0.8069, -0.5848, 0.9493,
-0.1682, 0.9533, -0.2173, -0.5834, 0.7483, 0.4785, 0.2266, 0.9764, -0.4708,
0.6723, 0.8312, 0.0084, 0.6892, -0.5245, -0.8651, 0.3417, 0.5183, 0.1317,
-0.1301, 0.3432, -0.3262, 0.4367, -0.9771, -0.2120, 0.1160, 0.6648, 0.9446,
0.0590).
Notice that this sequence is not positive and thus cannot be implemented with
an analog flutter shutter method. The acquired image (Fig. 8) seems a bit
more degraded than with Raskar’s code. Nevertheless, the RMSE (2.09) of
the restored image (Fig. 9) is slightly better than the RMSE (2.30) of the
image restored using Raskar’s code (Fig. 7).
7.4.4 Using the sinc code
The sinc code is (0.0002, -0.0002, 0.0002, -0.0003, 0.0003, -0.0003, 0.0003, -0.0004, 0.0004, -0.0005, 0.0005, -0.0006, 0.0007, -0.0008, 0.0009, -0.0011, 0.0014, -0.0017, 0.0021, -0.0027, 0.0037, -0.0053, 0.0082, -0.0141, 0.0296, -0.0917, 1.0000, -0.0917, 0.0296, -0.0141, 0.0082, -0.0053, 0.0037, -0.0027, 0.0021, -0.0017, 0.0014, -0.0011, 0.0009, -0.0008, 0.0007, -0.0006, 0.0005, -0.0005, 0.0004, -0.0004, 0.0003, -0.0003, 0.0003, -0.0003, 0.0002, -0.0002).
Figure 6: Acquired image with Raskar’s code and the (log) modulus of its Fourier transform. Notice the vertical pattern, a trace of the flutter shutter code.
This is the best L2 approximation of the ideal gain function, namely, given v, the sinc with cutoff frequency πv (given here for |v| = 1). The acquired image (Fig. 10) presents a stroboscope-like pattern resulting from both the motion and the code. Notice that the restored image (Fig. 11) has an RMSE (1.42) slightly better than that of the restored image using a snapshot (Fig. 5, RMSE 1.44).
8 Why is the flutter shutter still interesting?
When the motion is perfectly known, the best flutter shutter function is a sinc, as proved in Thm. 6.1. However, as stated in Cor. 6.4, the use of this sinc function is not a breakthrough compared to the best snapshot, since the SNR is increased only by a small factor. Nevertheless, if the motion is not well known, then without some flutter shutter method the only way to ensure a sharp image for a broad range of velocities v is to use a very small exposure time, leading to a poor SNR. It is then possible to achieve better results on average, while still always ensuring a sharp image. To optimize the SNR gain, however, we must know the probability distribution of the
velocities. Fixing the norm of the code and maximizing its DFT, as done in [16, 3, 2], does not lead to optimal codes in a strong sense.

Figure 7: Restored image using Raskar’s code (RMSE = 2.30, RMSE_CI = 2.32), and the residual noise (contrast adjusted to [0, 255]). Notice the boundary effects, which depend on the blur support. In practice, if the image is big enough, these should be negligible.
8.1 Computation of the optimal α(t) function
Let ρ(v) be a probability density for v ∈ R (which is equivalent to a probability density over the blur length), such that ρ(v) = 0 for all |v| > vmax. Notice that without the knowledge of the maximum speed vmax it is virtually impossible to guarantee |v|∆t < 2 (see Section 3.2.1) for every feasible v (in the support of ρ). Alternatively, ρ can be estimated during a calibration phase, by taking a set of images without flutter shutter and using them to compute the actual blur density, for instance by tracking the zeros in the DFT of the observed images.
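This calibration idea can be sketched in one dimension: a uniform motion blur of length L has a Fourier transform vanishing at multiples of N/L, so the first zero of the observed spectrum reveals L. A toy, assumption-laden illustration (periodic signal, box kernel, no noise; all names are ours):

```python
import numpy as np

N, L = 512, 8                                      # signal length, true blur length (pixels)
u = np.ones(N); u[N // 2] += 5.0                   # toy landscape whose spectrum never vanishes
h = np.zeros(N); h[:L] = 1.0 / L                   # box kernel modeling a uniform motion blur
spectrum = np.abs(np.fft.fft(u) * np.fft.fft(h))   # periodic blur, computed in Fourier

# Track the first zero of the spectrum: for a box kernel it sits at k0 = N / L.
k0 = next(k for k in range(1, N // 2) if spectrum[k] < 1e-8 * spectrum.max())
L_est = N // k0
```

In practice the observed spectrum is noisy and the landscape spectrum has small values of its own, so robust zero tracking (e.g. averaging over several calibration images, as suggested above) is needed.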
We want to find α, the best flutter function (the best code), designed to obtain the minimal noise for the restored image. The noise level is measured by the variance of the restored image, leading to the best SNR_spectral-average, thus from (39), as
$$\frac{\|u\|_{L^1}\|\alpha\|_2^2}{\Delta t}\int \frac{\mathbb{1}_{[-\pi,\pi]}(\xi)}{|\hat\alpha(\xi v \Delta t)|^2}\,d\xi = \frac{\|u\|_{L^1}\|\alpha\|_2^2}{\Delta t}\int \frac{\mathbb{1}_{[-\pi v\Delta t,\pi v\Delta t]}(\xi)}{v\Delta t\,|\hat\alpha|^2(\xi)}\,d\xi.$$
Figure 8: Acquired image with a random code and the (log) modulus of its
Fourier transform. Notice the vertical pattern trace of the numerical flutter
shutter code.
Thus, dropping the multiplicative constants, we posit the following energy to be minimized for a fixed v:
$$E_v(\hat\alpha) = \|\hat\alpha\|_2^2 \int \frac{\mathbb{1}_{[-\pi v\Delta t,\pi v\Delta t]}(\xi)}{|v|\,|\hat\alpha|^2(\xi)}\,d\xi.$$
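This energy can be evaluated numerically for a candidate code; below is a sketch (helper names are ours). The Fourier transform of the piecewise-constant α is computed in closed form, and ‖α̂‖₂² = 2π‖α‖₂² by Parseval with this convention:

```python
import numpy as np

def alpha_hat(code, xi):
    """Fourier transform of alpha(t) = sum_k code[k] 1_[k,k+1)(t)."""
    code = np.asarray(code, dtype=float)
    k = np.arange(len(code))[:, None]
    box = np.where(xi == 0, 1.0, 2 * np.sin(xi / 2) / np.where(xi == 0, 1.0, xi))
    return (code[:, None] * np.exp(-1j * xi[None, :] * (k + 0.5))).sum(axis=0) * box

def E_v(code, v, dt=1.0, n=4001):
    """E_v(alpha_hat) = ||alpha_hat||_2^2 * int 1/(|v| |alpha_hat|^2) over [-pi v dt, pi v dt]."""
    xi = np.linspace(-np.pi * abs(v) * dt, np.pi * abs(v) * dt, n)
    a2 = np.abs(alpha_hat(code, xi)) ** 2
    integral = np.sum(1.0 / (abs(v) * a2)) * (xi[1] - xi[0])   # rectangle rule
    return 2 * np.pi * np.sum(np.asarray(code, float) ** 2) * integral
```

For v = 1 a snapshot-like code (1, 0, 0, 0) yields a far smaller E_v than the constant code (1, 1, 1, 1), whose Fourier transform nearly vanishes inside [−π, π].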
As stated previously, if v is known the optimization of E_v is easy. Notice that for ρ(v) = δ_{v0} the best code is not a snapshot: the flutter shutter compensates to avoid de-whitening, which yields a sinc-like code concentrating the support (over [−πv∆t, πv∆t]). The real challenge is to find the best αk coefficients when the motion v is known only up to a probability density. Remark that this is the case in many situations, for example:
1. when the scene is not planar, so the apparent speed of objects changes according to their distance;
2. for camera shakes (with non-constant and/or unknown amplitudes);
3. in Raskar’s examples: the background is static while the observed vehicle moves.
Figure 9: Restored image from the random code (RMSE = 2.09, RMSE_CI = 2.09), and the residual noise (contrast adjusted to [0, 255]). Notice the boundary effects, which depend on the blur support. In practice, if the image is large enough, these should be negligible.
Given ρ(v), the α function leading to the best possible SNR on average is obtained by minimizing
$$E(\hat\alpha) = \int_{\mathbb{R}} E_v(\hat\alpha)\,\rho(v)\,dv = \|\hat\alpha\|_2^2 \int_{\mathbb{R}} \left(\int_{-\infty}^{\infty} \frac{\mathbb{1}_{[-\pi v\Delta t,\pi v\Delta t]}(\xi)\,d\xi}{|v|\,|\hat\alpha|^2(\xi)}\right)\rho(v)\,dv$$
$$= \|\hat\alpha\|_2^2 \int_{\mathbb{R}}\int_{-\infty}^{\infty} \frac{\mathbb{1}_{[-\pi v\Delta t,\pi v\Delta t]}(\xi)}{|v|\,|\hat\alpha|^2(\xi)}\,\rho(v)\,dv\,d\xi \quad \text{(Fubini)}$$
$$= \|\hat\alpha\|_2^2 \int_{-\infty}^{\infty} \frac{1}{|\hat\alpha|^2(\xi)}\left(\int_{\mathbb{R}} \frac{\rho(v)\,\mathbb{1}_{[-v\Delta t\pi,\, v\Delta t\pi]}(\xi)}{|v|}\,dv\right)d\xi. \tag{53}$$
Then from (53), defining
$$w(\xi) = \int_{\mathbb{R}} \frac{\rho(v)\,\mathbb{1}_{[-v\Delta t\pi,\, v\Delta t\pi]}(\xi)}{|v|}\,dv,$$
the minimization reduces to
$$E(\hat\alpha) = \|\hat\alpha\|_2^2 \int_{-\infty}^{\infty} \frac{w(\xi)}{|\hat\alpha|^2(\xi)}\,d\xi. \tag{54}$$
Figure 10: Acquired image with the sinc code and the (log) modulus of its Fourier transform. For a band-limited image, the convolution with a narrow enough sinc function is equivalent to a convolution with a Dirac. This explains why almost no blur can be seen.
Notice that w(ξ) = 0 for |ξ| ≥ vmax∆tπ, and that
$$\int_{\mathbb{R}} |w(\xi)|\,d\xi = \int_{\mathbb{R}} \frac{\rho(v)}{v}\,2v\Delta t\pi\,dv = 2\pi\Delta t.$$
Thus w is in L¹(R) and is compactly supported, so that $\sqrt[4]{w} \in L^1 \cap L^2(\mathbb{R})$.
Theorem 8.1. The optimal functions α* minimizing E(α̂) satisfy $|\hat\alpha^*| = \sqrt[4]{w}$ on the support of w. If we extend α̂* by zero outside this support, we obtain bounded continuous codes belonging to L²(R).
Proof. The easy-to-compute unknown is f(ξ) := |α̂|²(ξ). All integrals are on R and we shall omit the ξ variable in the integrals. We have
$$E(f) = \left(\int f\right)\int \frac{w(\xi)}{f(\xi)}\,d\xi.$$
Thus its weak differential in the direction of every bounded and compactly supported perturbation g satisfies
$$E'(f)(g) = \left(\int g\right)\int \frac{w}{f} - \left(\int f\right)\int \frac{w}{f^2}\,g.$$
Figure 11: Restored image using the sinc code (RMSE = 1.42, RMSE_CI = 2.32), and the residual noise (contrast adjusted to [0, 255]). Despite the deconvolution, the noise remains almost white, since the Fourier transform of the sinc code is nearly constant.
Acquisition strategy    RMSE    RMSE_CI
Snapshot                1.44    1.44
Raskar’s code           2.550   2.76
Random code             2.16    2.49
Sinc code               1.42    1.44

Figure 12: Quantitative (RMSE) comparison of different strategies for fixed velocity v = 1 (so that the blur support is 52 pixels, except for the snapshot). The best RMSE is obtained using the sinc code, which only slightly beats the snapshot, as predicted by the flutter shutter paradox (Thm. 6.3).
A minimal f therefore satisfies E′(f)(g) = 0 for every bounded compactly supported g, which yields
$$\int g\left(\int \frac{w}{f} - \left(\int f\right)\frac{w}{f^2}\right) = 0,$$
and therefore $f = C\sqrt{w}$. Being nonnegative, this solution is admissible, and we deduce $|\hat\alpha(\xi)| = \sqrt[4]{w}$, as announced. It remains to show that this is our solution. To this aim, we notice that the minimization of E(f) is equivalent to minimizing $F(f) := \int \frac{w(\xi)}{f(\xi)}$ under the constraints $\int f = 1$, f ≥ 0, f = 0 outside the support of w. But we can replace these constraints by $\int f \le 1$, f ≥ 0, f = 0 outside the support of w, since if $\int f < 1$, by picking $\tilde f := \frac{f}{\int f}$ we get back $\int \tilde f = 1$ and obviously $F(\tilde f) < F(f)$. Thus our problem is equivalent to minimizing the strictly convex energy F over the convex set $\int f \le 1$, f ≥ 0, f = 0 outside the support of w. The solution of this problem, if it exists, is therefore unique. On the other hand the function $f_0 := \frac{\sqrt{w}}{\int \sqrt{w}}$ is a critical point for this problem, and therefore is this unique solution.
Nevertheless, this calculation only fixes the modulus of α̂. Since w is
symmetric with respect to the origin, so is |α̂| and there are therefore many
optimal real codes α.
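A small numerical sketch of the weight (54) and of Theorem 8.1, assuming a uniform velocity density (the grid sizes and the clamping of |v| near zero are arbitrary choices of ours):

```python
import numpy as np

dt, vmax = 1.0, 8.0
xi = np.linspace(-np.pi * vmax * dt, np.pi * vmax * dt, 1001)   # frequency grid
v = np.linspace(-vmax, vmax, 4001)                              # velocity grid
dv, dxi = v[1] - v[0], xi[1] - xi[0]
rho = np.full_like(v, 1.0 / (2 * vmax))                         # uniform density on [-vmax, vmax]

# w(xi) = int rho(v) 1_{[-pi v dt, pi v dt]}(xi) / |v| dv       (the weight of (54))
indicator = np.abs(xi)[None, :] <= np.pi * np.abs(v)[:, None] * dt
w = ((rho[:, None] * indicator) / np.maximum(np.abs(v)[:, None], dv)).sum(axis=0) * dv

alpha_hat_opt = w ** 0.25        # Theorem 8.1: |alpha_hat*| = w^(1/4) on the support of w
print(w.sum() * dxi)             # ~ 2*pi*dt, the mass computed in the text
```

The printed mass matches the computation above, and the resulting |α̂*| is nonnegative and peaked at the zero frequency, consistent with the low-pass codes shown later.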
The only remaining task is, given N, to compute the coefficients αk, k ∈ {0, ..., N − 1}, approximating the above solution.
Lemma 8.2. The function $\alpha = \sum_{k=0}^{N-1} \alpha_k \mathbb{1}_{[k,k+1[}$ is, using (26, 41), the best L² approximation of α̂ by a piecewise constant function, when the ak are given (up to a normalization constant) by
$$a_k = \int \frac{\xi}{2\sin(\frac{\xi}{2})}\,\mathcal{F}(\alpha^*)(\xi)\,e^{i\xi\frac{2k+1}{2}}\,d\xi = \int \frac{\xi}{2\sin(\frac{\xi}{2})}\,\sqrt[4]{w(\xi)}\,e^{i\xi\frac{2k+1}{2}}\,d\xi.$$
Nevertheless, there is no guarantee that α̂(ξ) ≠ 0 for ξ ∈ [−π, π].
8.2 A “reverse engineering” of Raskar’s code
Given a code (α0, ..., αN−1) ∈ R^N we are able to estimate its underlying velocity probability distribution. The algorithm is the following:
1. compute α̂ using (26);
2. estimate the weight function w of (54) by w̃(ξ) = |α̂(ξ)|⁴;
3. then ρ(x) ≈ −(x/2) w̃′(π∆t x), up to a normalizing constant.
Indeed, from (54) we have, by differentiating,
$$w'(\xi) = \int_{\mathbb{R}} \frac{\rho(v)\left(\delta_{-\pi v\Delta t}(\xi) - \delta_{\pi v\Delta t}(\xi)\right)}{v}\,dv = -\frac{\pi\Delta t}{\xi}\,\rho\!\left(\frac{-\xi}{\pi\Delta t}\right) - \frac{\pi\Delta t}{\xi}\,\rho\!\left(\frac{\xi}{\pi\Delta t}\right).$$
Hence
$$w'(\pi\Delta t\, x) = \frac{-2}{x}\,\rho(x), \quad x \ge 0, \text{ assuming } \rho \text{ symmetric.}$$
Applying this scheme to Raskar’s code gives (using a normalized ∆t = 1) the probability density of Fig. 14. This distribution can be understood in two different ways. It could mean that there is a high probability that the scene is steady, and that otherwise uniformly distributed motions occur on a certain interval of velocities; but this is an unlikely model for a camera motion. Instead, it is a quite natural model given the kind of motions considered in the Raskar et al. codes. Indeed, the velocity histogram of Fig. 14 assumes a fixed background with a large area, plus some moving objects whose velocity probability distribution is close to uniform in a broad interval. All proposed codes [2, 26, 1] are actually optimal, contrary to several statements, but for implicit, and probably unlikely, camera or object motions.
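Steps 2-3 can be sanity-checked on a density whose weight function is known in closed form. For ρ uniform on [−vmax, vmax] and ∆t = 1, (54) gives w(ξ) = (1/vmax) ln(π vmax/|ξ|), and step 3 must then return a (nearly) constant density, up to normalization. A self-consistency sketch, under our reading of step 3 (grid choices are ours):

```python
import numpy as np

vmax, dt = 2.0, 1.0
xi = np.linspace(0.05, np.pi * vmax * dt - 0.05, 2000)   # positive frequencies, away from 0
w = (1.0 / vmax) * np.log(np.pi * vmax * dt / xi)        # closed-form weight of a uniform rho

x = xi / (np.pi * dt)                                    # velocities: xi = pi * dt * x
rho_est = -(x / 2) * np.gradient(w, xi)                  # step 3: rho(x) ∝ -(x/2) w'(pi dt x)
```

Up to the normalizing constant, the estimate is indeed (numerically) constant on (0, vmax), i.e. uniform, as expected.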
9 Experiments on optimized codes
The goal of this section is to explore numerically different natural models of motion, give the corresponding optimized codes, and compare their efficiency to the snapshot. The considered models of motion are centered Gaussian and uniform distributions. We begin (Figs. 16, 17 & 19) by showing how the codes change according to the probability densities they have been optimized for (Gaussian and uniform, for various standard deviations and ranges). Then we illustrate, for a fixed code, how the quality of the restored image evolves with various blur supports. A comparison is made with the Raskar code given in [2] and with a “brute-force” optimization by a random search among binary sequences.
We shall predict the impact of the flutter shutter on the noise by analyzing (39) for the numerical and (45) for the analog case. Thus the pessimistic case (guaranteeing an upper bound on the SNR) is given by the noise boost factor, defined from (Thm. 4.2) by
$$\frac{\|\alpha\|_{L^2}}{\min_{\xi\in[-\pi,\pi]}|\hat\alpha(\xi v)|}$$
for the numerical case, and from (4.4) by
$$\max_{\xi\in[-\pi,\pi]}\frac{\|\alpha\|_{L^1}}{|\hat\alpha(\xi v)|}$$
for the analog case.
The normalized noise boost factor reflects the fact that even for a small exposure time the resulting window-shaped (standard blur) kernel does not have a constant Fourier transform. Thus, to ease the comparison between the different methods, we shall normalize the noise boost factor by that of the corresponding snapshot (α0 = (1, 0, ..., 0) with the same N), leading to the following definitions:
$$\max_{\xi\in[-\pi,\pi]}\frac{\|\alpha\|_{L^2}\,|\hat\alpha_0(\xi v)|}{|\hat\alpha(\xi v)|}$$
for the numerical case, and
$$\max_{\xi\in[-\pi,\pi]}\frac{\|\alpha\|_{L^1}\,|\hat\alpha_0(\xi v)|}{|\hat\alpha(\xi v)|}$$
for the analog case.
In the following, worst-case scenarios will be predicted using the normalized version of the noise boost factor. Notice that a normalized noise boost factor smaller than one means that the noise is actually reduced (compared to the snapshot).
A summary of the impact on the noise is given both as a worst-case noise boost factor (Figs. 21, 22, 23, 24) and on simulations using the RMSE (Figs. 35 & 36) for Gaussian-distributed motion, in the case of both the numerical and the analog flutter shutter. The same is done in (Figs. 25, 26, 27, 28) and in (Figs. 37 & 38) for the RMSE in the case of uniformly distributed motion. Simulations of a flutter shutter acquisition device are shown (Figs. 29, 30, 31, 32, 33, 34).
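The normalized noise boost factor can be computed directly from a code; a sketch with our helper names (the snapshot reference α0 has the same length N as the code):

```python
import numpy as np

def alpha_hat(code, xi):
    """Fourier transform of alpha(t) = sum_k code[k] 1_[k,k+1)(t)."""
    code = np.asarray(code, dtype=float)
    k = np.arange(len(code))[:, None]
    box = np.where(xi == 0, 1.0, 2 * np.sin(xi / 2) / np.where(xi == 0, 1.0, xi))
    return (code[:, None] * np.exp(-1j * xi[None, :] * (k + 0.5))).sum(axis=0) * box

def normalized_noise_boost(code, v, n=2001):
    """Worst case over [-pi, pi] of ||alpha||_2 |alpha0_hat(xi v)| / |alpha_hat(xi v)|,
    alpha0 being the snapshot (1, 0, ..., 0) of the same length."""
    code = np.asarray(code, dtype=float)
    snapshot = np.zeros_like(code); snapshot[0] = 1.0
    xi = np.linspace(-np.pi, np.pi, n) * v
    ratio = np.abs(alpha_hat(snapshot, xi)) / np.abs(alpha_hat(code, xi))
    return np.linalg.norm(code) * ratio.max()
```

By construction the snapshot itself has a normalized boost factor of one; a constant code, whose Fourier transform nearly vanishes inside [−π, π] for v = 1, gets a very large one, while at v = 0 any positive code reduces the noise, consistent with the first columns of the tables below.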
9.1 Codes associated with various probability densities ρ
To ease the comparison with Raskar’s code, all experiments are made with N = 52, which is the length of Raskar’s code. All following codes are the best possible choices for the αk approximating the ideal α* (defined in 8.1).
9.1.1 Gaussian distribution over the velocity
Optimal codes are shown assuming a centered Gaussian density hypothesis on the velocity, for several standard deviations σ. Notice that the weight function w (defined in (54)) being concave, one can deduce that the modulus of the Fourier transform of the ideal gain-control function (8.1) is also concave. Thus all optimized codes (Figs. 16 & 17) inherit this low-pass behavior, more or less peaked at the null frequency, depending on σ, the standard deviation of the (centered) Gaussian assumed for the optimization. In the following, σ is measured in pixels. Optimized codes are given explicitly in (Fig. 18). To summarize the results: if increasing the exposure time by a factor of N leads to a Gaussian motion blur with standard deviation ≈ N/12 (only), then the SNR can be increased by a factor ≈ 3 using a numerical flutter shutter.
Figure 14: On the left: the probability density associated with the Raskar et al. code; x-axis: (signed) motion (in pixels), y-axis: probability. On the right: the same on a logarithmic scale; notice that the probability is nonzero even for big blurs (velocities). It corresponds to a steady background and some fast-moving objects in a broad velocity interval. This is precisely the capture model used in the experiments: a fixed camera (a steady background) and some vehicles moving at an unknown speed.
Figure 15: On the left: the probability density associated with the Raskar et al. code; x-axis: (signed) motion (in pixels), y-axis: probability. On the right: the same on a logarithmic scale; notice that the probability is nonzero even for big blurs (velocities). It corresponds to an attempt to optimize both the SNR and the PSF estimation.
Notice that the optimal codes coming from a Gaussian probability distribution on the motions are positive. Thus we can explicitly compare the analog and numerical flutter shutter.
Figure 17: Codes obtained assuming a Gaussian density over the velocities, for several standard deviations σ, the blur being measured in pixels. On the left, from top to bottom: the codes for σ = 5 and σ = 10 (in pixels). For Gaussian velocities the blur length (support) was cropped to bmax := 4σ, thus vmax = bmax at the optimization step (∆t = 1). On the right: the corresponding Fourier transform moduli.
σ =1
σ =2
σ =5
σ =10
(1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
11111111)
(1.0000 0.9979 0.9938 0.9877 0.9796 0.9695 0.9575 0.9437 0.9281 0.9109 0.8920 0.8717
0.8500 0.8271 0.8030 0.7784 0.7518 0.7250 0.6975 0.6695 0.6412 0.6126 0.5839 0.5552
0.5266 0.4982 .4702 0.4427 0.4157 0.3894 0.3639 0.3392 0.3154 0.2925 0.2707 0.2500
0.2303 0.2118 0.1959 0.1783 0.1633 0.1495 0.1368 0.1253 0.1149 0.1041 0.0973 0.0900
0.0837 0.0783 0.0737 0.0699 )
(1.0000 0.9918 0.9755 0.9516 0.9205 0.8828 0.8395 0.7912 0.7392 0.6843 0.6276 0.5702
0.5130 0.4570 0.4031 0.3520 0.3043 0.2606 0.2213 0.1866 0.1565 0.1312 0.1103 0.0937
0.0810 0.0718 0.0656 0.0620 0.0604 0.0603 0.0613 0.0628 0.0646 0.0662 0.0674 0.0680
0.0679 0.0670 0.0652 0.0627 0.0596 0.0559 0.0518 0.0475 0.0431 0.0388 0.0348 0.0312
0.0280 0.0254 0.0233 0.0218)
(1.0000 0.9497 0.8558 0.7304 0.5888 0.4470 0.3190 0.2147 0.1391 0.0916 0.0680 0.0610
0.0629 0.0669 0.0683 0.0650 0.0571 0.0467 0.0361 0.0276 0.0224 0.0208 0.0218 0.0239
0.0257 0.0260 0.0246 0.0215 0.0178 0.0143 0.0119 0.0108 0.0111 0.0122 0.0134 0.0141
0.0139 0.0127 0.0110 0.0091 0.0076 0.0068 0.0068 0.0073 0.0081 0.0087 0.0089 0.0085
0.0076 0.0065 0.0055 0.0048)
(1.0000 0.9497 0.8558 0.7304 0.5888 0.4470 0.3190 0.2147 0.1391 0.0916 0.0680 0.0610
0.0629 0.0669 0.0683 0.0650 0.0571 0.0467 0.0361 0.0276 0.0224 0.0208 0.0218 0.0239
0.0257 0.0260 0.0246 0.0215 0.0178 0.0143 0.0119 0.0108 0.0111 0.0122 0.0134 0.0141
0.0139 0.0127 0.0110 0.0091 0.0076 0.0068 0.0068 0.0073 0.0081 0.0087 0.0089 0.0085
0.0076 0.0065 0.0055 0.0048)
Figure 18: Optimized codes for a Gaussian blur model. Notice that the supports of the codes decrease with σ. Again, this is a consequence of the flutter shutter paradox (Thm. 6.3).
9.1.2 Uniform distribution
For several ranges, optimal codes have been computed using a uniform hypothesis on the motion velocities. The weight function w (54) being concave, we deduce that the modulus of the Fourier transform of the ideal gain control function (8.1) is also concave. Thus all optimized codes (Fig. 19) inherit this low-pass behavior (with a peak at the zero frequency), depending on the range assumed on the motion. Codes are given explicitly in (Fig. 20). To summarize: if increasing the exposure time by a factor of N leads to a uniform motion over [0, N/10], then the SNR can be increased by a factor ≈ 3 using a numerical flutter shutter.
9.2 Comparison of acquisition strategies
This comparison is made with the only alternative choice: a standard snapshot. Indeed, without flutter shutter the only way to guarantee a sharp image is to take a standard, “quick enough to be sharp” snapshot. This method has a nearly constant SNR for all blur supports (all v such that v∆t < 2). Now, because of the flutter shutter paradox, for each v ∈ support(ρ) the code is never better than the corresponding snapshot using the adapted exposure time. Nevertheless, compared to the very restrictive case where the snapshot is fixed by vmax (which is the only way to ensure a sharp image in all cases without a flutter shutter), we shall observe better results on average. Of course, the SNR will depend on the blur support. Actually, the better SNRs we shall observe depend on the shape of ρ, the assumed probability density of the velocities. Roughly speaking, the idea is to accept more risk in order to expect a better SNR on average. The optimization step guarantees the smallest risk for the average SNR. Necessarily, the optimization concentrates the gains on the more probable motions, while things are made worse for unlikely motions. We cannot expect to improve all results, the flutter shutter paradox indicating that the worst-case optimal code is realized by classic photography.
9.3 Simulation of acquired images
Let (α0, ..., αN−1) be a code found using (8.1). We need to simulate the obtained images for several blur supports (v), using the same code, to illustrate the increase in SNR and the effect of the optimization on the restored image.
9.3.1 Numerical implementation
From (33, 31), the acquired image can be simulated using the following scheme (without loss of generality, for simplicity we choose ∆t = 1 and v = 1 in the experiments):
1. Simulation of the observed image
(a) Take a sharp image.
(b) Convolve it with the gain control function α (26). This image I1 is a simulation of the expected value of the acquired one. Notice this can be done in the Fourier domain using (41).
(c) Convolve it with the squared gain control function α² (31). As previously, this can be done using (41). This image (map) I2 is a simulation of the noise variance.
(d) The deterministic part of the observed image is $\frac{1}{v}\left(u * \alpha(\frac{\cdot}{v})\right) - \frac{1}{v}\left(u * \alpha^2(\frac{\cdot}{v})\right)$. The stochastic part is a Poisson random variable with intensity $\lambda(x) = \frac{1}{v}\left(u * \alpha^2(\frac{\cdot}{v})\right)(x)$.
(e) The simulated observed value at pixel x is obtained by simulating the random variable $obs(x) \sim \frac{1}{v}\left(u * \alpha(\frac{\cdot}{v})\right)(x) - \frac{1}{v}\left(u * \alpha^2(\frac{\cdot}{v})\right)(x) + \mathcal{P}\!\left(\frac{1}{v}\left(u * \alpha^2(\frac{\cdot}{v})\right)(x)\right)$ with the desired SNR.
(f) Crop the result, omitting the last N columns, to avoid unrealistic periodization effects.
2. Restoration
(a) Use a classic mirror extension in the direction of the blur.
(b) Divide the Fourier transform of I by the Fourier transform of the gain control function α (26).
(c) Crop the result to avoid border effects.
Notice that the sharp snapshot can be simulated using a “degenerate” gain control function defined from (26) with α0 = 1 and αk = 0 for all k ≥ 1.
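Steps 1(a)-1(e) and the deconvolution can be sketched in one dimension (v = 1, ∆t = 1, periodic convolutions; the 1-D reduction and all names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_observed(u, code):
    """Simulate the flutter shutter acquisition of a (periodic, 1-D) landscape u."""
    code = np.asarray(code, dtype=float)
    n = len(u)
    h = np.zeros(n); h[:len(code)] = code                  # gain control function alpha
    h2 = np.zeros(n); h2[:len(code)] = np.square(code)     # alpha^2
    mean = np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(h)))        # E[obs] = u * alpha
    intensity = np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(h2)))  # u * alpha^2
    return mean - intensity + rng.poisson(np.maximum(intensity, 0.0))

def restore(obs, code):
    """Deconvolve by the code (assumes the DFT of the code does not vanish)."""
    code = np.asarray(code, dtype=float)
    n = len(obs)
    h = np.zeros(n); h[:len(code)] = code
    return np.real(np.fft.ifft(np.fft.fft(obs) / np.fft.fft(h)))
```

For a general code the division requires a zero-free α̂, which is exactly the caveat noted after Lemma 8.2; the degenerate snapshot code (1, 0, ..., 0) always satisfies it.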
9.3.2 Examples
U[0,1]: (1.0000 0.9994 0.9982 0.9964 0.9939 0.9909 0.9873 0.9831 0.9783 0.9729 0.9670 0.9605 0.9534 0.9459 0.9377 0.9291 0.9200 0.9103 0.9002 0.8896 0.8786 0.8671 0.8553 0.8430 0.8303 0.8173 0.8039 0.7902 0.7762 0.7619 0.7473 0.7324 0.7174 0.7021 0.6866 0.6709 0.6551 0.6392 0.6231 0.6070 0.5907 0.5745 0.5582 0.5419 0.5255 0.5093 0.4930 0.4768 0.4608 0.4448 0.4289 0.4131)
U[0,2]: (1.0000 0.9976 0.9927 0.9855 0.9759 0.9640 0.9499 0.9337 0.9154 0.8952 0.8731 0.8494 0.8241 0.7973 0.7692 0.7401 0.7099 0.6790 0.6473 0.6152 0.5828 0.5502 0.5176 0.4851 0.4529 0.4211 0.3900 0.3595 0.3298 0.3011 0.2734 0.2469 0.2215 0.1975 0.1748 0.1534 0.1335 0.1151 0.0981 0.0826 0.0685 0.0560 0.0448 0.0350 0.0266 0.0194 0.0135 0.0088 0.0052 0.0026 0.0009 0.0001)
U[0,5]: (1.0000 0.9849 0.9552 0.9121 0.8569 0.7918 0.7189 0.6406 0.5595 0.4781 0.3987 0.3235 0.2542 0.1924 0.1390 0.0946 0.0594 0.0331 0.0152 0.0047 0.0004 0.0012 0.0057 0.0126 0.0206 0.0287 0.0359 0.0417 0.0455 0.0471 0.0466 0.0441 0.0400 0.0346 0.0285 0.0221 0.0160 0.0106 0.0061 0.0028 0.0008 0.0001 0.0005 0.0019 0.0040 0.0066 0.0092 0.0118 0.0139 0.0155 0.0163 0.0164)
U[0,10]: (1.0000 0.9407 0.8306 0.6849 0.5228 0.3638 0.2250 0.1177 0.0466 0.0100 0.0008 0.0092 0.0248 0.0391 0.0467 0.0457 0.0376 0.0255 0.0134 0.0045 0.0004 0.0012 0.0053 0.0106 0.0148 0.0165 0.0153 0.0118 0.0072 0.0031 0.0006 0.0002 0.0016 0.0040 0.0065 0.0081 0.0082 0.0070 0.0048 0.0025 0.0008 0.0001 0.0005 0.0017 0.0033 0.0045 0.0050 0.0047 0.0036 0.0021 0.0009 0.0001)
Figure 20: Optimized codes for a uniformly distributed blur model. Notice that the supports of the codes decrease with the support of the probability density. Again, this is a consequence of the flutter shutter paradox (Thm. 6.3).
v                      0        0.25     0.5      0.75     1        1.5
blur support (pixels)  0        13       26       39       52       78
Snapshot               1        1        1        1        1        1
Raskar                 0.1961   2.6414   3.0608   3.0608   3.0608   3.0608
Rand                   0.6359   1.5641   2.5447   3.3463   3.3463   41.1231
αN(0,1)                0.1646   1.7006   3.3463   4.8680   6.6639   8.5474
αN(0,2)                0.2120   1.1422   2.3018   3.4322   4.4776   5.7872
αN(0,5)                0.3136   0.5827   1.3571   2.1059   2.7418   3.5916
αN(0,10)               0.4306   0.6172   0.8159   1.1739   1.7973   2.4850

Figure 21: (Normalized) noise boost factor guaranteed for the numerical flutter shutter (in the worst case, when the power spectrum of the image or the PSF g is not taken into account (see Thm. 4.2)), for different codes. For noise boost factors smaller than one the noise is reduced. Notice that for v = 1 the boost factor is larger than or equal to one for all codes; this is a consequence of the flutter shutter paradox (Thm. 6.3).
                             Snapshot   αN(0,1)   αN(0,2)   αN(0,5)   αN(0,10)
Averaged noise boost factor  1          0.1764    0.2463    0.3895    0.5483
Std-dev on the average       0          0.0183    0.0390    0.0630    0.0877

Figure 22: Averaged (according to the assumed probability density) noise boost factor guaranteed for the numerical flutter shutter (in the worst case, when the power spectrum of the image or the PSF g is not taken into account (see Thm. 4.2)), for different codes. On the second row: the standard deviation of this averaged value.
v                      0        0.25     0.5      0.75     1        1.5
blur support (pixels)  0        13       26       39       52       78
Snapshot               1        1        1        1        1        1
Raskar                 0.6359   2.6414   3.0608   3.0608   3.0608   3.0608
αN(0,1)                0.2077   2.0028   3.9319   5.7330   7.8480   10.0661
αN(0,2)                0.2610   1.4061   2.8335   4.2251   5.5119   7.1214
αN(0,5)                0.3977   0.7391   1.7212   2.6710   3.4775   4.5553
αN(0,10)               0.5481   0.7857   1.0386   1.4943   2.2877   3.1631

Figure 23: Noise boost factor guaranteed for the analog flutter shutter (in the worst case, when the power spectrum of the image or the PSF g is not taken into account (see Lem. 4.8)), for several codes. For noise boost factors smaller than one the noise is reduced. The random code, being not positive, is not implementable with an analog flutter shutter device.

                             Snapshot   αN(0,1)   αN(0,2)   αN(0,5)   αN(0,10)
Averaged noise boost factor  1          0.2077    0.3032    0.4921    0.6979
Std-dev on the average       0          0.0216    0.0480    0.0799    0.1160

Figure 24: Averaged (according to the assumed probability density) noise boost factor guaranteed for the analog flutter shutter (in the worst case, when the power spectrum of the image or the PSF g is not taken into account (see Thm. 4.2)), for different codes. On the second row: the standard deviation of this averaged value.
v                      0        0.25     0.5      0.75     1         1.5
blur support (pixels)  0        13       26       39       52        78
Snapshot               1        1        1        1        1         1
Raskar                 0.6359   2.6414   3.0608   3.0608   3.0608    3.0608
αU[0,1]                0.1426   3.4811   6.8435   9.8925   13.6382   17.5944
αU[0,2]                0.1771   1.5938   3.1732   4.6119   5.8726    7.6794
αU[0,5]                0.2638   0.9116   1.9424   2.8933   3.7645    4.8722
αU[0,10]               0.3649   0.5409   1.2650   1.9789   2.5729    3.4086

Figure 25: Noise boost factor guaranteed for the numerical flutter shutter (in the worst case, when the power spectrum of the image or the PSF g is not taken into account), for several codes. For noise boost factors smaller than one the noise is reduced. The random code, being not positive, is not implementable with an analog flutter shutter device.

                             Snapshot   αU[0,1]   αU[0,2]   αU[0,5]   αU[0,10]
Averaged noise boost factor  1          0.1473    0.1889    0.3021    1.4262
Std-dev on the average       0          0.0043    0.0108    0.0276    0.0383

Figure 26: Averaged (according to the assumed probability density) noise boost factor guaranteed for the numerical flutter shutter (in the worst case, when the power spectrum of the image or the PSF g is not taken into account (see Thm. 4.2)), for different codes. On the second row: the standard deviation of this averaged value.
v                      0        0.25     0.5      0.75      1         1.5
blur support (pixels)  0        13       26       39        52        78
Snapshot               1        1        1        1         1         1
Raskar                 0.6359   2.6414   3.0608   3.0608    3.0608    3.0608
αU[0,1]                0.1576   3.8477   7.5643   11.0340   15.0746   19.4476
αU[0,2]                0.2064   1.8572   3.6976   5.3741    6.8432    8.9486
αU[0,5]                0.3162   1.0926   2.3161   3.4678    4.5120    5.8397
αU[0,10]               0.4408   0.6534   1.5281   2.3906    3.1081    4.1177

Figure 27: Noise boost factor guaranteed for the analog flutter shutter (in the worst case, when the power spectrum of the image or the PSF g is not taken into account), for several codes. For noise boost factors smaller than one the noise is reduced. The random code, being not positive, is not implementable with an analog flutter shutter device.

                             Snapshot   αU[0,1]   αU[0,2]   αU[0,5]   αU[0,10]
Averaged noise boost factor  1          0.1628    0.2201    0.3621    0.5149
Std-dev on the average       0          0.0047    0.0126    0.0331    0.0463

Figure 28: Averaged (according to the assumed probability density) noise boost factor guaranteed for the analog flutter shutter (in the worst case, when the power spectrum of the image or the PSF g is not taken into account (see Thm. 4.2)), for different codes. On the second row: the standard deviation of this averaged value.
Figure 29: On the left: the acquired image for v = 1, using the optimized code for Gaussian-distributed motion with σ = 1. On the right: its Fourier transform (on a logarithmic scale).
Figure 30: On the left: the restored image for v = 1, using the optimized code for Gaussian-distributed motion with σ = 1 (RMSE = 5.6628, RMSE_CI = 5.6266). On the right: the noise (rescaled on [0, 255]).
Figure 31: Acquired image for v = 0.25, using the optimized code for Gaussian-distributed motions with σ = 1. On the right: its Fourier transform (on a logarithmic scale).
Figure 32: On the left: the restored image for v = 0.25, using the optimized code for Gaussian-distributed motions with σ = 1 (RMSE = 1.6600, RMSE_CI = 1.2572). On the right: the noise (rescaled on [0, 255]).
Figure 33: On the left: the acquired image for v = 0, using the optimized code for Gaussian-distributed motions with σ = 1. On the right: its Fourier transform (on a logarithmic scale).
Figure 34: On the left: the restored image for v = 0, using the optimized code for Gaussian-distributed motion with σ = 1 (RMSE = 0.1897, RMSE_CI = 0.4319). On the right: the noise (rescaled on [0, 255]).
9.4 Comparison with Raskar’s code
The following results illustrate how the noise can be reduced using a flutter shutter method (depending on the probabilistic motion model).
v                      0               0.25            0.5             0.75            1               1.5
blur support (pixels)  0               13              26              39              52              78
Snapshot               1.1525—1.2454   1.1591—1.2540   1.1972—1.2842   1.2677—1.3493   1.3822—1.4559   1.8836—1.9341
Raskar’s               0.2259—0.4844   1.6022—1.6635   2.0533—2.0966   1.9574—1.9988   2.2669—2.3136   4.0575—4.0273
Rand                   0.7336—0.8874   1.1161—1.2125   1.5275—1.5908   1.6761—1.7221   2.0834—2.0978   4.4067—4.3047
αN(0,1)                0.1897—0.4319   1.6600—1.2572   2.3945—2.4258   3.8271—3.8244   5.6688—5.6266   12.2717—11.9195
αN(0,2)                0.2444—0.4960   0.7841—0.9245   1.6722—1.7311   2.6946—2.7185   3.9826—3.9759   8.7119—8.5549
αN(0,5)                0.3618—0.6099   0.5687—0.7631   0.9508—1.0657   1.5974—1.6594   2.4276—2.4580   5.4171—5.3805
αN(0,10)               0.4961—0.7065   0.6290—0.8070   0.8058—0.9476   1.0242—1.1298   1.5435—1.6093   3.6735—3.6723

Figure 35: Numerical flutter shutter RMSE (each entry: RMSE—RMSE_CI) for several Gaussian probability distributions on the velocity, N = 52, ∆t = 1. We denote by αN(0,σ) the code optimized for a centered Gaussian distribution of standard deviation σ. As previously mentioned, the noise boost factor is quite pessimistic (see Fig. 21).

v                      0               0.25            0.5             0.75            1               1.5
blur support (pixels)  0               13              26              39              52              78
Snapshot               1.1525—1.2454   1.1591—1.2540   1.1972—1.2842   1.2677—1.3493   1.3822—1.4559   1.8836—1.9341
Raskar                 0.2259—0.4844   1.6022—1.6635   2.0533—2.0966   1.9574—1.9988   2.2669—2.3136   4.0575—4.0273
αN(0,1)                0.2233—0.4769   1.3663—1.4414   2.8235—2.8445   4.4963—4.4815   6.6898—6.6184   14.5246—14.0169
αN(0,2)                0.3007—0.5582   0.9361—1.0564   2.0512—2.0945   3.3129—3.3211   4.9086—4.8822   10.6810—10.4235
αN(0,5)                0.4584—0.6872   0.7021—0.8649   1.1856—1.2753   2.0289—2.0718   3.0678—3.0806   6.8663—6.7884
αN(0,10)               0.6317—0.8086   0.7951—0.9373   1.0060—1.1134   1.2932—1.3733   1.9534—1.9980   4.6624—4.6437

Figure 36: Analog flutter shutter RMSE (each entry: RMSE—RMSE_CI) for various probability distributions, N = 52, ∆t = 1. Notice that the cost is ≈ +20% of noise compared to the numerical flutter shutter, except for binary codes like Raskar’s. The random code used previously is not positive and cannot be implemented in an analog flutter shutter context. We denote by αN(0,σ) the code optimized for a centered Gaussian distribution of std-dev σ. As seen in (Fig. 21), the noise boost factor was quite pessimistic.

10 Estimating the kernel

In a real context, to perform the deconvolution we need to estimate v, the (a priori unknown) velocity of the motion. If the apparatus allows building two images simultaneously, the problem can be made easier: one stores a classic motion-blurred image without flutter shutter, which is used to estimate the blur (by tracking the zeros in its Fourier transform, for example), and another image with flutter shutter, which is thereafter deconvolved.
Otherwise, we can try to estimate v from the observed image itself. This is an apparently ill-posed problem, since any observed pattern could come either from the landscape or from the applied code. Nevertheless, codes like Raskar’s, because of the oscillating pattern of their Fourier transform, leave a distinguishable trace in the Fourier transform of the observed image (Fig. 6).
Omitting the noise term, we are observing the landscape u convolved with $\alpha_{v^*}(t) = \frac{1}{v^*}\alpha(\frac{t}{v^*})$ for an unknown v*. The oscillating pattern of the Fourier transform modulus of the observed image most likely belongs
to the α kernel, and not to the landscape (see Fig. 6). We shall use this
observation to extract the α kernel. Notice that even without flutter shutter,
i.e., for a snapshot, an estimation of the blur is needed to perform a full
deconvolution, and this estimation is no easier.
The log-modulus of the Fourier transform of the observed image is
log(|ôbs(ξ)|) = log(|(1/v∗) α̂(t/v∗)|)(ξ) + log(|û(ξ)|). Let us compute
log(|ôbs(ξ)|) − log(|(1/v) α̂(t/v)|)(ξ) for a set of values of v. For most
values of v, the result presents all sorts of oscillating patterns, because we
are subtracting phase-shifted oscillations (see Fig. 39). On the other hand,
for v close to v∗ the oscillations cancel (see Fig. 39). Thus, for a wrong v,
log(|ôbs(ξ)|) − log(|(1/v) α̂(t/v)|)(ξ) will have a large total variation norm.
Enforcing the total variation norm to be finite by a convolution with
a small Gaussian G (which also performs a denoising) leads to the following
definition of the estimator vest of v:

vest = argmin_{v∈support(ρ)} ||∇(log(|ôbs|) − log(|(1/v) α̂(t/v)|)) ∗ G (ξ)||L1.

10.1 Numerical implementation

1. For each line of the observed image in the direction of the blur:
v     blur support (pixels)  Snapshot         Raskar           Rand             αU[0,1]           αU[0,2]           αU[0,5]          αU[0,10]
0     0                      1.1525—1.2454    0.2259—0.4844    0.7336—0.8874    0.1645—0.4192     0.2043—0.4556     0.3039—0.5613    0.4213—0.589
0.25  13                     1.1591—1.2540    1.6022—1.6635    1.1161—1.2125    1.7480—1.8029     1.0882—1.1863     0.6425—0.8145    0.5621—0.7583
0.5   26                     1.1972—1.2842    2.0533—2.0966    1.5275—1.5908    3.4470—3.4507     2.2641—2.3023     1.3902—1.4642    0.8860—1.0141
0.75  39                     1.2677—1.3493    1.9574—1.9988    1.6761—1.7221    5.3906—5.3579     3.6157—3.6181     2.2559—2.2925    1.5022—1.5688
1     52                     1.3822—1.4559    2.2669—2.3136    2.0834—2.0978    8.1403—8.0122     5.3427—5.3086     3.3456—3.3537    2.2849—2.3201
1.5   78                     1.8836—1.9341    4.0575—4.0273    4.4067—4.3047    17.7451—16.9643   11.6713—11.3558   7.3583—7.2592    5.1182—5.0867

Figure 37: Numerical flutter shutter RMSE for several probability distributions, N = 52, ∆t = 1. αU[0,x] denotes the code optimized for a uniform
distribution over [0, x]. As stated previously, the noise boost factor is quite
pessimistic.
v     blur support (pixels)  Snapshot         Raskar           αU[0,1]           αU[0,2]           αU[0,5]          αU[0,10]
0     0                      1.1525—1.2454    0.2259—0.4844    0.1814—0.4277     0.2367—0.4951     0.3645—0.6156    0.5091—0.7188
0.25  13                     1.1591—1.2540    1.6022—1.6635    1.9373—1.9835     1.2643—1.3481     0.7476—0.8977    0.6646—0.8327
0.5   26                     1.1972—1.2842    2.0533—2.0966    3.8119—3.8110     2.6311—2.6537     1.6468—1.7057    1.1573—0.6646
0.75  39                     1.2677—1.3493    1.9574—1.9988    5.9764—5.9269     4.2456—4.2355     2.6937—2.7175    1.8069—1.8595
1     52                     1.3822—1.4559    2.2669—2.3136    8.9632—8.8031     6.2489—6.1826     4.0332—4.0249    2.7650—2.7869
1.5   78                     1.8836—1.9341    4.0575—4.0273    19.6137—18.6535   13.6281—13.1823   8.7709—8.6144    6.1573—6.0979

Figure 38: Analog flutter shutter RMSE for different probability distributions, N = 52, ∆t = 1. Notice that the cost is ≈ +20% of noise compared
to the numerical flutter shutter, except for binary codes like Raskar's. The
random code used previously, being not positive, cannot be implemented in
an analog flutter shutter context.
(a) compute ||∇(log(|ôbs|) − log(|(1/v) α̂(t/v)|)) ∗ G (ξ)||L1 for different v ∈ support(ρ);

(b) keep the v that minimizes the previous quantity;

2. Average the estimates over all lines to get vest.
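The scheme above can be sketched on synthetic one-dimensional data. The binary code, the smooth toy landscape spectrum, and the candidate grid below are illustrative choices, and the spectra are smoothed before taking logarithms (to keep them bounded away from zero), a slight variant of the Gaussian smoothing in the text; only the selection rule, minimal total variation of the smoothed log-spectrum difference, follows the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n, N = 4096, 52
code = rng.integers(0, 2, N).astype(float)        # a hypothetical binary (Raskar-like) code

def code_spectrum(v):
    """|Fourier transform| of t -> (1/v) alpha(t/v): the code stretched to N*v pixels (dt = 1)."""
    L = max(1, int(round(N * v)))
    k = np.zeros(n)
    k[:L] = code[np.arange(L) * N // L]           # piecewise-constant stretch of the code
    return np.abs(np.fft.rfft(k))

v_true = 1.0
u_spec = np.exp(-np.linspace(0.0, 8.0, n // 2 + 1))   # smooth toy landscape spectrum
obs_spec = u_spec * code_spectrum(v_true)             # noiseless observed spectrum modulus

gauss = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
gauss /= gauss.sum()                                  # small Gaussian G used for smoothing

def tv(v):
    # total variation of the smoothed log(|obs_hat|) - log(|code spectrum at velocity v|)
    d = np.log(np.convolve(obs_spec, gauss, "same") + 1e-12) \
        - np.log(np.convolve(code_spectrum(v), gauss, "same") + 1e-12)
    return np.abs(np.diff(d)).sum()

candidates = [0.5, 0.75, 1.0, 1.25, 1.5]
v_est = min(candidates, key=tv)                       # step (b): keep the minimizer
print(v_est)                                          # the TV norm is minimal at v_true
```

For a wrong v the mismatched spectral oscillations of the two codes survive in the difference, inflating its total variation, while for v = v∗ only the smooth landscape term remains.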
10.2 Example: Raskar's code
Figure 39: The observed image is simulated for a velocity v = 1. Shown is
the log of the sum of the Fourier transform moduli in the direction of the
blur, minus the log of the Fourier transform modulus of the code applied
for several velocities v, i.e., log(|ôbs|) − log(|(1/v) α̂(t/v)|). From top to
bottom and left to right: v = 0 (TV norm 31.28), v = 0.5 (TV norm 33),
v = 1 (TV norm 25), v = 1.5 (TV norm 44). The estimated velocity is 1,
since the estimated velocity is the one minimizing the TV norm of
log(|ôbs|) − log(|(1/v) α̂(t/v)|). In other words, deconvolving with a wrong
kernel leads to an oddly oscillating Fourier transform modulus, which is
not credible in a natural image.
Unfortunately, optimal codes have to be smooth, hence the previous
scheme is not robust in practice. However, codes like Raskar's work. Moreover,
a trivial constant code equal to one permits to estimate the velocity v and
its direction by analyzing the Fourier transform of the acquired image.
Hence a reasonable scheme (particularly for aerial or satellite imaging)
would be to build two images simultaneously, one with and one without
flutter shutter, use the standard image to estimate v accurately, then use
this value of v to deconvolve the flutter shutter image.
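Once v is known, the final deconvolution step reduces to inverse filtering in the Fourier domain, since flutter shutter kernels are designed to avoid spectral zeros. A minimal noiseless sketch, with a hypothetical random binary code standing in for an actual optimized code (its spectrum is assumed, and here numerically verified, to stay away from zero):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 1024, 52
code = rng.integers(0, 2, N).astype(float)     # hypothetical binary code (not an optimized one)

kernel = np.zeros(n)
kernel[:N] = code / code.sum()                 # flutter shutter blur at v = 1 pixel per slot

u = rng.standard_normal(n)                     # toy landscape line
obs = np.fft.irfft(np.fft.rfft(u) * np.fft.rfft(kernel), n)   # noiseless flutter shutter image

# Inverse filtering, valid when the code spectrum has no zeros (unlike a box blur).
u_rec = np.fft.irfft(np.fft.rfft(obs) / np.fft.rfft(kernel), n)
print(np.max(np.abs(u_rec - u)))               # small: recovery is exact up to round-off
```

With noise, the division would of course amplify the noise wherever the code spectrum is small, which is exactly the SNR trade-off analyzed in this paper.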
11 Conclusion
This paper started by modeling the stochastic photon acquisition of a
landscape by a light sensor. The model intrinsically contains noise terms
due to the Poisson photon emission. It permits to formalize the flutter
shutter and to prove what we called the flutter shutter paradox, namely
the fact that even with an infinite exposure time the SNR remains finite,
contrary to classic steady photography. Two kinds of flutter shutter setups
have been considered: an analog flutter shutter, and a numerical flutter
shutter permitting smoother, possibly negative gain control functions and
leading to a better SNR of the restored image. It also appeared that
motion-invariant photography is a particular case of the analog flutter
shutter. We proved that, when the velocity is known, the best flutter shutter
function is a sinc; however, the SNR increase is small compared to the best
snapshot, so such an acquisition system has a poor efficiency. Eventually, a
better mouse trap was set up to increase the efficiency of the flutter shutter:
on average, the SNR can increase significantly provided the probability
density of the motion blur is known a priori. The conclusion is that the
flutter shutter is useful in the presence of an unknown motion blur, and can
even become SNR efficient if the probability density of the motions is known.
12 Glossary of Notations, Formulas
(i) t ∈ R+ time variable
(ii) ∆t length of a time interval
(iii) x ∈ R spatial variable
(iv) X ∼ Y means that the random variables X and Y have the same law
(v) P(A) probability of an event A
(vi) EX expected value of a random variable X
(vii) var(X) variance of a random variable X
(viii) f ∗ g convolution of two L2(R) functions: (f ∗ g)(x) = ∫_{−∞}^{+∞} f(y) g(x − y) dy
(ix) l(t, x) > 0 ∀ (t, x) ∈ R+ × R continuous landscape before passing through
the optical system
(x) P(λ) Poisson random variable with intensity λ > 0
(xi) Pl bi-dimensional Poisson process on R+ × R associated to the intensity
field l(t, x): Pl([t1, t2] × [a, b]) ∼ P(∫_{t1}^{t2} ∫_a^b l(t, x) dt dx)
(xii) g point-spread-function of the optical system
(xiii) u = 1_{[−1/2, 1/2]} ∗ g ∗ l ideal observable landscape just before sampling.
Assumption: u ∈ L1 ∩ L2, band-limited
(xiv) obs(n), n ∈ Z observation of the landscape at pixel n
(xv) e(n) = E(obs(n)), n ∈ Z and e(x) its band limited interpolation
(xvi) Π_a = Σ_{n∈Z} δ_{na} Dirac comb
(xvii) v relative velocity between the landscape and the camera (unit: pixels
per second)
(xviii) b apparent length of the support of the blur (unit: pixels), N v∆t = b
(xix) α(t) piecewise constant gain control function (analog flutter shutter &
numerical flutter shutter methods)
(xx) w(x) ≥ 0 weight function associated with the probability distribution
of the velocity v
(xxi) α∗ (t) optimal gain control function
(xxii) ||f||L1 = ∫ |f(x)| dx, ||f||L2 = (∫ |f(x)|² dx)^{1/2}
(xxiii) let f, g ∈ L1(R) or L2(R), then

F(f)(ξ) := f̂(ξ) := ∫_{−∞}^{+∞} f(x) e^{−ixξ} dx

F⁻¹(F(f))(x) = f(x) = (1/2π) ∫_{−∞}^{+∞} F(f)(ξ) e^{ixξ} dξ

Moreover F(f ∗ g)(ξ) = F(f)(ξ)F(g)(ξ) and (Plancherel)

∫_{−∞}^{+∞} |f(x)|² dx = ||f||²L2 = (1/2π) ∫_{−∞}^{+∞} |F(f)(ξ)|² dξ = (1/2π) ||F(f)||²L2
(xxiv) SNR(X) := |EX| / √var(X), signal to noise ratio of a random variable X
(xxv) Let uest(x) be an estimation of the landscape u based on a stochastic
observation of u; then SNR(uest(x)) := |E uest(x)| / √var(uest(x))
(xxvi) Let ûest be an estimation of û. Then SNRspectral(ûest(ξ)) := |E ûest(ξ)| / √var(ûest(ξ)),
for ξ ∈ [−π, π]
(xxvii) Let ûest be an estimation of û. Then
SNRspectral−averaged(ûest) := ((1/2π) ∫ |E ûest(ξ)| 1_{[−π,π]}(ξ) dξ) / ((1/2π) ∫ var(ûest(ξ)) 1_{[−π,π]}(ξ) dξ)^{1/2}
(xxviii) sinc(x) = sin(πx)/(πx) = (1/2π) F(1_{[−π,π]})(x) = F⁻¹(1_{[−π,π]})(x)
(xxix) (Poisson summation formula) Let f ∈ L1(R) be band-limited. Then
Σ_n f(n) = Σ_m f̂(2πm); hence if f̂(ξ) = 0 ∀ |ξ| > π then Σ_n f(n) = f̂(0).
Moreover, if f is positive then Σ_{n∈Z} f(n) = f̂(0) = ||f||L1
(xxx) Root-mean-squared-error (RMSE) between a groundtruth u and an
estimation uest based on a noisy observation. When u and uest are
defined on a domain D (D ⊂ R² for images):
RMSE(u, uest) := (∫_D |u(x) − uest(x)|² dx / measure(D))^{1/2}
(xxxi) Contrast invariant RMSE (see 7.1): RMSE_CI(u, uest) := RMSE(u_{mid(u,uest)}, uest_{mid(u,uest)})
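Item (xxix) can be checked numerically with f(x) = sinc(x/2)², which is positive, integrable, and band-limited with f̂ supported in [−π, π]; its integral is ||f||L1 = 2, so the sum of its integer samples must equal 2:

```python
import numpy as np

# f(x) = sinc(x/2)^2: positive, band-limited (Fourier support in [-pi, pi]),
# with integral ||f||_L1 = 2.  Item (xxix) then predicts sum_n f(n) = f_hat(0) = 2.
n = np.arange(-200_000, 200_001)
f = np.sinc(n / 2.0) ** 2          # numpy's sinc(x) = sin(pi x)/(pi x), as in item (xxviii)
print(f.sum())                     # -> approximately 1.999998 (tail decays like 1/K)
```

The slow convergence comes from the 1/n² decay of f at the odd integers; the even integer samples vanish except at n = 0.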
References
[1] A. Agrawal and R. Raskar. Resolving objects at higher resolution from
a single motion-blurred image. In 2007 IEEE Conference on Computer
Vision and Pattern Recognition, pages 1–8. IEEE, 2007.
[2] A. Agrawal and Y. Xu. Coded exposure deblurring: Optimized codes
for psf estimation and invertibility. In IEEE Conf. on Computer Vision
and Pattern Recognition, pages 2066–2073. Citeseer, 2009.
[3] A. K. Agrawal and R. Raskar. Optimal single image capture for motion
deblurring. In CVPR, pages 2560–2567. IEEE, 2009.
[4] M. Ben-Ezra and S.K. Nayar. Motion-based motion deblurring. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 26(6):689–
698, 2004.
[5] A. Buades, Y. Lou, J.-M. Morel, and Z. Tang. A note on multi-image
denoising. In Local and Non-Local Approximation in Image Processing,
2009. LNLA 2009. International Workshop on, pages 1–15. IEEE, 2009.
[6] J.F. Cai, H. Ji, C. Liu, and Z. Shen. High-quality curvelet-based motion deblurring from an image pair. In Computer Vision and Pattern
Recognition, 2009. CVPR 2009. IEEE Conference on, pages 1566–1573.
IEEE.
[7] J.F. Cai, H. Ji, C. Liu, and Z. Shen. Blind motion deblurring using
multiple images. Journal of Computational Physics, 228(14):5057–5071,
2009.
[8] W.G. Chen, N. Nandhakumar, and W.N. Martin. Image motion estimation from motion smear-a new computational model. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(4):412–425,
1996.
[9] S. Cho, Y. Matsushita, and S. Lee. Removing non-uniform motion
blur from images. Computer Vision, 2007. ICCV 2007. IEEE 11th
International Conference on, pages 1–8, Oct. 2007.
[10] S. Dai and Y. Wu. Motion from blur. In In Proc. Conf. Computer
Vision and Pattern Recognition, pages 1–8, 2008.
[11] J. Delon. Midway image equalization. Journal of Mathematical Imaging
and Vision, 21(2):119–134, 2004.
[12] M. E. Gehm, R. John, J.D. Brady, R.M. Willett, and T.J. Schulz.
Single-shot compressive spectral imaging with a dual-disperser architecture. Opt. Express, 2007.
[13] A. Gupta, N. Joshi, C. Lawrence Zitnick, M. Cohen, and B. Curless.
Single image deblurring using motion density functions. ECCV, 2010.
[14] A. Jalobeanu, L. Blanc-Féraud, and J. Zerubia. Estimation of blur
and noise parameters in remote sensing. In Acoustics, Speech, and
Signal Processing (ICASSP), 2002 IEEE International Conference on,
volume 4, pages IV–3580. IEEE, 2002.
[15] P.A. Jansson and M. Richardson. Deconvolution of images and spectra.
Optical Engineering, 36:3224, 1997.
[16] J. Jelinek. Designing the optimal shutter sequences for the flutter
shutter imaging method. In Proceedings of SPIE, volume 7701, page
77010N, 2010.
[17] H. Ji and C. Liu. Motion blur identification from image gradients. In
Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE
Conference on, pages 1–8, 2008.
[18] J. Jia. Single image motion deblurring using transparency. In 2007
IEEE Conference on Computer Vision and Pattern Recognition, pages
1–8. IEEE, 2007.
[19] A. Levin. Blind motion deblurring using image statistics. Advances in
Neural Information Processing Systems, 19:841, 2007.
[20] A. Levin, R. Fergus, F. Durand, and W.T. Freeman. Deconvolution
using natural image priors. ACM Trans. Graphics, 26(3).
[21] A. Levin, P. Sand, T.S. Cho, F. Durand, and W.T. Freeman. Motioninvariant photography. In ACM SIGGRAPH 2008 papers, pages 1–9.
ACM, 2008.
[22] A. Levin, Y. Weiss, F. Durand, and W.T. Freeman. Efficient marginal
likelihood optimization in blind deconvolution. CVPR, 2011.
[23] L.B. Lucy. An iterative technique for the rectification of observed distributions. The astronomical journal, 79:745, 1974.
[24] N. Joshi, R. Szeliski, and D. Kriegman. PSF estimation using sharp
edge prediction. In IEEE Conference on Computer Vision and Pattern
Recognition. IEEE, 2008.
[25] A. Raj and R. Zabih. A graph cut algorithm for generalized image
deconvolution. In Computer Vision, 2005. ICCV 2005. Tenth IEEE
International Conference on, volume 2, pages 1048–1054. IEEE, 2005.
[26] R. Raskar, J. Tumblin, and A. Agrawal. Method for deblurring images
using optimized temporal coding patterns, May 2006.
[27] A. Rav-Acha and S. Peleg. Two motion-blurred images are better than
one. Pattern Recognition Letters, 26(3):311–317, 2005.
[28] W.H. Richardson. Bayesian-based iterative method of image restoration. JOSA, 62(1):55–59, 1972.
[29] S. Schuon and K. Diepold. Comparison of motion de-blur algorithms
and real world deployment. Acta Astronautica, 64(11-12):1050–1065,
2009.
[30] A. Sellent, M. Eisemann, and M. Magnor. Calculating motion fields
from images with two different exposure times. Technical report, Citeseer, 2008.
[31] Q. Shan, J. Jia, and A. Agarwala. High-quality motion deblurring from
a single image. In ACM SIGGRAPH 2008 papers, pages 1–10. ACM,
2008.
[32] Y.W. Tai, H. Du, M.S. Brown, and S. Lin. Correction of spatially varying image and video motion blur using a hybrid camera. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1012–1028,
2010.
[33] J. Telleen, A. Sullivan, J. Yee, O. Wang, P. Gunawardane, I. Collins,
and J. Davis. Synthetic shutter speed imaging. In Computer Graphics
Forum, volume 26, pages 591–598. Wiley Online Library, 2007.
[34] A. Veeraraghavan, D. Reddy, and R. Raskar. Coded strobing photography: Compressive sensing of high speed periodic videos. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 33(4):671
–686, 2011.
[35] O. Whyte, J. Sivic, A. Zisserman, and J. Ponce. Non-uniform deblurring for shaken images. In Computer Vision and Pattern Recognition
(CVPR), 2010 IEEE Conference on, pages 491–498. IEEE, 2010.
[36] B. Wilburn, N. Joshi, V. Vaish, E.V. Talvala, E. Antunez, A. Barth,
A. Adams, M. Horowitz, and M. Levoy. High performance imaging
using large camera arrays. volume 24, pages 765–776. ACM, 2005.
[37] L. Yuan, J. Sun, L. Quan, and H.Y. Shum. Image deblurring with
blurred/noisy image pairs. In ACM SIGGRAPH 2007 papers, pages
1–es. ACM, 2007.
[38] L. Yuan, J. Sun, L. Quan, and H.Y. Shum. Progressive inter-scale and
intra-scale non-blind image deconvolution. In ACM SIGGRAPH 2008
papers, pages 1–10. ACM, 2008.