Introduction to Modern Digital Holography
with MATLAB
Get up to speed with digital holography with this concise and straightforward
introduction to modern techniques and conventions.
Building up from the basic principles of optics, this book describes key tech-
niques in digital holography, such as phase-shifting holography, low-coherence
holography, diffraction tomographic holography, and optical scanning holography.
Practical applications are discussed, and accompanied by all the theory necessary
to understand the underlying principles at work. A further chapter covers advanced
techniques for producing computer-generated holograms. Extensive MATLAB
code is integrated with the text throughout and is available for download online,
illustrating both theoretical results and practical considerations such as aliasing,
zero padding, and sampling.
Accompanied by end-of-chapter problems, and an online solutions manual
for instructors, this is an indispensable resource for students, researchers, and
engineers in the fields of optical image processing and digital holography.
TING-CHUNG POON
Virginia Tech, USA
JUNG-PING LIU
Feng Chia University, Taiwan
University Printing House, Cambridge CB2 8BS, United Kingdom
Published in the United States of America by Cambridge University Press, New York
www.cambridge.org
Information on this title: www.cambridge.org/9781107016705
© T.-C. Poon & J.-P. Liu 2014
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press.
First published 2014
Printed in the United Kingdom by TJ International Ltd, Padstow, Cornwall
A catalog record for this publication is available from the British Library
Library of Congress Cataloging in Publication data
Poon, Ting-Chung.
Introduction to modern digital holography : with MATLAB / Ting-Chung Poon, Jung-Ping Liu.
pages cm
ISBN 978-1-107-01670-5 (Hardback)
1. Holography–Data processing. 2. Image processing–Digital techniques. I. Liu, Jung-Ping. II. Title.
TA1542.P66 2014
621.36'075–dc23
2013036072
ISBN 978-1-107-01670-5 Hardback
Additional resources for this publication at www.cambridge.org/digitalholography
Cambridge University Press has no responsibility for the persistence or accuracy of
URLs for external or third-party internet websites referred to in this publication,
and does not guarantee that any content on such websites is, or will remain,
accurate or appropriate.
Contents
Preface page ix
1 Wave optics 1
1.1 Maxwell's equations and the wave equation 1
1.2 Plane waves and spherical waves 3
1.3 Scalar diffraction theory 5
1.3.1 Fresnel diffraction 9
1.3.2 Fraunhofer diffraction 11
1.4 Ideal thin lens as an optical Fourier transformer 14
1.5 Optical image processing 15
Problems 24
References 26
2 Fundamentals of holography 27
2.1 Photography and holography 27
2.2 Hologram as a collection of Fresnel zone plates 28
2.3 Three-dimensional holographic imaging 33
2.3.1 Holographic magnifications 38
2.3.2 Translational distortion 39
2.3.3 Chromatic aberration 40
2.4 Temporal and spatial coherence 42
2.4.1 Temporal coherence 43
2.4.2 Coherence time and coherence length 45
2.4.3 Some general temporal coherence considerations 46
2.4.4 Fourier transform spectroscopy 48
2.4.5 Spatial coherence 51
2.4.6 Some general spatial coherence considerations 53
Problems 56
References 58
3 Types of holograms 59
3.1 Gabor hologram and on-axis (in-line) holography 59
3.2 Off-axis holography 61
3.3 Image hologram 64
3.4 Fresnel and Fourier holograms 68
3.4.1 Fresnel hologram and Fourier hologram 68
3.4.2 Lensless Fourier hologram 70
3.5 Rainbow hologram 73
Problems 78
References 78
4 Conventional digital holography 79
4.1 Sampled signal and discrete Fourier transform 79
4.2 Recording and limitations of the image sensor 89
4.2.1 Imager size 91
4.2.2 Pixel pitch 91
4.2.3 Modulation transfer function 92
4.3 Digital calculations of scalar diffraction 95
4.3.1 Angular spectrum method (ASM) 95
4.3.2 Validity of the angular spectrum method 97
4.3.3 Fresnel diffraction method (FDM) 99
4.3.4 Validation of the Fresnel diffraction method 101
4.3.5 Backward propagation 103
4.4 Optical recording of digital holograms 105
4.4.1 Recording geometry 105
4.4.2 Removal of the twin image and the zeroth-order light 108
4.5 Simulations of holographic recording and reconstruction 111
Problems 116
References 117
5 Digital holography: special techniques 118
5.1 Phase-shifting digital holography 118
5.1.1 Four-step phase-shifting holography 119
5.1.2 Three-step phase-shifting holography 120
5.1.3 Two-step phase-shifting holography 120
5.1.4 Phase step and phase error 122
5.1.5 Parallel phase-shifting holography 124
5.2 Low-coherence digital holography 126
5.3 Diffraction tomographic holography 133
5.4 Optical scanning holography 137
Preface
Owing to the advance in faster electronics and digital processing power, the past
decade has seen an impressive re-emergence of digital holography. Digital
holography is a topic of growing interest and it finds applications in three-dimensional
imaging, three-dimensional displays and systems, as well as biomedical imaging and
metrology. While research in digital holography continues to be vibrant and digital
holography is maturing, we find that there is a lack of textbooks in the area. The
present book tries to serve this need: to promote and teach the foundations of digital
holography. In addition to presenting traditional digital holography and applications
in Chapters 1–4, we also discuss modern applications and techniques in digital
holography such as phase-shifting holography, low-coherence holography, diffraction
tomographic holography, optical scanning holography, sectioning in holography,
digital holographic microscopy as well as computer-generated holography in
Chapters 5–7. This book is geared towards undergraduate seniors or first-year
graduate-level students in engineering and physics. The material covered is suitable
for a one-semester course in Fourier optics and digital holography. The book is also
useful for scientists and engineers, and for those who simply want to learn about
optical image processing and digital holography.
We believe in the inclusion of MATLAB in the textbook because digital
holography relies heavily on digital computations to process holographic data.
MATLAB will help the reader grasp and visualize some of the important concepts
in digital holography. The use of MATLAB not only helps to illustrate the
theoretical results, but also makes us aware of computational issues such as
aliasing, zero padding, and sampling that we face in implementing them.
Nevertheless, this text is not about teaching MATLAB, and some familiarity with
MATLAB is required to understand the codes.
Ting-Chung Poon would like to thank his wife, Eliza, and his children, Christina
and Justine, for their love. This year is particularly special to him as Christina gave
birth to a precious little one, Gussie. Jung-Ping Liu would like to thank his wife,
Hui-Chu, and his parents for their understanding and encouragement.
1
Wave optics
where ε and μ are the permittivity (F/m) and permeability (H/m) of the medium,
respectively. In the case of a linear, homogeneous, and isotropic medium such as in
vacuum or free space, ε and μ are scalar constants. Using Eqs. (1.3)–(1.6), we can
derive a wave equation in E or B in free space. For example, by taking the curl of
E in Eq. (1.3), we can derive the wave equation in E as

\[ \nabla^2\mathbf{E} - \mu\varepsilon\,\frac{\partial^2\mathbf{E}}{\partial t^2} = \mu\,\frac{\partial \mathbf{J}_C}{\partial t} + \frac{1}{\varepsilon}\nabla\rho_v. \tag{1.7} \]

In free space, \( \mathbf{J}_C = 0 \) and \( \rho_v = 0 \), and Eq. (1.7) reduces to

\[ \nabla^2\mathbf{E} - \frac{1}{v^2}\frac{\partial^2\mathbf{E}}{\partial t^2} = 0. \tag{1.8} \]
Note that \( v = 1/\sqrt{\mu\varepsilon} \) is the velocity of the wave in the medium. Equation (1.8) is
equivalent to three scalar equations, one for every component of E. Let
\[ \mathbf{E} = E_x\mathbf{a}_x + E_y\mathbf{a}_y + E_z\mathbf{a}_z, \tag{1.9} \]
where ax, ay, and az are the unit vectors in the x, y, and z directions, respectively.
Equation (1.8) then becomes
\[ \left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}\right)(E_x\mathbf{a}_x + E_y\mathbf{a}_y + E_z\mathbf{a}_z) = \frac{1}{v^2}\frac{\partial^2}{\partial t^2}(E_x\mathbf{a}_x + E_y\mathbf{a}_y + E_z\mathbf{a}_z). \tag{1.10} \]

Comparing the \( \mathbf{a}_x \) components on both sides, we have

\[ \frac{\partial^2 E_x}{\partial x^2} + \frac{\partial^2 E_x}{\partial y^2} + \frac{\partial^2 E_x}{\partial z^2} = \frac{1}{v^2}\frac{\partial^2 E_x}{\partial t^2}. \]
Similarly, we can derive the same type of equation shown above for the Ey and Ez
components by comparison with other components in Eq. (1.10). Hence we can
write a compact equation for the three components as
\[ \left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}\right)\psi = \frac{1}{v^2}\frac{\partial^2\psi}{\partial t^2}, \tag{1.11a} \]

or

\[ \nabla^2\psi = \frac{1}{v^2}\frac{\partial^2\psi}{\partial t^2}, \tag{1.11b} \]
where ψ can represent a component, E_x, E_y, or E_z, of the electric field E. Equation
(1.11) is called the three-dimensional scalar wave equation. We shall look at some
of its simplest solutions in the next section.
1.2 Plane waves and spherical waves
In spherical coordinates, the Laplacian operator reads

\[ \nabla^2 = \frac{\partial^2}{\partial R^2} + \frac{2}{R}\frac{\partial}{\partial R} + \frac{1}{R^2}\frac{\partial^2}{\partial \theta^2} + \frac{\cot\theta}{R^2}\frac{\partial}{\partial \theta} + \frac{1}{R^2\sin^2\theta}\frac{\partial^2}{\partial \phi^2}, \]

and for a spherically symmetric field, ψ = ψ(R, t), the wave equation of Eq. (1.11b) becomes

\[ \frac{\partial^2\psi}{\partial R^2} + \frac{2}{R}\frac{\partial\psi}{\partial R} = \frac{1}{v^2}\frac{\partial^2\psi}{\partial t^2}. \tag{1.16} \]

Since

\[ \frac{\partial^2\psi}{\partial R^2} + \frac{2}{R}\frac{\partial\psi}{\partial R} = \frac{1}{R}\frac{\partial^2 (R\psi)}{\partial R^2}, \]

Eq. (1.16) can be re-written as

\[ \frac{\partial^2 (R\psi)}{\partial R^2} = \frac{1}{v^2}\frac{\partial^2 (R\psi)}{\partial t^2}. \tag{1.17} \]

By comparing the above equation with Eq. (1.14), which has a solution given by Eq. (1.13), we can construct a simple solution to Eq. (1.17) as

\[ R\psi(R, t) = A\exp[\,j(\omega_0 t - k_0 R)], \]

or

\[ \psi(R, t) = \frac{A}{R}\exp[\,j(\omega_0 t - k_0 R)]. \tag{1.18} \]
The above equation is a spherical wave of amplitude A, which is one of the
solutions to Eq. (1.16). In summary, plane waves and spherical waves are some
of the simplest solutions of the three-dimensional scalar wave equation.
1.3 Scalar diffraction theory
The quantity ψ_p0(x, y) is called the complex amplitude in optics. This complex
amplitude is the initial condition, which is given by ψ_p0(x, y) = A × t(x, y), the
amplitude of the incident plane wave multiplied by the transparency function of
the aperture. To find the field distribution at a distance z away from the aperture,
we model the solution in the form of

\[ \psi(x, y, z; t) = \psi_p(x, y; z)\exp(\,j\omega_0 t), \tag{1.20} \]

where ψ_p(x, y; z) is the unknown to be found with the initial condition ψ_p0(x, y)
given. To find ψ_p(x, y; z), we substitute Eq. (1.20) into the three-dimensional
scalar wave equation given by Eq. (1.11a) to obtain the Helmholtz equation for
ψ_p(x, y; z),

\[ \frac{\partial^2\psi_p}{\partial x^2} + \frac{\partial^2\psi_p}{\partial y^2} + \frac{\partial^2\psi_p}{\partial z^2} + k_0^2\,\psi_p = 0. \tag{1.21} \]
To find the solution to the above equation, we choose to use the Fourier transform
technique. The two-dimensional Fourier transform of a spatial signal f(x, y) is
defined as

\[ \mathcal{F}\{f(x, y)\} = F(k_x, k_y) = \iint f(x, y)\exp(\,jk_x x + jk_y y)\,dx\,dy, \tag{1.22} \]

where k_x and k_y are called spatial radian frequencies as they have units of radian
per unit length. The functions f(x, y) and F(k_x, k_y) form a Fourier transform pair.
Table 1.1 shows some of the most important transform pairs.
By taking the two-dimensional Fourier transform of Eq. (1.21) and using
transform pair number 4 in Table 1.1 to obtain

\[ \mathcal{F}\left\{\frac{\partial^2\psi_p}{\partial x^2}\right\} = (-jk_x)^2\,\Psi_p(k_x, k_y; z), \qquad \mathcal{F}\left\{\frac{\partial^2\psi_p}{\partial y^2}\right\} = (-jk_y)^2\,\Psi_p(k_x, k_y; z), \tag{1.23} \]

where \( \mathcal{F}\{\psi_p(x, y; z)\} = \Psi_p(k_x, k_y; z) \), we have a differential equation in \( \Psi_p(k_x, k_y; z) \) given by

\[ \frac{d^2\Psi_p}{dz^2} + k_0^2\left(1 - \frac{k_x^2}{k_0^2} - \frac{k_y^2}{k_0^2}\right)\Psi_p = 0, \tag{1.24} \]

subject to the known initial condition \( \mathcal{F}\{\psi_p(x, y; z = 0)\} = \Psi_p(k_x, k_y; z = 0) = \Psi_{p0}(k_x, k_y) \).
The solution to the above second-order ordinary differential equation is
straightforward and is given by

\[ \Psi_p(k_x, k_y; z) = \Psi_{p0}(k_x, k_y)\exp\left(-jk_0\sqrt{1 - k_x^2/k_0^2 - k_y^2/k_0^2}\; z\right). \tag{1.25} \]
Table 1.1 Fourier transform pairs, \( f(x, y) \leftrightarrow F(k_x, k_y) \)

1. \( f(x, y) \leftrightarrow F(k_x, k_y) \)
2. Shifting: \( f(x - x_0, y - y_0) \leftrightarrow F(k_x, k_y)\exp(\,jk_x x_0 + jk_y y_0) \)
3. Scaling: \( f(ax, by) \leftrightarrow \dfrac{1}{|ab|}\,F\!\left(\dfrac{k_x}{a}, \dfrac{k_y}{b}\right) \)
4. Differentiation: \( \partial f(x, y)/\partial x \leftrightarrow -jk_x F(k_x, k_y) \)
5. Convolution integral \( \leftrightarrow \) product of spectra: \( f_1 * f_2 = \iint f_1(x', y')\,f_2(x - x', y - y')\,dx'\,dy' \leftrightarrow F_1(k_x, k_y)F_2(k_x, k_y) \), where \( \mathcal{F}\{f_1(x, y)\} = F_1(k_x, k_y) \) and \( \mathcal{F}\{f_2(x, y)\} = F_2(k_x, k_y) \)
6. Correlation: \( f_1 \otimes f_2 = \iint f_1^*(x', y')\,f_2(x + x', y + y')\,dx'\,dy' \leftrightarrow F_1^*(k_x, k_y)F_2(k_x, k_y) \)
7. Gaussian function: \( \exp[-\alpha(x^2 + y^2)] \leftrightarrow \dfrac{\pi}{\alpha}\exp\!\left(-\dfrac{k_x^2 + k_y^2}{4\alpha}\right) \)
8. Constant of unity \( \leftrightarrow \) delta function: \( 1 \leftrightarrow 4\pi^2\delta(k_x, k_y) \), where \( \delta(x, y) = \dfrac{1}{4\pi^2}\iint \exp(\,jk_x x + jk_y y)\,dk_x\,dk_y \)
9. Chirp function: \( \exp[-ja(x^2 + y^2)] \leftrightarrow \dfrac{-j\pi}{a}\exp\!\left(\,j\,\dfrac{k_x^2 + k_y^2}{4a}\right) \)
Hence the complex amplitude ψ_p(x, y; z) is given by the inverse Fourier transform
of Eq. (1.25):

\[ \psi_p(x, y; z) = \frac{1}{4\pi^2}\iint \Psi_{p0}(k_x, k_y)\exp\left(-jk_0\sqrt{1 - k_x^2/k_0^2 - k_y^2/k_0^2}\;z\right)\exp(-jk_x x - jk_y y)\,dk_x\,dk_y. \tag{1.27} \]

The physical meaning of the above integral is that we first recognize a plane wave
propagating with propagation vector k₀, as illustrated in Fig. 1.1. The complex
amplitude of the plane wave, according to Eq. (1.12), is given by

\[ A\exp(-jk_{0x}x - jk_{0y}y - jk_{0z}z). \tag{1.29} \]

The field distribution at z = 0, or the plane wave component, is given by
exp(−jk₀ₓx − jk₀ᵧy). Comparing the above equation with Eq. (1.28) and recognizing
that the spatial radian frequency variables k_x and k_y of the field distribution
ψ_p0(x, y) are k₀ₓ and k₀ᵧ of the plane wave in Eq. (1.29), Ψ_p0(k_x, k_y) is called
the angular plane wave spectrum of the field distribution ψ_p0(x, y). Therefore,
Ψ_p0(k_x, k_y)exp(−jk_x x − jk_y y) is the plane wave component with amplitude
Ψ_p0(k_x, k_y), and by summing over various directions of k_x and k_y, we have the
field distribution ψ_p0(x, y) at z = 0 given by Eq. (1.28). To find the field
distribution a distance z away, we simply let the
various plane wave components propagate over a distance z, which means acquiring
a phase shift of exp(−jk_z z), or exp(−jk₀_z z) by noting that the variable k_z is k₀_z of
the plane wave, so that we have

\[ \psi_p(x, y; z) = \frac{1}{4\pi^2}\iint \Psi_{p0}(k_x, k_y)\exp(-jk_x x - jk_y y - jk_z z)\,dk_x\,dk_y = \mathcal{F}^{-1}\{\Psi_{p0}(k_x, k_y)\exp(-jk_{0z}z)\}. \tag{1.30} \]

Note that \( k_0 = \sqrt{k_{0x}^2 + k_{0y}^2 + k_{0z}^2} \) and hence \( k_z = k_{0z} = k_0\sqrt{1 - k_x^2/k_0^2 - k_y^2/k_0^2} \),
and with this relation in Eq. (1.29), we immediately recover Eq. (1.27) and provide
physical meaning to the equation. Note that we have kept the + sign in the above
relation to represent waves traveling in the positive z-direction. In addition, for
propagation of plane waves, \( 1 - k_x^2/k_0^2 - k_y^2/k_0^2 \ge 0 \), or \( k_x^2 + k_y^2 \le k_0^2 \). If the reverse
is true, i.e., \( k_x^2 + k_y^2 > k_0^2 \), we have evanescent waves.
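As a numerical illustration of Eqs. (1.25) and (1.30), the following MATLAB fragment propagates an initial field by multiplying its spectrum by the transfer function and suppressing the evanescent components. It is a minimal sketch in the spirit of the angular spectrum method discussed in Chapter 4; all grid values are illustrative assumptions, and MATLAB's fft2 uses the opposite sign convention to Eq. (1.22), which does not affect the displayed modulus.

M = 512; dx = 5e-6;                         % assumed number of samples and pitch
lambda = 0.6e-6; k0 = 2*pi/lambda; z = 0.05;
x = (-M/2:M/2-1)*dx;
[X, Y] = meshgrid(x, x);
psi0 = double(X.^2 + Y.^2 < (0.25e-3)^2);   % circular aperture of radius 0.25 mm
kx = (-M/2:M/2-1)*2*pi/(M*dx);              % spatial radian frequencies
[KX, KY] = meshgrid(kx, kx);
arg = 1 - (KX/k0).^2 - (KY/k0).^2;
arg(arg < 0) = 0;                           % evanescent components suppressed
H = exp(-1i*k0*sqrt(arg)*z);                % transfer function from Eq. (1.25)
PSI = fftshift(fft2(fftshift(psi0)));
psi_z = fftshift(ifft2(fftshift(PSI.*H)));
figure; imshow(mat2gray(abs(psi_z)))
title('Modulus of the propagated field')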
H(k_x, k_y; z) is called the spatial frequency transfer function in Fourier optics [1].
The transfer function is simply the paraxial approximation to \( \mathcal{H}(k_x, k_y; z) \). The
inverse Fourier transform of H(k_x, k_y; z) is known as the spatial impulse response
in Fourier optics, h(x, y; z) [1]:
\[ h(x, y; z) = \mathcal{F}^{-1}\{H(k_x, k_y; z)\} = \exp(-jk_0 z)\,\frac{jk_0}{2\pi z}\exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right]. \tag{1.34} \]
To find the inverse transform of the above equation, we have used transform pair
number 13 in Table 1.1. We can express Eq. (1.32) in terms of the convolution
integral by using transform pair number 5:

\[ \psi_p(x, y; z) = \psi_{p0}(x, y) * h(x, y; z) = \exp(-jk_0 z)\,\frac{jk_0}{2\pi z}\iint \psi_{p0}(x', y')\exp\left\{\frac{-jk_0}{2z}\left[(x - x')^2 + (y - y')^2\right]\right\}dx'\,dy'. \tag{1.35} \]

Equation (1.35) is called the Fresnel diffraction formula and describes the Fresnel
diffraction of a beam during propagation which has an initial complex amplitude
given by ψ_p0(x, y).
If we wish to calculate the diffraction pattern at a distance far away from the
aperture, Eq. (1.35) can be simplified. To see how, let us complete the square in the
exponential function and then re-write Eq. (1.35) as

\[ \psi_p(x, y; z) = \exp(-jk_0 z)\,\frac{jk_0}{2\pi z}\exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right]\iint \psi_{p0}(x', y')\exp\left[\frac{-jk_0}{2z}(x'^2 + y'^2)\right]\exp\left[\frac{jk_0}{z}(xx' + yy')\right]dx'\,dy'. \tag{1.36} \]
In terms of the Fourier transform, we can write the Fresnel diffraction formula as
follows:

\[ \psi_p(x, y; z) = \exp(-jk_0 z)\,\frac{jk_0}{2\pi z}\exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right]\mathcal{F}\left\{\psi_{p0}(x, y)\exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right]\right\}\bigg|_{k_x = k_0 x/z,\; k_y = k_0 y/z}. \tag{1.37} \]
In the integral shown in Eq. (1.36), ψ_p0 is considered the source, and therefore
the coordinates x′ and y′ define the source plane. In order to find the field
distribution ψ_p on the observation plane z away, or on the x–y plane, we need to
multiply the source by the two exponential functions shown inside the integrand
of Eq. (1.36) and then integrate over the source coordinates. The result of
the integration is then multiplied by the factor \( \exp(-jk_0 z)\,(jk_0/2\pi z)\exp[-jk_0(x^2 + y^2)/2z] \)
to arrive at the final result on the observation plane given by Eq. (1.36).
Figure 1.4 (a) t(x, y) is a diffracting screen in the form of circ(r/r₀), r₀ = 0.5 mm.
(b) Fresnel diffraction at z = 7 cm, |ψ_p(x, y; z = 7 cm)|. (c) Fresnel diffraction
at z = 9 cm, |ψ_p(x, y; z = 9 cm)|. See Table 1.2 for the MATLAB code.
Table 1.2 MATLAB code for Fresnel diffraction of a circular aperture, see Fig. 1.4
% lambda (wavelength), z (distance), OB (aperture), and RR (radial grid)
% are defined earlier in the listing.
QP = exp(1i*pi/lambda/z.*(RR.^2));        % quadratic phase factor of Eq. (1.37)
FD = fftshift(fft2(fftshift(OB.*QP)));    % Fresnel diffraction via a single FFT
FD = abs(FD);
FD = FD/max(max(FD));                     % normalize the modulus for display
figure; imshow(OB);
title('Circular aperture')
figure; imshow(FD);
title('Modulus of the Fresnel diffraction pattern')
The term [(x′)² + (y′)²]_max is like the maximum area of the source, and if this area
divided by the wavelength is much less than the distance z under consideration, the
term exp{(−jk₀/2z)[(x′)² + (y′)²]} inside the integrand can be considered to be
unity, and hence Eq. (1.36) becomes

\[ \psi_p(x, y; z) = \exp(-jk_0 z)\,\frac{jk_0}{2\pi z}\exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right]\iint \psi_{p0}(x', y')\exp\left[\frac{jk_0}{z}(xx' + yy')\right]dx'\,dy'. \tag{1.39} \]
Equation (1.39) is the Fraunhofer diffraction formula and is the limiting case of
Fresnel diffraction. Equation (1.39) is therefore called the Fraunhofer approximation
or the far eld approximation as diffraction is observed at a far distance. In terms of
Fourier transform, we can write the Fraunhofer diffraction formula as follows:
Table 1.3 MATLAB code for Fraunhofer diffraction of a circular aperture, see Fig. 1.5
% M (number of samples), delta (sampling pitch), lambda, and z are set
% earlier in the listing; lambda = 0.6e-6 and z = 1 are the values quoted
% in the text.
c = 1:M;
r = 1:M;
[C, R] = meshgrid(c, r);
THOR = (((R-M/2-1).^2+(C-M/2-1).^2).^0.5)*delta;
OB = zeros(M);                  % Object
for a = 1:M
    for b = 1:M
        if THOR(a,b) < 5*10^-4  % aperture radius unit: m
            OB(a,b) = 1;
        end
    end
end
FD = fftshift(fft2(fftshift(OB)));
FD = abs(FD);
FD = FD/max(max(FD));
C = C*lambda*z/M/delta*1000;    % observation plane coordinates in mm
R = R*lambda*z/M/delta*1000;
\[ \psi_p(x, y; z) = \exp(-jk_0 z)\,\frac{jk_0}{2\pi z}\exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right]\mathcal{F}\{\psi_{p0}(x, y)\}\bigg|_{k_x = k_0 x/z,\; k_y = k_0 y/z}. \tag{1.40} \]
Figure 1.4 shows the simulation of Fresnel diffraction of a circular aperture function
circ(r/r₀), i.e., ψ_p0(x, y) = circ(r/r₀), where \( r = \sqrt{x^2 + y^2} \) and circ(r/r₀) denotes a
value 1 within a circle of radius r₀ and 0 otherwise. The wavelength used for the
simulation is 0.6 μm. Since ψ_p(x, y; z) is a complex function, we plot its absolute
value in the figures. Physically, the situation corresponds to the incidence of a plane
wave with unit amplitude on an opaque screen with a circular opening of radius
r₀, so that ψ_p0(x, y) = 1 × t(x, y) with t(x, y) = circ(r/r₀). We would then observe the
intensity pattern, which is proportional to |ψ_p(x, y; z)|², at a distance z away from the
aperture. In Fig. 1.5, we show Fraunhofer diffraction. We have chosen a distance
of 1 m so that the Fraunhofer approximation from Eq. (1.38) is satisfied.
\[ t_f(x, y) = \exp\left[\frac{jk_0}{2f}(x^2 + y^2)\right], \tag{1.41} \]
where we have assumed that the lens is of infinite extent. For a plane wave of
amplitude A incident upon the lens, we can employ the Fresnel diffraction formula
to calculate the field distribution in the back focal plane of the lens. Using
Eq. (1.37) for z = f, we have

\[ \psi_p(x, y; f) = \exp(-jk_0 f)\,\frac{jk_0}{2\pi f}\exp\left[\frac{-jk_0}{2f}(x^2 + y^2)\right]\mathcal{F}\left\{\psi_{p0}(x, y)\exp\left[\frac{-jk_0}{2f}(x^2 + y^2)\right]\right\}\bigg|_{k_x = k_0 x/f,\; k_y = k_0 y/f}, \tag{1.42} \]
where ψ_p0(x, y) is given by ψ_p0(x, y) = A × t(x, y), the amplitude of the incident
plane wave multiplied by the transparency function of the aperture. In the present
case, the transparency function of the aperture is given by the lens function t_f(x, y),
i.e., t(x, y) = t_f(x, y). Hence ψ_p0(x, y) = A × t(x, y) = A × t_f(x, y). The field
distribution f away from the lens, according to Eq. (1.37), is then given by

\[ \psi_p(x, y; f) = \exp(-jk_0 f)\,\frac{jk_0}{2\pi f}\exp\left[\frac{-jk_0}{2f}(x^2 + y^2)\right]\mathcal{F}\left\{A\exp\left[\frac{jk_0}{2f}(x^2 + y^2)\right]\exp\left[\frac{-jk_0}{2f}(x^2 + y^2)\right]\right\}\bigg|_{k_x = k_0 x/f,\; k_y = k_0 y/f} \propto \delta(x, y). \tag{1.43} \]
We see that the lens phase function cancels out exactly the quadratic phase function
associated with Fresnel diffraction, giving the Fourier transform of the constant A,
which is proportional to a delta function, δ(x, y). This is consistent with geometrical
optics, which states that all input rays parallel to the optical axis converge behind the
lens to a point called the back focal point. The discussion thus far in a sense justifies
the functional form of the phase function of the lens given by Eq. (1.41).
We now look at a more complicated situation, shown in Fig. 1.6, where a
transparency t(x, y), illuminated by a plane wave of unit amplitude, is located in
the front focal plane of the ideal thin lens.
We want to find the field distribution in the back focal plane. The field
immediately after t(x, y) is given by 1 × t(x, y). The resulting field is then
The above equation can be rearranged to become, apart from some constant,

\[ \psi_p(x, y) \propto \mathcal{F}\{t(x, y)\}\big|_{k_x = k_0 x/f,\; k_y = k_0 y/f} = T\!\left(\frac{k_0 x}{f}, \frac{k_0 y}{f}\right), \tag{1.45} \]

where T(k₀x/f, k₀y/f) is the Fourier transform, or the spectrum, of t(x, y). We see that
we have the exact Fourier transform of the input, t(x, y), on the back focal plane
of the lens. Hence an ideal thin lens is an optical Fourier transformer.
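Numerically, the Fourier transforming property of Eq. (1.45) amounts to a single two-dimensional FFT of the input transparency, with the back focal plane coordinate related to spatial frequency by x = f k_x/k₀. The short sketch below assumes an illustrative rectangular transparency and illustrative values of f, λ, and the sampling pitch.

M = 256; dx = 10e-6;                   % assumed grid size and pitch
lambda = 0.6e-6; k0 = 2*pi/lambda; f = 0.1;
t = zeros(M); t(109:148, 119:138) = 1; % a rectangular transparency
T = fftshift(fft2(fftshift(t)));       % spectrum T(kx, ky)
xf = (-M/2:M/2-1)*2*pi/(M*dx)*f/k0;    % focal plane coordinate x = f*kx/k0
figure; imshow(mat2gray(abs(T)))
title('Modulus of the field on the back focal plane')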
is T(k₀x/f, k₀y/f)p(x, y). According to Eq. (1.45) again, this field will be Fourier
transformed to give the field on the image plane as

\[ \psi_{pi}(x, y) = \mathcal{F}\left\{T\!\left(\frac{k_0 x}{f}, \frac{k_0 y}{f}\right)p(x, y)\right\}\bigg|_{k_x = k_0 x/f,\; k_y = k_0 y/f}, \tag{1.46} \]

where the negative sign in the argument of t(x, y) shows that the original input
has been flipped and inverted on the image plane. P is the Fourier transform of p.
From Eq. (1.47), we define h_c(x, y) as the coherent point spread function (CPSF)
in optics, which is given by [1]

\[ h_c(x, y) = \mathcal{F}\{p(x, y)\}\big|_{k_x = k_0 x/f,\; k_y = k_0 y/f} = P\!\left(\frac{k_0 x}{f}, \frac{k_0 y}{f}\right). \tag{1.48} \]

By definition, the Fourier transform of the coherent point spread function is the
coherent transfer function (CTF) given by [1]

\[ H_c(k_x, k_y) = \mathcal{F}\{h_c(x, y)\} = \mathcal{F}\left\{P\!\left(\frac{k_0 x}{f}, \frac{k_0 y}{f}\right)\right\} \propto p\!\left(\frac{-f k_x}{k_0}, \frac{-f k_y}{k_0}\right). \tag{1.49} \]
The expression given by Eq. (1.47) can be interpreted as the flipped and inverted
image of t(x, y) being processed by the coherent point spread function given by
Eq. (1.48). Therefore, image processing capabilities can be varied by simply
designing the pupil function, p(x, y). Alternatively, we can interpret this in the
spatial frequency domain: the spatial filtering is proportional to the functional form
of the pupil function, as evidenced by Eq. (1.46) together with Eq. (1.49). Indeed,
Eq. (1.46) is the backbone of so-called coherent image processing in optics [1].
Let us look at an example. If we take p(x, y) = 1, this means that we do not
modify the spectrum of the input image, according to Eq. (1.46). Or, from
Eq. (1.49), the coherent transfer function becomes unity, i.e., we have all-pass
filtering for all spatial frequencies of the input image. Mathematically, using
Eq. (1.48) and item number 8 of Table 1.1, h_c(x, y) becomes

\[ h_c(x, y) = \mathcal{F}\{1\}\big|_{k_x = k_0 x/f,\; k_y = k_0 y/f} = 4\pi^2\delta\!\left(\frac{k_0 x}{f}, \frac{k_0 y}{f}\right) = 4\pi^2\left(\frac{f}{k_0}\right)^2\delta(x, y) \propto \delta(x, y). \tag{1.50} \]

To obtain the last step of the result in Eq. (1.50), we have used the properties of
δ(x, y) in Table 1.4.
If we now take p(x, y) = circ(r/r₀), from the interpretation of Eq. (1.49) we see
that, for this kind of chosen pupil, the filtering is of lowpass characteristic as the
opening of the circle on the pupil plane only allows the low spatial frequencies to
physically go through. Figure 1.8 shows examples of lowpass filtering. In Fig. 1.8(a)
and 1.8(b), we show the original image and its spectrum, respectively. In
Fig. 1.8(c) and 1.8(e) we show the filtered images, and the lowpass filtered spectra are
shown in Fig. 1.8(d) and 1.8(f), respectively, where the lowpass filtered spectra are
obtained by multiplying the original spectrum by circ(r/r₀) [see Eq. (1.46)]. Note that
the radius r₀ in Fig. 1.8(d) is larger than that in Fig. 1.8(f). In Fig. 1.9, we show
highpass filtering examples where we take p(x, y) = 1 − circ(r/r₀).
So far, we have discussed the use of coherent light, such as plane waves derived
from a laser, to illuminate t(x, y) in the optical system shown in Fig. 1.7. The
optical system is called a coherent optical system in that complex quantities are
Figure 1.8 Lowpass filtering examples: (a) original image, (b) spectrum of (a);
(c) and (e) lowpass images; (d) and (f) spectra of (c) and (e), respectively. See
Table 1.5 for the MATLAB code.
Figure 1.9 Highpass filtering examples: (a) original image, (b) spectrum of (a);
(c) and (e) highpass images; (d) and (f) spectra of (c) and (e), respectively. See
Table 1.6 for the MATLAB code.
Table 1.5 MATLAB code for lowpass filtering of an image, see Fig. 1.8

c = 1:512;
r = 1:512;
[C, R] = meshgrid(c, r);
CI = ((R-257).^2+(C-257).^2);
filter = zeros(512,512);
% produce a circular low-pass filter; the loop body below is a reconstruction
% and the cutoff radius is an assumed value
for a = 1:512
    for b = 1:512
        if CI(a,b) <= 30^2
            filter(a,b) = 1;
        end
    end
end
manipulated. Once we have found the complex field on the image plane given by
Eq. (1.47), the corresponding image intensity is

\[ I(x, y) = |\psi_{pi}(x, y)|^2, \]

which is the basis for coherent image processing. However, light from extended
sources, such as fluorescent tube lights, is incoherent. The system shown in Fig. 1.7
becomes an incoherent optical system upon illumination from an incoherent source.
Table 1.6 MATLAB code for highpass filtering of an image, see Fig. 1.9

c = 1:512;
r = 1:512;
[C, R] = meshgrid(c, r);
CI = ((R-257).^2+(C-257).^2);
filter = zeros(512,512);
% produce a circular high-pass filter; the loop body below is a reconstruction
% and the stop-band radius is an assumed value
for a = 1:512
    for b = 1:512
        if CI(a,b) > 30^2
            filter(a,b) = 1;
        end
    end
end
Figure 1.10 Incoherent spatial filtering examples using p(x, y) = circ(r/r₀): (a)
original image, (b) spectrum of (a); (c) and (f) filtered images; (d) and (g) spectra
of (c) and (f), respectively; (e) and (h) cross sections through the center of the
OTF using different r₀ in the pupil function for the processed images in (c) and
(f), respectively. The full dimension along the horizontal axis contains 256 pixels
for figures (b) to (h), while figures (e) and (h) zoom in on the peak with 30 pixels
plotted. See Table 1.7 for the MATLAB code.
require a bipolar point spread function. Novel incoherent image processing
techniques seek to realize bipolar point spread functions (see, for example, [3–6]).
The Fourier transform of the IPSF gives a transfer function known as the optical
transfer function (OTF) of the incoherent optical system:

\[ OTF(k_x, k_y) = \mathcal{F}\{|h_c(x, y)|^2\}. \tag{1.53} \]

Using Eq. (1.49), we can relate the coherent transfer function to the OTF as
follows:
Figure 1.11 Incoherent spatial filtering examples using p(x, y) = 1 − circ(r/r₀):
(a) original image, (b) spectrum of (a); (c) and (f) filtered images; (d) and (g)
spectra of (c) and (f), respectively; (e) and (h) cross sections through the center of
the OTF using different r₀ in the pupil function for the processed images of (c)
and (f), respectively. The full dimension along x contains 256 pixels for figures
(b) to (h). See Table 1.8 for the MATLAB code.
\[ OTF(k_x, k_y) = H_c(k_x, k_y) \otimes H_c(k_x, k_y), \tag{1.54} \]

where ⊗ denotes correlation [see Table 1.1]. The modulus of the OTF is called the
modulation transfer function (MTF), and it is important to note that

\[ |OTF(k_x, k_y)| \le |OTF(0, 0)|, \tag{1.55} \]
Table 1.7 MATLAB code for incoherent spatial filtering, circ(r/r₀), see Fig. 1.10

% SP, the spectrum of the input image, is computed earlier in the listing.
c = 1:512;
r = 1:512;
[C, R] = meshgrid(c, r);
CI = ((R-257).^2+(C-257).^2);
pup = zeros(512,512);
% produce a circular pupil
for a = 1:512
    for b = 1:512
        if CI(a,b) > 30^2          % pupil diameter 30 or 15
            pup(a,b) = 0;
        else
            pup(a,b) = 1;
        end
    end
end
h = ifft2(fftshift(pup));          % coherent PSF from the pupil
OTF = fftshift(fft2(h.*conj(h)));  % OTF as the FT of |h|^2, Eq. (1.53)
OTF = OTF/max(max(abs(OTF)));
G = abs(OTF.*SP);                  % filtered spectrum
G = G(129:384,129:384);
figure; imshow(30.*mat2gray(G));
title('Filtered spectrum')
I = abs(fftshift(ifft2(fftshift(OTF.*SP))));
figure; imshow(mat2gray(I));
title('Filtered image')
which states that the MTF always has a central maximum. This signifies that we
always have lowpass filtering characteristics regardless of the pupil function used
in an incoherent optical system. In Figs. 1.10 and 1.11, we show incoherent spatial
filtering results in an incoherent optical system [1] using p(x, y) = circ(r/r₀) and
p(x, y) = 1 − circ(r/r₀), respectively.
Table 1.8 MATLAB code for incoherent spatial filtering, 1 − circ(r/r₀), see Fig. 1.11

% SP, the spectrum of the input image, is computed earlier in the listing.
c = 1:512;
r = 1:512;
[C, R] = meshgrid(c, r);
CI = ((R-257).^2+(C-257).^2);
pup = zeros(512,512);
% produce a circular pupil
for a = 1:512
    for b = 1:512
        if CI(a,b) > 350^2         % pupil diameter 300 or 350
            pup(a,b) = 1;
        else
            pup(a,b) = 0;
        end
    end
end
h = ifft2(fftshift(pup));
OTF = fftshift(fft2(h.*conj(h)));
OTF = OTF/max(max(abs(OTF)));
G = abs(OTF.*SP);
G = G(129:384,129:384);
figure; imshow(30.*mat2gray(G));
title('Filtered spectrum')
I = abs(fftshift(ifft2(fftshift(OTF.*SP))));
figure; imshow(mat2gray(I));
title('Filtered image')
Problems
1.1 Starting from Maxwell's equations, (a) derive the wave equation for E in a
linear, homogeneous, and isotropic medium characterized by ε and μ, and (b)
do the same as in (a) but for H.
1.2 Verify the Fourier transform properties 2, 3 and 4 in Table 1.1.
1.3 Verify the Fourier transform pairs 5 and 6 in Table 1.1.
1.4 Verify the Fourier transform pairs 7, 8, 9, 10, and 11 in Table 1.1.
1.5 Assume that the solution to the three-dimensional wave equation in Eq. (1.11)
is given by ψ(x, y, z, t) = ψ_p(x, y; z)exp(jω₀t); verify that the Helmholtz
equation for ψ_p(x, y; z) is given by

\[ \frac{\partial^2\psi_p}{\partial x^2} + \frac{\partial^2\psi_p}{\partial y^2} + \frac{\partial^2\psi_p}{\partial z^2} + k_0^2\,\psi_p = 0, \]

where k₀ = ω₀/v.
1.6 Write down functions of the following physical quantities in Cartesian
coordinates (x, y, z).
(a) A plane wave on the x–z plane in free space. The angle between the
propagation vector and the z-axis is θ.
(b) A diverging spherical wave emitted from a point source at (x₀, y₀, z₀) under
the paraxial approximation.
1.7 A rectangular aperture described by the transparency function t(x, y) =
rect(x/x₀, y/y₀) is illuminated by a plane wave of unit amplitude. Determine
the complex field, ψ_p(x, y; z), under Fraunhofer diffraction. Plot the intensity,
|ψ_p(x, y; z)|², along the x-axis and label all essential points along the axis.
1.8 Repeat P1.7 but with the transparency function given by

\[ t(x, y) = \mathrm{rect}\!\left(\frac{x - X/2}{x_0}, \frac{y}{y_0}\right) + \mathrm{rect}\!\left(\frac{x + X/2}{x_0}, \frac{y}{y_0}\right), \qquad X \gg x_0. \]
1.9 Assume that the pupil function in the 4-f image processing system in Fig. 1.7
is given by rect(x/x₀, y/y₀). (a) Find the coherent transfer function, (b) give an
expression for the optical transfer function and express it in terms of the
coherent transfer function, and (c) plot both of the transfer functions.
1.10 Repeat P1.9 but with the pupil function given by the transparency function in
P1.8.
1.11 Consider a grating with transparency function \( t(x, y) = \tfrac{1}{2} + \tfrac{1}{2}\cos(2\pi x/\Lambda) \),
where Λ is the period of the grating. Determine the complex field, ψ_p(x, y; z),
under Fresnel diffraction if the grating is normally illuminated by a unit
amplitude plane wave.
1.12 Consider the grating given in P1.11. Determine the complex field, ψ_p(x, y; z),
under Fraunhofer diffraction if the grating is normally illuminated by a unit
amplitude plane wave.
1.13 Consider the grating given in P1.11 as the input pattern in the 4-f image
processing system in Fig. 1.7. Assuming coherent illumination, find the
intensity distribution at the output plane when a small opaque stop is located
at the center of the Fourier plane.
References
1. T.-C. Poon and P. P. Banerjee, Contemporary Optical Image Processing with MATLAB (Elsevier, Oxford, UK, 2001).
2. T.-C. Poon and T. Kim, Engineering Optics with MATLAB (World Scientific, Hackensack, NJ, 2006).
3. A. W. Lohmann and W. T. Rhodes, Two-pupil synthesis of optical transfer functions, Applied Optics 17, 1141–1151 (1978).
4. W. Stoner, Incoherent optical processing via spatially offset pupil masks, Applied Optics 17, 2454–2467 (1978).
5. T.-C. Poon and A. Korpel, Optical transfer function of an acousto-optic heterodyning image processor, Optics Letters 4, 317–319 (1979).
6. G. Indebetouw and T.-C. Poon, Novel approaches of incoherent image processing with emphasis on scanning methods, Optical Engineering 31, 2159–2167 (1992).
2
Fundamentals of holography
holography combines parts of two Greek words: holos, meaning complete, and
graphein, meaning to write or to record. Thus, holography means the recording
of complete information. Hence, in the holographic process, the recording medium
records the original complex amplitude, i.e., both the amplitude and phase of the
complex amplitude of the object wave. The result of the recorded intensity
variations is now called a hologram. When a hologram is properly illuminated at
a later time, our eyes observe the intensity generated by the same complex eld. As
long as the exact complex eld is restored, we can observe the original complex
eld at a later time. The restored complex eld preserves the entire parallax and
depth information much like the original complex eld and is interpreted by our
brain as the same three-dimensional object.
scattered by the point object generates a diverging spherical wave toward the
recording medium. This diverging wave is known as an object wave in holography.
The plane wave that directly illuminates the recording medium is known as a
reference wave. Let ψ₀ represent the field distribution of the object wave on the
plane of the recording medium, and similarly let ψ_r represent the field distribution
of the reference wave on the plane of the recording medium. The recording
medium now records the interference of the reference wave and the object wave,
i.e., what is recorded is given by |ψ₀ + ψ_r|², provided the reference wave and the
object wave are mutually coherent over the recording medium. The coherence of
the light waves is guaranteed by the use of a laser source (we will discuss
coherence in Section 2.4). This kind of recording is known as holographic
recording, distinct from a photographic recording in that the reference wave does
not exist and hence only the object wave is recorded.
We shall discuss holographic recording of a point source object mathematically.
Let us consider the recording of a point object at a distance z₀ from the recording
medium, as shown in Fig. 2.1. The pinhole aperture is modeled as a delta function,
δ(x, y), which gives rise to an object wave, ψ₀, according to Fresnel diffraction
[see Eq. (1.35)], on the recording medium as

\[ \psi_0(x, y; z_0) = \delta(x, y) * h(x, y; z_0) = \exp(-jk_0 z_0)\,\frac{jk_0}{2\pi z_0}\exp\left[\frac{-jk_0}{2z_0}(x^2 + y^2)\right]. \tag{2.2} \]
This object wave is a paraxial spherical wave. For the reference plane wave, we
assume that the plane wave has the same initial phase as the point object at a
distance z₀ away from the recording medium. Therefore, its field distribution on the
recording medium is ψ_r = a exp(−jk₀z₀), where a, considered real for simplicity
here, is the amplitude of the plane wave. Hence, the recorded intensity distribution
on the recording medium, or the hologram, with amplitude transmittance, is given by

\[ t(x, y) = |\psi_r + \psi_0|^2 = \left|a\exp(-jk_0 z_0) + \exp(-jk_0 z_0)\,\frac{jk_0}{2\pi z_0}\exp\left[\frac{-jk_0}{2z_0}(x^2 + y^2)\right]\right|^2 \]

or

\[ t(x, y) = a^2 + \left(\frac{k_0}{2\pi z_0}\right)^2 - a\,\frac{jk_0}{2\pi z_0}\exp\left[\frac{jk_0}{2z_0}(x^2 + y^2)\right] + a\,\frac{jk_0}{2\pi z_0}\exp\left[\frac{-jk_0}{2z_0}(x^2 + y^2)\right]. \tag{2.3} \]
Note that the last term, which is really the desirable term of the equation, is the total
complex field of the original object wave [see Eq. (2.2)], apart from the constant
terms a and exp(−jk₀z₀). Now, Eq. (2.3) can be simplified to a real function and we
have a real hologram given by

\[ t(x, y) = A + B\sin\left[\frac{k_0}{2z_0}(x^2 + y^2)\right], \tag{2.4} \]

where A = a² + (k₀/2πz₀)² is some constant bias, and B = ak₀/πz₀. The expression
in Eq. (2.4) is often called the sinusoidal Fresnel zone plate (FZP), which is the
hologram of the point source object at a distance z = z₀ away from the recording
medium. Plots of the FZP are shown in Fig. 2.2, where we have set k₀ to some
constant and plotted for z = z₀ and z = 2z₀.
When we investigate the quadratic spatial dependence of the FZP, we notice that
the spatial rate of change of the phase of the FZP, say along the x-direction, is
given by

\[ f_{local}(x) = \frac{1}{2\pi}\frac{d}{dx}\left(\frac{k_0 x^2}{2z_0}\right) = \frac{k_0 x}{2\pi z_0}. \tag{2.5} \]

This is a local fringe frequency that increases linearly with the spatial coordinate, x. In
other words, the further we are away from the center of the zone, the higher the local
spatial frequency, which is obvious from Fig. 2.2. Note also from the figure that when we
double the z value, say from z = z₀ to z = 2z₀, the local fringes become less dense, as
evident from Eq. (2.5) as well. Hence the local frequency carries the information on z,
i.e., from the local frequency we can deduce how far the object point source is away
from the recording medium, an important aspect of holography.
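The following few lines of MATLAB plot the sinusoidal FZP of Eq. (2.4) for z = z₀ and z = 2z₀, illustrating the halving of the local fringe frequency with a doubling of depth; A and B are set to 1/2 for display, and the grid values and distances are illustrative assumptions.

M = 512; dx = 10e-6;                  % assumed grid and pitch
lambda = 0.6e-6; k0 = 2*pi/lambda; z0 = 0.1;
x = (-M/2:M/2-1)*dx;
[X, Y] = meshgrid(x, x);
t1 = 0.5 + 0.5*sin(k0*(X.^2 + Y.^2)/(2*z0));    % FZP for a point at z0
t2 = 0.5 + 0.5*sin(k0*(X.^2 + Y.^2)/(2*2*z0));  % FZP for a point at 2*z0
figure; imshow(t1); title('FZP, z = z_0')
figure; imshow(t2); title('FZP, z = 2z_0')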
To reconstruct the original light field from the hologram, t(x, y), we can simply
illuminate the hologram with a plane wave ψ_rec, called the reconstruction wave in
holography. The field at a distance z behind the hologram is then given by

\[ \psi_{rec}\,t(x, y) * h(x, y; z). \tag{2.6} \]
Evaluation of the above equation gives three light fields emerging from the
hologram. The light field due to the first term in the hologram is a plane wave, as
ψ_rec A * h(x, y; z) ∝ ψ_rec A, which makes sense as the plane wave propagates
without diffraction. This out-going plane wave is called a zeroth-order beam in
holography, which provides a uniform output at the observation plane. In the
present analysis the interference is formed using a paraxial spherical wave and a
plane wave, so the zeroth-order beam is uniform. However, if the object light is
not a uniform field, the zeroth-order beam will not be uniform. Now, the field due
to the second term is

\[ \psi_{rec}\,a\,\frac{-jk_0}{2\pi z_0}\exp\left[\frac{jk_0}{2z_0}(x^2 + y^2)\right] * h(x, y; z) \propto \exp\left[\frac{jk_0}{2z_0}(x^2 + y^2)\right] * \exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right] \propto \exp\left[\frac{jk_0}{2(z_0 - z)}(x^2 + y^2)\right]. \tag{2.7} \]
This is a converging spherical wave if z < z₀. However, when z > z₀, we have a
diverging wave. For z = z₀, the wave focuses to a real point source z₀ away from
the hologram and is given by a delta function, δ(x, y). Finally, for the last term
in the equation, we have

\[ \psi_{rec}\,a\,\frac{jk_0}{2\pi z_0}\exp\left[\frac{-jk_0}{2z_0}(x^2 + y^2)\right] * h(x, y; z) \propto \exp\left[\frac{-jk_0}{2z_0}(x^2 + y^2)\right] * \exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right] \propto \exp\left[\frac{-jk_0}{2(z_0 + z)}(x^2 + y^2)\right], \tag{2.8} \]

and we have a diverging wave with its virtual point source at a distance z₀ behind
the hologram, on the opposite side to the observer. This reconstructed point source
is at the exact location of the original point source object.
Figure 2.3 Reconstruction of a FZP with the existence of the twin image (which
is the real image reconstructed in the figure).
The situation is illustrated in Fig. 2.3. The reconstructed real point source is called
the twin image of the virtual point source.
Although both the virtual image and the real image exhibit the depth of the
object, the virtual image is usually used for applications of three-dimensional
display. For the virtual image, the observer will see a reconstructed image with
the same perspective as the original object. For the real image, the reconstructed
image is a mirror and inside-out image of the original object, as shown in Fig. 2.4.
This type of image is called the pseudoscopic image, while the image with normal
perspective is called the orthoscopic image. Because the pseudoscopic image
cannot provide natural parallax to the observer, it is not suitable for three-
dimensional display.
Let us now see what happens if we have two point source objects given by
δ(x, y) + δ(x − x₁, y − y₁), located z₀ away from the recording medium.
The object wave now becomes

\[ \psi_0(x, y; z_0) = \left[b_0\,\delta(x, y) + b_1\,\delta(x - x_1, y - y_1)\right] * h(x, y; z_0), \tag{2.9} \]

where b₀ and b₁ denote the amplitudes of the two point sources. The hologram now
becomes
\[ t(x, y) = |\psi_r + \psi_0(x, y; z_0)|^2 = \left|a\exp(-jk_0 z_0) + b_0\exp(-jk_0 z_0)\,\frac{jk_0}{2\pi z_0}\exp\left[\frac{-jk_0}{2z_0}(x^2 + y^2)\right] + b_1\exp(-jk_0 z_0)\,\frac{jk_0}{2\pi z_0}\exp\left\{\frac{-jk_0}{2z_0}\left[(x - x_1)^2 + (y - y_1)^2\right]\right\}\right|^2. \tag{2.10} \]

Again, the above expression can be put in a real form, i.e., we have

\[ t(x, y) = C + \frac{ab_0 k_0}{\pi z_0}\sin\left[\frac{k_0}{2z_0}(x^2 + y^2)\right] + \frac{ab_1 k_0}{\pi z_0}\sin\left\{\frac{k_0}{2z_0}\left[(x - x_1)^2 + (y - y_1)^2\right]\right\} + 2b_0 b_1\left(\frac{k_0}{2\pi z_0}\right)^2\cos\left[\frac{k_0}{2z_0}\left(x_1^2 + y_1^2 - 2xx_1 - 2yy_1\right)\right]. \tag{2.11} \]
Figure 2.5 Recording geometry for the two point objects, 1 and 2. The reference
point source is labeled R.
With reference to Fig. 2.5, the fields on the recording medium due to point object 1, point object 2, and the reference point source R are

\[ \psi_{p1}(x, y) = \delta\!\left(x + \frac{h}{2},\, y\right) * h(x, y; R) \propto \exp\left\{\frac{-jk_0}{2R}\left[\left(x + \frac{h}{2}\right)^2 + y^2\right]\right\}, \tag{2.12} \]

\[ \psi_{p2}(x, y) = \delta\!\left(x - \frac{h}{2},\, y\right) * h(x, y; R + d) \propto \exp\left\{\frac{-jk_0}{2(R + d)}\left[\left(x - \frac{h}{2}\right)^2 + y^2\right]\right\}, \tag{2.13} \]

and

\[ \psi_{pR}(x, y) = \delta(x - a, y) * h(x, y; l_1) \propto \exp\left\{\frac{-jk_0}{2l_1}\left[(x - a)^2 + y^2\right]\right\}. \tag{2.14} \]
These spherical waves interfere on the recording medium to yield a hologram
given by

\[ t(x, y) = |\psi_{p1}(x, y) + \psi_{p2}(x, y) + \psi_{pR}(x, y)|^2 = \left[\psi_{p1}(x, y) + \psi_{p2}(x, y) + \psi_{pR}(x, y)\right]\left[\psi_{p1}(x, y) + \psi_{p2}(x, y) + \psi_{pR}(x, y)\right]^*. \tag{2.15} \]
Expanding Eq. (2.15), the relevant terms are

\[ t_{rel1}(x, y) = \psi_{p1}^*(x, y)\,\psi_{pR}(x, y) = \exp\left\{\frac{jk_0}{2R}\left[\left(x + \frac{h}{2}\right)^2 + y^2\right]\right\}\exp\left\{\frac{-jk_0}{2l_1}\left[(x - a)^2 + y^2\right]\right\}, \tag{2.16a} \]

\[ t_{rel2}(x, y) = \psi_{p2}^*(x, y)\,\psi_{pR}(x, y) = \exp\left\{\frac{jk_0}{2(R + d)}\left[\left(x - \frac{h}{2}\right)^2 + y^2\right]\right\}\exp\left\{\frac{-jk_0}{2l_1}\left[(x - a)^2 + y^2\right]\right\}, \tag{2.16b} \]

\[ t_{rel3}(x, y) = \psi_{p1}(x, y)\,\psi_{pR}^*(x, y) = t_{rel1}^*(x, y), \tag{2.16c} \]

\[ t_{rel4}(x, y) = \psi_{p2}(x, y)\,\psi_{pR}^*(x, y) = t_{rel2}^*(x, y). \tag{2.16d} \]
Note that t_rel3(x, y) and t_rel4(x, y) contain the original wavefronts ψ_p1(x, y) and
ψ_p2(x, y) of points 1 and 2, respectively, and upon reconstruction they give rise to
virtual images, as shown in the last section for a single point object. However,
t_rel1(x, y) and t_rel2(x, y) contain the complex conjugates ψ*_p1(x, y) and ψ*_p2(x, y)
of the original complex amplitudes of points 1 and 2, respectively, and upon
reconstruction they give rise to real images. We shall now show how these
reconstructions come about mathematically for spherical reference recording and
reconstruction.
The reconstruction geometry for real images is shown in Fig. 2.6, where the
hologram just constructed is illuminated with a reconstruction spherical wave from
Figure 2.6 Reconstruction geometry for the two point objects, 1 and 2. The
reconstruction point source is labeled r.
a point source labeled r. For simplicity, we assume that the wavelength of the
reconstruction wave is the same as that of the waves used in the recording process.
Hence the complex field, ψ_pr(x, y), illuminating the hologram is

\[ \psi_{pr}(x, y) = \delta(x + b, y) * h(x, y; l_2) \propto \exp\left\{\frac{-jk_0}{2l_2}\left[(x + b)^2 + y^2\right]\right\}. \tag{2.17} \]
We find the total complex field immediately behind (away from the source) the
hologram by multiplying Eq. (2.17) by Eq. (2.15), but the reconstructions due to
the relevant terms are

\[ \psi_{pr}(x, y)\,t_{reli}(x, y), \qquad i = 1, 2, 3, 4. \tag{2.18} \]

For the term involving t_rel1, the field at a distance z behind the hologram is

\[ \psi_{pr}(x, y)\,t_{rel1}(x, y) * h(x, y; z) \propto \exp\left\{\frac{-jk_0}{2l_2}\left[(x + b)^2 + y^2\right]\right\}\exp\left\{\frac{jk_0}{2R}\left[\left(x + \frac{h}{2}\right)^2 + y^2\right]\right\}\exp\left\{\frac{-jk_0}{2l_1}\left[(x - a)^2 + y^2\right]\right\} * \frac{jk_0}{2\pi z}\exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right]. \tag{2.19} \]
From the definition of the convolution integral [see Table 1.1], we perform the
integration by writing the functions involved with new independent variables
(x′, y′) and (x − x′, y − y′). We can then equate the coefficients of x′², y′² appearing
in the exponents to zero, thus leaving only linear terms in x′, y′. Doing this for
Eq. (2.19), we have

\[ \frac{1}{R} - \frac{1}{l_1} - \frac{1}{l_2} - \frac{1}{z_{r1}} = 0, \tag{2.20} \]

where we have replaced z by z_r1; z_r1 is the distance of the real image reconstruction
of point object 1 behind the hologram. We can solve for z_r1 to get

\[ z_{r1} = \left(\frac{1}{R} - \frac{1}{l_1} - \frac{1}{l_2}\right)^{-1} = \frac{R\,l_1 l_2}{l_1 l_2 - R\,l_1 - R\,l_2}. \tag{2.21} \]
Carrying out the integration, we find

\[ \psi_{pr}(x, y)\,t_{rel1}(x, y) * h(x, y; z_{r1}) \propto \delta\left(x - z_{r1}\left(\frac{b}{l_2} - \frac{h}{2R} - \frac{a}{l_1}\right),\; y\right), \tag{2.22} \]

which is a δ-function shifted in the lateral direction and is a real image of point
object 1. The image is located z_r1 away from the hologram and at

\[ x = x_1 = z_{r1}\left(\frac{b}{l_2} - \frac{h}{2R} - \frac{a}{l_1}\right), \qquad y = y_1 = 0. \]
As for the reconstruction due to the relevant term ψ_pr(x, y)t_rel2(x, y) in the
hologram, we have

\[ \psi_{pr}(x, y)\,t_{rel2}(x, y) * h(x, y; z) \propto \exp\left\{\frac{-jk_0}{2l_2}\left[(x + b)^2 + y^2\right]\right\}\exp\left\{\frac{jk_0}{2(R + d)}\left[\left(x - \frac{h}{2}\right)^2 + y^2\right]\right\}\exp\left\{\frac{-jk_0}{2l_1}\left[(x - a)^2 + y^2\right]\right\} * \frac{jk_0}{2\pi z}\exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right]. \tag{2.23} \]
A similar analysis reveals that this is also responsible for a real image
reconstruction, but for point object 2, expressible as

\[ \psi_{pr}(x, y)\,t_{rel2}(x, y) * h(x, y; z_{r2}) \propto \delta\left(x - z_{r2}\left(\frac{b}{l_2} + \frac{h}{2(R + d)} - \frac{a}{l_1}\right),\; y\right), \tag{2.24} \]

where

\[ z_{r2} = \left(\frac{1}{R + d} - \frac{1}{l_1} - \frac{1}{l_2}\right)^{-1} = \frac{(R + d)\,l_1 l_2}{l_1 l_2 - (R + d)\,l_1 - (R + d)\,l_2}. \]

Here, z_r2 is the distance of the real image reconstruction of point object 2 behind
the hologram, and the image point is located at

\[ x = x_2 = z_{r2}\left(\frac{b}{l_2} + \frac{h}{2(R + d)} - \frac{a}{l_1}\right), \qquad y = y_2 = 0. \]
The longitudinal distance between the two real image points is z_r2 − z_r1, so the
longitudinal magnification for R ≫ d is

\[ M_{r,Long} = \frac{z_{r2} - z_{r1}}{d} = \frac{(l_1 l_2)^2}{(l_1 l_2 - R\,l_1 - R\,l_2)^2}. \tag{2.26} \]
We find the lateral distance (along x) between the two image points 1 and 2 by
taking the difference between the locations of the two δ-functions in Eqs. (2.22)
and (2.24), so the lateral magnification is

\[ M_{r,Lat} = \frac{1}{h}\left[z_{r2}\left(\frac{b}{l_2} + \frac{h}{2(R + d)} - \frac{a}{l_1}\right) - z_{r1}\left(\frac{b}{l_2} - \frac{h}{2R} - \frac{a}{l_1}\right)\right] \approx \frac{1}{h}\left(\frac{b}{l_2} - \frac{a}{l_1}\right)(z_{r2} - z_{r1}) + \frac{z_{r2} + z_{r1}}{2R} \tag{2.27} \]

for R ≫ d. In order to make this magnification independent of the lateral
separation between the objects, h, we set

\[ \frac{b}{l_2} - \frac{a}{l_1} = 0, \]

or

\[ \frac{b}{l_2} = \frac{a}{l_1}. \tag{2.28} \]
Then, from Eq. (2.27), and again for the condition that R ≫ d,

\[ M_{r,Lat} = \frac{z_{r2} + z_{r1}}{2R} \approx \frac{l_1 l_2}{l_1 l_2 - R\,l_1 - R\,l_2}. \tag{2.29} \]

By comparing Eq. (2.26) and Eq. (2.29), we have the following important
relationship between the magnifications in three-dimensional imaging:

\[ M_{r,Long} = M_{r,Lat}^2. \tag{2.30} \]
From the above result, we see that the image is twisted for a three-dimensional
object. In practice we can remove the translational distortion by setting the
reconstruction point to satisfy Eq. (2.28). The distortion can also be removed by
setting a = b = 0. However, by doing so, we lose the separation of the real image,
the virtual image, and the zeroth-order diffraction, a situation reminiscent of what
we observed in Fig. 2.3, where we used plane waves for recording and
reconstruction, with both plane waves traveling along the same direction. We will
discuss this aspect more in Chapter 3.
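A quick numerical check of Eqs. (2.21), (2.26), and (2.29), with illustrative distances (all lengths in the same unit), confirms that the longitudinal magnification equals the square of the lateral one to first order in d:

R = 10; d = 0.1; l1 = 30; l2 = 30;    % assumed recording/reconstruction distances
zr1 = R*l1*l2/(l1*l2 - R*l1 - R*l2);
zr2 = (R+d)*l1*l2/(l1*l2 - (R+d)*l1 - (R+d)*l2);
M_long = (zr2 - zr1)/d                % Eq. (2.26): about 9.2
M_lat = (zr1 + zr2)/(2*R)             % Eq. (2.29): about 3.0, with M_lat^2 = M_long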
Table 2.1 MATLAB code for chromatic aberration calculation, see Fig. 2.8
% lambda0 (recording wavelength), lambdaR (reconstruction wavelengths),
% R, L, h, and a are set earlier in the listing.
z = lambda0*R*L./(lambdaR*(L-R)-lambda0*R);
dz = R*L/(L-2*R)-z;                % longitudinal chromatic aberration
plot(lambdaR,dz)
title('Longitudinal chromatic aberration')
xlabel('Reconstruction wavelength (nm)')
ylabel('{\delta}z (mm)')
x = -z.*(a/L-lambdaR/lambda0*h/R/2-lambdaR*a/lambda0/L);
dx = x-R*L/(L-2*R)*(h/2/R);        % transverse chromatic aberration
figure; plot(lambdaR,dx)
title('Transverse chromatic aberration')
xlabel('Reconstruction wavelength (nm)')
ylabel('{\delta}x (mm)')
Figure 2.8 (a) Longitudinal, and (b) transverse chromatic aberration distances
when the recording wavelength is λ₀ = 632 nm.
wavelength. In the next chapter we will see that holograms can be reconstructed
using white light in some specific kinds of geometries.
so that the fields will always produce interference. In this section, we give a brief
introduction to temporal and spatial coherence. In temporal coherence, we are
concerned with the ability of a light field to interfere with a time-delayed version of
itself. In spatial coherence, the ability of a light field to interfere with a spatially
shifted version of itself is considered.
where the time average is defined as

\[ \langle\,\cdot\,\rangle = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2}\cdot\;dt, \tag{2.40} \]

and A(r, t) and B(r, t) denote the optical fields to be superimposed. In the following
discussion we will first assume that the two light fields are from an infinitesimal,
quasi-monochromatic light source. We model the quasi-monochromatic light as
having a specific frequency ω₀ for a certain time interval, after which its phase
changes randomly. Thus at fixed r, A(t) and B(t) can be simply expressed as

\[ A(t) = A_0\exp\{\,j[\omega_0 t + \theta(t)]\}, \tag{2.41a} \]
\[ B(t) = B_0\exp\{\,j[\omega_0 t + \theta(t - \tau)]\}, \tag{2.41b} \]

where τ denotes the time delay due to the optical path difference between A(t) and
B(t), and θ(t) denotes the time-variant initial phase of the quasi-monochromatic
light. By substituting Eq. (2.41) into Eq. (2.39), we have

\[ I = A_0^2 + B_0^2 + 2A_0 B_0\,\mathrm{Re}\left\langle e^{\,j[\theta(t) - \theta(t - \tau)]}\right\rangle \tag{2.42} \]

because ⟨|A(t)|²⟩ = A₀² and ⟨|B(t)|²⟩ = B₀². In Eq. (2.42), the time-average integral
is the interference term, called the complex degree of coherence of the source,
which is denoted as
\[ \gamma(\tau) = \left\langle e^{\,j[\theta(t) - \theta(t - \tau)]}\right\rangle. \tag{2.43} \]
Figure 2.9 A quasi-monochromatic wave train: the initial phase θ(t) changes
randomly every time interval τ₀.
Many natural and artificial light sources have a monotonically decreasing |γ(τ)|,
starting from |γ(0)| = 1.
The modulus of γ(τ) is plotted in Fig. 2.10. It is shown that |γ(τ)| decreases with
τ and falls to zero when τ ≥ τ₀. By substituting Eq. (2.48) into Eq. (2.45), the
interferogram can be expressed as

\[ I = A_0^2 + B_0^2 + 2A_0 B_0\left(1 - \frac{\tau}{\tau_0}\right)\cos\omega_0\tau = A_0^2 + B_0^2 + 2A_0 B_0\left(1 - \frac{\tau}{\tau_0}\right)\cos(2\pi d/\lambda), \tag{2.49} \]
Figure 2.10 The modulus of the complex degree of coherence for
quasi-monochromatic light.
where d is the optical path difference corresponding to the time delay τ between
the two light waves, i.e., 2πd/λ = ω₀τ.
The width of the complex degree of coherence, τ₀, is called the coherence time.
If the time delay between the light waves involved in the interference is larger than
the coherence time, no fringes can be observed.
Finally, we can also define the coherence length l_c as

\[ l_c = c\,\tau_0, \tag{2.50} \]

where c is the speed of light in vacuum. In other words, the coherence length is the
path the light travels in the time interval τ₀. To ensure the success of interference,
the optical path difference in an interferometer must be smaller than the coherence
length.
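A short MATLAB sketch of Eq. (2.49) shows how the fringes wash out as the delay approaches the coherence time; A₀ = B₀ = 1 and a 1 ps coherence time are assumed purely for illustration.

lambda = 632.8e-9; c = 3e8; tau0 = 1e-12;   % assumed coherence time of 1 ps
tau = linspace(0, 2e-12, 20001);            % time delay
vis = max(1 - tau/tau0, 0);                 % triangular |gamma(tau)|
I = 2 + 2*vis.*cos(2*pi*c*tau/lambda);      % omega0*tau = 2*pi*c*tau/lambda
plot(tau*1e12, I)
xlabel('\tau (ps)'); ylabel('I(\tau)')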
\[ \langle E^*(t)E(t + \tau)\rangle = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} E^*(t)E(t + \tau)\,dt \propto E \otimes E. \tag{2.52} \]
From the correlation of the Fourier transform in Table 1.1, we have

\[ \mathcal{F}_\tau\{E \otimes E\} = \int (E \otimes E)\exp(-j\omega\tau)\,d\tau = |\mathcal{F}_\tau\{E\}|^2 = P_c(\omega), \tag{2.53} \]

where τ and ω are the time and the temporal radian frequency variables,
respectively. Note that the definitions of \( \mathcal{F}_\tau \) and \( \mathcal{F}_\tau^{-1} \) are different from those of
\( \mathcal{F} \) and \( \mathcal{F}^{-1} \) for the spatial functions defined in Eq. (1.22); namely, −j is used in the
exponential function of the forward transform for time functions. This is done
purposely to be consistent with the engineering convention for a traveling wave, in
which exp[j(ω₀t − k₀z)] denotes a plane wave traveling in the +z-direction.
However, we can still use Table 1.1 as long as we replace j by −j in the
transform pairs.
Let us return our attention to Eq. (2.53). The result of Eq. (2.53) is the Wiener–Khinchin
theorem, which states that the Fourier transform of the auto-correlation of
the light field is proportional to the power spectrum of the light source, P_c(ω).
P_c(ω) is the power spectrum of the complex field E, i.e., a field represented by a
complex quantity. It should be noted that P_c(ω) is not the same as P_r(ω), the
power spectrum of the real light field, where the light field is expressed in terms of a
real quantity. P_r(ω) includes both +ω and −ω components, while P_c(ω) contains
only the +ω components. As it turns out, the relation between P_c(ω) and P_r(ω) is
simple, namely [4]

\[ P_c(\omega) = \begin{cases} 4P_r(\omega) & \text{for } \omega > 0 \\ 0 & \text{for } \omega < 0. \end{cases} \]
The total contribution from all the frequencies to I(τ) then becomes

\[ I(\tau) = \int_0^\infty 2|A_0(\omega)|^2(1 - \cos\omega\tau)\,d\omega \propto 2\int_0^\infty P_r(\omega)\,d\omega - 2\int_0^\infty P_r(\omega)\cos\omega\tau\,d\omega \propto 2[I_0 - \Delta I(\tau)]. \tag{2.58} \]

As |A₀(ω)|² ∝ P_r(ω), the power spectrum of the source, the first term in the above
equation is a constant and simply proportional to the total intensity of the light
source, I₀. The second term, ΔI(τ), varies with the delay τ. Note that the power
spectrum is only given for positive frequencies, and by letting P_r(−ω) = P_r(ω), an
even function in ω, we can extend the second term of the equation to become

\[ \Delta I(\tau) = 2\int_0^\infty P_r(\omega)\cos\omega\tau\,d\omega = \int_{-\infty}^{\infty} P_r(\omega)\cos\omega\tau\,d\omega \propto \frac{1}{2\pi}\int_{-\infty}^{\infty} P_r(\omega)\exp(\,j\omega\tau)\,d\omega = \mathcal{F}_\tau^{-1}\{P_r(\omega)\}. \tag{2.59} \]
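The content of Eq. (2.59) is the basis of Fourier transform spectroscopy: sampling ΔI(τ) and Fourier transforming it recovers the power spectrum of the source. The sketch below assumes a Gaussian line [cf. Eq. (2.64)] centered on the He–Ne frequency; all numbers are illustrative.

N = 4096; dtau = 1e-16;                        % assumed delay sampling (s)
tau = (-N/2:N/2-1)*dtau;
omega0 = 2*pi*3e8/632.8e-9;                    % center frequency (rad/s)
sigma = 5e13;                                  % assumed line width parameter (rad/s)
DI = exp(-sigma^2*tau.^2/2).*cos(omega0*tau);  % Delta I, cf. Eq. (2.64b)
P = abs(fftshift(fft(fftshift(DI))));          % recovered two-sided spectrum
omega = (-N/2:N/2-1)*2*pi/(N*dtau);
plot(omega, P/max(P))
xlabel('\omega (rad/s)'); ylabel('Recovered P_r(\omega) (normalized)')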
where we have used Eq. (2.45) to obtain the last step of the above equation. Using
the model of partially coherent light as quasi-monochromatic light of finite size
wave trains with random initial phase, we have, from Eq. (2.48),
γ(τ) = (1 − τ/τ₀)e^{−jω₀τ}, and Eq. (2.61) becomes

\[ I(\tau) = 2\langle|E(t)|^2\rangle\left[1 + |\gamma(\tau)|\cos\omega_0\tau\right] = 2\langle|E(t)|^2\rangle\left[1 + \left(1 - \frac{\tau}{\tau_0}\right)\cos\omega_0\tau\right]. \tag{2.62} \]
The power spectrum, according to Eq. (2.55), is

\[ P_N(\omega) \propto \mathcal{F}_\tau\{\gamma(\tau)\} = \mathcal{F}_\tau\left\{\left(1 - \frac{|\tau|}{\tau_0}\right)e^{-j\omega_0\tau}\right\} = \tau_0\,\mathrm{sinc}^2\!\left[\frac{(\omega - \omega_0)\,\tau_0}{2}\right], \tag{2.63} \]

where we have used the transform pairs in Table 1.1, taking into account that we
are dealing with one-dimensional time functions. Note that the full width at half-maximum
(FWHM) of the power spectrum is related to τ₀ by Δω = 5.566/τ₀.
Other examples of spectra are the Gaussian spectrum and the rectangular spectrum.
For a Gaussian spectrum, we have

\[ P_N(\omega) \propto \exp\left[-\frac{(\omega - \omega_0)^2}{2\sigma^2}\right] \tag{2.64a} \]

with its complex degree of coherence given by

\[ \gamma(\tau) \propto \exp\left(-\frac{\sigma^2\tau^2}{2}\right)e^{-j\omega_0\tau}, \tag{2.64b} \]

where the FWHM of the power spectrum is related to σ by \( \Delta\omega = \sqrt{8\ln 2}\,\sigma \).
For a rectangular spectrum, we have

\[ P_N(\omega) \propto \mathrm{rect}\!\left(\frac{\omega - \omega_0}{\Delta\omega}\right) \tag{2.65a} \]
\[ l_c = \frac{\lambda^2}{\Delta\lambda}, \tag{2.68} \]

where Δλ is the spectral line width of the source and λ is the wavelength
corresponding to the center frequency ω₀ of the power spectrum. We can also relate the
spectral line width to the FWHM of the power spectrum as follows:

\[ \Delta\omega = \frac{2\pi c\,\Delta\lambda}{\lambda^2}. \tag{2.69} \]

White light has a line width of about 300 nm, ranging roughly from 400 to 700 nm,
and if we take the average wavelength at 550 nm, Eq. (2.68) gives l_c ≈ 1 μm, a very
short coherence length. LEDs have a spectral width of about 50 nm, giving a
coherence length of about 7 μm for red light with a wavelength of 0.6 μm. As for the
green line of mercury at 546 nm, having a line width of about 0.025 nm, its coherence
length is about 1.2 cm. Lasers typically have a long coherence length. Helium–neon
lasers can produce light with coherence lengths greater than 5 m, but 20 cm is
typical. Some industrial CO₂ lasers with a line width of around 10⁻⁵ nm, producing
emission at the infrared wavelength of 10.6 μm, would give a coherence length of
around 11 km.
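These coherence lengths follow directly from Eq. (2.68), as the following MATLAB lines verify (the line widths are the approximate values quoted above):

lambda  = [550e-9 600e-9 546e-9 10.6e-6];   % center wavelengths (m)
dlambda = [300e-9 50e-9 0.025e-9 1e-14];    % line widths (m)
lc = lambda.^2./dlambda                     % about 1 um, 7 um, 1.2 cm, and 11 km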
are the optical fields passing through point A and point B, respectively; E_1i(t) and
E_2i(t) are the components emitted from source 1 and source 2, respectively, where
the subscript i = A or B. According to the concept of temporal coherence
[Eq. (2.51)], we can define the complex degree of coherence between points A and B,
that is

\[ \gamma_{AB} = \frac{\langle E_A(t)E_B^*(t)\rangle}{\sqrt{\langle|E_A(t)|^2\rangle\,\langle|E_B(t)|^2\rangle}}, \tag{2.72} \]

which is called the complex degree of mutual coherence; γ(τ) of Eq. (2.51) is,
strictly speaking, the complex degree of self coherence.
Because source 1 and source 2 are completely incoherent, they cannot interfere
with each other, i.e., cross terms such as ⟨E_1A(t)E*_2B(t)⟩ = 0, ⟨E_2A(t)E*_1B(t)⟩ = 0,
etc. Thus we have the following results:

\[ \langle E_A(t)E_B^*(t)\rangle = \langle E_{1A}(t)E_{1B}^*(t)\rangle + \langle E_{2A}(t)E_{2B}^*(t)\rangle, \]
\[ \langle|E_A|^2\rangle = \langle|E_{1A}|^2\rangle + \langle|E_{2A}|^2\rangle, \]
\[ \langle|E_B|^2\rangle = \langle|E_{1B}|^2\rangle + \langle|E_{2B}|^2\rangle. \]

Suppose that ⟨|E_1A|²⟩ = ⟨|E_1B|²⟩ = ⟨|E_2A|²⟩ = ⟨|E_2B|²⟩ for simplicity; then we
can rewrite Eq. (2.72) as

\[ \gamma_{AB} = \frac{\langle E_{1A}(t)E_{1B}^*(t)\rangle + \langle E_{2A}(t)E_{2B}^*(t)\rangle}{2\,\langle|E_{1A}(t)|^2\rangle}. \tag{2.73} \]
Because both E_1A(t) and E_1B(t) come from the same source, their relationship must
be E_1B(t) = E_1A(t − τ₁); similarly, we will also have E_2B(t) = E_2A(t − τ₂), where τ₁
and τ₂ are the time delays between the waves at point A and point B, respectively,
from source 1 and source 2. As a result, the complex degree of coherence γ_AB can
be expressed as

\[ \gamma_{AB} = \frac{\langle E_{1A}(t)E_{1A}^*(t - \tau_1)\rangle + \langle E_{2A}(t)E_{2A}^*(t - \tau_2)\rangle}{2\,\langle|E_{1A}(t)|^2\rangle} = \frac{1}{2}\gamma_1(\tau_1) + \frac{1}{2}\gamma_2(\tau_2). \tag{2.74} \]

Equation (2.74) shows that γ_AB depends on the complex degree of coherence of the
two source points.
If the light emitted from the two point sources is quasi-monochromatic, we can
use the result from Eq. (2.48) in Eq. (2.74) to yield

\[ \gamma_{AB} = \frac{1}{2}\left(1 - \frac{\tau_1}{\tau_0}\right)e^{-j\omega_0\tau_1} + \frac{1}{2}\left(1 - \frac{\tau_2}{\tau_0}\right)e^{-j\omega_0\tau_2}. \tag{2.75} \]
After some manipulation, the modulus of the complex degree of coherence can be
expressed as

\[ |\gamma_{AB}|^2 = \frac{1}{2}\left(1 - \frac{\tau}{\tau_0}\right)^2\left[1 + \cos\omega_0(\tau_1 - \tau_2)\right], \tag{2.76} \]

provided that τ₁ ≈ τ₂ ≈ τ ≫ |τ₁ − τ₂|. Hence the period of the cosine function in
Eq. (2.76) is much shorter than that of the triangle function, and the cosine function
dominates the visibility of the interferogram. By applying the simple geometry
described in Fig. 2.11 and using the paraxial approximation, we can obtain

\[ \tau_2 - \tau_1 = \frac{sl}{rc}, \tag{2.77} \]

where s is the separation of the two light sources, l is the separation of points A and
B, and r is the separation of the source plane from the plane of points A and
B. Finally, we can say that the interference fringes are visible provided

\[ \omega_0(\tau_1 - \tau_2) < \pi, \]

or

\[ l < \frac{r\lambda}{2s}. \tag{2.78} \]
To understand the meaning of Eq. (2.78), we can imagine that in Fig. 2.11 point
A is a movable pinhole while point B is a fixed pinhole. When the separation
between point A and point B is very short, i.e., l ≪ rλ/2s, the visibility of the
interferogram on the screen approaches unity. When the separation between point
A and point B is rλ/2s, the visibility falls to zero. As a result, we can define the
transverse coherence length l_t as

\[ l_t = \frac{r\lambda}{2s} = \frac{\lambda}{2\theta_s}, \tag{2.79} \]

where θ_s = s/r is the angular separation of the source measured from the plane of
points A and B. In other words, we can improve the spatial coherence by moving
the source far away from the plane of points A and B.
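For instance, Eq. (2.79) applied to a 1 mm source viewed from 1 m away (illustrative values) gives a transverse coherence length of about 0.28 mm for green light:

lambda = 550e-9; s = 1e-3; r = 1;   % assumed source size and distance
theta_s = s/r;                      % angular separation of the source
lt = lambda/(2*theta_s)             % transverse coherence length, about 0.275 mm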
Figure 2.12 Schematic diagrams of (a) correlation between light of two points
A and B from an extended source, and (b) corresponding calculation model.
\[ E_b(x, y) \propto \iint S(x', y')\exp\left\{\frac{jk_0}{r}\left[(x - x_b)x' + (y - y_b)y'\right]\right\}dx'\,dy', \tag{2.80} \]

\[ \gamma_{AB} = \frac{\displaystyle\iint S(x', y')\exp\left\{\frac{jk_0}{r}\left[(x_a - x_b)x' + (y_a - y_b)y'\right]\right\}dx'\,dy'}{\displaystyle\iint S(x', y')\,dx'\,dy'} = \frac{\mathcal{F}\{S(x, y)\}\big|_{k_x = k_0(x_a - x_b)/r,\; k_y = k_0(y_a - y_b)/r}}{\displaystyle\iint S(x', y')\,dx'\,dy'}, \tag{2.81} \]
Table 2.2 MATLAB code for plotting the modulus of the complex degree of coherence,
see Fig. 2.14
% r0 (source radius) and gama (|gamma_AB|) are computed earlier in the listing.
plot(r0,gama)
title('Complex degree of coherence')
xlabel('Radius (mm)')
ylabel('|\gamma_A_B|')

must be smaller than 0.021 mm (i.e., diameter 42 μm) so that the fringe visibility
can be larger than 0.5.
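The curve of Fig. 2.14 can be generated from Eq. (2.81): for a uniform circular source the normalized transform is proportional to J₁(ζ)/ζ (cf. Problem 2.10). The following sketch assumes illustrative values for the wavelength, the source distance r, and the pinhole separation l:

lambda = 632.8e-9; k0 = 2*pi/lambda;     % assumed wavelength
r = 1; l = 10e-3;                        % assumed source distance and separation (m)
r0 = linspace(1e-8, 0.1e-3, 500);        % source radius (m)
zeta = k0*r0*l/r;
gama = abs(2*besselj(1,zeta)./zeta);     % |gamma_AB| for a circular disk source
plot(r0*1e3, gama)
xlabel('Radius (mm)'); ylabel('|\gamma_A_B|')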
Problems
2.1 Verify the following convolution results,

\[ \exp\left[\frac{jk_0}{2z_0}(x^2 + y^2)\right] * \exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right] \propto \begin{cases} \exp\left[\dfrac{jk_0}{2(z_0 - z)}(x^2 + y^2)\right] & \text{for } z \ne z_0 \\[2ex] \delta(x, y) & \text{for } z = z_0, \end{cases} \]

which appear in Eq. (2.7).
2.2 With reference to Fig. 2.5 and Fig. 2.6 for the recording and reconstruction
geometry of point sources, let a = b = 0, i.e., the reference and reconstruction
point sources are on the z-axis, and show that the lateral magnification for the
virtual image is [6]

\[ M_{v,Lat} = \left(1 - \frac{R}{l_1} + \frac{R}{l_2}\right)^{-1}, \]

and the longitudinal magnification for R ≫ d is

\[ M_{v,Long} = \frac{(l_1 l_2)^2}{(l_1 l_2 + R\,l_1 - R\,l_2)^2}. \]
2.3 Show that for quasi-monochromatic light oscillating at ω₀, with finite size wave
trains whose initial phase is distributed randomly between 0 and 2π within some
fixed time, i.e., the phase randomly changes every time interval τ₀ and remains
stable between the changes, as shown in Fig. 2.9, the complex degree of
coherence is given by

\[ \gamma(\tau) = \left(1 - \frac{\tau}{\tau_0}\right)e^{-j\omega_0\tau}. \]

2.4 According to the definition of coherence time given by Mandel [5], show that the
coherence time of the quasi-monochromatic light from Problem 2.3 is given
by τ_c = 2τ₀/3.
2.5 The typical bandwidth of a commercial He–Ne laser operated at 632.8 nm
is about 500 MHz. Calculate the corresponding coherence length.
2.6 A bandpass filter is usually applied in association with a broadband light
source to produce interference. Typically, the full width at half-maximum
(FWHM) of the transmission band is 10 nm while the center wavelength of
the band is 630 nm. Calculate the corresponding coherence length.
2.7 When we investigate the interference formed by the two incoherent light sources
shown in Fig. 2.11, τ₁ and τ₂ are the time delays between the waves at point A
and point B, respectively, from source 1 and source 2. Show that τ₂ − τ₁ = sl/rc,
assuming the small angle approximation.
2.8 Show that for two point sources that are quasi-monochromatic, the complex
degree of mutual coherence is

\[ |\gamma_{AB}|^2 = \left|\frac{1}{2}\left(1 - \frac{\tau_1}{\tau_0}\right)e^{-j\omega_0\tau_1} + \frac{1}{2}\left(1 - \frac{\tau_2}{\tau_0}\right)e^{-j\omega_0\tau_2}\right|^2 = \frac{1}{2}\left(1 - \frac{\tau}{\tau_0}\right)^2\left[1 + \cos\omega_0(\tau_1 - \tau_2)\right] \]

if τ₁ ≈ τ₂ ≈ τ ≫ (τ₁ − τ₂).
2.9 Start from the Fresnel diffraction formula [Eq. (1.35)] and assume a spherical wave passing through an aperture S(x, y) converging to (x_b, y_b) at the diffraction plane [Fig. 2.12(b)]. Find the diffracted field as given by the result in Eq. (2.80).
2.10 Show that the complex degree of coherence of the light produced by a uniform, quasi-monochromatic, incoherent source in the shape of a circular disk of radius r₀ is proportional to J₁(·)/(·) as given by Eq. (2.82). Note that since the problem has circular symmetry, it is convenient to express the Fourier transform as follows:
$$\mathcal{F}\{f(x, y)\} = F(k_x, k_y) = 2\pi\int_0^\infty r f(r) J_0(k_r r)\,dr,$$
where $r = \sqrt{x^2 + y^2}$, $k_r = \sqrt{k_x^2 + k_y^2}$, and J₀(·) is the zeroth-order Bessel function. The above integral is also referred to as the Fourier–Bessel transform, denoted by
$$\mathcal{B}\{f(r)\} = 2\pi\int_0^\infty r f(r) J_0(k_r r)\,dr.$$
References
1. D. Gabor, A new microscopic principle, Nature 161, 777–778 (1948).
2. M. P. Givens, Introduction to holography, American Journal of Physics 35, 1056–1064 (1967).
3. T.-C. Poon, On the fundamentals of optical scanning holography, American Journal of Physics 76, 738–745 (2008).
4. J. W. Goodman, Statistical Optics (John Wiley & Sons, New York, 1985).
5. L. Mandel, Fluctuations of photon beams: the distribution of the photo-electrons, Proceedings of the Physical Society 74, 233 (1959).
6. F. T. S. Yu, Optical Information Processing (John Wiley & Sons, New York, 1983).
3
Types of holograms
In this chapter we will introduce some basic types of holograms and their basic principles. Some topics, such as holographic recording materials, are not relevant to digital holography and hence are not covered here (for reviews of holographic materials, see Refs. [1–4]).
Figure 3.1 (a) Recording geometry, and (b) reconstruction geometry for the
Gabor hologram.
The merit of Gabor holography is that the setup is very simple and it is possible to produce a Gabor hologram using a low-coherence light source. On the other hand, because all the light fields propagate along the same direction in the reconstruction stage, they are observed simultaneously, and the reconstructed image is always blurred by the transmitted zeroth-order light and the twin image. Another problem of the Gabor hologram is that the amplitude fluctuation of the object must be small compared with the uniform background of the object to make the technique useful. Accordingly, Gabor holography cannot be applied to the usual diffusely reflecting objects. However, this shortcoming can be overcome by using an independent reference light, as shown in Fig. 3.2. Since the reference light and the object light overlap along the same direction, this setup is called the on-axis or in-line geometry. In on-axis holography, there is no limitation on the type of object used. However, the problem of the zeroth-order light and the twin image remains.
Figure 3.3 (a) Recording setup, and (b) reconstruction setup for off-axis
holography.
communication systems where a carrier wave is used to carry the message in the
theory of modulation. Hence the off-axis hologram is also called the carrier-
frequency hologram.
In the reconstruction, the hologram is illuminated by the reconstruction light along the same direction as the reference light, as shown in Fig. 3.3(b). Assuming the amplitude of the reconstruction light is the same as that of the reference light, the complex field just behind the hologram can be expressed as
$$\psi_r e^{jk_0\sin\theta\,x}\,t(x, y) = |\psi_r|^2\psi_r e^{jk_0\sin\theta\,x} + |\psi_0(x, y)|^2\psi_r e^{jk_0\sin\theta\,x} + |\psi_r|^2\psi_0(x, y) + \psi_0^*(x, y)\,\psi_r^2\,e^{j2k_0\sin\theta\,x}, \tag{3.6}$$
where we have used Eq. (3.5a) to write the above equation. Similar to Gabor holography, the first two terms on the right hand side of Eq. (3.6) represent the transmitted light, i.e., the zeroth-order beam. The third term contributes to the virtual reconstructed image. Finally, the last term contributes to the conjugate real image. To understand how the virtual image can be viewed without the annoying disturbance due to the transmitted light and the twin image, it is convenient to analyze the situation in the Fourier spectrum domain. In the spectrum domain, the four terms on the right side of Eq. (3.6) can be respectively calculated by
Figure 3.4 One-dimensional plot of the spectrum of the off-axis hologram. Note that the spectrum of the zeroth-order beam is centered at kx = 0.
propagation vector along the x-direction is then k₀ₓ = k₀ sin(10°). Thus the bandwidth of the reconstructed virtual image is

B = 2k₀ sin(10°) ≈ 3448 rad/mm,

where k₀ = 2π/0.6328 μm ≈ 9929 rad/mm. The factor of two in the above equation takes into account that the spread of the light from the reconstructed image also spills into the negative x-direction. By substituting the calculated value of B into Eq. (3.7), we can determine θ_min as

θ_min = sin⁻¹[3 × 3448 rad/mm / (2 × 9929 rad/mm)] ≈ 31.4°.
Now, in order to successfully record the off-axis hologram, the recording medium must be able to resolve the spatial carrier frequency, f_x = sin θ_min/λ, plus half the bandwidth, B/2, of the reconstructed image, i.e., the required resolvable spatial frequency is
$$f_{resolvable} = \frac{\sin\theta_{min}}{\lambda} + \frac{B/2}{2\pi} = \frac{\sin\theta_{min}}{\lambda} + \frac{B}{4\pi}. \tag{3.8}$$
With the calculated values of θ_min and B, we can find

f_resolvable = sin 31.4°/632.8 nm + (3448 rad/mm)/4π ≈ 1095 cycle/mm (or line pair/mm, lp/mm).
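The numbers of this example can be checked with a few MATLAB lines (our own sketch; it simply re-evaluates the formulas above):

lambda=632.8*10^-6; % wavelength in mm
k0=2*pi/lambda; % ~9929 rad/mm
B=2*k0*sind(10); % bandwidth of the reconstructed image, ~3448 rad/mm
theta_min=asind(3*B/(2*k0)); % ~31.4 degrees
f_res=sind(theta_min)/lambda+B/(4*pi); % ~1095 lp/mm (up to rounding)
fprintf('theta_min = %.1f deg, f_res = %.0f lp/mm\n', theta_min, f_res);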
Figure 3.5 (a) Recording geometry, and (b) reconstruction geometry for the
image hologram.
Figure 3.6 Original object (a), and the white-light reconstructed images with distances between the image and hologram of (b) 1 cm, (c) 5 cm, and (d) 15 cm, respectively.
Table 3.1 MATLAB code for simulation of an image hologram, see Example 3.2
Ii=double(Ii);
PH=rand([256,256]);
Ii=Ii.*exp(2i*pi*PH); % add a random phase on the object
M=512;
I=zeros(512);
I(128:383,128:383)=Ii; % zero padding
% Reconstruction (650nm)
p=exp(-2i*pi*(-z).*((1/w)^2-(1/M/delta)^2.*(C-M/2-1).^2-...
(1/M/delta)^2.*(R-M/2-1).^2).^0.5);
A1=fftshift(ifft2(fftshift(E)));
Az1=A1.*p;
R1=fftshift(fft2(fftshift(Az1)));
R1=(abs(R1)).^2;
figure; imshow(R1/max(max(R1)));
title('Reconstructed image (650nm)')
axis off
% Reconstruction (450nm~650nm)
dw=50;
IMA=zeros(512,512);
for g=0:40
w2=(6500-dw*g)*10^-8; % reconstruction wavelength
E2=E.*exp(2i*pi*sind(10)*(w-w2)/w/w2.*R*delta);
% phase mismatch due to the wavelength shift
p=exp(-2i*pi*(-z).*((1/w2)^2-...
(1/M/delta)^2.*(C-M/2-1).^2-...
(1/M/delta)^2.*(R-M/2-1).^2).^0.5);
Az2=ifft2(fftshift(E2)).*(fftshift(p));
R2=fftshift(fft2(Az2));
R2=(abs(R2)).^2;
IMA=IMA+R2; % summation of all wavelengths
end
IMA=IMA/max(max(IMA));
figure; imshow(IMA)
title('Reconstructed image (white light)')
axis off
chromatic aberration becomes large and the reconstructed image is blurred, as shown
in Figs. 3.6(c) and (d). The MATLAB code is listed in Table 3.1 as a reference.
where t₀(x, y) is the amplitude transmittance of the object and T₀(k_x, k_y) is the Fourier transform of t₀(x, y); δ(x − x₀, y) stands for the reference light spot at x = x₀, y = 0; f is the focal length of lens 2, and A is the amplitude of the reference light. Consequently, the hologram can be expressed as
Figure 3.7 (a) Recording geometry, and (b) reconstruction geometry for a
Fourier hologram.
$$t(x, y) = \left|T_0\!\left(\frac{k_0 x}{f}, \frac{k_0 y}{f}\right)\right|^2 + |A|^2 + T_0\!\left(\frac{k_0 x}{f}, \frac{k_0 y}{f}\right) A \exp\left(\frac{jk_0 x_0 x}{f}\right) + T_0^*\!\left(\frac{k_0 x}{f}, \frac{k_0 y}{f}\right) A \exp\left(\frac{-jk_0 x_0 x}{f}\right). \tag{3.12}$$
In the reconstruction process, the hologram placed at the front focal plane of a lens is illuminated using a normally incident plane wave with unit amplitude, as shown in Fig. 3.7(b). According to Eq. (3.12), the complex field at the back focal plane of the lens contains three terms:
$$\psi_1(x, y) = \mathcal{F}\left\{\left|T_0\!\left(\frac{k_0 x}{f}, \frac{k_0 y}{f}\right)\right|^2 + |A|^2\right\}\Bigg|_{k_x = k_0 x/f,\;k_y = k_0 y/f} = \frac{f^4}{k_0^4}\,t_0(x, y) \otimes t_0(x, y) + |A|^2\delta(x, y), \tag{3.13a}$$
where ⊗ denotes correlation, and
$$\psi_2(x, y) = \mathcal{F}\left\{T_0\!\left(\frac{k_0 x}{f}, \frac{k_0 y}{f}\right) A \exp\left(\frac{jk_0 x_0 x}{f}\right)\right\}\Bigg|_{k_x = k_0 x/f,\;k_y = k_0 y/f} = \frac{f^2}{k_0^2}\,A\,t_0(x + x_0, y), \tag{3.13b}$$
Figure 3.8 (a) The off-axis Fourier hologram, and (b) the reconstructed image.
The object pattern is the same as in Fig. 3.6(a).
$$\psi_3(x, y) = \mathcal{F}\left\{T_0^*\!\left(\frac{k_0 x}{f}, \frac{k_0 y}{f}\right) A \exp\left(\frac{-jk_0 x_0 x}{f}\right)\right\}\Bigg|_{k_x = k_0 x/f,\;k_y = k_0 y/f} = \frac{f^2}{k_0^2}\,A\,t_0^*(x - x_0, y). \tag{3.13c}$$
Table 3.2 MATLAB code for simulation of a Fourier transform hologram, see Example 3.3
M=512;
I=zeros(512);
I(128:383,128:383)=Ii; % zero-padding
figure; imshow(mat2gray(abs(I)));
title('Object pattern')
axis off
% Reconstruction
U=fftshift(ifft2(fftshift(H)));
figure; imshow(900.*mat2gray(abs(U)));
title('Reconstructed image')
axis off
both the object and a focused light spot are still located on the same plane. Suppose that the distance between the object, t₀(x, y), and the holographic film is within the Fresnel region, so the complex amplitudes of the object light ψ₀(x, y) and the reference light ψᵣ(x, y) at the hologram plane can be described using the Fresnel diffraction formula. Explicitly, they are expressed as
$$\psi_0(x, y) = \exp\left[\frac{-jk_0}{2z_0}\left(x^2 + y^2\right)\right]\mathcal{F}\left\{t_0(x, y)\exp\left[\frac{-jk_0}{2z_0}\left(x^2 + y^2\right)\right]\right\}\Bigg|_{k_x = k_0 x/z_0,\;k_y = k_0 y/z_0}, \tag{3.14a}$$

$$\psi_r(x, y) = \exp\left[\frac{-jk_0}{2z_0}\left(x^2 + y^2\right)\right]\mathcal{F}\left\{\delta(x - x_0, y)\exp\left[\frac{-jk_0}{2z_0}\left(x^2 + y^2\right)\right]\right\}\Bigg|_{k_x = k_0 x/z_0,\;k_y = k_0 y/z_0} = \exp\left\{\frac{-jk_0}{2z_0}\left[(x - x_0)^2 + y^2\right]\right\}, \tag{3.14b}$$
where z₀ is the distance between the object and the hologram. Note that the proportionality constants in ψ₀ and ψᵣ have been dropped for simplicity. The recorded hologram becomes
$$t(x, y) = |\psi_0(x, y) + \psi_r(x, y)|^2.$$
Our concern is the cross terms in the holographic recording, which are
$$\psi_0(x, y)\psi_r^*(x, y) + \psi_0^*(x, y)\psi_r(x, y) = \exp\left(\frac{-jk_0 x_0 x}{z_0} + \frac{jk_0 x_0^2}{2z_0}\right)\mathcal{F}\left\{t_0(x, y)\exp\left[\frac{-jk_0}{2z_0}\left(x^2 + y^2\right)\right]\right\}\Bigg|_{k_x = \frac{k_0 x}{z_0},\;k_y = \frac{k_0 y}{z_0}} + \exp\left(\frac{jk_0 x_0 x}{z_0} - \frac{jk_0 x_0^2}{2z_0}\right)\mathcal{F}^*\left\{t_0(x, y)\exp\left[\frac{-jk_0}{2z_0}\left(x^2 + y^2\right)\right]\right\}\Bigg|_{k_x = \frac{k_0 x}{z_0},\;k_y = \frac{k_0 y}{z_0}}. \tag{3.15}$$
We can now use the setup shown in Fig. 3.7(b) to reconstruct the lensless Fourier hologram. Ignoring the zeroth-order light, the optical field at the back focal plane of the lens is
$$\mathcal{F}\left\{\psi_0(x, y)\psi_r^*(x, y) + \psi_0^*(x, y)\psi_r(x, y)\right\}\Big|_{k_x = k_0 x/f,\;k_y = k_0 y/f} \propto t_0\!\left(-\frac{z_0}{f}x + x_0,\,-\frac{z_0}{f}y\right)\exp\left\{\frac{-jk_0}{2z_0}\left[\left(\frac{z_0}{f}x - x_0\right)^2 + \left(\frac{z_0}{f}y\right)^2\right]\right\} + t_0^*\!\left(\frac{z_0}{f}x + x_0,\,\frac{z_0}{f}y\right)\exp\left\{\frac{jk_0}{2z_0}\left[\left(\frac{z_0}{f}x + x_0\right)^2 + \left(\frac{z_0}{f}y\right)^2\right]\right\}, \tag{3.16}$$
Figure 3.10 (a) First-step and (b) second-step recording geometries; (c) recon-
struction geometry for the rainbow hologram.
been proposed [8, 9]. In one-step recording, the real image of the object is
formed using a slit and a lens.
% 3. Reconstruction (650nm)
H2=zeros(1024);
H2(256:767,256:767)=conj(H);
M=1024;
r=1:M;
c=1:M;
[C, R]=meshgrid(c, r);
z=30;
p=exp(-2i*pi*z.*((1/w)^2-(1/M/delta)^2.*(C-M/2-1).^2-...
(1/M/delta)^2.*(R-M/2-1).^2).^0.5);
A2=fftshift(ifft2(fftshift(H2)));
Az2=A2.*p;
R650=fftshift(fft2(fftshift(Az2)));
R650=(abs(R650)).^2;
R650=R650/max(max(R650));
R650=R650(384:639,384:639);
figure; imshow(R650);
title('Reconstructed image (650nm)')
axis off
Az2=Az2.*p; % at observer plane
S650=fftshift(fft2(fftshift(Az2)));
S650=(abs(S650)).^2;
S650=S650/max(max(S650));
S650=S650(384:639,384:639);
% Reconstruction (550nm)
w2=5500*10^-8;
H3=H2.*exp(2i*pi*sind(0.4)*(w-w2)/w/w2.*R*delta);
p=exp(-2i*pi*z.*((1/w2)^2-(1/M/delta)^2.*(C-M/2-1).^2-...
(1/M/delta)^2.*(R-M/2-1).^2).^0.5);
Az3=fftshift(ifft2(fftshift(H3))).*p;
R550=fftshift(fft2(fftshift(Az3)));
R550=(abs(R550)).^2;
R550=R550/max(max(R550));
R550=R550(384:639,384:639);
figure; imshow(R550);
title('Reconstructed image (550nm)')
Az3=Az3.*p;
S550=fftshift(fft2(fftshift(Az3)));
S550=(abs(S550)).^2;
S550=S550/max(max(S550));
S550=S550(384:639,384:639);
% Reconstruction (450nm)
w3=4500*10^-8;
H4=H2.*exp(2i*pi*sind(0.4)*(w-w3)/w/w3.*R*delta);
p=exp(-2i*pi*z.*((1/w3)^2-(1/M/delta)^2.*(C-M/2-1).^2-...
(1/M/delta)^2.*(R-M/2-1).^2).^0.5);
Az4=fftshift(ifft2(fftshift(H4))).*p;
R450=fftshift(fft2(fftshift(Az4)));
R450=(abs(R450)).^2;
R450=R450/max(max(R450));
R450=R450(384:639,384:639);
figure; imshow(R450);
title('Reconstructed image (450nm)')
Az4=Az4.*p;
S450=fftshift(fft2(fftshift(Az4)));
S450=(abs(S450)).^2;
S450=S450/max(max(S450));
S450=S450(384:639,384:639);
% color slit image
SLIT=zeros(256,256,3);
SLIT(:,:,1)=S650;
SLIT(:,:,2)=S550;
SLIT(:,:,3)=S450;
SLIT=uint8(SLIT.*500);
figure; image(SLIT)
title('Slit images')
axis off
axis equal
Problems
3.1 When the off-axis hologram is reconstructed with the original reference wave [see Fig. 3.3(b)], the virtual image appears on-axis in the position of the original object and the real image is formed off-axis. Let us instead reconstruct the hologram with a light beam propagating at an angle −θ to the z-axis, i.e., we are using a conjugate beam to reconstruct the hologram. Show that the reconstructed real image is on-axis and the virtual image appears off-axis.
3.2 Verify that in lensless Fourier holography, as discussed in Fig. 3.9, the reconstructed images are separated by 2fx₀/z₀, where f is the focal length of the lens used for reconstruction and x₀ is the separation distance between the spherical reference light and the object. In other words, you need to verify Eq. (3.16).
3.3 Consider an off-axis hologram. We assume that the object is on the optical axis and is z₀ behind the hologram. The offset angle of the plane wave reference light is θ₀, and the recording wavelength is λ₀. If the reconstruction light is also a plane wave with an offset angle θ₀ but with a wavelength of λᵣ, show that the location of the reconstructed image is at zᵣ = (λ₀/λᵣ)z₀, xᵣ = z₀ tan θ₀ (1 − λ₀/λᵣ).
3.4 Assume that the slit for generating a rainbow hologram is z₀ from the hologram plane, and the offset angle of the plane wave reference light is θ₀. The recording wavelength is λ₀. With reference to Fig. 3.10(c), show that the relationship between the width of the slit, w, the diameter of the observer's pupil, d, and the bandwidth, Δλ, of the reconstructed image is given by Δλ = λ₀(w + d)/(z₀ tan θ₀).
References
1. R. J. Collier, C. B. Burckhardt, and L. H. Lin, Optical Holography (Murray Hill, NJ, 1983).
2. P. Hariharan, Optical Holography: Principles, Techniques, and Applications (Cambridge University Press, Cambridge, 1996).
3. P. Hariharan, Basics of Holography (Cambridge University Press, Cambridge, 2002).
4. S. A. Benton and V. M. Bove, Jr., Holographic Imaging (Wiley, Hoboken, NJ, 2008).
5. D. Gabor, A new microscopic principle, Nature 161, 777–778 (1948).
6. E. N. Leith and J. Upatnieks, Reconstructed wavefronts and communication theory, Journal of the Optical Society of America 52, 1123–1130 (1962).
7. S. A. Benton, Hologram reconstructions with extended incoherent sources, Journal of the Optical Society of America 59, 1545–1546 (1969).
8. H. Chen and F. T. S. Yu, One-step rainbow hologram, Optics Letters 2, 85–87 (1978).
9. H. Chen, A. Tai, and F. T. S. Yu, Generation of color images with one-step rainbow holograms, Applied Optics 17, 1490–1491 (1978).
10. J. C. Wyant, Image blur for rainbow holograms, Optics Letters 1, 130–132 (1977).
11. E. N. Leith and H. Chen, Deep-image rainbow holograms, Optics Letters 2, 82–84 (1978).
4
Conventional digital holography
holography. In the following discussion we treat only the one-dimensional case for
simplicity without any loss of generality.
Assuming f(x) is a continuous analog signal along the x-axis, we can produce a discrete signal corresponding to f(x) by sampling it with a fixed separation, Δx,
$$f[n] = f(n\Delta x) = \sum_{k=0}^{N-1} f(k\Delta x)\,\delta[(n - k)\Delta x], \tag{4.1}$$
where n is an integer between 0 and (N − 1), and δ[n] is the unit impulse sequence, which is the counterpart of the delta function when dealing with discrete signals. The mathematical definition of δ[(n − k)Δx] is
$$\delta[(n - k)\Delta x] = \begin{cases}1 & n = k\\ 0 & n \ne k.\end{cases} \tag{4.2}$$
In Eq. (4.1) we have also assumed that f(x) is zero outside the range 0 ≤ x < NΔx so that we can use a finite number of samplings, say N, to represent the original analog signal. Because the original signal is sampled with sampling period Δx, the sampling frequency is f_s = 1/Δx. In the following discussion we assume that N is even for the sake of simplicity, although it is not limited to being even. Also to simplify the discussion, we ignore the problem of finding the spectrum of a discrete signal. We must, however, know two important properties. First, the spectrum of the sampled signal f[n] is periodic in the spectrum domain. This concept can be easily grasped in the analog version. By applying the Fourier transform to the continuously defined sampled function, we get
$$\mathcal{F}\left\{f(x)\sum_n \delta(x - n\Delta x)\right\} = 2\pi f_s\sum_n F(k_x - 2\pi f_s n), \tag{4.3}$$
Figure 4.2 Spectrum of (a) the original continuous function, f (x), and (b) the
sampled function.
Since the resulting spectrum is the convolution of the original spectrum F(k_x) and the sinc function, the sinc function can be regarded as an impulse response of a system with the original spectrum F(k_x) treated as an input. Therefore the width of the impulse response defines the frequency resolution of the system. The width, Δk, of sinc(k_x L_x/2π) can be defined by setting the argument of the sinc function to 1, i.e., the first zero of the sinc function, giving the width Δk = 2π/L_x = 2π/NΔx, or Δf = 1/NΔx = 1/L_x. So when the samples are Δf apart in the spectral domain, the value of Δf gives the frequency resolution of the resulting Fourier transform.
Now, for convenience, the frequency range of the spectrum is usually selected to be [0, f_s) (this expression means 0 is included but f_s is not) in units of cycle/length, or [0, 2πf_s) in units of radian/length. Accordingly, the discrete spectrum F[m] can be found from f[n] by the discrete Fourier transform (DFT),
$$F[m] = \sum_{n=0}^{N-1} f[n]\exp\left(\frac{-j2\pi nm}{N}\right), \tag{4.6a}$$
where m is an integer, also between 0 and (N − 1). Note that F[m] is the N-point DFT of f[n] and is itself a periodic sequence with period equal to N. Similar to the Fourier transform of a continuous signal, we can retrieve f[n] from F[m] by the inverse discrete Fourier transform (IDFT),
$$f[n] = \frac{1}{N}\sum_{m=0}^{N-1} F[m]\exp\left(\frac{j2\pi nm}{N}\right). \tag{4.6b}$$
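As a quick check (our addition, not from the text), the definitions in Eq. (4.6) can be verified directly against MATLAB's fft and ifft:

N=8;
f=rand(1,N)+1i*rand(1,N); % an arbitrary complex sequence
n=0:N-1;
F=zeros(1,N);
for m=0:N-1 % direct evaluation of Eq. (4.6a)
F(m+1)=sum(f.*exp(-1j*2*pi*n*m/N));
end
max(abs(F-fft(f))) % ~1e-15, so Eq. (4.6a) matches fft
max(abs(f-ifft(F))) % ~1e-15, so Eq. (4.6b) matches ifft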
Figure 4.3 Spectrum of a sampled function when B > 2πf_s, illustrating spectral folding.
Note that f[n] is periodic with period N samples, or NΔx in the spatial domain. Similarly, the index m corresponds to the discretized radian frequencies k_m through k_m Δx = (2π/N)m if we recognize that exp(j2πnm/N) = exp(jk_m nΔx) from Eq. (4.6).
With reference to Fig. 4.2, we see that the spectrum range [0, 2πf_s) is in range 1. We observe that the right half of the central duplicate and the left half of its adjacent duplicate are included in range 1. As a result, the bandwidth of F(k_x), B, must be smaller than 2πf_s if we want to avoid overlapping of the spectra. We define the Nyquist frequency f_NQ as
$$f_{NQ} = \frac{f_s}{2}, \tag{4.7}$$
and the maximum frequency of the signal must be smaller than f_NQ. If the duplicates overlap with each other, as shown in Fig. 4.3, there is no way to separate the two spectrum duplicates. In this case, the high frequency components are under-sampled and the retrieved spectrum is incorrect, resulting in errors called aliasing or spectral folding. In other words, in order to recover the original continuous signal faithfully from its sampled version, the continuous signal should be band-limited to f_B = B/2π ≤ f_s = 2f_NQ for a given sampling frequency f_s. A signal is said to be band-limited to f_B if all of its frequency components are zero above f_B.
It should be noted that the DFT and IDFT [see Eq. (4.6)] are defined according to MATLAB, but not according to Eq. (1.22). In MATLAB, the DFT and IDFT can be evaluated by the commands fft/ifft for one-dimensional functions and fft2/ifft2 for two-dimensional functions, respectively. We use Arial font to represent the MATLAB commands henceforth. These commands are so named because of efficient algorithms known as fast Fourier transform (FFT) algorithms, which speed up the calculations of the DFT and IDFT, especially when the number of samples is large.
As shown in Fig. 4.2(b), for a well sampled signal the left half (m = 0 ~ N/2 − 1) of F[m] is sufficient to reflect the spectral properties of a real signal. In this case, the right half (m = N/2 ~ N − 1) of F[m] is usually disregarded without any loss of information. However, in digital holography, a complex signal (i.e., the complex amplitude of the optical field) is analyzed and thus not only the positive frequency spectrum but also the negative frequency spectrum of F(k_x) is necessary. In other words, we prefer to obtain the spectrum in range 2 rather than in range 1 in Fig. 4.2(b). Because the sampling period of f[n] is Δx and the command fft generates the spectrum of the signal in the range [0, 2πf_s), we have the distribution of the spectrum in the range k_x = 2π(N/2)/NΔx ~ 2π(N − 1)/NΔx, which is the same as that in the range k_x = −2π(N/2)/NΔx ~ −2π/NΔx. As a result, the full frequency spectrum between ±f_s/2 can be obtained directly by swapping the right and left halves of F[m]. After the swap, the spectrum is represented within ±f_s/2, or (m = −N/2 ~ N/2 − 1), and the zero frequency (m = 0) is located at the (N/2 + 1)th point of the swapped spectrum. In MATLAB, we can use the command fftshift to swap the data. Readers may also apply the command ifftshift, which is useful if the number of data points is odd. Therefore in the DFT the phase contribution of the point at x is identical to that of the point at (x − NΔx), which is what we assume when using fftshift. After the spectrum has been calculated using the command fft, we perform fftshift again to re-locate the origin of the spectrum domain at the center of the axis, i.e., the (N/2 + 1)th point.
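The following fragment (a minimal sketch of our own; the Gaussian test signal and the numbers are assumed) illustrates the swapping just described:

N=256; dx=0.1; % number of samples and sampling period in mm (assumed)
x=(-N/2:N/2-1)*dx; % spatial coordinate; the origin is the (N/2+1)th point
g=exp(-x.^2); % a Gaussian test signal
G=fftshift(fft(fftshift(g))); % spectrum with zero frequency at the center
fr=(-N/2:N/2-1)/(N*dx); % frequency axis from -fs/2 to fs/2 in cycles/mm
plot(fr,abs(G)); xlabel('1/mm'); ylabel('|Spectrum|')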
Table 4.1 MATLAB code for demonstrating under-sampling and aliasing, see Example 4.1
figure(1);
subplot(2,2,1)
plot(fx,SY)
axis([-inf, inf, 0, 800])
axis square
title('Sampling period 0.1 mm')
xlabel('1/mm')
ylabel('Spectrum')
subplot(2,2,2)
plot(fx1,SY1)
axis([-inf, inf, 0, 800])
axis square
title('Sampling period 0.2 mm')
xlabel('1/mm')
ylabel('Spectrum')
subplot(2,2,3)
plot(fx2,SY2)
axis([-inf, inf, 0, 800])
axis square
title('Sampling period 0.4 mm')
xlabel('1/mm')
ylabel('Spectrum')
subplot(2,2,4)
plot(fx3,SY3)
axis([-inf, inf, 0, 800])
axis square
title('Sampling period 3 mm')
xlabel('1/mm')
ylabel('Spectrum')
Table 4.2 MATLAB code for calculating the Fourier spectrum of a rectangular function,
see Example 4.3; the code is for plotting Fig. 4.5(a) and (b)
SP=fft(fftshift(REC));
% first locate the center of the aperture at the first point
% then calculate the spectrum of REC
SP=fftshift(SP); % locate the spectrum at the center of the domain
SP=SP/(max(SP)); % normalization
SP=real(SP);
df=1/N/dx; % frequency resolution
f=-(N/2):(N/2)-1;
f=f*df; % coordinate of spatial frequency centered at the (N/2+1)th point
subplot(1,2,2)
plot(f,SP)
axis([-inf, inf, -0.25, 1.1])
axis square
title('Spectrum')
xlabel('1/cm')
In Fig. 4.5(a), the spatial domain contains 64 points and the separation between adjacent sampled points is set to be Δx = 0.05 cm. Hence, according to Eq. (4.4), the total record length is L_x = NΔx = 64 × 0.05 = 3.2 cm and the frequency resolution is Δf = 1/L_x = 0.3125 cm⁻¹, with the extent in the frequency domain being L_f = 1/Δx = 20 cm⁻¹. The resulting spectrum is shown in Fig. 4.5(b). Note that the plot is jagged because the frequency resolution is poor. We can use zero padding to improve the frequency resolution, i.e., pad zero-value points onto the ends of the original spatial domain function. In Fig. 4.5(c), we keep the same Δx = 0.05 cm but use 512 points (eight times as many points) to represent
Figure 4.5 Rectangular function (left) and its Fourier spectrum (right). The spatial resolution and sample numbers of (a), (c), (e) are (Δx, N) = (0.05 cm, 64), (0.05 cm, 512), and (0.00625 cm, 512), respectively.
the whole spatial domain so that the domain size is eight times larger than that in Fig. 4.5(a), i.e., L_x = NΔx = 512 × 0.05 = 25.6 cm. The frequency resolution in the resulting spectrum becomes Δf = 1/L_x ≈ 0.039 cm⁻¹, which is eight times better, but with the same total length in the frequency domain as Δx is the same. The plot is shown in Fig. 4.5(d), which has a smoother appearance compared to the plot in Fig. 4.5(b).
Now in Fig. 4.5(e), we use 512 instead of 64 points to represent the same record length as in Fig. 4.5(a), i.e., L_x = NΔx = 512 × (Δx/8) = 3.2 cm. Although the sampling rate is eight times that of the original example, the number of samples is also eight times more than that of the original example. According to Eq. (4.4), the extent of the Fourier spectrum in Fig. 4.5(f) is eight times larger than that in Fig. 4.5(b), i.e., L_f = 8/Δx = 160 cm⁻¹, but the frequency resolution is the same as that in Fig. 4.5(b) as Δf = 1/L_x = 0.3125 cm⁻¹. For a fixed spectrum range, say ±10 cm⁻¹, the two curves in Fig. 4.5(b) and (f) are identical. In this case, more sampling points will not result in better precision in the frequency domain.
Finally, we want to point out a common misconception, i.e., that by padding zeros we improve accuracy. The truth is that once aliasing occurs, we can never obtain better accuracy by increasing the frequency resolution using zero padding. We can only increase the accuracy by not under-sampling. In our present example, we use a rectangular function, which is space-limited. Its Fourier transform is infinitely extended. Therefore, aliasing occurs no matter how small the frequency resolution achieved. Nevertheless, zero padding increases the number of samples and may help in getting a better idea of the spectrum from its samples. Zero padding is also useful to make the number of data points N = 2ⁿ, i.e., a power of 2, for efficient FFT operations.
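A compact way to see both points at once (our own sketch along the lines of Example 4.3; the array sizes follow Fig. 4.5):

N=64; dx=0.05; % as in Fig. 4.5(a), in cm
REC=zeros(1,N); REC(N/2-4:N/2+4)=1; % a rectangular function
SP1=fftshift(fft(fftshift(REC))); % 64-point spectrum, df=0.3125 1/cm
RECP=[zeros(1,224), REC, zeros(1,224)]; % zero padding to 512 points
SP2=fftshift(fft(fftshift(RECP))); % same envelope, df~0.039 1/cm
% SP2 samples the same underlying spectrum eight times more finely;
% zero padding interpolates the displayed curve but cannot remove aliasing.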
it is easily fabricated. CMOS imagers with more than ten million pixels in full-frame size (36 mm × 24 mm) have been used in commercial digital cameras. If the illumination is strong enough, the CMOS imager can also provide high quality images. Therefore more and more digital holographic experiments have been demonstrated using CMOS imagers.
A CCD or a CMOS image sensor consists of a two-dimensional array of pixels, as shown in Fig. 4.6. In each pixel, there is shielding around the active area of the pixel (the white square in Fig. 4.6). The energies of photons impinging upon the active area of the pixel are transferred to electrons. Ideally the number of electrons in each pixel is proportional to the average intensity over the pixel. Because each pixel only delivers a single signal in a single exposure, the acquired image is a matrix with the same dimensions as the imager. The pixel pitch, or the pixel size, is the distance between pixels (a in Fig. 4.6). The size of the imager, the pixel pitch, and the fill factor, which is the ratio of active area b × b to the total area within a pixel (b²/a²), are three important parameters of an imager. For practical CCD sensors, the typical pixel pitch is 4~8 μm, and the fill factor is 80~90%. A common CCD sensor has 1024 × 768 pixels, so the chip size is about 6.0 × 4.5 mm². A perfect imager has to register a pattern whose value is proportional to the intensity on the CCD or CMOS chip. In digital holography, we also expect the interference pattern to be recorded faithfully. However, the recording property is affected by the pixel pitch and other parameters of the imager. In the following paragraphs, we will discuss qualitatively the limitations of a practical imager.
where G(x) represents a Gaussian point spread function due to lateral diffusion, together with a coupling constant denoting the efficiency of the conversion of the electrons released by the light illumination. Finally, the signal from a single pixel of an imager is proportional to the integration of the illumination pattern over the pixel, which can be expressed as
$$S(x_0) = \int_{x_0 - \beta\Delta x_{CCD}/2}^{x_0 + \beta\Delta x_{CCD}/2} D(x)\,dx, \tag{4.13}$$
where β is a linear fill factor, defined by β = b/a (the ratio is illustrated in Fig. 4.6), and x₀ is the location of the pixel considered. Equation (4.13) can be re-written using the convolution operation as
$$S(x_0) = \int D(x)\,\mathrm{rect}\!\left(\frac{x_0 - x}{\beta\Delta x_{CCD}}\right)dx = D(x) * \mathrm{rect}\!\left(\frac{x}{\beta\Delta x_{CCD}}\right)\Bigg|_{x = x_0}. \tag{4.14}$$
Thus for a one-dimensional image sensor containing N pixels, the nth pixel's output signal (n = 1~N) can be expressed as
$$S[n] = S(n\Delta x_{CCD}) = D(n\Delta x_{CCD}) * \mathrm{rect}\!\left(\frac{n\Delta x_{CCD}}{\beta\Delta x_{CCD}}\right) = \left[I(n\Delta x_{CCD})\,m(n\Delta x_{CCD}) * G(n\Delta x_{CCD})\right] * \mathrm{rect}\!\left(\frac{n\Delta x_{CCD}}{\beta\Delta x_{CCD}}\right),$$
or in analog form as
$$S(x) = \left[I(x)m(x) * G(x)\right] * \mathrm{rect}\!\left(\frac{x}{\beta\Delta x_{CCD}}\right). \tag{4.15}$$
For simplicity, we neglect the mask effect, i.e., letting m(x) = 1, so that we can define the modulation transfer function (MTF) as
$$\mathrm{MTF} = \left|\mathcal{F}\{G(x)\}\,\mathrm{sinc}\!\left(\frac{\beta\Delta x_{CCD}\,k_x}{2\pi}\right)\right|, \tag{4.16}$$
which shows that the rectangular function and G(x) always act as lowpass filters on the illuminating pattern. For Si-based CCD imagers under short-wavelength illumination (λ < 0.8 μm), the diffusion effect is minor [G(x) ≈ δ(x)] and the integrating effect of the pixel [Eq. (4.13)] dominates. A simulated MTF based on Eq. (4.16) (Δx_CCD = 5 μm; β = 0.9; λ = 0.63 μm) is plotted in Fig. 4.8. It shows that the MTF is not zero when the frequency is larger than the Nyquist frequency f_NQ, where f_NQ has been calculated according to Eq. (4.7) and is
labeled in the figure. Aliasing will occur if the illumination pattern contains spatial frequencies beyond f_NQ.
It is difficult to apply the advanced model, that is Eq. (4.15), to determine quantitatively the MTF of a specific CCD imager because we do not usually have enough information on m(x) and G(x). However, the MTF of a CCD imager can be measured by several experimental methods, which have been discussed in Refs. [5–7]. For general-purpose CCD imagers, high spatial frequency components are attenuated because of the roll-off of the MTF. Moreover, the response for frequencies larger than f_NQ is not zero, causing aliasing errors. Figure 4.9 shows a photograph of an FZP taken by a CCD imager (Kodak KAI-2020M). The central circular patterns are the true FZP structures while the darker circular patterns on the four sides are false (aliasing) structures due to under-sampling of fine fringes.
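The roll-off due to pixel integration alone can be sketched in a few lines (our illustration of Eq. (4.16) with G(x) = δ(x); the numbers follow the simulation parameters quoted above):

dxCCD=5*10^-3; % pixel pitch in mm (5 um)
beta=0.9; % linear fill factor
fx=linspace(1e-6,200,1000); % spatial frequency in cycles/mm
u=beta*dxCCD*fx;
MTF=abs(sin(pi*u)./(pi*u)); % |sinc|; pixel-integration effect only
fNQ=1/(2*dxCCD); % Nyquist frequency, Eq. (4.7): 100 cycles/mm
plot(fx,MTF); xlabel('cycles/mm'); ylabel('MTF')
% the response is clearly nonzero beyond fNQ, which is the origin of aliasing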
where H(k_x, k_y; z) is the spatial frequency transfer function (SFTF) [see Eq. (1.26)]. Equation (4.17) is the basic formula of the angular spectrum method (ASM), which is also called the convolution method or the double Fourier transform method.
Suppose that the sampling period along the x-axis is Δx with M samples in total, and that the sampling period along the y-axis is Δy with N samples in total; then the discrete version of Eq. (4.17) is expressed as
$$\psi_p[m, n] = \mathrm{DFT2D}\left\{\mathrm{IDFT2D}\left\{\psi_{p0}[m, n]\right\} H[p, q]\right\}, \tag{4.18}$$
where
$$H[p, q] = \exp\left[-jk_0 z\sqrt{1 - \frac{(p\Delta k_x)^2}{k_0^2} - \frac{(q\Delta k_y)^2}{k_0^2}}\right] \tag{4.19}$$
and, according to Eq. (4.4), Δk_x and Δk_y are the frequency resolutions (radian/unit of length) corresponding to the sampling periods Δx and Δy along the x- and y-directions, respectively. Hence Δk_x = 2π/MΔx and Δk_y = 2π/NΔy. In Eq. (4.18), [m, n] and [p, q] are the indices of the samples in the spatial domain and the Fourier domain, respectively. Thus their ranges are: −M/2 ≤ m ≤ M/2 − 1, −N/2 ≤ n ≤ N/2 − 1, −M/2 ≤ p ≤ M/2 − 1, and −N/2 ≤ q ≤ N/2 − 1. DFT2D{·} and IDFT2D{·} denote the two-dimensional discrete Fourier transform and the two-dimensional inverse discrete Fourier transform, respectively, and they are defined as
$$\mathrm{DFT2D}\{f[m, n]\} = F[p, q] = \sum_{m=-\frac{M}{2}}^{\frac{M}{2}-1}\sum_{n=-\frac{N}{2}}^{\frac{N}{2}-1} f[m, n]\exp\left[-j2\pi\left(\frac{pm}{M} + \frac{qn}{N}\right)\right], \tag{4.20a}$$

$$\mathrm{IDFT2D}\{F[p, q]\} = f[m, n] = \frac{1}{MN}\sum_{p=-\frac{M}{2}}^{\frac{M}{2}-1}\sum_{q=-\frac{N}{2}}^{\frac{N}{2}-1} F[p, q]\exp\left[j2\pi\left(\frac{pm}{M} + \frac{qn}{N}\right)\right]. \tag{4.20b}$$
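These centered definitions are exactly what the triple-fftshift construction used throughout the code tables implements. A small numerical check (our addition, not from the text):

M=8; N=8;
f=rand(N,M); % arbitrary test field (rows ~ n, columns ~ m)
Fc=fftshift(fft2(fftshift(f))); % the route used in the code tables
[m,n]=meshgrid(-M/2:M/2-1,-N/2:N/2-1);
Fd=zeros(N,M); % direct sum of Eq. (4.20a)
for p=-M/2:M/2-1
for q=-N/2:N/2-1
Fd(q+N/2+1,p+M/2+1)=sum(sum(f.*exp(-1j*2*pi*(p*m/M+q*n/N))));
end
end
max(abs(Fc(:)-Fd(:))) % ~1e-12, confirming the equivalence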
Table 4.3 MATLAB code for calculating the diffraction eld of a rectangular aperture
using ASM, see Example 4.4
c=1:M;
r=1:M;
[C, R]=meshgrid(c, r);
OB=zeros(M);
OB(246:267,246:267)=1; % create the rectangular aperture
SP_OB=fftshift(ifft2(fftshift(OB)));
deltaf=1/M/delta; % sampling period in the spectrum domain
SFTF=exp(-2i*pi*z.*((1/lambda)^2-((R-M/2-1).*deltaf).^2-...
((C-M/2-1).*deltaf).^2).^0.5);
% the spatial frequency transfer function of propagation of light
DF=fftshift(fft2(fftshift(SP_OB.*SFTF)));
Intensity=mat2gray((abs(DF)).^2);
figure; imshow(Intensity);
title('Intensity of the diffraction pattern')
Figure 4.10 Intensity patterns of the diffracted field from a rectangular aperture simulated using the ASM when the propagation distance is (a) z = 0.05 m and (b) z = 0.20 m.
Figure 4.11 Product of two band-limited signals and spectrum of the resulting
product.
To calculate Eq. (4.28) in the discrete space, we still assume that the sampling period along the x-axis is Δx with M samples in total, and that the sampling period along the y-axis is Δy with N samples in total. At the diffraction plane, the numbers of samples remain the same, but the sampling periods (Δ_x^d, Δ_y^d) change according to the following relationships:
$$\Delta_x^d = \frac{\lambda z}{M\Delta x}, \tag{4.29a}$$
$$\Delta_y^d = \frac{\lambda z}{N\Delta y}. \tag{4.29b}$$
Consequently, the discrete version of the Fresnel diffraction formula is given by
$$\psi_p[p, q] = \frac{jk_0 e^{-jk_0 z}}{2\pi z}\,Q_2[p, q]\,\mathrm{IDFT2D}\left\{\psi_{p0}[m, n]\,Q_1[m, n]\right\}, \tag{4.30}$$
where
$$Q_1[m, n] = \exp\left[\frac{-jk_0}{2z}\left(m^2\Delta_x^2 + n^2\Delta_y^2\right)\right], \tag{4.31}$$
$$Q_2[p, q] = \exp\left\{-j\pi\lambda z\left[\left(\frac{p}{M\Delta x}\right)^2 + \left(\frac{q}{N\Delta y}\right)^2\right]\right\} \tag{4.32}$$
with [m, n] and [p, q] the indices of the samples in the spatial domain (object plane) and in the Fourier domain (or diffraction plane), respectively. Thus their ranges are: −M/2 ≤ m ≤ M/2 − 1, −N/2 ≤ n ≤ N/2 − 1, −M/2 ≤ p ≤ M/2 − 1, and −N/2 ≤ q ≤ N/2 − 1. To obtain Eq. (4.30), we first replace (x, y) by (mΔx, nΔy) to get Q₁[m, n]. We then replace (x, y) by (pΔ_x^d, qΔ_y^d) to get Q₂[p, q] so that ψ_p(x, y) becomes ψ_p[p, q]. In addition, it should be noted that the IDFT is used in Eq. (4.30) instead of the DFT, as the IDFT [see Eq. (4.6)] is defined according to MATLAB, but not according to Eq. (1.22). Note that we usually set Δx = Δy and M = N so that the sampling periods of the two axes on the diffraction plane are at the same scale. In the following subsection we consider this symmetrical case.
We can, for instance, assign half of the total bandwidth to each term on the left hand side of the above equation. There is, however, a z-dependence in Q₁[m, n], and we need to determine the range of validity in z for a given bandwidth so as to avoid aliasing. Q₁[m, n] is a quadratic-phase exponential function, and thus its spatial frequency can be evaluated in terms of the local fringe frequency. When evaluated at the edge of the field, i.e., at (M/2)Δx, the local fringe frequency should be smaller than 1/4Δx (because it only shares half of the sampling bandwidth), that is,
$$\frac{M\Delta x/2}{\lambda z} \le \frac{1}{4\Delta x}, \tag{4.33a}$$
or
$$z \ge \frac{2M\Delta_x^2}{\lambda}. \tag{4.33b}$$
Therefore there is a minimum limit on the propagation distance to avoid aliasing error.
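For instance (a minimal sketch of our own with assumed parameters):

M=512; % number of samples (assumed)
dx=10*10^-6; % sampling period in m (10 um, assumed)
lambda=0.633*10^-6; % wavelength in m
z_min=2*M*dx^2/lambda; % Eq. (4.33b): ~0.16 m here
fprintf('FDM is alias-free for z >= %.3f m\n', z_min);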
In Eq. (4.30), ψ_p[p, q] is proportional to the product of the result of the inverse discrete Fourier transform and another quadratic-phase exponential function, Q₂[p, q]. The phase function Q₂[p, q] can be ignored when only the intensity distribution of the diffracted field is measured. For example, in the digital reconstruction of a hologram, we only take the modulus of the reconstructed field. If the phase of the diffracted field is of interest (e.g., in holographic recording), Q₂[p, q] cannot be ignored. Again, the maximum spatial frequency present in this situation is the sum of the maximum spatial frequencies of Q₂[p, q] and IDFT2D{ψ_p0[m, n]Q₁[m, n]}. Usually the edge of the output spatial domain is
Figure 4.13 (a) Rectangular aperture, and modulus of the diffraction pattern at
(b) z 0.02 m and (c) z 0.06 m.
under-sampled as the local frequency increases linearly away from the center of the hologram. A simple way to alleviate under-sampling is to interpolate the array of IDFT2D{ψ_p0[m, n]Q₁[m, n]} before multiplying by Q₂[p, q]. What is meant by interpolation is to add more sampling points between the original adjacent sampling points. Thus the resulting sampling bandwidth can be improved. Another way is to perform zero-padding, that is, to pad zero pixels around the initial field before the calculation. However, the calculation load increases significantly using these two methods. Thus we recommend using the FDM to evaluate only the intensity distribution of the diffracted field for a propagation distance longer than 2MΔ_x²/λ.
Table 4.4 MATLAB code for calculating the diffraction eld of a rectangular aperture
using the FDM, see Example 4.5
c=1:M0;
r=1:M0;
[C, R]=meshgrid(c, r);
THOR=((R-M0/2-1).^2+(C-M0/2-1).^2).^0.5;
A=THOR.*delta;
OB=zeros(M0); % object pattern
OB(193:321,53:181)=1;
Q1=exp(-1i*pi/lambda/z.*(A.^2));
FTS=fftshift(ifft2(fftshift(OB.*Q1))); % Fresnel diffraction
AFD=abs(FTS);
AFD=AFD/max(max(AFD));
figure; imshow(OB);
title('Rectangular aperture')
figure; imshow(AFD);
title('Modulus of the Fresnel diffraction pattern')
In Fig. 4.14, we plot the critical distances described by Eqs. (4.27) and (4.33b). In the plot, M = 1024 and M₀ = 512. It is apparent that, for any selected Δx, the ASM is valid for shorter distances, while the FDM is valid for longer distances. However, there is a forbidden gap, i.e., a region where aliasing occurs, between the two methods. For the ASM, the critical distance can be extended by increasing the spatial size M, i.e., by zero-padding. For the FDM, the only way to shorten the critical distance is to reduce the sampling period, because the spatial size M cannot be smaller than M₀.
Figure 4.14 Critical distances for digital calculation using the angular spectrum
method (ASM) and the Fresnel diffraction method (FDM).
For the ASM, we first Fourier transform Eq. (4.17) and multiply both sides by H⁻¹. We then write down a new form of Eq. (4.17):
$$\psi_{p0}(x, y) = \mathcal{F}^{-1}\left\{\mathcal{F}\left\{\psi_p(x, y; z)\right\} H^{-1}(k_x, k_y; z)\right\}. \tag{4.34}$$
Because H⁻¹(k_x, k_y; z) = H(k_x, k_y; −z), Eq. (4.34) can be expressed as
$$\psi_{p0}(x, y) = \mathcal{F}^{-1}\left\{\mathcal{F}\left\{\psi_p(x, y; z)\right\} H(k_x, k_y; -z)\right\}. \tag{4.35}$$
So it is concluded that the ASM [Eq. (4.18)] can be applied directly to find the field at the initial plane by setting a negative propagation distance. In backward propagation, the condition of alias-free simulation for the ASM [Eq. (4.27)] must also be satisfied, but now the z in Eq. (4.27) is replaced by |z|.
For the FDM, we can also re-arrange Eq. (4.30) to obtain the following form:
$$\psi_{p0}[m, n] = \frac{j2\pi z\,e^{jk_0 z}}{k_0}\,Q_1^{-1}[m, n]\,\mathrm{DFT2D}\left\{\psi_p[p, q]\,Q_2^{-1}[p, q]\right\}. \tag{4.36}$$
At a glance, Eq. (4.36) is different from Eq. (4.30). But it should be noticed that now we have the sampling periods on the diffraction plane (Δ_x^d, Δ_y^d), so we have to rewrite Eqs. (4.31) and (4.32) according to Eq. (4.29):
$$Q_2[p, q] = \exp\left[\frac{-jk_0}{2z}\left(p^2\Delta_x^{d2} + q^2\Delta_y^{d2}\right)\right], \tag{4.37}$$
$$Q_1[m, n] = \exp\left\{-j\pi\lambda z\left[\left(\frac{m}{M\Delta_x^d}\right)^2 + \left(\frac{n}{N\Delta_y^d}\right)^2\right]\right\}. \tag{4.38}$$
reference wave forms an FZP. We can easily see that the ray of the object light emitted from its lower edge to the upper edge of the CCD sensor (dashed line in Fig. 4.15) forms the maximum angle with respect to the z-axis (rays forming diagonal connections between the object and the CCD sensor are ignored). Thus, the maximum transverse propagation distance of such a ray is D/2 + MΔx_CCD/2, and the corresponding maximum local fringe frequency [see Eq. (2.5)] is (D + MΔx_CCD)/2λz. To avoid under-sampling of the holographic fringes, the maximum local fringe frequency must be smaller than the Nyquist frequency [see Eq. (4.5)] associated with the CCD, that is,
$$\frac{D + M\Delta x_{CCD}}{2\lambda z} \le \frac{1}{2\Delta x_{CCD}}, \tag{4.40a}$$
or
$$z \ge \frac{\Delta x_{CCD}}{\lambda}\left(D + M\Delta x_{CCD}\right). \tag{4.40b}$$
Therefore, there is a minimum distance between the object and the CCD imager if aliasing is to be avoided.
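As a numerical illustration of Eq. (4.40b) (our own sketch; object size, pixel count, and pitch are assumed):

dxCCD=8*10^-6; % pixel pitch in m (assumed)
M=1024; % number of pixels (assumed)
D=1*10^-2; % object size in m (assumed)
lambda=0.633*10^-6; % wavelength in m
z_min=dxCCD*(D+M*dxCCD)/lambda; % ~0.23 m for these numbers
fprintf('z >= %.2f m\n', z_min);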
For an off-axis object and an offset reference wave, the condition generalizes to [cf. Problem 4.6]
$$z \ge \frac{\Delta x_{CCD}\left(D + 2\delta + M\Delta x_{CCD}\right)}{\lambda - 2\Delta x_{CCD}\sin\theta}, \tag{4.41}$$
where δ is the transverse shift of the object from its center, and θ is the offset angle of the reference wave. Note that in Eqs. (4.40) and (4.41), the zeroth-order beam
and the twin image are not considered. In other words, the condition in Eq. (4.41)
ensures only that the hologram is well sampled and it does not guarantee that all
these beams are separated in the reconstruction plane. We will discuss this aspect
in Section 4.4.2.
Fourier holography
Figure 4.17 depicts the geometry of Fourier holography. Now the object and the CCD are at the front focal plane and the back focal plane of the Fourier lens, respectively. In this geometry, an off-axis object point produces a tilted plane wave at the CCD plane, resulting in a uniform sinusoidal fringe with the plane wave reference light [also see Fig. 3.7(a)]. However, the finest fringe is contributed by the point source emitting at the margin of the object at x = D/2. The finest fringe on the CCD plane is therefore given by [see Eq. (3.12)]
$$\left|1 + e^{jk_x D/2}\right|^2 = 2 + 2\cos\left(\frac{\pi x D}{\lambda f}\right), \tag{4.42}$$
where k_x = 2πx/λf. Thus to resolve the finest fringe, the local fringe frequency must be smaller than the Nyquist frequency associated with the CCD, that is,
$$\frac{D}{2\lambda f} \le \frac{1}{2\Delta x_{CCD}}, \tag{4.43a}$$
or
$$D \le \frac{\lambda f}{\Delta x_{CCD}}, \tag{4.43b}$$
where f is the focal length of the Fourier transform lens. Note that the Fourier hologram can be regarded as a recording of the spectrum with resolution Δk = 2πΔx_CCD/λf. So the size of the reconstruction space can be evaluated to be 2π/Δk = λf/Δx_CCD, which is consistent with Eq. (4.43b). So the resolution, or the sampling distance Δx, of the reconstructed image is given by
$$\Delta x = \frac{\lambda f}{M\Delta x_{CCD}}, \tag{4.44}$$
where MΔx_CCD is the width of the CCD imager. Again, the twin image and the zeroth-order light are not taken into account in the above analysis.
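For example (our own sketch; the focal length and sensor parameters are assumed), Eqs. (4.43b) and (4.44) give the object-size limit and the reconstruction resolution directly:

lambda=0.633*10^-6; % wavelength in m
f=0.1; % focal length of the Fourier lens in m (assumed)
dxCCD=8*10^-6; M=1024; % pixel pitch and count (assumed)
D_max=lambda*f/dxCCD; % Eq. (4.43b): ~7.9 mm
dx_rec=lambda*f/(M*dxCCD); % Eq. (4.44): ~7.7 um
fprintf('D <= %.2f mm, dx = %.2f um\n', D_max*1e3, dx_rec*1e6);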
Similar to optical holography, the three diffracted orders can be separated in the off-axis geometry (Fig. 4.16). In Section 3.2, we showed that the offset angle of the reference light should be large enough to avoid interference between the +1st order and the zeroth-order beam, i.e.,
$$\frac{2\pi}{\lambda}\sin\theta \ge \frac{3}{2}B, \tag{4.45}$$
where B is the bandwidth of the +1st and −1st orders of light. On the other hand, the angle of the reference light cannot be too large. Otherwise, the +1st order will be under-sampled. Thus another condition is given by
$$\frac{2\pi}{\lambda}\sin\theta + \frac{B}{2} \le \frac{2\pi}{2\Delta x_{CCD}}. \tag{4.46}$$
The bandwidth of the object light, B, depends on the light impinging on the CCD with the largest incident angle. Similar to the analysis of Eq. (4.40), the bandwidth can be determined to be
$$B = 2\pi\,\frac{D + M\Delta x_{CCD}}{\lambda z}. \tag{4.47}$$
Combining Eqs. (4.45), (4.46), and (4.47), the offset angle of the reference light is limited to the range
$$\sin^{-1}\left[\frac{3(D + M\Delta x_{CCD})}{2z}\right] \le \theta \le \sin^{-1}\left[\frac{\lambda}{2\Delta x_{CCD}} - \frac{D + M\Delta x_{CCD}}{2z}\right]. \tag{4.48}$$
In Eq. (4.48), the critical condition occurs when the equals sign is selected, giving
$$\frac{3(D + M\Delta x_{CCD})}{2z} = \frac{\lambda}{2\Delta x_{CCD}} - \frac{D + M\Delta x_{CCD}}{2z}, \tag{4.49}$$
so that
$$z_c = \frac{4\Delta x_{CCD}\left(D + M\Delta x_{CCD}\right)}{\lambda}, \tag{4.50}$$
which is also the minimum distance for successful off-axis holography. By substituting Eq. (4.50) into Eq. (4.48), we can also find the critical offset angle
$$\theta_c = \sin^{-1}\left(\frac{3\lambda}{8\Delta x_{CCD}}\right). \tag{4.51}$$
Equations (4.50) and (4.51) describe the optimized recording conditions for off-axis digital holography.

z_c = 4 × 8 μm × (10⁴ μm + 1024 × 8 μm) / 0.633 μm ≈ 92 cm

and

θ_c = sin⁻¹[3 × 0.633 μm / (8 × 8 μm)] ≈ 1.7°.

So the offset angle is 1.7° and the minimum distance between the object and the CCD is 92 cm in order to avoid aliasing. It should be noted that, according to Section 4.2, the resolution of the reconstructed image, using Eq. (4.50), now is

Δx = λz_c/(MΔx_CCD) ≈ 71 μm,

which is much larger than the pixel pitch of the CCD. If a larger offset angle as well as a larger propagation distance are selected, the resolution of the reconstructed image will be sacrificed further.
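The worked numbers above follow from Eqs. (4.50) and (4.51) in a few lines (our own sketch reproducing the example):

dxCCD=8*10^-6; M=1024; D=1*10^-2; lambda=0.633*10^-6; % as in the example
zc=4*dxCCD*(D+M*dxCCD)/lambda; % Eq. (4.50): ~0.92 m
theta_c=asind(3*lambda/(8*dxCCD)); % Eq. (4.51): ~1.7 degrees
dx=lambda*zc/(M*dxCCD); % reconstructed-image resolution, ~71 um
fprintf('zc = %.2f m, theta_c = %.2f deg, dx = %.0f um\n', zc, theta_c, dx*1e6);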
According to the above analysis, we can see that off-axis Fresnel holography is not a satisfactory method for acquiring digital holograms because most of the bandwidth of the imager is wasted. In Fourier holography, the bandwidth of the reconstructed real image depends only on the size of the imager. However, the object size is limited to one fourth of the field of view. Phase-shifting holography (PSH) is an efficient technique to alleviate the problem of the twin image and the zeroth-order light without sacrificing the bandwidth or the field of view. In PSH, several digital holograms with different phase shifts between them are used to retrieve the complex field of the object light. The details of PSH will be discussed in Section 5.1.
Figure 4.18 (a) Input image (original object), (b) on-axis hologram, (c) spectrum
of (b), and (d) reconstructed image.
Table 4.5 MATLAB code for simulation of on-axis holographic recording and
reconstruction, see Example 4.7
% parameter setup
M=256;
deltax=0.001; % pixel pitch 0.001 cm (10 um)
w=633*10^-8; % wavelength 633 nm
z=25; % 25 cm, propagation distance
figure; imshow(mat2gray(IH));
title('Hologram')
axis off
SP=abs(fftshift(ifft2(fftshift(IH))));
figure; imshow(500.*mat2gray(SP));
title('Hologram spectrum')
axis off
%Step 3: reconstruction
A1=fftshift(ifft2(fftshift(IH)));
Az1=A1.*conj(p);
EI=fftshift(fft2(fftshift(Az1)));
EI=mat2gray(EI.*conj(EI));
figure; imshow(EI);
title('Reconstructed image')
axis off
image is 256 × 256 pixels; the distance between the object and the CCD is 25 cm; the pixel pitch is 10 μm, and the wavelength is 0.633 μm. For simplicity, we assume that the object is in the central area of the input image so that we can ignore the procedure of zero-padding in the simulation of propagation. The simulation result is shown in Fig. 4.18, while the MATLAB code is listed in Table 4.5. Note that in Fig. 4.18(d) the reconstructed image is disturbed by the zeroth-order light and the twin image.
We know that the zeroth-order light and the twin image can be separated from
the reconstructed image using off-axis holography. The code listed in Table 4.5
can be applied directly for the simulation of off-axis holography. However, the
result is noisy due to aliasing issues. So the simulation method must be modied
for off-axis holography.
First, in step 1, we need to know the phase distribution on the hologram plane. We usually use the ASM (Sections 4.3.1 and 4.3.2) to simulate light propagation. However, the condition in Eq. (4.27) must be satisfied in the simulation to avoid aliasing. The size of the CCD (hologram) may be larger than, equal to, or smaller than the workspace. Thus we can crop the resulting field or pad zeros around the resulting field to fit the size of the hologram. For example, if the hologram size is 1024 × 768, we can crop the central 512 × 512 pixels of the hologram for reconstruction. The size of the workspace in this case is then 512 × 512.
In step 2, we add a reference light to the object light and take the intensity of the superposed fields. The bandwidth of the workspace must be twice the bandwidth of the object light if the hologram is an on-axis hologram. If the hologram is an off-axis hologram, the bandwidth of the workspace must be at least four times the bandwidth of the object. We can interpolate the field data obtained in step 2 to ensure sufficient bandwidth.
In step 3, we usually apply the FDM (Sections 4.3.3 and 4.3.4) to simulate reconstruction, provided the phase of the reconstructed image is not of interest. So the condition in Eq. (4.33b) must be satisfied in the simulation.
Because proper zero-padding must be applied in the simulation of propagation, the workspace grows step by step. So we crop the resulting field or hologram at the end of every step to reduce the computation load. Example 4.8 is a demonstration of the simulation of off-axis holography.
Table 4.6 MATLAB code for simulation of off-axis holographic recording and
reconstruction, see Example 4.8
% parameter setup
M=256;
deltax=0.001; % pixel pitch 0.001 cm (10 um)
w=633*10^-8; % wavelength 633 nm
z=20; % z=M*deltax^2/w; % propagation distance
AV=(min(min(abs(EOf)))+max(max(abs(EOf))))/2;
angle=0.3; % reference beam angle; degree
r2=1:4*M;
c2=1:4*M;
[C2, R2]=meshgrid(c2, r2);
Ref=AV*exp(1i*2*pi*sind(angle)*deltax/4.*...
(R2-2*M-1)/w+1i*2*pi*sind(angle)*deltax/4.*(C2-2*M-1)/w);
IH=(EOf+Ref).*conj(EOf+Ref);
IH=IH(257:768,257:768); % reduce the hologram size
figure; imshow(mat2gray(IH));
title('Hologram')
axis off
SP=fftshift(ifft2(fftshift(IH)));
figure; imshow(50.*mat2gray(abs(SP)));
title('Hologram spectrum')
axis off
Figure 4.19 (a) Off-axis hologram, (b) spectrum of (a), and (c) reconstructed
image.
Problems
4.1 Following Example 4.3, calculate and plot the spectrum of a shifted rectangular function, i.e., the rectangular function is not centered at the axis origin.
4.2 Perform and plot the FFT of a rectangular function using MATLAB. Also plot the spectrum of the rectangular function using the result from Table 1.1. Compare the two plots; if you are performing the calculations correctly, you will see only a small difference between the two curves.
4.3 Show that the spectrum of the sampled signal $f(x)\sum_n \delta(x - n\Delta x)$ is periodic in the spectrum domain as follows:
$$\mathcal{F}\left\{f(x)\sum_n \delta(x - n\Delta x)\right\} = 2\pi f_s\sum_n F(k_x - 2\pi f_s n),$$
where f_s = 1/Δx.
4.4 Using the Fresnel diffraction method (FDM) to calculate the diffracted field, and if the sampling period on the object plane along the x-axis is Δx with M samples in total, and the sampling period along the y-axis is Δy with N samples in total, show that at the diffraction plane, with the number of samples being the same, the sampling periods (Δ_x^d, Δ_y^d) change according to the following relationships:
$$\Delta_x^d = \frac{\lambda z}{M\Delta x}, \quad \Delta_y^d = \frac{\lambda z}{N\Delta y},$$
as given by Eq. (4.29).
4.5 Assume that an on-axis hologram is acquired according to the well-sampling condition in Eq. (4.40), i.e., the distance between the object and the CCD is
$$z = \frac{\Delta x_{CCD}}{\lambda}\left(D + M\Delta x_{CCD}\right).$$
Ignore the problem of the zeroth-order light and the twin image. Calculate the resolution of the reconstructed image.
4.6 In off-axis Fresnel holography and for aliasing-free calculations, show that the minimum distance of the object from the CCD is
$$z \ge \frac{\Delta x_{CCD}\left(D + 2\delta + M\Delta x_{CCD}\right)}{\lambda - 2\Delta x_{CCD}\sin\theta}, \tag{4.41}$$
where D is the size of the object, δ is the transverse shift of the object from the center of the object plane, and θ is the offset angle of the reference wave as shown in Fig. 4.16.
References
1. D. H. Seib, Carrier diffusion degradation of modulation transfer function in charge coupled imagers, IEEE Transactions on Electron Devices 21, 210–217 (1974).
2. S. G. Chamberlain and D. H. Harper, MTF simulation including transmittance effects and experimental results of charge-coupled imagers, IEEE Transactions on Electron Devices 25, 145–154 (1978).
3. J. C. Feltz and M. A. Karim, Modulation transfer function of charge-coupled devices, Applied Optics 29, 717–722 (1990).
4. E. G. Stevens, A unified model of carrier diffusion and sampling aperture effects on MTF in solid-state image sensors, IEEE Transactions on Electron Devices 39, 2621–2623 (1992).
5. M. Marchywka and D. G. Socker, Modulation transfer function measurement technique for small-pixel detectors, Applied Optics 31, 7198–7213 (1992).
6. A. M. Pozo and M. Rubio, Comparative analysis of techniques for measuring the modulation transfer functions of charge-coupled devices based on the generation of laser speckle, Applied Optics 44, 1543–1547 (2005).
7. A. M. Pozo, A. Ferrero, M. Rubio, J. Campos, and A. Pons, Improvements for determining the modulation transfer function of charge-coupled devices by the speckle method, Optics Express 14, 5928–5936 (2006).
8. J.-P. Liu, Controlling the aliasing by zero-padding in the digital calculation of the scalar diffraction, Journal of the Optical Society of America A 29, 1956–1964 (2012).
9. A. A. Friesem and J. S. Zelenka, Effects of film nonlinearities in holography, Applied Optics 6, 1755–1759 (1967).
10. O. Bryngdahl and A. Lohmann, Nonlinear effects in holography, Journal of the Optical Society of America 58, 1325–1330 (1968).
5
Digital holography: special techniques
$$I = |\psi_0 + \psi_r\exp(j\delta)|^2 = |\psi_0|^2 + |\psi_r|^2 + \psi_0\psi_r^*\exp(-j\delta) + \psi_0^*\psi_r\exp(j\delta), \tag{5.1}$$
where ψ₀ and ψᵣ stand for the complex amplitudes of the object light and the reference light, respectively, and δ stands for the phase induced by the phase shifter. Multiple holograms are acquired sequentially and, upon complete acquisition, the complex amplitude of the object light can be derived from the holograms.
$$I_0 = |\psi_0|^2 + |\psi_r|^2 + \psi_0\psi_r^* + \psi_0^*\psi_r,$$
$$I_{\pi/2} = |\psi_0|^2 + |\psi_r|^2 - j\psi_0\psi_r^* + j\psi_0^*\psi_r,$$
$$I_\pi = |\psi_0|^2 + |\psi_r|^2 - \psi_0\psi_r^* - \psi_0^*\psi_r,$$
$$I_{3\pi/2} = |\psi_0|^2 + |\psi_r|^2 + j\psi_0\psi_r^* - j\psi_0^*\psi_r.$$
First, we can remove the zeroth-order light, |ψ₀|² + |ψᵣ|², by the following subtractions:
$$I_0 - I_\pi = 2\psi_0\psi_r^* + 2\psi_0^*\psi_r, \tag{5.2a}$$
$$I_{\pi/2} - I_{3\pi/2} = -2j\psi_0\psi_r^* + 2j\psi_0^*\psi_r. \tag{5.2b}$$
The complex amplitude of the object light then follows from
$$\left(I_0 - I_\pi\right) + j\left(I_{\pi/2} - I_{3\pi/2}\right) = 4\psi_0\psi_r^*. \tag{5.3}$$
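A minimal numerical check of Eq. (5.3) (our own sketch; the object and reference values are arbitrary scalars):

psi0=0.7*exp(1i*1.2); psir=1.0; % assumed object and reference amplitudes
d=[0, pi/2, pi, 3*pi/2]; % the four phase shifts
I=abs(psi0+psir*exp(1i*d)).^2; % the four recorded intensities, Eq. (5.1)
rec=(I(1)-I(3))+1i*(I(2)-I(4)); % Eq. (5.3): equals 4*psi0*conj(psir)
abs(rec/4-psi0*conj(psir)) % ~1e-16, so psi0 is recovered when psir is known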
$$I_0 = |\psi_0|^2 + |\psi_r|^2 + \psi_0\psi_r^* + \psi_0^*\psi_r, \tag{5.6a}$$
$$I_{\pi/2} = |\psi_0|^2 + |\psi_r|^2 - j\psi_0\psi_r^* + j\psi_0^*\psi_r. \tag{5.6b}$$
(QPSH) [10]. In standard QPSH, the number of exposures is four (two holograms and two intensity patterns, |ψ₀|² and |ψᵣ|²), the same as for four-step PSH. However, the recording of the intensity patterns is not sensitive to any vibration or any mechanism that causes phase variation between the object wave and the reference wave, and thus the recording of QPSH is easier than that of four-step PSH. It is also interesting to note that we do not need to measure the intensity of the object light [10–12]. To see how this happens, we first take the square of Eqs. (5.6a) and (5.6b), giving
$$I_0^2 + \left[|\psi_0|^2 + |\psi_r|^2\right]^2 - 2I_0\left[|\psi_0|^2 + |\psi_r|^2\right] = 4\left[\mathrm{Re}\left(\psi_0\psi_r^*\right)\right]^2, \tag{5.8a}$$
$$I_{\pi/2}^2 + \left[|\psi_0|^2 + |\psi_r|^2\right]^2 - 2I_{\pi/2}\left[|\psi_0|^2 + |\psi_r|^2\right] = 4\left[\mathrm{Im}\left(\psi_0\psi_r^*\right)\right]^2. \tag{5.8b}$$
In Fig. 5.2, the gray-level represents the amount of error. We can see that the error over the whole 2π range of the phase difference is small only when the relative intensity, defined as |ψᵣ|²/|ψ₀|², is larger than 2. Accordingly, to ensure that the calculated intensity of the object light at any pixel is correct, the intensity of the reference light must be at least two times larger than the maximum intensity of the object light. This demand limits the dynamic range of the acquired digital holograms, and thus limits the signal-to-noise ratio of the reconstructed images. When the plus sign in Eq. (5.11) is chosen, the results indicate that not only the relative intensity but also arg(ψ₀ψᵣ*) must be limited to a small range to ensure a small error. So the minus sign in Eq. (5.11) is always selected.
$$I_1 = |\psi_0|^2 + |\psi_r|^2 + \psi_0\psi_r^* + \psi_0^*\psi_r, \tag{5.13a}$$
Table 5.1 MATLAB code for calculating the error of QPSH, see Fig. 5.2
I1=1+I+2*(I.^0.5).*cos(T*pi);
I2=1+I+2*(I.^0.5).*sin(T*pi);
D=8*I.*I2-(I1-I2-2*I).^2;
Io=(I1+I2-D.^0.5)./2;
err=abs(Io-1);
figure; mesh(I,T,err);
view(2)
set(gca, 'XScale', 'log');
xlabel('Relative intensity of the reference light');
ylabel('Phase difference (\pi radian)');
colorbar
axis tight
There are some techniques for eliminating the phase error or for measuring the
actual phase step used so that a high-quality reconstructed image can be obtained.
For example, if the phase applied in retrieving the object eld deviates from the
phase step applied in the experiment, the twin image cannot be removed com-
pletely. Thus the correct phase step can be determined to be the phase at which the
power of the residual twin image is minimized. This calibration method can be
easily implemented for off-axis holography [8], but not for on-axis holography as
the twin image and the zeroth-order beam overlap. Hence we cannot evaluate the
phase by monitoring the twin image.
For on-axis holography, one can use other constraints as a measurement of the
residual power of the twin image. For example, the amplitude should be uniform
across the object plane if the object is a purely phase object. When the phase applied
in retrieving the object light is correct, the amplitude uctuation at the object plane
should be minimized [13, 14]. Other algorithms have been proposed to directly
estimate the sequential phase step process between the holograms [15, 16].
In addition, the object light is also directed to the CCD by a beam splitter. As a
result, interferences on the four pixels of a super pixel correspond to four different
phase steps. Subsequently, we can calculate the complex amplitude of the object
wave using Eq. (5.4). The pixel size of the retrieved object light thus has the same
size as a super pixel, but is four times larger than the pixel size of the CCD.
The pixel-by-pixel imaging in Fig. 5.3 demands high-precision alignment and is sensitive to environmental interference. An improvement is made by attaching the phase mask directly against the CCD chip [17–19]. Alternatively, a random phase mask can be used instead of a deterministic phase mask [20]. However, several pre-measurements must be performed in order to calibrate the phase of the reference light among the super pixels.
Figure 5.4 Four holograms for (a) δ = 0, (b) δ = π/2, (c) δ = π, and (d) δ = 3π/2.
Fig. 5.5(a), which is identical to the original pattern [Fig. 4.18(a)]. There is no noticeable zeroth-order light or twin image in the reconstructed image, as compared to that in conventional on-axis holographic reconstruction [Fig. 4.18(d)]. In another simulation, we set the phase steps to be 0, 0.4π, 0.8π, and 1.2π. The complex hologram is still obtained using Eq. (5.4). But as a result of the phase errors, the complex hologram is not correct. The corresponding reconstructed image is shown in Fig. 5.5(b). We can see that the image quality is a little worse than that shown in Fig. 5.5(a). In any case, there is no zeroth-order light present in the reconstruction. This is because, in producing the complex hologram, the zeroth-order light can be removed even though the phase steps are not correct [Eq. (5.14)]. The MATLAB code for this example is listed in Table 5.2 for reference.
Table 5.2 MATLAB code for simulation of four-step PSH, see Example 5.1; the code is
based on simulation of an on-axis hologram (Example 4.7)
figure(1); imshow(I);
title('Original object')
axis off
figure(2)
subplot(2,2,1)
imshow(I0/MAX);
axis off
title('hologram 1')
subplot(2,2,2)
imshow(I1/MAX);
axis off
title('hologram 2')
subplot(2,2,3)
imshow(I2/MAX);
axis off
title('hologram 3')
subplot(2,2,4)
imshow(I3/MAX);
axis off
title('hologram 4')
%Step 3: Reconstruction
CH=(I0-I2)-1j*(I1-I3); % the complex hologram (4-step PSH)
A1=fftshift(ifft2(fftshift(CH)));
Az1=A1.*conj(p);
EI=fftshift(fft2(fftshift(Az1)));
EI=(EI.*conj(EI));
EI=EI/max(max(EI));
figure(3);
imshow(EI);
title('Reconstructed image of 4-step PSH')
axis off
versus τ, i.e., the bias has been subtracted from I(τ) and only the intensity variation
is plotted, for a plane wave source. In this case, we have complete coherence.
For a partially coherent source having a sinc-squared type power spectrum [see
Eq. (2.63)], we have

I(τ) = 2I₀[1 + |γ(τ)| cos(ω₀τ)]    (5.18)
Figure 5.7 I(τ) − 2I₀ versus τ for (a) a completely coherent source, and (b) a source
having a sinc-squared type power spectrum.
according to Eq. (2.49). Figure 5.7(b) shows the intensity variation as a function
of τ, a phenomenon in which an interference pattern forms only when the optical
path difference between the two incident fields is less than the coherence length
of the light source, ℓc = cτ₀. Low-coherence holography uses this phenomenon
to allow for interference between the object wave and the reference wave.
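The role of |γ(τ)| in Eq. (5.18) is easily visualized. The following minimal MATLAB sketch plots I(τ) − 2I₀, assuming an illustrative Gaussian degree of coherence and made-up source values rather than the sinc-squared-spectrum source of the text.

% Minimal sketch of the fringe envelope of Fig. 5.7, assuming an
% illustrative Gaussian |gamma(tau)| and made-up source parameters.
I0 = 1; lambda = 1064e-9; c = 3e8;
w0 = 2*pi*c/lambda;                    % center angular frequency
tau_c = 50e-15;                        % assumed coherence time
tau = linspace(-200e-15, 200e-15, 4001);
gam = exp(-(tau/tau_c).^2);            % assumed |gamma(tau)|
dI = 2*I0*gam.*cos(w0*tau);            % I(tau) - 2I0, cf. Eq. (5.18)
plot(tau*1e15, dI); xlabel('\tau (fs)'); ylabel('I(\tau) - 2I_0');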
The thickness of the layer of the object that can be holographically recorded is
about half of the coherence length, owing to the reflection setup. Since ℓc can be
made short by using an appropriate source, the thickness of the layer of a thick
object being recorded can be made small. The capability of imaging only a thin
section optically, without any out-of-focus haze, i.e., noise coming from sections
other than the section being imaged, is called optical sectioning [21–24]. We
shall adopt the holographic microscope investigated by Lin et al. [21] to illustrate
the principle of low-coherence holography.
The principle of low-coherence holography is easily understood now that
we have discussed low-coherence interferometry. The experimental setup of a
low-coherence phase-shifting digital holographic microscope for measuring sectional
images is shown in Fig. 5.8. It consists of a modified Michelson interferometer
with a PZT-driven mirror in the reference arm and a high-resolution
CCD sensor (pixel number, 1280 × 1024; pixel size, 5.2 μm × 5.2 μm). The
motorized linear stage is moved longitudinally step by step to different depths
of the sample to capture holograms at different layers. Then, the PZT-driven
Figure 5.8 (a) Setup of the low-coherence phase-shifting digital holographic
microscope: fiber (YDFA) output, lenses, objectives, beam splitter (BS),
PZT-driven mirror, sample on a linear stage, and CCD. (b) Power spectrum of
the light source, showing a bandwidth of about 50 nm (horizontal axis
1000–1150 nm).
where k = 1, 2, 3, 4 is for the different phase shifts in four-step phase-shifting
holography. We have borrowed the result from Eq. (2.45) to fit into our current
holographic recording situation. In Eq. (5.19), τ = 2(l₀ − l_Rk)/c, where l₀ denotes
the distance between the object and the center of the beam splitter (BS), and l_Rk
is the distance between the reference mirror and the center of the beam splitter;
by changing this distance for different k, we can realize the different phase shifts
between the object wave and the reference wave. Now in Eq. (5.19), φ_R can be
grouped with φ₀ because l_Rk is a variable; hence we can write

ω₀τ = ω₀ [2(l₀ − l_Rk)]/c = (2π/λ₀) 2(l₀ − l_Rk).    (5.21)

Since 2(l₀ − l_Rk) can be changed to obtain different phase shifts for path lengths of
0, λ₀/4, λ₀/2, and 3λ₀/4, which correspond to phase shifts of 2π(k − 1)/4 = (k − 1)π/2, we have

τ = (k − 1)λ₀/4c.    (5.23)

Using Eqs. (5.20)–(5.23), together with the functional form of |γ(τ)|, we can re-write
Eq. (5.19) as

I_{(k−1)π/2} ∝ |E₀|² + |E_R|² + 2|E₀||E_R| sinc[(k − 1)/4] cos[φ₀ + (k − 1)π/2].    (5.24)
By using the phase-shifting procedure, i.e., according to Eq. (5.4) and using
Eq. (5.24), we can extract the complex field E₀ = |E₀|e^{jφ₀}, with the phase and
the amplitude given, respectively, as follows:
φ₀ = arctan{ (I_{3π/2} − I_{π/2})[sinc(1/2) + 1] / (I₀ − I_π)[sinc(1/4) + sinc(3/4)] },    (5.25)
and
|E₀| = √{ [(I_{3π/2} − I_{π/2})(sinc(1/2) + 1)]² + [(I₀ − I_π)(sinc(1/4) + sinc(3/4))]² } / { 2[sinc(1/2) + 1][sinc(1/4) + sinc(3/4)] }    (5.26)
by assuming E_R = 1.
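As a reference, the following is a minimal MATLAB sketch of Eqs. (5.25) and (5.26), assuming the four recorded holograms are stored as the matrices I_0, I_pi2, I_pi, and I_3pi2, and E_R = 1.

% Minimal sketch of Eqs. (5.25) and (5.26), assuming the four low-coherence
% holograms I_0, I_pi2, I_pi, I_3pi2 are matrices and E_R = 1.
snc = @(x) sin(pi*x)./(pi*x);          % sinc as used in the text
s14 = snc(1/4); s12 = snc(1/2); s34 = snc(3/4);
num = (I_3pi2 - I_pi2)*(s12 + 1);
den = (I_0 - I_pi)*(s14 + s34);
phi0 = atan2(num, den);                % Eq. (5.25), quadrant-resolved
E0 = sqrt(num.^2 + den.^2)/(2*(s12 + 1)*(s14 + s34)); % Eq. (5.26)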
Figure 5.9 shows a tranquilized zebrafish. Holograms were captured using the
CCD at 2.5 frames/second. Figure 5.10(a) shows the portion of the tail of the
zebrafish where holograms were captured, and Fig. 5.10(b)–(d) show three optical
sections. Clearly, in Fig. 5.10(d), we see the outline of the spine at that section.
Figure 5.11 shows the phase plot at the section z = 60 μm, where the spine is
clearly outlined.
Current low-coherence holographic recording systems have a sectioning capability
of about 10–20 μm, as is the case shown in the above results. When using an
ultra-wideband source, such as a supercontinuum with a spectral bandwidth of about
500 nm, a very short coherence length of the order of 1 μm is possible.
Figure 5.9 Image of a zebrafish. Reprinted from Ref. [21], with permission,
OSA.
Figure 5.10 (a) Reconstruction (low coherence) of a portion of the tail of the
zebrafish; (b)–(d) optical sections at z = 0 μm, z = 30 μm, and z = 60 μm.
Figure 5.11 Phase plot (rad) at the section z = 60 μm; x and y in μm.
∇²ψᵢ(R) + k₀²ψᵢ(R) = 0.    (5.30)
ψ_s(R) is the scattered field, which is attributed entirely to the inhomogeneities. By
substituting Eq. (5.29) into Eq. (5.28), together with Eq. (5.30), we can obtain the
following equation for the scattered field:

∇²ψ_s(R) + k₀²ψ_s(R) = −o(R)ψ_p(R).    (5.31)
A solution to the above scalar Helmholtz equation can be written in terms of the
Green function. The Green function, G(R, R′), is a solution of the following
equation:

∇²G(R, R′) + k₀²G(R, R′) = −δ(R − R′)    (5.32)

with

G(R, R′) = exp(jk₀|R − R′|) / (4π|R − R′|) = G(|R − R′|).
Now, the source term in Eq. (5.31) can be written as a collection of impulses
weighted by o(R′)ψ_p(R′) and shifted by R′:

o(R)ψ_p(R) = ∫ o(R′)ψ_p(R′) δ(R − R′) d³R′.    (5.33)
Note that in Eq. (5.34), we still have the unknown total field, ψ_p(R′). To find the
solution to the scattered field, we make use of one of the simplest approximations,
called the first Born approximation. Other approximations, such as the first Rytov
approximation, have also been used to estimate the solution to the scattered field
[26]. The first Born approximation states that the scattered field is much weaker
than the incident field, i.e.,

ψ_s(R) ≪ ψ_p(R).    (5.35)

Under this approximation, we can replace ψ_p(R′) by ψᵢ(R′) in Eq. (5.34) to obtain
the Born approximation of the scattered field, given by

ψ_s(R) = ∫ G(R − R′) o(R′) ψᵢ(R′) d³R′.    (5.36)
Based on the above result, we will now derive what is known as the Fourier
diffraction theorem in tomography. The theorem relates the Fourier transform
of the scattered field to the three-dimensional Fourier transform of the object
function. The Green function is a spherical wave, and we now introduce Weyl's
plane wave expansion of a spherical wave [27]:

G(|R − R′|) = exp(jk₀|R − R′|)/(4π|R − R′|)
= (j/8π²) ∫∫ [exp{j[k_x(x − x′) + k_y(y − y′) + k_z(z − z′)]} / √(k₀² − k_x² − k_y²)] dk_x dk_y,    (5.37)
where k_z = √(k₀² − k_x² − k_y²), under the restriction that k_x² + k_y² < k₀² so that
evanescent waves are ignored. Now assuming that the incident field is a plane
wave propagating along the z-direction, i.e., ψᵢ(R) = exp(jk₀z), and measuring the
scattered field at z = l as shown in Fig. 5.12, Eq. (5.36) becomes

ψ_s(x, y; z = l) = ∫∫∫ [(j/8π²) ∫∫ (exp{j[k_x(x − x′) + k_y(y − y′) + k_z(l − z′)]} / k_z) dk_x dk_y]
× o(x′, y′, z′) exp(jk₀z′) dx′ dy′ dz′.    (5.38)
Figure 5.12 The two-dimensional scattered field spectrum along the plane z = l
is related to the Fourier transform of the object on the surface of the Ewald sphere.
∫∫∫ exp[−j(k_x x′ + k_y y′ + k_z z′)] o(x′, y′, z′) exp(jk₀z′) dx′ dy′ dz′ = F₃D{o(x, y, z) exp(jk₀z)}
= O(k_x, k_y, k_z − k₀).    (5.39)

The relation k_x² + k_y² + k_z² = k₀² is the Ewald equation, which says that the
frequency spectrum is limited to a sphere in the frequency domain, the so-called
Ewald sphere. Hence, Eq. (5.40),

F{ψ_s(x, y; z = l)} ∝ (j e^{jk_z l}/k_z) O(k_x, k_y, k_z − k₀),    (5.40)

states that the two-dimensional scattered field spectrum along the plane
z = l is related to the Fourier transform of the object on the surface of the Ewald
sphere. The situation is shown in Fig. 5.12. For brevity, we only illustrate along the two
directions x and z, and therefore k_x and k_z, accordingly. The solid line of the Ewald
sphere can be considered as transmission tomography, whereas the dotted line indicates
reflection tomography, where the measured field is on the plane z = −l.
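The mapping described by the Fourier diffraction theorem is straightforward to set up numerically. The following minimal MATLAB sketch, with illustrative names (N, dx, lambda), assigns each spectral sample of the measured scattered field to its location on the Ewald sphere.

% Minimal sketch of the Ewald-sphere mapping implied by Eq. (5.39),
% assuming an N x N measured field sampled at pitch dx, wavelength lambda.
k0 = 2*pi/lambda;
kvec = 2*pi*(-N/2:N/2-1)/(N*dx);
[kx, ky] = meshgrid(kvec);
valid = (kx.^2 + ky.^2) < k0^2;   % keep propagating waves only
kz = zeros(N);
kz(valid) = sqrt(k0^2 - kx(valid).^2 - ky(valid).^2);
kz_obj = kz - k0;                 % third coordinate of O(kx, ky, kz - k0)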
The power of diffraction tomography is that on rotating the object through 360
degrees, each rotation maps a different region of the three-dimensional frequency
spectrum of the object function. After a complete mapping by rotation, we
can take the inverse three-dimensional Fourier transform to recover the object
function, and hence the three-dimensional complex refractive index. Figure 5.13 summarizes the
P_iz(k₀x/f, k₀y/f; z) = P_i(k₀x/f, k₀y/f) ∗ h(x, y; z),  i = 1, 2.    (5.43)

In the above equation, P_i(k₀x/f, k₀y/f) is the optical field at the back focal plane of
lens L1. According to the setup described in Fig. 5.14, and knowing that the
Fourier transform of the pupil in the front focal plane of a lens is the field
distribution at the back focal plane of the lens [see Eq. (1.45)], we have

P_i(k₀x/f, k₀y/f) = F{p_i(x, y)}|_{k_x = k₀x/f, k_y = k₀y/f},  i = 1, 2,    (5.44)
where p_i(x, y) is the pupil function at the front focal plane of L1.
To find the complex field at the object plane, we consider a typical case, namely
the first pupil is clear, i.e., p₁(x, y) = 1, and the second pupil is a
pinhole, p₂(x, y) = δ(x, y). Therefore, beam 1 becomes a spherical wave behind
lens L1, while beam 2 becomes a plane wave, as shown in Fig. 5.14. The optical
field at z away from the focal plane of lens L1 is thus given by

S(x, y; z) = (jk₀/2πz) exp[−j(k₀/2z)(x² + y²)] exp(jω₀t) + a exp[j(ω₀ + Ω)t],    (5.45)

where a is a proportionality constant. The above equation is similar to Eq. (2.3) for
a point-source hologram. The intensity on the object plane is
|S(x, y; z)|² = |(jk₀/2πz) exp[−j(k₀/2z)(x² + y²)] exp(jω₀t) + a exp[j(ω₀ + Ω)t]|²
= DC + B sin[(k₀/2z)(x² + y²) + Ωt].    (5.46)
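The scanning pattern of Eq. (5.46) is a time-dependent Fresnel zone plate, whose running fringes are easily visualized. The following is a minimal MATLAB sketch with illustrative values for the grid, z, and Ω.

% Minimal sketch of the time-dependent Fresnel zone plate of Eq. (5.46),
% with illustrative values for the grid, z, and Omega.
lambda = 0.6e-6; k0 = 2*pi/lambda;
z = 0.1; Omega = 2*pi*10;               % 10 Hz heterodyning (illustrative)
x = linspace(-2e-3, 2e-3, 512);
[X, Y] = meshgrid(x);
for t = 0:0.01:0.1                      % a few time instants
  S = 0.5 + 0.5*sin(k0*(X.^2 + Y.^2)/(2*z) + Omega*t); % DC + B sin(...)
  imshow(S); drawnow;
end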
A diagram of the core circuit of the lock-in amplifier is shown in Fig. 5.15. This
circuit performs so-called synchronous demodulation. Basically, a reference
signal, sin(Ωt), also oscillating at frequency Ω, is applied to the lock-in amplifier to
demodulate the signal of interest. In practice, the reference signal comes from
another fixed photodetector, PD2, as shown in Fig. 5.14. In this way, any unwanted
phase fluctuation within the interferometer (formed by beam splitters BS1 and BS2,
and mirrors M1 and M2) can be compensated significantly.
For a general dual-channel lock-in amplifier, there are two outputs, given by

Out1 ∝ LPF{i · sin(Ωt)},    (5.49a)

and

Out2 ∝ LPF{i · cos(Ωt)},    (5.49b)

where LPF{·} stands for the operator of lowpass filtering, i.e., any frequency of Ω
or higher will be filtered out and will not be present at the output of the filter.
For convenience, we let P = P₁z P₂z* = |P|exp(jφ). Thus Eq. (5.48b) can be re-written
as

i_Ω ∝ Re[∫ P|T|² e^{jΩt} dz] = ∫ |P| cos(Ωt + φ)|T|² dz.    (5.50)
By substituting Eq. (5.50) into Eqs. (5.49a) and (5.49b) and invoking the operation
of lowpass filtering, we have

Out1 ∝ ∫ |P| sin φ |T|² dz,    (5.51a)

Out2 ∝ ∫ |P| cos φ |T|² dz,    (5.51b)

for the outputs of the lock-in amplifier. The two outputs can be converted to digital
signals by an analog-to-digital converter (ADC), and therefore can be transmitted
or processed digitally.
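As a numerical counterpart of Eqs. (5.49a) and (5.49b), the following minimal MATLAB sketch demodulates a sampled heterodyne current, assuming illustrative names (i_t, fs, Om) and using lowpass filtering from the Signal Processing Toolbox.

% Minimal sketch of dual-channel synchronous demodulation, Eqs. (5.49),
% assuming a sampled heterodyne current i_t (row vector), sampling rate fs,
% and heterodyne frequency Om in rad/s (illustrative names).
t = (0:numel(i_t)-1)/fs;
fc = (Om/(2*pi))/5;                    % cutoff well below the heterodyne frequency
[b, a] = butter(4, fc/(fs/2));         % lowpass filter LPF
Out1 = filtfilt(b, a, i_t.*sin(Om*t)); % Eq. (5.49a)
Out2 = filtfilt(b, a, i_t.*cos(Om*t)); % Eq. (5.49b)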
To see the effect of retrieving the three-dimensional location of the object target,
we assume that the object is located at z = z₀ and that its amplitude transmittance can
be expressed as T(x, y; z) = T(x, y)δ(z − z₀). Hence Eq. (5.52) can be reduced
accordingly, where z_r is the distance measured from the hologram plane. Evidently, the
intensity distribution of the object target can be reconstructed at z_r = z₀ because
ψ_p(x, y; z₀) ∝ |T(x, y)|². This is a real image of the reconstruction [see Problem 5.5].
If we reconstruct H_c*(x, y), the reconstruction is a virtual image.
It is noted that the reconstruction process of optical scanning holography (OSH)
is exactly the same as that in conventional holography. But in OSH there is no
annoying zeroth-order beam or twin image in the reconstructed field. This is
the main merit of the complex hologram. Also, we note that in the above process
only the intensity distribution of the object target, |T(x, y)|², is retrieved. Indeed,
because the phase of the object T(x, y) is not recorded, OSH is operated in the
incoherent mode. However, the phase of the object target can also be recorded,
provided a pinhole mask is attached to photodetector PD1 located at the back focal
plane of lens L2 [46]. We will not discuss the details here but only show the
results. The resulting two outputs become
Out1 ∝ ∫ |P| sin φ · T dz,    (5.57a)

Out2 ∝ ∫ |P| cos φ · T dz.    (5.57b)

Similar to the incoherent mode, we can combine the two outputs to produce H_c:

H_c = Out2 + j Out1 ∝ ∫ P T dz.    (5.58)
It is noted that now H_c is the complex hologram of T, and OSH is operated in the
coherent mode. Figure 5.16 shows the reconstruction of a complex hologram given
by Eq. (5.58), which is recorded in the coherent mode [47], while in Fig. 5.17 we
show the holograms and reconstruction of a diffusely reflecting object (a die),
illustrating the incoherent operation of optical scanning holography, as the reconstruction
is free of the speckle noise commonly exhibited in coherent optical
systems [48]. Figure 5.17(a) and (b) are the real and imaginary parts of the complex
hologram given by Eq. (5.52), respectively, where the real part of the hologram is
given by Eq. (5.51b) and the imaginary part is given by Eq. (5.51a). Digital
reconstruction of the complex hologram is shown in Fig. 5.17(c), which is obviously
not contaminated by speckle noise as compared to the image shown in Fig. 5.17(d),
where the image is observed by a CCD camera upon coherent illumination of the
die. Reference [49] is the most recent review paper on optical scanning
holography.
Figure 5.16 Reconstruction of the complex hologram recorded in the coherent
mode: (a) and (b); x and y in μm, phase in rad.
Figure 5.17 (a) Real part of a complex hologram of a die, (b) imaginary part of a
complex hologram of a die, (c) digital reconstruction of the complex hologram,
and (d) image observed by a CCD camera upon coherent illumination of the die.
Reprinted from Ref. [48], with permission, OSA.
Figure 5.18 Recording for (a) a conventional hologram, (b) a scanning hologram
at x₁′, and (c) a scanning hologram at x₂′.
where the interference patterns are recorded. Let us consider a single pixel, e.g. x₁,
on the hologram plane. The object light on the pixel, ψ₁, is the superposition of the
light emerging from all the object points, that is

ψ₁ ∝ a₀ exp(jk₀r_{a1}) + b₀ exp(jk₀r_{b1}) + ⋯,

where a₀ is the complex amplitude of the light emerging from object point a,
which is dependent on the surface properties of the object; r_{a1} is the path from
object point a to hologram pixel x₁, and so on. So the three-dimensional information
of the object can be recorded via the interference between the scattered
light and a reference light.
Now we consider a comparable setup in OSH. Although in typical OSH we use
the interference pattern of a spherical wave and a plane wave to illuminate the
object, we ignore the plane wave in the discussion because it serves as the
reference light. So how do we apply OSH to acquire a data point on a hologram,
the same as the data point obtained in the hologram at point x₁ as shown in
Fig. 5.18(a)? We can illuminate the object with a scanning beam from the right
side, and let a spherical wave focus at location x₁′, as shown in Fig. 5.18(b). Then
the light on the photodetector, ψ_S, is proportional to the superposition of the light
emerging from all the object points, that is

ψ_S ∝ a₀ exp(jk₀r_{1a}) + b₀ exp(jk₀r_{1b}) + ⋯,

where r_{1a} is the path from the focal point x₁′ to object point a. Assuming that the
geometrical relationship between a and x₁′ is the same as that between a and x₁, we
can say r_{1a} = r_{a1} and ψ_S ∝ ψ₁, so that the data point on the hologram obtained
in Fig. 5.18(b) is proportional to the data point obtained at hologram point x₁ in
Fig. 5.18(a).
Similarly, we can obtain another data point on the hologram, comparable to the
data point at x₂ in Fig. 5.18(a), by moving the spherical wave to focus at location
x₂′, as shown in Fig. 5.18(c).
In conclusion, we find that the back focal plane of lens L1 in Fig. 5.14 can be
regarded as a virtual hologram plane in comparison with conventional holography.
As we scan the object using the interference pattern between a spherical wave
and a plane wave, we acquire the holographic data pixel by pixel. Since the
hologram data obtained in Fig. 5.18(a) and those obtained in Fig. 5.18(b) and (c)
are the same, the reconstruction method is also the same: we only need to back-propagate
the object light from the hologram plane to the object plane, which is z₀
away in both cases.
Problems
5.1 Show that, in three-step phase-shifting holography, the complex amplitude of
the object is given by

ψ_o = [(1 + j)(I₀ − I_{π/2}) + (j − 1)(I_π − I_{π/2})] / (4ψ_r*),

where I_δ = |ψ_o + ψ_r exp(−jδ)|², with δ = 0, π/2, π the phase steps and ψ_r the
amplitude of the reference light.
Show that the two outputs from a dual-channel lock-in amplifier are given by

Out1 ∝ ∫ |P| sin φ |T|² dz,

Out2 ∝ ∫ |P| cos φ |T|² dz.
f 1 x f 2 x f 1 x f 2 x,
References
1. I. Yamaguchi, and T. Zhang, Phase-shifting digital holography, Optics Letters 22, 1268–1270 (1997).
2. T. Zhang, and I. Yamaguchi, Three-dimensional microscopy with phase-shifting digital holography, Optics Letters 23, 1221–1223 (1998).
3. I. Yamaguchi, J.-I. Kato, S. Ohta, and J. Mizuno, Image formation in phase-shifting digital holography and applications to microscopy, Applied Optics 40, 6177–6186 (2001).
4. J. Rosen, G. Indebetouw, and G. Brooker, Homodyne scanning holography, Optics Express 14, 4280–4285 (2006).
5. M. Gross, and M. Atlan, Digital holography with ultimate sensitivity, Optics Letters 32, 909–911 (2007).
6. P. Guo, and A. J. Devaney, Digital microscopy using phase-shifting digital holography with two reference waves, Optics Letters 29, 857–859 (2004).
7. J.-P. Liu, T.-C. Poon, G.-S. Jhou, and P.-J. Chen, Comparison of two-, three-, and four-exposure quadrature phase-shifting holography, Applied Optics 50, 2443–2450 (2011).
8. M. Atlan, M. Gross, and E. Absil, Accurate phase-shifting digital interferometry, Optics Letters 32, 1456–1458 (2007).
9. V. Micó, J. García, Z. Zalevsky, and B. Javidi, Phase-shifting Gabor holography, Optics Letters 34, 1492–1494 (2009).
10. J.-P. Liu, and T.-C. Poon, Two-step-only quadrature phase-shifting digital holography, Optics Letters 34, 250–252 (2009).
11. X. F. Meng, L. Z. Cai, X. F. Xu, X. L. Yang, X. X. Shen, G. Y. Dong, and Y. R. Wang, Two-step phase-shifting interferometry and its application in image encryption, Optics Letters 31, 1414–1416 (2006).
12. X. F. Meng, X. Peng, L. Z. Cai, A. M. Li, J. P. Guo, and Y. R. Wang, Wavefront reconstruction and three-dimensional shape measurement by two-step dc-term-suppressed phase-shifted intensities, Optics Letters 34, 1210–1212 (2009).
13. C.-S. Guo, L. Zhang, H.-T. Wang, J. Liao, and Y. Y. Zhu, Phase-shifting error and its elimination in phase-shifting digital holography, Optics Letters 27, 1687–1689 (2002).
14. W. Chen, C. Quan, C. J. Tay, and Y. Fu, Quantitative detection and compensation of phase-shifting error in two-step phase-shifting digital holography, Optics Communications 282, 2800–2805 (2009).
15. Y. Awatsuji, T. Tahara, A. Kaneko, T. Koyama, K. Nishio, S. Ura, T. Kubota, and O. Matoba, Parallel two-step phase-shifting digital holography, Applied Optics 47, D183–D189 (2008).
16. Y. Awatsuji, A. Fujii, T. Kubota, and O. Matoba, Parallel three-step phase-shifting digital holography, Applied Optics 45, 2995–3002 (2006).
17. T. Tahara, K. Ito, T. Kakue, M. Fujii, Y. Shimozato, Y. Awatsuji, K. Nishio, S. Ura, T. Kubota, and O. Matoba, Parallel phase-shifting digital holographic microscopy, Biomedical Optics Express 1, 610–616 (2010).
18. T. Kakue, Y. Moritani, K. Ito, Y. Shimozato, Y. Awatsuji, K. Nishio, S. Ura, T. Kubota, and O. Matoba, Image quality improvement of parallel four-step phase-shifting digital holography by using the algorithm of parallel two-step phase-shifting digital holography, Optics Express 18, 9555–9560 (2010).
19. T. Tahara, K. Ito, M. Fujii, T. Kakue, Y. Shimozato, Y. Awatsuji, K. Nishio, S. Ura, T. Kubota, and O. Matoba, Experimental demonstration of parallel two-step phase-shifting digital holography, Optics Express 18, 18975–18980 (2010).
Figure 6.3 (a) Ray configuration, and (b) wavefront imaging using the microscope
objective.
where d_o and d_i are the distances between the specimen and the MO, and between
the MO and the image, respectively; θ_o and θ_i are the divergence angles of the
object light and the image light, respectively. The angle θ_i will be much less than
θ_o, provided |M_Lat| is large enough owing to the high numerical aperture (NA) of
the MO. Thus the light wave scattered from the image space can be holographically
recorded more easily than the light wave scattered from the object space. The
main purpose of the high-NA microscope objective is to make θ_i smaller, such
that diffraction of the magnified image interfering with the reference wave will
produce coarser fringes, thereby adapting to the low resolution of the CCD. Another
way to draw the same conclusion is that, since the sampling period of the optical
field is the pixel pitch of the CCD, Δx_CCD, a magnified image is equivalent to
having the sampling period reduced to Δx_CCD/|M_Lat| in the object space. Hence
the optical resolution of the MO dominates the practical resolution of the reconstructed
image.
As the digital hologram is recorded, the complex field ψ_i0(x, y) at the CCD plane
can be retrieved with standard filtering or phase-shifting procedures. Subsequently,
the complex field at the image plane, ψ_iz(x, y), can be obtained directly by Fresnel
diffraction [Eq. (1.37)]:

ψ_iz(x, y) = exp(−jk₀z)(jk₀/2πz) exp[−j(k₀/2z)(x² + y²)]
× F{ψ_i0(x, y) exp[−j(k₀/2z)(x² + y²)]}|_{k_x = k₀x/z, k_y = k₀y/z}.    (6.2)
are matched, a higher spatial frequency of the object light can possibly be recorded.
We assume that the reference light is on the optical axis, and that phase shifting is
performed to remove the zeroth-order light and the conjugate image. The complex
hologram H_c is thus expressed as

H_c(x, y) = F{ψ_p0(x, y) exp[−j(k₀/2z₀)(x² + y²)]}|_{k_x = k₀x/z₀, k_y = k₀y/z₀}.    (6.7)
We used Eq. (3.15) to obtain the above equation with x₀ = 0, due to the assumed
on-axis point reference source, and we have also eliminated the conjugate term
through the action of phase shifting. In Eq. (6.7), ψ_p0(x, y) is the complex field of
the object, and z₀ is the distance between the object and the CCD. The complex
field of the object, ψ_p0(x, y), can be reconstructed digitally by taking the inverse
transform of Eq. (6.7):

ψ_p0(x, y) = exp[j(k₀/2z₀)(x² + y²)] F⁻¹{H_c(k_x z₀/k₀, k_y z₀/k₀)}.    (6.8)
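Equation (6.8) amounts to a single inverse FFT followed by multiplication by a quadratic phase. The following is a minimal MATLAB sketch, assuming an N × N complex hologram Hc, CCD pixel pitch dx, wavelength lambda, and object distance z0 (illustrative names).

% Minimal sketch of Eq. (6.8), assuming Hc (N x N), pitch dx, wavelength
% lambda, and object distance z0.
k0 = 2*pi/lambda;
psi = fftshift(ifft2(fftshift(Hc)));           % inverse transform of Hc
xr = (-N/2:N/2-1)*(lambda*z0/(N*dx));          % reconstruction-plane grid
[X, Y] = meshgrid(xr);
psi_p0 = exp(1j*k0*(X.^2 + Y.^2)/(2*z0)).*psi; % remove the quadratic phase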
Note that the sampling of the hologram by the CCD is in the Fourier domain
of the object. Considering one dimension for simplicity, k_x and x are related by
k_x = k₀x/z₀ according to Eq. (6.7). Hence

Δk = k₀Δx_CCD/z₀,

where Δx_CCD is the pixel pitch of the CCD. On the other hand, as the spectrum is
transformed to the spatial domain (reconstruction plane) by FFT, there is a
Figure 6.5 (a) Recording, and (b) reconstruction configurations for spherical-reference-light
DHM.
H_c(x, y) = exp[−j(k₀/2)(1/z₀ − 1/z_r)(x² + y²)]
× F{ψ_p0(x, y) exp[−j(k₀/2z₀)(x² + y²)]}|_{k_x = k₀x/z₀, k_y = k₀y/z₀},    (6.10)
where z₀ and z_r are the distances between the specimen and the CCD, and between
the reference point source and the CCD, respectively. As the complex hologram is
obtained, we recommend first interpolating the hologram before multiplying by a
digital spherical wave, R_m(x, y), to reconstruct the complex hologram. As shown in
Fig. 6.5(b), the spherical reconstruction light is given by

R_m(x, y) = exp[−j(k₀/2z_m)(x² + y²)],    (6.11)

where z_m is the distance between the reconstruction point source and the CCD.
Finally, we perform free-space propagation to obtain the reconstructed image. We
can set z_m = z_r, resulting in a field at the hologram plane of

ψ(x, y) = exp[−j(k₀/2z₀)(x² + y²)]
× F{ψ_p0(x, y) exp[−j(k₀/2z₀)(x² + y²)]}|_{k_x = k₀x/z₀, k_y = k₀y/z₀}.    (6.12)
Note that Eq. (6.12) is in the form of Fresnel diffraction. Thus the object light at
the specimen plane, ψ_p0(x, y), can be retrieved by backward propagation through a
distance z₀ (Section 4.3.5). As described in Section 2.3, we can also set z_m ≠ z_r to
introduce a magnification of the reconstructed image [15]. However, the resolution
achieved of the specimen is still limited to λz₀/D_x (Section 4.2), where D_x is the size
of the CCD. By using a spherical reference light, the distance from the specimen to
the CCD can be significantly reduced. Thus high-resolution reconstruction can be
achieved.
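A minimal MATLAB sketch of this reconstruction recipe follows, assuming an (interpolated) complex hologram Hc on grids X and Y, with zr and z0 known; fresnel_prop is a hypothetical helper for backward Fresnel propagation (e.g., by the method of Section 4.3.5), not a function from the text.

% Minimal sketch of spherical-reference DHM reconstruction, assuming Hc,
% grids X, Y, distances zr and z0; fresnel_prop is a hypothetical helper.
k0 = 2*pi/lambda;
zm = zr;                                  % reconstruction source at the reference point
Rm = exp(-1j*k0*(X.^2 + Y.^2)/(2*zm));    % Eq. (6.11)
psiH = Hc.*Rm;                            % field at the hologram plane, Eq. (6.12)
psi_p0 = fresnel_prop(psiH, -z0, lambda, dx); % backward propagation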
Besides the above-mentioned techniques, Gabor holography can also be applied to
DHM. Figure 6.6 shows a typical setup of a Gabor digital holographic microscope. We
assume that the specimen is illuminated with a spherical wave. The spherical wave
itself serves as the reference wave, and the light scattered from the small particles in the
specimen is the object wave. The setup is very simple, and the relationship between the
object wave and the reference wave is similar to that of spherical-reference-based
DHM. As a result, we can apply the same reconstruction method to the hologram
obtained by a Gabor digital holographic microscope. Indeed, the zeroth-order light and
the twin image associated with the Gabor hologram can be removed by phase-shifting
techniques [19, 20]. If the specimen is moving, the zeroth-order light can be removed by
subtracting two holograms obtained sequentially [21–23].
which can be represented in a discrete form when we discretize the object into N
sections at locations z₁, z₂, . . ., z_N:

H_c(x, y) = Σ_{i=1}^{N} |T(x, y; z_i)|² ∗ (jk₀/2πz_i) exp[j(k₀/2z_i)(x² + y²)].    (6.14)
H_c(x, y) ∗ h(x, y; z = z₁)
= Σ_{i=1}^{N} |T(x, y; z_i)|² ∗ (jk₀/2πz_i) exp[j(k₀/2z_i)(x² + y²)] ∗ h(x, y; z = z₁)
= |T(x, y; z₁)|² + Σ_{i=2}^{N} |T(x, y; z_i)|² ∗ (jk₀/2πz_i) exp[j(k₀/2z_i)(x² + y²)] ∗ h(x, y; z = z₁),    (6.16)
where we have used Eq. (6.15) to extract the section at z = z₁, which is the first
term of the above equation. The second term in Eq. (6.16) is what has been referred
Figure 6.9 (a) Real part, and (b) imaginary part of the complex hologram of two
rectangular objects, and the reconstructed images focused at (c) z = z₁, and (d) z = z₂.
to as the out-of-focus haze in the sectioned image obtained by OSH. Iterative methods
based on optimization using the L2 norm [28] and filtering in the phase-space
domain [29] to achieve optical sectioning have also been proposed in the context of
digital holography.
The work of Zhang et al. deserves to be mentioned, since it was the first reported
use of the L2 norm in holography to perform sectioning [28, 30]. Based on
Eq. (6.16), Fig. 6.10 shows the reconstruction of a three-dimensional specimen
consisting of a slide of fluorescent beads 2 μm in diameter (excitation around
542 nm, emission around 612 nm) for two sections. The beads tend to stick either
to the top surface of the mounting slide or to the bottom surface of the coverslip,
giving us a simple three-dimensional specimen with two dominant sections. The
distance between the two sections is around 35 μm. Figure 6.10(a) and (b) show the
section on the top of the slide and on the bottom of the slide, respectively. When
the beads are not sharply focused in a section, we can see the haze from the out-of-focus
plane. Figure 6.11(a) and (b) show the section on the top of the slide and on
the bottom of the slide, respectively, when the L2 norm is employed in the reconstruction.
We can clearly see that the haze has been eliminated in both of the planes.
The first use of the L2 norm in digital holography seems to have opened up
vibrant research into performing sectioning directly from a hologram using other
optimization methods, such as the well-known L1 norm, already widely used in
signal processing, which ties to what is nowadays often called compressive
holography [31]. The use of compressive holography has allowed, for example,
Table 6.1 MATLAB code for demonstrating the out-of-focus haze, see Example 6.1
% parameter setup
M=512;
deltax=0.001; % pixel pitch 0.001 cm (10 um)
w=633*10^-8; % wavelength 633 nm
z1=30; % 30 cm, propagation distance z1
z2=15; % 15 cm, propagation distance z2
figure; imshow(Hr)
title('Real part of the complex hologram')
axis off
figure; imshow(Hi)
title('Imaginary part of the complex hologram')
axis off
%Reconstruction
Ar=fftshift(ifft2(fftshift(Hologram)));
Arz1=Ar.*conj(p1);
EI1=fftshift(fft2(fftshift(Arz1)));
EI1=mat2gray(EI1.*conj(EI1));
figure; imshow(EI1);
title('Reconstructed image at z=z_1')
axis off
Arz2=Ar.*conj(p2);
EI2=fftshift(fft2(fftshift(Arz2)));
EI2=mat2gray(EI2.*conj(EI2));
figure; imshow(EI2);
title('Reconstructed image at z=z_2')
axis off
Figure 6.10 Reconstruction using Eq. (6.16): (a) section on the top of the slide,
and (b) section on the bottom of the slide.
Figure 6.11 Sectioning by the L2 norm: (a) top section of the specimen, and
(b) bottom section of the specimen. From Ref. [30], with permission, OSA.
rejection of the twin image noise as well as of the out-of-focus haze for on-axis
holograms. Most recently, Zhao et al. have proposed an adaptively iterative shrinkage-thresholding
algorithm, and have shown that its performance in terms of
sectioning capability is better than that of the L2 norm method [32]. In addition, the
proposed algorithm can section well even when using only half of the data of a
complex hologram. It is expected that compressive holography will continue to
flourish, for example with better and faster algorithms, for application to
sectioning and to other challenges in holography.
By setting the initial conditions φ_uw[1] = φ_w[1], Δφ[1] = 0, and g[1] = 0, the
unwrapped phase φ_uw[n] can be obtained sequentially by

φ_uw[n] = φ_w[n] + g[n].    (6.20)

It has also been indicated that Eqs. (6.19) and (6.20) can be expressed using a
single formula as [33]

φ_uw[n] = φ_w[1] + Σ_{m=1}^{n−1} W{Δφ[m]},    (6.21)

where W{·} is the wrapping operator. The wrapping operator can be regarded as a
non-linear calculation,

W{Δφ[m]} = Δφ[m] + 2πl[m],    (6.22)

where l[m] ∈ {. . ., −1, 0, 1, . . .}, i.e., it is a sequence of integers chosen to ensure
that the wrapped function, W{Δφ[m]}, is between −π and π. The last term in
Eq. (6.21) represents a summation of the wrapped phase differences Δφ[n]. Equation
(6.21) is usually called the Itoh algorithm.
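Equation (6.21) is only a few lines of MATLAB. The following is a minimal sketch for a one-dimensional wrapped phase row vector phi_w (in radians).

% Minimal sketch of the Itoh algorithm, Eq. (6.21), for a 1-D wrapped
% phase row vector phi_w (radians).
W = @(p) angle(exp(1j*p));              % wrapping operator, Eq. (6.22)
dphi = W(diff(phi_w));                  % wrapped phase differences
phi_uw = phi_w(1) + [0, cumsum(dphi)];  % Eq. (6.21)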
is first produced in the range x = 0 ~ 256, y = 0 ~ 256, and wrapped. The sampling
separation is 0.5, and so the wrapped phase function contains 513 × 513 pixels.
The wrapped phase is then unwrapped using the Itoh algorithm.
In this example, we do not write the MATLAB code for implementing
Eq. (6.21); rather, we directly use the MATLAB command unwrap to unwrap the
Figure 6.13 (a) The original phase, (b) the wrapped phase, and (c) the
unwrapped phase.
wrapped phase function. It should be noted, however, that unwrap only applies along a
single dimension. So we must apply unwrap twice, once for the columns and once
for the rows. The simulation results are shown in Fig. 6.13, and the MATLAB
code is listed in Table 6.2. Figure 6.13(a) shows the original phase, ranging from
−15 to 150 radians. The wrapped phase, shown in Fig. 6.13(b), ranges from −3.14
to 3.14 radians. The unwrapped phase is shown in Fig. 6.13(c), which is nearly the
same as the phase shown in Fig. 6.13(a). Because the phase φ_uw is unwrapped from
some start point, there is a difference at the start point, φ_uw[n] − φ_s[n], between
the unwrapped phase φ_uw and the exact phase φ_s. In the current example of a
two-dimensional phase function, we find the phase difference φ_uw[1,1] − φ_s[1,1] =
2.92 − 28.05 = −25.13 radians, where we have used the start point at coordinates
(x, y) = (0, 0) to find the phase difference. Indeed, φ_uw[1,1] is the first point when
(x, y) = (0, 0). We can also use other start points to unwrap the phase function, but
for this we need to write our own code instead of using the MATLAB command. So in
Table 6.2 MATLAB code for two-dimensional phase unwrapping, see Example 6.2
FUN=exp(1i*PHASE);
WRAP_PHASE=angle(FUN); % wrapped phase function
UNWRAP_PHASE=unwrap(WRAP_PHASE,[],1); % unwrapping columns
UNWRAP_PHASE=unwrap(UNWRAP_PHASE,[],2); % unwrapping rows
figure; mesh(x,y,PHASE); view([-30,52])
title('Original phase (radian)')
colorbar
set(gca,'xtick',[0 125 250]);
set(gca,'ytick',[0 125 250]);
figure; mesh(x,y,WRAP_PHASE); view([-30,52])
title('Wrapped phase (radian)')
colorbar
set(gca,'xtick',[0 125 250]);
set(gca,'ytick',[0 125 250]);
figure; mesh(x,y,UNWRAP_PHASE); view([-30,52])
title('Unwrapped phase (radian)')
colorbar
set(gca,'xtick',[0 125 250]);
set(gca,'ytick',[0 125 250]);
Fig. 6.13(c) the unwrapped phase ranges roughly from −40 to 125 radians. The added
constant phase difference of −25.13 radians in Fig. 6.13(c), compared with the original
phase in Fig. 6.13(a), does not matter because only the relative phase is meaningful.
Although in Example 6.2 we have demonstrated unwrapping of a two-dimensional
phase function successfully and easily, there are some problems with
the Itoh algorithm. First, Eq. (6.21), as well as Eq. (6.19), implies that the phase
difference between adjacent pixels of φ_s must be between −π and π. If the exact
phase changes abruptly (say, by more than π), the algorithm fails and the unwrapped
phase φ_uw suffers from errors.
For example, we assume that φ_s[1] = 0.9π and φ_s[2] = 2.1π, where the phase
difference is not between −π and π. The wrapped phases of the two pixels are then
φ_w[1] = 0.9π, as there is no need to wrap the phase, and φ_w[2] = 2.1π − 2π = 0.1π
(l has been chosen to be −1), so that the wrapped phase is in the range between −π
and π. According to Eq. (6.18), Δφ[2] = φ_w[2] − φ_w[1] = −0.8π, with the initially assumed
Δφ[1] = 0. Hence, according to Eq. (6.19), g[2] = g[1] = 0. We can now, from
Eq. (6.20), find φ_uw[1] = φ_w[1] + g[1] = 0.9π, which is correct. But φ_uw[2] =
φ_w[2] + g[2] = 0.1π, which is incorrect.
In addition, in practice the actual acquired phase image is usually contaminated
with noise. Any noisy pixel may affect the judgment described in Eq. (6.19), and
the phase may be shifted incorrectly up or down. What is worse is that an incorrect
determination of the phase at the nth pixel will propagate to the following (n + 1,
n + 2, . . .) pixels. Many ingenious, noise-immune unwrapping methods have been
proposed. For example, the noisy pixels are usually detected and excluded from the
unwrapping procedure (e.g., [34, 35]).
where d is the separation between plane A and another reference plane, plane B;
h is the height of the specimen measured from plane B. Therefore, the height of the
specimen is given by

h[n] = (φ_uw[n] + 2k₀d)/2k₀ = φ̂_uw[n]/2k₀,    (6.24)

where φ̂_uw[n] is the unwrapped phase found by setting a zero reference phase at
plane B. Thus the height separation between 2π phase contours is

Δh = 2π/2k₀ = λ/2,    (6.25)

which is also called the height sensitivity of the system. Hence we can see that each
contour is the locus of all points on the surface of the specimen that lie at some
constant height above a reference plane.
Whenever the height difference of the specimen is less than Δh/2 = λ/4, we
have φ̂_uw[n] = φ̂_s[n], as the phase difference between adjacent pixels is always less
than π. In other words, successful unwrapping is achievable if the height variation
of the specimen is less than λ/4. However, such height sensitivity in the system being
considered is rather small and is sometimes impractical. Hence there is a need to
have a system of large height sensitivity. Two-wavelength contouring, to be
discussed next, is a solution.
Λ = λ₁λ₂/|λ₁ − λ₂|.    (6.28)

By setting λ₁ − λ₂ = Δλ ≪ λ₁, the synthetic wavelength, Λ ≈ λ₁²/Δλ, is much larger
than λ₁. And, according to Eq. (6.25), the height sensitivity is

Δh = Λ/2 ≈ λ₁²/2Δλ.    (6.29)

For example, supposing that λ₁ = 640.2 nm and λ₂ = 640 nm, we will have Δh =
320 nm at a single wavelength, and Δh = 1024 μm at the synthetic wavelength. Thus
the measurement range of height is significantly extended.
The two-wavelength procedure begins with two wrapped phase functions
obtained by digital holography,

φ_1w[n] = W{φ_1s[n]} = φ_1s[n] + 2πl₁[n],    (6.30a)

φ_2w[n] = W{φ_2s[n]} = φ_2s[n] + 2πl₂[n],    (6.30b)

where l₁ and l₂ are integers ensuring that φ_1w and φ_2w are in the ±π range, respectively.
Note that, again according to the Itoh algorithm, φ_1w or φ_2w cannot be unwrapped to a
correct unwrapped phase function if the phase jump between adjacent pixels
of φ_1s or φ_2s is larger than π. The difference between the two wrapped functions is

Δφ_w[n] = φ_2w[n] − φ_1w[n]
= φ_2s[n] − φ_1s[n] + 2π(l₂[n] − l₁[n])    (6.31)
= Δφ_s[n] + 2π(l₂[n] − l₁[n]).
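A minimal MATLAB sketch of the whole two-wavelength procedure follows, assuming wrapped phase maps phi1w and phi2w (in radians) measured at lambda1 and lambda2 in a reflection geometry.

% Minimal sketch of two-wavelength contouring, assuming wrapped phase maps
% phi1w and phi2w (radians) at lambda1 and lambda2, reflection geometry.
Lambda = lambda1*lambda2/abs(lambda1 - lambda2);  % Eq. (6.28)
dphi_w = angle(exp(1j*(phi2w - phi1w)));          % rewrap difference to (-pi, pi]
dphi = unwrap(unwrap(dphi_w, [], 1), [], 2);      % Itoh unwrapping, both axes
h = dphi*Lambda/(4*pi);                           % height map (reflection setup)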
h[n] = 700 μm × (n/1000)²
in the range n = 1 ~ 1000, as shown in Fig. 6.15(a). The two wavelengths involved
in the example are λ₁ = 0.650 μm and λ₂ = 0.652 μm. Thus the synthetic
wavelength is Λ = 212 μm, according to Eq. (6.28). Figure 6.15(b) shows the
wrapped phase for λ₁. It looks tumultuous because the actual phase difference
between adjacent pixels exceeds the π limit. As a result, using the Itoh algorithm,
the unwrapped phase shown in Fig. 6.15(c) cannot correctly represent the profile of
the specimen. Figure 6.15(d) shows the phase difference Δφ_w. Note that because
both φ_1w and φ_2w range between ±π, their difference, Δφ_w, is between ±2π. In other
words, the difference of l₂ and l₁, i.e., l₂ − l₁ in Eq. (6.31), is not always the proper
parameter for wrapping Δφ_s to the ±π range. For example, assume that Δφ_s[1] =
0.2π and Δφ_s[2] = 0.5π. The phase obtained by Eq. (6.31) is Δφ_w[1] = 0.2π, but
Δφ_w[2] might have been wrapped to the range beyond ±π because of an improper choice of
the parameter (l₂ − l₁), and hence Δφ_w[2] = 0.5π − 2π = −1.5π. By setting Δφ[1] = 0
and g[1] = 0, we can get Δφ[2] = Δφ_w[2] − Δφ_w[1] = −1.5π − 0.2π = −1.7π
according to Eq. (6.17), and thus g[2] = g[1] + 2π = 2π according to Eq. (6.18).
Finally, from Eq. (6.19) the unwrapped phase will be Δφ_uw[2] = Δφ_w[2] + g[2] =
−1.5π + 2π = 0.5π, which is the correct phase. So we can still apply the
Itoh algorithm to unwrap Δφ_w, yielding a continuous phase Δφ_s, as shown in
Fig. 6.15(e). Finally, we apply Eq. (6.32) to convert the phase distribution to a
profile function in Fig. 6.15(f), which is identical to Fig. 6.15(a).
UNWRAP_PHASE3=unwrap(PHASE3,[],2);
% unwrapping synthetic phase function
UNWRAP_PHASE3=UNWRAP_PHASE3-min(UNWRAP_PHASE3);
H=UNWRAP_PHASE3*Ls/4/pi;
% calculate the corresponding height
Δφ_s = φ_2s − φ_1s = k₀(1 + cos θ)h.    (6.40)

The height deformation is thus given by

h = Δφ_s / [k₀(1 + cos θ)].    (6.41)
In two-illumination contouring, some portions of the specimen may be shadowed
because the illumination light is tilted. This problem, together with any additive
noise, can be eliminated by applying more than two illuminations with different
illumination directions [45, 46].
Problems
6.1 A CCD imager contains 1000 × 1000 pixels with a pixel pitch of 6 μm, and it
is used in spherical-reference-based DHM. Find a combination of (z₀, z_r),
where the distances z₀ and z_r are defined in Fig. 6.5(a), so that the acquired
hologram is well sampled and the lateral resolution in the object space is better
than 1.2 μm. Assume that phase-shifting procedures have been applied to
extract a complex hologram and that the wavelength used is λ₀ = 0.6 μm.
6.2 The CCD described in Problem 6.1 is applied in Fourier-based DHM. Find the
object distance z₀ and the size of the reconstructed field when a lateral resolution
better than 1.2 μm is required. Assume that phase-shifting procedures have
been applied to extract a complex hologram and that the wavelength used is
λ₀ = 0.6 μm.
References
1. D. Gabor, A new microscopic principle, Nature 161, 777–778 (1948).
2. E. N. Leith, and J. Upatnieks, Microscopy by wavefront reconstruction, Journal of the Optical Society of America 55, 569 (1965).
42. I. Yamaguchi, S. Ohta, and J.-I. Kato, Surface contouring by phase-shifting digital holography, Optics and Lasers in Engineering 36, 417–428 (2001).
43. D. Velásquez Prieto, and J. Garcia-Sucerquia, Three-dimensional surface contouring of macroscopic objects by means of phase-difference images, Applied Optics 45, 6381–6387 (2006).
44. I. Yamaguchi, J.-I. Kato, and H. Matsuzaki, Measurement of surface shape and deformation by phase-shifting image digital holography, Optical Engineering 42, 1267–1271 (2003).
45. S. Seebacher, T. Baumbach, W. Osten, and W. P. O. Jueptner, Combined 3D-shape and deformation analysis of small objects using coherent optical techniques on the basis of digital holography, SPIE Proceedings 4101, 510–521 (2000).
46. S. M. Solís, F. M. Santoyo, and M. D. S. Hernández-Montes, 3D displacement measurements of the tympanic membrane with digital holographic interferometry, Optics Express 20, 5613–5621 (2012).
7
Computer-generated holography
Computer-generated holography deals with the methods used for the generation of
holograms digitally. The hologram can be subsequently printed on a film or loaded
onto a spatial light modulator (SLM) for holographic reconstruction. Computer-generated
holograms (CGHs) have the advantage that the three-dimensional
objects do not have to exist in the real world. In other words, the objects
which one wants to display can be fictitious. For generating CGHs, different
calculation methods have been developed, fitting various display devices and
reconstruction methods. In this chapter, we first discuss some of the historical
methods of generating computer-generated holograms, and then we describe some
modern approaches for fast calculation as well as ways to process holographic
information. Finally, we discuss three-dimensional holographic display and
address some of the issues involving display with SLMs.
where ψ_r(x, y) = exp(jk₀ sin θ x) denotes the complex amplitude of a plane wave
reconstruction light, and θ is the tilt angle.
In general, the hologram is a complex function, that is, H(x, y) = a(x, y)
exp[jφ(x, y)], where a(x, y) and φ(x, y) are the modulus and the phase of
the hologram, respectively. Hence the goal is to devise a binary (opaque or
transparent) amplitude pattern H_b(x, y) to approximate the complex hologram
H(x, y). This seems a formidable problem, but if our concern is limited to a
window of size d_x × d_y at the reconstruction plane, it is possible to find H_b(x, y)
such that

F{H_b(x, y)ψ_r(x, y)} ≈ F{H(x, y)ψ_r(x, y)}    (7.2)
where (m, n) indexes the cell centered at x_m = mw, y_n = nw; a_mn = a(x_m, y_n) and
φ_mn = φ(x_m, y_n). Under the illumination of a tilted plane wave, each point source in
the sampled hologram produces a plane wave on the reconstruction plane, giving a
complex field in the reconstruction plane of

ψ_s(x, y) = F{exp(jk₀ sin θ x)H_s(x, y)}|_{k_x = k₀x/f, k_y = k₀y/f}
∝ Σ_{m,n} a_mn e^{jφ_mn} e^{jk₀ sin θ x_m} e^{−j(k₀/f)(x_m x + y_n y)}.    (7.4)
Figure 7.2 Cell description of (a) a complex hologram, and (b) a detour-phase
binary hologram.
We can eliminate the effect of the unwanted phase term exp(jk₀ sin θ x_m) in
Eq. (7.4) by designing the width of the unit cell to be

w = 2πℓ/(k₀ sin θ),  ℓ = 1, 2, 3, . . .,    (7.5)

as then exp(jk₀ sin θ x_m) = exp(j2πℓm) = 1, yielding

ψ_s(x, y) ∝ Σ_{m,n} a_mn e^{jφ_mn} e^{−j(k₀/f)(x_m x + y_n y)}.    (7.6)
Under the illumination of a tilted plane wave, the complex field at the reconstruction
plane is given by

ψ_b(x, y) = F{exp(jk₀ sin θ x)H_b(x, y)}|_{k_x = k₀x/f, k_y = k₀y/f}
∝ Σ_{m,n} p_mn q_mn sinc[p_mn(k₀x − f k₀ sin θ)/2πf] sinc[q_mn k₀y/2πf]
× e^{−j(k₀/f)(x_m x + y_n y)} e^{−j(k₀/f)α_mn x} e^{jk₀ sin θ α_mn} e^{jk₀ sin θ x_m},    (7.8)
where the term exp(jk₀ sin θ x_m) becomes unity by applying Eq. (7.5). Furthermore,
we assume

p_mn k₀ d_x/2f ≪ 1,  q_mn k₀ d_y/2f ≪ 1,    (7.9)

so that the two sinc functions in Eq. (7.8) approach unity within the region of the
observation window. Additionally, the term exp[−j(k₀/f)α_mn x] ≈ 1 can also be
dropped, provided (k₀/f)α_mn d_x ≪ 1. Then, Eq. (7.8) can be simplified to

ψ_b(x, y) ∝ Σ_{m,n} p_mn q_mn e^{jk₀ sin θ α_mn} e^{−j(k₀/f)(x_m x + y_n y)}.    (7.10)
Figure 7.3 (a) Original object, (b) detour-phase hologram of (a), (c) reconstructed
image, and (d) enlarged reconstructed image with selected diffracted
order from (c).
analysis, the aperture in the cell is rectangular, but it can be any shape, such as
circular, with an area proportional to a_mn. From Eqs. (7.5) and (7.11b), we can
determine the shift of the aperture within each cell by

α_mn = φ_mn/(k₀ sin θ) = (φ_mn/2πℓ) w.    (7.12)

Because φ_mn is wrapped between −π and π, the range of α_mn is within ±w/2ℓ.
Table 7.1 MATLAB code for generating a detour-phase CGH, see Example 7.1
I=double(I);
figure; imshow(abs(I));
title('Original object')
PH=rand([64,64]);
I=I.*exp(2i*pi*PH); % add a random phase to the object
FTS=fftshift(ifft2(fftshift(I)));
A=abs(FTS);
figure; imshow(mat2gray(A));
title('Object spectrum')
A=A./max(max(A))*15;
A=round(A); % The amplitude is divided into 16 levels
B=angle(conj(FTS));
B=B-min(min(B));
B=B./max(max(B))*7;
B=round(B); % The phase is divided into 8 levels
H=zeros(1024);
for m=1:64;
for n=1:64;
P=zeros(16);
a=A(m,n);
b=B(m,n);
c=fix(a/2);
d=rem(a,2);
P(9-c:8+c+d,(1+b):(9+b))=1;
H(16*(m-1)+1:16*(m-1)+16,16*(n-1)+1:16*(n-1)+16)=P;
end
end
figure; imshow(H)
title('Detour-phase CGH')
imwrite(H,'1AA.jpg','jpg')
%Reconstruction (FFT)
R=fftshift(ifft2(fftshift(H)));
figure; imshow(100.*mat2gray(abs(R)));
title('Reconstructed image')
be coded into a binary pattern as a cell. The MATLAB code is listed in Table 7.1 as
a reference.
First, the modulus and the phase are quantized to 16 and 8 levels, respectively.
We set p_mn = 9 pixels and q_mn = 1 ~ 16 pixels to represent the 16 modulus levels.
We also select α_mn = −4 ~ 3 pixels to represent the 8 phase levels. In other words, we
select ℓ = 2 in Eqs. (7.5) and (7.12). With this setup, the opening of the
aperture will always be inside a cell. The produced detour-phase hologram is
shown in Fig. 7.3(b). The reconstructed image is shown in Fig. 7.3(c), while a
selected diffracted order is enlarged in Fig. 7.3(d). There are many diffracted
orders of light due to the cell grid, but only specific diffracted orders satisfy the
assumptions in our analysis, resulting in quality images. Note that in the example
the diffraction efficiency is very low because we use a simple but low-efficiency
coding pattern. For example, p_mn cannot exceed 9 pixels while the cell width, w, is
16 pixels in our present example. One may devise other coding patterns to improve
the diffraction efficiency.
Figure 7.4 (a) Original object, (b) random-phase mask with phase ranging
from −π to π, (c) spectrum of the object with the random-phase mask, and
(d) reconstructed image of the kinoform (phase-only) hologram.
PH=rand([256,256]);
I=I.*exp(2i*pi*PH); % add a random phase to the object
FTS=fftshift(ifft2(fftshift(I)));
A=abs(FTS);
figure; imshow(mat2gray(A));
title('Spectrum modulus')
B=angle(FTS);
figure; imshow(mat2gray(B));
title('Spectrum phase')
%Reconstruction (FFT)
R=fftshift(ifft2(fftshift(exp(-1j*B))));
figure; imshow(mat2gray(abs(R)));
title('Reconstructed image')
A(x, y) of the target reconstructed field. If desired, one can also apply a phase mask
to the input pattern as the initial field. The initial field is then Fourier transformed
to the spectral domain. The spectrum is modified according to a constraint. In our
problem the constraint is that the spectrum modulus must be uniform, so we set the
spectrum modulus to unity without altering its phase. The modified
spectrum is transformed back to the spatial domain. We then apply a constraint
on the resulting field in the spatial domain, thereby obtaining a new field. In our
problem the constraint in the spatial domain is the given modulus distribution,
A(x, y). So we enforce the modulus of the field to be A(x, y) without altering
its phase. The resulting field is then regarded as the initial field in the next iteration.
Iterations are repeated until the goal (e.g., a set iteration number) is achieved.
When using the IFTA, it is important to retain sufficient degrees of freedom.
If the constraint in either the spatial domain or the spectral domain is too severe,
there will be no satisfactory solution. If the constraint is proper, the resulting field
will converge to an optimized solution. Note that the IFTA only searches a
small region of the set of phase functions. For example, for a 512 × 512-pixel phase
function with 256 phase levels, there are a total of 256^{512×512} phase functions.
Thus, the solution found by the IFTA may be only locally optimized, i.e., it is the
best solution within the searched region, but not a globally optimized solution
(the best solution in the complete set). Additional algorithms must be included
in the IFTA to search for a globally optimized solution [8, 9].
where A(m, n) is the target image, ψ(m, n) is the evaluated field, (m, n) are the
sampling indices, and M and N are the sampling numbers along the x-axis and y-axis,
respectively. Figure 7.6(a) shows the RMSE as a function of the iteration number.
In this example, the total number of iterations is 100. The error decreases gradually
as the number of iterations increases. The resulting reconstructed image after 100
iterations is shown in Fig. 7.6(b). Note that it is comparable to Fig. 7.4(a).
Table 7.3 MATLAB code for generating a phase-only Fourier hologram using the IFTA,
see Example 7.2
H=fftshift(fft2(fftshift(I1)));
I2=fftshift(ifft2(fftshift(exp(1j.*angle(H)))));
avg2=mean(mean(abs(I2)));
I2=(I2./avg2).*avg1;
rmse=(mean(mean((abs(I2)-I).^2)))^0.5;
plot(n,rmse,'o');
pause(0.3); % To see the error in each iteration.
I1=I.*exp(1j*angle(I2));
end
hold off
I2=I2./max(max(abs(I2)));
figure; imshow(abs(I2));
title('Reconstructed image')
Figure 7.6 (a) RMSE as a function of iteration number, (b) reconstructed image
after 100 iterations.
Figure 7.7 Spatial relation between the 3-D object space, the 2-D WRP, and the
hologram. From Ref. [18], with permission, Chinese Optics Letters.
The intensity of each point and its axial distance (depth) from the hologram are
given by a_i and z_i, respectively. The Fresnel hologram, H(x, y), is given by

H(x, y) = Σ_{i=0}^{N−1} (a_i/r_i) exp[j(2π/λ) r_i],  0 ≤ x < X, 0 ≤ y < Y,    (7.16)

where X and Y are the horizontal and vertical extents of the hologram, respectively,
and are assumed to be identical to those of the object scene; λ is the wavelength of
the optical beam which is used to generate the complex hologram. The term
r_i = √[(x − x_i)² + (y − y_i)² + z_i²] is the distance from the ith object point at position
(x_i, y_i) to a point (x, y) on the hologram. From Eq. (7.16), we can see that
each object point contributes to the entire hologram, and the evaluation of each
hologram pixel involves the complicated expression enclosed in the summation
operation. In the framework of the WRP, as evident from Fig. 7.7, wavefronts
emitted from a self-illuminating point will diverge to cover the entire hologram and
intercept the WRP in their path. If the WRP is near to the object point, the coverage
of the object wavefronts on the WRP is limited to a small virtual window.
For simplicity, the virtual window is assumed to be a square of size W × W. The
Fresnel diffraction equation in Eq. (7.16) can be applied, with slight modification,
to compute the diffraction pattern u_w(x, y) on the WRP within the virtual window.
Let d_i denote the axial distance from the ith object point to the WRP; then we have
the field distribution on the virtual window given by

u_w(x, y) = Σ_{i=0}^{N−1} F_i,    (7.17)

where

F_i = (a_i/R_wi) exp[j(2π/λ) R_wi]  for |x − x_i| and |y − y_i| < ½W,
F_i = 0  otherwise,
and where R_wi = √[(x − x_i)² + (y − y_i)² + d_i²] is the distance of the point on the
WRP from the ith object point. In Eq. (7.17), computation of the WRP for each object
point is only confined to the region of the virtual window on the WRP. As W is
much smaller than X and Y, the computation load is significantly reduced compared
with that in Eq. (7.16). In Ref. [15], the calculation is further simplified
by pre-computing the terms (1/R_wi) exp[j(2π/λ)R_wi] for all combinations of
|x − x_i|, |y − y_i|, and d_i (within the coverage of the virtual window), and storing
the results in a look-up table (LUT). In the second stage, the field distribution in
the WRP is expanded to the hologram plane, given by

u(x, y) = F⁻¹{F{u_w(x, y)} F{h(x, y; z_w)}},    (7.18)

where z_w is fixed for a given separation between the WRP and the hologram. As
demonstrated by Shimobaba et al. [14], a video sequence of digital holograms,
each comprising 2048 × 2048 pixels and representing 1014 object points, can be
generated at a rate of 10 frames per second.
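A minimal MATLAB sketch of the two-stage WRP calculation follows, assuming object points (xi, yi) with depths di and amplitudes ai close to the WRP, a virtual window half-width Wh (in pixels), grid pitch dx, an N × N grid, and a pre-computed transfer function Hz = F{h(x, y; zw)}; all names are illustrative.

% Minimal sketch of the WRP method, Eqs. (7.17)-(7.18); Hz is assumed to
% be the pre-computed transfer function F{h(x, y; zw)}.
uw = zeros(N);                                   % field on the WRP
for i = 1:numel(ai)
  mi = round(yi(i)/dx) + N/2; ni = round(xi(i)/dx) + N/2;
  m = max(1, mi-Wh):min(N, mi+Wh); n = max(1, ni-Wh):min(N, ni+Wh);
  [Xl, Yl] = meshgrid((n - ni)*dx, (m - mi)*dx); % local window coordinates
  Rwi = sqrt(Xl.^2 + Yl.^2 + di(i)^2);
  uw(m, n) = uw(m, n) + (ai(i)./Rwi).*exp(1j*2*pi*Rwi/lambda); % Eq. (7.17)
end
H = fftshift(ifft2(fftshift(fftshift(fft2(fftshift(uw))).*Hz))); % Eq. (7.18)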
Adopting the WRP framework, Tsang et al. proposed a method known as the
interpolative wavefront recording plane (IWRP) method [20], and extended it to
handle not just a plane object surface but also a three-dimensional object surface
which has the same size and number of pixels as the hologram. In this approach, it
is assumed that the resolution of the object scene is generally smaller than that of
the hologram. Hence it is unnecessary to convert every object point of the scene to
its wavefront on the WRP. On this basis, the object scene is evenly partitioned into
non-overlapping square supports of size M × M, as shown in Fig. 7.8. The object
scene is sampled at the sample point (m, n), which is at the center of the square
support.
This is equivalent to subsampling the object scene evenly M times along the
horizontal and the vertical directions. It is then assumed that each sample point
contributes its wavefront only to a square virtual window in the WRP, as
shown in Fig. 7.9. Although this reduction effectively reduces the computation
time, as shown in Ref. [20], the reconstructed images obtained are weak, noisy,
and difficult to observe upon optical reconstruction. This is of course caused by the
sparse distribution of the object points caused by subsampling of the scene image.
To overcome this issue, the interpolated WRP (IWRP) is proposed to interpolate
the support of each object point with padding, i.e., the object point is duplicated to
all the pixels within each square support. After the interpolation, the wavefront of
a virtual window will be contributed by all the object points within the support.
Note that since each object point is duplicated to all the pixels within each square
support, the wavefront on the virtual window from each interpolated object point is
simply a shifted version of the wavefront from the sample point, which can be pre-computed
and stored in a look-up table (LUT). Consequently, each virtual window
Figure 7.9 Spatial relation between the square support and its corresponding
virtual window. From Ref. [18], with permission, Chinese Optics Letters.
Figure 7.10 (a) An image divided (dotted line) into left and right parts, positioned
at z₁ = 0.005 m (left) and z₂ = 0.01 m (right) from the WRP/IWRP,
respectively. The WRP/IWRP is positioned at z_w = 0.4 m from the hologram; (b)
optically reconstructed upper half of the image using the WRP method, (c) same
as (b) but for the lower half of the image, (d) optically reconstructed upper half of
the image using the IWRP method, and (e) same as (d) but for the lower half of
the image. From Ref. [20], with permission, OSA.
where Re[·] denotes taking the real part of the complex function within the brackets. The
hologram is displayed on a liquid crystal on silicon (LCOS) display modified from the
Sony VPL-HW15 Bravia projector. The projector has horizontal and vertical
resolutions of 1920 and 1080, respectively. Due to the limited size and resolution of
the LCOS, only part of the hologram (and hence of the reconstructed image) can be
displayed. Employing the WRP method, the reconstructed images corresponding
to the upper half and the lower half of the hologram are shown in Figs. 7.10(b) and
7.10(c), respectively. We observe that the images are extremely weak and noisy.
Next we repeat the above process using the IWRP method with M × M = 8 × 8.
The reconstructed images are shown in Figs. 7.10(d) and 7.10(e). Evidently, the
reconstructed image is much clearer in appearance. To illustrate further the usefulness
of the method, a sequence of holograms of a rotating globe, rendered with the
texture of the Earth, was generated. The radius of the globe is around 0.005 m, and
the front tip of the globe is located at 0.01 m from the IWRP. The IWRP is at a
distance of 0.4 m from the hologram. For a hologram (as well as image size) of
2048 × 2048 pixels, the method is capable of attaining a generation speed of over
40 frames per second [20].
which covers the entire area). As a result, the overall diffraction pattern on the
VDP, which is the summation of the contributions of the individual object points, can
be deduced with a very small amount of computation. The rationale is that the VDP
is found to carry the diffraction fringes of the entire hologram, but at the same time
it carries similar optical properties to the object scene. Generally speaking, this
implies that modifying the brightness and contrast on the VDP will lead to almost
identical changes in the pictorial contents it represents. Along this line of thinking,
a hologram can be reverted into a VDP. The information on the VDP can be conveniently
processed using many existing image processing techniques and tools.
Therefore, in the second stage shown in Fig. 7.11, we perform image processing
on the VDP. After processing, the VDP can easily be expanded into the ultimate
hologram by forward projection, which is further away from the object scene.
Let us now formulate the idea mathematically. Suppose the VDP is located at an
axial distance zv from the hologram, the complex wavefront uVDP(x, y) on the VDP
is back-propagated from the hologram t(x, y) given by
uVDP x, y tx, y hx, y; zv t x, y h x, y; zv : 7:20
In the second stage, a selected region (or regions) on the VDP, denoted by S, is processed with a given function P[·]. After the modification, the diffraction pattern on the VDP becomes

$$v(x, y) = \begin{cases} P\left[u_{VDP}(x, y)\right] & (x, y) \in S \\ u_{VDP}(x, y) & \text{otherwise.} \end{cases} \qquad (7.21)$$
In the third stage of the VDP processing framework, the modified (processed) wavefront v(x, y) is forward-projected onto the hologram plane by convolving it with the spatial impulse response in Fourier optics:

$$t_P(x, y) = v(x, y) * h(x, y; z_v). \qquad (7.22)$$
We can realize the convolutions in Eqs. (7.20) and (7.22) through Fourier transformation, and the Fourier transforms of h(x, y; zv) and h*(x, y; zv) can be pre-computed. Hence, processing a hologram within the VDP framework mainly involves a pair of forward and inverse Fourier transform operations; the computation time of the remaining steps is negligible. Using a commercial graphics processing unit (GPU) to perform the Fourier transforms, a medium-size digital hologram comprising 2048 × 2048 pixels can be processed in less than 10 ms, equivalent to a rate of over 100 frames per second [26].
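As a concrete illustration of the three stages, the following MATLAB sketch (a minimal version under assumed inputs: a complex hologram t with pixel pitch delta and wavelength lambda; the choices of zv, S, and P[·] are ours, not from Ref. [26]) implements Eqs. (7.20)–(7.22) with FFT-based convolution, pre-computing the transfer function once.

% VDP processing sketch: Eqs. (7.20)-(7.22) via FFT-based convolution.
% Assumes a complex hologram t (N x N), pixel pitch delta, wavelength lambda.
N = size(t, 1); zv = 0.3;                     % VDP distance (m), assumed
fx = (-N/2:N/2-1)/(N*delta); [FX, FY] = meshgrid(fx);
Hf = exp(-1i*2*pi*zv*sqrt(1/lambda^2 - FX.^2 - FY.^2));  % transfer function of h(x,y;zv), pre-computable
% Stage 1: back-propagate the hologram onto the VDP, Eq. (7.20)
uVDP = ifft2(ifftshift(fftshift(fft2(t)).*conj(Hf)));
% Stage 2: process a selected region S, Eq. (7.21); here S is the left half
% and P[.] is a simple brightness rescaling (an illustrative choice)
S = false(N); S(:, 1:N/2) = true;
v = uVDP; v(S) = 1.5*uVDP(S);
% Stage 3: forward-project the processed VDP to the hologram plane, Eq. (7.22)
tP = ifft2(ifftshift(fftshift(fft2(v)).*Hf));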
Figure 7.12 shows an example of VDP-based processing of a hologram. Apart from readjusting the brightness and contrast of the image represented in a hologram, sometimes it is also necessary to enhance the sharpness in certain regions to increase the visibility of the high-frequency content. This can be achieved
Figure 7.12 (a) A hemisphere rendered with the texture of the Earth, positioned at 0.3 m from the hologram plane. (b) Numerically reconstructed image of the hologram representing the image in (a) after direct application of high-boost filtering to the left side of the virtual diffraction plane. From Ref. [26], with permission, OSA.
by applying a high-boost filter to the selected area on the VDP of the hologram. High-boost filtering is performed by amplifying the original input image and then subtracting a lowpass-filtered image from it, which is given by

$$v(x, y)\big|_{(x, y) \in S} = \left[A\,u_{VDP}(x, y) - B\,u_{VDP}^{L}(x, y)\right], \qquad (7.23)$$

$$u_{VDP}^{L}(x, y) = \frac{1}{9}\sum_{m=-1}^{1}\sum_{n=-1}^{1} u_{VDP}(x - m, y - n). \qquad (7.24)$$
The terms A and B are constants: the larger the values of A and B, the higher will be the brightness and the sharpness of the region S, respectively. Other types of sharpening filter can be applied on the same principle. The hologram sharpening process is illustrated with a hemisphere rendered with the texture of the Earth, as shown in Fig. 7.12. The hemisphere has a radius of 5 mm, with its tip located at 0.3 m from the hologram. Once the digital hologram is obtained, the VDP is generated using Eq. (7.20). The high-boost filter in Eq. (7.23) is applied to the VDP, which is subsequently forward-projected into a hologram using Eq. (7.22). The reconstructed image of the modified hologram, focused at 0.3 m (which causes slight de-focusing around the rim of the hemisphere), is shown in Fig. 7.12(b). It can be seen that the edge on the left side of the reconstructed image is strengthened, while the rest of the reconstructed image is unaffected, as S, the region to be processed, was chosen to be the right half of the plane.
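In code, the high-boost operation amounts to amplifying the field and subtracting a 3 × 3 local average, applied only inside S. A minimal MATLAB sketch (continuing from the uVDP of the previous fragment; the gains A, B and the choice of S are illustrative assumptions):

% High-boost filtering on the VDP, Eqs. (7.23) and (7.24)
A = 2; B = 1;                                 % brightness and sharpness gains (assumed)
uL = conv2(uVDP, ones(3)/9, 'same');          % 3 x 3 moving average, Eq. (7.24)
S = false(size(uVDP)); S(:, 1:end/2) = true;  % process one half of the VDP, as in Fig. 7.12
v = uVDP;
v(S) = A*uVDP(S) - B*uL(S);                   % Eq. (7.23)
% v is then forward-projected to the hologram plane via Eq. (7.22)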
7.5 Display using spatial light modulators
7.5.1 Resolution
To consider the high resolution issue, for simplicity, let us display an on-axis point-source hologram as an example. According to Chapter 2, the on-axis point-source hologram is given by

$$t(x, y) = A + B\sin\left[\frac{k_0}{2z_0}\left(x^2 + y^2\right)\right]. \qquad (7.25)$$
Again, z0 is the distance of the point source from the hologram. The instantaneous spatial frequency along the x-direction within the hologram is

$$f_{inst}(x) = \frac{1}{2\pi}\frac{d}{dx}\left[\frac{k_0}{2z_0}x^2\right] = \frac{x}{\lambda z_0}. \qquad (7.26)$$
Assuming the SLM is limited to the size xmax with spatial resolution f0, the highest spatial frequency of the hologram fringes is $f_{inst}(x_{max}) = x_{max}/\lambda z_0$, and if we want to record it, we must set

$$f_{inst}(x_{max}) = \frac{x_{max}}{\lambda z_0} \le f_0. \qquad (7.27)$$
Hence, for a given spatial resolution of the SLM, we can find the limiting aperture, xmax, of the hologram on the SLM. We define the numerical aperture of the hologram as

$$NA = \sin\left(\frac{\theta}{2}\right) = \frac{x_{max}}{\sqrt{x_{max}^2 + z_0^2}} = \frac{x_{max}}{z_0}\left[1 + \left(\frac{x_{max}}{z_0}\right)^2\right]^{-0.5}, \qquad (7.28)$$

where θ is the viewing angle shown in Fig. 7.13, as the on-axis hologram is reconstructed with a real point source located z0 away from the hologram.
Combining Eqs. (7.27) and (7.28), we can find the viewing angle in terms of the spatial resolution of the SLM as follows:

$$\theta = 2\sin^{-1}\left[\frac{\lambda f_0}{\sqrt{1 + \left(\lambda f_0\right)^2}}\right]. \qquad (7.29)$$
For an illumination wavelength λ = 0.6 µm, Fig. 7.14 shows the viewing angle θ as a function of f0.
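Equation (7.29) is easy to evaluate numerically. The short MATLAB sketch below reproduces a curve of the kind shown in Fig. 7.14 (only the wavelength is taken from the text; the range and plotting details are our own):

% Viewing angle versus SLM spatial resolution, Eq. (7.29)
lambda = 0.6e-3;                % wavelength in mm, so f0 is in lp/mm
f0 = 0:200;                     % spatial resolution (lp/mm)
theta = 2*asind(lambda*f0./sqrt(1 + (lambda*f0).^2));   % viewing angle (degrees)
plot(f0, theta); grid on;
xlabel('f_0 (lp/mm)'); ylabel('viewing angle \theta (degrees)');
% spot checks: f0 = 8 lp/mm gives about 0.55 degrees; f0 = 100 lp/mm about 6.9 degrees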
For example, Hamamatsu's electron-beam-addressed SLM, with a spatial resolution of about f0 = 8 lp/mm (line pairs per millimeter), gives a viewing angle of only about 0.6°. Even with Hamamatsu's parallel-aligned nematic liquid crystal SLM (PAL-SLM), with f0 = 100 lp/mm, we have θ ≈ 6.8°. In fact, none of the modern SLMs, typically with a spatial resolution of about 100 lp/mm, is suitable for large viewing angle three-dimensional display. Since 1 lp/mm means there are two pixels in 1 mm, 100 lp/mm corresponds to a pixel size of about (1/2) × (1/100) mm, or 5 µm, for modern SLMs. The situation becomes even worse if off-axis holography is employed, because the SLM then also needs to resolve the carrier frequency. For an offset angle of 45°, the spatial carrier frequency is sin 45°/λ ≈ 1000 lp/mm, corresponding to a pixel size of about 0.5 µm, well beyond modern SLM capability. Hence high resolution holographic display with SLMs is an important area of research.
An effective solution for higher resolution holographic display has been realized through the integration of optically addressed SLMs and the active tiling method [28, 29], but both the cost and the complexity of such systems are high. Here we discuss a modern approach to addressing the issue of low resolution SLMs: a novel type of digital hologram called the binary mask programmable hologram (BMPH) [30].
hr(x, y; z) is called the spatial impulse response of propagation of light [31]. The exact inverse transform has been given but is fairly complicated. For z many wavelengths away from p0(x, y), and with the paraxial approximation in the spatial domain, i.e., x² + y² ≪ z², applied to the amplitude factors, hr(x, y; z) becomes

$$h_r(x, y; z) = \frac{jk_0\exp\left(-jk_0\sqrt{x^2 + y^2 + z^2}\right)}{2\pi z}. \qquad (7.32)$$
$$I_d(x, y) = \left|\sum_{p=0}^{X-1}\sum_{q=0}^{Y-1} B(p, q)\exp\left[-jk_0\sqrt{(x - pd)^2 + (y - qd)^2 + z_d^2}\right]\right|, \qquad (7.34)$$

where X × d and Y × d are the horizontal and vertical extents of the hologram, respectively, and d is the width and the height of a pixel in B(x, y). Note that the above equation, with its double summation, is simply a discrete version of Eq. (7.33), in which we have neglected any constants in front of the integration. Without loss of
generality, we assume that the hologram and the image scene have identical
horizontal and vertical extents. From Eqs. (7.30) and (7.34), we can deduce that
the reconstructed image is dependent on the binary mask pattern, M(x, y). However, given Id(x, y), there is no explicit inverse formulation to compute M(x, y).
In view of this, the inverse problem has been treated as an optimization process to
determine the mask pattern that best approximates the target reconstructed image.
To begin with, an objective function Od is defined as the root-mean-square error (RMSE) [see Eq. (7.15)] between the reconstructed image Id(x, y) and a planar target image Td(x, y), which is located at a distance zd from the hologram, such that
$$O_d = \sqrt{\frac{1}{XY}\sum_{p=0}^{X-1}\sum_{q=0}^{Y-1}\left[T_d(p, q) - I_d(p, q)\right]^2}. \qquad (7.35)$$
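In code, scoring a candidate mask therefore means reconstructing Id and measuring Eq. (7.35) against the target. The MATLAB sketch below is ours: the variable names M, G, d, zd, lambda, Td and the normalization are assumptions, and, for speed, the direct summation of Eq. (7.34) is replaced by an equivalent FFT-based propagation.

% Score a candidate binary mask M against a target Td, Eqs. (7.34) and (7.35)
B = M.*G;                                     % masked high resolution grating, cf. Eq. (7.30)
N = size(B, 1);
fx = (-N/2:N/2-1)/(N*d); [FX, FY] = meshgrid(fx);
Hf = exp(-1i*2*pi*zd*sqrt(1/lambda^2 - FX.^2 - FY.^2));
Id = abs(ifft2(ifftshift(fftshift(fft2(B)).*Hf)));   % reconstructed image at zd
Id = Id/max(Id(:));                           % normalize before comparison (assumed)
Od = sqrt(mean((Td(:) - Id(:)).^2));          % RMSE objective, Eq. (7.35)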
Figure 7.16 (a) A planar image placed at 0.4 m from the hologram. (b) Simulated reconstructed image of the hologram displayed by an SLM of pixel size 5 µm at 0.4 m. (c) Same as (b) but with an SLM of pixel size 20 µm. The number of hologram samples is 256 × 256 and the wavelength of light used is 0.65 µm. From Ref. [30], with permission, OSA.
the case, the resolution of B(x, y) will have to be identical to that of the high resolution grating. In this method, a low resolution mask M(x, y) is determined so that, when it is coupled with the grating, it results in a reconstructed image that is similar to the target. This is a formidable problem that cannot be solved by brute-force means (such as a blind search), as there are $2^{XY/k^2}$ combinations of the mask pattern that can be represented in M(x, y). For example, for a modest square hologram with X and Y both equal to 256, and k = 4, the total number of patterns that can be generated is $2^{4096}$. In view of this, a simple genetic algorithm (SGA) has been employed [32], a method that mimics the evolutionary mechanism of biological species, to determine the optimal mask pattern. Past research has demonstrated that the SGA is effective in solving complicated optimization problems in many engineering applications [33]. As the principles and details of the SGA have been described in the literature, we shall only present results here.
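For readers who nonetheless want the flavor of the optimization, the sketch below is a bare-bones SGA of our own devising: the population size, generation count, mutation rate, and the hypothetical scoring helper maskRMSE (which would wrap Eqs. (7.34) and (7.35) as in the previous fragment) are all illustrative.

% Bare-bones SGA for a 64 x 64 binary mask (illustrative only)
P = 50; G = 200; pm = 0.01;            % population, generations, mutation rate (assumed)
pop = rand(P, 64*64) > 0.5;            % random initial binary masks
for g = 1:G
    f = zeros(P, 1);
    for i = 1:P
        f(i) = -maskRMSE(reshape(pop(i,:), 64, 64));  % fitness = -Od (maskRMSE is hypothetical)
    end
    [~, idx] = sort(f, 'descend');     % rank the population by fitness
    pop = pop(idx, :);
    for i = P/2+1:P                    % replace the worse half with offspring
        pa = pop(randi(P/2), :); pb = pop(randi(P/2), :);
        cut = randi(64*64 - 1);        % single-point crossover
        child = [pa(1:cut) pb(cut+1:end)];
        pop(i, :) = xor(child, rand(1, 64*64) < pm);  % bitwise mutation
    end
end
bestMask = reshape(pop(1, :), 64, 64);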
The effect of using a low resolution SLM for holographic display is shown in Fig. 7.16. Obviously, the low resolution SLM with a 20 µm pixel size has problems reconstructing the hologram, as shown in Fig. 7.16(c), whereas the SLM with a 5 µm pixel size can display the hologram reasonably well.
When the BMPH is generated for the target image shown in Fig. 7.16(a), with a grating size of 256 × 256 pixels (pixel size 5 µm square) and a binary mask of 64 × 64 pixels (shown in Fig. 7.17(a)) with a pixel size of 20 µm, which is four times worse than that of the Fresnel hologram as k = 4, the reconstructed image is as shown in Fig. 7.17(b).
The main feature of the BMPH is that the target image it represents can be composed by simply changing the pattern on the binary mask. As the binary mask can be implemented with less stringent electronic devices, the method can be used for holographic display where high resolution SLMs are simply not available.
Figure 7.17 (a) Binary mask corresponding to the target image in Fig. 7.16(a).
(b) Reconstructed image of the BMPH at 0.4 m. From Ref. [30], with permission,
OSA.
compared to the case of full parallax of 2.4 gigapixels. The serial data rate reduces to 12.1 million samples/frame × 8 bits/sample × 30 frames/second ≈ 2.9 gigabits/second, which is quite manageable with a modern Ethernet connection. Hence HPO holography aims to record and reconstruct only HPO information, and many HPO holographic systems have been proposed and studied [34, 35].
The first computer-generated HPO digital holographic system was proposed and constructed by St. Hilaire et al. [36]. The authors achieved a displayed image of 5 cm × 3 cm × 3 cm with a viewing angle of about 15°. However, the first digital holographic recording technique to be investigated regarding the HPO approach was that proposed by Poon et al. [37]. The HPO digital holographic system is based on the principle of optical scanning holography. The idea is fairly simple. Instead of scanning a three-dimensional object with a full two-dimensional time-dependent Fresnel zone plate [see Eq. (5.46)] to obtain holographic information, a one-dimensional time-dependent Fresnel zone plate is used to generate HPO holograms. However, these slit-type holograms produce vertical spreading for each point of the object and require a cylindrical lens to compensate for this spreading during reconstruction. Recently, a computer algorithm has been proposed to convert a full-parallax hologram to an off-axis HPO hologram for three-dimensional display [38], and three-dimensional display has been achieved [39]. However, the display is confined to simple point objects.
$$g(x, y) = \frac{1}{2} + \frac{m}{2}\cos\left(\frac{2\pi x}{\Lambda}\right), \qquad (7.39)$$

where m is the modulation depth of the grating and Λ is the period of the grating.
To ensure perfect registration of the two holograms on the output plane, Λ = λf₁/d. Also, in order to eliminate I0 from being displayed on the output plane, a neutral density filter with a phase plate, given by $H_{dc} = mI_0\,e^{j5\pi/4}$, is used at the center of the input plane. The overall pattern that is displayed on the SLM is shown in
plane is used. The overall pattern that is displayed on the SLM is shown in
Fig. 7.19(a). Figure 7.19(b) is the display of the output plane where the central
portion of the area, which is marked with a dashed square, is the synthesized
complex hologram. Figure 7.19(c) is the intensity of the reconstructed image at the
reconstruction plane. The simulation results in Fig. 7.19 used gray tone holograms.
However, binary holograms can be produced swiftly with a printer. In addition,
with binary holograms, we can enhance the storage capacity of digital holograms
and facilitate more efcient transmission of holograms.
To accomplish the binarization process, we first multiply the object pattern, u, by a random phase and then calculate the corresponding diffraction field at the recording plane by convolving it with h, the spatial impulse response in Fourier optics. The binary holograms are then obtained by binarizing the real and the imaginary parts of the complex hologram by sign binarization. The two binary holograms are, therefore, given by

$$B_r(x, y) = B_0\left\{\mathrm{Re}\left[\left(u\,e^{j\phi}\right) * h\right]\right\}, \qquad (7.40a)$$

$$B_i(x, y) = B_0\left\{\mathrm{Im}\left[\left(u\,e^{j\phi}\right) * h\right]\right\}, \qquad (7.40b)$$
Figure 7.19 (a) Designed input pattern at the SLM, (b) optical field at the output plane, and (c) intensity of the reconstructed image at the reconstruction plane. From Ref. [10], with permission, OSA.
Figure 7.20 (a) Binary hologram Br(x, y), (b) binary hologram Bi(x, y), (c) image reconstruction using Br(x, y), and (d) image reconstruction using the complex hologram, Br(x, y) + jBi(x, y). From Ref. [10], with permission, OSA.
See Table 7.4 for the MATLAB code.
Table 7.4 MATLAB code for generating two binary holograms to synthesize a display
hologram, see Fig. 7.20
% Assumed preamble (not in the extracted listing): load the object pattern
% and set the wavelength and pixel pitch; the values are illustrative.
I0=imread('FCU.bmp');            % object pattern (hypothetical file name)
I0=double(I0); I0=I0/max(I0(:));
lambda=0.000633;                 % wavelength (mm, assumed)
delta=0.01;                      % pixel pitch (mm, assumed)
[M, N]=size(I0);
I0=I0.*exp(2i*pi*rand(M,N));     % random phase, as described in the text
z=-80+lambda/4;                  % (mm, distance)
r=1:M;
c=1:N;
[C, R]=meshgrid(c, r);
% angular spectrum transfer function; frequency spacing is 1/(N*delta)
% horizontally and 1/(M*delta) vertically
p=exp(-1i*2*pi*z.*((1/lambda)^2-(1/N/delta)^2.*...
    (C-N/2-1).^2-(1/M/delta)^2.*(R-M/2-1).^2).^0.5);
A0=fftshift(ifft2(fftshift(I0)));
Az=A0.*p;
E=fftshift(fft2(fftshift(Az)));  % propagation to the recording plane
% binary holograms by sign binarization
Hr=real(E);
Hr=(Hr>0);
Hi=imag(E);
Hi=(Hi>0);
% Reconstruction (these lines are missing from the extracted listing; the
% following is a minimal sketch that back-propagates B_r alone and the
% synthesized complex hologram B_r + jB_i)
Ar=fftshift(ifft2(fftshift(double(Hr))));
Ir=abs(fftshift(fft2(fftshift(Ar.*conj(p))))).^2;
Ac=fftshift(ifft2(fftshift(double(Hr)+1i*double(Hi))));
I=abs(fftshift(fft2(fftshift(Ac.*conj(p))))).^2;
Ir=Ir/max(Ir(:)); I=I/max(I(:)); % normalize for display
figure;
imshow(mat2gray(Hr));
title('Binary hologram B_r')
axis off
figure;
imshow(mat2gray(Hi));
title('Binary hologram B_i')
axis off
figure;
imshow(3*Ir);
title('Reconstructed image of B_r')
axis off
figure;
imshow(3*I);
title('Reconstructed image of the synthetic hologram')
axis off
where φ is a random phase function uniformly distributed between 0 and 2π, and B0{·} is the binarization operator with the threshold value set to zero, i.e., if the input value is larger than zero the output value is 1; otherwise the output value is zero. The purpose of the random phase is to reduce the edge effect caused by binarization.
The two binary holograms Br(x, y) and Bi(x, y) for the text "FCU" are shown in Figs. 7.20(a) and (b), respectively. Figure 7.20(c) is the reconstruction of a single binary hologram, while Fig. 7.20(d) shows the reconstruction of the synthesized complex hologram. The MATLAB code is listed in Table 7.4. Although some artifacts due to the binarization process exist, most of the background noise, such as the noise due to the twin image, has been removed, as shown in Fig. 7.20(d). For optical reconstruction of Br(x, y) and Bi(x, y) using the system shown in Fig. 7.18, readers are encouraged to consult Ref. [10].
Problems
7.1 Generate a detour-phase CGH in MATLAB by setting the relevant parameter in Eq. (7.12) to 1. Note that there may now be crosstalk between adjacent cells.
7.2 In developing the detour-phase CGH, we applied several approximations, namely

$$\frac{p_{mn}k_0\,dx}{2f}\ll 1,\qquad \frac{q_{mn}k_0\,dy}{2f}\ll 1,\qquad \lambda f\gg dx\,p_{mn}\sin\theta,\qquad \text{and}\qquad \frac{k_0\,\phi_{mn}\,dx}{f}\ll 1.$$

Propose a design strategy (i.e., determine the hologram size, cell size, window size, ...) so that all the approximations can be satisfied.
7.3 Equation (7.26) is a formula derived under the paraxial approximation. Derive the local fringe frequency of the interferogram formed by a plane wave and a spherical wave without applying the paraxial approximation. Then go on to show that the viewing angle becomes

$$\theta = 2\sin^{-1}\left(\lambda f_0\right)$$

instead of the angle derived under the paraxial approximation, given by Eq. (7.29). Finally, plot θ versus f0 for f0 up to 1000 lp/mm at λ = 0.6 µm.
References

1. B. R. Brown, and A. W. Lohmann, Complex spatial filtering with binary masks, Applied Optics 5, 967–969 (1966).
2. A. W. Lohmann, and D. P. Paris, Binary Fraunhofer holograms, generated by computer, Applied Optics 6, 1739–1748 (1967).
3. L. B. Lesem, P. M. Hirsch, and J. A. Jordan, The kinoform: a new wavefront reconstruction device, IBM Journal of Research and Development 13, 150–155 (1969).
4. N. C. Gallagher, and B. Liu, Method for computing kinoforms that reduces image reconstruction error, Applied Optics 12, 2328–2335 (1973).
5. B. Liu, and N. C. Gallagher, Convergence of a spectrum shaping algorithm, Applied Optics 13, 2470–2471 (1974).
6. P. M. Hirsch, J. J. A. Jordan, and L. B. Lesem, Method of making an object-dependent diffuser, U.S. patent 3,619,022 (1971).
7. R. W. Gerchberg, and W. O. Saxton, A practical algorithm for the determination of phase from image and diffraction plane pictures, Optik 35, 237–246 (1972).
8. J. R. Fienup, Phase retrieval algorithms: a comparison, Applied Optics 21, 2758–2769 (1982).
9. J. R. Fienup, Phase retrieval algorithms: a personal tour [Invited], Applied Optics 52, 45–56 (2013).
10. J.-P. Liu, W.-Y. Hsieh, T.-C. Poon, and P. Tsang, Complex Fresnel hologram display using a single SLM, Applied Optics 50, H128–H135 (2011).
11. L. C. Ferri, Visualization of 3D information with digital holography using laser printers, Computers & Graphics 25, 309–321 (2001).
12. H. Yoshikawa, and M. Tachinami, Development of direct fringe printer for computer-generated holograms, Proceedings SPIE 5742, 259–266 (2005).
13. T. Shimobaba, N. Masuda, and T. Ito, Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane, Optics Letters 34, 3133–3135 (2009).
14. T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display, Optics Express 18, 19504–19509 (2010).
15. T. Yamaguchi, G. Okabe, and H. Yoshikawa, Real-time image plane full-color and full-parallax holographic video display system, Optical Engineering 46, 125801 (2007).
16. P. Tsang, J.-P. Liu, W.-K. Cheung, and T.-C. Poon, Fast generation of Fresnel holograms based on multirate filtering, Applied Optics 48, H23–H30 (2009).
17. S.-C. Kim, J.-H. Kim, and E.-S. Kim, Effective reduction of the novel look-up table memory size based on a relationship between the pixel pitch and reconstruction distance of a computer-generated hologram, Applied Optics 50, 3375–3382 (2011).
18. P. W. M. Tsang, and T. C. Poon, Review on theory and applications of wavefront recording plane framework in generation and processing of digital holograms, Chinese Optics Letters 11, 010902 (2013).
19. T.-C. Poon, On the fundamentals of optical scanning holography, American Journal of Physics 76, 739–745 (2008).
20. P. Tsang, W. K. Cheung, T. C. Poon, and C. Zhou, Holographic video at 40 frames per second for 4-million object points, Optics Express 19, 15205–15211 (2011).
21. A. Anand, and S. Vijay Raj, Sectioning of amplitude images in digital holography, Measurement Science and Technology 17, 75–78 (2006).
22. W.-C. Chien, D. S. Dilworth, E. Liu, and E. N. Leith, Synthetic-aperture chirp confocal imaging, Applied Optics 45, 501–510 (2006).
23. T. Kim, Optical sectioning by optical scanning holography and a Wiener filter, Applied Optics 45, 872–879 (2006).
24. X. Zhang, E. Y. Lam, and T.-C. Poon, Reconstruction of sectional images in holography using inverse imaging, Optics Express 16, 17215–17226 (2008).
25. D. J. Brady, K. Choi, D. L. Marks, R. Horisaki, and S. Lim, Compressive holography, Optics Express 17, 13040–13049 (2009).
26. P. W. M. Tsang, T. C. Poon, and K. W. K. Cheung, Enhancing the pictorial content of digital holograms at 100 frames per second, Optics Express 20, 14183–14188 (2012).
27. T.-C. Poon, Three-dimensional television using optical scanning holography, Journal of Information Display 3, 12–16 (2002).
28. M. Stanley, R. W. Bannister, C. D. Cameron, S. D. Coomber, I. G. Cresswell, J. R. Hughes, V. Hui, P. O. Jackson, K. A. Milham, R. J. Miller, D. A. Payne, J. Quarrel, D. C. Scattergood, A. P. Smith, M. A. G. Smith, D. L. Tipton, P. J. Watson, P. J. Webber, and C. W. Slinger, 100-megapixel computer-generated holographic images from active tiling: a dynamic and scalable electro-optic modulator system, Proceedings SPIE 5005, 247–258 (2003).
29. H.-S. Lee, H. Song, S. Lee, N. Collings, and D. Chu, High resolution spatial light modulator for wide viewing angle holographic 3D display, in Collaborative Conference on 3D Research (CC3DR) (2012), pp. 71–72.
30. P. W. M. Tsang, T. C. Poon, C. Zhou, and K. W. K. Cheung, Binary mask programmable hologram, Optics Express 20, 26480–26485 (2012).
31. T.-C. Poon, and T. Kim, Engineering Optics with MATLAB (World Scientific, Singapore, 2006).
32. D. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning (Addison-Wesley, Boston, MA, 1989).
33. A. M. Zalzala, and P. J. Fleming, eds., Genetic Algorithms in Engineering Systems (Institution of Electrical Engineers, Stevenage, UK, 1997).
34. S. A. Benton, The mathematical optics of white light transmission holograms, in International Symposium on Display Holography (Lake Forest College, Lake Forest, July 1982).
35. C. P. Grover, R. A. Lessard, and R. Tremblay, Lensless one-step rainbow holography using a synthesized masking slit, Applied Optics 22, 3300–3304 (1983).
36. P. St. Hilaire, S. A. Benton, and M. Lucente, Synthetic aperture holography: a novel approach to three-dimensional displays, Journal of the Optical Society of America A 9, 1969–1977 (1992).
37. T.-C. Poon, T. Akin, G. Indebetouw, and T. Kim, Horizontal-parallax-only electronic holography, Optics Express 13, 2427–2432 (2005).
38. T. Kim, Y. S. Kim, W. S. Kim, and T.-C. Poon, Algorithm for converting full-parallax holograms to horizontal-parallax-only holograms, Optics Letters 34, 1231–1233 (2009).
39. Y. S. Kim, T. Kim, T.-C. Poon, and J. T. Kim, Three-dimensional display of a horizontal-parallax-only hologram, Applied Optics 50, B81–B87 (2011).
40. T.-C. Poon, T. Kim, G. Indebetouw, B. W. Schilling, M. H. Wu, K. Shinoda, and Y. Suzuki, Twin-image elimination experiments for three-dimensional images in optical scanning holography, Optics Letters 25, 215–217 (2000).
41. B. E. A. Saleh, and K. Lu, Theory and design of the liquid crystal TV as an optical spatial phase modulator, Optical Engineering 29, 240–246 (1990).
42. J. A. Coy, M. Zaldarriaga, D. F. Grosz, and O. E. Martinez, Characterization of a liquid crystal television as a programmable spatial light modulator, Optical Engineering 35, 15–19 (1996).
43. L. G. Neto, D. Roberge, and Y. Sheng, Full-range, continuous, complex modulation by the use of two coupled-mode liquid-crystal televisions, Applied Optics 35, 4567–4576 (1996).
44. R. Tudela, I. Labastida, E. Martín-Badosa, S. Vallmitjana, I. Juvells, and A. Carnicer, A simple method for displaying Fresnel holograms on liquid crystal panels, Optics Communications 214, 107–114 (2002).
45. M.-L. Hsieh, M.-L. Chen, and C.-J. Cheng, Improvement of the complex modulated characteristic of cascaded liquid crystal spatial light modulators by using a novel amplitude compensated technique, Optical Engineering 46, 070501 (2007).
46. R. Tudela, E. Martín-Badosa, I. Labastida, S. Vallmitjana, I. Juvells, and A. Carnicer, Full complex Fresnel holograms displayed on liquid crystal devices, Journal of Optics A: Pure and Applied Optics 5, S189 (2003).
47. R. Tudela, E. Martín-Badosa, I. Labastida, S. Vallmitjana, and A. Carnicer, Wavefront reconstruction by adding modulation capabilities of two liquid crystal devices, Optical Engineering 43, 2650–2657 (2004).
48. S.-G. Kim, B. Lee, and E.-S. Kim, Removal of bias and the conjugate image in incoherent on-axis triangular holography and real-time reconstruction of the complex hologram, Applied Optics 36, 4784–4791 (1997).
Index

Fresnel 68
Gabor 59
image 64
kinoform 185
lensless Fourier 70
off-axis 61 (see also hologram, carrier frequency)
rainbow 73
holography 27
compressive 161
computer-generated 179
diffraction tomographic 133
horizontal-parallax-only (HPO) 205
low-coherence digital 126
optical scanning (OSH) 137
parallel phase-shifting (PPSH) 124
phase-shifting (PSH) 118
quadrature phase shifting (QPSH) 120
pixel pitch 90
pixel size 83 (see also pixel pitch)
plane wave 3
power spectrum 47
propagation vector 3
pseudoscopic image 32 (see also orthoscopic image)
PSF engineering 138 (see also pupil engineering)
pupil function 15
pupil engineering 138 (see also PSF engineering)
quasi-monochromatic light 43, 45, 51
reconstruction wave 30
record length 81
reference wave 29
resolution 91
root-mean-square error (RMSE) 189, 203