State-of-the-art active optical techniques for three-dimensional surface metrology: a review [Invited]

Andres G. Marrugo,¹ Feng Gao,² and Song Zhang³,*

¹Facultad de Ingeniería, Universidad Tecnológica de Bolívar, Cartagena, Colombia
²School of Computing and Engineering, University of Huddersfield, Queensgate, Huddersfield HD1 3DH, UK
³School of Mechanical Engineering, Purdue University, 585 Purdue Mall, West Lafayette, Indiana 47907, USA
*Corresponding author: szhang15@purdue.edu

Received 27 May 2020; revised 7 July 2020; accepted 7 July 2020; posted 8 July 2020 (Doc. ID 398644); published 7 August 2020

This paper reviews recent developments of non-contact three-dimensional (3D) surface metrology using an active structured optical probe. We focus primarily on those active non-contact 3D surface measurement techniques that could be applicable to the manufacturing industry. We discuss the principles of each technology, and its advantageous characteristics as well as limitations. Towards the end, we discuss our perspectives on the current technological challenges in designing and implementing these methods in practical applications.

Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

https://doi.org/10.1364/JOSAA.398644

1. INTRODUCTION

With the increasing availability of computational power on computing devices and cloud sources, and options of affordable optical three-dimensional (3D) surface metrology tools, there is a rapidly growing interest in employing those tools for practical solutions. Yet there is one question that has not been fully addressed: What would be the "best" option for a given application? This seemingly easy question is difficult to answer without understanding the state-of-the-art technologies and their advantageous features as well as their limitations. With this review, we intend to provide valuable information for a decision-maker to select a technology that is more likely to be successful for a given application.

The concept of structured light (SL) has been used by different scientific communities with different interpretations [1]. For example, in computer science, SL often refers to 3D imaging techniques using binary-coded structured patterns projected by a digital video projector, while in the optics community, SL is broadened to include techniques using sinusoidal structured patterns, and later defocused binary patterns, projected by a digital video projector or even a mechanical projector. Essentially, SL, as an optical non-contact 3D imaging method, produces a 3D representation by "probing" the visible surfaces of an object with projected illumination having predefined spatiotemporal structures. As such, SL could broadly include interferometry or time of flight (ToF). In general, the output of a 3D imaging system is usually a set of points (i.e., a point cloud) with (x, y, z) coordinates for each measurement point in the Cartesian coordinate system.

In this paper, we classify the 3D optical imaging methods according to their "probing" structured patterns. The patterns can be broadly classified into three categories: discrete, continuous, or hybrid, meaning a combination of the previous two. The discrete methods refer to systems using structured patterns that are discrete in space (for example, a dot, a line, an area of dot patterns, a group of lines, or binary area patterns) or in time (a pulse or a train of pulses). The continuous methods refer to systems using structured patterns that are continuous in space or time, including color spectral encoding, interference using coherent or incoherent light, continuous-wave (CW) ToF, digital fringe projection (DFP), and binary defocusing. The hybrid methods use both discrete and continuous patterns to accomplish a measurement. This paper focuses primarily on 3D surface measurement using triangulation-based SL methods; however, as there are numerous similarities between "conventional" SL methods and interferometry or ToF, this paper will also overview those methods to provide a larger picture of the state-of-the-art 3D optical surface measurement techniques.

For each technology, we will briefly overview the principle and present the advantageous features and limitations from our perspective. We focus primarily on recent developments that have been shown to improve the performance or capability of 3D surface measurement techniques. Also, we will refer to several classical technical or review papers for the interested reader to learn more. We will include technologies that have not been


included or thoroughly discussed in other review papers, along with technologies proven successful in applications.

It is important to know that, when referring to 3D surface measurement, accuracy is likely to be one of the first performance parameters to consider. However, accuracy requirements vary across applications, and there are applications where accuracy is not the most critical performance metric. For example, for 3D surface measurement techniques embedded in consumer electronics devices (e.g., smartphones, tablets), power consumption, speed, and footprint take a higher priority than accuracy. There are also applications, such as human-machine interfaces, autonomous vehicles, and robots, that present challenges not only in 3D measurement accuracy but even more in instantaneous data analytics. As such, low-accuracy yet flexible and more affordable technologies will also be covered.

Despite numerous successes in employing 3D surface measurement techniques to solve practical application problems, the application portfolio is likely to continue growing. Moreover, each application requires a certain level of customization to achieve the best performance. However, to the best of our knowledge, there is still a lack of general guidelines or tools for non-experts to easily and rapidly optimize measurement systems (software and hardware) to achieve the best performance. We will discuss some recent efforts that could pave the way for overcoming these challenges.

Large strides have been made in the field of 3D surface measurement over the past decades. However, there are still numerous remaining challenges for any of the state-of-the-art techniques to conquer. This paper will list some of the critical challenges that we believe need to be addressed. Often, an interdisciplinary collaborative effort is necessary to tackle some of the challenging problems. We will cast our perspectives on how to address each of these challenges.

We aim at an integrative review, attempting to find common ideas and concepts in the reviewed materials and to provide critical summaries of each subject. This paper is written as a reference for researchers, graduate students, engineers, or scientists from industry working in the field of optical metrology or developing products and applications that use these systems or principles.

Section 2 explains the general principles of recovering 3D information from structured patterns. In Section 3, we present recent advances. In Section 4, we discuss several challenges in the field, and finally, in Section 5, we provide a summary of the review.

Fig. 1. Performance of various optical surface measurement techniques. Image was recreated based on the image in Ref. [2].

2. FUNDAMENTALS OF 3D OPTICAL SURFACE MEASUREMENT TECHNIQUES

Recovering the 3D shape of an object through the intensity registered by a sensor is the purpose of active optical techniques. These techniques probe the scene with a customized/tailored light beam that enables highly precise and reliable measurements of the object surface topography, through codification methods that depend on the type of structured illumination method and setup. There are numerous methods for optical 3D surface measurement, each having its advantages and disadvantages. They can be classified into two major categories: methods that require triangulation and methods that do not. The former derives from the human perception system (i.e., stereo vision). The latter is related to the physical nature of light (e.g., how light travels in space and time). Even though there are many methods to recover depth from other properties of light (e.g., shadowing, lens interaction), we consider primarily three major areas of 3D surface measurement methods: triangulation, ToF, and wave interference (e.g., holography, interferometry). We should emphasize that, to the best of our knowledge, there is no existing system that works best for everything; each method is most appropriate for certain metrological requirements [e.g., accuracy, uncertainty, object size, depth of field (DOF)]. Figure 1 summarizes the overall performance. This section explains the fundamentals of each method along with state-of-the-art advancements.

A. Interferometry-Based Surface Metrology

Interferometry is the most accurate measurement technology at the heart of modern optical metrology. It was used for the SI definition of the meter, for the detection of gravitational waves, and generally for the most sensitive measurements in science and industry. Optical interferometry has been explored widely for surface measurement because of its non-contact nature and high measurement accuracy. This subsection discusses these techniques.

1. Phase-Shifting Interferometry

To achieve high measurement resolution and accuracy, phase-shifting interferometry (PSI) is often the natural choice [3]. Various phase-shifting algorithms have been developed for phase retrieval [4]. In general, for an N-step phase-shifting algorithm, the phase can be recovered by

\[
\phi(x, y) = -\tan^{-1}\left[\frac{\sum_{k=1}^{N} I_k(x, y)\,\sin(2\pi k/N)}{\sum_{k=1}^{N} I_k(x, y)\,\cos(2\pi k/N)}\right], \tag{1}
\]

where
\[
I_k(x, y) = I'(x, y) + I''(x, y)\cos[\phi(x, y) + 2\pi k/N]. \tag{2}
\]

Here, I′(x, y) denotes the average intensity, I″(x, y) denotes the intensity modulation, and φ(x, y) is the carrier phase. High-speed applications typically use a three-step (N = 3) or four-step (N = 4) phase-shifting algorithm because only a small number of patterns needs to be captured.

Since Eq. (1) uses an inverse tangent function, the resultant phase value ranges over [−π, +π), i.e., it is wrapped modulo 2π. Moreover, due to the 2π ambiguity of the phase measurement, a null setup is required to obtain accurate surface test results.

The phase obtained from Eq. (1) is a wrapped phase, which usually cannot be used directly for 3D surface measurement before the 2π discontinuities are removed. The process of detecting and rectifying the phase for each pixel is called phase unwrapping. Once the phase is unwrapped, the obtained phase can be used for subsequent 3D reconstruction.
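To make Eqs. (1) and (2) concrete, the following minimal sketch (our illustration, not code from the reviewed works) computes the wrapped phase from a stack of N phase-shifted fringe images; np.arctan2 is used so the sign information places the phase in the correct quadrant:

```python
import numpy as np

def wrapped_phase(images):
    """Recover the wrapped phase from N phase-shifted images, Eq. (1).

    images: array of shape (N, H, W); the k-th frame is shifted by 2*pi*k/N.
    Returns the phase in [-pi, pi) for every pixel.
    """
    N = images.shape[0]
    k = np.arange(1, N + 1).reshape(-1, 1, 1)  # phase-shift index k = 1..N
    num = np.sum(images * np.sin(2 * np.pi * k / N), axis=0)
    den = np.sum(images * np.cos(2 * np.pi * k / N), axis=0)
    return -np.arctan2(num, den)  # atan2 resolves the quadrant ambiguity

# Example: synthesize three-step (N = 3) fringe images per Eq. (2) and verify.
H, W = 4, 4
phi_true = np.random.uniform(-np.pi, np.pi, (H, W))
k = np.arange(1, 4).reshape(-1, 1, 1)
frames = 0.5 + 0.4 * np.cos(phi_true + 2 * np.pi * k / 3)
assert np.allclose(wrapped_phase(frames), phi_true)
```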
Phase unwrapping can be classified into spatial phase unwrapping and temporal phase unwrapping. A spatial phase unwrapping algorithm [5,6] analyzes the wrapped phase to determine the "proper" number of 2π's (or fringe order) to be added to a point based on a surface smoothness assumption. A temporal phase unwrapping algorithm (e.g., [7,8]) temporally acquires additional information to determine the unique fringe order for each point. Each of these phase unwrapping methods has its merits and limitations. The spatial phase unwrapping methods do not require additional temporal information acquisition. However, they require the surface to be smooth, impose a limited depth range, or increase system complexity and cost. The temporal phase unwrapping algorithms are more robust for arbitrary objects, yet require longer times to acquire the necessary information.
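To illustrate the temporal approach (a generic two-frequency sketch of ours, not the specific algorithms of Refs. [7,8]), a second phase map acquired with much wider fringes can serve as a coarse, unambiguous reference from which the fringe order of each pixel is computed:

```python
import numpy as np

def two_frequency_unwrap(phi_high, phi_low, ratio):
    """Generic two-frequency temporal phase unwrapping (illustrative).

    phi_high: wrapped phase of the high-frequency fringe pattern
    phi_low:  phase of a pattern with `ratio`-times-wider fringes, chosen
              so that it has no 2*pi ambiguity over the measurement volume
    """
    # Fringe order = number of 2*pi jumps, estimated from the coarse phase.
    k = np.round((ratio * phi_low - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k  # absolute (unwrapped) phase

wrap = lambda p: (p + np.pi) % (2 * np.pi) - np.pi  # wrap into [-pi, pi)

# Example: absolute phase spanning 5 fringe periods of the dense pattern.
phi_abs = np.linspace(-5 * np.pi, 5 * np.pi, 1000, endpoint=False)
phi_h = wrap(phi_abs)        # dense pattern: wrapped, ambiguous
phi_l = wrap(phi_abs / 5)    # coarse pattern: single fringe, unambiguous
assert np.allclose(two_frequency_unwrap(phi_h, phi_l, 5), phi_abs)
```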
2. Coherence Scanning Interferometry

The speckle noise caused by single-wavelength coherent laser light decreases the signal-to-noise ratio (SNR) and thus limits the achievable resolution. To eliminate the problems caused by speckle, white light or broadband light is used as the light source to illuminate the measurement and reference surfaces [9]. Because a white light source has a very limited coherence length, interference signals can be observed only when the optical path difference (OPD) between the reference arm and the measurement arm is within the coherence length of the light source. Typically, only a few interference fringes can be observed, and the maximum fringe position is at the zero-OPD position. By vertically scanning one of its optical arms, a set of interferograms at each image pixel of the CCD camera can be recorded. This technology is called coherence scanning interferometry (CSI). CSI is also known as coherence radar (CR), white-light scanning interferometry (WLSI), or vertical scanning interferometry (VSI) [10–13]. It is used widely in microscale 3D profilometry.

Fig. 2. Basic principle of CSI.

A typical CSI system setup for obtaining the 3D surface of a measurand is shown in Fig. 2. The light from a broadband light source is collimated and split into a reference beam and a measurement beam by the beam splitter. The reference beam and the measurement beam, incident on the reference mirror and the measurand surface, are reflected back and superposed after the two beams are combined by the beam splitter. The superposed interference images of the measured surface topography are sampled by the imager of the light detector, which is normally a CCD/CMOS camera. The OPD between the two beams is reflected in the phase of an interference image; by analyzing the interferograms, the zero-OPD position of each pixel can be determined, which corresponds to the mechanical scanning position of the scanner. In this way, the surface topography of the measurand can be determined accurately with subnanometer vertical resolution. The lateral resolution depends on the microscope objective lens used in the measurement and is normally submicrometer to a few micrometers.

CSI is sensitive to environmental disturbances and requires a controlled environment for its use, which differs from applications such as shop floor testing and in situ/in-line measurement. It also has some unwanted measurement errors due to its interference nature and data processing algorithms [14]. CSI is normally used for micro-scale surface measurement. For large surface measurements, multiple overlapped measurements and stitching algorithms are needed, which can be error prone and time consuming. It is also troublesome to use on non-standard surfaces, for instance, surfaces with variable reflectivity, multilayered materials, and additively manufactured parts.

3. Computer-Generated Holography

Computer-generated holography (CGH) is a method to generate holographic interference patterns digitally [15]. For macroscale surface measurement such as optical surface measurement, PSI is used widely for high-accuracy optical inspection. However, because of the 2π ambiguity of the phase measurement, PSI has only a few hundred nanometers of vertical measurement range. For surface form errors exceeding the 2π ambiguity range, a null setup is required to obtain accurate surface test results. For near-plane or near-spherical surfaces under test, an optical null compensator can be used to set up a null measurement. For freeform and aspheric surfaces, which are used widely due to their advantages in functionality and performance, a physical null is extremely difficult to realize. In this case, CGHs can be used as the null components in PSI measurement, which have the advantage that the null wavefronts of
the objects are generated from an entirely digitally synthesized hologram [16–18]. CGHs are powerful because the holograms can change a wavefront into virtually any shape that a computer can specify. CGHs are increasingly used as null components in interferometric tests for their capability to accurately generate a freeform null wavefront [16–18]. However, CGHs can be excessively expensive, and each can null only a specific surface configuration.

B. Time-of-Flight-Based Surface Metrology

Interferometry-based techniques are often the choice for microscale 3D surface metrology, but there are applications where accuracy is not the primary concern while the field of view (FOV) and range are. For these, ToF-based surface measurement techniques can be appealing.

ToF is essentially a ranging technique that simultaneously measures many points, as opposed to point-by-point measurement such as in scanning lidar [19]. The distance d to an object is calculated by measuring the time delay τ of the round trip between the emitted modulated light and the detected back-reflected light. The distance is determined by

\[
d = \frac{c \cdot \tau}{2}, \tag{3}
\]

where c is the speed of light. Despite the simplicity of Eq. (3), its implementation is technologically challenging because it involves the speed of light. The accurate measurement of the round-trip time τ is usually solved by two approaches: i) direct methods that either measure the time τ by pulsed light or the phase ϕ by CW operation, and ii) indirect methods that derive τ (or ϕ) from time-gated measurements of the signal at the receiver. It is important to note that, in general, the emitted signal can have temporal or spatiotemporal modulation (space–time structured illumination) to probe the surface of the object and perform surface measurement [20].

Fig. 3. Basic principle of ToF.

The most common operation mode found in commercial devices is the CW approach, in which the source intensity is modulated at radio frequencies (tens of MHz). The detector reconstructs the phase change Δϕ between the reflected and emitted signals. The distance is calculated by scaling the phase by the modulation frequency, as shown in Fig. 3. This method is called amplitude-modulated CW (AMCW) ToF, and it offers a suitable SNR for real-time, consumer applications [21]. In this mode of operation, when the emitted pulse extends beyond the maximum range, the resulting phase is wrapped, and phase unwrapping is required. Often, another, lower modulation frequency is used to capture a second phase map that can be used to unwrap the phase with an algorithm similar to the two-wavelength PSI algorithm [7].

Fig. 4. ToF depth measurement using phase offset. Copyright [2011] IEEE. Reprinted, with permission, from Ref. [22].

The typical operation consists of emitting modulated near-infrared (NIR) light via light-emitting diodes (LEDs), which is then reflected from the surface to the sensor. As illustrated in Fig. 4, every sensor pixel samples the light reflected by the scene four times at equal intervals within every period, m0, ..., m3, which allows for the parallel measurement of the phase difference:

\[
\Delta\phi = \tan^{-1}\left(\frac{m_3 - m_1}{m_0 - m_2}\right). \tag{4}
\]

The target distance d can be calculated from the phase Δϕ by

\[
d = \frac{c \cdot \Delta\phi}{4\pi \cdot f_m}, \tag{5}
\]

where f_m is the modulation frequency. Once the target distance d is known and the camera lens is calibrated (discussed in Section 2.D.3), the (x, y, z) coordinates can be calculated.

Although ToF has limitations in accuracy and depth resolution, it has been used extensively in commercial products (e.g., Microsoft Kinect Azure DK, 2020 iPad Pro), especially for long-range measurement, because of its merits including compactness and relative robustness to motion error.
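The four-sample AMCW computation of Eqs. (4) and (5) is compact enough to sketch directly (ours, assuming ideal sinusoidal samples; real sensors add the systematic errors discussed in Section 2.D.3):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def amcw_distance(m0, m1, m2, m3, f_mod):
    """AMCW ToF distance from four equally spaced samples, Eqs. (4)-(5).

    m0..m3: per-pixel samples of the reflected signal (scalars or arrays)
    f_mod:  modulation frequency in Hz
    """
    dphi = np.arctan2(m3 - m1, m0 - m2)      # Eq. (4), quadrant-aware
    dphi = np.mod(dphi, 2 * np.pi)           # map into [0, 2*pi)
    return C * dphi / (4 * np.pi * f_mod)    # Eq. (5)

# Example: target at 2.5 m, 20 MHz modulation (unambiguous range c/(2f) = 7.5 m).
d_true, f = 2.5, 20e6
phi = 4 * np.pi * f * d_true / C             # phase accumulated by the round trip
m = [np.cos(phi + k * np.pi / 2) for k in range(4)]  # ideal quarter-period samples
print(amcw_distance(*m, f))                  # -> 2.5 (meters)
```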
C. Triangulation-Based Surface Metrology

The interference-based techniques are used primarily for extremely-high-accuracy and microscale measurement, and ToF-based techniques are good for low-accuracy and large-scale measurements. The triangulation-based methods to be discussed in this section land in between.

1. Fundamental Concepts

Triangulation-based SL techniques originated from the conventional stereo vision method that recovers 3D information by imitating the human perception system. For a given 3D point in object space P(x, y, z), pl(u, v) is the 2D image point perceived from the first view, and pr(u, v) is the 2D image point perceived from the other view. If the angles of perception
(θl, θr) are known, the two viewpoints (ol, or) are known, and the distance b between them is also known, then the object point in 3D space P(x, y, z) can be uniquely determined using simple triangulation. Figure 5 illustrates the special case when these three points lie on the x-z plane. To precisely reconstruct a given object point P, the triangulation-based approach hinges on finding the corresponding point pairs (pl, pr), precisely determining their locations, as well as the view angles (θl, θr).

Fig. 5. Basic principle of triangulation-based SL.

Typical triangulation-based SL systems use at least one camera and one structured pattern emitter [23]. The structured pattern emitter replaces one of the views of the stereo system described above. A 3D point can be reconstructed once the corresponding pairs are known and the system is calibrated. Section 2.D.2 discusses the details of SL system calibration.
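For the planar special case of Fig. 5, triangulation reduces to intersecting two rays; the following small sketch (ours, with the angles measured from the baseline as an assumed convention) recovers the (x, z) coordinates of P from the baseline b and the two view angles:

```python
import numpy as np

def triangulate_planar(b, theta_l, theta_r):
    """Intersect two view rays in the x-z plane (special case of Fig. 5).

    b:        baseline distance between the two viewpoints o_l and o_r
    theta_l:  angle of the left ray, measured from the baseline
    theta_r:  angle of the right ray, measured from the baseline
    Returns (x, z) of the object point P, with o_l at the origin.
    """
    # Rays: z = tan(theta_l) * x (from o_l) and z = tan(theta_r) * (b - x).
    tl, tr = np.tan(theta_l), np.tan(theta_r)
    x = b * tr / (tl + tr)   # intersection of the two lines
    z = tl * x
    return x, z

# Example: symmetric 60-degree views over a 100 mm baseline.
x, z = triangulate_planar(100.0, np.radians(60), np.radians(60))
print(x, z)  # -> 50.0, ~86.6 (mm): P sits midway, z = 50 * tan(60 deg)
```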
2. 2D Discrete Structured Light Patterns

The simplest possible system is one where the emitter sends out a single illumination dot at a time, the camera captures the corresponding image, and software algorithms analyze the captured image to extract the illuminated point. Once the camera model is precisely determined, for each point on the camera image, its location and angle can be determined. Additional calibration can be adopted to determine the relative location between the camera coordinate system and the emitter, as well as the angle of the emitter. Once the entire system is calibrated, 3D coordinates of the object point being illuminated can be reconstructed using triangulation. Though conceptually straightforward, the single-dot-based methods require scanning in both the x and y directions to measure a 3D surface. As a result, such a technique is not employed extensively, primarily because of its low measurement efficiency.

To speed up the measurement process, methods based on discrete dot patterns have been developed. The dot distribution is often random or pseudo-random. As a result, the coded pattern is often regarded as a statistical pattern [24]. To quickly and uniquely discern the coded information from a captured image and find the corresponding point pairs, the statistical pattern encodes unique features within a small 2D window such that any given point on the camera image (uc, vc) can be differentiated from any other area. Such coding methods have seen great commercial success in consumer electronics because of their simplicity, small footprint, and low cost (e.g., Microsoft Kinect V1, Intel RealSense, iPhone, Orbbec Astra). However, such a method has low spatial resolution because (1) the structured pattern is discrete in both the x and y directions; (2) it has difficulty in achieving high measurement accuracy because it is difficult to precisely locate the corresponding points from the captured image to the projected pattern; and (3) it could be sensitive to ambient light with the same spectral distribution.

3. 1D Discrete and 1D Continuous Structured Light Patterns

Another way to speed up the measurement process of the single-dot projection-based methods is to use a line pattern. This technique is employed extensively in short-range laser scanning devices. Since the structured pattern is continuous in one dimension, such a method can achieve high measurement resolution in one direction, and thus high measurement accuracy. As such, SL-based line scanning can be used for applications where the measurement accuracy requirement is high. Laser range scanning techniques see great success on manufacturing production lines because the parts to be measured move at a constant speed: the relative movement between the object and the laser line naturally allows whole-surface measurement without sweeping the laser line.

To further improve measurement efficiency, coded area patterns were designed. In such a method, all points are simultaneously illuminated with structured patterns without gaps. Depending on how the information is coded, the pattern could be continuous in both directions or in only one direction. Since an SL system requires uniquely determining correspondence in only one direction after applying the geometric constraints of the system, such as epipolar geometry [25,26], structured patterns can be unique in one direction (e.g., patterns with structured stripes). If each stripe is uniquely encoded, the stripes can be identified from the captured images. If the stripes are binary (black or white) in nature, such a method is often regarded as binary coding. For binary coding methods, a sequence of structured patterns is required to determine a unique stripe. For each pixel, the black and white sequence defines a unique code (often regarded as a codeword) that can be projected by the projector. The area structured patterns are often generated by a computer and projected by an image/video projector. The projector has to be calibrated for 3D reconstruction.

Assuming black represents 0 and white represents 1, the sequence of structured images with black and white stripes is captured and converted to 0's and 1's that decode the corresponding codeword for each stripe. The corresponding stripe information, along with the calibrated projector and camera information, allows the reconstruction of 3D information for the entire area at once. Various structured pattern codification strategies have been discussed thoroughly and evaluated by Salvi et al. [23].

The binary coding methods allow each point to be measured independently. However, unlike the triangulation-based SL methods discussed above, where a measurement can be realized with each structured image captured, the binary coding methods require multiple structured images to perform a single measurement. As a result, the binary coding method is sensitive to object motion at any measurement point. In contrast, the methods discussed above could measure a given point without being influenced by object motion.
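To make the codeword idea concrete, here is a small decoding sketch of ours (plain binary coding with the first pattern as the most significant bit; practical systems often prefer Gray codes for robustness at stripe boundaries):

```python
import numpy as np

def decode_binary_codeword(images, threshold=0.5):
    """Decode per-pixel stripe codewords from binary-coded pattern images.

    images: array (M, H, W) of captured intensities; pattern 0 carries the
            most significant bit of an M-bit codeword.
    Returns an (H, W) integer map of stripe indices in [0, 2**M - 1].
    """
    bits = images > threshold                  # classify each pixel black/white
    M = bits.shape[0]
    weights = 2 ** np.arange(M - 1, -1, -1)    # MSB first
    return np.tensordot(weights, bits.astype(np.int64), axes=1)

# Example: 3 patterns -> 8 stripes across an 8-pixel-wide image.
cols = np.arange(8)
patterns = np.stack([(cols >> k) & 1 for k in (2, 1, 0)])          # (3, 8)
images = np.repeat(patterns[:, None, :], 2, axis=1).astype(float)  # (3, 2, 8)
print(decode_binary_codeword(images))  # each column decodes to its index 0..7
```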
To achieve high-speed measurement and reduce motion artifacts, structured patterns must be switched rapidly and captured in a short period of time. For example, Rusinkiewicz and Levoy [27] developed a real-time 3D shape measurement system using the stripe boundary code [28] that requires only four binary patterns for codification. Such a system achieved a 15 Hz 3D data acquisition speed. The digital light processing (DLP) development kits allow binary images to be switched at kilohertz (kHz) rates or above. Thus, achieving high-speed measurement with this technique is not a major concern.

However, since each stripe is wider than one camera pixel and one projector pixel, the spatial resolution is limited, and thus the achievable measurement accuracy is not high. To circumvent this problem, 2D continuous structured patterns were proposed.

4. 2D Continuous Structured Light Patterns

Though adopted extensively, the spatial resolution of the methods based on 0D or 1D continuous structured patterns is limited not only by the camera, but also by the projected structured pattern. Furthermore, since these techniques use intensity information directly to establish correspondence pairs, they could be affected by surface texture. As a result, it is difficult to achieve high measurement accuracy.

Some of the approaches to generate 2D continuous structured patterns are by interference with coherent light, by a physical grating, or by the Moiré effect [29]. This section discusses primarily the triangulation-based method that uses digital video projectors for structured pattern generation; such a method is often regarded as DFP. Instead of intensity, the carrier phase information is often extracted to establish correspondence for 3D reconstruction.

In theory, a single fringe pattern is sufficient to recover the carrier phase using the Fourier transform [30]. Such a method for 3D surface measurement is often regarded as Fourier transform profilometry (FTP). Kemao [31,32] developed the windowed Fourier transform (WFT) method to increase the robustness [33] and broadly extend its applications [34]. The single-pattern FTP has the advantages of speed and simplicity, yet has the limitations of being sensitive to noise, surface texture, and geometric surface structures. By projecting another structured pattern [35,36], the modified FTP method substantially improves its capability and could be more robust to surface texture or geometry changes.
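A minimal single-pattern FTP sketch of ours (idealized: a clean carrier, a smooth phase, and a simple rectangular band-pass window; practical implementations must tune this filtering carefully):

```python
import numpy as np

def ftp_phase(fringe, f0):
    """Recover the wrapped carrier phase from one fringe image (basic FTP).

    fringe: (H, W) image I = a + b*cos(2*pi*f0*x + phi(x, y))
    f0:     carrier frequency in cycles per pixel along x
    """
    H, W = fringe.shape
    F = np.fft.fft(fringe, axis=1)            # 1D FFT along the fringe axis
    freqs = np.fft.fftfreq(W)
    mask = np.abs(freqs - f0) < f0 / 2        # keep only the +f0 lobe
    analytic = np.fft.ifft(F * mask, axis=1)  # ~ (b/2) * exp(i(2*pi*f0*x + phi))
    x = np.arange(W)
    return np.angle(analytic * np.exp(-2j * np.pi * f0 * x))  # remove carrier

# Example: 16-cycle carrier over a 256-pixel-wide image with a smooth phase bump.
H, W, f0 = 64, 256, 16 / 256
x, y = np.meshgrid(np.arange(W), np.arange(H))
phi = 1.5 * np.exp(-((x - 128) ** 2 + (y - 32) ** 2) / 800.0)
img = 0.5 + 0.4 * np.cos(2 * np.pi * f0 * x + phi)
print(np.abs(ftp_phase(img, f0) - phi).max())  # small vs. the 1.5 rad signal
```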
Because of their speed advantages, FTP methods have been demonstrated to be successful for capturing fast events [37–39]. Due to the limitations discussed above, FTP methods are often used to measure objects that are at least locally smooth and without strong texture variations. The reason is that FTP methods use local or even global pixel information to recover the phase of each pixel. This dependence on neighboring pixels introduces phase recovery problems; therefore, phase recovery that is truly pixel by pixel is desirable, which is why phase-shifting algorithms were developed.

Phase-shifting algorithms developed in interferometry [4] have been employed directly here for phase retrieval, except that the fringe patterns are computer generated. Similarly, the phase obtained also has 2π ambiguities, which can be unwrapped using spatial [5,6] or temporal phase unwrapping algorithms [7,8].

Due to the flexibility of a DFP system, other phase unwrapping approaches have been developed, including variations of temporal phase unwrapping algorithms [40–42], geometric constraint-based phase unwrapping, multiview geometry-based phase unwrapping, and hybrid methods, along with others [43]. Adding a secondary camera or projector to provide additional constraints can also be used to unwrap the phase pixel by pixel [44–46]. The inherent geometric constraints of an SL system can also be used to determine the fringe order for phase unwrapping [47]. The hybrid phase unwrapping methods we developed enhance temporal phase unwrapping (e.g., improve robustness and/or speed). These methods include the use of embedded markers [48–50], ternary coded patterns [51], phase coded patterns [41,52], and others. The spatial geometric constraint-based phase unwrapping methods do not require additional information acquisition temporally. However, they either require the surface to be smooth, have a limited depth range, or increase system complexity and cost. The newly developed temporal phase unwrapping algorithms can be more robust for arbitrary objects but require a longer time to acquire the necessary information.

5. Hybrid Structured Light Patterns

Square waves become pseudo-sinusoidal waves after applying a low-pass filter, and low-pass filtering can be physically realized by lens defocusing. Therefore, the binary defocusing techniques that have been developed in recent years "bridge" the continuous pattern and the discrete pattern for 3D surface measurement [53,54]. Due to hardware advancements, especially the DLP platforms, the binary defocusing method has enabled speed breakthroughs [55]. It has also overcome several limitations of standard DFP techniques that use 8-bit computer-generated patterns, such as relaxing the precise timing requirement between the projector and the camera [56] and eliminating the impact of the projector's nonlinear response [56]. It has even allowed the achievement of higher depth resolution [57].

Because binary patterns can be modulated freely, the recovered phase quality has been further improved by 1D modulated patterns [58,59], 2D modulated patterns [60–62], and 3D modulated patterns (2D + time) [63,64]. The 1D modulation techniques can improve phase quality for middle-range patterns, but fail to improve quality when fringe patterns are too wide or too narrow [65]. 2D area modulation techniques work well for wide fringe patterns but still offer limited improvement for narrow fringe patterns when the number of pixels is very small. 3D optimization can produce a higher-quality phase than 1D or 2D modulation, but at the cost of data acquisition time. Instead of digitally optimizing the binary patterns, a cylindrical lens has also been found effective for improving phase quality [66]. The drawback of such an approach is that it requires additional hardware components besides a standard projector.
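The binary defocusing principle is easy to simulate (our sketch, with a Gaussian blur standing in for lens defocus): once blurred, a 1-bit square wave keeps its fundamental fringe while the higher harmonics are strongly suppressed:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# 1-bit square-wave pattern with a 24-pixel period.
W, period = 240, 24
x = np.arange(W)
binary = ((x % period) < period // 2).astype(float)

# Lens defocus modeled as a Gaussian blur: the square wave's odd
# harmonics (3f, 5f, ...) are attenuated much faster than the
# fundamental, leaving a pseudo-sinusoidal fringe.
fringe = gaussian_filter1d(binary, sigma=6.0, mode="wrap")

# Compare harmonic content before and after defocusing.
for name, sig in [("binary", binary), ("defocused", fringe)]:
    spec = np.abs(np.fft.rfft(sig - sig.mean()))
    f = W // period                      # fundamental frequency bin
    print(name, "3rd/1st harmonic ratio:", spec[3 * f] / spec[f])
```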
D. 3D Surface Metrology System Calibration

System calibration plays a key role in any metrology system, and the system measurement accuracy is largely dependent
on the calibration accuracy. This section discusses the calibra- calibration [70]. Once the intrinsic and extrinsic parameters
tion approaches used in each category of surface measurement have been calibrated, the 3D coordinates of a point are obtained.
techniques. Later, Li et al. [72] extended such a method for out-of-focus
projector calibration, Bell et al. [73] developed a method to
1. Interferometry System Calibration
calibrate the out-of-focus camera, and An et al. [74] developed a
method for large-range SL system calibration.
ISO 5436 [67,68] specified the measurement standards of The above calibration procedure does not take into account
surface measurement instruments. For interferometric-based lens distortions, although it could be sufficient in applications
surface measurement instruments, normally vertical and lateral where high accuracy is not required. However, in reality, the
calibrations need to be performed before a measurement. The camera and projector lenses have distortions, mostly radial
calibrations are normally performed by measuring calibration and tangential distortions. These distortions make the imag-
artifacts according to ISO 5436. The details of any concerns ing points deviate from their ideal locations, and introduce
regarding the calibration and verification including materials, systematic errors in the 3D reconstruction [75]. For highly
calibration artifacts, filtering and data processing, and software accurate 3D reconstruction, these distortions need to be cor-
measurement standards can be found in Ref. [69]. rected [76]. Before triangulation, the lens distortion correction
(also called undistortion) is carried out. There also have been
2. Triangulation System Calibration many improvements and innovations to the general calibration
methods, for instance, Yin et al. [77] used a bundle adjustment
The triangulation-based SL system can be calibrated using the strategy, and Huang et al. [78] employed least square algorithms
reference plane approach that was developed in interferometry for calibration of parameter estimation. An et al. [79] developed
systems. Basically, this approach measures an “ideal” planar
a method for large-scale system calibration, and Vargas et al.
surface as the reference plane that requires to be parallel to the
[80] developed a hybrid method that further improves the
projector-camera baseline [30], and other artifacts for spatial
calibration accuracy.
and depth calibration. The measured surface is the difference
It is worth noting that the “standard” pinhole model may not
between the actual measurement and the reference plane. This
work well for high-accuracy 3D surface measurement because
approach is often seen in the literature, where an equation that
it cannot precisely model lens artifacts, especially for affordable
relates object depth to the phase distribution is calibrated based
lenses. Often a more complex model such as ray tracing could
on the system geometry. This calibration approach works well
be necessary. Instead of representing the camera imaging system
if both the projector and the camera use telecentric lenses.
as a smooth function, the ray-tracing method considers each
However, the macroscale SL system typically does not use
pixel ray independently. Thus, the local distortions of the lens
telecentric lenses.
system can be considered. The challenge though is that there is
For the SL system without a telecentric lens, the camera
no mature method to be easily adopted for non-experts.
imaging system is often mathematically modeled as a pinhole
system [70]. The pinhole model represents two transforma-
tions: the transformation from the world coordinate system 3. Time-of-Flight System Calibration
(x w , y w , z w ) to the camera lens coordinate system (x c , y c , z c ) A ToF camera requires both a standard camera calibration pro-
through translation and rotation (i.e., extrinsic parameters); cedure [70] and a distance calibration procedure [81,82]. Since
and the transformation from to the camera lens coordinate for each point the (u, v) coordinates on the camera are known,
system (x c , y c , z c ) to the image coordinate system (u c , v c ) the distance d from the sensor to the object is also known, and
through projection (i.e., intrinsic parameters). Under an ideal (x , y , z) coordinates in the Cartesian space can be solved with
situation without considering lens distortion, the mathematical the calibrated camera parameters.
transformations can be described as matrix operations: The pinhole model and intrinsic calibration parameters are
[u c , v c , 1]T = A · [R, t] · [x w , y w , z w , 1]T , (6) needed to compute Cartesian 3D points from depth points
[26]. The standard calibration follows the same pinhole camera
where T denotes a matrix transpose, intrinsic parameters are model that we described earlier. However, the typical low res-
modeled as a 3 × 3 matrix A representing the focal length olution of the amplitude image makes it difficult to detect the
and the principal point of the imaging system, and extrinsic board reliably. Several heuristic methods have been proposed
parameters are modeled as a 3 × 3 rotation matrix R and a 3 × 1 to improve feature detection and provide a more robust and
translation vector t. Camera calibration essentially estimates reliable camera calibration [21].
the intrinsic and extrinsic parameters. One of the most popular Although the distance value in ToF seems straightforward
camera calibration methods requires only a flat calibration plane to calculate, several factors introduce errors in the estimated
with some known feature points (e.g., checkerboard, circle distance d . There are systematic errors including distance-
patterns) and processes those images with existing open-source distortion errors caused by non-ideal sinusoidal waves in the
software packages (e.g., OpenCV camera calibration toolbox). modulation process or temperature-related drift in the overall
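The following sketch illustrates Eq. (6) with NumPy (the intrinsic/extrinsic values are made up for illustration; note that the right-hand side is homogeneous, so it must be normalized by its third component, which Eq. (6) leaves implicit):

```python
import numpy as np

# Intrinsic matrix A: focal lengths (fx, fy) and principal point (cx, cy).
A = np.array([[1400.0, 0.0, 640.0],
              [0.0, 1400.0, 360.0],
              [0.0, 0.0, 1.0]])

# Extrinsic parameters: identity rotation, camera shifted 0.5 m along z.
R = np.eye(3)
t = np.array([[0.0], [0.0], [0.5]])

def project(A, R, t, Xw):
    """Project a world point via Eq. (6): [u, v, 1]^T ~ A [R, t] [Xw, 1]^T."""
    P = A @ np.hstack([R, t])              # 3x4 projection matrix
    uvw = P @ np.append(Xw, 1.0)           # homogeneous image coordinates
    return uvw[:2] / uvw[2]                # normalize by the third component

print(project(A, R, t, np.array([0.1, 0.05, 1.5])))  # -> [710., 395.]
```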
The projector can be regarded as an inverse camera, and thus the same mathematical model can be used to describe the projector. Zhang and Huang [71] developed a method that enables the projector to "capture" images like a camera. As a result, the camera and the projector can be calibrated following the standard stereo calibration procedure [70]. Once the intrinsic and extrinsic parameters have been calibrated, the 3D coordinates of a point can be obtained. Later, Li et al. [72] extended such a method for out-of-focus projector calibration, Bell et al. [73] developed a method to calibrate an out-of-focus camera, and An et al. [74] developed a method for large-range SL system calibration.

The above calibration procedure does not take lens distortions into account, although it can be sufficient in applications where high accuracy is not required. In reality, however, camera and projector lenses have distortions, mostly radial and tangential. These distortions make the imaging points deviate from their ideal locations and introduce systematic errors in the 3D reconstruction [75]. For highly accurate 3D reconstruction, these distortions need to be corrected [76]. Before triangulation, the lens distortion correction (also called undistortion) is carried out. There have also been many improvements and innovations to the general calibration methods; for instance, Yin et al. [77] used a bundle adjustment strategy, and Huang et al. [78] employed least-squares algorithms for calibration parameter estimation. An et al. [79] developed a method for large-scale system calibration, and Vargas et al. [80] developed a hybrid method that further improves the calibration accuracy.

It is worth noting that the "standard" pinhole model may not work well for high-accuracy 3D surface measurement because it cannot precisely model lens artifacts, especially for affordable lenses. Often, a more complex model such as ray tracing could be necessary. Instead of representing the camera imaging system as a smooth function, the ray-tracing method considers each pixel ray independently. Thus, the local distortions of the lens system can be taken into account. The challenge, though, is that there is no mature method that can be easily adopted by non-experts.

3. Time-of-Flight System Calibration

A ToF camera requires both a standard camera calibration procedure [70] and a distance calibration procedure [81,82]. Since, for each point, the (u, v) coordinates on the camera are known and the distance d from the sensor to the object is also known, the (x, y, z) coordinates in Cartesian space can be solved with the calibrated camera parameters.

The pinhole model and intrinsic calibration parameters are needed to compute Cartesian 3D points from depth points [26]. The standard calibration follows the same pinhole camera model that we described earlier. However, the typically low resolution of the amplitude image makes it difficult to detect the calibration board reliably. Several heuristic methods have been proposed to improve feature detection and provide a more robust and reliable camera calibration [21].

Although the distance value in ToF seems straightforward to calculate, several factors introduce errors in the estimated distance d. There are systematic errors, including distance-distortion errors caused by non-ideal sinusoidal waves in the modulation process, and temperature-related drift in the overall depth values. These errors can be compensated for by calibration. As such, thorough distance calibration procedures are required. The typical systematic error compensation methods include using look-up tables (LUTs), B-splines, or polynomials [22]. Note that, because the ToF camera measures the ToF along
the light path, the error calibration should be done with respect to the radial distance, not in Cartesian space [81].
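As a sketch of the polynomial variant (ours; the numbers and the cubic form are purely illustrative, not taken from Ref. [22]), a per-device correction can be fitted once against reference distances and then applied to raw radial distances:

```python
import numpy as np

# Calibration data: measured radial distances vs. ground-truth references
# (illustrative values; in practice they come from targets placed at known
# distances across the working range).
d_measured = np.array([0.52, 1.03, 1.55, 2.08, 2.61, 3.15])
d_reference = np.array([0.50, 1.00, 1.50, 2.00, 2.50, 3.00])

# Fit a cubic polynomial mapping measured -> true radial distance.
coeffs = np.polyfit(d_measured, d_reference, deg=3)

def correct_distance(d_raw):
    """Apply the fitted systematic-error correction to raw ToF distances."""
    return np.polyval(coeffs, d_raw)

print(correct_distance(1.55))  # -> ~1.50 (meters), along the light path
```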
There are also unpredictable non-systematic errors. For instance, SNR distortion appears in scenes that are not uniformly illuminated: poorly illuminated areas tend to have higher noise than better-illuminated ones. Another source of error is multiple path interference [83,84], in which the sensor captures multiple light reflections, often due to surface edges or object concavities. This is a critical problem in ToF, and there have been many attempts at dealing with it via special acquisition conditions and iterative schemes that split the acquired signal into a direct and a global component [83].

3. RECENT DEVELOPMENTS

Over the past decades, large strides have been made in the field of 3D surface measurement using active optical methods. This section discusses some of the recent advancements.

A. Microscale 3D Surface Profilometry

Microscale optical interferometry is used widely for measuring microscale structures accurately. However, it is extremely sensitive to environmental disturbances such as air turbulence, temperature drift, and mechanical vibration, due to the different optical paths between the measurement arm and the reference arm. Here, we review recent developments (experimental and theoretical/simulation) that improve optical interferometry, mainly through understanding the mechanisms of noise reduction, to reduce measurement uncertainty and to allow its use outside the lab. One approach is to acquire the measurement data quickly by using a high-speed camera and a fast phase-shifting method, or even by sampling all the measurement data simultaneously [85–88]. Another approach is to arrange the two interference arms in a completely common-path configuration, such as scatterplate interferometers, which are also insensitive to noise [89–91]. The above two noise reduction methods are usually applied to laser-based PSI, which is limited to the measurement of relatively smooth surfaces due to the 2π phase ambiguity of PSI when a multi-wavelength technique is not employed.

CSI can overcome the 2π phase ambiguity problem and enable the absolute measurement of the OPD by determining the peak position from the interferogram [11,12,92]. However, the need to perform mechanical scanning of a heavy probe head or of the specimen stage limits the measurement speed, which restricts its applications to the optical laboratory. Many efforts have been made to extend the applications of CSI to in situ measurement and to complex measurement surfaces and situations [93–95].

Wavelength scanning interferometry (WSI) is based on the phase shifts caused by wavelength variations, thereby avoiding the mechanical scanning used in CSI [96–98]. The absolute OPD can still be measured without any 2π phase ambiguity. By adding an active servo control system that serves as a phase-compensating mechanism to eliminate the effects of environmental noise, the application of WSI can be extended to in situ/in-process measurement [99].

Multi-wavelength interferometry (MWI) extends the measurement range of single-wavelength PSI from a few hundred nanometers to tens of micrometers by utilizing the synthetic wavelength of MWI [100,101]. Single-shot color PSI has been explored to extend MWI to in situ surface inspection [102].

Focus detection [103–105] and confocal microscopy [106–108] are two techniques used widely for microscale surface measurement. Focus detection microscopy can be used for very rough surfaces, such as rusted metal surfaces, and for multilayer transparent film thickness measurement, but it is difficult to apply to shiny surface measurement because the reflections make the focus detection difficult. Confocal microscopy can be applied to steep surface slope measurement far beyond the acceptance angle of the objective lens.

A longstanding competitor in micro-scale 3D surface measurement is microscopic DFP profilometry. This technique is more versatile and less sensitive to environmental disturbances, and it can achieve high speeds that are desirable for in situ or online measurements. Such systems either modify a standard stereo-microscopic system by replacing one stereo view with a projector [109–113], use small-FOV, non-telecentric lenses with a long working distance (LWD) [114–118], or replace pinhole lenses with telecentric lenses in a standard triangulation-based SL system [119–121]. The fundamental difference between the interferometry-based surface measurement techniques and this technique is that the former can perform on-axis measurement, while DFP requires triangulation. The triangulation requirement limits its capability, for example, to measure deep holes or sharp edges, due to shadow- or occlusion-associated problems.

B. Time-of-Flight Surface Measurement

ToF cameras have been around for over two decades. Recent innovations have substantially improved the performance of ToF cameras and pushed their limits.

The measurement error caused by multiple propagation paths (MPI) from the light source to the receiving pixel has been one of the most challenging issues for any ToF system. Early approaches made severe assumptions on scene characteristics or relied on placing tags in the scene, which is not practical for many applications. Recently, Whyte et al. [122] attempted to model the multiple paths as a direct return and a global return. Bhandari et al. [83] proposed to increase the sampling of the received signal with two or up to four modulation frequencies for signal separation. Gupta et al. [123] developed a concept called phasor imaging, which showed that the global effects vanish for frequencies higher than a certain scene-dependent threshold. This observation allows for recovering depth in the presence of MPI. This approach has been extended, for example, to obtain depth in the presence of fog and other scattering media [124].

A related and important problem in manufacturing is measuring the shape of transparent objects and their backgrounds. The sparse deconvolution approach proposed by Kadambi et al. [125] can not only address the MPI problem, but also detect the background and the transparent object by performing two measurements and processing the inconsistent points between the two observations [126,127]. The issues associated with the low spatial resolution of ToF cameras have been addressed, for instance, by using an additional color camera and performing
upscaling based on the high-resolution color image. General data-fusion methods have been introduced to complement the strengths and limitations of different technologies, such as ToF and stereo vision [128]. Furthermore, instead of fusing multiple sensors, a new paradigm of ToF modulation has shown promise to improve ToF technology. Recent efforts at producing spatially modulated ToF light in a single device can reduce the MPI problem [129] and resolve the 2π ambiguity without using multiple frequencies [20].

Driven by consumer electronics needs, ToF technology is rapidly evolving. New modulation techniques that do not require fast and expensive electronics are driving costs down while maintaining good performance. Due to the problems associated with how depth is estimated, we believe that new depth correction methods are still required to extend ToF to high-accuracy applications.

C. Computational Imaging with Structured Light Techniques

Computational imaging (CI) has been around since the early 1990s, but only recently has it been merged with other forms of imaging or with techniques such as machine learning (ML) [130,131]. CI systems typically start from an imperfect physical measurement (often of lower dimensionality) and prior knowledge about the scene or object being imaged, and deliver an estimate of the presented scene or object. In conventional imaging, the optics always maps the luminance at every point in the object space to a point in the image space. In contrast, in CI there is no one-to-one mapping; instead, an algorithm constructs the output image or spatial map, typically from a few structured measurements. The premise is that the appearance of most objects in a scene has spatial correlations that, if discovered, could reduce the uncertainty in the recovery of the object's appearance.

Illuminating the scene with structured illumination provides the means to probe it in a precise and controlled way, and thus the CI problem is better posed mathematically. Probably one of the most remarkable achievements of CI was the single-pixel camera [132], in which a full-resolution image with a larger number of points was recovered from a small number of spatially correlated measurements. Eventually, using similar principles, a 3D ToF single-pixel camera was proposed [133]. Another interesting development in CI was the light field camera [134,135], which allows for image refocusing. Similar to digital holography (DH), the light field camera captures many perspectives of a scene, typically using a lenslet array on top of a regular CCD sensor. The arrangement provides many low-resolution images of the scene that have sufficient redundancy to enable a computational approach to synthesize a high-resolution image focused at almost any depth. Again, this technique was boosted by the use of SL. Cai et al. [136,137] proposed SL field 3D surface measurement aimed at overcoming the limitations of conventional passive light field imaging. The use of phase encoding instead of image structure provides a more reliable mechanism for retrieving accurate depth almost independently for the entire scene [138].

D. Artificial Intelligence for Structured Light Techniques

The rapid development of ML methods in the past two decades and the recent availability of sufficient computational resources have enabled a new approach in the field: data-driven system design [139,140]. The ultimate goal is to enhance the quality of measurement procedures beyond what traditional techniques can deliver.

Although all measurement systems rely on well-understood physical principles, often the technological implementation of a reliable and stable system is too challenging because the operating conditions cannot be controlled in practical applications. Operating conditions may include ambient lighting, the type of objects or materials, instrument interference, or sensor temperature, among others. The traditional approaches of cascaded processing stages, such as modulated exposure, denoising, phase unwrapping, and 3D coordinate mapping, provide accurate, deterministic outputs if the operating conditions are similar to the calibrated conditions. However, in general, the operating conditions often change, and accounting for all possible conditions may lead to extremely complex calibration procedures that are too challenging to handle with a single algorithm. In contrast, artificial intelligence (AI) techniques realize an intelligent data treatment that can often capture the behavior of a system without necessarily requiring a priori knowledge. The desired solution to a problem is "learned" through examples instead of being defined by means of algorithmic statements [139].

The idea of AI is straightforward and can be described broadly in two stages. The first stage consists of gathering enough experimental input–output data under different experimental conditions (the input being, for example, raw sensor data, and the output being 3D surface coordinates [141]). In the second stage, an ML architecture [typically a convolutional neural network (CNN)] is trained to obtain a mapping from the input domain to the output domain. The training stage attempts to reduce a global objective function, which could be declared in terms of 3D reconstruction error, phase error, noise reduction, or other quality metrics. While there is still some skepticism about how well these methods can generalize and produce reliable outputs on input data they have not previously "seen" [142], there are many successes to date that give confidence in their use.
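As a minimal illustration of this second stage (a toy sketch of ours; the architecture, shapes, and loss are arbitrary and far smaller than anything in the cited works), a fully convolutional PyTorch network that maps a fringe image to a per-pixel output map such as wrapped phase or depth:

```python
import torch
import torch.nn as nn

class FringeToMap(nn.Module):
    """Toy fully convolutional mapping: fringe image -> per-pixel map."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),  # e.g., phase or depth
        )

    def forward(self, x):
        return self.net(x)

model = FringeToMap()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # stands in for a phase- or depth-error objective

# One training step on dummy data (batch of 4 single-channel 64x64 images).
fringe = torch.randn(4, 1, 64, 64)
target = torch.randn(4, 1, 64, 64)
optimizer.zero_grad()
loss = loss_fn(model(fringe), target)
loss.backward()
optimizer.step()
print(float(loss))
```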
Among the many successes, AI has been demonstrated in 3D surface measurement for robust phase unwrapping [143,144], high-speed profilometry [145], residual lens distortion correction [146], single-shot profilometry [147,148], robust ToF 3D imaging [141,149], sensor fusion [128], and others. Furthermore, AI techniques shine through exceptional pattern recognition in the most challenging conditions, including the identification of projected patterns [150,151] and 3D recognition [152,153].

Despite these successes, it is still challenging to use AI techniques as the sole processing method for a 3D surface measurement system. Jiao et al. [154] demonstrated that conventional linear-regression-based methods can outperform deep learning methods, especially when the number of training samples is low. Wang et al. [155] proposed a new middle-ground approach in which a physical model was incorporated in a deep
neural network for phase imaging to avoid the training with tens 1. Microscopic Systems
of thousands of labeled data. This approach is already taking There is an inherent trade-off between magnification and DOF
place, for instance, using deep neural networks to correct for when using interferometry-based optical microscopes for 3D
residual lens distortions [146] that the conventional pinhole surface metrology. Although CSI allows a unique identification
method does not account for. We believe that this novel hybrid of the zeroth order of the fringe pattern regardless of magnifica-
approach may provide the best flexibility and performance tion, the need for mechanical scanning renders the technique
in the design and operation of modern systems for practical very limited in terms of the measurement range. In the past two
applications. decades, DH has emerged as one of the most promising ways
to overcome several of the limitations of conventional optical
E. Automation 3D imaging systems. The main advantage of DH, with respect
to classical holography, is the direct access to phase maps by the
The quality of acquired data depends largely on how it was
numerical solution to the diffraction problem. As a result, it
acquired. Surprisingly, even in static scenes, optimal exposure
offers focus flexibility and 3D imaging properties, among others
is required to capture objects and scenes that have not been pre- [174,175].
viously characterized. Currently, for most high-end 3D surface DH allows tackling the limited DOF for 3D surface recon-
measurement instruments, optimal acquisition still requires the struction in the following way. Starting from a single digital
intervention of skillful/trained personnel. hologram, through the reconstruction of numerical images at
Automatically adjusting the camera exposure based on the different image planes (i.e., at different depths z), it is possible
scene within the FOV has been used extensively in 2D imaging. to obtain an extended focus image with all surface details, with-
Yet, the level of automation for advanced 3D surface measure- out changing the physical distance between the object and the
ment techniques is far lower than its 2D counterparts because microscope [176]. One of the advantages of DH is the ability
of the involvement of a projection device. Ekstrand and Zhang to recover the 3D shape of a surface by changing a parameter
[156] developed a single optimal exposure time determina- between recorded states in a known way. This procedure can be
tion method by analyzing a sequence of images with different done by changing illumination direction, refractive index, or
exposure times. Though successful, such a method is very slow. wavelength [177].
Similarly, various 3D high dynamic range (HDR) techniques DH is a versatile metrological tool for quantitative analysis
were also developed [157–170]. To determine the desired opti- and inspection of a variety of materials, ranging from surfaces
mal exposure(s) rapidly without human intervention, Zhang of industrial interest to biological samples. However, current
[171] developed a method that can determine the single global DH still suffers from certain limitations such as the trade-off
optimal exposure time by capturing image(s) with a single expo- between the FOV and image resolution.
sure for an arbitrary object, and also the HDR exposure times by
capturing image(s) with the optimal exposure time.
The state-of-the-art optical surface measurement techniques are designed to work within a fixed focal depth range, and thus adaptively changing the focal plane of the system remains difficult. The recently developed electrically tunable lens (ETL) can control the focal plane of the imaging system in a “known” manner. Thus, it offers the promise of achieving autofocus for 3D shape measurement systems. Hu et al. [172] developed a single-camera, single-projector system with an ETL attached to the camera; the lens was mathematically modeled as a continuous function of the electric input. Zhong et al. [173] developed a system with a single camera with an ETL and two projectors with standard pinhole lenses. The camera always captures in-focus fringe images to establish the correspondence between points of the two projectors, and triangulation is realized by the two calibrated projectors without using the camera.
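The “known manner” can be illustrated by calibrating the best-focus plane against the lens drive signal and interpolating it continuously; the sketch below uses hypothetical calibration pairs and a simple polynomial, which may differ from the model actually used in [172].

```python
import numpy as np

# Hypothetical calibration data: ETL drive current (mA) vs. measured
# best-focus distance (mm). In practice these pairs come from focusing
# on a target placed at known distances.
current_mA = np.array([50.0, 100.0, 150.0, 200.0, 250.0])
focus_mm = np.array([350.0, 520.0, 740.0, 1010.0, 1330.0])

# Fit a low-order polynomial z(i) so a controller can place the focal
# plane at any depth returned by the 3D measurement itself.
z_of_i = np.poly1d(np.polyfit(current_mA, focus_mm, deg=2))
print(z_of_i(175.0))  # predicted focal plane at a 175 mA drive current
```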
F. Towards Large-Range Measurement

The effective depth measurement range of most systems is limited by several factors, including the DOF, the power of the light source, the geometrical arrangement of components, the speed of the electronics, and the type of projected pattern(s), among others. Moreover, most standard calibration techniques are best suited for a limited depth range. As a result, the accuracy quickly degrades when measurements go outside the calibrated range. We discuss here the recent developments attempting to overcome such limitations.

1. Interferometric Techniques

The limited measurement range is one of the main problems when using interferometry-based optical microscopes for 3D surface metrology. Although CSI allows a unique identification of the zeroth order of the fringe pattern regardless of magnification, the need for mechanical scanning renders the technique very limited in terms of measurement range. In the past two decades, DH has emerged as one of the most promising ways to overcome several of the limitations of conventional optical 3D imaging systems. The main advantage of DH, with respect to classical holography, is the direct access to phase maps through the numerical solution of the diffraction problem. As a result, it offers focus flexibility and 3D imaging properties, among others [174,175].

DH allows tackling the limited DOF for 3D surface reconstruction in the following way. Starting from a single digital hologram, through the reconstruction of numerical images at different image planes (i.e., at different depths z), it is possible to obtain an extended-focus image with all surface details, without changing the physical distance between the object and the microscope [176]. One of the advantages of DH is the ability to recover the 3D shape of a surface by changing a parameter between recorded states in a known way. This procedure can be done by changing the illumination direction, the refractive index, or the wavelength [177].
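The numerical refocusing behind this capability can be sketched with the angular spectrum method, one common way of solving the diffraction problem numerically; the snippet assumes a monochromatic complex field recovered from the hologram and a uniform pixel pitch.

```python
import numpy as np

def angular_spectrum(u0, wavelength, pitch, z):
    """Propagate the complex field u0 to a plane at distance z.

    A minimal sketch of the angular spectrum method used for numerical
    refocusing in DH; units of wavelength, pitch, and z must match.
    """
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Sweeping z and keeping, per pixel, the sharpest reconstruction yields
# an extended-focus image without ever moving the sample.
```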
DH is a versatile metrological tool for quantitative analysis and inspection of a variety of materials, ranging from surfaces of industrial interest to biological samples. However, current DH still suffers from certain limitations, such as the trade-off between the FOV and image resolution.

2. Triangulation-Based Techniques

One of the fundamental principles underlying the calibration of a triangulation-based system is the pinhole camera model. However, this model has a major limitation: an optical center with a fixed position does not generally exist, even for an ideal optical system; in general, the optical center can be defined unambiguously only for an ideal lens at one specific object distance [178]. If we add to this limitation the geometrical lens distortions and optical aberrations, it follows that to extend the measurement range we need active optical devices, more flexible calibration models, or a combination of both.
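For reference, the pinhole model referred to above maps a world point $(X, Y, Z)$ to pixel coordinates $(u, v)$ through

\[
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \mathbf{K}\,[\mathbf{R} \mid \mathbf{t}]
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix},
\qquad
\mathbf{K} =
\begin{bmatrix}
f_x & \gamma & u_0 \\
0 & f_y & v_0 \\
0 & 0 & 1
\end{bmatrix},
\]

where $s$ is a scale factor, $(f_x, f_y)$ are the focal lengths in pixels, $(u_0, v_0)$ is the principal point, $\gamma$ is the skew, and $[\mathbf{R} \mid \mathbf{t}]$ is the pose. The single fixed optical center implied by this mapping is precisely the quantity that, as noted above, is not well defined for a real lens over an extended depth range.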
The inability, or the high cost, to manufacture large-scale calibration artifacts poses one of the calibration problems for a triangulation-based system with an extended depth range. That is, if a system is calibrated at a near distance, measurements of an object at a far distance exhibit large errors. To tackle this issue, An et al. [79] proposed a two-step approach in which, first, the intrinsic parameters of the camera and projector are calibrated at a near distance while focused at a far distance; second, the extrinsic parameters are calibrated with the aid of an additional 3D sensor. This calibration strategy could be promising for tackling this challenging problem.

The type of projected pattern also plays a role in extended-range measurement. Salvi et al. [23] proposed to extend the DOF of the projector by using fringe patterns with more than one frequency. This approach reduces the projector's defocusing effect, but precisely selecting the optimal frequency is often not feasible for a digital video projector.
Zhang et al. [179] proposed a method that continuously updates speckle patterns according to the recovered depth map to extend the DOF. Unfortunately, the achievable resolution and accuracy for such a system based on speckle pattern projection are typically not high. Ultimately, due to the limited DOF of the projector optics, it is desirable to use patterns that are as invariant as possible to defocus. The use of phase shifting with defocused binary patterns has paved the way for extended-range triangulation-based systems [54,72]. Moreover, Ekstrand and Zhang [180] showed that going from perfectly defocused binary patterns to nearly focused ones has a negligible effect if sufficient phase shifts are used.

Often, triangulation-based systems are not considered a viable option for carrying out 3D surface measurements at a far distance due to the low power of conventional projectors. However, this limitation has been largely overcome by designing active stereo systems with mechanical projectors that use a rotating wheel coupled to projection optics and a powerful light source [181,182]. This setup enables the use of practically any light source and, due to the high radiant flux, it can measure objects within a large range with high SNR.

Recent advances in ETLs have opened a new avenue for research in developing large-DOF 3D measurement systems [172,183]. The idea is quite simple, but it did not come to fruition until ETLs became much more reliable in recent years [184]. The camera has an ETL that is controlled by and synchronized with the projector to capture consistently in-focus images of the projected patterns in the scene using different focal length settings. Through a special phase unwrapping method with geometric constraints, Hu et al. [183] obtained a high-quality measurement depth range on the order of 1000 mm (400–1400 mm) with an error of 0.05%. We expect this approach to continue to facilitate the design of robust 3D imaging systems.
3. Time of Flight

The simplest way to avoid the 2π ambiguity problem in a continuous wave modulation (CWM) ToF system is to reduce the modulation frequency f such that the unambiguous depth range is increased. However, increasing the depth range by lowering the modulation frequency decreases the depth resolution. One of the most used methods for extending the unambiguous range while preserving a high depth resolution is the multifrequency approach. However, it requires the acquisition of multiple frequencies, which could be prone to motion artifacts and/or increase the overall complexity of the system. Recently, spatiotemporal ToF promises to increase the limited range with a reduced number of observations, and simultaneously to address the MPI problem [20,129].
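These trade-offs follow directly from the standard CWM relations: for a measured phase $\varphi \in [0, 2\pi)$,

\[
d = \frac{c\,\varphi}{4\pi f}, \qquad d_{\max} = \frac{c}{2f},
\]

so $f = 100\,\mathrm{MHz}$ gives $d_{\max} = 1.5\,\mathrm{m}$. A two-frequency scheme instead unwraps against the beat of the two modulation frequencies, $d_{\max} = c/[2(f_1 - f_2)]$, e.g., 15 m for $f_1 = 100\,\mathrm{MHz}$ and $f_2 = 90\,\mathrm{MHz}$, while retaining the fine depth resolution of the higher frequency.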
4. CHALLENGES

Ever-growing modern smart and flexible manufacturing drives the need for better sensing and metrology tools that can be quickly reconfigurable and affordable for quality assurance. High-speed and high-accuracy 3D optical metrology based on structured optical probing has proven extremely valuable for the manufacturing industry because it does not require surface touch, yet achieves high measurement accuracy and speed. Unfortunately, the state-of-the-art 3D optical metrology methods use primarily a one-size-fits-all approach or often require prohibitively expensive customizations. As such, challenges remain to make advanced 3D shape measurement techniques accessible and available to solve challenging problems in science, engineering, industry, and our daily lives. This section lists some of the challenging problems worth exploring to advance this field further.

A. Low Cost

Although the use of off-the-shelf components has brought down the cost of most 3D imaging devices, when high performance (e.g., accuracy, speed) is required and design customization is necessary, the cost often goes way beyond the affordable range. One possible approach is modular design and manufacturing: standard components can be mass produced at drastically reduced costs and easily “assembled” into an integrated 3D measurement system. However, mass production requires a large quantity of the same part being manufactured. To achieve this, the metrology community has to work closely with business sectors to develop a large enough market, which is naturally challenging because technological experts often do not speak the same language as business professionals.

B. Miniaturization

System miniaturization is one of the most important yet challenging tasks for any sensing system. It is encouraging to see miniaturization happening every day: the triangulation technique using a statistical pattern and ToF have been embedded into mobile devices. However, neither the resolution nor the accuracy achieved on these devices is comparable to that achieved by advanced 3D optical metrology methods. Efforts on miniaturizing accurate 3D surface measurement methods are highly needed.

C. Repeatability/Reproducibility

Measurement uncertainty and traceability are of growing concern in the development of surface measurement devices. Generally, if an instrument is well calibrated before a measurement is carried out and the related good practice guide [185] for a particular instrument is followed, the measurement results are considered repeatable and reproducible within the given measurement uncertainty of the instrument. However, this area has not been extensively explored, or even studied, in the optical metrology field. Further studies of the repeatability and reproducibility of 3D optical imaging instruments are needed, as the operating environment of such an instrument is usually more hazardous, and the operation and data processing are more complex, compared to interferometric surface measurement instruments.

D. Complex and Difficult to Measure Surfaces
Difficult to measure surfaces are always a challenge for optical 3D surface measurement methods. Although there are ways to circumvent these difficulties, they typically require the use of additional equipment such as polarizers or special arrangements [186,187]. The implementation of these additional procedures or equipment reduces flexibility for measuring other surfaces. As discussed earlier, recent developments have opted for specialized codification approaches to avoid the use of additional hardware [188]. The most prominent technique is the use of adaptive pattern projection [165], which works sufficiently well for the intended surfaces, albeit leading to slow acquisition, and it is not a general-purpose codification strategy.

Many efforts have been made to handle shiny, transparent, high-dynamic-range, or discontinuous parts [158–162,164–171,187–191]. For instance, SL in the UV or IR range has been used for 3D surface reconstruction of transparent objects [192,193]; however, the measurement errors are typically much higher than the typical errors obtained in the visible range.

The most troublesome aspect is that these techniques have been developed in research labs and have not been tested in industrial settings. Ultimately, they are optical methods that will face challenges to measure optically unfriendly surfaces. Translating these developments to practice requires extensive validation to ensure optimal performance.
E. Metrology-in-the-Loop System

The need for metrology-in-the-loop systems becomes critical for an industry such as additive manufacturing because each layer should be inspected before moving to the next layer. It is even better if the machine can make adjustments on the next layer based on the current layer's information. This goal may be achieved if 3D surface metrology is embedded into the manufacturing process such that in situ measurement, in situ data analytics, and in situ decision making can occur while the part is being made. In situ measurement requires robustness and ruggedness of the sensors, with a deeper understanding of the impact of noise and vibration on system performance. Inferring the state of the part at each given manufacturing stage requires robust and rapid algorithms for data analytics. In situ control requires robust and efficient algorithms to make the machine adapt appropriately without slowing down the manufacturing process. One of the most challenging issues is that the software/hardware latency could undesirably slow down the production process.

A metrology-in-the-loop system could be even more valuable to cyber manufacturing. The current practice is that each machine makes its parts independently and relies heavily on ensured part quality with desired parameters to make the entire system work. Though successful, such practice can be improved if metrology is brought into the loop. For example, if one part is made without meeting design specifications, can the following parts be adjusted such that the part can still be used? This ambitious goal requires data-driven design and manufacturing to be in the loop as well. Consequently, enormous challenges would emerge because the entire manufacturing process has to be drastically revolutionized.

F. Interface Between Sensing and Application

The available 3D sensors, especially those designed for consumer electronics, are somewhat automated and easy to use, due largely to the tremendous effort made towards automation. However, the accuracy and resolution performances of those sensors are not high, making it easier to perform measurement without requiring the system to be operated under optimal conditions. Yet, most high-accuracy optical 3D surface measurement systems are still not plug-and-play or compliant with industry standards for automation and control. General-purpose “point-and-shoot” high-accuracy 3D surface measurement tools are rare. Interfacing 3D imaging systems requires the development of middleware that is often an insurmountable barrier for many applications. In order for 3D technologies to reach their full potential, 3D systems have to be as easy to use as their 2D counterparts. The interface has to be simple such that users can develop applications without expert knowledge in 3D surface measurement system development. One way to achieve this is automation: the system is fully automated, such that there are no training requirements for someone to capture the best quality data. The automation includes auto-exposure, auto-focus, auto-calibration, etc. Of course, achieving all of these together will be a long journey for this field, yet the community could advance by drawing inspiration from the historical breakthroughs in 2D imaging.

G. Design Optimization

Design optimization is a complex problem without unique solutions. For example, many can follow open knowledge and build a single camera-projector system with a commercial projector and camera. However, not many can achieve the full potential that the hardware permits because (1) the hardware components are not designed, and thus not optimized, for metrology purposes; (2) the driving software is not designed, and thus not optimized, for a non-expert to use easily; (3) the geometric configuration optimization is not studied in the literature; and (4) the calibration remains difficult for non-experts, among other reasons. As such, only experts can design and develop optimal solutions. Design optimization involves multiple stages: hardware component design, hardware system design, and software algorithm optimization. Achieving this goal is challenging because of different and sometimes conflicting interests from various parties.

H. Self-Calibration

Accurate calibration is difficult and requires sufficient expertise and controlled settings. There is a need for developing self-calibration approaches that require minimal user input or only a rough calibration. Ideally, the system automatically optimizes the calibration parameters to meet specific metrological criteria using affordable standard calibration artifacts.

As discussed earlier, calibrating 3D surface measurement systems is typically an elaborate and lengthy task that requires multiple acquisitions of calibration artifacts and, in some cases, independent pre-calibration of each component. Despite recent developments [194–197], system self-calibration remains very challenging because all calibration parameters need to be estimated simultaneously.
In a general sense, the self-calibration problem is often cast as a constrained optimization problem [196]. Early works realized that, by considering the projector as an inverse camera, a multiview approach with bundle adjustment could be used to carry out system self-calibration [198,199]. However, we should distinguish between fully self-calibrating methods that estimate all calibration parameters, such as those discussed by various teams [194–197,200], and methods that estimate the relative poses of the components with precisely calibrated intrinsic information [201,202]. Nonetheless, achieving a successful calibration with high 3D reconstruction accuracy depends largely on the underlying assumptions. For example, assuming a known 3D geometry [195,200,203], or a good guess of the intrinsic parameters [194], tends to produce satisfactory calibration results. However, those strong assumptions require a priori knowledge that is not too far from the conventional calibration approach. Alternatively, Li et al. [196] reduced the number of assumptions, with the strongest one being a non-planar scene. They achieved a reliable 3D reconstruction with acceptable errors. However, the requirement of precise projector intrinsic parameters poses practical challenges, mainly because most of the available projectors are manufactured for purposes other than metrology, and thus the intrinsic parameters (e.g., principal point) could vary dramatically from one device to another. We believe that when the design of projectors for 3D surface measurement systems is standardized, self-calibration could become easier in practice.
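To make the nature of this optimization concrete, the toy sketch below jointly refines camera and projector parameters by minimizing reprojection error on synthetic data. The one-angle pose model and all numeric values are hypothetical, far simpler than any of the cited formulations.

```python
import numpy as np
from scipy.optimize import least_squares

def project(points, f, pose):
    """Pinhole projection with a rotation about the y axis (toy model)."""
    theta, tx, ty, tz = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    p = points @ R.T + np.array([tx, ty, tz])
    return f * p[:, :2] / p[:, 2:3]

def residuals(x, pts, obs_cam, obs_proj):
    # x = [f_cam, f_proj, theta, tx, ty, tz]: every unknown is refined
    # jointly, which is what makes full self-calibration ill posed
    # without priors or scene constraints.
    r_cam = project(pts, x[0], [0.0, 0.0, 0.0, 0.0]) - obs_cam
    r_proj = project(pts, x[1], x[2:6]) - obs_proj
    return np.concatenate([r_cam.ravel(), r_proj.ravel()])

# Synthetic scene and observations, then joint refinement from a
# perturbed initial guess.
pts = np.random.rand(50, 3) * [0.4, 0.4, 0.2] + [0.0, 0.0, 1.0]
x_true = np.array([800.0, 900.0, 0.2, -0.15, 0.0, 0.05])
obs_c = project(pts, x_true[0], [0.0, 0.0, 0.0, 0.0])
obs_p = project(pts, x_true[1], x_true[2:6])
sol = least_squares(residuals, x_true * 1.1, args=(pts, obs_c, obs_p))
```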
Nevertheless, there are cases where the system can be pre-calibrated, but due to the conditions of the operating environment (e.g., mechanical vibrations), calibration parameters change solely because components move relative to each other. This condition is better posed, and several of the existing methods could successfully re-calibrate the system assuming the new calibration parameters are not too far from the initial calibration. However, full self-calibration with little to no a priori knowledge is still too challenging.
I. Data Management

With 3D surface metrology tools being integrated into mobile devices, manufacturing production lines, surveillance, and others, capturing 3D images becomes increasingly easier; consequently, storing and managing those historical data become increasingly critical. Effectively representing 3D data in the form of standard meshes (e.g., STL, OBJ, PLY) will soon become an issue because of their large storage requirements. In fact, most standard mesh formats do not take advantage of the inherent structure of 3D surface metrology tools and thus store redundant information. For example, an area 3D surface measurement system has natural connectivity information; therefore, the mapping to color or normals can be computed on demand. Standard data structures should thus be designed and tailored for this community.
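The redundancy argument can be seen in a few lines: for an area scanner, the depth map is a regular 2D array, so mesh connectivity never needs to be stored because the triangles are implied by pixel adjacency. The resolution below is an arbitrary example.

```python
import numpy as np

def grid_triangles(rows, cols):
    """Triangle indices implied by the 2x2 pixel quads of a depth map."""
    idx = np.arange(rows * cols).reshape(rows, cols)
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    return np.concatenate([np.stack([a, b, c], 1), np.stack([b, d, c], 1)])

# A 480x640 depth image regenerates its ~612,000 triangles on demand,
# whereas a standard mesh format would store three vertices per triangle.
tris = grid_triangles(480, 640)
```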
Even more urgent is to develop methods to further compress 3D data in a lossy or lossless format (such as the counterparts for 2D image representations). Large strides have been made for variations of 3D data representation and compression [204–209], yet none of them has been widely accepted as a common practice. As such, the entire community needs to work together to address the challenging question: how can we come up with methods to effectively store and deliver such enormously large 3D data?

5. SUMMARY

Active structured probing techniques have proven to be one of the most powerful concepts in the design of 3D optical measurement systems. This paper endeavored to review and provide a critical summary of the state-of-the-art techniques for 3D surface measurement. We have shown that probing a surface with an intentionally manipulated light beam is likely the most reliable way to perform non-contact 3D measurements today. While there are still many persisting challenges, we believe the time has come to consolidate best practices into standards and to push forward the integrated design of modern systems. We encourage readers to refer to the original work of each referenced paper and to evaluate it carefully before spending effort adopting any technique for practical applications.

Funding. Fulbright Colombia (Cohort 2019–2020); Directorate for Computer and Information Science and Engineering (IIS-1637961, IIS-1763689); Engineering and Physical Sciences Research Council (EP/P006930/1, EP/T024844/1).

Acknowledgment. A. G. Marrugo thanks Universidad Tecnológica de Bolívar for a Research Leave Fellowship, and acknowledges support from the Fulbright Commission in Colombia and the Colombian Ministry of Education within the framework of the Fulbright Visiting Scholar Program. F. Gao thanks the EPSRC of the UK for the funding of the EPSRC Future Advanced Metrology Hub and A Multiscale Digital Twin-Driven Smart Manufacturing System for High Value-Added Products. S. Zhang thanks the NSF for its support. The views expressed in this paper are those of the authors and not necessarily those of the sponsors.

Disclosures. The authors declare no conflicts of interest.

REFERENCES

1. R. Won, “Structured light spiralling up,” Nat. Photonics 11, 619–622 (2017).
2. J.-A. Beraldin, B. Carrier, D. MacKinnon, and L. Cournoyer, “Characterization of triangulation-based 3D imaging systems using certified artifacts,” NCSLI Meas. 7, 50–60 (2016).
3. K. Creath, “Phase-measurement interferometry techniques,” Prog. Opt. 26, 349–393 (1988).
4. D. Malacara, ed., Optical Shop Testing, 3rd ed. (Wiley, 2007).
5. D. C. Ghiglia and M. D. Pritt, eds., Two-Dimensional Phase Unwrapping: Theory, Algorithms, and Software (Wiley, 1998).
6. X. Su and W. Chen, “Reliability-guided phase unwrapping algorithm: a review,” Opt. Laser Eng. 42, 245–261 (2004).
7. Y.-Y. Cheng and J. C. Wyant, “Two-wavelength phase shifting interferometry,” Appl. Opt. 23, 4539–4543 (1984).
8. Y.-Y. Cheng and J. C. Wyant, “Multiple-wavelength phase shifting interferometry,” Appl. Opt. 24, 804–807 (1985).
9. J. Schmit, K. Creath, and J. C. Wyant, “Surface profilers, multiple wavelength, and white light interferometry,” in Optical Shop Testing, 3rd ed. (Wiley, 2007).
10. R. Windecker, P. Haible, and H. Tiziani, “Fast coherence scanning interferometry for measuring smooth, rough and spherical surfaces,” J. Mod. Opt. 42, 2059–2069 (1995).
11. T. Dresel, G. Häusler, and H. Venzke, “Three-dimensional sensing of rough surfaces by coherence radar,” Appl. Opt. 31, 919–925 (1992).
12. L. Deck and P. De Groot, “High-speed noncontact profiler based on scanning white-light interferometry,” Appl. Opt. 33, 7334–7338 (1994).
13. A. Harasaki, J. Schmit, and J. C. Wyant, “Improved vertical-scanning interferometry,” Appl. Opt. 39, 2107–2115 (2000).
14. F. Gao, R. K. Leach, J. Petzing, and J. M. Coupland, “Surface measurement errors using commercial scanning white light interferometers,” Meas. Sci. Technol. 19, 015303 (2007).
15. J. P. Waters, “Holographic image synthesis utilizing theoretical methods,” Appl. Phys. Lett. 9, 405–407 (1966).
16. J. Wyant and V. Bennett, “Using computer generated holograms to test aspheric wavefronts,” Appl. Opt. 11, 2833–2839 (1972).
17. J. H. Burge, “Applications of computer-generated holograms for interferometric measurement of large aspheric optics,” Proc. SPIE 2576, 258–269 (1995).
18. H. Shen, R. Zhu, Z. Gao, E. Pun, W. Wong, and X. Zhu, “Design and fabrication of computer-generated holograms for testing optical freeform surfaces,” Chin. Opt. Lett. 11, 032201 (2013).
19. P. Zanuttigh, G. Marin, C. D. Mutto, F. Minto, and G. M. Cortelazzo, Time-of-Flight and Structured Light Depth Cameras (Springer, 2016).
20. T. Kushida, K. Tanaka, T. Aoto, T. Funatomi, and Y. Mukaigawa, “Phase disambiguation using spatio-temporally modulated illumination in depth sensing,” IPSJ Trans. Comput. Vis. Appl. 12, 1 (2020).
21. M. Hansard, S. Lee, O. Choi, and R. Horaud, Time-of-Flight Cameras: Principles, Methods and Applications (Springer, 2013).
22. S. Foix, G. Alenya, and C. Torras, “Lock-in time-of-flight (ToF) cameras: a survey,” IEEE Sens. J. 11, 1917–1926 (2011).
23. J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern Recogn. 43, 2666–2680 (2010).
24. S. Zhang, “High-speed 3D shape measurement with structured light methods: a review,” Opt. Laser Eng. 106, 119–131 (2018).
25. D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” Int. J. Comput. Vis. 47, 7–42 (2002).
26. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University, 2003).
27. S. Rusinkiewicz, O. Hall-Holt, and M. Levoy, “Real-time 3D model acquisition,” ACM Trans. Graph. 21, 438–446 (2002).
28. O. Hall-Holt and S. Rusinkiewicz, “Stripe boundary codes for real-time structured-light range scanning of moving objects,” in 8th IEEE International Conference on Computer Vision (2001), pp. 359–366.
29. J. Xu and S. Zhang, “Status, challenges, and future perspectives of fringe projection profilometry,” Opt. Laser Eng. (to be published).
30. M. Takeda and K. Mutoh, “Fourier transform profilometry for the automatic measurement of 3-D object shapes,” Appl. Opt. 22, 3977–3982 (1983).
31. Q. Kemao, “Windowed Fourier transform for fringe pattern analysis,” Appl. Opt. 43, 2695–2702 (2004).
32. Q. Kemao, “Two-dimensional windowed Fourier transform for fringe pattern analysis: principles, applications and implementations,” Opt. Laser Eng. 45, 304–317 (2007).
33. K. Qian, “Comparison of Fourier transform, windowed Fourier transform, and wavelet transform methods for phase extraction from a single fringe pattern in fringe projection profilometry,” Opt. Laser Eng. 48, 141–148 (2010).
34. K. Qian, “Applications of windowed Fourier fringe analysis in optical measurement: a review,” Opt. Laser Eng. 66, 67–73 (2015).
35. L. Guo, X. Su, and J. Li, “Improved Fourier transform profilometry for the automatic measurement of 3D object shapes,” Opt. Eng. 29, 1439–1444 (1990).
36. H. Guo and P. S. Huang, “Absolute phase technique for the Fourier transform method,” Opt. Eng. 48, 043609 (2009).
37. X. Su and Q. Zhang, “Dynamic 3-D shape measurement method: a review,” Opt. Laser Eng. 48, 191–204 (2010).
38. Z. Zhang, “Review of single-shot 3D shape measurement by phase calculation-based fringe projection techniques,” Opt. Laser Eng. 50, 1097–1106 (2012).
39. M. Takeda, “Fourier fringe analysis and its applications to metrology of extreme physical phenomena: a review,” Appl. Opt. 52, 20–29 (2013).
40. G. Sansoni, M. Carocci, and R. Rodella, “Three-dimensional vision based on a combination of gray-code and phase-shift light projection: analysis and compensation of the systematic errors,” Appl. Opt. 38, 6565–6573 (1999).
41. Y. Wang and S. Zhang, “Novel phase coding method for absolute phase retrieval,” Opt. Lett. 37, 2067–2069 (2012).
42. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: a comparative review,” Opt. Laser Eng. 85, 84–103 (2016).
43. S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: a review,” Opt. Laser Eng. 107, 28–37 (2018).
44. K. Zhong, Z. Li, Y. Shi, C. Wang, and Y. Lei, “Fast phase measurement profilometry for arbitrary shape objects without phase unwrapping,” Opt. Laser Eng. 51, 1213–1222 (2013).
45. Z. Li, K. Zhong, Y. Li, X. Zhou, and Y. Shi, “Multiview phase shifting: a full-resolution and high-speed 3D measurement framework for arbitrary shape dynamic objects,” Opt. Lett. 38, 1389–1391 (2013).
46. Y. R. Huddart, J. D. R. Valera, N. J. Weston, and A. J. Moore, “Absolute phase measurement in fringe projection using multiple perspectives,” Opt. Express 21, 21119–21130 (2013).
47. Y. An, J.-S. Hyun, and S. Zhang, “Pixel-wise absolute phase unwrapping using geometric constraints of structured light system,” Opt. Express 24, 18445–18459 (2016).
48. W. Cruz-Santos and L. Lopez-Garcia, “Implicit absolute phase retrieval in digital fringe projection without reference lines,” Appl. Opt. 54, 1688–1695 (2015).
49. S. Zhang and S.-T. Yau, “High-resolution, real-time 3-D absolute coordinate measurement based on a phase-shifting method,” Opt. Express 14, 2644–2649 (2006).
50. X. Su, Q. Zhang, Y. Xiao, and L. Xiang, “Dynamic 3-D shape measurement techniques with marked fringes tracking,” in Fringe (2009), pp. 493–496.
51. D. Zheng, Q. Kemao, F. Da, and H. S. Seah, “Ternary gray code-based phase unwrapping for 3D measurement using binary patterns with projector defocusing,” Appl. Opt. 56, 3660–3665 (2017).
52. C. Zhou, T. Liu, S. Si, J. Xu, Y. Liu, and Z. Lei, “Phase coding method for absolute phase retrieval with a large number of codewords,” Opt. Express 20, 24139–24150 (2012).
53. X. Y. Su, W. S. Zhou, G. Von Bally, and D. Vukicevic, “Automated phase-measuring profilometry using defocused projection of a Ronchi grating,” Opt. Commun. 94, 561–573 (1992).
54. S. Lei and S. Zhang, “Flexible 3-D shape measurement using projector defocusing,” Opt. Lett. 34, 3080–3082 (2009).
55. S. Zhang, D. van der Weide, and J. Oliver, “Superfast phase-shifting method for 3-D shape measurement,” Opt. Express 18, 9684–9689 (2010).
56. S. Lei and S. Zhang, “Digital sinusoidal fringe generation: defocusing binary patterns vs focusing sinusoidal patterns,” Opt. Laser Eng. 48, 561–569 (2010).
57. B. Li and S. Zhang, “Microscopic structured light 3D profilometry: binary defocusing technique vs sinusoidal fringe projection,” Opt. Laser Eng. 96, 117–123 (2017).
58. G. A. Ayubi, J. A. Ayubi, J. M. D. Martino, and J. A. Ferrari, “Pulse-width modulation in defocused 3-D fringe projection,” Opt. Lett. 35, 3682–3684 (2010).
59. Y. Wang and S. Zhang, “Optimal pulse width modulation for sinusoidal fringe generation with projector defocusing,” Opt. Lett. 35, 4121–4123 (2010).
60. T. Xian and X. Su, “Area modulation grating for sinusoidal structure illumination on phase-measuring profilometry,” Appl. Opt. 40, 1201–1206 (2001).
61. W. Lohry and S. Zhang, “Genetic method to optimize binary dithering technique for high-quality fringe generation,” Opt. Lett. 38, 540–542 (2013).
62. J. Dai, B. Li, and S. Zhang, “High-quality fringe patterns generation using binary pattern optimization through symmetry and periodicity,” Opt. Laser Eng. 52, 195–200 (2014).
63. J. Zhu, P. Zhou, X. Su, and Z. You, “Accurate and fast 3D surface measurement with temporal-spatial binary encoding structured illumination,” Opt. Express 24, 28549–28560 (2016).
64. Y. Wang, C. Jiang, and S. Zhang, “Double-pattern triangular pulse width modulation technique for high-accuracy high-speed 3D shape measurement,” Opt. Express 25, 30177–30188 (2017).
65. Y. Wang and S. Zhang, “Comparison among square binary, sinusoidal pulse width modulation, and optimal pulse width modulation methods for three-dimensional shape measurement,” Appl. Opt. 51, 861–872 (2012).
66. M.-A. Drouin, G. Godin, M. Picard, J. Boisvert, and L.-G. Dicaire, “Structured-light systems using a programmable quasi-analogue projection subsystem,” Proc. SPIE 11294, 112940O (2020).
67. “Geometrical product specifications (GPS)—surface texture: profile method; measurement standards—part 1: material measures,” Standard ISO 5436-1:2000 (International Organization for Standardization, 2000).
68. “Geometrical product specifications (GPS)—surface texture: profile method; measurement standards—part 2: software measurement standards,” Standard ISO 5436-2:2012 (International Organization for Standardization, 2012).
69. R. K. Leach, C. Giusca, H. Haitjema, C. Evans, and X. Jiang, “Calibration and verification of areal surface texture measuring instruments,” CIRP Ann. 64, 797–813 (2015).
70. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22, 1330–1334 (2000).
71. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45, 083601 (2006).
72. B. Li, N. Karpinsky, and S. Zhang, “Novel calibration method for structured light system with an out-of-focus projector,” Appl. Opt. 53, 3415–3426 (2014).
73. T. Bell and S. Zhang, “Method for out-of-focus camera calibration,” Appl. Opt. 55, 2346–2352 (2016).
74. Y. An, T. Bell, B. Li, J. Xu, and S. Zhang, “Novel method for large range structured light system calibration,” Appl. Opt. 55, 9563–9572 (2016).
75. K. Li, J. Bu, and D. Zhang, “Lens distortion elimination for improving measurement accuracy of fringe projection profilometry,” Opt. Laser Eng. 85, 53–64 (2016).
76. R. Vargas, A. G. Marrugo, J. Pineda, J. Meneses, and L. A. Romero, “Camera-projector calibration methods with compensation of geometric distortions in fringe projection profilometry: a comparative study,” Opt. Pura Appl. 51, 50305 (2018).
77. Y. Yin, X. Peng, A. Li, X. Liu, and B. Z. Gao, “Calibration of fringe projection profilometry with bundle adjustment strategy,” Opt. Lett. 37, 542–544 (2012).
78. L. Huang, P. S. Chua, and A. Asundi, “Least-squares calibration method for fringe projection profilometry considering camera lens distortion,” Appl. Opt. 49, 1539–1548 (2010).
79. Y. An, T. Bell, B. Li, J. Xu, and S. Zhang, “Method for large-range structured light system calibration,” Appl. Opt. 55, 9563–9572 (2016).
80. R. Vargas, A. G. Marrugo, S. Zhang, and L. A. Romero, “Hybrid calibration procedure for fringe projection profilometry based on stereo vision and polynomial fitting,” Appl. Opt. 59, D163–D167 (2020).
81. D. Lefloch, R. Nair, F. Lenzen, H. Schäfer, L. Streeter, M. J. Cree, R. Koch, and A. Kolb, “Technical foundation and calibration methods for time-of-flight cameras,” in Time-of-Flight and Depth Imaging. Sensors, Algorithms, and Applications (Springer, 2013), pp. 3–24.
82. S. Fuchs and G. Hirzinger, “Extrinsic and depth calibration of ToF-cameras,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2008), pp. 1–6.
83. A. Bhandari, A. Kadambi, R. Whyte, C. Barsi, M. Feigin, A. Dorrington, and R. Raskar, “Resolving multipath interference in time-of-flight imaging via modulation frequency diversity and sparse regularization,” Opt. Lett. 39, 1705–1708 (2014).
84. A. Jarabo, B. Masia, J. Marco, and D. Gutierrez, “Recent advances in transient imaging: a computer graphics and vision perspective,” Vis. Inf. 1, 65–79 (2017).
85. C. L. Koliopoulos, “Simultaneous phase-shift interferometer,” Proc. SPIE 1531, 119–127 (1992).
86. B. Ngoi, K. Venkatakrishnan, and N. Sivakumar, “Phase-shifting interferometry immune to vibration,” Appl. Opt. 40, 3211–3214 (2001).
87. J. E. Millerd, N. J. Brock, J. B. Hayes, and J. C. Wyant, “Instantaneous phase-shift point-diffraction interferometer,” Proc. SPIE 5531, 264–272 (2004).
88. H. Kihm and S.-W. Kim, “Fiber-diffraction interferometer for vibration desensitization,” Opt. Lett. 30, 2059–2061 (2005).
89. J. Huang, T. Honda, N. Ohyama, and J. Tsujiuchi, “Fringe scanning scatter plate interferometer using a polarized light,” Opt. Commun. 68, 235–238 (1988).
90. M. B. North-Morris, J. VanDelden, and J. C. Wyant, “Phase-shifting birefringent scatterplate interferometer,” Appl. Opt. 41, 668–677 (2002).
91. D.-C. Su and L.-H. Shyu, “Phase shifting scatter plate interferometer using a polarization technique,” J. Mod. Opt. 38, 951–959 (1991).
92. G. S. Kino and S. S. Chim, “Mirau correlation microscope,” Appl. Opt. 29, 3775–3783 (1990).
93. C. Gomez, R. Su, P. De Groot, and R. Leach, “Noise reduction in coherence scanning interferometry for surface topography measurement,” Nanomanuf. Metrol. 3, 68–76 (2020).
94. H. Altamar-Mercado, A. Patiño-Vanegas, and A. G. Marrugo, “Robust 3D surface recovery by applying a focus criterion in white light scanning interference microscopy,” Appl. Opt. 58, A101–A111 (2019).
95. M. Thomas, R. Su, N. Nikolaev, J. Coupland, and R. K. Leach, “Modeling of interference microscopy beyond the linear regime,” Opt. Eng. 59, 034110 (2020).
96. S. Kuwamura and I. Yamaguchi, “Wavelength scanning profilometry for real-time surface shape measurement,” Appl. Opt. 36, 4473–4482 (1997).
97. D. S. Mehta, S. Saito, H. Hinosugi, M. Takeda, and T. Kurokawa, “Spectral interference Mirau microscope with an acousto-optic tunable filter for three-dimensional surface profilometry,” Appl. Opt. 42, 1296–1305 (2003).
98. K. Hibino, B. F. Oreb, P. S. Fairman, and J. Burke, “Simultaneous measurement of surface shape and variation in optical thickness of a transparent parallel plate in wavelength-scanning Fizeau interferometer,” Appl. Opt. 43, 1241–1249 (2004).
99. X. Jiang, K. Wang, F. Gao, and H. Muhamedsalih, “Fast surface measurement using wavelength scanning interferometry with compensation of environmental noise,” Appl. Opt. 49, 2903–2909 (2010).
100. G. Bourdet and A. Orszag, “Absolute distance measurements by CO2 laser multiwavelength interferometry,” Appl. Opt. 18, 225–227 (1979).
101. K.-H. Bechstein and W. Fuchs, “Absolute interferometric distance measurements applying a variable synthetic wavelength,” J. Opt. 29, 179 (1998).
102. H. Muhamedsalih, S. Al-Bashir, F. Gao, and X. Jiang, “Single-shot RGB polarising interferometer,” Proc. SPIE 10749, 1074909 (2018).
103. J. Kagami, T. Hatazawa, and K. Koike, “Measurement of surface profiles by the focusing method,” Wear 134, 221–229 (1989).
104. M. Visscher and K. Struik, “Optical profilometry and its application to mechanically inaccessible surfaces part I: principles of focus error detection,” Precis. Eng. 16, 192–198 (1994).
105. M. Visscher, C. Hendriks, and K. Struik, “Optical profilometry and its application to mechanically inaccessible surfaces part II: application to elastometer/glass contacts,” Precis. Eng. 16, 199–204 (1994).
106. M. Minsky, “Memoir on inventing the confocal scanning microscope,” Scanning 10, 128–138 (1988).
107. D. Hamilton and T. Wilson, “Surface profile measurement using the confocal microscope,” J. Appl. Phys. 53, 5320–5322 (1982).
108. H.-J. Jordan, M. Wegner, and H. Tiziani, “Highly accurate non-contact characterization of engineering surfaces using confocal microscopy,” Meas. Sci. Technol. 9, 1142 (1998).
109. R. Windecker, M. Fleischer, and H. J. Tiziani, “Three-dimensional topometry with stereo microscopes,” Opt. Eng. 36, 3372–3377 (1997).
110. C. Zhang, P. S. Huang, and F.-P. Chiang, “Microscopic phase-shifting profilometry based on digital micromirror device technology,” Appl. Opt. 41, 5896–5904 (2002).
111. K.-P. Proll, J.-M. Nivet, K. Körner, and H. J. Tiziani, “Microscopic three-dimensional topometry with ferroelectric liquid-crystal-on-silicon displays,” Appl. Opt. 42, 1773–1778 (2003).
112. R. Rodriguez-Vera, K. Genovese, J. Rayas, and F. Mendoza-Santoyo, “Vibration analysis at microscale by Talbot fringe projection method,” Strain 45, 249–258 (2009).
113. A. Li, X. Peng, Y. Yin, X. Liu, Q. Zhao, K. Körner, and W. Osten, “Fringe projection based quantitative 3D microscopy,” Optik 124, 5052–5056 (2013).
114. C. Quan, X. Y. He, C. F. Wang, C. J. Tay, and H. M. Shang, “Shape measurement of small objects using LCD fringe projection with phase shifting,” Opt. Commun. 189, 21–29 (2001).
115. C. Quan, C. J. Tay, X. Y. He, X. Kang, and H. M. Shang, “Microscopic surface contouring by fringe projection method,” Opt. Laser Technol. 34, 547–552 (2002).
116. J. Chen, T. Guo, L. Wang, Z. Wu, X. Fu, and X. Hu, “Microscopic fringe projection system and measuring method,” Proc. SPIE 8759, 87594U (2013).
117. D. S. Mehta, M. Inam, J. Prakash, and A. Biradar, “Liquid-crystal phase-shifting lateral shearing interferometer with improved fringe contrast for 3D surface profilometry,” Appl. Opt. 52, 6119–6125 (2013).
118. Y. Yin, M. Wang, B. Z. Gao, X. Liu, and X. Peng, “Fringe projection 3D microscopy with the general imaging model,” Opt. Express 23, 6846–6857 (2015).
119. D. Li and J. Tian, “An accurate calibration method for a camera with telecentric lenses,” Opt. Laser Eng. 51, 538–541 (2013).
120. D. Li, C. Liu, and J. Tian, “Telecentric 3D profilometry based on phase-shifting fringe projection,” Opt. Express 22, 31826–31835 (2014).
121. B. Li and S. Zhang, “Flexible calibration method for microscopic structured light system using telecentric lens,” Opt. Express 23, 25795–25803 (2015).
122. R. Whyte, L. Streeter, M. J. Cree, and A. A. Dorrington, “Resolving multiple propagation paths in time of flight range cameras using direct and global separation methods,” Opt. Eng. 54, 113109 (2015).
123. M. Gupta, S. K. Nayar, M. B. Hullin, and J. Martin, “Phasor imaging: a generalization of correlation-based time-of-flight imaging,” ACM Trans. Graph. 34, 1–18 (2015).
124. T. Muraji, K. Tanaka, T. Funatomi, and Y. Mukaigawa, “Depth from phasor distortions in fog,” Opt. Express 27, 18858–18868 (2019).
125. A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and R. Raskar, “Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles,” ACM Trans. Graph. 32, 1–10 (2013).
126. S. Lee and H. Shim, “Skewed stereo time-of-flight camera for translucent object imaging,” Image Vis. Comput. 43, 27–38 (2015).
127. K. Tanaka, Y. Mukaigawa, H. Kubo, Y. Matsushita, and Y. Yagi, “Recovering transparent shape from time-of-flight distortion,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 4387–4395.
128. M. Poggi, G. Agresti, F. Tosi, P. Zanuttigh, and S. Mattoccia, “Confidence estimation for ToF and stereo sensors and its application to depth data fusion,” IEEE Sens. J. 20, 1411–1421 (2020).
129. G. Agresti and P. Zanuttigh, “Combination of spatially-modulated ToF and structured light for MPI-free depth estimation,” in Proceedings of the European Conference on Computer Vision (ECCV) (2018).
130. J. N. Mait, G. W. Euliss, and R. A. Athale, “Computational imaging,” Adv. Opt. Photon. 10, 409–475 (2018).
131. G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6, 921–943 (2019).
132. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008).
133. M.-J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7, 12010 (2016).
134. E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 99–106 (1992).
135. T. E. Bishop and P. Favaro, “The light field camera: extended depth of field, aliasing, and superresolution,” IEEE Trans. Pattern Anal. Mach. Intell. 34, 972–986 (2011).
136. Z. Cai, X. Liu, X. Peng, Y. Yin, A. Li, J. Wu, and B. Z. Gao, “Structured light field 3D imaging,” Opt. Express 24, 20324–20334 (2016).
137. Z. Cai, X. Liu, X. Peng, and B. Z. Gao, “Ray calibration and phase mapping for structured-light-field 3D reconstruction,” Opt. Express 26, 7598–7613 (2018).
138. Z. Cai, X. Liu, G. Pedrini, W. Osten, and X. Peng, “Accurate depth estimation in structured light fields,” Opt. Express 27, 13532–13546 (2019).
139. C. Alippi, A. Ferrero, and V. Piuri, “Artificial intelligence for instruments and measurement applications,” IEEE Instrum. Meas. Mag. 1(2), 9–17 (1998).
140. A. Halevy, P. Norvig, and F. Pereira, “The unreasonable effectiveness of data,” IEEE Intell. Syst. 24, 8–12 (2009).
141. S. Su, F. Heide, G. Wetzstein, and W. Heidrich, “Deep end-to-end time-of-flight imaging,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 6383–6392.
142. D. Weichert, P. Link, A. Stoll, S. Rüping, S. Ihlenfeldt, and S. Wrobel, “A review of machine learning for the optimization of production processes,” Int. J. Adv. Manuf. Technol. 104, 1889–1902 (2019).
143. W. Yin, Q. Chen, S. Feng, T. Tao, L. Huang, M. Trusiak, A. Asundi, and C. Zuo, “Temporal phase unwrapping using deep learning,” Sci. Rep. 9, 1–12 (2019).
144. K. Wang, Y. Li, Q. Kemao, J. Di, and J. Zhao, “One-step robust deep learning phase unwrapping,” Opt. Express 27, 15100–15115 (2019).
145. S. Feng, C. Zuo, W. Yin, G. Gu, and Q. Chen, “Micro deep learning profilometry for high-speed 3D surface imaging,” Opt. Laser Eng. 121, 416–427 (2019).
146. S. Lv, Q. Sun, Y. Zhang, Y. Jiang, J. Yang, J. Liu, and J. Wang, “Projector distortion correction in 3D shape measurement using a structured-light system by deep neural networks,” Opt. Lett. 45, 204–207 (2020).
147. S. Van der Jeught and J. J. J. Dirckx, “Deep neural networks for single shot structured light profilometry,” Opt. Express 27, 17091–17101 (2019).
148. J. Qian, S. Feng, Y. Li, T. Tao, J. Han, Q. Chen, and C. Zuo, “Single-shot absolute 3D shape measurement with deep-learning-based color fringe projection profilometry,” Opt. Lett. 45, 1842–1844 (2020).
149. J. Marco, Q. Hernandez, A. Muñoz, Y. Dong, A. Jarabo, M. H. Kim, X. Tong, and D. Gutierrez, “Deep ToF: off-the-shelf real-time correction of multipath interference in time-of-flight imaging,” ACM Trans. Graph. 36, 1–12 (2017).
150. S. Zhan, T. Suming, G. Feifei, S. Chu, and F. Jianyang, “DOE-based structured-light method for accurate 3D sensing,” Opt. Laser Eng. 120, 21–30 (2019).
151. Budianto and D. P. K. Lun, “Robust fringe projection profilometry via sparse representation,” IEEE Trans. Image Process. 25, 1726–1739 (2016).
152. H. Guo, “Face recognition based on fringe pattern analysis,” Opt. Eng. 49, 037201 (2010).
153. F. Liu, D. Zhang, and L. Shen, “Study on novel curvature features for 3D fingerprint recognition,” Neurocomputing 168, 599–608 (2015).
154. S. Jiao, Y. Gao, J. Feng, T. Lei, and X. Yuan, “Does deep learning always outperform simple linear regression in optical imaging?” Opt. Express 28, 3717–3731 (2020).
155. F. Wang, Y. Bian, H. Wang, M. Lyu, G. Pedrini, W. Osten, G. Barbastathis, and G. Situ, “Phase imaging with an untrained neural network,” Light Sci. Appl. 9, 77 (2020).
156. L. Ekstrand and S. Zhang, “Auto-exposure for three-dimensional shape measurement with a digital-light-processing projector,” Opt. Eng. 50, 123603 (2011).
157. B. Chen and S. Zhang, “High-quality 3D shape measurement using saturated fringe patterns,” Opt. Laser Eng. 87, 83–89 (2016).
158. S. Zhang and S.-T. Yau, “High dynamic range scanning technique,” Opt. Eng. 48, 033604 (2009).
159. C. Waddington and J. Kofman, “Analysis of measurement sensitivity to illuminance and fringe-pattern gray levels for fringe-pattern projection adaptive to ambient lighting,” Opt. Laser Eng. 48, 251–256 (2010).
160. C. Jiang, T. Bell, and S. Zhang, “High dynamic range real-time 3D shape measurement,” Opt. Express 24, 7337–7346 (2016).
161. Y. Zheng, Y. Wang, V. Suresh, and B. Li, “Real-time high-dynamic-range fringe acquisition for 3D shape measurement with a RGB camera,” Meas. Sci. Technol. 30, 075202 (2019).
162. V. Suresh, Y. Wang, and B. Li, “High-dynamic-range 3D shape measurement utilizing the transitioning state of digital micromirror device,” Opt. Laser Eng. 107, 176–181 (2018).
163. B. Salahieh, Z. Chen, J. J. Rodriguez, and R. Liang, “Multi-polarization fringe projection imaging for high dynamic range objects,” Opt. Express 22, 10064–10071 (2014).
164. H. Lin, J. Gao, Q. Mei, Y. He, J. Liu, and X. Wang, “Three-dimensional shape measurement technique for shiny surfaces by adaptive pixel-wise projection intensity adjustment,” Opt. Laser Eng. 91, 206–215 (2017).
165. D. Li and J. Kofman, “Adaptive fringe-pattern projection for image saturation avoidance in 3D surface-shape measurement,” Opt. Express 22, 9887–9901 (2014).
166. H. Jiang, H. Zhao, and X. Li, “High dynamic range fringe acquisition: a novel 3-D scanning technique for high-reflective surfaces,” Opt. Laser Eng. 50, 1484–1493 (2012).
167. H. Zhao, X. Liang, X. Diao, and H. Jiang, “Rapid in-situ 3D measurement of shiny object based on fast and high dynamic range digital fringe projector,” Opt. Laser Eng. 54, 170–174 (2014).
168. C. Chen, N. Gao, X. Wang, and Z. Zhang, “Adaptive projection intensity adjustment for avoiding saturation in three-dimensional shape measurement,” Opt. Commun. 410, 694–702 (2017).
169. S. Feng, Y. Zhang, Q. Chen, C. Zuo, R. Li, and G. Shen, “General solution for high dynamic range three-dimensional shape measurement using the fringe projection technique,” Opt. Laser Eng. 59, 56–71 (2014).
170. S. Ri, M. Fujigaki, and Y. Morimoto, “Intensity range extension method for three-dimensional shape measurement in phase-measuring profilometry using a digital micromirror device camera,” Appl. Opt. 47, 5400–5407 (2008).
171. S. Zhang, “Rapid and automatic optimal exposure control for digital fringe projection technique,” Opt. Laser Eng. 128, 106029 (2020).
172. X. Hu, G. Wang, J.-S. Hyun, Y. Zhang, H. Yang, and S. Zhang, “Autofocusing method for high-resolution three-dimensional profilometry,” Opt. Lett. 45, 375–378 (2020).
173. M. Zhong, X. Hu, F. Chen, C. Xiao, D. Peng, and S. Zhang, “Autofocusing method for digital fringe projection system with dual projectors,” Opt. Express 28, 12609–12620 (2020).
174. M. K. Kim, “Principles and techniques of digital holographic microscopy,” SPIE Rev. 1, 018005 (2010).
175. M. Paturzo, V. Pagliarulo, V. Bianco, P. Memmolo, L. Miccio, F. Merola, and P. Ferraro, “Digital holography, a metrological tool for quantitative analysis: trends and future applications,” Opt. Laser Eng. 104, 32–47 (2018).
176. P. Ferraro, S. Grilli, D. Alfieri, S. De Nicola, A. Finizio, G. Pierattini, B. Javidi, G. Coppola, and V. Striano, “Extended focused image in microscopy by digital holography,” Opt. Express 13, 6738–6749 (2005).
177. T. Kreis, “Application of digital holography for nondestructive testing and metrology: a review,” IEEE Trans. Ind. Inf. 12, 240–247 (2016).
178. A. Mikš and J. Novák, “Analysis of the optical center position of an optical system of a camera lens,” Appl. Opt. 57, 4409–4414 (2018).
179. Y. Zhang, Z. Xiong, P. Cong, and F. Wu, “Robust depth sensing with adaptive structured light illumination,” J. Visual Commun. Image Represent. 25, 649–658 (2014).
180. L. Ekstrand and S. Zhang, “Three-dimensional profilometry with nearly focused binary phase-shifting algorithms,” Opt. Lett. 36, 4518–4520 (2011).
181. J.-S. Hyun, G. T. C. Chiu, and S. Zhang, “High-speed and high-accuracy 3D surface measurement using a mechanical projector,” Opt. Express 26, 1474–1487 (2018).
182. S. Heist, P. Lutzke, I. Schmidt, P. Dietrich, P. Kühmstedt, A. Tünnermann, and G. Notni, “High-speed three-dimensional shape measurement using GOBO projection,” Opt. Laser Eng. 87, 90–96 (2016).
183. X. Hu, G. Wang, Y. Zhang, H. Yang, and S. Zhang, “Large depth-of-field 3D shape measurement using an electrically tunable lens,” Opt. Express 27, 29697–29709 (2019).
184. W. Torres-Sepúlveda, J. Henao, J. Morales-Marín, A. Mira-Agudelo, and E. Rueda, “Hysteresis characterization of an electrically focus-tunable lens,” Opt. Eng. 59, 044103 (2020).
185. R. Leach, L. Brown, J. Jiang, R. Blunt, M. Conroy, and D. Mauger, Guide to the Measurement of Smooth Surface Topography Using Coherence Scanning Interferometry (2008).
186. T. Chen, H. P. Lensch, C. Fuchs, and H.-P. Seidel, “Polarization and phase-shifting for 3D scanning of translucent objects,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.
187. R. M. Kowarschik, J. Gerber, G. Notni, W. Schreiber, and P. Kuehmstedt, “Adaptive optical 3D measurement with structured light,” Opt. Eng. 39, 150–158 (2000).
188. H. Lin, J. Gao, G. Zhang, X. Chen, Y. He, and Y. Liu, “Review and comparison of high-dynamic range three-dimensional shape measurement techniques,” J. Sens. 2017, 9576850 (2017).
189. H. Lin, J. Gao, Q. Mei, Y. He, J. Liu, and X. Wang, “Adaptive digital fringe projection technique for high dynamic range three-dimensional shape measurement,” Opt. Express 24, 7703–7718 (2016).
190. G.-H. Liu, X.-Y. Liu, and Q.-Y. Feng, “3D shape measurement of objects with high dynamic range of surface reflectivity,” Appl. Opt. 50, 4557–4565 (2011).
191. P. Lutzke, “Measuring error compensation on three-dimensional scans of translucent objects,” Opt. Eng. 50, 063601 (2011).
192. R. Ran, C. Stolz, D. Fofi, and F. Meriaudeau, “Non contact 3D measurement scheme for transparent objects using UV structured light,” in 20th International Conference on Pattern Recognition (ICPR) (IEEE, 2010), pp. 1646–1649.
193. A. Brahm, C. Rößler, P. Dietrich, S. Heist, P. Kühmstedt, and G. Notni, “Non-destructive 3D shape measurement of transparent and black objects with thermal fringes,” Proc. SPIE 9868, 98680C (2016).
194. S. Yamazaki, M. Mochimaru, and T. Kanade, “Simultaneous self-calibration of a projector and a camera using structured light,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops) (IEEE, 2011), pp. 60–67.
195. R. Orghidan, J. Salvi, M. Gordan, C. Florea, and J. Batlle, “Structured light self-calibration with vanishing points,” Mach. Vis. Appl. 25, 489–500 (2014).
196. F. Li, H. Sekkati, J. Deglint, C. Scharfenberger, M. Lamm, D. Clausi, J. Zelek, and A. Wong, “Simultaneous projector-camera self-calibration for three-dimensional reconstruction and projection mapping,” IEEE Trans. Comput. Imaging 3, 74–83 (2017).
197. S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, and M. J. Marín-Jiménez, “Simultaneous reconstruction and calibration for multi-view structured light scanning,” J. Visual Commun. Image Represent. 39, 120–131 (2016).
198. W. Schreiber and G. Notni, “Theory and arrangements of self-calibrating whole-body 3-D-measurement systems using fringe projection technique,” Opt. Eng. 39, 159–169 (2000).
199. J. Tian, Y. Ding, and X. Peng, “Self-calibration of a fringe projection system using epipolar constraint,” Opt. Laser Technol. 40, 538–544 (2008).
200. C. Resch, P. Keitler, C. Menk, and G. Klinker, “Semi-automatic calibration of a projector-camera system using arbitrary objects with known geometry,” in IEEE Virtual Reality (VR) (2015), pp. 271–272.
201. H. Kawasaki, R. Sagawa, Y. Yagi, R. Furukawa, N. Asada, and P. Sturm, “One-shot scanning method using an uncalibrated projector and camera system,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition—Workshops (2010), pp. 104–111.
202. B. Zhang and Y. Li, “Dynamic calibration of the relative pose and error analysis in a structured light system,” J. Opt. Soc. Am. A 25, 612–622 (2008).
203. D. D. Lichti, C. Kim, and S. Jamtsho, “An integrated bundle adjustment approach to range camera geometric self-calibration,” ISPRS J. Photogramm. Remote Sens. 65, 360–368 (2010).
204. N. Karpinsky and S. Zhang, “Holovideo: real-time 3D video encoding and decoding on GPU,” Opt. Laser Eng. 50, 280–286 (2012).
205. Z. Hou, X. Su, and Q. Zhang, “Virtual structured-light coding for three-dimensional shape data compression,” Opt. Laser Eng. 50, 844–849 (2012).
206. S. Zhang, “Three-dimensional range data compression using computer graphics rendering pipeline,” Appl. Opt. 51, 4058–4064 (2012).
207. T. Bell and S. Zhang, “Multi-wavelength depth encoding method for 3D range geometry compression,” Appl. Opt. 54, 10684–10691 (2015).
208. A. Maglo, G. Lavoué, F. Dupont, and C. Hudelot, “3D mesh compression: survey, comparisons, and emerging trends,” ACM Comput. Surv. 47, 1–41 (2015).
209. T. Bell, B. Vlahov, J. P. Allebach, and S. Zhang, “Three-dimensional range geometry compression via phase encoding,” Appl. Opt. 56, 9285–9292 (2017).