
FMCW Lidar: Scaling to the Chip-Level and Improving Phase-Noise-Limited Performance

By

Phillip Alan McGinnis Sandborn

A dissertation submitted in partial satisfaction of the

requirements for the degree of

Doctor of Philosophy

in

Engineering – Electrical Engineering and Computer Sciences


in the

Graduate Division

of the

University of California, Berkeley



Committee in charge:

Professor Ming C. Wu, Chair


Professor Bernhard E. Boser
Professor Kris J. Pister
Professor Liwei Lin

Fall 2017
ProQuest Number: 10687479

Published by ProQuest LLC (2020). Copyright of the Dissertation is held by the Author. All rights reserved.
© Copyright 2017
Phillip A.M. Sandborn
All rights reserved

Abstract

FMCW Lidar: Scaling to the Chip-Level and Improving Phase-Noise-Limited Performance

by

Phillip A.M. Sandborn

Doctor of Philosophy in Electrical Engineering

University of California, Berkeley

Professor Ming C. Wu, Chair

Lidar (light detection and ranging) technology has the potential to revolutionize the way
automated systems interact with their environments and their users. Most lidar systems in the
industry today rely on pulsed (or "time-of-flight") lidar, which has reached limits in terms of
depth resolution. Coherent lidar schemes, such as frequency-modulated continuous-wave
(FMCW) lidar, offer significant advantages in achieving high depth resolution, but are often too
complex, too expensive, and/or too bulky to be implemented in the consumer industry. FMCW
lidar and its close cousin, swept-source optical coherence tomography (SS-OCT), are often
targeted toward metrology applications or medical diagnostics, where systems can easily cost
upwards of $30,000.

In this dissertation, I present my work in chip-scale integration of optical and electronic
components for application in coherent lidar techniques. First, I will summarize the work to
integrate a typically bulky FMCW lidar control system onto an optoelectronic chip-stack. The
chip-stack consists of an SOI silicon-photonics chip and a standard CMOS chip. The chip was
used in an imaging system to generate 3D images with as little as 10 µm depth precision at
stand-off distances of 30 cm.

Second, I will summarize my work in implementing and analyzing a new post-processing
method for FMCW lidar signals, called "multi-k-clock resampling" (MK-resampling).
This involved Monte Carlo studies of laser phase noise under non-linear signal processing
schemes, so I will show stochastic simulations and experimental results to demonstrate the
advantages of the new resampling method. MK-resampling has the potential to
improve the acquisition rate, accuracy, SNR, and dynamic depth range of coherent imaging
systems.

To my inspiration and foundation: Mom and Dad


Table of contents
Abstract ......................................................................................................................................................... 1
Table of contents ........................................................................................................................................... ii
List of figures ............................................................................................................................................... iv
List of tables................................................................................................................................................. vi
Acknowledgments....................................................................................................................................... vii
1 Introduction ........................................................................................................................................... 1
1.1 3D Imaging as an Enabling Technology for Intelligent Machines .............................. 1
1.1.1 Current Applications of 3D Imaging..................................................................................... 1
1.1.2 Overview of 3D Imaging and Sensing Technology .............................................................. 2
1.2 Light-Based Direct Range Measurement Sensors (Lidar) ........................................... 2
1.2.1 Pulsed Time-of-Flight (TOF) ................................................................................................ 2

1.2.2 Amplitude Modulated Continuous Wave (AMCW) Lidar ................................................... 3
1.2.3 Frequency Modulated Continuous Wave (FMCW) Lidar .................................................... 4
1.2.4 Fiber-Optic Sensors and Optical Frequency Domain Reflectometry.................................... 5
1.2.5 Swept-Source Optical Coherence Tomography (SS-OCT) .................................................. 5
1.3 Integrated Photonics for Datacom and Sensors ........................................................... 6
1.3.1 Overview of Integrated Photonic Technologies .................................................................... 6
1.3.2 Notable Integrated Photonic Lidar Sensors........................................................................... 7
1.4 Structure of the dissertation ......................................................................................... 8
2 Practical FMCW Lidar Considerations ................................................................................................. 9

2.1 Principles and Fundamental Limits of Linear FMCW Lidar ....................................... 9


2.1.1 Tunable Lasers ...................................................................................................................... 9
2.1.2 Mach-Zehnder Interferometer and Balanced Photodetection ............................................. 10
2.1.3 Range and Velocity Measurement ...................................................................................... 13
2.2 CW Lidar Link Budget .............................................................................................. 15
2.2.1 Eye-Safety Considerations .................................................................................................. 16
2.2.2 One-Way Lidar Equation based on the Radar Equation ..................................................... 17
2.2.3 Detector Noise and Signal to Noise Ratio in FMCW Lidar................................................ 18
2.2.4 Solar Background Considerations ....................................................................................... 19
2.2.5 Laser Phase Noise in FMCW Lidar ........................................................................... 22
2.2.5.1. Numerical Verification Through Monte Carlo Method .................................................. 24
2.2.5.2. SNR in FMCW Lidar, Including Laser Phase Noise ...................................................... 25
2.2.6 Beam Steering Considerations ............................................................................................ 26
3 FMCW Lidar Utilizing Optoelectronic Phase Locked Loop (OPLL)................................................. 27

3.1 Feedback in Optoelectronic Systems ......................................................................... 27
3.2 Optoelectronic Phase-Locked Loop Principles and Analysis for Swept-Source Control ......... 27
3.2.1 The Integrator/Tunable Laser/MZI as a Voltage-Controlled-Oscillator ............................. 27
3.2.2 Phase Noise in the Optoelectronic Phase-Locked Loop ..................................................... 29
3.2.3 OPLL Loop Parameter Design ............................................................................................ 31
3.2.4 OPLL Laser Phase Noise Shaping ...................................................................................... 32
3.3 Implementation of Chip-Scale OPLL for Electronically-Tuned DBR Laser ............. 33
3.3.1 Silicon Photonics Design and Characterization .................................................................. 34
3.3.2 Known-Good-Die Screening............................................................................................... 37
3.3.3 Flip-Chip Packaging ........................................................................................................... 41
3.3.4 Evaluating TSV Functionality through OPLL System Testing .......................................... 42
3.3.5 OPLL Parameter Selection.................................................................................................. 45

3.3.6 Experimental Results: Open-Loop vs. Closed-Loop Operation.......................................... 45
3.3.7 Experimental Results: Imaging ........................................................................................... 47
4 Wide-Range Non-Linear FMCW Lidar .............................................................................................. 49
4.1 Principles of Non-Linear FMCW Lidar ..................................................................... 49
4.2 Resampling Experimental Design .............................................................................. 50
4.3 Resampling as a Method to Ensure Linearity in Signal-Processing .......................... 51
4.3.1 Resampling Analysis with Delay-Matched Clock and Target ............................................ 52
4.3.2 Multi-K-Clock (MK) Resampling Architecture Description .............................................. 54
4.3.3 Experimental Characterization of Standard K-Clock and MK Resampling Methods ........ 55
4.3.4 Resolving Range Ambiguity with MK Resampling ........................................................... 61

4.4 Phase Noise Reshaping in Resampling FMCW Systems .......................................... 63


4.4.1 Analytical Intuition ............................................................................................................. 64
4.4.2 Proposed Phase Noise Suppression for Resampling using Long Reference Length .......... 65
4.4.3 Analysis of Noise in Long-Reference Resampling .................................................... 67
4.4.4 Long Reference Resampling System Characterization and 3D Image Scans ............ 71
5 Summary, Outlook and Open Questions............................................................................................. 73
5.1 Summary .................................................................................................................... 73
5.2 Discussion and Future Work in OPLL Techniques and Chip-Scale Integration for FMCW Imaging ......... 73
5.3 Discussion and Future Work in Resampling Methods for FMCW Imaging.............. 73
References .................................................................................................................................. 76

List of figures
Fig. 1-1. Schematic for pulsed TOF lidar. .................................................................................................... 3
Fig. 1-2. Schematic for amplitude-modulated continuous-wave lidar. ........................................................ 4
Fig. 1-3. Schematic for frequency-modulated continuous-wave lidar. ........................................................ 4
Fig. 2-1. FMCW Lidar as an interferometric probe. ..................................................................................... 9
Fig. 2-2. Interferometric architectures. ...................................................................................................... 11
Fig. 2-3. Simple directional coupler............................................................................................................ 12
Fig. 2-4. MZI beat frequency intuition with a swept-source laser. ............................................................ 14
Fig. 2-5. Doppler shift processing with FMCW lidar. ............................................................................... 15
Fig. 2-6. Link budget considerations ......................................................................................................... 16
Fig. 2-7. Ocular maximum permissible exposure vs. exposure time for several lidar wavelengths .......... 16
Fig. 2-8. Solar radiation spectrum, following a typical black-body spectrum with several atmospheric
absorption lines. .................................................................................................................................. 19
Fig. 2-9. Solar-limited coherent detection limits. ...................................................................................... 21
Fig. 2-10. Solar-limited coherent detection limits for different optical bandwidths. ................................. 21
Fig. 2-11. Lorentzian lineshape for laser with 10 MHz linewidth. ............................................................ 23

Fig. 2-12. Photocurrent spectrum for 𝜏/𝜏𝑐 = 1 and 𝑇 = 125𝜇𝑠. .............................................................. 24
Fig. 2-13. Numerical verification of Lorentzian lineshape with Monte Carlo simulation ......................... 24
Fig. 2-14. Numerical verification of photocurrent spectrum for 𝜏/𝜏𝑐 = 1 and 𝑇 = 125𝜇𝑠...................... 25
Fig. 2-15. Maximum distance based on SNR including phase noise, shot noise, and target attenuation
from the one-way lidar equation. ........................................................................................................ 26
Fig. 3-1. Block diagram for phase-locked loop. ........................................................................................ 28
Fig. 3-2. VCO implemented with integrator, tunable laser, and Mach-Zehnder Interferometer (MZI). ... 30
Fig. 3-3. Block diagram PLL for examining the transfer function of laser phase noise. ........................... 31
Fig. 3-4. Cross-section of silicon photonics chip processed with through-silicon-vias (TSVs). ................ 34
Fig. 3-5. Layout of silicon photonics chip with fiber couplers, directional couplers, Mach-Zehnder
interferometers, germanium photodiodes, wirebonding pads, and through-silicon via (TSV) locations
noted.................................................................................................................................................... 35
Fig. 3-6. Coupling light into the chip via grating coupler array. ................................................................ 36

Fig. 3-7. Typical light- and dark-IV curves for the GePD on the silicon photonics chips. ........................ 36
Fig. 3-8. Typical MZI response for a wide-waveguide MZI on a silicon photonics chip.......................... 37
Fig. 3-9. Grating coupler wavelength responses for a selection of 37 chips in the screening process. ..... 38
Fig. 3-10. On-chip germanium photodiode response characteristics. ......................................................... 38
Fig. 3-11. Interferometer photodiode response with respect to wavelength for 37 chips. ......................... 38
Fig. 3-12. SEM image of cross-section of silicon photonics chip with TSVs. ........................................... 39
Fig. 3-13. Well-revealed TSV ..................................................................................................................... 39
Fig. 3-14. Non-revealed TSV...................................................................................................................... 40
Fig. 3-15. Non-revealed TSV after attempted bonding with CMOS and shearing. .................................... 40
Fig. 3-16. Revealed TSV with both a “moat” and “dishing” feature. ......................................................... 41
Fig. 3-17. Silicon photonic chip designs for flip-chip packaging tests. ...................................................... 42
Fig. 3-18. Edge-diced silicon photonic development chip bonded to CMOS after flip-chip bonding. ...... 42
Fig. 3-19. Chip-level OPLL on PCB “daughter board.” ............................................................................. 43
Fig. 3-20. Side-by-side chip probe measurement........................................................................................ 43
Fig. 3-21. Bonded stack configuration for verification of photodiode-TIA interface................................. 44
Fig. 3-22. Bonded stack configuration for verification of silicon photonics as an electrical interposer for
CMOS. ................................................................................................................................................ 44
Fig. 3-23. Edge-diced chip testing configuration. ....................................................................................... 45
Fig. 3-24: Block diagram of ranging experiments using integrated OPLL................................................. 45
Fig. 3-25: Photograph of free-space ranging setup. .................................................................................... 46

Fig. 3-26: Average beat frequency per ramp over successive ramps .......................................................... 47
Fig. 3-27: Range precision as a function of range and OPLL condition (open- or closed-loop). ............... 47
Fig. 3-28: Photograph of 1.1mm thick gear (left) and a 3D image of a gear (right) with 1.1mm thickness
placed at 40cm distance from the imager taken with chip-level OPLL. ............................................. 48
Fig. 3-29. FMCW 3D images of a US quarter for open-loop OPLL (left) and closed-loop OPLL (right).
............................................................................................................................................................ 48
Fig. 4-1. Linear vs Non-linear laser chirp for FMCW Lidar ...................................................................... 49
Fig. 4-2. Imaging system configuration with reference interferometer and two target configurations....... 50
Fig. 4-3. Block diagrams of resampling architectures ............................................................................... 52
Fig. 4-4 Numerical analysis of standard k-clock and multi-k-clock resampling algorithms ..................... 54
Fig. 4-5. Standard k-clock resampling processing ..................................................................................... 55
Fig. 4-6. Standard K-clock resampling characterization result .................................................................. 56
Fig. 4-7. Standard k-clock resampling imaging results.............................................................................. 57
Fig. 4-8. Experimental analysis of standard k-clock and multi-k-clock resampling algorithms................ 59
Fig. 4-9. Experimentally determined resolution of standard k-clock and MK-resampling methods as a
function of range ................................................................................................................................. 60
Fig. 4-10. MK-resampling 3D imaging results. .......................................................................................... 61
Fig. 4-11. Conjugate suppression through MK-resampling. ...................................................................... 63

Fig. 4-12. Monte Carlo verification of laser phase noise effects on FMCW beats for linear sweep .......... 65
Fig. 4-13. Monte Carlo modeling of linear and non-linear FMCW beats with three different reference
lengths ................................................................................................................................................. 66
Fig. 4-14. Accuracy and SNR vs. reference and target length .................................................................... 67
Fig. 4-15. Frequency-tracking error as a function of reference length and target length. ........................... 69
Fig. 4-16. Experiment for long-reference resampling................................................................................. 71
Fig. 4-17. Spot-size-limited imaging with 30-meter fiber reference........................................................... 72

List of tables
Table 2-1. Some Laser Sources for FMCW Lidar ...................................................................................... 10
Table 2-2. Assumed properties for solar-limited detection analysis. .......................................................... 20
Table 3-1 Transfer Functions for Type-II PLL ........................................................................................... 28
Table 4-1 Experimental Configurations for Wide-Range Non-Linear FMCW Lidar................................. 50


Acknowledgments

These things never happen in a void.

I need to thank countless people, including advisors, colleagues, friends, and family. First, I’m
grateful to Prof. Ming Wu, my advisor throughout my graduate school career. He always offers
grounded advice. In addition, I’m thankful to my dissertation committee, who also comprised my
Qualifying Committee: Profs. Bernhard Boser, Kris Pister, and Liwei Lin.

Throughout my graduate career, several colleagues deserve special shout-outs. Niels Quack, a
post-doc in the Wu group when I first entered the program, fostered a sense of progress and team
integrity in the EPHI project, and offered many hours of advice during our meetings at the I-
House Bar. Behnam Behroozpour is an incredible peer, colleague, and friend. Behnam and I
spent several years working together on FMCW Lidar, and we learned an incredible amount
from each other. I will cherish that relationship as we move on in our careers. Also, my thanks
go out to the members of the Wu Group, the Chang-Hasnain Group, and the Yablonovitch
Group, who made the 253m office a home of sorts for five and a half years.

The beginning of grad school at Berkeley was marked by the beginning of several friendships
that will surely last beyond graduation – Eliot Bessette, Ethan Schaler, and Thomas Rembert
have been there for so many good times, and I’ll never forget the years we spent helping each
other through the hardest parts of being graduate students, and laughing through the best parts of
our twenties.

Half-way through grad school, I met Jeanmarie. No one has ever made me happier to be where I
am, to be doing what I’m doing, with the people I do it with. Jeanmarie, you bring so much love
into my life. Thank you for being there for me through thick and thin – I love you.

Lastly, I would not be where I am today without the love and support of my parents, Peter and
Tammy Sandborn. Mom and Dad, so much of who I am today is a reflection of you, not only
because of how you raised me, but also because of the solace and encouragement I receive from
you from three-thousand miles away. I hope you are proud, and so I dedicate my dissertation to
you both.

1 Introduction
1.1 3D Imaging as an Enabling Technology for Intelligent Machines
As computers, robots, and systems in general become more useful, more integrated, and
more necessary as parts of our working, cultural, and social lives, we crave more natural and
flexible experiences for interfacing with these systems. This represents a paradigm shift from
designing so-called “user-input devices,” such as the oft-referred-to keyboard and mouse, to
designing “human-computer-interactions,” reflecting the shifting roles of computers in our lives.
This shift from “input” to “interaction” requires that computers and systems have the capability
to understand, interpret, and themselves interact with users as well as the environments which
they inhabit. An important aspect of this new paradigm of “interaction” is the ability to sense and
interpret the physical environment, whether that is the ability to recognize objects, gestures, or
other agents.
Three-dimensional imaging has emerged as a critical component of next-generation
computers. As a sensing modality, it is a natural way for robots to find paths through dynamic

human environments, and it is critical for mapping physically unknown spaces.
In this introduction, I will first discuss a range of current applications of 3D imaging, in the
hope of demonstrating the emerging ubiquity of the technology; second, I will briefly discuss
several sensing modalities that are not the focus of this dissertation. I will then introduce
several light-based direct range measurement sensors (lidar). Lastly, I will discuss the emergence
of integrated photonics and explain its merits as a platform for developing and producing
state-of-the-art 3D imaging sensors for the applications discussed.
1.1.1 Current Applications of 3D Imaging
3D imagers have applications that span myriad fields. In particular, market insight firms
have identified consumer-grade lidar for 3D imaging as a technology that will be in high-demand
for use in autonomous vehicles and driver-assist applications [1]. In addition to robotic

navigation, 3D imaging has been found to be an invaluable technology for both professional and
consumer applications. In particular, lidar for 3D imaging is evolving beyond its original use as
an aerial 3D mapping tool and into a critical technology for imaging at distances from short (1
meter) to long (100s of meters).
Advanced driver assistance systems (ADAS) and self-driving vehicles have the potential
to significantly reduce pedestrian-related vehicle accidents and fatalities [2], significantly
decrease the net cost of commercial truck accidents [3], and improve traffic congestion in large
urban areas [4]. ADAS-enabled vehicles and autonomous vehicles will likely be equipped with
numerous imaging systems inside and outside the car. For example, several different imaging
modalities, from stereo vision to scanning lidar, have been integrated into autonomous vehicles
for use in navigation, localization, and object identification [5]. Range-finding systems have
already been integrated into ADAS systems for adaptive cruise control (ACC), in which the
vehicle can sense the range and velocity of objects in its vicinity and adjust its own acceleration
in order to maintain safe following distances [6].
In medical applications, physicians and medical technicians often desire extremely high-
resolution, non-invasive procedures for detecting and diagnosing patient medical problems. For
example, ophthalmologists often diagnose eye-related issues by examining a cross-sectional or
three-dimensional image of the patient’s eye. This application requires short-range (<10cm) and

high range-precision (on the order of 1s to 10s of microns) in order to properly identify and
possibly treat eye-related issues [7–10]. In dentistry and orthodontics, three-dimensional models
of a patient’s teeth are often taken using a mold impression, but three-dimensional imaging tools
are becoming more important in securing “patient acceptance” of proper orthodontic
therapy [11].
1.1.2 Overview of 3D Imaging and Sensing Technology
Depth imaging sensors take on many forms, and are often classified by their modalities –
optical, ultrasonic, and radar are notable modalities for ranging. Optical modalities span a variety
of technologies, from stereo imaging, structured light, reflectance modeling, and other camera-
based imaging sensors, to direct measurement of distance using light, as is the case with time-of-
flight lidar sensors. Ultrasound and radar are classical modalities which use the known
wavespeed of a signal in a medium to calculate distance to various reflective targets. Stereo,
structured light, and reflectance modeling are methods which infer 3D surfaces from two-
dimensional projections, so they are sometimes referred to as indirect 3D imagers. In contrast,
lidar, ultrasound, and radar, which return a direct measurement for range, can be referred to as
direct 3D imagers. Myriad studies have examined the performance and limits of both indirect

and direct 3D imagers, so we will dedicate the next section solely to the study of lidar time-of-
flight sensors, which are most directly related to the topic of this dissertation.

1.2 Light-Based Direct Range Measurement Sensors (Lidar)


Lidar can take on several different modalities, which can be classified by their
dependence on the incoherence or coherence of the laser source used. Each of these
methods uses the optical path length to the target reflector to effectively modulate the envelope
intensity of the detected signal. Time-of-flight (pulsed) and amplitude-modulated continuous-
wave (AMCW) sensors detect range by measuring temporal properties of the received light
intensity. Frequency-modulated continuous-wave (FMCW) and optical coherence tomographic
(OCT) sensors map properties of the received optical field (amplitude and phase) into intensity,
and attempt to leverage knowledge of both the amplitude and phase in order to detect range.

1.2.1 Pulsed Time-of-Flight (TOF)


TOF lidar uses the known fact that light travels at a fixed speed through a medium with a
constant refractive index (3×10^8 m/s in air). Examples of pulsed time-of-flight (TOF) systems
can be found in [12,13]. The transmitted pulse must be reflected by the target object, and
collected by an aperture at the receiver. Range is measured by determining the difference in time
of arrival and the time of transmission of the pulse. Fig. 1-1 shows a simple schematic outlining
the operating principles of pulsed TOF lidar.
The pulse can be created by an incoherent LED source or a high-power mode-locked
laser, depending on system cost, optical output power, or power consumption constraints. The
depth resolution of such a system is limited by the timing resolution of the electronics; in
practice, this limits the depth resolution of TOF systems to single centimeters.
The maximum range of TOF systems is primarily limited by the link budget of the free-space
path in the system. The loss of the free-space path often directly scales with the distance of the
path. This means that the allowed transmitted power and receiver sensitivity together indicate a
maximum range beyond which the amplitude of a target reflector is too low to detect. With
typical receiver sensitivities in commercial TOF systems today, the maximum range is limited to

100-200 meters [13]. In higher-power systems for aerial mapping and other scientific endeavors,
the maximum range can be extended to several kilometers [14].
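The pulsed-TOF relationships above can be sketched numerically. This is a minimal illustration, not code from the dissertation; the function names are my own, and the speed of light is taken as the 3×10^8 m/s value quoted in the text.

```python
C = 3.0e8  # approximate speed of light in air (m/s), as given in the text

def tof_range(round_trip_time_s):
    """Range from the measured round-trip delay of a reflected pulse: R = c*dt/2."""
    return C * round_trip_time_s / 2.0

def depth_resolution(timing_resolution_s):
    """Depth resolution set by the timing resolution of the receiver electronics."""
    return C * timing_resolution_s / 2.0

# A 1 us round trip corresponds to a 150 m target; a 100 ps timing resolution
# corresponds to 1.5 cm depth resolution, consistent with the centimeter-scale
# limit noted above.
print(tof_range(1e-6))            # 150.0 (m)
print(depth_resolution(100e-12))  # 0.015 (m)
```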

Fig. 1-1. Schematic for pulsed TOF lidar. A laser source transmits an optical pulse, which is reflected by a target surface.
The difference between the transmit time and receive time encodes distance to the target.

1.2.2 Amplitude Modulated Continuous Wave (AMCW) Lidar


Amplitude-modulated continuous-wave (AMCW) lidar uses similar principles to TOF
lidar, in that a target delay is measured at the receiver. However, in the case of AMCW, an
intensity pattern is encoded on the transmitted optical power, such as a linear radio frequency
chirp. For AMCW, the free-space path encodes a phase shift on the RF chirp, which can be
detected accurately by measuring the intermediate frequency after mixing the received intensity
signal with a non-delayed electronic version of the chirp. Examples of AMCW lidar systems
have been studied in [15,16]. Fig. 1-2 shows a simple schematic outlining the operating
principles of AMCW lidar.
The depth resolution of this implementation is limited by the system's capability to
resolve the delay-induced phase shift. In the case of an RF chirp, the resolution is limited by the
total frequency excursion of the chirp, B, through ΔR = c/(2B). For example, a system with a
1 GHz excursion on the RF chirp has a resolution of 15 centimeters. Achieving single-centimeter
resolution would require an excursion on the order of 15 GHz, which quickly shows the limits of
electronic amplitude modulation.
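The transform-limited resolution figures above follow directly from ΔR = c/(2B); a minimal numerical sketch (excursion values illustrative):

```python
# Range resolution of a chirped system from its frequency excursion B:
#   delta_R = c / (2 * B)

C = 299_792_458.0  # speed of light in vacuum, m/s

def range_resolution(excursion_hz: float) -> float:
    return C / (2.0 * excursion_hz)

for B in (1e9, 15e9, 100e9):
    print(f"B = {B/1e9:6.0f} GHz -> resolution = {range_resolution(B)*100:.2f} cm")
```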
The maximum range of AMCW systems can be limited by several factors. As in
TOF systems, AMCW maximum range can be limited by the optical link budget. Less
intuitively, in an RF-chirp system the maximum range can also be limited by the
maximum frequency that can be detected at the receiver. Lengthening the period of the RF
chirp extends the maximum range, but at the cost of acquisition time.
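The tradeoff between maximum range and acquisition time can be made concrete with a short sketch. For a linear chirp of excursion B and period T, the intermediate frequency is f_IF = (B/T)(2R/c), so a receiver bandwidth f_max implies R_max = f_max·c·T/(2B). The chirp and receiver parameters below are illustrative assumptions:

```python
# AMCW maximum range limited by the highest intermediate frequency
# f_max the receiver can detect:
#   f_IF = (B / T) * (2R / c)  =>  R_max = f_max * c * T / (2 * B)
# Lengthening the chirp period T extends R_max at the cost of
# acquisition time.  All parameter values are assumptions.

C = 299_792_458.0  # m/s

def max_range(f_max_hz: float, excursion_hz: float, period_s: float) -> float:
    return f_max_hz * C * period_s / (2.0 * excursion_hz)

B = 1e9        # 1 GHz chirp excursion (assumed)
f_max = 100e6  # 100 MHz receiver bandwidth (assumed)
for T in (10e-6, 100e-6):
    print(f"T = {T*1e6:5.0f} us -> R_max = {max_range(f_max, B, T):.1f} m")
```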

Fig. 1-2. Schematic for amplitude-modulated continuous-wave lidar. A laser source transmits an intensity-modulated
optical wave, which is reflected by a target surface. The difference between the transmitted signal phase and the received
signal phase encodes distance to the target.

1.2.3 Frequency Modulated Continuous Wave (FMCW) Lidar


Frequency-modulated continuous-wave (FMCW) lidar is analytically comparable to
RF-chirped AMCW lidar, except that the chirped field is the optical field of a tunable laser.
Where chirped AM lidar uses the laser as a carrier for an RF signal, and
the RF signal is applied to the intensity of the light source, chirped FM lidar modulates the phase
of the light source (usually a single-mode laser) such that the optical frequency of the light
source is modulated directly. A free-space path encodes a phase shift on the optical chirp, and the
phase shift is detected by mixing the reflected chirp with a non-delayed version of the chirp. This
mixing occurs at the photodiode upon detection, so no special design beyond good detector
design is needed to achieve this mixing effect. A schematic for FMCW lidar is shown in Fig. 1-3.

Fig. 1-3. Schematic for frequency-modulated continuous-wave lidar. A laser source transmits a frequency-modulated
optical wave, which is reflected by a target surface. The beat frequency on the receiver photodiode encodes distance to the
target.

The descriptions of chirped-AMCW lidar and chirped-FMCW lidar are very similar, but
differ in several important ways. Firstly, the modality of the chirp is different in each case: an
AM chirp modulates the intensity of a light source, while an FM chirp modulates the phase of the
light source. Secondly, the mixing method is different in each case: delayed and non-delayed
AM chirps are electronic signals manifested as current or voltage, so they must be mixed with an
RF mixer, while FM chirps are electric-field signals that superimpose on a photodiode. The
photodiode detects the power of the electric field, not its amplitude, so it is well-modeled
as an ideal square-law detector, performing the mixing step upon conversion of the signal to the
electronic domain. This allows for simple detection of signals with minimal electronic signal
conditioning. Thirdly, the resolution in both cases is limited by the frequency excursion of the
intensity modulation (AMCW) or frequency modulation (FMCW), but AMCW intensity
modulation is limited by achievable electronic bandwidths, 10s of GHz for state-of-the-art
electronics, while FMCW is limited by the tuning ranges of laser diodes, which can easily be in
the range of 100s of GHz or even a few THz. This allows FMCW methods to achieve much
higher transform-limited resolution, and thus FMCW is useful for metrological and precision
manufacturing applications. Instantiations of FMCW and FMCW-like systems are described
further in the next few sections.
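The square-law mixing step can be illustrated numerically. Working with complex field envelopes (so the terahertz optical carrier itself never needs to be sampled), the detected power of the summed delayed and non-delayed chirps beats at f_b = γτ, where γ is the chirp rate; all parameter values in this sketch are illustrative:

```python
import numpy as np

# Square-law (photodiode) mixing of a delayed and non-delayed FM chirp.
# With complex field envelopes, the detected power |E(t) + E(t - tau)|^2
# contains a beat at f_b = gamma * tau.  Values are illustrative.

fs = 10e6      # sample rate, Hz
N = 10_000     # samples (1 ms of data)
gamma = 1e12   # chirp rate, Hz/s (1 THz/s, assumed)
tau = 1e-6     # round-trip delay, s (about 150 m in vacuum)

t = np.arange(N) / fs
E_lo = np.exp(1j * np.pi * gamma * t**2)              # non-delayed chirp
E_sig = np.exp(1j * np.pi * gamma * (t - tau) ** 2)   # delayed chirp

power = np.abs(E_lo + E_sig) ** 2   # ideal square-law detection
ac = power - power.mean()           # keep only the beat term

spectrum = np.abs(np.fft.rfft(ac))
freqs = np.fft.rfftfreq(N, 1 / fs)
f_beat = freqs[np.argmax(spectrum)]
print(f"measured beat: {f_beat/1e6:.3f} MHz (expected {gamma*tau/1e6:.3f} MHz)")
```

Note that only the slowly varying phase difference survives detection, which is why the beat sits at γτ even though the individual field envelopes sweep far beyond the sample rate.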

1.2.4 Fiber-Optic Sensors and Optical Frequency Domain Reflectometry
The terminology for FMCW lidar used heavily in this dissertation is adapted primarily
from the field of radar. However, other fields with applications other than free-space object
imaging/detection have also embraced similar principles under different monikers. In this
section, I highlight the use of coherent ranging in fiber-optic sensors in which very small
environmental or structural changes are of interest. The methodology in this field is often
referred to as “optical frequency-domain reflectometry” (OFDR), and many engineering
challenges in this field are relevant to the development of FMCW imaging lidar. Traditionally,
the problems being addressed in OFDR are referred to as “distributed sensing,” “fault detection,”
and “link health assessment.”
Distributed sensing of and fault detection in fibers can be achieved using time-resolved
reflectometry ("optical time-domain reflectometry" or OTDR), which suffers from similar
limitations to pulsed TOF lidar. For short optical links, such as those used in avionics or rack-to-rack
links in datacenters, OTDR is challenging to use for localizing faults. For this reason,
OFDR methods (using principles analogous to those described for FMCW herein) have been
shown to exhibit high-resolution fault detection, making them useful for short-distance measurements.
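In an OFDR trace, each reflection appears as a beat tone whose frequency maps to a position along the fiber. A minimal sketch of that mapping, with an assumed chirp rate and a typical group index for standard single-mode fiber:

```python
# OFDR fault localization: a reflection at position z along a fiber with
# group index n_g produces, for an optical chirp rate gamma (Hz/s),
#   f_b = 2 * n_g * z * gamma / c  =>  z = c * f_b / (2 * n_g * gamma)
# The chirp rate and group index below are illustrative assumptions.

C = 299_792_458.0  # m/s
N_G = 1.468        # approximate group index of standard single-mode fiber

def fault_position(beat_hz: float, chirp_rate_hz_per_s: float) -> float:
    return C * beat_hz / (2.0 * N_G * chirp_rate_hz_per_s)

gamma = 1e12  # 1 THz/s optical chirp rate (assumed)
for f_b in (10e3, 100e3):
    print(f"f_b = {f_b/1e3:6.1f} kHz -> reflection at {fault_position(f_b, gamma):.3f} m")
```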
In addition, OTDR and OFDR methods have been shown to be useful techniques for fiber-optic
environmental sensors. For example, optical fibers embedded in large structures such as
buildings or bridges are subject to thermal stresses, tensile/compressive stresses, and sometimes
torsional stresses. Calibrating and probing these optical fibers with reflectometry methods allows
for the detection of stresses with great precision, allowing efficient preventative maintenance to
take place.
1.2.5 Swept-Source Optical Coherence Tomography (SS-OCT)
In a field of research closely related to FMCW lidar, known as Fourier-domain optical
coherence tomography (FD-OCT), 3D reconstructions of biological tissues can be acquired with
extremely high precision (10s of microns) [7–9]. This technology is commonly applied to
ophthalmologic imaging for medical diagnostics [10]. Recent work in this field has extended the
maximum range of a subset of FD-OCT systems, known as swept-source OCT (SS-OCT), beyond
a typical working distance of 2 centimeters to a working distance of 1.5 meters [17,18].

Similar to FMCW lidar sensors, SS-OCT systems also suffer from issues stemming from
non-linearity of tunable lasers, therefore many solutions proposed in SS-OCT literature focus on
innovative and efficient post-processing architectures. So-called “k-clock resampling” systems
track the optical frequency of a swept source accurately and resample raw target signals with
uniformly-spaced samples in optical frequency [19]. Other implementations have demonstrated
the use of non-linear discrete Fourier transforms (NDFTs) in the reconstruction of target signals
exhibiting non-linearity [20]. SS-OCT systems have shown extremely high resolution, though
only recent studies presented in [17,18] have shown extension of SS-OCT methods beyond
working ranges of 2 cm. In addition, these systems use tunable lasers with exceptional optical
frequency excursion (80 nm in [17]; 100 nm in [18]). Such tunable lasers, with coherence lengths
on the order of 10s to 100s of meters, are very expensive, prohibiting the use of such 3D imaging
systems for ubiquitous and/or cost-sensitive applications.
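The k-clock resampling idea can be sketched in a few lines: raw samples taken uniformly in time during a nonlinear sweep are re-interpolated onto a grid that is uniform in optical frequency before the Fourier transform. The quadratic sweep model and all values below are assumptions for illustration:

```python
import numpy as np

# k-clock resampling sketch: a nonlinear laser sweep smears a target's
# beat tone; resampling the raw signal uniformly in optical frequency
# (rather than in time) restores a sharp peak.  Illustrative values only.

N = 4096
t = np.linspace(0.0, 1.0, N)            # normalized sweep time
nu = t + 0.3 * t**2                     # nonlinear sweep (assumed quadratic)
nu = (nu - nu[0]) / (nu[-1] - nu[0])    # normalize optical frequency to [0, 1]

cycles = 200.0                          # beat cycles across the sweep
raw = np.cos(2 * np.pi * cycles * nu)   # target signal, sampled uniformly in *time*

# Resample onto a grid uniform in optical frequency (the "k-clock" step).
nu_uniform = np.linspace(0.0, 1.0, N)
resampled = np.interp(nu_uniform, nu, raw)

def peak_sharpness(x):
    s = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    return s.max() / s.sum()            # crude peak-concentration metric

print(f"before resampling: {peak_sharpness(raw):.3f}")
print(f"after  resampling: {peak_sharpness(resampled):.3f}")
```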
1.3 Integrated Photonics for Datacom and Sensors
Integrated photonics has emerged as a key technology in the fields of telecommunication,
sensors, and quantum computing. Photonic carrier signals have several advantages over
electronic carrier signals, and perhaps the most significant advantage is the low loss of light
through optical waveguides. Resistance and capacitance of wires cause significant attenuation of
electrical signals, increasing the amount of energy needed to transmit signals over significant
distances. Even at the smallest scales, this attenuation is considered a limiting factor in the
energy consumption of chips. Significant research and industrial effort has focused on the
integration of optical transceivers and optical links, from long-haul applications (submarine
telecommunication) all the way to the intrachip scale, for links between regions on a single CPU.
In this dissertation, we will give an overview of current and future photonic integrated circuit
(PIC) technologies, some applications of PIC technology for sensor technology, and some
specific applications of PICs for lidar.
1.3.1 Overview of Integrated Photonic Technologies
Most optical sensors can be modeled as optical links, where the goal is to characterize
some property of the link itself. For example, in the case of a lidar measurement, we are
interested in the physical length of the link. It is therefore useful to study PIC technologies by
studying the backbone components of a single optical interconnect. Namely, we will give a broad
overview of the following components and their integration progress: the light emitter (an LED
or laser diode), the modulation scheme (direct or external), the waveguide, the fiber coupling
scheme (usually a grating coupler), and the photo-receiver. In addition, we will give a short
overview of typical platforms on which these components can be integrated, though we refer the
reader to other references for a more exhaustive study of the numerous technology platforms.
Light emitting diodes (LEDs) and lasers operating at near-infrared and infrared
wavelengths are often implemented in III-V materials, and integrating them on a silicon
platform can require complex processes.
Direct modulation theory can be applied to laser diodes integrated on silicon or other
substrates, but other research has also focused on the application of external intensity and phase
modulators. Once light has been coupled onto a PIC, signals can be modulated onto the
amplitude and/or phase of the carrier photons by thermal, mechanical, or electrical means.
Intensity modulators can be cascaded in various configurations to create more complex
structures, such as single-sideband (SSB) modulators (also known as I/Q modulators), which are
useful for modulating a frequency shift onto a single-wavelength optical carrier.

Optical waveguides have been developed in many processes, with the goal of creating
effective routing structures between transmitter and receivers. The design of waveguides often
calls for the use of silicon-on-insulator (SOI) technology, because it offers optical confinement in
the vertical dimension. Waveguides fabricated in SOI processes are often limited by their
propagation loss, usually due to roughness induced in the processing of the SOI wafer. In
addition, waveguides in SOI processes must be designed to be single-mode, exhibit minimal
dispersion, exhibit low bending loss, and exhibit minimal crossing loss (in the case of data center
optical switches or on-chip routing). For applications requiring frequency discriminators for
sensing, long delay lines may be required, pushing the limits of waveguides on a particular SOI
process.
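The strain that long delay lines place on a waveguide platform can be quantified with a quick loss budget; the group index and propagation-loss figures below are illustrative assumptions, not measurements of any particular process:

```python
# On-chip delay-line budget: a target time delay tau requires a physical
# length L = c * tau / n_g, which costs alpha (dB/cm) of propagation loss.
# The alpha and n_g values below are illustrative assumptions.

C = 299_792_458.0  # m/s

def delay_line_loss_db(tau_s: float, n_g: float, alpha_db_per_cm: float) -> float:
    length_cm = C * tau_s / n_g * 100.0
    return length_cm * alpha_db_per_cm

n_g = 4.0  # assumed group index for a silicon strip waveguide
for tau_ns, alpha in ((1.0, 2.0), (1.0, 0.5), (10.0, 0.5)):
    loss = delay_line_loss_db(tau_ns * 1e-9, n_g, alpha)
    print(f"tau = {tau_ns:4.1f} ns, alpha = {alpha} dB/cm -> {loss:6.1f} dB")
```

Even a 1 ns delay (7.5 cm of waveguide under these assumptions) can cost many dB, which is why low-loss waveguides are critical for sensing applications that need long frequency-discriminator delays.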
Optical signals are often coupled onto and off of PICs through some kind of fiber-coupling
scheme. The challenge in optical coupling is transforming the optical mode
supported by a fiber into a mode supported by the waveguides on chip. If there is significant
mode mismatch somewhere in this link between fiber mode and waveguide mode, the coupler
can have large insertion loss. In addition, couplers are usually designed to maximize an optical
bandwidth over which there is minimal insertion loss.
Integrated photoreceivers on PICs vary by platform and optical wavelength. At
telecom wavelengths, around 1550 nm, germanium is a promising detector material, absorbing
via its direct band-gap transition, with the potential of integrating with crystalline silicon.
1.3.2 Notable Integrated Photonic Lidar Sensors
Integrated photonics has enabled several new lidar technologies in the past half-decade.
These include various instantiations of integrated lidar sources, beam steering devices, and
integrated receivers. As is the case with integrated photonics in data communication applications,
integrated photonics has the potential to make lidar systems smaller, more scalable, and less
expensive.
Several beam-steering devices have been shown previously. In [21], the authors
demonstrate a 1D optical phased array (OPA) steerable in one dimension by an array of 32
thermo-optic phase modulators, and in the second dimension by tuning the wavelength of a
broadly tunable laser. In a similar demonstration, a 1D optical phased array with non-uniform
emitter spacing is presented in [22], which optimizes beam steering side-lobe suppression.
Successful demonstration of a fixed pattern 2D beam-steering OPA has been presented in [23].
The prospects of fully-integrated beam-steering devices such as those referenced here are very
promising in the development of “solid-state” lidar, eliminating the need for sensors with
mechanical parts and thus improving sensor reliability and lifetime. The promise of solid-state
beam scanning has prompted several commercial ventures in beam-steering development, most
notably the efforts announced by Quanergy [24] and Velodyne [25] in the past few years.
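For a uniformly spaced 1D array like those referenced above, a linear phase gradient Δφ between adjacent emitters of pitch d steers the beam to sin θ = λΔφ/(2πd). A quick sketch, with assumed pitch and wavelength values:

```python
import math

# 1D optical phased array steering: a phase step dphi between adjacent
# emitters of pitch d steers the beam to sin(theta) = lam * dphi / (2*pi*d).
# The pitch and wavelength below are illustrative assumptions.

def steering_angle_deg(dphi_rad: float, pitch_m: float, lam_m: float) -> float:
    return math.degrees(math.asin(lam_m * dphi_rad / (2.0 * math.pi * pitch_m)))

lam = 1.55e-6  # wavelength, m (assumed)
d = 2.0e-6     # emitter pitch, m (assumed)
for dphi in (0.0, math.pi / 4, math.pi / 2):
    print(f"dphi = {dphi:5.3f} rad -> theta = {steering_angle_deg(dphi, d, lam):6.2f} deg")
```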
In addition to integrated beam steering devices, several receivers have been successfully
demonstrated for both FMCW and optical coherence tomography applications. In FMCW
applications, it may become desirable to create sensors that multiplex the receiver design, in such
a way that “flash” lidar can be achieved for high-resolution imaging. This has been demonstrated
by the “nanophotonic coherent imager” [26], in which an array of grating couplers is used in
conjunction with a micro-lens array to distribute received light to a camera-like array of coherent
receivers. In addition, integrated in-phase/quadrature (I/Q) receivers have gained attention in
extremely high resolution applications, due to their ability to resolve conjugate ambiguity in
range measurements [27].
