Remote Sens. 2022, 14, 5416
Article
Urban Traffic Imaging Using Millimeter-Wave Radar
Bo Yang 1 , Hua Zhang 1, *, Yurong Chen 1 , Yongjun Zhou 2 and Yu Peng 3
1 School of Aerospace Science and Technology, Xidian University, Xi’an 710071, China
2 The Science and Technology on Near-Surface Detection Laboratory, Wuxi 214035, China
3 The Beijing Institute of Control Engineering, Beijing 100190, China
* Correspondence: zhanghua@mail.xidian.edu.cn
Abstract: Imaging technology enhances radar environment awareness. Imaging radar can provide
richer target information for traffic management systems than conventional traffic detection radar.
However, there is still a lack of research on millimeter-wave radar imaging technology for urban
traffic surveillance. To solve the above problem, we propose an improved three-dimensional FFT
imaging algorithm architecture for radar roadside imaging in urban traffic scenarios, enabling the
concurrent imaging of dynamic and static targets. Firstly, by analyzing the target characteristics
and background noise in urban traffic scenes, the Monte-Carlo-based constant false alarm detection
algorithm (MC-CFAR) and the improved MC-CFAR algorithm are proposed, respectively, for moving vehicle and static environmental target detection. Then, for the velocity ambiguity solution
problem with multiple targets and large velocity ambiguity cycles, an improved Hypothetical Phase
Compensation algorithm (HPC-SNR) is proposed and implemented. Further, the Density-Based
Spatial Clustering of Applications with Noise (DBSCAN) algorithm is used to remove outliers to
obtain a clean radar point cloud image. Finally, traffic targets within the 50 m range are presented as
two-dimensional (2D) point cloud images. In addition, we also try to estimate the vehicle type by
the target point cloud size, and its accuracy reaches more than 80% under sparse-vehicle conditions. The
proposed method is verified by actual traffic scenario data collected by a millimeter-wave radar system installed on the roadside. The work can support further intelligent transportation management and extend radar imaging applications.

Keywords: urban traffic imaging; radar roadside imaging; imaging techniques; millimeter-wave radar

Citation: Yang, B.; Zhang, H.; Chen, Y.; Zhou, Y.; Peng, Y. Urban Traffic Imaging Using Millimeter-Wave Radar. Remote Sens. 2022, 14, 5416. https://doi.org/10.3390/rs14215416
richer traffic information such as vehicle profiles, vehicle travel status (moving or stopped),
and the surrounding environment of the road, which can better reflect the actual situation of
the traffic scenes. However, simply modifying the radar transmission method on common
traffic detection radar systems is not practical.
Plaque-like target detection and extraction: Radar Constant False Alarm Rate (CFAR)
detection is an important technique to separate target and background by adaptively setting
detection thresholds by evaluating the current clutter environment [17,18]. Currently, the
Cell Average CFAR (CA-CFAR) [19], the Ordered Statistical CFAR (OS-CFAR), and the fusion
CFAR algorithm combining CA and OS (namely, OSCA-CFAR) [20], etc., are widely used in
traffic radars to detect moving vehicles. They detect all targets in the radar’s field of view from
the range-Doppler power spectrum matrix (RDM) by two-dimensional reference window
sliding and in-window noise power estimation. However, the resolution of imaging radars
in terms of range, velocity, and angle is much higher (up to 10 times or more) than that of
detection radar, which turns the target area from an “ideal point” to a “plaque” in the RDM. In
this case, the classic CFAR algorithm based on window sliding detection will fail because the
target samples that fill the reference window lead to incorrect estimation of the background
noise; moreover, target misses and target segmentation will occur.
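For reference, the window-sliding detection described above can be sketched as a minimal 2D cell-averaging CFAR over an RDM; all parameter values here are illustrative assumptions, not those of any radar discussed in this paper:

```python
import numpy as np

def ca_cfar_2d(rdm, guard=2, ref=4, alpha=8.0):
    """Minimal 2D cell-averaging CFAR on a range-Doppler magnitude map.

    For each cell under test, the noise power is estimated as the mean of
    the reference ring (reference window minus guard cells); the cell is
    declared a detection if it exceeds alpha times that estimate.
    """
    n_r, n_d = rdm.shape
    half = guard + ref
    hits = np.zeros_like(rdm, dtype=bool)
    for i in range(half, n_r - half):
        for j in range(half, n_d - half):
            window = rdm[i - half:i + half + 1, j - half:j + half + 1]
            inner = rdm[i - guard:i + guard + 1, j - guard:j + guard + 1]
            noise = (window.sum() - inner.sum()) / (window.size - inner.size)
            hits[i, j] = rdm[i, j] > alpha * noise
    return hits
```

An "ideal point" target is detected cleanly, but a "plaque" wider than the guard region leaks into the reference ring, inflating the noise estimate and splitting or suppressing the detection, which is exactly the failure mode described above.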
Detection from the RDM is also unsuitable for static targets, as
stationary targets with larger area characteristics are compressed in the zero-Doppler
region (only one column or row of the RDM area), resulting in less target information and
large fluctuations in background noise energy.
Large speed ambiguity cycle compensation: The MIMO technology brings high an-
gular resolution to radar systems while significantly reducing the maximum unambiguous
speed of the radar. Without changing the distance resolution, the maximum unambiguous
speed of the radar decreases further with the expanded detection range. In practice, the
speed of a vehicle in urban traffic can be 5–8 times the maximum unambiguous speed
of the imaging radar. In MIMO radar systems, incorrect velocity estimates can lead to
incorrect target angle estimates due to the coupling between the target velocity and the
angle. Therefore, the velocity ambiguity solution is necessary to ensure the accuracy of
imaging, especially in the case of a large velocity ambiguity cycle.
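The ambiguity itself is a simple folding relation: a true radial velocity outside the unambiguous interval wraps back into it, and each ambiguity cycle contributes one candidate. The sketch below uses assumed example numbers, not the parameters of the radar in this paper:

```python
def alias(v, v_max):
    """Fold a true radial velocity into the unambiguous interval [-v_max, v_max)."""
    return ((v + v_max) % (2 * v_max)) - v_max

def candidate_velocities(v_obs, v_max, cycles):
    """All true velocities consistent with an observed (aliased) velocity,
    one per ambiguity cycle n: v = v_obs + 2*n*v_max."""
    return [v_obs + 2 * n * v_max for n in range(-cycles, cycles + 1)]
```

For example, with v_max = 5 m/s a true speed of 7 m/s is observed as -3 m/s, and a disambiguation algorithm must pick the correct candidate from the list returned above.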
The Chinese Remainder Theorem (CRT) is a typical multi-frame velocity disambigua-
tion technique [21,22] that recovers the true velocity of the target by transmitting multiple
pulse repetition frequencies (MPRF) with different configuration parameters. However, the
performance of the CRT algorithm decreases as the ambiguity cycle increases. A hybrid
method combining the CRT with the Density-Based Spatial Clustering of Applications with Noise
algorithm (CRT-DBSCAN) [23] has been proposed for more stable handling of the speed
disambiguation problem. However, target matching and time-delay problems are unavoidable. To avoid
the target-matching problem caused by MPRF and improve processing efficiency, single-frame
speed disambiguation algorithms have been studied. A speed
expansion algorithm based on the Doppler phase shift assumption is proposed [24,25], but
the maximum solvable speed is only twice the maximum unambiguous speed. In [26,27],
Hypothetical Phase Compensation (HPC) is proposed, which can theoretically handle
ambiguity cycles up to a maximum of M times, where M is the number of transmitting
antennas. However, the algorithm may fail when multiple target points exist in the same
range-Doppler cell due to the instability of energy peak detection.
Multi-dimensional Perception of Vehicle Information: The elevation-dimensional
antenna array design enables the radar to estimate the height of the objects [28], which
makes it possible for the radar to acquire target size or shape information. However, the
multi-dimensional antenna array increases the complexity and cost of the design. In general,
we would like to obtain more information about the target with less economic and design
costs in civilian applications.
To avoid complex antenna array designs, multi-platform radars are used to enhance
elevation angle resolution to estimate the height of the target [29]. In [30,31], a method for
estimating the height of objects using FMCW automotive radar is presented, which exploits
the frequency shift caused by the Doppler effect when approaching a stationary object
to estimate the heights of the target. Although the algorithm does not require multiple
vertical antennas to find the height, it needs to provide a well-defined travel speed for the
radar itself.
As aforementioned, many problems must be solved to apply radar imaging technology
in the traffic field. In [32,33], moving vehicles are imaged using millimeter-wave radar, but
static targets in the road are ignored and the imaging range is limited (<15 m). In fact, so far,
there is little literature on urban traffic surveillance using radar imaging technology, with
even less study on the concurrence of dynamic and static imaging. To make up for the lack
of research and promote the application of radar imaging, we present a preliminary study
on roadside imaging using millimeter-wave radar in traffic surveillance. The contributions
of this paper are summarized as follows:
• An improved 3D-FFT imaging algorithm architecture is presented and discussed
for urban traffic imaging using the millimeter-wave radar system. It enables the
concurrence of dynamic and static target imaging within 50 m (speed up to 70 km/h)
in the form of a 2D point cloud.
• In the proposed imaging algorithm framework, the Monte-Carlo-based CFAR detection
algorithm (MC-CFAR) and an improved MC-CFAR are proposed to detect moving
vehicles and static traffic objects, respectively. The MC-CFAR algorithm
estimates the background noise of the moving target by random sampling of the RDM,
which avoids reference window design and sliding. Compared with the traditional
CFAR algorithm, the MC-CFAR algorithm has higher detection efficiency and provides
a more complete target output, which is more suitable for “plaque-like” target detec-
tion. The improved MC-CFAR uses the range–angle power spectrum matrix (RAM) as the detection object. While maintaining
the advantages of the MC-CFAR algorithm, it achieves static target extraction with
non-uniform background noise and a larger “plaque-like” area by dividing the RAM
area with a noise power drop gradient.
• In the proposed imaging algorithm framework, an improved HPC algorithm is pro-
posed for velocity ambiguity solution with a large ambiguity cycle. We changed the
evaluation criterion for speed estimation: the hypothesis with the strongest SNR in the
angular power spectrum, rather than the one with the highest peak value, is chosen to obtain the
real speed. Compared with the original HPC algorithm (which we call HPC-Peak),
the effectiveness of the velocity disambiguation is guaranteed when there are multiple
targets in the same range-Doppler cell.
• The DBSCAN algorithm is used to eliminate the isolated noise generated by the detec-
tion of false alarms to obtain a clean point cloud image. In addition, we performed a
preliminary estimation experiment for the vehicle type based on the target point cloud
image, which is not possible with traffic detection radar. Although the method is still
crude in the type identification of the vehicle, it avoids errors caused by target distance,
plane mapping relationship, and occlusion, etc. To a certain extent, it is able to provide
accurate dimensions of ordinary small cars and buses in sparse vehicle scenarios.
The structure of this paper is organized as follows: Section 2 describes the radar
signal model, the scenario model, the imaging millimeter-wave radar system, and the scene
characteristics analysis. Then, in Section 3, we give the principle and model of the road
imaging processing architecture and imaging algorithm. Numerical simulations of the
proposed key algorithms are given in Section 4. In Section 5, we analyze the performance
of the imaging algorithm through real experiments. Finally, Section 6 summarizes the
conclusions of this paper.
s_Tx(t) = A_T exp(j2π(f_o t + (1/2)k t²))  (1)

where A_T refers to the transmit power of the Chirp signal, f_o the starting frequency, k = B/T_c the slope, B the bandwidth, and T_c the duration.
The transmitted signal is received by the radar receiving antenna (RX) after being
reflected by the target, and the echo signal can be expressed as
s_Rx(t) = αA_T exp(j2π(f_o(t − τ) + (1/2)k(t − τ)²))  (2)
where α is the return loss, τ = 2R/c is the transmission delay, R represents the distance
of the target relative to the radar, and c is the speed of light. The beat frequency signal
is obtained when the echo signal is mixed with the transmitted signal, which can be
expressed as
y(t) = A_R exp(j2π((2RB/(cT_c))t + (2R/c)(f_o − Bτ/T_c))) ≈ A_R exp(j2π(f_b t + (2R/c)f_o))  (3)
where A R represents received power and f b is the frequency of the beat frequency signal.
Then, we can obtain the distance of the target relative to the radar by f b :
R = c f_b T_c / (2B)  (4)
When the radial velocity of the target relative to the radar is v, the beat signal expression of Formula (3) is transformed into Equation (5), where k = 1, 2, 3, ⋯, K indexes the Chirp cycles. Assuming that the target moves at
a uniform speed, the phase difference ∆φ between the beat signals caused by the target
speed can be expressed as
∆φ = 4π f_o ∆R / c = 4π v T_c / λ  (6)
where ∆R represents the displacement of the object in time Tc and λ represents the wave-
length. The phase change can be equivalent to the integration of a fixed frequency over
Tc time:
∆φ = 2π f_d T_c  (7)
where f d represents a fixed frequency due to velocity v, usually called Doppler. So, combin-
ing formulas (6) and (7), we can obtain the velocity of the target relative to the radar:
v = λ f_d / 2  (8)
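As a quick numerical check of Equations (4) and (8), the sketch below recovers range from the beat frequency and velocity from the Doppler frequency; the chirp parameters are assumed example values, not this paper's radar configuration:

```python
C = 3e8          # speed of light, m/s
B = 300e6        # assumed sweep bandwidth, Hz
TC = 50e-6       # assumed chirp duration, s
FO = 77e9        # assumed start frequency, Hz
LAM = C / FO     # wavelength, m

def beat_to_range(fb):
    # Equation (4): R = c * f_b * T_c / (2 * B)
    return C * fb * TC / (2 * B)

def doppler_to_velocity(fd):
    # Equation (8): v = lambda * f_d / 2
    return LAM * fd / 2
```

A target at R = 30 m produces a beat frequency f_b = 2RB/(cT_c) = 1.2 MHz under these parameters, and inverting Equation (4) recovers the 30 m exactly.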
In a radar system, at least two or more receiving antennas are required to estimate
the target azimuth because there is a target angle-dependent phase difference between the
antenna channels, which is introduced due to the different antenna positions. As shown in
Figure 1, suppose the receiving antenna is uniformly distributed, the distance between the
antennas is d, and the angle between the target and the radar is θ; then, the beat frequency
signals received by different antenna channels can be expressed as
y(n, k, m) = y(n, k) exp(j(2π f_o/c) d(m − 1) sin θ)  (9)
where m = 1, 2, 3, · · · , M, M represents the number of receiving antennas (virtual antennas).
The phase difference (∆ω) between adjacent receiving channels can be expressed as
∆ω = (2π/λ) d sin θ  (10)
θ = arcsin(∆ωλ / (2πd))  (11)
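Equations (10) and (11) form a simple round trip, sketched below with an assumed half-wavelength antenna spacing (not necessarily the spacing of the radar described later):

```python
import numpy as np

def angle_from_phase(dw, d, lam):
    # Equation (11): theta = arcsin(dw * lam / (2*pi*d)), returned in degrees
    return np.degrees(np.arcsin(dw * lam / (2 * np.pi * d)))

def phase_from_angle(theta_deg, d, lam):
    # Equation (10): dw = (2*pi/lam) * d * sin(theta)
    return (2 * np.pi / lam) * d * np.sin(np.radians(theta_deg))
```

With d = λ/2, the measurable phase range of ±π maps onto the full ±90° field of view, which is why half-wavelength spacing is the common choice.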
Figure 2. A typical radar imaging process flow based on the 3D-FFT method.
Figure 3. (a) Traffic scene model. (b) Radar deployment and test scenarios.
Without loss of generality, we use the radar RF front-end available on the market and
match it with the self-designed IF processing board (as shown in Appendix A.1) to form the
millimeter-wave radar system required for the experiment, as shown in Figure 4. The real
antenna array of the radar system consists of 12 TX antennas and 16 RX antennas, as shown
in Figure 5a. Via MIMO technology, 86 virtual antenna arrays (as shown in Figure 5b) in
azimuth dimension can be obtained to achieve an azimuth resolution of Aresolution = 1.4◦ .
The elevation resolution (20°) provided by the elevation antenna array is coarse and can be ignored. The imaging
radar system parameters are shown in Table 1.
Figure 5. (a) Radar real antenna array. (b) Virtual antenna array.
We conduct statistical analysis on the data in RDM to obtain the background noise
distribution in the area where the moving target is located (ignoring zero-Doppler data
and data with a distance of fewer than 10 m). As shown in Figure 7, the background noise
distribution power fluctuates smoothly and the power distribution is relatively uniform.
Both the variance of the velocity dimension and the variance of the distance dimension are
less than 0.03, as shown in Figure 7a,b. The non-zero-Doppler cells in RDM are sampled and
processed by MATLAB Distribution Fitter Toolbox; then, the distribution of data (purple)
and the fitting curve of Normal distribution (red) are shown in Figure 7c. Figure 7d shows
the matching degree between data probability distribution and Normal distribution, i.e., the
closer the data are to a curve line, the more consistent they are with a Normal distribution.
Figure 7. Background noise amplitude dispersion analysis in non-zero-Doppler region. (a) Noise
amplitude variance in range direction. (b) Noise amplitude variance in Doppler direction. (c) Noise
distribution curve. (d) Matching degree of Normal distribution.
However, in RDM, the reflected energy in the area where the static target is located
fluctuates seriously and irregularly (as shown in Figure 8), which is not conducive to
weak signal target detection. Besides, the stationary target information only occupies
the zero-Doppler column, which leads to a severe lack of static target information. It is
entirely impossible to detect stationary targets from the RDM based on the strength of the
reflected energy.
Table 2 also summarizes the differences between imaging radar and traffic detection
radar for traffic applications, including detection objects, target characteristics, and outputs.
In fact, the typical 3D-FFT signal processing methods used in detection radar systems are
not entirely suitable for imaging radar.
3.1. The MC-CFAR Algorithm and Improved MC-CFAR Algorithm for Target Detection
3.1.1. The MC-CFAR Algorithm for Moving Target Detection
In the previous work [34], we proposed a new CFAR algorithm (the MC-CFAR) for
detecting vehicles in urban traffic environments using traffic detection radar. The MC-CFAR
algorithm flow is shown in Figure 10; the process is as follows:
• Step 1: Use Monte Carlo experiments to obtain configuration parameters (sampling
points M and the threshold factor α). When the detection environment and platform
remain unchanged, this step only needs to be performed once.
• Step 2: Randomly draw M sample points (X) in the RDM non-zero-Doppler region
and sort them in ascending order of power:

X_1 ≤ X_2 ≤ ⋯ ≤ X_q ≤ ⋯ ≤ X_{M−k} ≤ ⋯ ≤ X_M ,  (12)
• Step 3: k maximum points and q minimum points are removed, and it is considered
that the remaining M − k − q sample points only contain background noise power
points.
• Step 4: Average the remaining M − k − q sample points to obtain the estimated value
(µ) of the current average background noise power in the RDM:

µ = (1/(M − k − q)) Σ_{i=q+1}^{M−k} X_i ,  (13)
• Step 5: Each detection cell in the RDM is sequentially compared with the threshold
value T = α · µ. If X > T, the target exists; otherwise, the target does not exist.
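The five steps above can be sketched as follows; the sample count and the threshold factor are illustrative placeholders for the Monte-Carlo-tuned values of Step 1:

```python
import numpy as np

def mc_cfar(rdm, m=512, k=32, q=32, alpha=8.0, rng=None):
    """Sketch of MC-CFAR: random sampling replaces reference-window sliding."""
    rng = rng or np.random.default_rng()
    # Step 2: randomly draw m sample points from the RDM and sort them
    samples = np.sort(rng.choice(rdm.ravel(), size=m, replace=False))
    # Step 3: drop the k largest and q smallest samples, assumed to leave
    # only background-noise power points
    trimmed = samples[q:m - k]
    # Step 4: estimate the mean background noise power (Equation 13)
    mu = trimmed.mean()
    # Step 5: compare every cell against the threshold T = alpha * mu
    return rdm > alpha * mu
```

Because the noise estimate comes from a global random sample rather than a local window, an extended "plaque" target cannot contaminate its own threshold, which is the property the text emphasises.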
Figure 11. Static target distribution in the RDM and RAM. (a) RDM in a two-dimensional view.
(b) RAM in a two-dimensional view.
To better extract static targets in traffic scenes, we convert the zero-Doppler data in the
RDM into the range–angle power spectrum matrix (RAM) by angular dimensional FFT. In
RAM, static targets and lane backgrounds are separated in different distance angle cells
(as shown in Figure 11b), just as moving targets are distributed in RDM. However, the
MC-CFAR algorithm proposed above is unsuitable for static target detection because the
noise power has a non-uniform distribution in RAM, which changes with distance (the
discrete points in Figure 12a). Through statistics and fitting, we can obtain a fitted curve
of background noise power versus distance (the blue curve in Figure 12a), whose overall
trend is similar to 1/R².
Figure 12. RAM noise analysis and interval division. (a) Relationship between noise amplitude and
distance in RAM. (b) RAM interval division for MC-CFAR detection.
Although the noise amplitude in the RAM is no longer uniformly distributed, the trend
of noise amplitude changes moderately, especially with increasing distance. Therefore,
we divide the RAM into multiple sub-RAM spaces along the distance dimension based
on the scale of the 3 dB drop of the fitted curve (as shown in Figure 12b), considering
that the background noise amplitude within each sub-space fluctuates uniformly. The MC-
CFAR algorithm will be executed separately in each RAM subspace to achieve static target
detection. We call this method the improved MC-CFAR algorithm, and the algorithm flow
is as follows:
• Step 1: Extract the zero-Doppler column of each RDM from different receiving chan-
nels and implement the angle dimension FFT on each range cell to obtain the RAM.
• Step 2: Obtain the interval parameter for dividing RAM. Firstly, the background
noise samples in RAM are extracted, and the change curve between noise power and
distance is fitted. Then, we obtain the interval parameters based on the scale of the
3 dB drop of the fitted curve. When the detection environment and platform remain
unchanged, this step only needs to be performed once.
• Step 3: Divide the RAM to form multiple sub-RAM spaces along the distance dimen-
sion based on interval parameters.
• Step 4: Implement the MC-CFAR algorithm for each sub-RAM space separately to
search and extract all static targets.
Although the time complexity of the improved MC-CFAR algorithm is higher than the
original MC-CFAR, it retains the good detection performance of the MC-CFAR algorithm.
In addition, through the division of sub-spaces, the MC-CFAR algorithm can be applied to
detect stationary objects in traffic scenes.
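The subdivision idea can be sketched as below. The 3 dB rule is applied to a fitted noise curve expressed in dB, and a simple trimmed-mean threshold stands in for the full detector inside each sub-RAM; all parameter values are assumptions for illustration:

```python
import numpy as np

def split_by_3db(noise_db):
    """Steps 2-3: cut the range axis each time the fitted noise curve
    drops 3 dB below the level at the start of the current interval."""
    edges, ref = [0], noise_db[0]
    for i, p in enumerate(noise_db):
        if ref - p >= 3.0:
            edges.append(i)
            ref = p
    edges.append(len(noise_db))
    return list(zip(edges[:-1], edges[1:]))

def improved_mc_cfar(ram, noise_db, alpha=8.0, rng=None):
    """Step 4: run a sampling-based CFAR separately in each sub-RAM."""
    rng = rng or np.random.default_rng()
    hits = np.zeros_like(ram, dtype=bool)
    for lo, hi in split_by_3db(noise_db):
        block = ram[lo:hi]
        s = np.sort(rng.choice(block.ravel(),
                               size=min(256, block.size), replace=False))
        mu = s[len(s) // 16:-len(s) // 16].mean()   # trimmed-mean noise estimate
        hits[lo:hi] = block > alpha * mu
    return hits
```

Within each sub-RAM the noise is approximately uniform, so the per-block threshold adapts to the distance-dependent noise floor instead of being dominated by the near-range cells.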
shown in Figure 13b). In addition, as the angle of the target with respect to the radar center
is larger, the error between the value of the estimated angle and the actual angle is larger.
In this case, the imaging radar cannot obtain the correct image of the moving target.
Figure 13. Velocity and angle estimation errors caused by velocity ambiguity. (a) Velocity estimation
error. (b) Angle estimation error.
where S_i is the echo signal data from the i-th transmitting antenna.
• Step 3: Perform FFT operation on the calibrated radar data to obtain the angular power
spectrum under each hypothetical condition (Hk ).
• Step 4: In the HPC-Peak algorithm, the hypothesis with the maximum peak in the
FFT power spectrum is taken as accurate, and the velocity and angle corresponding
to this hypothesis are taken as the true velocity and angle of the target. Unlike
the HPC-Peak algorithm, HPC-SNR will extract the maximum value of SNR in each
hypothesis and consider the hypothesis with the largest SNR value as true. Then, the
speed and the angle corresponding to this hypothesis will be the actual speed and
angle of the target.
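A toy version of the hypothesis test in Steps 3-4 is sketched below for a TDM-MIMO virtual array. The per-TX compensation ramp, the array geometry, and the peak-to-median SNR measure are simplified assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def hpc_snr(channels, n_tx, n_rx):
    """For each velocity-ambiguity hypothesis h, undo the hypothesised
    Doppler phase ramp across the TX blocks, take the angle FFT, and keep
    the hypothesis whose spectrum has the highest peak-to-median ratio
    (an SNR proxy) rather than the highest raw peak (HPC-Peak)."""
    best_h, best_snr = 0, -np.inf
    blocks = channels.reshape(n_tx, n_rx)
    for h in range(n_tx):
        # Step 3: hypothetical phase compensation, one ramp step per TX slot
        ramp = np.exp(-2j * np.pi * h * np.arange(n_tx) / n_tx)
        compensated = blocks * ramp[:, None]
        spectrum = np.abs(np.fft.fft(compensated.ravel(), 256))
        # Step 4: select by SNR of the angular power spectrum, not peak value
        snr = spectrum.max() / np.median(spectrum)
        if snr > best_snr:
            best_h, best_snr = h, snr
    return best_h
```

Only the correct hypothesis removes the block-wise phase ramp completely, so its angular spectrum collapses into a clean peak over a low floor; wrong hypotheses smear energy into spurious lobes, which lowers the peak-to-median ratio even when a raw peak remains high.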
circles in Figure 14b), which will lead to errors in the estimation of the target size. In
addition, factors such as occlusion between targets, the discrete point cloud distribution
of large vehicles, and differences in the strength of reflected energy from different parts
will affect the estimation of the length and width of the target.
Figure 14. Vehicle 2D image mapping. (a) Radar detection geometry model. (b) Vehicle 2D imaging.
The transformation of the target from “ideal point” to “point cloud block” is the most
prominent feature of radar imaging. The size and type of the target directly affect the area
of the point cloud image of the target, i.e., the larger the size of the target, the larger the
point cloud area in the 2D image. Although the shape of the surface of the target changes,
it has less effect on the area. Thus, we propose a method to perceive the target information,
i.e., to first classify the target based on the result of the point cloud imaging and then assign
a value to the target size. The process is as follows:
• Step 1: By calculating the area of different targets in the radar image, we set the
minimum area unit U.
• Step 2: Divide the target into three categories based on the relationship between the
point cloud area of the target (S) and the minimum unit: ordinary small cars (S < P1 U),
medium-sized cars (P1 U < S < P2 U), and large buses (S > P2 U).
• Step 3: The vehicle size is inferred in a reverse way according to the industry rules on
vehicle types and sizes.
Although the information acquired by using this method may not be accurate for
every vehicle, it ensures a stable estimation of the vehicle type and size throughout the
monitored road segment and provides information about the height of the target.
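The three-way split in Step 2 reduces to a pair of area thresholds. U, P1, and P2 below are hypothetical placeholder values, not the calibrated ones used in the experiments:

```python
def classify_vehicle(area, u=1.0, p1=2.0, p2=6.0):
    """Step 2: map a target's point cloud area S to a vehicle class.

    u is the minimum area unit U from Step 1; p1 and p2 are the class
    boundaries P1 and P2 (all hypothetical placeholder values).
    """
    if area < p1 * u:
        return "ordinary small car"
    if area < p2 * u:
        return "medium-sized car"
    return "large bus"
```

Step 3 then assigns each class a nominal size from industry vehicle-dimension rules, rather than measuring the size of each individual point cloud.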
Figure 15. MC-CFAR algorithm performance. (a) Detection sensitivity. (b) Computational complexity.
Figure 17. The improved MC-CFAR algorithm performance. (a) Target detection under non-uniform
noise. (b) Noise region division based on the fitting curve.
As shown in Figure 17a, the improved MC-CFAR algorithm can handle the target
detection task with a non-uniform distribution of background noise power compared with
the MC-CFAR algorithm. Compared with the CA-CFAR algorithm, the improved MC-
CFAR has multi-target detection capability. In addition, the improved MC-CFAR algorithm
retains the ability of MC-CFAR to detect speckled targets, which is not available in the
OS-CFAR algorithm.
Through the analysis, we can obtain the following conclusion:
• The improved MC-CFAR algorithm maintains the advantages of the MC-CFAR algorithm
and realizes target detection under a non-uniform noise background by fitting and
dividing the noise curve, which makes up for the shortage of the MC-CFAR algorithm.
Figure 18. The HPC-Peak algorithm simulation. (a) Velocity estimation error. (b) Angle estimation error.
We use the error between the estimated angle obtained after velocity compensa-
tion and the actual value to measure the performance of the algorithm, as shown in
Figure 19a,b,d,e,g,h. Figure 19c,f,i shows the angular power spectrum waveform after HPC
algorithm compensation and FFT operation. The FFT points represent the angle frequency
points of the target—that is, targets at different angles are located at different frequency
locations. The number of spectrum peaks represents the number of targets.
It can be seen that some angle groups may make the HPC-Peak algorithm fail under
multi-object velocity disambiguation. Once HPC-Peak fails, there will be a large error
between the estimated angle of the target and the real one (the maximum error can reach
5°), as shown by the red dot area in Figure 19a,d,g. Further, the number of peaks in the
angular power spectrum does not match the true number of the targets (as shown in the red
angular spectrum waveform in Figure 19c,f,i), i.e., the number of targets is misestimated.
On the contrary, the HPC-SNR algorithm has always maintained good performance in
Doppler compensation. Throughout the experiment, the error between the estimated
angle and the actual value does not exceed 0.5◦ and the number of peaks of the power
spectrum is consistent with the number of actual targets, as shown by the blue dot area and
waveform in Figure 19b,e,h,c,f,i. Through the experiment, the following two conclusions
can be obtained:
• When there are multiple targets in the same range-Doppler cell, the performance
of the HPC-Peak algorithm is unstable. Once the algorithm fails, there is a large
error between the estimated angle and the true value, and the number of targets is
incorrectly estimated. However, the HPC-SNR algorithm maintains good performance
in multi-object situations. Moreover, the angular power spectrum can correctly reflect
the number of targets.
• In the case where both algorithms are valid, the HPC-SNR algorithm performs better, i.e.,
the error of angle estimation is smaller, especially in the case of multi-objective situations.
Figure 20. DBSCAN-based noise removal simulation. (a) The raw data of point cloud. (b) The
performance of the DBSCAN algorithm. (c) The performance of the K-MEANS algorithm.
The radar center is taken as the coordinate origin, the direction of the radar beam
(parallel to the lane direction) as the y-axis, and the perpendicular to the direction of radar
beam (perpendicular to the direction of the lane) as the x-axis. The target speed away
from the radar is defined as positive speed; otherwise, it is defined as negative speed. The
original parameters of the vehicle are shown in Table 3.
Vehicle Type | Driving Direction | Lane (Relative to Radar) | X-Axis Range (m)
Vehicle 1 | close | Second lane on the left of the radar | [−7.5, −3.5]
Vehicle 2 | far away | Third lane on the right of the radar | [7.5, 11.5]
Vehicle 3 | close | First lane on the left of the radar | [−11.5, −7.5]
Vehicle 4 | close | Third lane on the left of the radar | [−3.5, 0]
Figure 22 shows the imaging results of moving targets using different CFAR detection
algorithms. With the same detection target, the MC-CFAR algorithm with higher detection
sensitivity can output more target points than the commonly used CFAR. For all CFAR
detectors, as the volume of the detected object continues to increase, the set of point clouds
that belong to the same target will be dispersed into multiple subsets. However, the
point cloud obtained by the MC-CFAR algorithm is denser and more cohesive, as shown
in Figure 22d,h,m,p. In addition, by comparing vehicle 3 and vehicle 4, radar roadside
surveillance is more conducive to presenting the shape of the target because it can avoid
occlusion of the vehicle itself.
Through the experiments on the detection of actual targets, the MC-CFAR algorithm
has the best imaging effect compared with other CFAR algorithms. It can represent the
shape of the target with a richer and denser point cloud, which will be beneficial for the
recognition of vehicle types.
Figure 22. Moving targets CFAR test. (a) The CA-CFAR algorithm test result in the case of vehicle 1.
(b) The OS-CFAR algorithm test result in the case of vehicle 1. (c) The OSCA-CFAR algorithm test
result in the case of vehicle 1. (d) The MC-CFAR algorithm test result in the case of vehicle 1. (e) The
CA-CFAR algorithm test result in the case of vehicle 2. (f) The OS-CFAR algorithm test result in
the case of vehicle 2. (g) The OSCA-CFAR algorithm test result in the case of vehicle 2. (h) The
MC-CFAR algorithm test result in the case of vehicle 2. (i) The CA-CFAR algorithm test result in the
case of vehicle 3. (j) The OS-CFAR algorithm test result in the case of vehicle 3. (k) The OSCA-CFAR
algorithm test result in the case of vehicle 3. (m) The MC-CFAR algorithm test result in the case of
vehicle 3. (l) The CA-CFAR algorithm test result in the case of vehicle 4. (n) The OS-CFAR algorithm
test result in the case of vehicle 4. (o) The OSCA-CFAR algorithm test result in the case of vehicle 4.
(p) The MC-CFAR algorithm test result in the case of vehicle 4.
• For the imaging of high-speed targets, velocity ambiguity will affect the accuracy of
the lateral position of the target. The faster the speed, the larger the error in the lateral
position. Velocity disambiguation is one of the key techniques to ensure the imaging
accuracy of moving targets.
• Compared with HPC-Peak, the HPC-SNR algorithm can obtain fewer abnormal points
and more stable imaging results. When there are multiple targets in the same range-
Doppler cell, the HPC-Peak algorithm may fail.
Figure 23. Velocity disambiguation test. (a) Without velocity disambiguation in vehicle 1 case. (b)
The HPC-Peak algorithm test result in vehicle 1 case. (c) The HPC-SNR algorithm test result in
vehicle 1 case. (d) Without velocity disambiguation in vehicle 2 case. (e) The HPC-Peak algorithm
test result in vehicle 2 case. (f) The HPC-SNR algorithm test result in vehicle 2 case. (g) Without
velocity disambiguation in vehicle 3 case. (h) The HPC-Peak algorithm test result in vehicle 3 case. (i)
The HPC-SNR algorithm test result in vehicle 3 case. (j) Without velocity disambiguation in vehicle 4
case. (k) The HPC-Peak algorithm test result in vehicle 4 case. (l) The HPC-SNR algorithm test result
in vehicle 4 case.
Obviously, among all CFAR detection algorithms, the improved MC-CFAR algorithm
has the best detection effect, i.e., the green belt areas on both sides of the road are clearly
presented in the form of 2D point clouds by the improved MC-CFAR detection technology.
Because it is not limited by a reference window, MC-CFAR can detect
large-sized objects better than several other detection algorithms. In addition, the 3 dB
division operation of RAM enables the improved MC-CFAR algorithm to implement static
target detection and extraction in the traffic environments with increasing or decreasing
background noise power amplitude, which compensates for the shortcomings of the MC-
CFAR algorithm.
the original “clean” lane, as shown in Figure 26a,d in the red box. The points output by
the detector are sent to the DBSCAN processor for class clustering, and these outliers and
sparse noise points will be identified, as shown by the red circle in Figure 26b,e. Finally, the
detected noise points are removed and a “clean” radar point cloud image is obtained, as
shown in Figure 26c,f.
Figure 26. Noise point removal results by the DBSCAN algorithm. (a) Raw point cloud image of traffic
scene 1. (b) Target point classification by DBSCAN in traffic scene 1. (c) Noise removal result
in traffic scene 1. (d) Raw point cloud image of traffic scene 2. (e) Target point classification by
DBSCAN in traffic scene 2. (f) Noise removal result in traffic scene 2.
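The outlier-removal step can be illustrated with a small sketch that runs DBSCAN on a synthetic 2D point cloud. The coordinates and the eps/min_samples values below are assumptions chosen for the example, not the paper's tuned parameters:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic 2-D radar point cloud: two dense vehicle-like clusters
# plus a few isolated interference points (values are illustrative)
cluster1 = np.array([[x, y] for x in np.arange(10, 14, 0.5)
                            for y in np.arange(-1, 1, 0.5)])
cluster2 = cluster1 + [20.0, 3.0]
outliers = np.array([[5.0, 8.0], [40.0, -6.0], [25.0, 9.0]])
points = np.vstack([cluster1, cluster2, outliers])

# eps: neighborhood radius (m); min_samples: density threshold.
# DBSCAN labels sparse points as -1 (noise).
labels = DBSCAN(eps=1.0, min_samples=4).fit(points).labels_

clean = points[labels != -1]   # keep only clustered (target) points
```

Points that belong to no dense neighborhood receive the label -1 and are discarded, which mirrors the removal of outliers and sparse noise points described above.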
5.3. Urban Traffic Road Scenes 2D Imaging and Vehicle Information Perception
5.3.1. Dynamic and Static Target Imaging Integration
Figure 27 shows the imaging results of the traffic scene based on millimeter-wave radar.
Based on the proposed imaging architecture and algorithm, targets in the range of 10–50 m
in urban roads are presented in the form of 2D radar point cloud images. In the point cloud
image, the color of the point cloud represents the speed and direction of travel of the target
(the faster the target, the darker the point cloud), and the shape, size, and distribution of
the point field represent different types of targets. Compared with detection radar, the use
of millimeter-wave radar imaging technology can provide people and intelligent traffic
monitoring systems with richer and more intuitive road information. In addition, a video
is provided to further show the imaging results (see Appendix B for the video link).
car is insufficient at long distances, and the point cloud of the large car at short distances
is scattered; thus, the error of the vehicle length estimation exceeds 50%.
Figure 27. Urban traffic road imaging based on millimeter-wave radar. (a) Traffic scene 1 and
combined moving and static targets imaging result. (b) Traffic scene 2 and combined moving and
static targets imaging result.
Although the shape of the vehicle changes during the 2D projection, there is a large
difference between the areas of the point cloud sets due to different vehicle types, as shown
in Figure 29a. Therefore, an approach of vehicle information perception that first estimates
the vehicle type and then infers the vehicle size is adopted. Before the estimation of vehicle
types, the minimum unit area U (=1 m × 1 m) is set by counting the point cloud areas of
different types of vehicles. Then, based on the relationship between the target point cloud
area (S) and the minimum unit, the targets are divided into three categories: small car
(S < 8U ), medium-sized car (8U < S < 12U ), and large bus (S > 20U ).
Remote Sens. 2022, 14, 5416 25 of 30
Figure 29. Vehicle type estimation based on point cloud area. (a) Different types of vehicle point
clouds. (b) Vehicle type awareness accuracy.
Based on the proposed perception method, the accuracy of the estimation of vehicle
types remains above 80%, as shown in Figure 29b. It is worth noting that the accuracy of
the estimation of small cars decreases at close range; however, this does not mean that the
area and distribution of point clouds of small cars do not satisfy the set conditions. The
fundamental reason for this phenomenon is that when large vehicles are imaged at close
range, their point cloud sets become more dispersed, which leads to some point cloud
sets belonging to large vehicles being mistaken for small cars. Based on the classification
results, the length, width, and height information ( L, W, H ) can be assigned to different
types of vehicles: small car (4 m, 1.7 m, 1.5 m), medium-sized car (7 m, 2 m, 2.5 m), large
bus (10 m, 2.5 m, 3 m).
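The area-threshold classification and size assignment described above can be sketched as follows. The thresholds and (L, W, H) values are taken directly from the text; the helper function name is ours, and the 12U–20U band is left unassigned because the text does not assign it a class:

```python
def classify_vehicle(area_m2, U=1.0):
    """Map a point-cloud footprint area (m^2) to the vehicle classes
    defined in the text; the 12U-20U band has no stated class."""
    if area_m2 < 8 * U:
        return "small car"
    if 8 * U < area_m2 < 12 * U:
        return "medium-sized car"
    if area_m2 > 20 * U:
        return "large bus"
    return None

# (L, W, H) in meters, assigned per class as stated in the text
SIZES = {
    "small car": (4.0, 1.7, 1.5),
    "medium-sized car": (7.0, 2.0, 2.5),
    "large bus": (10.0, 2.5, 3.0),
}
```

Once a class is chosen, the fixed (L, W, H) tuple is reported for the vehicle, which keeps the size error stable along the monitored road segment at the cost of per-vehicle accuracy.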
The proposed method can ensure that the error of vehicle size is relatively stable
during the entire monitored road segment, and avoids the misjudgment of vehicle type
resulting from the change in the point cloud length. However, the vehicle size under this
method is only a reference value where the error is relatively stable and, thus, cannot be
guaranteed to suit every vehicle.
6. Conclusions
In this work, the Monte-Carlo-based moving target detection algorithm (MC-CFAR), the
improved Hypothetical Phase Compensation (HPC-SNR) velocity disambiguation algorithm,
the improved MC-CFAR static target detection algorithm, and the DBSCAN-based noise
removal algorithm are proposed to improve the typical three-dimensional FFT imaging
algorithm architecture for radar roadside imaging in traffic scenarios. The performance of
the proposed improved 3D-FFT algorithm architecture is verified with actual urban traffic
data collected by a roadside-mounted radar platform rather than with simulated data. The
results show that it enables concurrent dynamic and static target imaging in urban traffic
using millimeter-wave radar. In addition, based on radar
point cloud images, we initially perceive the type of moving vehicles, which conventional
traffic detection radars cannot achieve. We hope our work can promote the application of
radar imaging in urban traffic surveillance.
However, there are still issues that need to be improved and further studied. In future
work, we will conduct further research on radar urban roadside imaging: (1) combining
the architecture with the ISAR algorithm to improve the imaging of long-distance targets;
(2) studying the dispersion of large-vehicle point cloud images to avoid large vehicles being
misclassified as multiple small targets; and (3) improving radar target recognition, including
vehicle types and road obstacles.
Author Contributions: Conceptualization, B.Y. and H.Z.; investigation, Y.Z. and Y.P.; project ad-
ministration, H.Z.; writing—original draft preparation, B.Y. and Y.C.; writing—review and editing,
B.Y. and Y.C. All authors have read and agreed to the published version of the manuscript.
Funding: This work was supported partially by the Civil Aerospace Technology Advanced Research
project (No. D020403) and Science and Technology on Near-Surface Detection Laboratory (No.
6142414211202). The authors greatly appreciate the above financial support.
Data Availability Statement: Not applicable.
Conflicts of Interest: This manuscript has not been published or presented elsewhere in part or
in its entirety, and is not under consideration by another journal. There are no conflicts of interest
to declare.
Abbreviations
The following abbreviations are used in this manuscript:
2D Two-Dimensional
3D Three-Dimensional
CFAR Constant False Alarm Rate
CA-CFAR Cell-Averaging CFAR
OS-CFAR Ordered-Statistic CFAR
OSCA-CFAR Ordered-Statistic Cell-Averaging CFAR
MC-CFAR Monte-Carlo-based CFAR
HPC Hypothetical Phase Compensation
SNR Signal-to-Noise Ratio
DBSCAN Density-Based Spatial Clustering of Applications with Noise
FFT Fast Fourier Transform
RDM Range-Doppler Map
RAM Range-Angle Map
TDM-MIMO Time-Division Multiplexing Multiple-Input Multiple-Output
ISAR Inverse Synthetic Aperture Radar
FPGA Field-Programmable Gate Array
Appendix A
Appendix A.1
In practice, compared with traffic detection radar, the imaging radar system can
provide richer target point cloud information, but it also requires the support of a better
processor and storage system to complete the tasks of traffic monitoring, which is an
inevitable price to pay. However, this price is manageable. In our radar system, we use the
physical architecture of FPGA+ARM to provide a platform for software development. At
the same time, six pieces of DDR memory (two on the FPGA side and four on the ARM
side) are used to support the data processing and data flow of the algorithm, as shown in
Figure A1.
Appendix A.2
We divide the whole radar signal processing process into signal pre-processing and
signal post-processing. The detailed processing flow of the proposed algorithms on the
radar system is shown in Figures A2 and A3.
The radar signal pre-processing process is as follows:
• First, the radar uses the antenna array to obtain the raw data of the traffic scene. In the
working mode of TDM-MIMO, the radar can obtain raw data from k (k = 86) azimuth
antenna channels at one time.
• Then, the data of each azimuth antenna channel are subjected to a 2D-FFT operation
to obtain k RDM matrices.
• Next, we divide RDM data into two categories: zero-Doppler data and non-zero-
Doppler data.
• Finally, the non-zero-Doppler data and zero-Doppler data will be sent to the moving
target signal processing unit and static target signal processing unit, respectively, in the
post-processing of the radar signal to realize the detection and extraction of moving
and static targets.
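The pre-processing chain above can be sketched for a single azimuth channel as follows. The matrix sizes and the test tone are illustrative assumptions, not the radar's actual parameters:

```python
import numpy as np

def preprocess_channel(adc, n_range=256, n_doppler=128):
    """2D-FFT of one azimuth channel's chirp matrix (samples x chirps)
    into a range-Doppler map (RDM); shapes are illustrative."""
    rng_fft = np.fft.fft(adc, n_range, axis=0)                     # range FFT per chirp
    return np.fft.fftshift(np.fft.fft(rng_fft, n_doppler, axis=1), axes=1)

def split_doppler(rdm):
    """Separate the zero-Doppler column (static returns) from the
    non-zero-Doppler data (moving returns)."""
    zero_bin = rdm.shape[1] // 2                                   # after fftshift
    return rdm[:, zero_bin], np.delete(rdm, zero_bin, axis=1)

# Example: a stationary scatterer (same phase across chirps) lands
# entirely in the zero-Doppler column.
adc = np.outer(np.exp(1j * 2 * np.pi * 0.1 * np.arange(256)), np.ones(128))
static, moving = split_doppler(preprocess_channel(adc))
```

In the full system this operation is repeated for each of the k azimuth channels, and the two outputs feed the static and moving target processing units, respectively.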
The post-processing of the radar signal is mainly divided into moving target signal
processing and static target signal processing. The two processing flows are
as follows:
Moving target information processing flow:
• A new RDM matrix is obtained by incoherent accumulation of all non-zero-Doppler
data. The new RDM matrix will be processed by the moving target CFAR search engine.
• In the moving target CFAR search engine, the MC-CFAR algorithm is implemented
to obtain the indices of target distance and velocity. In addition, the CA-CFAR, OS-
CFAR, and OSCA-CFAR algorithms are implemented and compared to verify the
performance of the MC-CFAR algorithm.
• The raw data on the target angle are obtained from the RDM set according to the
index information. The angle data of all targets are sent to the velocity disambiguation
engine for processing.
• In the velocity disambiguation engine, the HPC-SNR algorithm is implemented to ob-
tain the true velocity and azimuth of the target. Furthermore, the HPC-Peak algorithm
is implemented and compared to verify the performance of the HPC-SNR algorithm.
• Finally, the real distance, speed, and angle of the target obtained through the CFAR
engine and the velocity disambiguation engine are output to the PC terminal and
plotted using tools such as MATLAB.
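For reference, a minimal 1-D cell-averaging CFAR — one of the baselines the moving-target search engine is compared against — can be sketched as follows. The window sizes and false-alarm probability are illustrative; this is the standard CA-CFAR, not the MC-CFAR algorithm itself:

```python
import numpy as np

def ca_cfar_1d(power, guard=2, train=8, pfa=1e-3):
    """Minimal 1-D cell-averaging CFAR over a power profile.
    Returns indices of cells exceeding the adaptive threshold."""
    n_train = 2 * train
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1)   # CA-CFAR scaling factor
    hits = []
    for i in range(guard + train, len(power) - guard - train):
        lead = power[i - guard - train : i - guard]
        lag  = power[i + guard + 1 : i + guard + train + 1]
        noise = (lead.sum() + lag.sum()) / n_train    # local noise estimate
        if power[i] > alpha * noise:
            hits.append(i)
    return np.asarray(hits)

# Example: one strong target in exponentially distributed clutter
rng = np.random.default_rng(0)
profile = rng.exponential(1.0, 300)
profile[150] += 60.0
detections = ca_cfar_1d(profile)
```

The reference-window averaging here is precisely the limitation the text attributes to CA-CFAR for large-sized objects, which motivates the window-free MC-CFAR design.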
Static target information processing flow:
• The RDM zero-Doppler data in all channels are sent to the static target signal process-
ing unit. The RAM matrix is obtained after FFT processing of the angle. The RAM
matrix will be processed by the static target CFAR search engine.
• In the static target CFAR search engine, the improved MC-CFAR algorithm is imple-
mented to obtain indices of the target distance and angle. In addition, the CA-CFAR,
OS-CFAR, and OSCA-CFAR algorithms are implemented and compared to verify the
performance of the improved MC-CFAR algorithm.
• All target points are input into the Outlier Removal Engine to eliminate interference points.
• Finally, the static target points, after being processed by the DBSCAN algorithm, are
output to the PC terminal to be plotted using tools such as MATLAB.
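The zero-Doppler angle FFT that produces the RAM can be sketched as follows. The channel count k = 86 matches the text; the half-wavelength spacing and the synthetic scatterer are assumptions for the example:

```python
import numpy as np

def range_angle_map(zero_doppler, n_angle=128):
    """Angle FFT across the azimuth channels of the zero-Doppler data
    (shape: range bins x channels) to form the range-angle map (RAM)."""
    return np.fft.fftshift(np.fft.fft(zero_doppler, n_angle, axis=1), axes=1)

# Synthetic static scatterer at range bin 40 with sin(azimuth) = 0.25,
# seen by k = 86 half-wavelength-spaced virtual channels.
k = 86
zd = np.zeros((256, k), dtype=complex)
zd[40, :] = np.exp(1j * 2 * np.pi * 0.5 * 0.25 * np.arange(k))
ram = range_angle_map(zd)

r, a = np.unravel_index(np.abs(ram).argmax(), ram.shape)
sin_est = (a - 64) / 64.0          # angle bin -> sin(azimuth) for d = lambda/2
```

The resulting RAM is what the static target CFAR search engine scans for range-angle indices of static returns.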
Appendix B
Here is a video link:
https://github.com/Radar208/Video_of_Urban_Traffic_Imaging_Using_Millimeter_
wave_Radar.git/, accessed on 1 October 2022.
References
1. Prabhakara, A.; Jin, T.; Das, A.; Bhatt, G.; Kumari, L.; Soltanaghaei, E.; Bilmes, J.; Kumar, S.; Rowe, A. High Resolution Point
Clouds from mmWave Radar. arXiv 2022, arXiv:2206.09273.
2. Liu, H.; Li, N.; Guan, D.; Rai, L. Data feature analysis of non-scanning multi target millimeter-wave radar in traffic flow detection
applications. Sensors 2018, 18, 2756. [CrossRef] [PubMed]
3. Chang, S.; Zhang, Y.; Zhang, F.; Zhao, X.; Huang, S.; Feng, Z.; Wei, Z. Spatial attention fusion for obstacle detection using
mmwave radar and vision sensor. Sensors 2020, 20, 956. [CrossRef]
4. Pegoraro, J.; Meneghello, F.; Rossi, M. Multiperson continuous tracking and identification from mm-wave micro-Doppler
signatures. IEEE Trans. Geosci. Remote Sens. 2020, 59, 2994–3009. [CrossRef]
5. Dankert, H.; Horstmann, J.; Rosenthal, W. Wind-and wave-field measurements using marine X-band radar-image sequences.
IEEE J. Ocean. Eng. 2005, 30, 534–542. [CrossRef]
6. Nieto-Borge, J.; Hessner, K.; Jarabo-Amores, P.; De La Mata-Moya, D. Signal-to-noise ratio analysis to estimate ocean wave
heights from X-band marine radar image time series. IET Radar Sonar Navig. 2008, 2, 35–41. [CrossRef]
7. Wei, S.; Zhou, Z.; Wang, M.; Wei, J.; Liu, S.; Zhang, X.; Fan, F. 3DRIED: A high-resolution 3-D millimeter-wave radar dataset
dedicated to imaging and evaluation. Remote Sens. 2021, 13, 3366. [CrossRef]
8. Wang, Z.; Guo, Q.; Tian, X.; Chang, T.; Cui, H.L. Near-field 3-D millimeter-wave imaging using MIMO RMA with range
compensation. IEEE Trans. Microw. Theory Tech. 2018, 67, 1157–1166. [CrossRef]
9. Zhao, Y.; Yarovoy, A.; Fioranelli, F. Angle-insensitive Human Motion and Posture Recognition Based on 4D imaging Radar and
Deep Learning Classifiers. IEEE Sens. J. 2022, 22, 12173–12182. [CrossRef]
10. Qian, K.; He, Z.; Zhang, X. 3D point cloud generation with millimeter-wave radar. Proc. ACM Interact. Mob. Wearable Ubiquitous
Technol. 2020, 4, 148. [CrossRef]
11. Guo, Q.; Wang, Z.; Chang, T.; Cui, H.L. Millimeter-Wave 3-D Imaging Testbed with MIMO Array. IEEE Trans. Microw. Theory
Tech. 2020, 68, 1164–1174. [CrossRef]
12. Sun, S.; Zhang, Y.D. 4D Automotive Radar Sensing for Autonomous Vehicles: A Sparsity-Oriented Approach. IEEE J. Sel. Top.
Signal Process. 2021, 15, 879–891. [CrossRef]
13. Gao, X.; Xing, G.; Roy, S.; Liu, H. Ramp-cnn: A novel neural network for enhanced automotive radar object recognition. IEEE
Sens. J. 2020, 21, 5119–5132. [CrossRef]
14. Li, G.; Sit, Y.L.; Manchala, S.; Kettner, T.; Ossowska, A.; Krupinski, K.; Sturm, C.; Goerner, S.; Lübbert, U. Pioneer study on
near-range sensing with 4D MIMO-FMCW automotive radars. In Proceedings of the 2019 20th International Radar Symposium
(IRS), Ulm, Germany, 26–28 June 2019; pp. 1–10.
15. Lee, T.Y.; Skvortsov, V.; Kim, M.S.; Han, S.H.; Ka, M.H. Application of W-Band FMCW Radar for Road Curvature Estimation in
Poor Visibility Conditions. IEEE Sens. J. 2018, 18, 5300–5312. [CrossRef]
16. Sabery, S.M.; Bystrov, A.; Gardner, P.; Stroescu, A.; Gashinova, M. Road Surface Classification Based on Radar Imaging Using
Convolutional Neural Network. IEEE Sens. J. 2021, 21, 18725–18732. [CrossRef]
17. Farina, A.; Studer, F.A. A review of CFAR detection techniques in radar systems. Microw. J. 1986, 29, 115.
18. Gandhi, P.; Kassam, S. Analysis of CFAR processors in nonhomogeneous background. IEEE Trans. Aerosp. Electron. Syst. 1988,
24, 427–445. [CrossRef]
19. Finn, H.M. Adaptive detection mode with threshold control as a function of spatially sampled-clutter-level estimates. RCA Rev.
1968, 29, 414–464.
20. Yan, J.; Li, X.; Shao, Z. Intelligent and fast two-dimensional CFAR procedure. In Proceedings of the 2015 IEEE International
Conference on Communication Problem-Solving (ICCP), Guilin, China, 16–18 October 2015; pp. 461–463. [CrossRef]
21. Rohling, H. Resolution of Range and Doppler Ambiguities in Pulse Radar Systems. In Proceedings of the Digital Signal
Processing, Florence, Italy, 7–10 September 1987; p. 58.
22. Kronauge, M.; Schroeder, C.; Rohling, H. Radar target detection and Doppler ambiguity resolution. In Proceedings of the 11th
International Radar Symposium, Vilnius, Lithuania, 16–18 June 2010; pp. 1–4.
23. Kellner, D.; Klappstein, J.; Dietmayer, K. Grid-based DBSCAN for clustering extended objects in radar data. In Proceedings of the
IEEE Intelligent Vehicles Symposium, Madrid, Spain, 3–7 June 2012; pp. 365–370.
24. Roos, F.; Bechter, J.; Appenrodt, N.; Dickmann, J.; Waldschmidt, C. Enhancement of Doppler unambiguity for chirp-sequence
modulated TDM-MIMO radars. In Proceedings of the 2018 IEEE MTT-S International Conference on Microwaves for Intelligent
Mobility (ICMIM), Munich, Germany, 16–17 April 2018; pp. 1–4.
25. Gonzalez, H.A.; Liu, C.; Vogginger, B.; Mayr, C.G. Doppler Ambiguity Resolution for Binary-Phase-Modulated MIMO FMCW
Radars. In Proceedings of the 2019 International Radar Conference (RADAR), Toulon, France, 23–27 September 2019; pp. 1–6.
[CrossRef]
26. Gonzalez, H.A.; Liu, C.; Vogginger, B.; Kumaraveeran, P.; Mayr, C.G. Doppler disambiguation in MIMO FMCW radars with
binary phase modulation. IET Radar Sonar Navig. 2021, 15, 884–901. [CrossRef]
27. Liu, C.; Gonzalez, H.A.; Vogginger, B.; Mayr, C.G. Phase-based doppler disambiguation in TDM and BPM MIMO FMCW radars.
In Proceedings of the 2021 IEEE Radio and Wireless Symposium (RWS), San Diego, CA, USA, 17–22 January 2021; pp. 87–90.
28. Stolz, M.; Wolf, M.; Meinl, F.; Kunert, M.; Menzel, W. A new antenna array and signal processing concept for an automotive 4D
radar. In Proceedings of the 2018 15th European Radar Conference (EuRAD), Madrid, Spain, 26–28 September 2018; pp. 63–66.
29. Phippen, D.; Daniel, L.; Hoare Sr, E.; Gishkori Sr, S.; Mulgrew, B.; Cherniakov, M.; Gashinova, M. Height estimation for 3-D
automotive scene reconstruction using 300 GHz multireceiver radar. IEEE Trans. Aerosp. Electron. Syst. 2022, 58, 2339–2351.
[CrossRef]
30. Laribi, A.; Hahn, M.; Dickmann, J.; Waldschmidt, C. A new height-estimation method using FMCW radar Doppler beam
sharpening. In Proceedings of the 2017 25th European Signal Processing Conference (EUSIPCO), Kos Island, Greece,
28 August–2 September 2017; pp. 1932–1936.
31. Laribi, A.; Hahn, M.; Dickmann, J.; Waldschmidt, C. A novel target-height estimation approach using radar-wave multipath
propagation for automotive applications. Adv. Radio Sci. 2017, 15, 61–67. [CrossRef]
32. Cui, H.; Wu, J.; Zhang, J.; Chowdhary, G.; Norris, W.R. 3D Detection and Tracking for On-road Vehicles with a Monovision
Camera and Dual Low-cost 4D mmWave Radars. In Proceedings of the 2021 IEEE International Intelligent Transportation
Systems Conference (ITSC), Indianapolis, IN, USA, 19 September 2021; pp. 2931–2937.
33. Jin, F.; Sengupta, A.; Cao, S.; Wu, Y.J. Mmwave radar point cloud segmentation using gmm in multimodal traffic monitoring. In
Proceedings of the 2020 IEEE International Radar Conference (RADAR), Washington, DC, USA, 28–30 April 2020; pp. 732–737.
34. Yang, B.; Zhang, H. A CFAR Algorithm Based on Monte Carlo Method for Millimeter-Wave Radar Road Traffic Target Detection.
Remote Sens. 2022, 14, 1779. [CrossRef]