Automated Point Clouds Registration Using Visual and Planar Features for Construction Environments

Kim, P., Chen, J., and Cho, Y. ASCE Journal of Computing in Civil Engineering, Volume 32, March 2018. DOI: 10.1061/(ASCE)CP.1943-5487.0000720.
1 Georgia Institute of Technology, Atlanta, GA, USA, 30332-0355; Phone (+1) 678 735 1781; email: pkim45@gatech.edu
2 Ph.D. Student, Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, 777 Atlantic Dr. N.W., Atlanta, GA, USA, 30332-0355; Phone (+1) 314 489 3172; email: jchen490@gatech.edu
3 Corresponding Author, Associate Professor, Department of Civil and Environmental Engineering, Georgia Institute of Technology, Atlanta, GA, USA, 30332-0355; Phone (+1) 404
ABSTRACT
Due to the limited view of a single laser scan, multiple scans are required to cover all scenes of a large construction site, and a registration process is needed to merge them together. Although many research efforts have addressed automatic point cloud registration, prior works have some limitations: the automatic registration was tested in a bounded region and required a large overlapped area between scans. The aim of this paper is to introduce a novel method that achieves automatic point cloud registration in an unbounded region and with a relatively small overlapped area, without using artificial targets, landmarks, or any other manual
alignment process. For the automatic point cloud registration, the proposed framework utilizes the correspondences among the series of scans for the initial alignment. Then, it computes the overlapped area between scans and determines which method to use for the final alignment. If the overlapped area is sufficiently large, the iterative closest point (ICP) algorithm is used to generate the proper transformation. Otherwise, a plane matching algorithm is used to achieve precise registration. The proposed framework was tested at outdoor construction sites and in an indoor environment, which resulted in deviation angle accuracies of less than 0.35° for the outdoor and 0.13° for the indoor testbeds, respectively, with processing times of less than four minutes. These promising results demonstrate that the proposed target-free automatic registration method can significantly reduce the manual registration time and data gathering time without compromising registration accuracy, thus simplifying and promoting laser scanning practices in the AEC/FM industry.
Introduction
Three-dimensional (3D) as-built data of constructed facilities can be used in many ways for structural damage assessment, urban planning, historical building restoration, building renovation, facility management, and building energy analyses. With the recent advancement of sensing technologies, both terrestrial laser scanning and computer vision-based techniques have been extensively studied in those areas (Brilakis et al. 2010; Ham and Golparvar-Fard 2013, 2014; Olsen et al. 2010; Tang et al. 2010; Wang et al. 2013; Wang and Cho 2014; Rashidi et al. 2015). While both approaches have strengths and weaknesses depending on working environments and data quality requirements, computer vision-based techniques cannot generally provide the same level of accuracy as laser scanners (Dai et al. 2013; Golparvar-Fard et al. 2011). In particular,
3D laser scanning technology has been extensively used in construction to render real-sized objects or environments in the form of dense point cloud data (Cho and Haas 2003; Wang and Cho 2015). This is because current laser scanners are less sensitive to lighting conditions (Cho and Gai 2014) and have become faster, smaller, and more affordable, with higher data collection rates. Further, a virtual 3D model of a construction site built through point cloud mapping and registration can improve the ability to understand the scene of interest, track construction progress (Golparvar-Fard et al. 2009), monitor structural health (Park et al. 2007), and recognize potential safety hazards.
An entire construction site covers a large area, so it is necessary to collect scans from multiple points of view to get a full reconstruction of the site. All the individual point clouds collected in local coordinate frames must be transformed to a global coordinate system through a procedure known as point cloud registration. Registration of point clouds is defined as fitting and matching multiple point clouds scanned from different viewpoints into one common coordinate system, which transforms each point cloud set from its local coordinate frame to a global coordinate frame. However, the raw 3D scanned point cloud data can often be distorted by obstacles or sensor noise. Because of this variety of challenges, automating the registration of point clouds remains a challenging problem.
Literature Review
There has been a considerable amount of research in registering multi-view point clouds, which
can be classified into two categories depending on the use of registration targets. One of the
traditional approaches to point cloud registration is to place artificial targets (e.g., spheres, checkerboards) visible within separate but overlapping point clouds, which is known as target-
based registration. Becerik-Gerber et al. (2011) proposed a 3D target-based point cloud registration method. They experimented with three different types of targets, namely fixed paper, paddle, and sphere targets, and with phase-based and time-of-flight laser scanners. According to their experiments, the spherical target with a time-of-flight scanner provided the most accurate results. Kim et al. (2011) developed a system for ship fabrication, which used spherical targets attached to an object of interest for merging point clouds. However, target-based point cloud registration requires extra time and effort for installing and adjusting the targets at every scan. Also, the use of targets incurs extra costs, making it less desirable on large and complex construction sites.
The other approach to point cloud registration is to use image processing methods and iterative algorithms instead of artificial targets. Böhm and Becker (2007) tried to register terrestrial laser scan data without markers using Scale-Invariant Feature Transform (SIFT) key points extracted from the reflectance data. However, their target point cloud covered only a single building and did not include the scattered surroundings. Moussa et al. (2012) and Eo et al. (2012) proposed procedures for automatic combination and co-registration of digital images and terrestrial laser data, which used images associated with intensity and RGB values. However, this method is highly sensitive to the size of the overlapping area between scans. In their test (Eo et al. 2012), 12 scans were collected for one corner of a building. Fusing edges extracted from 2D images and 3D point cloud data using range images was proposed with a simple pixel-correspondence mechanism (Wang et al. 2013). Their approach relies on edge extraction from 2D images, but has some flaws in border feature detection. The feature-based registration was achieved without initial alignment because 2D images are employed to aid in the recognition of feature points. However, it was too sensitive to the size of overlapping areas in the point cloud data. In addition, a large number of scans is needed to get a good performance result, and the feature extraction is heavily influenced by image quality, a known issue
for common feature-based registration methods (Gai et al. 2013). Weinmann and Jutzi (2011) studied automatic image-based registration of unordered Terrestrial Laser Scanning (TLS) data, which used both range and reflectance information. They sorted the unordered TLS data automatically and registered them using the iterative closest point (ICP) algorithm with camera pose estimation. This might be applicable to construction sites; however, they performed over 10 scans of a limited space to provide sufficient overlapped area for the ICP algorithm, and did not consider cases of low overlapped area between scans. Weinmann et al. (2013) studied automatic and accurate alignment of multiple point clouds using range imaging devices. In general, range imaging devices can collect only low-resolution point clouds compared to lidar devices, and they also performed their experiment using miniature mockups. Therefore, it is not applicable to large-scale construction environments.
Tombari and Remondino (2013) reviewed the state-of-the-art techniques for automatic registration of 3D point clouds and meshes. However, their work tested only small objects and organized point clouds, not the whole scene of an outdoor environment. Wang et al. (2015) studied a feature-based urban point cloud registration method using radar images with an "L-shape detection and matching" method. However, the point cloud data from TomoSAR are not of high resolution, so the method may work well for urban-scale environments but is not appropriate for construction sites. Gong and Seibel (2016) studied a three-dimensional registration process for multiple 3D point clouds collected from different viewpoints using machine vision. They solved the problem of registering point clouds with repetitive geometries using a feature-based registration algorithm and demonstrated higher accuracy compared to the ICP algorithm. However, their experimental environment was too small compared to outdoor construction environments.
Kim et al. (2016) presented a framework to register two point clouds by using SURF feature extraction from RGB panorama images. The experimental result was promising, but it was performed only in an indoor environment where many line-of-sight vertical and horizontal lines are available.
Table 1 summarizes existing point cloud registration technologies. The purpose of testing large sites as well as overlapped areas is to identify the challenging cases where the ICP registration algorithm does not perform well. The overlapped area refers to the size (%) of the area of intersection between consecutive scans. From our tests, a large site refers to the cases in which 1) the maximum range of the laser scan (i.e., 80 meters) is used to cover the site area, 2) the distance between consecutive laser scan positions is greater than about 10 meters, and 3) the overlapped area is less than around 89%. Under these criteria, the ICP algorithm did not perform well, based on our experimental results in Fig. 5. As shown in Table 1, the common limitations of existing approaches are: (1) they used targets for point cloud registration; (2) they tested only small and tailored point clouds of target objects; and (3) they scanned many times to guarantee a sufficiently overlapped area. In addition, automatic point cloud registration of unstructured and scattered environments such as construction sites has not yet been successfully demonstrated in the prior works. For these reasons, merging multi-source point information into one dataset is still of great interest and a challenge in the construction field. Therefore, the main objective of this study was to develop an automatic point cloud registration framework that takes a relatively small number of scans, without any manual adjustment, to build a complete as-built point cloud of a construction site.
Methodology
Achieving the requirements defined by this objective required designing a framework for the automatic registration method. This framework consists of four steps, as shown in Fig. 1. The first step is data acquisition using a 3D laser scanner and a digital camera, followed by RGB texture mapping, which merges 3D point clouds with 2D RGB images through a kinematics calculation.
The second step is initial alignment based on the extracted common features, which allows finding correspondences in the 3D point cloud data sets by using RGB-fused point cloud data. The third step is calculating the overlapped area between each pair of point clouds and determining the final alignment method. The last step, final alignment, utilizes ICP or the plane matching algorithm to match the point cloud data sets. The following sections discuss the proposed framework and present the experimental results.

Data Acquisition and RGB Texture Mapping
The proposed method for obtaining point clouds and RGB images requires a 3D laser scanner system with a built-in camera, which is a very common configuration in commercial laser scanner products. In this study, a robotic hybrid Light Detection and Ranging (LiDAR) system was used, which consists of four SICK 2D line laser scanners (80-meter working range at 25 Hz scan speed, 200 s per 360° scan, 190° vertical line) and a regular digital camera, as shown in
Fig. 2. The resolution of each line laser is 0.1667° in the vertical direction and 0.072° in the horizontal direction. The digital camera captures eight pictures per 360° scan to obtain the RGB information of the construction site. The customized 3D LiDAR system provides more flexibility in data collection than a typical commercial laser scanner. Multiple degree-of-freedom (DOF) kinematics problems were solved based on the built-in mechanical information between the laser scanners and the digital camera. The schematic diagram of the kinematics solution of the equipment used in this paper is shown in Fig. 2.
The local coordinate frame (x_0, y_0, z_0) indicates the mobile robot platform coordinates located at ground level, and (x_1, y_1, z_1) is the origin of the laser scanner coordinate system, which is located at the center of the platform frame. Local coordinate frame 2 is the base frame for distance measurements of surrounding objects, fixed at the center of each laser scanner. In addition, θ_1 is the body rotation angle and θ_2 is the angle from a laser scanner. From the relationship among this information, the kinematics problem is solved, as shown in Eq. (1).
$$\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = \begin{bmatrix} r\cos(\theta_1)\cos(\theta_2) + d\cos(\theta_1) \\ r\sin(\theta_1)\cos(\theta_2) + d\sin(\theta_1) \\ r\sin(\theta_2) + h \\ 1 \end{bmatrix} \quad (1)$$
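As a concrete illustration, Eq. (1) maps one range reading to platform coordinates. The following is a minimal sketch in Python; the offsets d and h (scanner position relative to the platform origin) come from the mounting configuration in Fig. 2, and the names used here are illustrative rather than from the paper.

```python
import numpy as np

def scan_point_to_platform(r, theta1, theta2, d, h):
    """Convert one laser range measurement to platform coordinates, per Eq. (1).

    r       measured range from the line laser (m)
    theta1  body rotation angle (rad)
    theta2  laser beam angle (rad)
    d, h    assumed horizontal/vertical offsets of the scanner from the platform origin (m)
    """
    x = r * np.cos(theta1) * np.cos(theta2) + d * np.cos(theta1)
    y = r * np.sin(theta1) * np.cos(theta2) + d * np.sin(theta1)
    z = r * np.sin(theta2) + h
    return np.array([x, y, z, 1.0])  # homogeneous point, matching Eq. (1)
```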
The kinematics solution corresponds to the extrinsic parameters of the digital camera, while the
intrinsic parameters, including focal length, image sensor format, and principal point, are estimated
by the pinhole camera model. The point clouds fused with RGB texture were collected as shown
in Fig. 3.
The RGB texture mapping step maps RGB data from the digital camera onto the 3D point cloud data from the laser scanners, which is a well-known process using the pinhole camera model. The advantages of RGB texture mapping are that it makes the scanned area easy to visualize and that it provides the correspondences between the 3D point cloud data and the 2D camera plane. In the modeling process, a camera calibration step is necessary for the digital camera, which includes finding both the internal and external parameter matrices of the camera. The internal parameter matrix consists of the intrinsic parameters, namely the focal length, image sensor format, and principal point, which can be estimated under the pinhole camera model, while the extrinsic parameters can be obtained through a geometric relationship based on the mounting configuration, such as the height and direction of the camera. Using these intrinsic and extrinsic parameters, the laser-scanned 3D
point cloud can be transformed into 3D camera coordinates, and then finally transformed into 2D pixel coordinates, as shown in Eq. (2).

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} u/w \\ v/w \\ w/w \end{bmatrix}, \qquad \begin{bmatrix} u \\ v \\ w \end{bmatrix} = K_{int} K_{ext} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R_{11} & R_{12} & R_{13} & T_x \\ R_{21} & R_{22} & R_{23} & T_y \\ R_{31} & R_{32} & R_{33} & T_z \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \quad (2)$$
In Eq. (2), X, Y, and Z refer to the three-dimensional coordinates in the world coordinate system. K_int denotes the intrinsic parameter matrix and K_ext denotes the extrinsic parameter matrix. The parameters f_x and f_y are associated with the focal length, whereas the parameters c_x and c_y represent the principal point (Tsai 1987). Also, the rotation matrix R and the translation vector T are required for the transformation from the world coordinate system to camera coordinates. This transformation is necessary since the laser-scanned 3D point cloud data are obtained in 3D world coordinates. The camera calibration process involves finding these parameter values. In this way, RGB texture mapping enables accurate texture matching between a point cloud and digital camera images. Fig. 3 shows the RGB texture-mapped point cloud of the construction site.
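For clarity, the projection in Eq. (2) can be sketched in a few lines of Python; the array names here are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def project_points(P_world, K_int, R, T):
    """Project laser-scanned 3D world points into 2D pixel coordinates, per Eq. (2).

    P_world : (N, 3) points in world coordinates
    K_int   : (3, 3) intrinsic matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
    R, T    : (3, 3) rotation and (3,) translation forming the extrinsic matrix
    """
    K_ext = np.hstack([R, T.reshape(3, 1)])                  # 3x4 extrinsic matrix
    P_h = np.hstack([P_world, np.ones((len(P_world), 1))])   # homogeneous coordinates
    uvw = (K_int @ K_ext @ P_h.T).T                          # projective (u, v, w)
    return uvw[:, :2] / uvw[:, 2:3]                          # divide by w -> (x, y) pixels
```

Each 3D point then takes the RGB value of the pixel it projects to, yielding the texture-mapped cloud in Fig. 3.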
Once texture-mapped point clouds have been acquired, the next step consists of extracting distinctive features. Feature points that are invariant to image scaling and rotation can be extracted from the RGB images captured by the camera mounted on the laser scanner system, and matched between images from different scan positions. Then, the common features can be used to find the corresponding points in the 3D point cloud by using the texture-mapped point cloud. This is possible because the texture mapping process establishes the relationship between 2D image plane positions (x, y) and 3D point cloud data (X, Y, Z). Feature points are generally used for image registration, 3D reconstruction, motion tracking, robot navigation, and object detection and recognition. For common feature extraction, Lowe (1999) proposed the SIFT method to detect and describe local features in images. SIFT features allow the same corner point to be uniquely detected even if the image is scaled. A faster version of SIFT was developed by Bay et al. (2006), known as Speeded Up Robust Features (SURF), which is also a local feature detector and
descriptor. In this study, SURF features were used to find the common features. SURF is sufficient to obtain reasonable results for the initial alignment, taking into account the advantage of its shorter computation time.

One of the problems when the SURF descriptor is used is identifying false common features. There are numerous common SURF features; however, not all the extracted feature points indicate the same descriptors. Therefore, false common features should be removed from the list of SURF feature point matches. This study uses the Random Sample Consensus (RANSAC) approach to find sets of consistent matching descriptors. RANSAC works by iteratively sampling points from a given set of features and finding the set of parameters shown in Eq. (3), where (x_1, y_1) and (x_2, y_2) are the pixel coordinates of a matched feature in each image, respectively:

$$\begin{bmatrix} x_2 \\ y_2 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x_1 \\ y_1 \end{bmatrix} + \begin{bmatrix} T_x \\ T_y \end{bmatrix} \quad (3)$$

The parameters θ, T_x, and T_y are the variables used to find the best estimate by maximizing the number of points that are considered to be within the model. The obtained optimal parameters are used to determine the true common SURF descriptors. In this study, three different images are selected at each scan position for the common feature matching.
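A compact sketch of this matching stage is shown below using OpenCV. Note that SURF ships in the opencv-contrib package (and may require a build with patented algorithms enabled), and that estimateAffinePartial2D fits a rotation-plus-translation model with RANSAC, tolerating an extra scale term, as a stand-in for Eq. (3). All names are illustrative.

```python
import cv2
import numpy as np

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # opencv-contrib build

def match_features(img1, img2):
    """Match SURF descriptors between two scans' images; reject false matches with RANSAC."""
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)

    # Brute-force matching with Lowe's ratio test to prune ambiguous descriptors.
    raw = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = []
    for pair in raw:
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
            good.append(pair[0])

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC fit of the 2D model of Eq. (3); the inlier mask flags true common features.
    _, inliers = cv2.estimateAffinePartial2D(pts1, pts2, method=cv2.RANSAC)
    keep = inliers.ravel().astype(bool)
    return pts1[keep], pts2[keep]
```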
Then, the next step is finding a transformation matrix between point clouds. One way to estimate the transformation matrix between point clouds is to apply the Kabsch algorithm (a root-mean-square-distance concept), which starts with two sets of paired points, P and Q, where each set of points is represented as an N×3 matrix. The transformation matrix consists of a rotation matrix and a translation vector. The optimal rotation matrix U between point sets P and Q can be calculated by singular value decomposition (SVD) of the covariance matrix A, and the translation
vector D can be obtained by calculating the difference between the centroids of the point sets. Therefore, the initial alignment matrix can be obtained as shown in Eq. (4) by using the matched SURF features and their corresponding 3D points:

$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} U_{11} & U_{12} & U_{13} & D_x \\ U_{21} & U_{22} & U_{23} & D_y \\ U_{31} & U_{32} & U_{33} & D_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \quad (4)$$
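The Kabsch step can be written directly from this description; the sketch below (plain NumPy, illustrative names) returns the 4×4 matrix of Eq. (4).

```python
import numpy as np

def kabsch(P, Q):
    """Optimal rigid transform mapping point set P onto Q (rotation U, translation D).

    P, Q : (N, 3) arrays of paired 3D points from the matched SURF features.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    A = (P - cP).T @ (Q - cQ)                    # 3x3 covariance matrix
    V, S, Wt = np.linalg.svd(A)
    d = np.sign(np.linalg.det(Wt.T @ V.T))       # guard against reflections
    U = Wt.T @ np.diag([1.0, 1.0, d]) @ V.T      # optimal rotation
    D = cQ - U @ cP                              # translation between centroids
    T = np.eye(4)                                # assemble the matrix of Eq. (4)
    T[:3, :3], T[:3, 3] = U, D
    return T
```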
Final Alignment with ICP and Plane Matching Based on Overlapped Area
The most well-known method for point cloud registration is ICP, which was introduced by Besl and McKay (1992) and works by iteratively finding common matching points between two point clouds, disregarding outliers, and minimizing the difference between them. Despite its numerous advantages, ICP-based methods have several limitations: 1) they are not useful if the set of data points contains many points that do not correspond to any model points; 2) they require an initial guess; and 3) they are computationally expensive due to the process of finding the closest point pairs. Although there have been advances in ICP, the basic concept of ICP algorithms is still similar; ICP still depends heavily on the size of the overlapped area, and works accurately only if a good pre-alignment of the point clouds already exists. This is the main reason the initial alignment is made based on feature matching, as described in the previous section. The results from the initial alignment provide the good a priori registration that is required for the final registration.
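For reference, the basic ICP loop looks as follows; this is a minimal point-to-point sketch (the paper uses the LM variant of Fantoni et al. 2012), reusing the kabsch() routine from the previous sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, n_iter=50):
    """Minimal point-to-point ICP: source and target are (N, 3) and (M, 3) arrays,
    already coarsely aligned by the feature-based initial alignment."""
    src = source.copy()
    tree = cKDTree(target)                       # accelerates closest-point queries
    T_total = np.eye(4)
    for _ in range(n_iter):
        _, idx = tree.query(src)                 # closest target point per source point
        T = kabsch(src, target[idx])             # best rigid fit for the current pairs
        src = src @ T[:3, :3].T + T[:3, 3]
        T_total = T @ T_total
    return T_total
```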
After the common feature-based transformation, the two point clouds are aligned closely enough to compute the overlapped area. To calculate the overlapped area between two point clouds, k-dimensional tree and k-nearest-neighbor search algorithms are used, which are optimized ways of finding the point in a given set that is closest to a given point, in the form of a proximity search. From the result of this nearest-neighbor search, the percentage of overlapped area can be calculated as the ratio between the area of the overlapped point cloud and that of the reference point cloud. In this paper, point-to-point ICP with Levenberg-Marquardt (LM) optimization (Fantoni et al. 2012) is used as the iterative algorithm. The LM-ICP method is an iterative procedure similar to the well-known gradient descent and Gauss-Newton algorithms, which can quickly find a local minimum.
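A sketch of the overlap computation is given below; the inlier distance threshold is an assumption for illustration, as the paper does not report one. The resulting percentage is what the framework compares against the roughly 89% threshold discussed next.

```python
import numpy as np
from scipy.spatial import cKDTree

def overlap_ratio(reference, aligned, dist_thresh=0.05):
    """Estimate the overlapped-area percentage between two coarsely aligned clouds.

    A point of the aligned cloud counts as overlapping when its nearest
    neighbor in the reference cloud lies within dist_thresh meters (assumed).
    """
    tree = cKDTree(reference)        # k-d tree over the reference cloud
    d, _ = tree.query(aligned)       # k-nearest-neighbor (k=1) distances
    return 100.0 * np.mean(d < dist_thresh)
```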
To compare the results of ICP and plane matching, the testbed results are visualized in Fig. 5. Fig. 5(a) shows the relationship between the overlapped area and the final alignment results (RMSE) using the ICP and plane matching methods, and Fig. 5(b) shows the relationship between the distance between each pair of scans and the final alignment results (RMSE) for the two methods. The overall results between pairs of
scans are significantly improved using the plane matching method, except when the overlapped area is greater than around 89% and the distance between each pair of scans is smaller than around 10 meters. This shows that ICP still works better than the plane matching algorithm when two consecutive scans have a highly overlapped area. However, the plane matching algorithm yields better results when scans have a low overlapped area. Therefore, the framework combines ICP and the plane matching algorithm in the final alignment process, resulting in more robust outcomes.
Table 2. Experimental conditions and results of final alignment between each pair of scans for Testbed #1

Scan ID         #1-#2   #2-#3   #3-#4   #4-#5   #5-#6   #1-#3   #2-#4   #3-#5
Overlap Area    93%     91%     90%     88%     89%     82%     85%     82%
Distance        8 m     6 m     8 m     10 m    9 m     14 m    14 m    18 m

Scan ID         #4-#6   #1-#4   #2-#5   #3-#6   #1-#5   #2-#6   #1-#6
Overlap Area    80%     68%     70%     67%     57%     55%     45%
Distance        19 m    24 m    24 m    27 m    34 m    32 m    43 m
Table 2 shows the percentage of overlapped area, the distance, and the final alignment results using ICP between each pair of scans. From the results of the ICP calculation between each pair of scans, the ICP algorithm performed well if the overlapped ratio between the point clouds was over around
89%, as shown in Table 2. If there is a low ratio of overlapped area between the point clouds, however, the ICP algorithm is no longer applicable. In general, construction sites are large, so obtaining highly overlapped scans of the whole construction area would require a very large number of scans. This is a time-consuming and very labor-intensive process. Also, the large data size incurs high computational costs when applying the ICP algorithm. To avoid this situation, this research proposes a new plane-based algorithm for finding the final alignment when there is a low ratio of overlapped area, while taking advantage of ICP when two scans have a sufficiently overlapped area; in our case, 89% or greater. As can be seen in Table 2, the results with ICP are better than those with plane-based final alignment when the threshold is defined as 89% or greater.
This new method relies on finding three plane correspondences between the point cloud to be registered and the reference point cloud. The selected planes have to be linearly independent and intersect at a unique point in order for the transformation parameters to be fully recovered. For example, one of the planes can be the ground plane, the second plane a vertical wall along the x-axis, and the third plane a vertical wall along the y-axis. To identify the walls, the RANSAC algorithm is used as the first step to perform plane segmentation on each point cloud. The RANSAC algorithm works by iteratively sampling points from a given point cloud and estimating a set of plane parameters of the form ax + by + cz + d = 0. The best estimate of these parameters is determined by maximizing the number of points that are considered inliers. The obtained plane parameters are used to segment the original point cloud into points belonging to the plane and the remaining points. Planes are extracted in order of decreasing number of supporting points. This process is repeated until three suitable plane candidates are found that satisfy the linear independence criteria. Then, the proposed framework compares the normal
vectors of each plane and identifies the matching planes from different scan positions. Once the closest match between normal vectors is found, the plane correspondences between the input point cloud and the reference point cloud are determined. Second, the rotation component R of the transformation matrix is calculated from the plane normal vectors found in the previous step. The rotation component is determined such that the normal vectors (n_1, n_2, n_3) of the input point cloud are transformed to match the corresponding normal vectors of the reference point cloud. An intermediate rotation matrix that rotates a vector v_1 onto another vector v_2 is derived from their cross and dot products, as in Eq. (5):

$$R_i = I + [v]_{\times} + [v]_{\times}^2 \, \frac{1-c}{s^2}, \qquad v = v_1 \times v_2, \quad s = \lVert v \rVert, \quad c = v_1 \cdot v_2 \quad (5)$$

where $[v]_{\times}$ denotes the skew-symmetric cross-product matrix of v. Three intermediate rotation matrices are calculated, one for each plane correspondence, and the final rotation matrix is obtained by multiplying the intermediate rotation matrices, as in Eq. (6):

$$R = R_3 R_2 R_1 \quad (6)$$
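The rotation recovery can be sketched as follows (plain NumPy, illustrative names); each intermediate rotation is fitted after applying the rotations found so far, and the degenerate anti-parallel case is left out for brevity.

```python
import numpy as np

def rotation_between(v1, v2):
    """Rotation matrix rotating unit vector v1 onto unit vector v2, per Eq. (5)."""
    v = np.cross(v1, v2)
    c = np.dot(v1, v2)
    s2 = np.dot(v, v)                    # squared sine of the angle between v1 and v2
    if s2 < 1e-12:
        # Parallel normals: identity. The anti-parallel case (c < 0) needs a
        # 180-degree rotation about any axis perpendicular to v1 (omitted here).
        return np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])  # skew-symmetric cross-product matrix
    return np.eye(3) + vx + vx @ vx * ((1.0 - c) / s2)

def rotation_from_normals(src_normals, ref_normals):
    """Compose Eq. (6), R = R3 R2 R1, from three matched plane normals."""
    R = np.eye(3)
    for n_src, n_ref in zip(src_normals, ref_normals):
        Ri = rotation_between(R @ np.asarray(n_src, float), np.asarray(n_ref, float))
        R = Ri @ R
    return R
```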
Third, the algorithm matches corner points, defined as the unique intersection points of the three planes, between the point clouds to calculate the translation component T of the transformation matrix. It solves the three plane equations simultaneously for the (x, y, z) values to calculate each corner point. The corresponding plane parameters are used to formulate the plane equations as a matrix-vector multiplication. Once the corner point is obtained for each point cloud, the translation vector is determined as the difference between the
positions of the two corner points. The calculation involved in this step is shown in Eq. (7):

$$T = \begin{bmatrix} x \\ y \\ z \end{bmatrix} - \begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} \quad (7)$$
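A minimal corner-point computation is shown below; stacking the three plane equations gives a 3×3 linear system, which is solvable exactly when the planes are linearly independent, as required above.

```python
import numpy as np

def corner_point(planes):
    """Unique intersection of three planes a*x + b*y + c*z + d = 0.

    planes : three (a, b, c, d) tuples from the RANSAC segmentation step.
    """
    A = np.array([p[:3] for p in planes], dtype=float)  # rows of plane normals
    d = np.array([p[3] for p in planes], dtype=float)
    return np.linalg.solve(A, -d)  # raises LinAlgError if the planes are dependent

# Translation per Eq. (7): difference of the matched corner points, e.g.
# T = corner_point(reference_planes) - corner_point(rotated_input_planes)
```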
The procedure for plane segmentation first extracts the planes with the largest number of points. This is because, in some cases, the planes detected from different scan positions do not match in terms of their geometric characteristics. Therefore, only one set of three planes with the largest number of points, and one corner point, is utilized for fine registration in this study. Finally, a registered version of the input point cloud is obtained after the rotation and translation operations are applied to each point in the point cloud. Fig. 6 demonstrates the three segmented planes and the intersection corner point from the coarsely transformed point cloud. The point located at the left in Fig. 6 indicates the corner point, which lies at the intersection of the three identified planes.
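The plane segmentation itself can be sketched as a standard RANSAC loop; the iteration count and inlier tolerance below are assumptions for illustration, as the paper does not report them.

```python
import numpy as np

def ransac_plane(points, n_iter=500, tol=0.02, rng=np.random.default_rng(0)):
    """RANSAC fit of one plane a*x + b*y + c*z + d = 0 to an (N, 3) point cloud."""
    best_inliers, best_plane = np.zeros(len(points), dtype=bool), None
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)            # candidate normal from 3 samples
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                              # degenerate (collinear) sample
        n = n / norm
        d = -np.dot(n, p1)
        inliers = np.abs(points @ n + d) < tol    # point-to-plane distances
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n[0], n[1], n[2], d)
    return best_plane, best_inliers

# Extract the largest plane, remove its inliers, and repeat until three
# linearly independent planes remain, as described above.
```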
Results
The data acquisition process for validating the proposed framework was performed on the Georgia Tech campus. The first testbed was a construction site with six different scan positions. Fig. 7 illustrates the sequence of point cloud registration, and Table 3 presents the results of the proposed framework. Fig. 7(d) shows the final registered point clouds, with the six scan positions marked with circles. To verify the results for this testbed, the scan with the higher ID in each pair was assumed to be the ground truth, and the deviation angle from each reference axis (in degrees) and the root mean square error (RMSE, in meters) were measured at each step of the proposed framework. As shown in Table 2, the ICP algorithm does not work between scan IDs #1 and #4. Also, it generates its worst result between scan IDs #4 and #6. Therefore, scan IDs #1, #4, and #6, whose point cloud data sets are not suitable for the ICP algorithm, were selected for the plane matching algorithm. From Table 3, it can be observed that the initial alignment using common feature extraction is effective in obtaining a coarse estimate for registration. The measured RMSE after the initial alignment ranges from 1 to 2 meters, and the deviation angle ranges between 1° and 10°. After the final alignment process, the RMSE is reduced to around 0.2 meters and the deviation angle to less than 0.35°.
Table 3. Deviation angles (degree) and RMSE (m) of each registration process with plane matching for Testbed #1

Registration scan ID   #1 - #4   #4 - #6
Overlapped Area        68%       80%
Distance               24 m      19 m
Table 4. Deviation angles (degree) and RMSE (m) with plane matching for Testbed #2
Registration scan ID          #7 - #8                          #8 - #9
Overlapped Area               70%                              56%
Distance                      26 m                             37 m
Registration process    Original    Initial    Final     Original    Initial    Final
Deviation angle X (°)   11.239      -1.325     -0.342    -8.382      -1.492     -0.129
Deviation angle Y (°)   -1.928      -0.823     -0.192    -3.321      0.592      -0.251
Deviation angle Z (°)   -78.836     12.324     0.427     50.329      4.324      0.332
RMSE (m)                -           3.012      0.095     -           2.529      0.088
The second testbed was performed near a target building with three different scan positions. Fig. 8 illustrates the result of point cloud registration using the proposed framework. As shown in Table 4, the percentage of overlapped area for every pair of scans was below 89% after the initial alignment, a condition under which the ICP algorithm did not perform well. Also, it can be observed that the measured RMSE after the final alignment is under 0.25 meters, and the deviation angle is less than 0.34°.
The third testbed was performed in an indoor environment, shown in Fig. 9. The purpose of this
testbed was to evaluate the proposed algorithm under different conditions and verify that the
method of finding three planes and one corner point is still viable. Similarly, the registration process was carried out between two scan positions using the proposed method of obtaining an initial alignment with visual feature points and a final alignment by matching planes and a corner point. From Table 5, it can be observed that the measured RMSE after the final alignment is under 0.05 meters, and the deviation angle is less than 0.13°.

Table 6 shows which scans were used for automatic registration and the final results. Six sets of laser scan data were taken for Testbed #1 to compare the proposed method with the ICP-based method. Among the six sets of laser scan data, the proposed method used only three sets (#1, #4, and #6) for automatic registration, while the ICP-based approach
required all six sets of scan data. Therefore, the proposed framework can reduce the number of scans required. For Testbed #2, all pairs of scans have a low overlapped area where ICP cannot be applied, so the plane matching algorithm was used for the final alignment.
Table 5. Deviation angles (degree) and RMSE (m) with plane matching for Testbed #3
Registration scan ID     #10 - #11
Overlapped Area          96%
Distance                 3 m
Registration process     Original    Initial Alignment    Final Alignment
Deviation angle X (°)    -17.404     -7.188               0.092
Deviation angle Y (°)    -3.155      -1.702               0.049
Deviation angle Z (°)    29.899      9.430                -0.127
RMSE (m)                 -           -                    0.047
Conclusions

A robust method for automatic point cloud registration without using marked targets was introduced and validated with empirical construction site data. A laser scanning system with a digital camera was used to obtain point clouds with mapped RGB texture data. The proposed framework consists of four steps. The first step includes data acquisition of point clouds and RGB
images, and data fusion using the kinematics solution. The second step involves obtaining an initial alignment by extracting common features from the digital images and finding their corresponding 3D positions in the point clouds. The third step calculates the overlapped area between scans and chooses the final alignment method between ICP and the plane matching algorithm. Lastly, the final accurate alignment is achieved using the result from the previous step. The main advantage of this framework is that it achieves automatic point cloud registration even when there is a low overlapped area between scans, where the ICP algorithm does not perform well. Also, it can be extended to any type of laser scanner with a built-in digital camera from which a kinematic relationship between the collected 3D point cloud data and the captured RGB images can be estimated.
The limitation of the proposed framework is that it requires three planes with one intersection point in the overlapped area. In general, however, construction sites contain many planes throughout the construction process, such as foundations, materials, temporary structures, and job trailers. Therefore, the proposed framework is reasonably applicable to construction sites, as validated in this study. Furthermore, many planes are easily found in indoor environments, so the proposed method works well indoors as well. In summary, the main contributions of this study to the existing knowledge of automatic point cloud registration are: (1) the proposed framework is applicable when the data are collected from a large and complex environment; (2) it performs well with a low overlapped area between scans; and (3) it takes a smaller number of scans to reconstruct a large site, thus reducing time, cost, and computational burden.
ACKNOWLEDGEMENT
This material is based upon work supported by the National Science Foundation (Award #CMMI-1358176). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.
REFERENCES
Bay, H., Tuytelaars, T., Van Gool, L., Leonardis, A., Bischof, H., and Pinz, A. (2006). “SURF:
Speeded Up Robust Features.” Computer Vision – ECCV 2006, 3951, 404–417.
Becerik-Gerber, B., Jazizadeh, F., Kavulya, G., and Calis, G. (2011). “Assessment of target types and layouts in 3D laser scanning for registration accuracy.” Automation in Construction, 20(5), 649–658.
Besl, P., and McKay, N. (1992). “A Method for Registration of 3-D Shapes.” IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2), 239–256.
Böhm, J., and Becker, S. (2007). “Automatic marker-free registration of terrestrial laser scans
using reflectance features.” In: 8th Conference on Optical 3-D Measurement Techniques,
338–344.
Brilakis, I., Lourakis, M., Sacks, R., Savarese, S., Christodoulou, S., Teizer, J., and Makhmalbaf, A. (2010). “Toward automated generation of parametric BIMs based on hybrid video and laser scanning data.” Advanced Engineering Informatics, 24(4), 456–465.
Cho, Y. K., and Gai, M. (2014). “Projection-Recognition-Projection Method for Automatic Object Recognition and Registration for Dynamic Heavy Equipment Operations.” ASCE Journal of Computing in Civil Engineering, 28(1), A4014002.
Cho, Y. K., and Haas, C. T. (2003). “Rapid geometric modeling for unstructured construction
workspaces.” Computer-Aided Civil and Infrastructure Engineering, 18(4), 242–253.
Dai, F., Rashidi, A., Brilakis, I., and Vela, P. (2013). “Comparison of Image-Based and Time-of-Flight-Based Technologies for Three-Dimensional Reconstruction of Infrastructure.” Journal of Construction Engineering and Management, 139(1), 69–79.
Dai, F., Rashidi, A., Brilakis, I., and Vela, P. A. (2011). “Generating the Sparse Point Cloud of a Civil Infrastructure Scene Using a Single Video Camera under Practical Constraints.” Winter Simulation Conference, Phoenix, AZ.
Eo, Y. D., Pyeon, M. W., Kim, S. W., Kim, J. R., and Han, D. Y. (2012). “Coregistration of
terrestrial lidar points by adaptive scale-invariant feature transformation with constrained
geometry.” Automation in Construction, Elsevier B.V., 25, 49–58.
Fantoni, S., Castellani, U., and Fusiello, A. (2012). “Accurate and automatic alignment of range
surfaces.” International Conference on 3D Imaging, Modeling, Processing, Visualization &
Transmission, 73–80.
Fekete, S., Diederichs, M., and Lato, M. (2010). “Geotechnical and operational applications for
3-dimensional laser scanning in drill and blast tunnels.” Tunnelling and Underground Space
Technology incorporating Trenchless Technology Research, Elsevier Ltd, 25(5), 614–628.
Gai, M., Cho, Y., and Xu, Q. (2013). “Target-free Automatic Point Clouds Registration Using 2D Images.” ASCE Journal of Computing in Civil Engineering, 1–7.
Golparvar-Fard, M., Bohn, J., Teizer, J., Savarese, S., and Peña-Mora, F. (2011). “Evaluation of image-based modeling and laser scanning accuracy for emerging automated performance monitoring techniques.” Automation in Construction, 20, 1143–1155.
Golparvar-Fard, M., Peña-Mora, F., Arboleda, C. A., and Lee, S. (2009). “Visualization of Construction Progress Monitoring with 4D Simulation Model Overlaid on Time-Lapsed Photographs.” Journal of Computing in Civil Engineering, 23, 391–404.
Gong, Y., and Seibel, E. (2016). “Feature-Based Three-Dimensional Registration for Repetitive Geometry in Machine Vision.” Journal of Information Technology & Software Engineering, 6(184).
Ham, Y., and Golparvar-Fard, M. (2013). “An automated vision-based method for rapid 3D energy performance modeling of existing buildings using thermal and digital imagery.” Advanced Engineering Informatics, 27(3), 395–409.
Ham, Y., and Golparvar-Fard, M. (2014). “Three-Dimensional Thermography-Based Method for Cost-Benefit Analysis of Energy Efficiency Building Envelope Retrofits.” Journal of Computing in Civil Engineering, B4014009.
Kim, P., Cho, Y. K., and Chen, J. (2016). “Target-Free Automatic Registration of Point Clouds.” 33rd International Symposium on Automation and Robotics in Construction (ISARC 2016), Auburn, AL, USA, 686–693.
Kim, S., Kim, M., Lee, J., Pyo, J., Heo, H., and Yun, D. (2011). “Registration of 3D Point Clouds for Ship Block Measurement.” 1–6.
Lowe, D. G. (1999). “Object recognition from local scale-invariant features.” Proceedings of the Seventh IEEE International Conference on Computer Vision, 2, 1150–1157.
Moussa, W., Abdel-Wahab, M., and Fritsch, D. (2012). “An Automatic Procedure for Combining Digital Images and Laser Scanner Data.” ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XXXIX-B5, 229–234.
Olsen, M. J., Kuester, F., Chang, B. J., and Hutchinson, T. C. (2010). “Terrestrial Laser
Scanning-Based Structural Damage Assessment.” Journal of Computing in Civil
Engineering, American Society of Civil Engineers, 24(3), 264–272.
Park, H. S., Lee, H. M., and Lee, I. (2007). “A New Approach for Health Monitoring of Structures: Terrestrial Laser Scanning.” Computer-Aided Civil and Infrastructure Engineering, 22, 19–30.
Rashidi, A., Brilakis, I., and Vela, P. (2015). “Generating Absolute-Scale Point Cloud Data of Built Infrastructure Scenes Using a Monocular Camera Setting.” ASCE Journal of Computing in Civil Engineering, 29(6).
Tang, P., Huber, D., Akinci, B., Lipman, R., and Lytle, A. (2010). “Automatic reconstruction of
as-built building information models from laser-scanned point clouds: A review of related
techniques.” Automation in Construction, 19(7), 829–843.
Tombari, F., and Remondino, F. (2013). “Feature-based automatic 3D registration for cultural heritage applications.” Proceedings of the 2013 Digital Heritage International Congress, 1, 55–62.