Classical Image Registration Survey
Abstract
Image registration is the process of aligning two or more images of the same scene taken from different viewpoints, at different times or by different sensors. It geometrically aligns two images, i.e., the base image and the reference image. There are different approaches to image registration, and these approaches are categorized according to their nature as area-based and feature-based. They are also categorized according to the four basic steps of the image registration procedure: feature detection, feature matching, mapping function estimation, and image transformation and resampling. The advantages and disadvantages of the different methods are discussed in the paper. The main aim of this paper is to provide knowledge of the different image registration methods used in various application areas.
Keywords: Image registration, feature detection, feature matching, mapping function,
sampling
1. Introduction
Image registration is the process of aligning two or more images of the same scene taken from different views, at different times and by different sensors. It geometrically aligns two images: the base image and the reference image. Image registration is a very important step in image analysis, in which the necessary information is obtained by combining information from different data sources. Image registration is generally required in remote sensing (environment monitoring, weather forecasting), in medical image processing (combining CT and MRI data, monitoring tumor growth) and in computer vision.
A comprehensive study of image registration techniques was published in 1992 by Brown [3]. The goal of this paper is to introduce the techniques that came after it and to map the current development of image registration. We do not go into the details of particular algorithms or show the results of comparative experiments. Instead, we summarize the useful approaches and point out the interesting parts of image registration.
In Section II, the concepts and problems of image registration are discussed. In Section III, both area-based and feature-based techniques of feature detection are discussed. In Section IV, various feature matching algorithms are discussed. In Section V, different methods for mapping function estimation are discussed. In Section VI, different methods for image resampling and transformation are discussed. Section VII shows the future trends in image registration.
Target image: The image which does not change and is used as the basis for the other images.
Source image: The image which is geometrically aligned with the target image.
Transformation and warping: The mapping function which is used to modify the source image towards the target image.
Image registration can be categorized into four different classes according to the manner of image acquisition [1].
Different viewpoints: Images of the same scene are captured from different viewpoints. The purpose is to gain a larger two-dimensional view of the scene. Example: remote sensing, for image mosaicking.
Different times: Images of the same scene are acquired at different times, possibly under different conditions. The purpose is to find and evaluate the changes that occur in the scene between the acquisitions. Examples: remote sensing and computer vision.
Different sensors: Images of the same scene are acquired by different sensors. The purpose is to integrate the information from the different sources to obtain a more detailed scene representation. Examples: remote sensing and medical imaging.
Scene to model registration: An image of the scene and a model of the scene are registered. The purpose is to localize the acquired image in the model and to compare them. Examples: computer vision, remote sensing and medical imaging.
Barbara Zitova and Jan Flusser classified the image registration techniques as follows [1, 3]:
Area-Based Methods: These methods are applied when salient structural information about the image is absent and the distinctive information is provided by gray levels or colors.
Feature-Based Methods: These methods are applied when local structural information about the image is available.
3. Feature Detection
Formerly, the features were objects detected manually by an expert. There are two main approaches to feature detection [1, 3].
3.1 Area-Based Methods
Area-based methods put emphasis on the feature matching step rather than on feature detection. No features are detected in this approach, so the first image registration step is omitted in area-based methods.
3.2 Feature-Based Methods
3.2.1 Region Features
Region features can be closed-boundary regions of appropriate size, such as reservoirs, forests or urban areas. Regions are generally represented by their centers of gravity. Region features are detected by means of segmentation methods, and the accuracy of the segmentation significantly influences the resulting registration. S. K. Pal [4] proposed a refinement of the segmentation to produce better registration accuracy. The Harris-Laplace region detector, introduced in 2004 [3], locates potentially relevant points with the Harris corner detector and then selects the points at their characteristic scale. A new region-feature-descriptor-based image registration method was proposed in 2012.
3.2.2 Line Features
Line features can be represented by line segments, object contours, roads or anatomical structures in medical imaging. Standard edge detection methods such as the Canny detector or the Laplacian detector are used for line feature detection. The Marr-Hildreth edge detector [6] was a very popular edge detector before Canny presented his work; the Canny edge detector [5] is now widely considered the standard edge detection algorithm. A recent detailed comparative study of various edge detection algorithms can be found in [7]. Apart from this, Li proposed a new method to detect lines in the reference and source images. Figure 1 shows an artificial image and its corresponding line segments and edges.
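As an illustration of line feature extraction, a minimal Python/OpenCV sketch is given below; the file name, thresholds and Hough parameters are placeholder values chosen for this example, not settings taken from the surveyed papers.

    import cv2
    import numpy as np

    # Grayscale input image ("reference.png" is a placeholder path).
    img = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)

    # Canny edge detector [5]: Gaussian smoothing, gradient computation,
    # non-maximum suppression and hysteresis thresholding.
    edges = cv2.Canny(img, threshold1=50, threshold2=150)

    # Straight line segments extracted from the edge map with the
    # probabilistic Hough transform can serve as line features for matching.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=30, maxLineGap=5)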
3.2.3 Corner Features
Corners are estimated as points of high curvature on the region boundaries. The first corner detectors appeared in the late 1970s. In 1977, Moravec [8] defined the concept of points of interest as distinct regions in images. Based on Moravec's concept, Harris developed the algorithm known as the Harris corner detector. Figure 2 shows an image containing different types of corners. A comparative study of interest point performance on a data set can be found in 2011, and a comparison of different corner detection methods in 2009 [8].
Figure 2. Artificial Image Containing Different Corner Types and Real Image of
Block and House Scene
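A minimal sketch of corner detection in the spirit of the Harris detector, using OpenCV, is given below; the file name and the response threshold are placeholder values.

    import cv2
    import numpy as np

    # Grayscale input image ("block_scene.png" is a placeholder path).
    img = cv2.imread("block_scene.png", cv2.IMREAD_GRAYSCALE)

    # Harris response: the eigenvalues of the local gradient auto-correlation
    # matrix distinguish flat regions, edges and corners.
    response = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)

    # Points whose response is close to the global maximum are kept as
    # high-curvature control point candidates.
    corners = np.argwhere(response > 0.01 * response.max())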
3.2.4 Shape Features
Some objects may be recognized by their outline shape, which is a very powerful feature in image processing. Yang Mingqiang and Kpalma Kidiyo discussed in 2008 the essential properties of a shape feature, which include invariance to rotation, translation and scaling, noise resistance and reliability [9]. Figure 3 shows the shapes of different objects, such as a wing and a palm tree.
Figure 3. Wings are Longish and Triangular, Palm Trees Consist of a Lengthy
Stem and a Bushy Head
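As an illustration of shape-based features, the following sketch compares two object silhouettes with Hu moment invariants, which are insensitive to translation, rotation and scaling; the file names are placeholders and Hu moments are only one of many possible shape descriptors.

    import cv2

    # Binary silhouettes of two objects ("wing.png" and "palm.png" are placeholders).
    a = cv2.imread("wing.png", cv2.IMREAD_GRAYSCALE)
    b = cv2.imread("palm.png", cv2.IMREAD_GRAYSCALE)

    # matchShapes compares the Hu moment invariants of the two shapes;
    # smaller values indicate more similar outlines.
    distance = cv2.matchShapes(a, b, cv2.CONTOURS_MATCH_I1, 0.0)
    print("shape dissimilarity:", distance)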
4. Feature Matching
After detecting the features, we have to match them, i.e., determine which features come from corresponding locations in the different images. Again, two different aspects of feature matching are discussed: area-based and feature-based.
4.1 Area-Based Methods
All techniques in the area-based methods merge the feature detection step with the feature matching step and deal with the images without detecting salient objects in them.
4.1.1 Correlation-Like Methods
Cross correlation is the first basic approach to the registration process and is generally used for pattern matching; it is the classical representative of the area-based methods [3]. Cross correlation is a similarity measure, or match metric, C(u, v) of an image I(x, y) with displacement u in the X direction and v in the Y direction. The two-dimensional cross-correlation function is shown below.
C(u, v) = \sum_{x}\sum_{y} I(x, y)\, T(x - u, y - v)

where T(x, y) denotes the window (template) taken from the sensed image.
This similarity measure is computed for window pairs from the sensed and target images. The window pair for which the maximum is found is set as the corresponding one.
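A minimal sketch of this window matching with OpenCV is shown below; it uses the normalized variant of cross correlation provided by matchTemplate, and the file names are placeholders.

    import cv2

    # Target (reference) image and a window taken from the sensed image;
    # both file names are placeholders.
    target = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)
    window = cv2.imread("sensed_window.png", cv2.IMREAD_GRAYSCALE)

    # C(u, v) evaluated at every displacement; the normalized form is less
    # sensitive to local intensity changes than plain cross correlation.
    C = cv2.matchTemplate(target, window, cv2.TM_CCORR_NORMED)

    # The displacement with the maximum similarity is taken as the
    # position of the corresponding window.
    _, max_val, _, (u, v) = cv2.minMaxLoc(C)
    print("best displacement:", (u, v), "similarity:", max_val)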
The two main disadvantages of the correlation-like methods are the flatness of the similarity measure maxima and the high computational complexity.
4.1.2 Fourier Methods
The correlation theorem states that the Fourier transform of the correlation of two images is the product of the Fourier transform of one image and the complex conjugate of the Fourier transform of the other [10].
The Fourier transform of an image f(x, y) is a complex function with a real part R(u, v) and an imaginary part I(u, v) at each frequency (u, v) of the frequency spectrum:

F(u, v) = R(u, v) + j\,I(u, v) = |F(u, v)|\, e^{\,j\phi(u, v)}

where |F(u, v)| is the magnitude and \phi(u, v) is the phase angle.
The phase correlation method is based on the Fourier shift theorem and was proposed for the registration of translated images.
Recently, image mosaicking based on phase correlation and the Harris operator was proposed by Yang, in which the scaling and translation are estimated first and the unregistered image is adjusted; after that, feature points are detected and matched.
It is observed that Fourier methods are preferred over correlation methods when computational speed is required or when the images are corrupted by noise.
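A minimal NumPy sketch of phase correlation for a pure translation is given below; it assumes two equally sized grayscale arrays and ignores windowing and sub-pixel refinement.

    import numpy as np

    def phase_correlation(f, g):
        """Return the integer translation (dx, dy) by which image g is
        shifted with respect to image f, i.e. g(x, y) ~ f(x - dx, y - dy)."""
        F = np.fft.fft2(f)
        G = np.fft.fft2(g)
        # Normalized cross-power spectrum: only the phase difference remains.
        R = np.conj(F) * G
        R /= np.abs(R) + 1e-12                 # avoid division by zero
        # Its inverse transform has a delta-like peak at the translation.
        r = np.fft.ifft2(R).real
        dy, dx = np.unravel_index(np.argmax(r), r.shape)
        # Map peaks beyond half the image size to negative shifts.
        if dy > f.shape[0] // 2:
            dy -= f.shape[0]
        if dx > f.shape[1] // 2:
            dx -= f.shape[1]
        return dx, dy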
4.1.3 Mutual Information Methods
Mutual information based registration begins with the estimation of the joint probability of the intensities of corresponding pixels in the two images. The mutual information between two random variables X and Y is given by the formula [3]:
MI(X, Y) = H(X) + H(Y) - H(X, Y)

where H(X) = -E_X[\log P(X)] represents the entropy of the random variable X and P(X) is the probability distribution of X. The method is based on maximizing MI.
It is observed that MI gives more accurate results than many other registration methods, but when the images have low resolution or contain little information it gives worse results.
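The following sketch estimates the mutual information of two equally sized grayscale images from their joint intensity histogram; in registration this value would be maximized over the transformation parameters. The bin count is an arbitrary placeholder.

    import numpy as np

    def mutual_information(x, y, bins=64):
        """Estimate MI(X, Y) = H(X) + H(Y) - H(X, Y) from the joint
        intensity histogram of two equally sized grayscale images."""
        joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
        p_xy = joint / joint.sum()              # joint distribution P(X, Y)
        p_x = p_xy.sum(axis=1)                  # marginal P(X)
        p_y = p_xy.sum(axis=0)                  # marginal P(Y)

        def entropy(p):
            p = p[p > 0]                        # ignore empty bins
            return -np.sum(p * np.log2(p))

        return entropy(p_x) + entropy(p_y) - entropy(p_xy)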
4.2 Feature-Based Methods
Feature-based methods use image features derived by a feature extraction algorithm, instead of intensity values, for matching purposes.
4.2.1 Methods Using Spatial Relations
Methods based on spatial relations are usually applied if the detected features are ambiguous or if their neighborhoods are distorted.
Barrow introduced chamfer matching for image registration in 1977: line features detected in the images are matched by minimizing the distance between them. Recently, Gongjian Wen developed a high-performance feature matching method for image registration by combining spatial and similarity information [12].
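A chamfer-style matching cost can be sketched as follows: the distance transform of the reference edge map gives, for every pixel, the distance to the nearest edge, and the cost of a candidate translation of the sensed edge points is their mean distance. This is only an illustrative simplification using SciPy's distance transform; the function and argument names are placeholders introduced here, not taken from [12].

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def chamfer_cost(ref_edges, sensed_points, dx, dy):
        """Mean distance from the translated edge points of the sensed image
        to the nearest edge pixel of the reference; ref_edges is a boolean
        edge map, sensed_points an (N, 2) array of integer (row, col)
        coordinates, and (dx, dy) the candidate translation."""
        # Distance from every pixel to the nearest reference edge pixel.
        dist = distance_transform_edt(~ref_edges)
        rows = np.clip(sensed_points[:, 0] + dy, 0, ref_edges.shape[0] - 1)
        cols = np.clip(sensed_points[:, 1] + dx, 0, ref_edges.shape[1] - 1)
        # Registration searches for the translation minimizing this cost.
        return dist[rows, cols].mean()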
4.2.2 Relaxation Methods
One of the well-known relaxation methods addresses the consistent labeling problem (CLP), in which each feature from the sensed image is labeled with the label of a feature from the target image such that the labeling is consistent with that of the other features [12].
Another solution to the CLP, and to image registration, is backtracking, where a consistent labeling is generated recursively.
4.2.3 Pyramids and Wavelets
In an early approach from 1977, a sub-window was used to find probable candidates for the corresponding window in the reference image, and then the full-size window was applied. Later, a rectangular grid of windows was used, on which cross correlation is performed to reduce the computational load. All these techniques are examples of early pyramidal methods [3].
Recently, wavelet decomposition of the images has been proposed for the pyramidal approach. Many comparison tests have been carried out to establish which wavelet family has the best performance.
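The early two-stage idea described above can be sketched as follows: a small central sub-window first selects a few probable positions in the reference image, and the full-size window is then evaluated only at those candidates. The sub-window size and number of candidates are placeholder values.

    import cv2
    import numpy as np

    def two_stage_match(reference, window, sub=32, candidates=5):
        """Cheap search with a central sub-window, followed by evaluation of
        the full window only at the best candidate positions."""
        h, w = window.shape
        cy, cx = h // 2, w // 2
        sub_win = window[cy - sub // 2:cy + sub // 2, cx - sub // 2:cx + sub // 2]

        # Stage 1: match the small sub-window over the whole reference image.
        coarse = cv2.matchTemplate(reference, sub_win, cv2.TM_CCOEFF_NORMED)
        best_idx = np.argsort(coarse.ravel())[::-1][:candidates]
        positions = np.column_stack(np.unravel_index(best_idx, coarse.shape))

        # Stage 2: evaluate the full window only at the candidate positions.
        best, best_score = None, -np.inf
        for y, x in positions:
            # Shift from sub-window coordinates to full-window coordinates.
            ty, tx = y - (cy - sub // 2), x - (cx - sub // 2)
            if ty < 0 or tx < 0 or ty + h > reference.shape[0] or tx + w > reference.shape[1]:
                continue
            patch = reference[ty:ty + h, tx:tx + w]
            score = cv2.matchTemplate(patch, window, cv2.TM_CCOEFF_NORMED)[0, 0]
            if score > best_score:
                best, best_score = (tx, ty), score
        return best, best_score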
4.2.4 Methods Using Invariant Descriptors
As an alternative to methods exploiting spatial relations, the correspondence of features can be estimated by means of their descriptors. A descriptor contains information about the feature points detected in both the reference image and the source image. The most common method is to use closed-boundary regions as features. Theoretically, any invariant and sufficiently discriminative shape descriptor can be employed in region matching, such as shape vectors. Various types of descriptors are used, but we will proceed with an examination of SIFT, its variants and SURF feature matching.
The evolution of SIFT and its variants is summarized below:

No. | Name of the Algorithm | Evolution Year | Type of Image | Comparison with SIFT
1 | Distinctive SIFT | 2004 | All types of images | Same as SIFT
2 | PCA-SIFT (Principal Component Analysis) | 2004 | Gradient images | Fast descriptor, but less distinctive than SIFT
3 | GLOH (Gradient Location and Orientation Histogram) | 2005 | Gradient images | More robust than SIFT and PCA-SIFT
4 | CSIFT (Color) | 2006 | Colored images | Performs better with respect to the number of detected features
5 | ASIFT (Affine) | 2009 | Object images having a smooth boundary | Reduced complexity
6 | - | 2010 | Synthetic Aperture Radar images | Reduced processing time
7 | Robust SIFT | 2010 | Remote sensing images | Higher accuracy and correct match rate
8 | BF-SIFT (Bilateral Filter) | 2012 | Synthetic Aperture Radar images | -
9 | Multilevel SIFT | 2012 | - | Solves the memory problem and reduces computational complexity
SURF Descriptor
SURF (Speeded Up Robust Features) is a robust local feature detector first presented by Herbert Bay in 2006 that can be used in various computer vision tasks such as object recognition and feature matching. SURF is inspired by the SIFT descriptor. The standard version of SURF is several times faster than SIFT, and it is claimed to be more robust than SIFT [13].
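As an illustration of descriptor-based matching, a minimal sketch using OpenCV's SIFT implementation (available in OpenCV 4.4+) is shown below; SURF would be used analogously where its implementation is available. The file names and the ratio threshold are placeholders.

    import cv2

    # Grayscale reference and sensed images; file names are placeholders.
    ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
    sensed = cv2.imread("sensed.png", cv2.IMREAD_GRAYSCALE)

    # SIFT keypoints and 128-dimensional descriptors [11].
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref, None)
    kp2, des2 = sift.detectAndCompute(sensed, None)

    # Brute-force matching with Lowe's ratio test: keep a match only when it
    # is clearly better than the second-best candidate.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]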
7. Conclusion
Image registration is one of the important tasks when we want to integrate and analyze information from different sources to obtain more accurate information. This paper gives a survey and review of the classical and recent registration methods. The main aim of this survey is to present the major advances in the field of image registration. Image registration involves a vast problem set which includes different problems faced in image processing, such as image fusion and object detection. This paper makes genuine efforts to cover all possible techniques and the work done in the image registration field.
References
[1] B. Zitova and J. Flusser, "Image registration methods: A survey", Image and Vision Computing, vol. 21, (2003), pp. 977-1000.
[2] M. Deshmukh and U. Bhosle, "A survey of image registration", International Journal of Image Processing (IJIP), vol. 5, Issue 3, (2011).
[3] L. G. Brown, "A Survey of Image Registration Techniques", ACM Computing Surveys, vol. 24, no. 4, (1992), pp. 325-376.
[4] N. R. Pal and S. K. Pal, "A review on image segmentation techniques", Pattern Recognition, vol. 26, (1993), pp. 1277-1294.
[5] J. Canny, "A computational approach to edge detection", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, (1986), pp. 679-698.
[6] R. Maini and H. Aggarwal, "Study and Comparison of Various Image Edge Detection Techniques", International Journal of Image Processing (IJIP), vol. 3, Issue 1, (2010).
[7] H. Vishwakarma and S. K. Katiyar, "Comparative study of edge detection algorithms on remote sensing images using MATLAB", International Journal of Advances in Engineering Research (IJAER), vol. 2, Issue VI, (2011) December.
[8] J. Liu, A. Akas, Al-Obaidi and A. Moravec, "A comparative study of different corner detection methods", IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA), (2009).
[9] Y. Mingqiang, K. Kidiyo and R. Joseph, "A survey of shape feature extraction techniques", in Pattern Recognition, Peng-Yeng Yin (Ed.), (2008), pp. 43-90.
[10] R. N. Bracewell, The Fourier Transform and Its Applications, McGraw-Hill, New York, (1965).
[11] D. G. Lowe, "Distinctive image features from scale-invariant keypoints", International Journal of Computer Vision, vol. 60, no. 2, (2004), pp. 91-110.
[12] K. Sharma and A. Goyal, "Classification Based Survey of Image Registration Methods", 4th ICCCNT, IEEE, (2013) July 4-6, Tiruchengode, India.
[13] H. Bay, A. Ess, T. Tuytelaars and L. Van Gool, "Speeded-up robust features (SURF)", Computer Vision and Image Understanding, (2008).
[14] J. Sachs, "Image Resampling", (2010). Available: www.dlc.com/resampling.pdf.
Authors
Siddharth Saxena, he has received the B.E. degree in Computer
Science and Engineering, from ITM University, Gwalior, India in
2011. Currently he is pursuing M. Tech in Information Technology
from MITS, Gwalior and it will be completed in 2014. His
researchincludeImage restoration and Image Restoration