
Feature matching using ORB algorithm in Python-OpenCV

Last Updated : 20 Aug, 2025

Feature matching is an important technique for finding and comparing similar points between images. ORB (Oriented FAST and Rotated BRIEF) is an efficient algorithm for this task: it combines FAST, which detects keypoints, with BRIEF, which describes them. Since plain BRIEF struggles with rotation, ORB improves on it by steering each descriptor according to its keypoint's orientation. It is a great alternative to SIFT and SURF, providing similar results while being patent-free, so there are no licensing fees.

Key Features of ORB

Let's look at the key features that make ORB a popular choice for feature detection and matching:

  1. Fast and Efficient: ORB is optimized for speed, making it suitable for real-time applications. Its fast processing ensures quick feature detection and matching without compromising performance.
  2. Rotation Invariant: One of the major challenges with feature matching is dealing with image rotation. It addresses this by making the BRIEF descriptors rotation-invariant. This means it can successfully match features even if the images are rotated.
  3. Multiscale Detection: It can detect features at different scales using an image pyramid. This allows the algorithm to handle varying image sizes, making it robust in situations where the images are zoomed in or out.

How to Perform Feature Matching using ORB in Python?

Now let's see the steps involved in implementing it in Python with OpenCV. Here we use a sample image, which you can download from here, or you can use any images of your choice.

Step 1: Installing required libraries

Before we start, make sure OpenCV is installed. If not, install it with the command below:

!pip install opencv-python

Step 2: Importing Libraries

Here we use the OpenCV, NumPy and Matplotlib libraries for the implementation.

Python
import numpy as np
import cv2
import matplotlib.pyplot as plt

Step 3: Loading the Images

We load the query and train images using cv2.imread(). The images are then converted to grayscale as ORB works with single-channel (grayscale) images, simplifying the feature detection process.

Python
# query image (the object to look for) and train image (the scene)
query_img = cv2.imread('/content/sports-car test.webp')
train_img = cv2.imread('/content/sports-car train.webp')

# convert to grayscale, since ORB operates on single-channel images
query_img_bw = cv2.cvtColor(query_img, cv2.COLOR_BGR2GRAY)
train_img_bw = cv2.cvtColor(train_img, cv2.COLOR_BGR2GRAY)

Step 4: Detecting Keypoints and Finding Descriptors

We use the ORB detector to identify keypoints (distinct features) in both images and find their corresponding descriptors. Keypoints represent unique image features while descriptors are numerical values that describe them for matching.

Python
orb = cv2.ORB_create()

queryKeypoints, queryDescriptors = orb.detectAndCompute(query_img_bw, None)
trainKeypoints, trainDescriptors = orb.detectAndCompute(train_img_bw, None)

Step 5: Matching the Descriptors

We use the Brute-Force Matcher to compare the descriptors between the query and train images. Because ORB produces binary descriptors, Hamming distance is the appropriate metric, and crossCheck=True keeps only mutually best matches. Sorting by distance puts the strongest matches first.

Python
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(queryDescriptors, trainDescriptors)

# strongest (lowest-distance) matches first
matches = sorted(matches, key=lambda m: m.distance)

Step 6: Visualizing the Matches

Finally, we visualize the matches by drawing lines between the matched keypoints. The output image is resized for improved visibility and displayed using Matplotlib.

Python
final_img = cv2.drawMatches(query_img, queryKeypoints, 
                             train_img, trainKeypoints, matches[:20], None)
final_img = cv2.resize(final_img, (1000, 650))


plt.figure(figsize=(10, 6))
plt.imshow(cv2.cvtColor(final_img, cv2.COLOR_BGR2RGB)) 
plt.title("Feature Matches")
plt.axis('off')  
plt.show()

Output:

[Output image: the 20 matched keypoints drawn as lines between the query and train images]

Applications of ORB

  1. Object Recognition: Identifying objects in different images, even if they are rotated or scaled differently.
  2. Image Stitching: Combining multiple images to form a panorama by matching common features.
  3. Augmented Reality: Matching real-world scenes to virtual objects by identifying key features in camera feeds.
  4. 3D Reconstruction: Matching features from different views to construct 3D models.

Limitations of ORB

  1. Less Accurate in Low-texture Areas: ORB may not perform as well in areas with fewer distinct features or low texture.
  2. Sensitivity to Noise: Like many feature detectors, it can be sensitive to noise in the images.
  3. No Descriptor Matching Optimization: Brute-force matching compares every descriptor pair, so while ORB itself is efficient, the matching step scales poorly to large datasets and often needs an approximate matcher.
