The objective of this project is to implement a basic keypoint detector, build a descriptor for each keypoint from its local patch, and then use these features to match correspondences between a pair of images taken from different angles and to compute some basic information about the camera positions. Following are the subsections of this project.
This report discusses each of these subsections in detail, with their algorithms and results.
We analysed how keypoint/corner detection varies with Gaussian blurring and scaling. Scaling the images down to a smaller size tends to produce more keypoints than the original scale, whereas blurring decreases the number of corners detected.
(Figure: corner detection results at Gaussian blur f=1, f=2, and f=3, shown across several scales.)
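The blur dependence described above can be reproduced with a minimal Harris-response sketch. This is an illustrative NumPy/SciPy version, not the submitted code; the `alpha` default and the window sigma of 1.0 are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img, sigma=1.0, alpha=0.05):
    """Harris corner response R = det(M) - alpha * trace(M)^2,
    where M is the local structure tensor of the blurred image."""
    img = gaussian_filter(img.astype(float), sigma)
    Iy, Ix = np.gradient(img)
    # Structure-tensor entries, averaged over a local window.
    Sxx = gaussian_filter(Ix * Ix, 1.0)
    Syy = gaussian_filter(Iy * Iy, 1.0)
    Sxy = gaussian_filter(Ix * Iy, 1.0)
    return Sxx * Syy - Sxy ** 2 - alpha * (Sxx + Syy) ** 2

# Synthetic test image: a bright square whose four corners
# produce the strongest responses.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
R1 = harris_response(img, sigma=1.0)
R3 = harris_response(img, sigma=3.0)
```

Comparing `R1.max()` against `R3.max()` shows the heavier blur weakening the corner response, consistent with the observation above.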
After the corner scores across the different Gaussians are fused by addition, the scores across the scales are fused by multiplication so as to retain keypoints detected at every scale. Non-maxima suppression: the keypoints are grouped into labels using a connected-components method, and within each component only the keypoint with the maximum corner score is kept; the rest are suppressed.
This results in the following keypoint counts for varying alpha:

| Alpha | Keypoints |
|-------|-----------|
| 0.2   | 3227      |
| 0.4   | 787       |
| 0.6   | 324       |
| 0.8   | 118       |

(Figure: detected keypoints overlaid on the image for each alpha value.)
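The connected-components non-maxima suppression can be sketched as below, using `scipy.ndimage.label`; the threshold and the toy score map are illustrative, not values from the submission:

```python
import numpy as np
from scipy import ndimage

def nms_connected_components(score, threshold):
    """Within each connected region of above-threshold scores,
    keep only the single highest-scoring pixel."""
    mask = score > threshold
    labels, n = ndimage.label(mask)
    keypoints = []
    for i in range(1, n + 1):
        ys, xs = np.where(labels == i)
        best = np.argmax(score[ys, xs])
        keypoints.append((ys[best], xs[best]))
    return keypoints

score = np.zeros((10, 10))
score[2, 2], score[2, 3] = 5.0, 3.0   # one blob; its maximum is (2, 2)
score[7, 7] = 4.0                     # a second, separate blob
kps = nms_connected_components(score, 1.0)
```

The two adjacent high scores collapse into a single keypoint, while the isolated one survives on its own.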
The obtained keypoints are used to create local patches around them and to form a signature that describes each keypoint in terms of its local gradients. The study material at http://www.aishack.in/2010/07/implementing-sift-in-opencv/ was used to understand how to build these descriptors. The steps followed are as below.
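A simplified sketch of such a descriptor is given below; it omits orientation assignment and Gaussian weighting, and is an illustration rather than the submitted code. The 16x16 patch, 4x4 cells, and 8 orientation bins per cell give the 128 dimensions:

```python
import numpy as np

def patch_descriptor(img, y, x):
    """128-d SIFT-style descriptor: the 16x16 patch around (y, x)
    is split into 4x4 cells, and each cell contributes an 8-bin
    gradient-orientation histogram weighted by magnitude."""
    patch = img[y - 8:y + 8, x - 8:x + 8].astype(float)
    gy, gx = np.gradient(patch)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)        # angles in [0, 2*pi)
    bins = (ang / (2 * np.pi) * 8).astype(int) % 8
    desc = np.zeros(128)
    for cy in range(4):
        for cx in range(4):
            cell = (slice(cy * 4, cy * 4 + 4), slice(cx * 4, cx * 4 + 4))
            hist = np.bincount(bins[cell].ravel(),
                               weights=mag[cell].ravel(), minlength=8)
            desc[(cy * 4 + cx) * 8:(cy * 4 + cx) * 8 + 8] = hist
    # Normalize so the descriptor is robust to illumination changes.
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc

img = np.random.default_rng(0).random((32, 32))
d = patch_descriptor(img, 16, 16)
```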
I failed to compute the keypoint orientation and was unable to resolve the bug; this code is commented out in the submission.
Feature matching is finding the nearest neighbour in the 128-dimensional feature space. It should also be noted that a good match is typically a unique one, without multiple similar candidates across the image. The ratio constraint, which compares the distances to the first and second nearest neighbours, is a good trick to resolve this problem. Below are the results of the matches between pairs of images.
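The ratio constraint can be sketched as below; the 0.8 threshold and the toy descriptors are illustrative assumptions, not the values used in the submission:

```python
import numpy as np

def ratio_match(desc1, desc2, ratio=0.8):
    """Match each row of desc1 to its nearest neighbour in desc2,
    keeping only matches whose nearest / second-nearest distance
    ratio is below `ratio` (rejects ambiguous matches)."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        if dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, order[0]))
    return matches

a = np.eye(3)                           # three distinct unit descriptors
b = np.vstack([np.eye(3), [1, 0, 0]])   # last row duplicates the first
m = ratio_match(a, b)
```

The first descriptor of `a` has two equally good candidates in `b`, so the ratio test discards it; only the two unambiguous matches survive.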
This has been a very good exercise: implementing the corner detectors, then computing their signatures using SIFT-based descriptors. Matching these keypoints across the images was fun.