CS 143 / Project 2 / Local Feature Matching


The objective of this project is to implement a basic keypoint detector, build a descriptor for each keypoint from its local patch, and then use these features to match correspondences between a pair of images taken from different viewpoints, from which some basic information about the camera positions could be computed. The following are the subsections of this project:

  1. Interest Point Detection
  2. Local Feature Description
  3. Feature Matching
  4. Conclusion

This report discusses each of these subsections in detail, along with their algorithms and results.

(I) Keypoint Detection (get_interest_points):

It was analysed how keypoint/corner detection varies with Gaussian blurring and with image scale. Scaling the images down tends to produce more keypoints than at the original scale, whereas increasing the blur decreases the number of corners detected.
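Assuming the corner score is the standard Harris response (consistent with the alpha parameter reported below), a minimal NumPy/SciPy sketch of this blur-dependent computation could look as follows; the function name and default values are illustrative, not the submitted code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(image, sigma=1.0, alpha=0.2):
    """Harris corner response of a greyscale float image.

    sigma is the Gaussian blur applied before differentiation; increasing
    it suppresses fine corners, matching the observation above. alpha is
    the trace weight that is varied in the results below.
    """
    blurred = gaussian_filter(image, sigma)
    Iy, Ix = np.gradient(blurred)                  # image gradients
    # Entries of the second-moment matrix, smoothed over a local window
    Ixx = gaussian_filter(Ix * Ix, sigma)
    Iyy = gaussian_filter(Iy * Iy, sigma)
    Ixy = gaussian_filter(Ix * Iy, sigma)
    det = Ixx * Iyy - Ixy ** 2
    trace = Ixx + Iyy
    return det - alpha * trace ** 2
```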

[Figure: corners detected with Gaussian blur f=1, f=2, and f=3]

The corner scores across the different Gaussian blurs are fused by addition, and the scores across scales are then fused by multiplication, so that the keypoints detected consistently across different scales are retained. Non-maxima suppression: the keypoints are grouped using the connected-components method, and within each component only the keypoint with the maximum corner score is kept; the rest are suppressed. A sketch of this step is given below.
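A minimal sketch of this connected-component suppression, using scipy.ndimage; the function name and the explicit threshold argument are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import label, maximum_position

def nonmax_suppress(scores, threshold):
    """One keypoint per connected blob of above-threshold corner scores."""
    mask = scores > threshold
    labelled, n = label(mask)
    if n == 0:
        return []
    # (row, col) of the maximum score inside each labelled component;
    # every other pixel in the component is suppressed
    return maximum_position(scores, labelled, index=np.arange(1, n + 1))
```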

The resulting keypoint counts for varying alpha are as follows.

Alpha = 0.2, Keypoints = 3227

Alpha = 0.4, Keypoints = 787

Alpha = 0.6, Keypoints = 324

Alpha = 0.8, Keypoints = 118




(II) Local Feature Description (get_features):

The obtained keypoints are used to extract local patches around them and to form a signature that describes each keypoint in terms of its local gradients. The tutorial at http://www.aishack.in/2010/07/implementing-sift-in-opencv/ was used to understand how to build these descriptors. The steps followed are listed below (a sketch of the descriptor computation follows the list):

  1. The original image is blurred.
  2. A 16x16 patch of the local region around the keypoint is extracted.
  3. For each patch, the gradient magnitudes and gradient orientations are calculated.
  4. The patch is then divided into a 4x4 grid of cells, and the gradient orientations within each cell are binned into a histogram.
  5. The gradient magnitudes are multiplied by a Gaussian so that the centre of the patch is weighted more heavily than the edges.
  6. The orientation histograms of the cells are weighted by these Gaussian-weighted gradient magnitudes.
  7. The resulting histograms are stacked to give a 4x4x8 = 128-dimensional feature vector for every keypoint patch.
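As referenced above, the following is a minimal NumPy sketch of steps 2-7 for a single patch; the Gaussian window width and the final normalisation are assumptions, not necessarily what the submission does.

```python
import numpy as np

def describe_patch(patch):
    """128-D SIFT-like descriptor for one 16x16 patch (steps 2-7)."""
    gy, gx = np.gradient(patch)
    mag = np.hypot(gx, gy)                       # gradient magnitude
    ori = np.arctan2(gy, gx)                     # orientation in (-pi, pi]
    # Gaussian window: weigh the centre of the patch more than the edges
    ys, xs = np.mgrid[0:16, 0:16] - 7.5
    mag = mag * np.exp(-(xs ** 2 + ys ** 2) / (2 * 6.0 ** 2))
    # Accumulate magnitude-weighted orientations into 4x4 cells x 8 bins
    bins = ((ori + np.pi) / (2 * np.pi) * 8).astype(int) % 8
    hist = np.zeros((4, 4, 8))
    for r in range(16):
        for c in range(16):
            hist[r // 4, c // 4, bins[r, c]] += mag[r, c]
    desc = hist.ravel()                          # 4 * 4 * 8 = 128 values
    return desc / (np.linalg.norm(desc) + 1e-8)  # normalise the signature
```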

I failed to compute the keypoint orientation and was unable to resolve the bug; that code is commented out in the submission.

(III) Feature Matching (match_features):

Feature matching is finding the nearest neighbour in the 128-dimensional feature space. It should also be noted that a good match is typically unique: a keypoint that matches several locations across the image is ambiguous. The ratio test, which compares the distances to the first and second nearest neighbours, is a good trick to resolve this problem; a sketch is given below, followed by the resulting matches between two pairs of images.
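A minimal sketch of this nearest-neighbour matching with the ratio test; the 0.8 threshold is a commonly used value and an assumption here, not necessarily the one used in the submission.

```python
import numpy as np

def match_features(f1, f2, ratio=0.8):
    """Nearest-neighbour matching with the ratio test.

    f1, f2: (N1, 128) and (N2, 128) descriptor arrays. A match is kept
    only when the nearest neighbour is clearly closer than the second
    nearest, which rejects ambiguous, repeated matches.
    """
    matches = []
    for i, f in enumerate(f1):
        d = np.linalg.norm(f2 - f, axis=1)       # distances to all of f2
        first, second = np.argsort(d)[:2]
        if d[first] < ratio * d[second]:
            matches.append((i, first))
    return matches
```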

(IV) Conclusion:

This has been a very good exercise in implementing corner detectors and then computing their signatures with SIFT-based descriptors. Matching these keypoints across the images was fun.