In this project we wrote our own local feature matching algorithm based on the SIFT pipeline.

Interest Point Detection

Here we had to implement a Harris corner detector. I experimented with computing the gradients Ix and Iy using a Gaussian filter versus a Sobel filter, and found the Gaussian filter worked much better. Tuning the Gaussian sigma used to smooth the Ix^2, Iy^2, and IxIy products, a value of 2 gave much more accurate matches for Notre Dame than any other value I tried (1, 2.5, 3, and 5); for the other image pairs, however, 3 seemed to be the best sigma value.


    gfilter = fspecial('gaussian', feature_width, 3);

Interest points near the image borders were discarded, since matching them would be difficult and inaccurate. Non-maximum suppression was then applied with a sliding 16x16 (feature_width by feature_width) window via the function colfilt, so that only the local maximum survives from each cluster of nearby interest points.
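The detection pipeline described above (Gaussian-filtered gradients, smoothed second-moment products, border suppression, and sliding-window non-maximum suppression) can be sketched as follows. The project itself is in MATLAB; this is a rough Python/NumPy translation, and the gradient kernel size, alpha, and the relative threshold rel_thresh are my assumptions, not values from the project code.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """2-D Gaussian kernel, a stand-in for MATLAB's fspecial('gaussian', ...)."""
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def filter2(img, kernel):
    """Naive 'same'-size 2-D correlation with zero padding (slow but simple)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def harris(img, sigma=3.0, feature_width=16, alpha=0.04, rel_thresh=0.1):
    """Harris corners with border suppression and sliding-window NMS."""
    # Gradients from the derivatives of a small Gaussian (the write-up found
    # Gaussian filtering beat Sobel here); kernel size 5 is an assumption.
    gy, gx = np.gradient(gaussian_kernel(5, 1.0))
    Ix = filter2(img, gx)
    Iy = filter2(img, gy)
    # Smooth the second-moment products with the larger Gaussian.
    gs = gaussian_kernel(feature_width, sigma)
    Sxx = filter2(Ix * Ix, gs)
    Syy = filter2(Iy * Iy, gs)
    Sxy = filter2(Ix * Iy, gs)
    # Harris response: det(M) - alpha * trace(M)^2.
    R = Sxx * Syy - Sxy ** 2 - alpha * (Sxx + Syy) ** 2
    # Zero out the borders, where descriptors would be unreliable.
    b = feature_width // 2
    R[:b, :] = 0; R[-b:, :] = 0; R[:, :b] = 0; R[:, -b:] = 0
    # Keep only points that are the maximum of their 16x16 neighbourhood
    # (the role colfilt plays in the MATLAB version).
    thresh = rel_thresh * R.max()
    xs, ys = [], []
    for i in range(b, img.shape[0] - b):
        for j in range(b, img.shape[1] - b):
            if R[i, j] > thresh and R[i, j] == R[i - b:i + b, j - b:j + b].max():
                xs.append(j); ys.append(i)
    return np.array(xs), np.array(ys)
```

On a simple test image (a bright square on a dark background), responses fire near the square's four corners while the straight edges are suppressed by the negative trace term.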

Local Feature Description

Here we had to implement something similar to SIFT. This relies on many nested for loops, which unfortunately makes the program quite slow. Two derived images were computed and used: gradient magnitudes and gradient orientations. A loop visited each keypoint (given by x and y), extracted a feature_width by feature_width (16x16) patch around it, divided the patch into a 4x4 grid of cells, and built an 8-bin orientation histogram weighted by gradient magnitude for each cell, yielding a descriptor of size 1x128 (4 x 4 x 8). As in the Harris corner detector, I only adjusted the filter used to get Ix and Iy, and settled on a Gaussian.
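The descriptor loop can be sketched like this. Again this is a Python/NumPy approximation of the MATLAB code, assuming keypoints far enough from the borders (which the detector stage guarantees); the hard binning here skips SIFT refinements such as trilinear interpolation and Gaussian weighting.

```python
import numpy as np

def sift_like_descriptors(img, xs, ys, feature_width=16):
    """128-D SIFT-like descriptors: 4x4 cells x 8 orientation bins."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    mag = np.hypot(gx, gy)              # gradient magnitude image
    ori = np.arctan2(gy, gx)            # gradient orientation in [-pi, pi]
    half = feature_width // 2
    cell = feature_width // 4           # 4 pixels per cell for a 16x16 patch
    descs = []
    for x, y in zip(xs, ys):
        patch_m = mag[y - half:y + half, x - half:x + half]
        patch_o = ori[y - half:y + half, x - half:x + half]
        hist = np.zeros((4, 4, 8))
        for r in range(feature_width):
            for c in range(feature_width):
                # Map orientation to one of 8 bins; accumulate magnitude.
                b = int((patch_o[r, c] + np.pi) / (2 * np.pi) * 8) % 8
                hist[r // cell, c // cell, b] += patch_m[r, c]
        v = hist.ravel()                # 4 * 4 * 8 = 128 values
        n = np.linalg.norm(v)
        descs.append(v / n if n > 0 else v)
    return np.array(descs)
```

Normalizing each descriptor to unit length makes the later distance comparisons insensitive to overall contrast.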

Feature Matching

Given two lists of features, one from each image, we had to match features from the first image to the second. For every feature in the first image, I found the feature in the second image it matched best, using the nearest neighbor distance ratio (NNDR) test. If the ratio was close to 1 (higher than 0.7), the match was discarded, since an ambiguous nearest neighbor rarely gives an accurate match; the surviving matches therefore had ratio scores of 0.7 or less. This threshold gave the best matches for Notre Dame, compared with 0.75 or 0.65.
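The NNDR test itself is compact; a minimal sketch (again in Python rather than the project's MATLAB, with the 0.7 cutoff as described above):

```python
import numpy as np

def match_features(d1, d2, ratio_thresh=0.7):
    """Nearest-neighbor distance ratio matching between two descriptor sets."""
    matches, confidences = [], []
    for i, f in enumerate(d1):
        dists = np.linalg.norm(np.asarray(d2) - f, axis=1)
        order = np.argsort(dists)
        nn, second = dists[order[0]], dists[order[1]]
        # Ratio near 1 means the two best candidates are ambiguous.
        ratio = nn / second if second > 0 else 1.0
        if ratio <= ratio_thresh:
            matches.append((i, int(order[0])))
            confidences.append(ratio)
    return matches, confidences
```

Sorting the surviving matches by ascending ratio puts the most confident correspondences first, which is how the top matches get visualized.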

Results

Feature Matching

Notre Dame (evaluated)
This was the main pair for evaluating matching accuracy. A Gaussian sigma of 5 resulted in 45 good matches and 2 bad matches, for 96% accuracy. However, the other image pairs scored worse at sigma 5, so the final code results in 40 good matches and 6 bad matches, for 87% accuracy.

Pantheon
No evaluation, but some features appear to match correctly. This pair yielded many more features worth matching than the other images: 91 features total (good and bad).

Mount Rushmore
No evaluation, but at least 7 of these are correct, and there appear to be 13 matches in total.

Capricho Gaudi
No evaluation, but again the main features appear to match correctly (I counted at least 20 correct matches out of 25 total).

Statue of Liberty
At a Gaussian sigma of 5, only 1 feature was matched, and incorrectly. The final code results in 2 incorrectly matched features.

Sacre Coeur
No features were matched between these two images.