Step 1 : Harris Corner Detector to find Interest Points
The standard Harris corner detector algorithm was implemented.
Used a Gaussian with sigma = 1
R threshold value = 0.000005
alpha = 0.6
For non-maximum suppression, we first found all the connected components of the thresholded response map, and then, within each connected component, retained only the interest point with the maximum R value.
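The detector and suppression scheme described above can be sketched as follows. This is a minimal NumPy/SciPy sketch, not the report's actual code: the function name and the choice of Sobel gradients are my assumptions, and the defaults are simply the parameters stated above (note that alpha = 0.6 is unusually large for the Harris response; typical textbook values are 0.04-0.06).

```python
import numpy as np
from scipy import ndimage

def harris_corners(img, sigma=1.0, alpha=0.6, threshold=5e-6):
    # Hypothetical sketch of the pipeline described in the report.
    # Image gradients (Sobel is an assumption; the report does not say).
    Ix = ndimage.sobel(img, axis=1, mode='reflect')
    Iy = ndimage.sobel(img, axis=0, mode='reflect')
    # Entries of the second-moment matrix, smoothed with a Gaussian (sigma = 1).
    Ixx = ndimage.gaussian_filter(Ix * Ix, sigma)
    Iyy = ndimage.gaussian_filter(Iy * Iy, sigma)
    Ixy = ndimage.gaussian_filter(Ix * Iy, sigma)
    # Harris response: R = det(M) - alpha * trace(M)^2.
    R = (Ixx * Iyy - Ixy ** 2) - alpha * (Ixx + Iyy) ** 2
    # Threshold, then keep only the max-R pixel per connected component
    # (the non-maximum suppression scheme described above).
    labels, n = ndimage.label(R > threshold)
    ys, xs = [], []
    for i in range(1, n + 1):
        pts = np.argwhere(labels == i)
        best = pts[np.argmax(R[pts[:, 0], pts[:, 1]])]
        ys.append(best[0])
        xs.append(best[1])
    return np.array(ys), np.array(xs), R
```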
Step 2 : SIFT to get descriptors for each Interest point
The image was first blurred with a Gaussian with sigma = 1.
The gradient direction and gradient magnitude were then found at each point in the image.
The descriptor of a key point is computed by first taking a 16 * 16 patch around it and weighting all the gradient magnitudes in the patch with a Gaussian.
We then divide this patch into sixteen 4 * 4 regions.
Each 4 * 4 region has 8 orientation bins. The weighted gradient magnitudes are accumulated into these bins according to the direction of the gradient.
So if the gradient direction at a pixel is between 0 and 45 degrees, we add its weighted gradient magnitude to bin 1.
For illumination invariance, we normalize, threshold (set all values greater than the threshold to the threshold), and normalize again, as described in the handout. A threshold of 0.2 was used.
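A minimal NumPy/SciPy sketch of the descriptor computation described above (the function name, the Gaussian weighting width, and the use of `np.gradient` are my assumptions, not taken from the report):

```python
import numpy as np
from scipy import ndimage

def sift_descriptor(img, y, x, patch=16, cells=4, bins=8, thresh=0.2):
    # Blur the image, then compute gradient magnitude and orientation.
    img = ndimage.gaussian_filter(img, sigma=1.0)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 360  # orientation in [0, 360)

    # 16x16 patch around the key point, magnitudes weighted by a Gaussian
    # centred on the key point (the weighting sigma here is an assumption).
    h = patch // 2
    m = mag[y - h:y + h, x - h:x + h].copy()
    a = ang[y - h:y + h, x - h:x + h]
    off = np.arange(patch) - h + 0.5
    m *= np.exp(-(off[:, None] ** 2 + off[None, :] ** 2) / (2 * h ** 2))

    # Sixteen 4x4 cells, each with an 8-bin (45-degree) orientation histogram.
    desc = np.zeros((cells, cells, bins))
    cs = patch // cells
    b = (a // (360 // bins)).astype(int)  # bin index 0..7
    for i in range(cells):
        for j in range(cells):
            mc = m[i * cs:(i + 1) * cs, j * cs:(j + 1) * cs]
            bc = b[i * cs:(i + 1) * cs, j * cs:(j + 1) * cs]
            for k in range(bins):
                desc[i, j, k] = mc[bc == k].sum()

    # Illumination invariance: normalize, clip at 0.2, normalize again.
    v = desc.ravel()
    v /= (np.linalg.norm(v) + 1e-12)
    v = np.minimum(v, thresh)
    v /= (np.linalg.norm(v) + 1e-12)
    return v  # 128-dimensional descriptor
```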
This approach gave a precision of about 75% (81 good, 27 bad) when matching was done for the Notre Dame image pair.
Step 3 : Match Features
This is an implementation of the feature matching algorithm given in the book.
Used a threshold of 0.75.
If the ratio between the smallest and second-smallest Euclidean distance is less than the threshold, we declare the interest points a match.
On a match, we store the index of the current interest point in image1 together with the index of its minimum-Euclidean-distance interest point in image2.
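The ratio test described above can be sketched in a few lines of NumPy (function name and brute-force distance computation are my assumptions; the report does not show its code):

```python
import numpy as np

def match_features(desc1, desc2, ratio=0.75):
    # Pairwise Euclidean distances between the two descriptor sets.
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    matches = []
    for i in range(d.shape[0]):
        order = np.argsort(d[i])
        nn1, nn2 = d[i, order[0]], d[i, order[1]]
        # Ratio test: accept when nearest / second-nearest < threshold,
        # and store the index pair (image1 point, image2 point).
        if nn1 / (nn2 + 1e-12) < ratio:
            matches.append((i, order[0]))
    return matches
```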
2. Results in a table
Notre Dame image pair, with vis.jpg (first pair) and eval.jpg (second pair)