An example result of local feature matching using the SIFT feature descriptor.
There are three main goals of this project:
The implementation of the Harris feature detector follows the instructions in the slides and the provided codebase. The steps are:
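As a rough sketch of what this stage computes, here is a hypothetical NumPy reimplementation of the Harris corner response (illustrative only; the project itself is in MATLAB, and choices such as alpha = 0.06 and sigma = 2 are assumptions, not values from this project):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_response(img, sigma=2.0, alpha=0.06):
    """Harris corner response R = det(M) - alpha * trace(M)^2."""
    ix = sobel(img, axis=1)  # horizontal gradient
    iy = sobel(img, axis=0)  # vertical gradient
    # Entries of the second-moment matrix M, smoothed by a Gaussian window.
    ixx = gaussian_filter(ix * ix, sigma)
    iyy = gaussian_filter(iy * iy, sigma)
    ixy = gaussian_filter(ix * iy, sigma)
    return ixx * iyy - ixy ** 2 - alpha * (ixx + iyy) ** 2
```

Interest points are then the local maxima of this response above a threshold; corners score high, edges score negative, and flat regions score near zero.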
I implemented the core of the SIFT feature descriptor algorithm. The feature descriptor function takes as input the coordinates of the interest points obtained from the Harris stage, and outputs a 128-dimensional feature descriptor for each interest point. Besides the provided get_features.m, I wrote two helper functions:
Here are some of the most important code snippets from these two functions.
%pass in a row/col coordinate, then return the bounds of its feature_width x feature_width window.
cellWidth = feature_width / 4;
rowMin = row - cellWidth * 2 + 1;
rowMax = row + cellWidth * 2;
colMin = col - cellWidth * 2 + 1;
colMax = col + cellWidth * 2;
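The same window arithmetic can be sketched in Python (a hypothetical helper mirroring the MATLAB snippet above; bounds are inclusive and 1-based, as in MATLAB):

```python
def get_window(row, col, feature_width=16):
    """Return inclusive 1-based bounds of the feature_width x feature_width
    window around (row, col), matching the MATLAB convention above."""
    half = feature_width // 2
    row_min, row_max = row - half + 1, row + half
    col_min, col_max = col - half + 1, col + half
    return row_min, row_max, col_min, col_max
```

With this convention the interest point sits just below and right of the window's center, and both dimensions span exactly feature_width pixels.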
[rowMin, rowMax, colMin, colMax] = get_window(row, col, feature_width);
feature = zeros(1, 128);
gaussian = fspecial('gaussian', feature_width, 0.8);
for gradientIndex = 1 : 8
%apply Gaussian weighting to this orientation channel's window
gradientWindow = filteredImages(rowMin : rowMax, colMin : colMax, gradientIndex);
gaussianed = imfilter(gradientWindow, gaussian, 'same');
%for each of the 4x4 cells, pool the weighted gradient values.
for i = 0 : 3
for j = 0 : 3
%now we have entered one cell (renamed from "cell" to avoid shadowing MATLAB's built-in cell)
cellBlock = gaussianed(i * feature_width / 4 + 1 : (i+1) * feature_width / 4, ...
j * feature_width / 4 + 1 : (j + 1) * feature_width / 4);
%average all weighted gradient values in the cell
cellGradient = mean(cellBlock(:));
%assign it to its position in feature vector.
featureIndex = 8 * (i * 4 + j) + gradientIndex;
feature(1, featureIndex) = cellGradient;
end
end
end
feature = feature / norm(feature);
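The indexing line featureIndex = 8 * (i * 4 + j) + gradientIndex groups the 8 orientation bins of each cell together. A quick sanity check (in Python, keeping MATLAB's 1-based indices) confirms that the 16 cells times 8 orientations fill slots 1..128 exactly once:

```python
# Reproduce featureIndex = 8 * (i * 4 + j) + gradientIndex for every
# (cell row i, cell col j, orientation g) combination, 1-based like MATLAB.
indices = [8 * (i * 4 + j) + g
           for i in range(4) for j in range(4) for g in range(1, 9)]
assert sorted(indices) == list(range(1, 129))  # each descriptor slot used once
```

So the descriptor layout is cell-major: the first 8 entries are the orientations of cell (0, 0), the next 8 belong to cell (0, 1), and so on.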
Besides the implementation itself, another important part of the work was parameter tuning. The main parameters tuned here are the sigma of the Gaussian kernel and the 8 orientation kernels. For the given example, I found that sigma = 0.8 and Sobel-like filters perform relatively better.
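The report does not show how the 8 orientation channels are built. One plausible construction (a hedged NumPy sketch, not the project's actual MATLAB filters) projects Sobel gradients onto 8 evenly spaced directions and half-wave rectifies:

```python
import numpy as np
from scipy.ndimage import sobel

def orientation_channels(img, n_orient=8):
    """Split image gradients into n_orient half-wave-rectified channels,
    one per direction theta_k = 2*pi*k / n_orient."""
    dx = sobel(img, axis=1)
    dy = sobel(img, axis=0)
    channels = []
    for k in range(n_orient):
        theta = 2 * np.pi * k / n_orient
        # Gradient component along direction theta; keep only the positive part
        proj = np.cos(theta) * dx + np.sin(theta) * dy
        channels.append(np.maximum(proj, 0.0))
    return np.stack(channels, axis=-1)
```

Each channel then plays the role of one filteredImages(:, :, gradientIndex) slice in the descriptor loop above.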
Finally, I wrote a ratio-test-based feature matching algorithm with O(M*N) time complexity. The idea is straightforward: for each interest point in one image, compute the distance between its feature descriptor and all feature descriptors of the interest points in the other image, pick the nearest two, and calculate their ratio. The value of this ratio (the nearest-neighbor distance ratio, NNDR) indicates the confidence of the match.
The code that picks the two smallest distances and calculates the NNDR is:
for j = 1 : num_features2
distance = sum((features1(i,:)-features2(j,:)).^2);
if distance < min1
min2 = min1;
min1 = distance;
minJ = j;
elseif distance < min2 %also covers ties with min1, which a strict "> min1" test would skip
min2 = distance;
end
end
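For comparison, the whole ratio test can be written in vectorized NumPy (an illustrative sketch; the 0.8 threshold is an assumption in the spirit of Lowe's ratio test, not a value from this project):

```python
import numpy as np

def match_features(f1, f2, ratio_thresh=0.8):
    """Ratio-test matching: for each descriptor in f1 (M x D), find its two
    nearest neighbours in f2 (N x D) and keep the match if NNDR < threshold."""
    # Squared Euclidean distances between all pairs, shape (M, N)
    d = ((f1[:, None, :] - f2[None, :, :]) ** 2).sum(-1)
    nearest = np.argsort(d, axis=1)[:, :2]         # indices of two smallest
    rows = np.arange(len(f1))
    nn1, nn2 = d[rows, nearest[:, 0]], d[rows, nearest[:, 1]]
    ratio = np.sqrt(nn1 / np.maximum(nn2, 1e-12))  # NNDR on true distances
    keep = ratio < ratio_thresh
    return np.flatnonzero(keep), nearest[keep, 0], ratio[keep]
```

A lower ratio means the best match is much closer than the runner-up, i.e. a more confident match; sorting kept matches by ratio gives the most confident correspondences first.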
Here are some examples of feature matching results on pairs of real-world photos. Since I did not implement scale invariance, the two pictures in each pair are taken at similar scales, with no zooming in or out between them.