For get_interest_points I first took the dx and dy of the image:
maskX = [-1 0 1; -1 0 1; -1 0 1];
maskY = maskX';
dx = conv2(image, maskX, 'same');
dy = conv2(image, maskY, 'same');
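The Gaussian gaus that appears in the next snippet is not defined above; a minimal sketch of how it might be constructed (the size and sigma are illustrative guesses, and fspecial requires the Image Processing Toolbox):
% Hypothetical Gaussian used to smooth the gradient products; parameters are illustrative
gaus = fspecial('gaussian', feature_width, 2);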
Then I computed the three images corresponding to the outer products of these gradients, convolved each with a larger Gaussian, and used these to compute the Harris measure:
dxSquared = conv2(dx.^2, gaus, 'same');
dySquared = conv2(dy.^2, gaus, 'same');
dxy = conv2(dx.*dy, gaus, 'same');
% Compute the HCM
cornMeas = (dxSquared.*dySquared - dxy.^2) - k*(dxSquared + dySquared).^2;
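For reference, this line is the standard Harris response R = det(M) - k*trace(M)^2, where M is the Gaussian-smoothed second-moment matrix [dx^2, dx*dy; dx*dy, dy^2] at each pixel: dxSquared.*dySquared - dxy.^2 is det(M), and dxSquared + dySquared is trace(M).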
Finally, I limited results to local maxima and added a border mask:
maxi = colfilt(cornMeas, [feature_width, feature_width], 'sliding', @max);
border = zeros(size(cornMeas));
border(ceil(feature_width/2)+1:end-ceil(feature_width/2), ceil(feature_width/2)+1:end-ceil(feature_width/2)) = 1;
% keep points that equal the max of their neighborhood, pass the threshold, and avoid the border
corrected = (cornMeas == maxi) & (cornMeas > thresh) & border;
% note: find returns [row, col], so x holds row indices and y column indices here
[x,y] = find(corrected);
Given my low matching performance, I believe get_features is buggy, but I can't figure out where.
First, I find the gradient at each point.
[gradImX, gradImY] = gradient(image);
Then, for each interest point found by the previous method, I break the feature_width-sized box around it into a 4x4 grid of equal-sized cells. For each pixel in these cells I take the gradient and bin its orientation into one of 8 bins, creating a histogram for each of the 16 cells:
for j=1:4,
    for k=1:4,
        % row/column ranges of cell (j,k) within the window centered on the interest point
        rows = X-feature_width+(1+j)*(feature_width/4) : X-feature_width+(2+j)*(feature_width/4)-1;
        cols = Y-feature_width+(1+k)*(feature_width/4) : Y-feature_width+(2+k)*(feature_width/4)-1;
        cellsX(j,k,:,:) = gradImX(rows, cols);
        cellsY(j,k,:,:) = gradImY(rows, cols);
    end
end
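Equivalently, these 16 cells are just the feature_width x feature_width patch centered on the point split into a 4x4 grid; a sketch of the same extraction with mat2cell, assuming feature_width is a multiple of 4 and the point lies far enough from the image border:
% Hypothetical equivalent: crop the window around the keypoint, then split it into 4x4 cells
halfW = feature_width/2;
patchX = gradImX(X-halfW : X+halfW-1, Y-halfW : Y+halfW-1);
cellGridX = mat2cell(patchX, repmat(feature_width/4, 1, 4), repmat(feature_width/4, 1, 4));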
% (2) each cell should have a histogram of the local distribution of
% gradients in 8 orientations. Appending these histograms together will
% give you 4x4 x 8 = 128 dimensions.
%cellsY(:,:,:,:) = max(cellsY(:,:,:,:),.0001);
for j=1:4,
    for k=1:4,
        % orientation of each gradient in the cell; atan of the ratio only
        % covers (-pi/2, pi/2), so quadrant information is lost here
        vals = atan(cellsX(j,k,:,:) ./ cellsY(j,k,:,:));
        % wrap to [0, 2*pi) and round up: this yields integers 0..7, i.e. one
        % bin per whole radian rather than 8 equal orientation bins
        vals = ceil(mod(vals(:,:,:,:), 2*pi));
        for g=1:8,
            features(i,(j-1)*32+(k-1)*8+g) = sum(vals(:,:) == g);
        end
    end
end
Finally, I normalize each feature vector.
features(i, :) = features(i, :)./norm(features(i, :));
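A likely suspect in the binning above is that atan only covers half the orientation circle, and that ceil(mod(vals, 2*pi)) splits angles into whole-radian bins rather than 8 equal ones. A sketch of what the binning could look like with atan2 and pi/4-wide bins, reusing the variable names above (an untested guess at a fix, not something I verified):
% Hypothetical binning fix: full-circle orientations via atan2, 8 equal bins of pi/4
for j=1:4,
    for k=1:4,
        ang = atan2(cellsY(j,k,:,:), cellsX(j,k,:,:));   % angles in (-pi, pi]
        ang = mod(ang, 2*pi);                            % wrap to [0, 2*pi)
        bins = min(floor(ang / (pi/4)) + 1, 8);          % bin index 1..8
        for g=1:8,
            features(i,(j-1)*32+(k-1)*8+g) = sum(bins(:) == g);
        end
    end
end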
For match_features I find the two nearest neighbors of each feature. If the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance is below some threshold, I classify the feature and its nearest neighbor as a match. Because of whatever error I had in the previous section, I was forced to make this threshold very high to get any results.
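A minimal sketch of this ratio test, assuming the two images' descriptors are stored as rows of features1 and features2 and using pdist2 from the Statistics Toolbox (the 0.9 threshold is only illustrative):
% Hypothetical ratio-test matcher; features1 is N1x128, features2 is N2x128
dists = pdist2(features1, features2);     % pairwise Euclidean distances
[sorted, idx] = sort(dists, 2);           % sort each row: nearest neighbors first
ratios = sorted(:,1) ./ sorted(:,2);      % NN distance over second-NN distance
good = find(ratios < 0.9);                % keep matches passing the ratio test
matches = [good, idx(good, 1)];           % [index in image 1, index in image 2]
confidences = 1 - ratios(good);           % lower ratio -> more confident match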
On the cathedral pair I was able to get 10% accuracy by fiddling with various thresholds. I believe my results were so poor because of an error in get_features.