This project is composed of several steps. The first is to find matching points between the two images I wanted to stitch together.
To do this, I first ran a Harris corner detector on both images. This yields a large set of candidate interest points in each image, in fact too many to match directly.
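The write-up doesn't include code, but a minimal NumPy/SciPy sketch of a Harris detector along these lines might look like the following. The kernel sizes, the constant `k`, and the relative threshold are my own choices for illustration, not values from the project:

```python
import numpy as np
from scipy import ndimage

def harris_response(img, sigma=1.0, k=0.05):
    # Image gradients via Sobel filters
    Ix = ndimage.sobel(img, axis=1).astype(float)
    Iy = ndimage.sobel(img, axis=0).astype(float)
    # Entries of the second-moment matrix M, smoothed with a Gaussian window
    Sxx = ndimage.gaussian_filter(Ix * Ix, sigma)
    Syy = ndimage.gaussian_filter(Iy * Iy, sigma)
    Sxy = ndimage.gaussian_filter(Ix * Iy, sigma)
    # Harris corner response: det(M) - k * trace(M)^2
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

def harris_corners(img, threshold=0.1, sigma=1.0):
    R = harris_response(img, sigma)
    # Keep local maxima whose response exceeds a fraction of the strongest one
    mask = (R == ndimage.maximum_filter(R, size=5)) & (R > threshold * R.max())
    return np.argwhere(mask)  # (row, col) coordinates of detected corners
```

On a synthetic image of a bright square, this detector fires near the square's four corners while the flat regions and straight edges are suppressed.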
Then a descriptor was created for each point: an 8x8 grid of pixel samples from the region surrounding the feature, taken every 5 pixels.
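A sketch of that descriptor extraction is below. Note two assumptions of mine that the write-up doesn't state: the 8 samples at 5-pixel spacing are centered on the feature (roughly a 40x40 pixel footprint), and the patch is bias/gain normalized before matching:

```python
import numpy as np

def extract_descriptor(img, r, c, size=8, spacing=5):
    """8x8 grid of pixel samples around (r, c), spaced 5 px apart."""
    half = size * spacing // 2                   # 20-pixel half-window
    offsets = np.arange(size) * spacing - half   # [-20, -15, ..., 15]
    patch = img[np.ix_(r + offsets, c + offsets)].astype(float)
    # Bias/gain normalisation (an assumption here, not stated in the text)
    # makes matching robust to brightness and contrast changes
    patch = (patch - patch.mean()) / (patch.std() + 1e-8)
    return patch.ravel()  # 64-dimensional descriptor
```

The normalisation means an affine intensity change of the whole image leaves the descriptor unchanged.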
These descriptors were then compared against each other to find each feature's nearest neighbor (i.e., for a feature in image A, the most similar descriptor in image B). For each feature, both the nearest-neighbor score and the second-nearest-neighbor score were computed, and each candidate match was then scored by the ratio of its 1-NN distance to its 2-NN distance.
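This 1-NN/2-NN ratio scoring could be sketched as follows, assuming squared Euclidean distance between descriptors (the write-up doesn't name the distance metric):

```python
import numpy as np

def ratio_scores(descA, descB):
    """For each descriptor in A: the index of its nearest neighbour in B and
    the ratio of its 1-NN to 2-NN distances (lower = more distinctive)."""
    # Pairwise squared Euclidean distances, shape (nA, nB)
    d2 = ((descA[:, None, :] - descB[None, :, :]) ** 2).sum(axis=2)
    order = np.argsort(d2, axis=1)
    nn = order[:, 0]
    best = d2[np.arange(len(descA)), nn]          # 1-NN distance
    second = d2[np.arange(len(descA)), order[:, 1]]  # 2-NN distance
    return nn, best / (second + 1e-12)
```

A low ratio means the best match is much better than the runner-up, so the match is likely unambiguous.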
Then, instead of performing ANMS to limit the points, I tried a different strategy for eliminating non-matches. I compared every feature of imgA to its nearest neighbor in imgB; if that imgB feature's own nearest neighbor was the original imgA feature, the pair was kept, and otherwise it was eliminated. In this manner, I limited the search space to only those pairs of features with a two-way mapping, not just a highly ranked one-way mapping.
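The two-way (mutual nearest neighbor) filter described above can be sketched like this, again assuming squared Euclidean distance between descriptors:

```python
import numpy as np

def mutual_matches(descA, descB):
    """Keep only pairs where A's nearest neighbour in B also has that A
    feature as its own nearest neighbour (the two-way mapping above)."""
    d2 = ((descA[:, None, :] - descB[None, :, :]) ** 2).sum(axis=2)
    nnAB = d2.argmin(axis=1)  # for each feature in A, its best match in B
    nnBA = d2.argmin(axis=0)  # for each feature in B, its best match in A
    return [(i, j) for i, j in enumerate(nnAB) if nnBA[j] == i]
```

A feature in A that maps to some B feature already "claimed" by a better A feature is dropped, which removes many one-way false matches.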
After these candidate matches were defined, I ran RANSAC: repeatedly pick 4 point pairs, compute the homography they induce, and keep the homography that agreed with the most other matches. I used 2000 iterations and required at least 14 agreeing (inlier) matches.
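The RANSAC loop above could be sketched as follows. The direct linear transform (DLT) homography solver, the 3-pixel inlier tolerance, and the final refit on all inliers are my own assumptions; the 2000 iterations and 14-inlier requirement come from the text:

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct linear transform: H mapping src -> dst from 4+ point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(3, 3)  # null vector of A, reshaped to 3x3

def project(H, pts):
    """Apply homography H to (n, 2) points, with perspective divide."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(ptsA, ptsB, iters=2000, tol=3.0, min_inliers=14, seed=0):
    rng = np.random.default_rng(seed)
    best_H, best_inliers = None, np.array([], dtype=int)
    for _ in range(iters):
        # Fit a homography to a random sample of 4 matches
        idx = rng.choice(len(ptsA), 4, replace=False)
        H = homography_from_points(ptsA[idx], ptsB[idx])
        # Count matches that agree with this homography
        err = np.linalg.norm(project(H, ptsA) - ptsB, axis=1)
        inliers = np.nonzero(err < tol)[0]
        if len(inliers) > len(best_inliers):
            best_H, best_inliers = H, inliers
    if len(best_inliers) < min_inliers:
        return None, best_inliers  # not enough agreement to trust the fit
    # Refit on all inliers for the final estimate
    return homography_from_points(ptsA[best_inliers], ptsB[best_inliers]), best_inliers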
![]() |
![]() |
![]() |
![]() |