As detailed in the project specification, my code aligns three similar images recorded in different color channels, and combines them to form a single color image. All data used is drawn from the Prokudin-Gorskii collection in the Library of Congress.
The single-dimensional algorithm for my image alignment is fairly straightforward, as it is mostly a variation on the brute-force "test every offset and choose the best" approach. Steps are as follows:
1. For each candidate offset, computing

   sum(sum((image_2 - image_1).^2))

   between the two images and recording the offset that minimizes it. This value approximates the summed difference between the values of the pixels, and is thus at least a mediocre metric of the two images' similarity.

2. Using circshift() to shift the second image by the best offset before combining the two.
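The implementation itself uses MATLAB (hence circshift() above), but the same brute-force search can be sketched in Python/NumPy, where np.roll plays the role of circshift(). The function name align_ssd and the search radius max_offset are illustrative, not part of the original program:

```python
import numpy as np

def align_ssd(fixed, moving, max_offset=15):
    """Try every (dy, dx) shift of `moving` within +/- max_offset and
    return the shift minimizing the sum of squared differences (SSD)
    against `fixed`."""
    best_score = np.inf
    best_shift = (0, 0)
    for dy in range(-max_offset, max_offset + 1):
        for dx in range(-max_offset, max_offset + 1):
            # np.roll is the NumPy analogue of MATLAB's circshift()
            shifted = np.roll(moving, (dy, dx), axis=(0, 1))
            score = np.sum((shifted - fixed) ** 2)  # SSD metric
            if score < best_score:
                best_score = score
                best_shift = (dy, dx)
    return best_shift
```

The returned shift is then applied to the second channel (e.g. `np.roll(moving, shift, axis=(0, 1))`) before stacking the channels into a color image.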
The multi-dimensional algorithm takes advantage of the fact that resizing a large image is much quicker than testing alignment across many possible offsets. It extends the above by recursively aligning half-scale copies of the images first, then doubling the offset found at the coarser scale and searching only a small window around it at the finer scale.
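This coarse-to-fine idea can be sketched as the following Python/NumPy recursion. It is an illustration under assumptions, not the original MATLAB code: the names align_pyramid, min_size, and window are hypothetical, and downscaling is done by simply taking every other pixel:

```python
import numpy as np

def align_pyramid(fixed, moving, min_size=32, window=4):
    """Coarse-to-fine alignment: recursively align half-scale copies,
    then refine the doubled offset inside a small search window."""
    if min(fixed.shape) <= min_size:
        base = (0, 0)  # coarsest level: start the search from zero offset
    else:
        # Halve both images (simple every-other-pixel downscale) and recurse
        coarse = align_pyramid(fixed[::2, ::2], moving[::2, ::2],
                               min_size, window)
        base = (2 * coarse[0], 2 * coarse[1])  # scale the offset back up
    # Search only a small window around the offset inherited from the
    # coarser level, instead of every possible offset at full resolution
    best_score, best_shift = np.inf, base
    for dy in range(base[0] - window, base[0] + window + 1):
        for dx in range(base[1] - window, base[1] + window + 1):
            shifted = np.roll(moving, (dy, dx), axis=(0, 1))
            score = np.sum((shifted - fixed) ** 2)  # same SSD metric
            if score < best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift
```

Each level only ever tests (2·window + 1)² offsets, so the total work stays small even for large scans, which is exactly the speedup described above.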
Following are example outputs from my program, for eight different sets of images from the Prokudin-Gorskii collection. Four are from the images provided with the stencil (though all 16 align properly with my algorithm) and four are others found on the Library of Congress site and then aligned using my program. As you can see, they all align with the best possible offsets, though there are still some artifacts due to inconsistencies in the positioning of the photo subjects between exposures. This is most noticeable with the head of the leftmost man in the last photo.