The first step in my algorithm is to find the minimum scaling factor needed for a large image. I have arbitrarily chosen the row (y) dimension as the reference, and I keep halving the scale until the scaled height is less than 500 pixels. This scaling factor is the one used for the smallest image in the pyramid. If the image's y-dimension is already less than 500 pixels, the multi-scale section of the algorithm is not executed.
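A minimal sketch of this scale search, assuming the image is a NumPy-style array; the function name find_min_scale and the max_rows parameter are illustrative names of my own, not taken from the original code:

```python
def find_min_scale(image, max_rows=500):
    """Halve the scale until the scaled row (y) dimension drops below max_rows."""
    min_scale = 1.0
    while image.shape[0] * min_scale >= max_rows:
        min_scale /= 2.0
    return min_scale

# A 3200-row scan, for example, gives min_scale = 0.125 (3200 -> 400 rows),
# while an image already under 500 rows keeps min_scale = 1.0 and skips the pyramid.
```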
After this scalar is found, the single-scale section of the algorithm runs. Two new copies of the input images are created and resized by the minimum scalar, with a Gaussian filter applied for antialiasing. The center row and column are then located and used to define the area over which the image matching metric (I use the sum of squared differences) is computed. This area, called the 'search window', depends on the size of the image and covers roughly the central half of the image's width and height. In other words, imagine a mask over the center of the image whose height and width are half those of the image itself. This mask is used to avoid errors that might occur at the edges of the images. The program then exhaustively searches over a window of possible displacements between -15 and 15 pixels in both the x and y directions (hence the double for loop), considering only the pixels inside the search window, to find the minimum value of the matching metric and the corresponding shift_vector. If the input image's y-dimension is less than 500 pixels, the rest of the program does not execute.
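The sketch below illustrates this single-scale search under a few assumptions of my own: the channels are 2-D NumPy float arrays, skimage.transform.rescale with anti_aliasing=True stands in for the Gaussian-filtered resize, and the names ssd_align, ref, and mov are hypothetical rather than those of the actual implementation.

```python
import numpy as np
from skimage.transform import rescale

def ssd_align(ref, mov, max_shift=15):
    """Exhaustive SSD search over integer shifts in [-max_shift, max_shift]."""
    rows, cols = ref.shape
    # Search window: the central region spanning roughly half the image's
    # height and width, used to avoid errors at the image edges.
    r0, r1 = rows // 4, rows // 4 + rows // 2
    c0, c1 = cols // 4, cols // 4 + cols // 2
    best_score, best_shift = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            diff = ref[r0:r1, c0:c1] - shifted[r0:r1, c0:c1]
            score = np.sum(diff ** 2)
            if score < best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift

# Usage at the coarsest level (ref and mov are two channels of the scan):
#   small_ref = rescale(ref, min_scale, anti_aliasing=True)
#   small_mov = rescale(mov, min_scale, anti_aliasing=True)
#   shift_vector = ssd_align(small_ref, small_mov, max_shift=15)
```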
If the image has a y-dimension greater than 500 pixels, the multi-scale part of the algorithm continues. It does much the same as the single-scale implementation, but is contained in a while loop that runs as long as the scalar is less than 1. At the start of each iteration the scalar is doubled, so on the final iteration the scalar is 1 and the images are processed at full resolution. As before, the images are resized and filtered using the now-doubled scalar, and the search window is recalculated from the new image size. This time, however, the range of displacements (the double for loop) does not use the arbitrary -15 to 15 pixels. Instead, the range is centered on the previous shift_vector multiplied by two (the doubling is necessary because the image size doubles at each level) and extends only a few pixels beyond it. The initial shift_vector comes from the single-scale stage, and an updated shift_vector is found on each iteration of the while loop, progressively narrowing the range of translations needed to align the images.
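A possible shape for this coarse-to-fine refinement is sketched below; multiscale_align, ssd_search, and the two-pixel search radius around the doubled shift are my own illustrative assumptions, not the exact values or names used in the implementation.

```python
import numpy as np
from skimage.transform import rescale

def ssd_search(ref, mov, center, radius):
    """SSD over shifts within `radius` of `center`, scored on the central window."""
    rows, cols = ref.shape
    r0, r1 = rows // 4, rows // 4 + rows // 2
    c0, c1 = cols // 4, cols // 4 + cols // 2
    best_score, best_shift = np.inf, (int(center[0]), int(center[1]))
    for dy in range(center[0] - radius, center[0] + radius + 1):
        for dx in range(center[1] - radius, center[1] + radius + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            score = np.sum((ref[r0:r1, c0:c1] - shifted[r0:r1, c0:c1]) ** 2)
            if score < best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift

def multiscale_align(ref, mov, min_scale, coarse_shift, radius=2):
    """Refine the coarse shift while doubling the scale until it reaches 1."""
    shift, scale = np.array(coarse_shift), min_scale
    while scale < 1.0:
        scale *= 2.0   # double the scale; the final pass runs at full resolution
        shift *= 2     # the previous shift doubles along with the image dimensions
        ref_s = ref if scale == 1.0 else rescale(ref, scale, anti_aliasing=True)
        mov_s = mov if scale == 1.0 else rescale(mov, scale, anti_aliasing=True)
        # Search only a few pixels around the doubled shift at this level.
        shift = np.array(ssd_search(ref_s, mov_s, shift, radius))
    return tuple(int(s) for s in shift)
```

Because each level's search starts from a shift that was already correct to within a pixel at the previous, coarser level, a small radius around the doubled shift is enough at every finer level.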
When the while loop finishes, the final shift_vector should align the full-resolution images.
See the Results gallery for the images. They all look very good, except for the colored borders, which indicate that the three exposures were not taken from exactly the same position; this is probably just an artifact of the glass negatives. There is also some color fringing, but it is barely noticeable and would probably be difficult to remove given that the images likely required long exposure times.