CS 129 Project 6 Writeup

Jeroen Chua (jchua)
November 26 2012

Outline

I briefly describe a method for automatic panorama construction. The method is a simplified version of the one described in "Multi-Image Matching using Multi-Scale Oriented Patches" (CVPR 2005) [1]. Broadly, the method first uses a Harris feature point detector (with thresholding of the corner response, and adaptive non-maxima suppression) to find feature points. A feature descriptor is then computed by subsampling pixels in an area around each detected feature point and using their gradient information. Features are then matched across images using Euclidean distance. Finally, two methods were tried for image blending: simply averaging the projections of the panorama images, and Poisson blending.
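The adaptive non-maxima suppression step can be sketched as follows. This is a minimal illustration of the ANMS idea from [1], not the project's actual code; the function name `anms` and the parameters `n_keep` and `c_robust` are illustrative choices. Each corner is assigned a suppression radius (the distance to the nearest sufficiently stronger corner), and the corners with the largest radii are kept, which spreads the retained points across the image.

```python
import numpy as np

def anms(corners, responses, n_keep=500, c_robust=0.9):
    """Adaptive non-maxima suppression, in the style of Brown et al. 2005.

    corners:   (N, 2) array of corner locations
    responses: (N,) Harris corner strengths
    Keeps the n_keep corners with the largest suppression radius, i.e.
    the distance to the nearest corner whose (robustified) response
    dominates this one.
    """
    n = len(corners)
    radii = np.full(n, np.inf)
    for i in range(n):
        # Corners that dominate corner i: c_robust * responses[j] > responses[i]
        stronger = responses > responses[i] / c_robust
        if stronger.any():
            d = np.linalg.norm(corners[stronger] - corners[i], axis=1)
            radii[i] = d.min()
    keep = np.argsort(-radii)[:n_keep]
    return corners[keep]
```

The globally strongest corner has an infinite radius, so it always survives; weak corners survive only if they are far from any stronger one.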

Extra credit work

For the feature descriptor, I first sample pixels from around the feature point, as done in [1], and then compute the gradient at each sampled pixel. The gradient orientations are mean-subtracted and wrapped to the range [0, 2*pi); setting the average gradient orientation in the patch to zero makes the feature descriptor robust to rotations. This is much faster than the naive method of rotating the local region around the feature point before sampling pixels. Note, however, that the proposed method is not fully invariant to rotation (the pixel sampling itself does not account for rotation); it should merely be robust against rotations. Second, I tried blending the images in the panorama using Poisson blending, as in Assignment 2.
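The descriptor described above can be sketched roughly as below. This is an illustrative reconstruction, not the project's actual code; the name `gradient_descriptor` and the `patch`/`stride` parameters are assumptions. A sparse grid of gradient orientations is sampled around the feature point, the mean orientation is subtracted (removing any global rotation of the gradient field), and the result is wrapped back into [0, 2*pi).

```python
import numpy as np

def gradient_descriptor(gray, x, y, patch=40, stride=5):
    """Rotation-robust gradient descriptor (illustrative sketch).

    gray: 2-D float image; (x, y): feature location (col, row).
    Samples gradient orientations on a sparse grid around (x, y),
    subtracts their mean, and wraps the result to [0, 2*pi).
    """
    gy, gx = np.gradient(gray)          # gradients along rows and columns
    half = patch // 2
    rows = np.arange(y - half, y + half, stride)
    cols = np.arange(x - half, x + half, stride)
    rr, cc = np.meshgrid(rows, cols, indexing="ij")
    theta = np.arctan2(gy[rr, cc], gx[rr, cc]).ravel()
    # Zero-mean the orientations, then wrap back into [0, 2*pi); a global
    # rotation shifts all orientations by a constant, which this removes.
    return np.mod(theta - theta.mean(), 2 * np.pi)
```

Because a rotation of the patch adds (approximately) a constant to every sampled orientation, two patches that differ only by such a constant produce the same descriptor.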

Result images for panorama construction

I show results for the baseline panorama construction method, as outlined in the assignment, as well as results using gradient features and image blending. The images, from left to right, are:
  1. Baseline: Baseline panorama construction method
  2. Gradient: Panorama construction method using gradient features
  3. Image blend: Panorama construction using Poisson blending (using pixel, not gradient, features)
The results are discussed after the displayed images.

[Image grid: columns are Baseline, Gradient, Image blend]

First, the baseline panorama construction method (column 1) performs decently on the test image set; for the most part, the alignment appears correct. However, the manner in which the panorama images were blended (averaging) results in very obvious artifacts.

To deal with the blending artifacts, I tried using Poisson blending during the image compositing step; the results are shown in column 3. Unfortunately, the blending does not appear to work very well. In particular, it tends to "blow out" parts of the image near where the two panorama images meet. This is because the panorama images are zero-padded to a common size, which creates strong artificial boundaries (and thus strong gradients) around the edge of the target image. However, this method does do a slightly better blending job some of the time; for example, in the picture of the village (last row), the artifacts on the left side of the image are less pronounced.

Lastly, I tried a different feature representation (image gradients) that is robust to rotations (achieved by setting the mean of the gradient orientations in the patch to zero before wrapping to [0, 2*pi)). The results appear better for the image of the streetcar (first row); the baseline panorama model squishes the streetcar on the right side, while the gradient approach does not. The difference is more apparent when flipping back and forth between the images (open both images in a new tab and flip back and forth; click on an image for a bigger version). However, the gradient method performs much, much worse on the water panorama (row 7, second-last row). This may be because there is little gradient information to use, as most of the image is smooth (water). For the other images, the gradient approach returns results similar to the baseline, presumably because the baseline performs well on these images to begin with.