Project 4: Scene Completion
cs195-g: Computational Photography
Patrick Doran (pdoran), Spring 2010
Algorithm: Scene Completion Using a Single Patch
High-level description of the algorithm:
1. Extract a template from the image with a hole, with a few pixels of padding on the borders to ensure that the template contains known pixels.
2. Repeat for each test image, scaled from 0.5 to 1.5: compute the sum of squared differences (SSD) between the template and each location in the test image, and extract the patch that minimizes this metric.
3. Composite the best replacement patch using graph cut and/or Poisson blending.
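Step 1 can be sketched as follows. This is a minimal Python/NumPy sketch rather than the MATLAB code actually used; the function and parameter names are my own.

```python
import numpy as np

def extract_template(image, hole_mask, pad=8):
    """Crop the region around the hole, plus `pad` pixels of known
    border on each side, so the template contains known context
    to match against."""
    rows = np.any(hole_mask, axis=1).nonzero()[0]
    cols = np.any(hole_mask, axis=0).nonzero()[0]
    r0 = max(rows[0] - pad, 0)
    r1 = min(rows[-1] + 1 + pad, image.shape[0])
    c0 = max(cols[0] - pad, 0)
    c1 = min(cols[-1] + 1 + pad, image.shape[1])
    return image[r0:r1, c0:c1], hole_mask[r0:r1, c0:c1]
```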
General Comments

The main focus of my algorithm was to churn through as much data as possible, as quickly as possible, rather than exploring improvements to the local context matching algorithm. The motivation is that while changes to the local context matching algorithm may improve results, only by applying the algorithm to many images quickly can we see the effects of those changes.

There are a number of ways to speed up the process. The most obvious is parallel computing: this project is embarrassingly parallel. All 120 image comparisons can be done in parallel, since they only read from the images and never write to them, and all 51 scene completions are independent of one another. Unfortunately, we do not have the Parallel Computing Toolbox installed for MATLAB, so my solution was to distribute processes across multiple machines. I wrote a simple script that runs a function which takes an image ID and performs scene completion on that image and its 120 matching images.
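The distribution scheme can be sketched as a deterministic partition of the 51 image IDs across machines, so each machine can compute its own share from its index alone. This is a hypothetical helper, not the script actually used:

```python
def partition_ids(image_ids, n_machines):
    """Split the work list round-robin into n_machines chunks of nearly
    equal size; machine i processes chunk i independently of the rest."""
    return [image_ids[i::n_machines] for i in range(n_machines)]
```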

Next, I kernelized the distance function as written in the handout. First I used imfilter, which worked but was still too slow, so I turned the convolutions into multiplications in the frequency domain. This let me compare the template against every location in another image in 5 to 20 seconds, depending on the sizes of the template and image, so each of the 120 comparisons ran that quickly. Done in sequence, the full set took anywhere from 15 to 40 minutes (again depending on the sizes of the template and images).
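The frequency-domain trick works because the masked SSD expands into three terms, each computable with a correlation: sum(M*(T-I)^2) = sum(M*T^2) - 2*sum(M*T*I) + sum(M*I^2), where the first term is constant and the other two are cross-correlations of the image (and its square) with fixed kernels. A minimal NumPy sketch (the report's implementation was in MATLAB; names here are my own):

```python
import numpy as np

def ssd_map(image, template, mask):
    """Masked SSD of `template` against every valid location of `image`.

    image:    (H, W) float array
    template: (h, w) float array
    mask:     (h, w) array, 1 where the template pixel is known
    Returns an (H-h+1, W-w+1) array of SSD scores.
    """
    def xcorr(img, ker):
        # Cross-correlation via FFT: multiply spectra of the image and
        # the flipped kernel, then crop to the "valid" region.
        H, W = img.shape
        h, w = ker.shape
        shape = (H + h - 1, W + w - 1)
        prod = np.fft.rfft2(img, shape) * np.fft.rfft2(ker[::-1, ::-1], shape)
        full = np.fft.irfft2(prod, shape)
        return full[h - 1:H, w - 1:W]

    # sum(M*(T-I)^2) = sum(M*T^2) - 2*sum(M*T*I) + sum(M*I^2)
    const = np.sum(mask * template ** 2)
    cross = xcorr(image, mask * template)
    energy = xcorr(image ** 2, mask)
    return const - 2.0 * cross + energy
```

The best patch location is then `np.unravel_index(np.argmin(ssd_map(...)), ...)`; for RGB images the three channel maps are summed.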

There were other ideas I wanted to try but did not implement, since I spent the time on other parts of the assignment:

  • Since the images are already somewhat aligned, weight each candidate location by its distance from where the template would sit in the matching image. In other words, patches farther from the trivial compositing location receive an additional distance penalty.
  • Search multiple scales of all 120 images. This is again parallelizable, but currently too costly to run that many searches.
  • Save the top 20 or so matches whose patches had the lowest distances and compare them.
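The first idea above could be realized as a simple additive penalty on the SSD map that grows with distance from the location the template occupies in the source image. A hypothetical sketch, with names and the linear weighting my own assumptions:

```python
import numpy as np

def center_penalty(map_shape, source_pos, weight):
    """Penalty that grows linearly with Euclidean distance from
    `source_pos` (the template's location in the source image).
    Adding it to the SSD map biases the search toward patches
    that would composite trivially."""
    rows = np.arange(map_shape[0])[:, None]
    cols = np.arange(map_shape[1])[None, :]
    return weight * np.hypot(rows - source_pos[0], cols - source_pos[1])
```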

Results

Generally the algorithm produced poor results. This is likely because template matching over the entire image, picking the patch that minimizes the sum of squared differences of pixels in RGB space, completely ignores the features that originally aligned the two images.

[Results table: each row showed the source image, the hole, the search image, and the final result, together with the corresponding hole mask and selected patch. The images themselves are not reproduced here.]