Automated Face Morphing
Patrick Doran (pdoran)
cs195-g: Computational Photography, Spring 2010
Proposal
With this project, I seek to automatically match features between two faces and then morph the faces using the matched feature points as control points. I use GrabCut to suppress features located outside of the person in the portrait. I then use a modified version of multi-image matching to define corresponding points, relaxed to accept less exact matches between feature points. Finally, I morph the two faces using the corresponding points.
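To make the feature-suppression idea concrete, here is a minimal sketch (not the project code) of keeping Harris responses only where the GrabCut mask marks the person. It assumes OpenCV and NumPy; the file names, mask format, and threshold are purely illustrative.

    import cv2
    import numpy as np

    # Hypothetical inputs: a portrait and a binary GrabCut mask (nonzero = person).
    img = cv2.imread("portrait.png")
    fg_mask = cv2.imread("grabcut_mask.png", cv2.IMREAD_GRAYSCALE) > 0

    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)

    # Harris corner response at every pixel.
    response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

    # Suppress features outside the person: zero the response on background
    # pixels, then keep the strong responses as candidate feature points.
    response[~fg_mask] = 0
    ys, xs = np.nonzero(response > 0.01 * response.max())
    feature_points = list(zip(xs, ys))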
Algorithm

High-level description of the algorithm.

1 Extract the face using GrabCut.
  1. User selects a rectangle
  2. Fit foreground and background color GMMs, initialized with k-means
  3. Define graph terminal and neighbor weights. The neighbor weights are based on the color difference between neighboring pixels. The terminal weights are based on the probability that a pixel with color C belongs to the foreground GMM or the background GMM (a sketch of this weighting appears after this list)
  4. Segment using GraphCut
2 Define corresponding points
  1. Find Harris feature points
  2. Perform Adaptive Non-Maxima Suppression*
  3. Define Features
    • Edges
    • Luminance
  4. Match Feature Points (distance between features)
3 Composite the best replacement patch using graphcut and/or Poisson blending.

*ANMS has been removed to allow for more corresponding points to be chosen.
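To make step 1.3 concrete, below is a minimal sketch of the terminal and neighbor weights, assuming scikit-learn's GaussianMixture for the color models. The negative-log-likelihood data terms and the contrast-sensitive smoothness term (with the gamma and beta constants) follow the standard GrabCut formulation and are assumptions for illustration, not an exact description of my implementation; a min-cut on the resulting graph gives the segmentation of step 1.4.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def grabcut_weights(img, fg_colors, bg_colors, gamma=50.0):
        """Sketch of step 1.3: terminal and neighbor weights for the graph cut.

        img       : H x W x 3 float image
        fg_colors : N x 3 colors currently labeled foreground (inside the rectangle)
        bg_colors : M x 3 colors currently labeled background (outside the rectangle)
        """
        h, w, _ = img.shape
        colors = img.reshape(-1, 3)

        # Step 1.2: one GMM per region, fit to that region's colors
        # (k-means initialization is the library default).
        fg_gmm = GaussianMixture(n_components=5, covariance_type="full").fit(fg_colors)
        bg_gmm = GaussianMixture(n_components=5, covariance_type="full").fit(bg_colors)

        # Terminal weights: data cost of assigning each pixel to foreground or
        # background, i.e. the negative log-likelihood of its color under the
        # corresponding GMM.
        fg_cost = -fg_gmm.score_samples(colors).reshape(h, w)  # cost of labeling the pixel foreground
        bg_cost = -bg_gmm.score_samples(colors).reshape(h, w)  # cost of labeling the pixel background

        # Neighbor weights (right-hand neighbors shown): large when adjacent
        # colors are similar, so the cut prefers to pass along color edges.
        diff = img[:, 1:, :] - img[:, :-1, :]
        dist2 = np.sum(diff ** 2, axis=2)
        beta = 1.0 / (2.0 * dist2.mean() + 1e-8)
        right_weights = gamma * np.exp(-beta * dist2)

        return fg_cost, bg_cost, right_weights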

Foreground Extraction

Things I did not do that were in the GrabCut paper:

  • Border matting ("soft" segmentation)
  • Multiple iterations
  • User updates of foreground/background
  • Attempt difficult cases
Automatic Correspondence

Things I did not do that were in Brown et al.:

  • Rotation-invariant features
  • Multi-scale features

Adjustments I made:

  • Removed Adaptive Non-Maxima Suppression
  • Used a weighted combination of feature types: 1.0 × luminance distance + 0.5 × edge distance (a sketch of the matching appears after this list)
  • Used a larger error threshold between features, because matches will rarely be exact
  • RANSAC with a larger error threshold, since corresponding features are unlikely to land in exactly the same place
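Below is a minimal sketch of the adjusted matching, assuming each feature point already carries a luminance patch descriptor and an edge (gradient) patch descriptor as row vectors; the function name, descriptor layout, and threshold value are illustrative rather than taken from the project code.

    import numpy as np
    from scipy.spatial.distance import cdist

    def match_features(lum_a, edge_a, lum_b, edge_b, max_error=0.8):
        """Match feature points between images A and B.

        lum_a, lum_b   : N_a x D and N_b x D luminance patch descriptors
        edge_a, edge_b : N_a x D and N_b x D edge patch descriptors
        max_error      : deliberately loose threshold, since matches between
                         two different faces will rarely be exact
        """
        # Weighted combination of feature types: 1.0 * luminance + 0.5 * edges.
        dist = 1.0 * cdist(lum_a, lum_b) + 0.5 * cdist(edge_a, edge_b)

        matches = []
        for i in range(dist.shape[0]):
            j = int(np.argmin(dist[i]))
            if dist[i, j] < max_error:
                matches.append((i, j))

        # The surviving pairs would then be pruned with RANSAC, also using a
        # larger inlier threshold, before serving as morph control points.
        return matches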
Results

My implementation of GrabCut worked well on the easiest cases (all of the class photos). Since I did not implement multiple iterations or user updates, it does not work well on harder cases. Below is a failure case.

Failure Case: Extracting Ash from the Middle Ages

Automatic correspondence did not work as robustly as I had hoped, though I was pleasantly surprised by how well it worked on similar-looking people. Below is the best result. Tristan and Tim looked quite similar the day we took the photos.

Face morphing worked as well as it ever has.

Conclusion and Future Work

GrabCut was quite useful for suppressing features outside of the portraits; however, more work needs to be done before I can recover portraits from images with arbitrary backgrounds. Soft border matting probably won't be necessary, because I am not using the mask for compositing, only for feature suppression.

My assumptions about correspondence between faces were not entirely correct. Though I never expected luminance features to work well between Tom Hanks and Wesley Snipes, I did expect adding edge features to the luminance to help solve that problem. Perhaps more features are needed. I want this to work between two images without any foreknowledge so that it can morph between arbitrary foreground objects. Since I most want this to work between faces, though, template-matching features (eyes, nose, mouth, ears, etc.) may work better.

Future Work

  • Add user interaction and multiple iterations to GrabCut
  • Add more features and learn weights for them
  • Replace features with data-driven methods (template match known face features)