# Project 6: Automated Panorama Stitching

### Due Date: 11:59pm on Saturday, March 17th, 2010

## Brief

- This handout: `/course/cs195g/asgn/proj6/handout/`
- Stencil code: `/course/cs195g/asgn/proj6/stencil/`
- Data: `/course/cs195g/asgn/proj6/data/`
- Handin: `cs195g_handin proj6`
- Required files: README, code/, html/, html/index.html

## Requirements

You are required to implement the following:

- Recover homographies: calculate a projective transformation matrix given a list of at least 4 pairs of corresponding points (8 points total)
- Warp images given a transformation matrix
- Rectify images
- Composite the warped images into a mosaic
- Detect features
- Match features
- Robustly calculate a transformation matrix from those features (e.g. RANSAC)

## Details

MATLAB stencil code is available in `/course/cs195g/asgn/proj6/stencil/`.
You're free to do this project in whatever language you want, but the TAs are
only offering support in MATLAB. The TAs have supplied a few files to get you started.

- `definecorrespondence.m`: Click on an image and get lists of (X,Y) correspondence points between the two images. You will modify or replace it with automatic stitching.
- `dist2.m`: Calculates squared distances between two sets of points. It will be useful for comparing feature vectors.
- `getpts.m`: Click on an image, get (X,Y) points back.
- `harris.m`: An implementation of Harris corner detection. You may want to modify it to use different border lengths.

## Recover Homographies

You will be recovering a projective transformation `H`. H is a 3x3
matrix with 8 degrees of freedom. It is possible to solve for H from pairs of
corresponding points, since **q = Hp**. Each pair of points yields two linear
equations, so 4 pairs give the system of 8 linear equations you need to solve
for the 8 unknowns of H.
Here is a useful resource on
projective mappings.

You are not allowed to use `maketform` except to check your results.
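
One possible way to set up this system (a sketch of one common formulation that fixes the bottom-right entry of H to 1 and solves with backslash; `compute_homography` is a hypothetical name, not part of the stencil):

```matlab
% Sketch: recover H from n >= 4 corresponding points. p and q are n-by-2
% matrices of (x,y) points such that q ~ H*p in homogeneous coordinates.
% Each correspondence contributes two rows to the linear system A*h = b.
function H = compute_homography(p, q)
    n = size(p, 1);
    A = zeros(2*n, 8);
    b = zeros(2*n, 1);
    for i = 1:n
        x = p(i,1); y = p(i,2);
        u = q(i,1); v = q(i,2);
        % From u*(h7*x + h8*y + 1) = h1*x + h2*y + h3 (and similarly for v)
        A(2*i-1, :) = [x y 1 0 0 0 -u*x -u*y];
        A(2*i,   :) = [0 0 0 x y 1 -v*x -v*y];
        b(2*i-1) = u;
        b(2*i)   = v;
    end
    h = A \ b;            % least-squares solution when n > 4
    H = [h(1) h(2) h(3);
         h(4) h(5) h(6);
         h(7) h(8) 1];
end
```

With exactly 4 pairs this solves the system exactly; with more pairs, backslash gives the least-squares fit, which is what you will want once matching is automated.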

## Warp Images

Once you have the homography, you will need to warp images. You should map
the pixels in the warped image back to pixels in the input image so that you
don't end up with holes in your image. The MATLAB function `interp2`
will be useful for warping.
Here is a useful resource on
image warping. Note the following
when reading this resource:

- The authors are not necessarily doing a perspective transform
- The authors do not homogenize their result coordinates, i.e. divide by the homogeneous coordinate: (x, y, w)/w = (x/w, y/w, 1)
- The authors have their width and height backwards throughout the entire paper: it should be [h,w] = size(A)

You are not allowed to use `imtransform` except to check your results.
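
The inverse-mapping idea above can be sketched as follows. This assumes `H` maps input coordinates to output coordinates, and uses the input image's own size as the output bounds for simplicity; in practice you will want to compute the bounds from the warped corners of the image:

```matlab
% Sketch: inverse warping with interp2. For every pixel (X,Y) of the output
% canvas, find where it came from in the input image and sample there.
[h, w, nc] = size(im);
[X, Y] = meshgrid(1:w, 1:h);        % output pixel grid (example bounds)
Hinv = inv(H);
pts = Hinv * [X(:)'; Y(:)'; ones(1, numel(X))];
xs = reshape(pts(1,:) ./ pts(3,:), size(X));   % homogenize: divide by w
ys = reshape(pts(2,:) ./ pts(3,:), size(Y));
warped = zeros(h, w, nc);
for c = 1:nc
    % Sample the input; points that fall outside it become 0
    warped(:,:,c) = interp2(double(im(:,:,c)), xs, ys, 'linear', 0);
end
```

Because every output pixel is sampled from the input, there are no holes; forward-mapping input pixels into the output would leave gaps.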

## Rectify Images

Rectifying images simply means you should be able to turn a rectangle that has been perspectively transformed into an axis-aligned rectangle. You should pick four corners of a square and calculate the transform of those points to [0 0; 0 1; 1 0; 1 1] or something similar. If you know the aspect ratio of the rectangular object you photographed, you can use those coordinates instead of [0 0; 0 1; 1 0; 1 1].
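
In code this is just your homography solver with hand-picked targets (a sketch; `compute_homography` stands in for whatever homography routine you wrote, and the 300-pixel scale is an arbitrary example):

```matlab
% Sketch: rectify a perspectively distorted square. Click the corners with
% getpts; the clicked order must match the order of the target corners.
[x, y] = getpts;                           % click 4 corners, e.g. clockwise
corners = [x y];                           % 4-by-2 list of clicked points
target  = 300 * [0 0; 1 0; 1 1; 0 1];      % scaled unit square, same order
H = compute_homography(corners, target);   % then warp with your warper
```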

## Composite Warped Images

You will need to composite two (or more) images to make a panorama. It is your
choice on whether to incrementally warp one image to another or warp all the images
to one image. It is also your choice on how to composite. You can simply do over,
averaging, feathering (`bwdist` may help with that), GraphCut seams, Poisson
image blending or any other technique you feel like. You will not be penalized for
doing simple composite techniques, but your results may not look as good.
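
Feathering with `bwdist` can be sketched like this, assuming `im1` and `im2` are two warped images already placed on the same canvas, with logical coverage masks `mask1` and `mask2` marking where each warp has valid pixels:

```matlab
% Sketch: feathered compositing. bwdist(~mask) gives each pixel's distance
% to the image boundary, so weights fall off smoothly toward the edges.
w1 = bwdist(~mask1);                 % 0 outside image 1, grows inward
w2 = bwdist(~mask2);
wsum = w1 + w2;
wsum(wsum == 0) = 1;                 % avoid 0/0 where neither image covers
alpha = repmat(w1 ./ wsum, [1 1 size(im1, 3)]);
mosaic = alpha .* double(im1) + (1 - alpha) .* double(im2);
```

In the overlap region each image's weight is proportional to its distance from its own edge, which hides the seam; outside the overlap, each image keeps full weight.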

## Detect and Match Features

Up to this point, you have been using user-selected corresponding points. You may have noticed that the transformation is very sensitive to slight errors in the selected correspondences. Now you will fix that by automating the process. This part of the process is based on Brown et al. It is strongly recommended that you read this paper.

- Harris Interest Point Detector (section 2)- done for you
- Implement adaptive non-maximal suppression (section 3)
- Implement feature descriptor extraction (section 4) - Ignore rotation invariance, sub-pixel accuracy, and the wavelet transform. Remember to standardize each of your feature vectors: subtract the mean and divide by the standard deviation - the paper calls this normalizing.
- Implement feature matching (section 5) - Look at pairs of features and compute
the distance between them. This is why we give you
`dist2`. For thresholding, use the Lowe approach of thresholding the ratio between the first and second nearest neighbors.
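
The matching step with the Lowe ratio test can be sketched as follows, assuming `f1` and `f2` are n1-by-d and n2-by-d matrices of standardized descriptors; the 0.5 ratio threshold is an example value to tune:

```matlab
% Sketch: feature matching with the ratio of the 1st to 2nd nearest
% neighbor. dist2 is the provided squared-distance routine.
D = dist2(f1, f2);                    % n1-by-n2 squared distances
[sorted, idx] = sort(D, 2);           % each row: neighbors in f2 by distance
ratio = sorted(:,1) ./ sorted(:,2);   % small ratio => unambiguous match
good = find(ratio < 0.5);             % example threshold
matches = [good, idx(good, 1)];       % rows: [index into f1, index into f2]
```

The ratio test rejects features whose best match is barely better than their second-best, which is exactly where false matches concentrate.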

## Robustly Recover Homographies

With automatic feature extraction, you will have an overdetermined system of equations for H. Unfortunately, some of the matches may be false positives: points that passed the threshold but do not actually correspond. You will need to remove these outliers, as they will throw off your transformation matrix. A robust way to do that is the RANSAC algorithm. Each RANSAC sample needs four point pairs, the minimum required to determine a homography. The TAs suggest 1000 iterations, a minimum consensus of 10, and an error of 0.5. That is, try 1000 times to find a transform based on 4 point pairs such that at least 10 other transformed points land within 0.5 pixels (half a pixel) of their corresponding points.
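
That loop can be sketched as follows, with `p` and `q` as n-by-2 matched points and `compute_homography` standing in for whatever homography routine you wrote (hypothetical name):

```matlab
% Sketch: RANSAC with the suggested parameters (1000 iterations,
% 0.5 px error threshold, minimum consensus of 10).
best_inliers = [];
n = size(p, 1);
for iter = 1:1000
    s = randperm(n);
    s = s(1:4);                              % random minimal sample
    H = compute_homography(p(s,:), q(s,:));
    proj = H * [p'; ones(1, n)];
    proj = proj(1:2,:) ./ repmat(proj(3,:), 2, 1);   % homogenize
    err = sqrt(sum((proj' - q).^2, 2));      % reprojection error in pixels
    inliers = find(err < 0.5);
    if numel(inliers) > numel(best_inliers)
        best_inliers = inliers;
    end
end
if numel(best_inliers) >= 10
    % Refit H on all inliers for a better least-squares estimate
    H = compute_homography(p(best_inliers,:), q(best_inliers,:));
end
```

Refitting on the full inlier set at the end usually gives a noticeably better H than the best 4-point sample alone.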

## Write up

For this project, just like all other projects, you must do a project report in HTML. In the report you will describe your algorithm and any decisions you made to write your algorithm a particular way. Then you will show and discuss the results of your algorithm. Also discuss any extra credit you did. Feel free to add any other information you feel is relevant.

## Extra Credit

You are free to do whatever extra credit you can think of, but here are a few examples.

- (up to +5) Instead of projecting your mosaic onto a plane, project it onto a sphere or cylinder.
- (up to +5) Creative use of image warping and compositing: add graffiti to a wall, project a movie onto a wall, etc.
- (up to +15) Video panorama: combine videos taken from stationary locations
- (up to +5) Add multiscale processing for corner and feature detection.
- (up to +5) Add rotation invariance to the descriptors
- (up to +10) Panorama recognition: given a set of images that might form panoramas, automatically discover and stitch them together.
- (up to +10) Automatically create globe panoramas

## Graduate Credit

You are required to do at least 10 points of extra credit.

## Handing in

This is very important as you will lose points if you do not follow instructions. Every time after the first that you do not follow instructions, you will lose 5 points. The folder you hand in must contain the following:

- README - text file containing anything about the project that you want to tell the TAs
- code/ - directory containing all your code for this assignment
- html/ - directory containing all your html report for this assignment (including images)
- html/index.html - home page for your results

Then run: `cs195g_handin proj6`

If it is not in your path, you can run it directly: `/course/cs195g/bin/cs195g_handin proj6`

## Rubric

- +20 pts: Robustly recovering homographies (+10 for non-robust)
- +20 pts: Warping, rectifying and compositing images
- +30 pts: Detecting and matching feature points
- +10 pts: Creating at least 2 unique panoramas from your own images.
- +20 pts: Write up
- +20 pts: Extra credit (up to twenty points)
- -5*n pts: Lose 5 points for every time (after the first) you do not follow the instructions for the hand in format

## Final Advice

- Compare the results of your functions to the built-in MATLAB functions `maketform` and `imtransform`.
- Have fun with this and be creative with your images

## Credits

Project derived from Alexei A. Efros' Computational Photography course, with permission.