Structure from Multiple Image Views. Figure by Snavely et al.

Project 3: Camera Calibration and Fundamental
Matrix Estimation with RANSAC
CSCI 1430: Introduction to Computer Vision

Brief

Overview

The goal of this project is to introduce you to camera and scene geometry. Specifically, we will estimate the camera projection matrix, which maps 3D world coordinates to image coordinates, as well as the fundamental matrix, which relates points in one image to epipolar lines in the other. The camera projection matrix and the fundamental matrix can each be estimated using point correspondences. To estimate the projection matrix (intrinsic and extrinsic camera calibration), the input is corresponding 3D and 2D points. To estimate the fundamental matrix, the input is corresponding 2D points across two images. You will start out by estimating the projection matrix and the fundamental matrix for a scene with ground truth correspondences. Then you'll move on to estimating the fundamental matrix using point correspondences from SIFT and RANSAC.

Remember these challenging images of Gaudi's Episcopal Palace from Project 2? By using RANSAC to find the fundamental matrix with the most inliers, we can filter away spurious matches and achieve near-perfect point-to-point matching, as shown below:

Episcopal Gaudi

Details and Starter Code

This project consists of three parts:

  1. Estimating the projection matrix,
  2. Estimating the fundamental matrix, and
  3. Estimating the fundamental matrix reliably with RANSAC from unreliable SIFT matches.

These three tasks have top level Matlab scripts proj3_part1.m, proj3_part2.m, and proj3_part3.m respectively.

Part I: Camera Projection Matrix

Given known 3D to 2D point correspondences, how can we recover a projection matrix that transforms from world 3D coordinates to 2D image coordinates? Recall that using homogeneous coordinates the equation for moving from 3D world to 2D camera coordinates is:
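\[
s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
= M \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}
= \begin{pmatrix}
m_{11} & m_{12} & m_{13} & m_{14} \\
m_{21} & m_{22} & m_{23} & m_{24} \\
m_{31} & m_{32} & m_{33} & m_{34}
\end{pmatrix}
\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}
\]

Here (X, Y, Z) is a 3D world point, (u, v) is its 2D image location, and s is the homogeneous scale factor.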

Our task is to find M. We are going to set up a system of linear equations to find the least squares regression solution for these camera matrix parameters, given correspondences between 3D world points and 2D image points. In class, we saw two solutions to this problem, which both require rearranging the coefficients of the 3D and 2D point evidence:
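Dividing out the unknown scale s gives, for each correspondence between a 3D point (X, Y, Z) and a 2D point (u, v),

\[
u = \frac{m_{11} X + m_{12} Y + m_{13} Z + m_{14}}{m_{31} X + m_{32} Y + m_{33} Z + m_{34}},
\qquad
v = \frac{m_{21} X + m_{22} Y + m_{23} Z + m_{24}}{m_{31} X + m_{32} Y + m_{33} Z + m_{34}},
\]

which rearranges into two equations per point that are linear in the entries of M:

\[
m_{11} X + m_{12} Y + m_{13} Z + m_{14} - u \left( m_{31} X + m_{32} Y + m_{33} Z + m_{34} \right) = 0
\]
\[
m_{21} X + m_{22} Y + m_{23} Z + m_{24} - v \left( m_{31} X + m_{32} Y + m_{33} Z + m_{34} \right) = 0
\]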

With these rearrangements, we're able to set up our linear regression to find the elements of the matrix M. However, in projective geometry, the matrix M is only defined up to an unknown scale. In effect, these equations have infinitely many possible solutions, including the degenerate solution M = 0 (all zeros), which is not helpful in our context.

We will consider two ways around this:

  1. We can fix the scale by setting the last element (m_{34}) to 1 and then find the remaining coefficients,
  2. We can use the singular value decomposition to directly solve the constrained optimization problem in which the scale is set:
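\[
\min_{m} \; \| A m \|^{2} \quad \text{subject to} \quad \| m \|^{2} = 1,
\]

where A is the system matrix built from the rearranged equations and m is the vector of the twelve entries of M. The minimizer is the right singular vector of A associated with its smallest singular value.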

Having recovered a vector of values with one of these methods, we must reshape it into the estimated camera projection matrix M.
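For reference, a minimal MATLAB sketch of the SVD approach (method 2) might look like this; it assumes A is the 2n x 12 system matrix assembled from the rearranged equations, with the unknowns ordered row by row as [m11 ... m14 m21 ... m24 m31 ... m34].

  [~, ~, V] = svd(A);          % columns of V are the right singular vectors of A
  m = V(:, end);               % singular vector with the smallest singular value
  M = reshape(m, 4, 3)';       % reshape the 12-vector into the 3 x 4 matrix M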

To help you validate M, we provide evaluation code which computes the total "residual error" between the projected 2D location of each 3D point and the actual location of that point in the 2D image. The residual is the Euclidean distance in the image plane (square root of the sum of squared differences in u and v). This should be very small, on the order of a pixel. We also provide a set of "normalized points" in the files ./pts2d-norm-pic_a.txt and ./pts3d-norm.txt. The starter code will load these 20 corresponding normalized 2D and 3D points. If you solve for M using all the points, you should obtain a matrix that is a scaled equivalent of the following matrix:

Given this matrix, we can project 3D points in the world onto our camera plane. For example, this matrix will take the normalized 3D point < 1.2323, 1.4421, 0.4506, 1.0 > and project it to the 2D image point < u, v > of < 0.1419, −0.4518 > (after converting the homogeneous 2D point < us, vs, s > to its nonhomogeneous version by dividing by s).
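As a quick sanity check in MATLAB (assuming M holds your estimated projection matrix):

  X  = [1.2323; 1.4421; 0.4506; 1.0];   % normalized 3D point from above, homogeneous
  p  = M * X;                           % homogeneous 2D point [u*s; v*s; s]
  uv = p(1:2) / p(3);                   % divide by s to get the nonhomogeneous [u; v]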

Once we have an accurate projection matrix M, it is possible to tease it apart into the more familiar and more useful matrix K of intrinsic parameters and matrix [R | T] of extrinsic parameters. For this project we will only ask you to estimate one particular extrinsic parameter: the camera center in world coordinates. Let us define M as being made up of a 3x3 matrix called Q, with 4th column m_4:
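\[
M = \left[ \, Q \;\middle|\; m_4 \, \right]
\]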

The center of the camera C can then be found by:
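\[
C = -Q^{-1} m_4
\]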

If we use the normalized 3D points and the M given above, we obtain the camera center:

We've also provided a visualization which will show the estimated 3D location of the camera with respect to the normalized 3D point coordinates.

Part II: Fundamental Matrix Estimation

The next part of this project is to estimate the mapping of points in one image to lines in another by means of the fundamental matrix. This will require you to use similar methods to those in part 1. We will make use of the corresponding point locations listed in pts2d-pic_a.txt and pts2d-pic_b.txt. Recall that the definition of the fundamental matrix is:
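\[
\begin{pmatrix} u' & v' & 1 \end{pmatrix}
\begin{pmatrix}
f_{11} & f_{12} & f_{13} \\
f_{21} & f_{22} & f_{23} \\
f_{31} & f_{32} & f_{33}
\end{pmatrix}
\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = 0
\]

Here (u, v) and (u', v') are the image coordinates of a corresponding pair of points, one from each image.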

The fundamental matrix is sometimes defined as the transpose of the above matrix with the left and right image points swapped. Both are valid fundamental matrices, but the visualization functions in the starter code assume you use the above form.

Another way of writing this matrix equation is:
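\[
\begin{pmatrix} u' & v' & 1 \end{pmatrix}
\begin{pmatrix}
f_{11} u + f_{12} v + f_{13} \\
f_{21} u + f_{22} v + f_{23} \\
f_{31} u + f_{32} v + f_{33}
\end{pmatrix} = 0
\]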

This is the same as:
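\[
f_{11} u u' + f_{12} v u' + f_{13} u' + f_{21} u v' + f_{22} v v' + f_{23} v' + f_{31} u + f_{32} v + f_{33} = 0
\]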

With this, we can build a linear system to solve with least squares regression. For each unknown variable, we need one equation to constrain our solution. A pair of corresponding points will produce one equation for our system. However, as in part I, this matrix is only defined up to scale and the degenerate zero solution solves these equations. We need to fix the scale and solve using the same methods as in part 1. This leaves us able to solve the system with 8 point correspondences (or more, as we recover the best least squares fit to all points).
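As a reference, a minimal MATLAB sketch of this estimate might look like the following. It assumes Points_a and Points_b are n x 2 arrays of corresponding (u, v) locations (n >= 8) and the convention [u_b v_b 1] * F * [u_a v_a 1]' = 0; check that this matches the form your starter code expects.

  n  = size(Points_a, 1);
  ua = Points_a(:, 1);  va = Points_a(:, 2);
  ub = Points_b(:, 1);  vb = Points_b(:, 2);
  A  = [ua.*ub, va.*ub, ub, ua.*vb, va.*vb, vb, ua, va, ones(n, 1)];
  [~, ~, V] = svd(A);
  f = V(:, end);                 % fixes the scale by constraining ||f|| = 1
  F_full = reshape(f, 3, 3)';    % full-rank estimate; its rank is reduced next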

The least squares estimate of F is full rank; however, the fundamental matrix is a rank 2 matrix. As such, we must reduce its rank. To do this, we can decompose F using singular value decomposition into the matrices U Σ V' = F. We can then estimate a rank 2 matrix by setting the smallest singular value in Σ to zero, thus generating Σ_2. The fundamental matrix is then easily calculated as F = U Σ_2 V'. We can check our fundamental matrix estimation by plotting the epipolar lines using the plotting function provided in the starter code.
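A minimal MATLAB sketch of this rank-2 enforcement, assuming F_full is the full rank least squares estimate from above:

  [U, S, V] = svd(F_full);
  S(3, 3) = 0;                 % zero out the smallest singular value
  F = U * S * V';              % rank 2 fundamental matrix estimate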

Part III: Fundamental Matrix with RANSAC

Given two photographs of a scene, it is unlikely that all of the point correspondences we use to estimate the fundamental matrix will be correct. Our next task is to use RANSAC to reliably estimate a fundamental matrix from unreliable point correspondences computed with SIFT.

The starter code uses the computer vision software library VLFeat to perform SIFT matching for an image pair. We'll use these possible point correspondences and RANSAC to try to find a good fundamental matrix. We will iteratively choose a random set of point correspondences (e.g., 8, 9, or some small number of points), solve for the fundamental matrix using the function you wrote for part II, and then count the number of inliers. Inliers in this context are point correspondences that "agree" with the estimated fundamental matrix. To count how many inliers a fundamental matrix has, you will need a distance metric based on the fundamental matrix. (Hint: for a point correspondence (x', x), what properties does the fundamental matrix have?) You will also need to pick a threshold between inliers and outliers. The results will be very sensitive to this threshold, so explore a range of values to strike a balance between being too permissive and too strict. We must also decide on a stopping criterion for RANSAC, e.g., a fixed budget of thousands of iterations. Upon stopping, return the fundamental matrix with the most inliers.
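A minimal sketch of this loop in MATLAB might look like the following. The names matches_a and matches_b (n x 2 putative correspondences from SIFT), the iteration count, the threshold value, and the algebraic error |x_b' * F * x_a| are illustrative choices, not required ones; estimate_fundamental_matrix() is your part II function.

  num_iterations = 2000;
  threshold      = 0.005;                  % tune this; results are sensitive to it
  n  = size(matches_a, 1);
  xa = [matches_a, ones(n, 1)]';           % 3 x n homogeneous points, image a
  xb = [matches_b, ones(n, 1)]';           % 3 x n homogeneous points, image b
  best_F = [];  best_inliers = [];
  for iter = 1:num_iterations
      sample = randsample(n, 8);
      F = estimate_fundamental_matrix(matches_a(sample, :), matches_b(sample, :));
      err = abs(sum(xb .* (F * xa), 1));   % |x_b' * F * x_a| for every correspondence
      inliers = find(err < threshold);
      if numel(inliers) > numel(best_inliers)
          best_F = F;  best_inliers = inliers;
      end
  end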

Recall from lecture the expected number of iterations of RANSAC needed to find the "right" solution in the presence of outliers. For example, if half of your input correspondences are wrong, then a randomly chosen set of 8 correspondences has only a 0.5^8 ≈ 0.39% chance of being entirely correct, so many iterations are needed. The correct fundamental matrix must also have more inliers than the spurious matches can produce, else the problem is not solvable.
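More generally, if w is the inlier fraction, s the sample size (8 here), and p the desired probability of drawing at least one all-inlier sample, the required number of iterations is roughly

\[
N = \frac{\log(1 - p)}{\log\left(1 - w^{s}\right)}.
\]

For w = 0.5, s = 8, and p = 0.99 this is about 1,200 iterations, which is why a budget of thousands of iterations is reasonable.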

For some real images, the fundamental matrix that is estimated may imply an impossible relationship between the cameras, e.g., an epipole in the center of one image when the cameras actually have a large translation parallel to the image plane. The estimated fundamental matrix may also be incorrect because the world points are coplanar, or because the cameras are not actually pinhole cameras and have lens distortions. Still, these "incorrect" fundamental matrices tend to remove incorrect SIFT matches (and, unfortunately, many correct ones too).

Evaluation and Visualization

For part I, we have provided expected output (matrix M and camera center C). These are numerical estimates so we won't be checking for exact numbers, just approximately correct locations.

For part II, you will be evaluated based on your estimate of the fundamental matrix. You can test how good your estimate of the fundamental matrix is by drawing the epipolar lines on one image which correspond to a point in the other image. You should see all of the epipolar lines crossing through the corresponding point in the other image, like this:

Epipolar Lines

We provide you with a function in the starter code that draws an epipolar line in an image given the fundamental matrix, and a point from the other image.

Part III is the most open-ended part of this assignment. Your goal should be to demonstrate that you can estimate a fundamental matrix for a real image pair and use it to reject spurious keypoint matches. We may check the fundamental matrix you estimate, and you should visualize the epipolar lines in your writeup. The Gaudi image pair shown above is fairly difficult, and you might not be able to find a reasonable fundamental matrix without the extra credit described below. Notre Dame is also difficult because the keypoint matches are mostly coplanar and thus the fundamental matrix is not well constrained.

We do not include the keypoint accuracy evaluation used in project 2. You should be able to get near-perfect accuracy (very few outliers) among the keypoints you designate as inliers.

Data

We provide 2D and 3D ground truth point correspondences for the base image pair (pic_a.jpg and pic_b.jpg). Other images will not have any ground truth correspondences.

Setting Up VLFeat

First, download and uncompress the VLFeat 0.9.20 binary package. VLFeat functions are precompiled to be called from Matlab in Windows, Linux, and MacOS so you shouldn't need to do any compilation. The only setup required is making sure Matlab can see VLFeat. That process is described on the VLFeat install guide.

Useful Functions

svd(). This function returns the singular value decomposition of a matrix. Useful for solving the linear systems of equations you build and reducing the rank of the fundamental matrix.

inv(). This function returns the inverse of a matrix.

randsample(). Lets you pick random integers from a range. Useful for RANSAC.

The starter code uses the SIFT implementation from VLFeat and you aren't required to change it. In case you want to, here are the two VLFeat functions being used:
vl_sift(). This function returns SIFT descriptors from an image. You are allowed to use this function because you have already implemented the SIFT descriptor in project 2. You can use your own code if you want, but it is probably too slow.
vl_ubcmatch(). This function takes as input SIFT descriptors for two images, and returns the matches based on the ratio test that you implemented in project 2.

Banned Functions

You may not use the constrained least squares version of the MATLAB lsqlin() function. You may also not use any tools from the MATLAB computer vision toolbox for RANSAC or for estimating the fundamental matrix.

Write up

For this project, and all other projects, you must do a project report in HTML. In the report you will describe your algorithm and any decisions you made to write your algorithm a particular way. Discuss any extra credit you did and show what effect it had on the results (e.g., performance with and without each extra credit component).

Writeup questions to answer:

  1. Describe the difference between the essential and fundamental matrices.
  2. What does it mean when your epipolar lines: a) cross, b) radiate out of a point on the image plane, or c) converge to a point outside of the image plane?
  3. What is rectification, and why do we rectify image pairs?

Make sure to include the following details in writing up your method:

  1. Your estimate of the projection matrix and the camera center.
  2. Your estimate of the fundamental matrix for the base image pair (pic_a.jpg and pic_b.jpg).
  3. Several different images with the epipolar lines drawn on them and with the inlier keypoint correspondences shown. At least one of these pairs should be "correct" for full credit.

Extra credit

As discussed in lecture, your estimate of the fundamental matrix can be improved by normalizing the coordinates before computing the fundamental matrix. It is suggested you perform the normalization through linear transformations as described below to make the mean of the points zero and the average magnitude about 1.0 or some other small number (square root of 2 is also recommended).
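One common form of the normalizing transform (described in the next paragraph) is

\[
T =
\begin{pmatrix} s & 0 & 0 \\ 0 & s & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 & -c_u \\ 0 & 1 & -c_v \\ 0 & 0 & 1 \end{pmatrix}
\]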

The transform matrix T is the product of the scale and offset matrices. c_u and c_v are the mean coordinates. To compute a scale s you could estimate the standard deviation after subtracting the means. Then the scale factor s would be the reciprocal of whatever estimate of the scale you are using. Or you could find the maximum absolute value. Or you could scale the coordinates such that the average squared distance from the origin (after centering) is 2. You could use one scale matrix based on the statistics of the coordinates from both images, or you could do it per image.
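One possible normalization sketch in MATLAB for a single image's n x 2 point array, using the standard-deviation-based scale described above (the variable names are illustrative):

  n = size(points, 1);
  c = mean(points, 1);                          % mean coordinates [c_u, c_v]
  centered = points - repmat(c, n, 1);          % subtract the means
  s = 1 / std(centered(:));                     % reciprocal of the estimated spread
  T = [s 0 0; 0 s 0; 0 0 1] * [1 0 -c(1); 0 1 -c(2); 0 0 1];
  points_norm = (T * [points, ones(n, 1)]')';   % normalized homogeneous coordinates
  points_norm = points_norm(:, 1:2);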

However you scale your coordinates, you will need to use the scaling matrices to adjust your fundamental matrix so that it can operate on the original pixel coordinates, as follows:
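With T_a and T_b the normalizing transforms for the two images and F_norm the matrix estimated from normalized coordinates, and assuming the [u_b v_b 1] * F * [u_a v_a 1]' = 0 convention used earlier:

\[
F_{\mathrm{orig}} = T_{b}^{T} \, F_{\mathrm{norm}} \, T_{a}
\]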

This extra credit can be implemented in your function estimate_fundamental_matrix() written for part II, but you won't actually notice a difference in part II because the input correspondences are pretty much perfect. In part III (which calls estimate_fundamental_matrix()) you are more likely to notice an improvement. Alternatively, you could implement the scaling based on the distribution of all feature coordinates and not just the handful passed into estimate_fundamental_matrix(). In your writeup you should highlight one before and after case where normalization improved your results.

Rubric

Web-Publishing Results

The professor and TAs will select impressive projects to be highlighted in class and on the course website. If you do not want your results published to the web, you can choose to opt out. If you want to opt out, please email the instructor.

Hand-in process

We will be conducting anonymous TA grading, so please don't include your name or ID in your writeup.

The hand-in structure is very important as you will lose points if you do not follow instructions. Every time after the first that you do not follow instructions, you will lose 5 points. The folder you hand in must contain the following:

Hand in your project using the `cs1430_handin' script. Navigate to your completed homework directory structure (e.g., ~/CS1430/Proj3/), then call cs1430_handin from the terminal. Choose proj3. You should receive an email confirming your handin.

Credits

This project is based on a project from Aaron Bobick's offering of CS 4495 at GATech, and has been expanded and edited by Henry Hu, Grady Williams, and James Hays.