Making panoramas is very fun and satisfying. In this final project, I continued on from Project 6 and explored making cylindrical panoramas. The advantage of a cylindrical panorama over a rectilinear one is that it can capture a full 360-degree view of a scene. A rectilinear panorama fails at this task because the homography introduces increasingly severe warping as more images are added. For my project, I found a lot of helpful information in a previous year's final project.
Making a cylindrical panorama consists of five steps:

1. Calibrate the camera to find its focal length in pixels.
2. Project each image onto a cylinder using the focal length.
3. Find matching feature points between neighboring images.
4. Estimate the translation between images with RANSAC and stitch them, correcting for vertical drift.
5. Blend the images to hide the seams.

I will cover the details of each step in this report.
The focal length is the distance from the camera's optical center to the image plane. It determines the field of view (FOV) of the lens: the longer the focal length, the narrower the angle of view, and the shorter the focal length, the wider the angle of coverage. Given the focal length of a camera, we can compute the number of pictures needed for a panorama. The focal length is also used to approximate the cylindrical projection, which maps image coordinates to cylindrical coordinates.
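As a rough illustration, here is a small MATLAB sketch of that computation; the image width and overlap fraction below are assumed values for illustration, not numbers from my setup.

% Estimate the horizontal field of view and the number of shots needed
% for a full 360-degree sweep, given the focal length in pixels.
f = 543.9;        % focal length in pixels (from calibration)
W = 640;          % image width in pixels (assumed capture resolution)
overlap = 0.3;    % assumed fractional overlap between neighboring shots

hfov = 2 * atand(W / (2 * f));                 % horizontal FOV in degrees
nShots = ceil(360 / (hfov * (1 - overlap)));   % pictures needed for a full turn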
To obtain the focal length in pixels, one needs to calibrate the camera. I used the Caltech Camera Calibration Toolbox from Jean-Yves Bouguet to calibrate my camera.
The result I got for my Canon PowerShot S90 is the following:
Focal Length: fc = [ 541.78875 543.92857 ] +/- [ 7.64518 7.36463 ]
The two values of fc are the focal lengths expressed in horizontal and vertical pixel units, and the second bracket gives the uncertainty of each estimate. Since the two values are very close, I use a single fc for all of my computation.
This document from UW-Madison professor Charles Dyer explains in detail how to compute the approximate cylindrical projection using the focal length.
My conclusion from reading the document is that, given the focal length f and image coordinates (x, y) measured from the image center, the corresponding cylindrical coordinates (x', y') are approximately:

x' = f * atan(x / f)
y' = f * y / sqrt(x^2 + f^2)

Note that in this notation, x runs along the image width and y along the image height. It took me some time to convert this equation into MATLAB. This is also the reason to find the focal length in pixels, since x, y, x', and y' are all measured in pixels.
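My MATLAB conversion is not reproduced here, but a minimal sketch of the idea looks roughly like the following. It uses inverse mapping with nearest-neighbor sampling; the function name and the assumption that the principal point sits at the image center are mine, not taken from my actual code.

% Inverse cylindrical warp: for each pixel (xp, yp) of the cylindrical
% image, find the corresponding pixel (x, y) in the source image and copy it.
function warped = cylindricalWarp(im, f)
    [h, w, c] = size(im);
    xc = w / 2;  yc = h / 2;                 % assume principal point at the center
    warped = zeros(h, w, c, class(im));
    for xp = 1:w
        for yp = 1:h
            theta = (xp - xc) / f;                   % angle around the cylinder
            x = f * tan(theta) + xc;                 % back-project to the image plane
            y = (yp - yc) / cos(theta) + yc;
            if x >= 1 && x <= w && y >= 1 && y <= h
                warped(yp, xp, :) = im(round(y), round(x), :);  % nearest neighbor
            end
        end
    end
end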
The nice thing about a cylindrical panorama is that we can assume the alignment between neighboring images is a pure translation, with no rotation or scaling. I find at least 30 pairs of matching feature points and use RANSAC to find the translation that receives the most consensus. From the translation, I get the vertical drift at every stitching iteration and use that value to pad my images. Because this modifies the image boundaries, I had to adjust the Harris corner detector so that it does not classify the boundary as features.
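A minimal MATLAB sketch of RANSAC for a pure-translation model follows. The function name, argument names, and the assumption that pts1 and pts2 are Nx2 arrays of matched [x y] coordinates are my own for illustration, not taken from my actual code.

% Estimate the translation t such that pts1 + t is approximately pts2,
% keeping the hypothesis with the largest consensus set.
function t = ransacTranslation(pts1, pts2, nIter, thresh)
    n = size(pts1, 1);
    bestCount = 0;
    t = [0 0];
    for k = 1:nIter
        s = randi(n);                                % one match determines a translation
        cand = pts2(s, :) - pts1(s, :);
        d = sqrt(sum((pts1 + cand - pts2).^2, 2));   % error of every match
        count = sum(d < thresh);
        if count > bestCount
            bestCount = count;
            t = cand;
        end
    end
    % refine by averaging over the final consensus set
    d = sqrt(sum((pts1 + t - pts2).^2, 2));
    t = mean(pts2(d < thresh, :) - pts1(d < thresh, :), 1);
end

The vertical component of the recovered translation is the drift value used for padding.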
There are many ways to improve panorama stitching, such as seam removal and image blending. I used Poisson blending to improve my panorama. Graph cut would be a great algorithm for removing seams, but I ran out of time and had to leave it for the future. Another possible improvement is to stitch together HDR images; there are many options to explore. Finally, I regret taking my pictures in landscape mode, but by the time I realized the issue it was too late to retake so many pictures. I also should have rotated my panoramas before texture mapping them onto the cylinder.
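As a rough sketch of the Poisson blending idea (the function name, argument names, and all details below are illustrative, not my actual implementation), a single-channel blend can be posed as a sparse linear system: inside the masked region the solution's Laplacian is forced to match the source, while the target values fix everything else.

% Poisson blend one channel: solve for pixels inside the mask so that their
% Laplacian matches the source, with the target image fixed outside the mask.
function out = poissonBlend(src, tgt, mask)
    [h, w] = size(tgt);
    n = h * w;
    id = reshape(1:n, h, w);               % linear index of every pixel
    I = []; J = []; V = []; b = zeros(n, 1);
    for x = 1:w
        for y = 1:h
            p = id(y, x);
            if mask(y, x)
                nb = [];
                if y > 1, nb(end+1) = id(y-1, x); end
                if y < h, nb(end+1) = id(y+1, x); end
                if x > 1, nb(end+1) = id(y, x-1); end
                if x < w, nb(end+1) = id(y, x+1); end
                I(end+1) = p; J(end+1) = p; V(end+1) = numel(nb);
                for q = nb
                    I(end+1) = p; J(end+1) = q; V(end+1) = -1;
                end
                b(p) = numel(nb) * src(p) - sum(src(nb));   % Laplacian of the source
            else
                I(end+1) = p; J(end+1) = p; V(end+1) = 1;   % keep the target pixel
                b(p) = tgt(p);
            end
        end
    end
    A = sparse(I, J, V, n, n);
    out = reshape(A \ b, h, w);
end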
Here are the results of my work. Since the subject of this project is cylindrical panoramas, I decided to map my results onto a cylinder. I used three.js, a wrapper around WebGL, to create the 3D environment in the browser.