CS 129 Project 5 Writeup
Greg Yauney (gyauney)
12 November 2012

The high dynamic range algorithm has three parts:

1. The first is reconstructing the non-linear response curve from the multiple exposures. I did this by implementing the algorithm described in "Recovering High Dynamic Range Radiance Maps from Photographs" by Debevec and Malik, using the matrix structure discussed in class on 2 November 2012 (a sketch of the linear system appears after this list). Here is what the recovered relationship between exposure and pixel value looks like:

2. This curve is then used to compute the radiance of each pixel in the image: a weighted average of the radiance estimates for that pixel across the exposures, with the weighting chosen to minimize the effect of noisy and saturated pixels (see the merging sketch after this list). The resulting radiance map looks like this:

3. The final part is tone mapping, or turning this radiance map back into a displayable image. I implemented the brute-force bilateral filtering algorithm described in "Fast Bilateral Filtering for the Display of High-Dynamic-Range Images" by Durand and Dorsey (also sketched below). Here's a visualization of the decomposition of the image into a base layer and a detail layer (on the left and right, respectively):
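
For reference, here is a minimal numpy sketch of the least-squares system from part 1. The hat weighting, the smoothness term, and the g(128) = 0 anchor follow Debevec and Malik; the function name, the default smoothness weight lam, and the layout of the sampled pixel values Z are illustrative assumptions rather than the exact code used for this project.

    import numpy as np

    def recover_response_curve(Z, log_dt, lam=100.0, n_levels=256):
        """Solve the Debevec-Malik least-squares system for one color channel.

        Z      : (num_pixels, num_exposures) integer array of sampled pixel values
        log_dt : (num_exposures,) log exposure times
        lam    : smoothness weight (an assumed default; tune per image set)
        """
        num_px, num_exp = Z.shape

        def w(z):
            # Hat weighting: trust mid-range values, downweight ones near 0 and 255
            return np.minimum(z, (n_levels - 1) - z) + 1

        # Rows: one per (pixel, exposure) data term, one anchor, n_levels - 2 smoothness terms
        A = np.zeros((num_px * num_exp + n_levels - 1, n_levels + num_px))
        b = np.zeros(A.shape[0])

        k = 0
        # Data terms: g(Z_ij) - ln(E_i) = ln(dt_j), each weighted by w(Z_ij)
        for i in range(num_px):
            for j in range(num_exp):
                wij = w(Z[i, j])
                A[k, Z[i, j]] = wij
                A[k, n_levels + i] = -wij
                b[k] = wij * log_dt[j]
                k += 1

        # Anchor the curve so the solution is unique: g(128) = 0
        A[k, n_levels // 2] = 1
        k += 1

        # Smoothness terms: penalize the second derivative of g
        for z in range(1, n_levels - 1):
            A[k, z - 1] = lam * w(z)
            A[k, z] = -2 * lam * w(z)
            A[k, z + 1] = lam * w(z)
            k += 1

        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        g = x[:n_levels]      # log exposure as a function of pixel value (the response curve)
        log_E = x[n_levels:]  # log irradiance of each sampled pixel
        return g, log_E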

  
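A similarly minimal sketch of the weighted merge from part 2, under the same assumptions (single-channel uint8 exposures; the function name is illustrative):

    import numpy as np

    def merge_radiance(images, g, log_dt, n_levels=256):
        """Combine the exposures into a log radiance map using the recovered curve g.

        images : list of (H, W) uint8 arrays, one per exposure (single channel)
        g      : (n_levels,) log inverse response curve from the previous step
        log_dt : (num_exposures,) log exposure times
        """
        w = lambda z: np.minimum(z, float(n_levels - 1) - z) + 1.0
        num = np.zeros(images[0].shape, dtype=np.float64)
        den = np.zeros_like(num)
        for img, ldt in zip(images, log_dt):
            wz = w(img.astype(np.float64))  # per-pixel weight for this exposure
            num += wz * (g[img] - ldt)      # g[img] looks up the log exposure per pixel
            den += wz
        # Weighted average of the per-exposure log radiance estimates
        return num / np.maximum(den, 1e-6)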


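And a sketch of part 3: a brute-force bilateral filter plus the base/detail decomposition and base-layer compression described by Durand and Dorsey. The parameter values (sigma_s, sigma_r, target_contrast) and the use of the mean channel as luminance are assumptions, not necessarily the settings behind the images below.

    import numpy as np

    def bilateral_filter(img, sigma_s=2.0, sigma_r=0.4):
        """Brute-force bilateral filter on a single-channel float image.
        O(pixels * window^2), which is why the inputs were downsampled first."""
        H, W = img.shape
        radius = int(np.ceil(2 * sigma_s))
        out = np.empty_like(img)
        for y in range(H):
            for x in range(W):
                y0, y1 = max(0, y - radius), min(H, y + radius + 1)
                x0, x1 = max(0, x - radius), min(W, x + radius + 1)
                patch = img[y0:y1, x0:x1]
                yy, xx = np.mgrid[y0:y1, x0:x1]
                # Spatial Gaussian times range Gaussian, both centered on (y, x)
                spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
                rng = np.exp(-((patch - img[y, x]) ** 2) / (2 * sigma_r ** 2))
                weights = spatial * rng
                out[y, x] = np.sum(weights * patch) / np.sum(weights)
        return out

    def tone_map(radiance, target_contrast=5.0):
        """Base/detail tone mapping in the style of Durand and Dorsey.
        radiance : (H, W, 3) linear radiance map; parameter values are assumptions."""
        eps = 1e-9
        intensity = radiance.mean(axis=2) + eps   # simple per-pixel luminance proxy
        log_i = np.log10(intensity)
        base = bilateral_filter(log_i)            # large-scale (base) layer
        detail = log_i - base                     # fine-scale (detail) layer
        # Compress only the base layer; the detail layer keeps its full contrast
        scale = np.log10(target_contrast) / (base.max() - base.min())
        log_out = base * scale + detail - base.max() * scale  # max of base maps to 0
        out_intensity = 10.0 ** log_out
        # Put color back by scaling each channel by the new/old intensity ratio
        return radiance / intensity[..., None] * out_intensity[..., None]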
Here are the final HDR images (downsampled so that the algorithm would run much faster), with simple linear scaling on the left, the log of the radiance values in the middle, and the bilateral-filtered version on the right. The linear and log images show why local rather than global tone mapping is required.
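
For comparison, the two global operators could be produced along these lines (the exact scaling used for the images here is not recorded in the writeup, so treat this as one plausible choice):

    import numpy as np

    def global_displays(radiance):
        """The two naive global operators shown for comparison.
        radiance : (H, W, 3) linear radiance map."""
        # Linear: squash the whole range into [0, 1]; a few bright pixels dominate
        linear = (radiance - radiance.min()) / (radiance.max() - radiance.min())
        # Log: compress the range first; better, but still a single global curve
        log_r = np.log(radiance + 1e-9)
        log_scaled = (log_r - log_r.min()) / (log_r.max() - log_r.min())
        return linear, log_scaled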





As you can see, the bilateral-filtered images aren't what I hoped for. Although some images do look fine (like the parking garage and the mug), most do not. Emanuel helped me narrow the problem down to my bilateral filtering routine, but I haven't found the bug yet.



I unfortunately didn't implement anything above and beyond the base requirements for extra credit.