CS129 / Project 5 / High Dynamic Range

HDR Garage

This project has two parts: first, taking an image stack and recovering a function that can map between pixel values and radiance; second, taking a radiance map generated with that data and using bilateral filtering to implement a local tone mapping operation.

Algorithm: Radiance Map Construction

The idea for this part of the algorithm is to take several pictures of the same scene with different exposure times. Each point in the scene sends the same number of photons toward the camera during each shot, but the camera collects fewer of them the shorter the exposure time is. The number of photons arriving at the camera is indicative of the radiance at each pixel in the image. The pixel value in a given image should take the form Z = f(E * t), where Z is the pixel value, E is the intrinsic radiance of that pixel, and t is the exposure time of the image. f is a function (not necessarily a linear one) which maps between incoming photons and recorded pixel values. If we can discover f, then we can calculate a radiance map for every pixel in the scene, which we can then use to reconstruct an HDR image of the scene. To do this, we actually work with the log of the inverse of f (called g), which we find by using the rearranged equation ln(E) = g(Z) - ln(t). This is explained in more detail in this paper.
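Written out with subscripts (E_i for the radiance at sample point i, Δt_j for the exposure time of image j), the derivation of that rearranged equation is:

```latex
Z_{ij} = f(E_i \,\Delta t_j)
\;\Longrightarrow\;
f^{-1}(Z_{ij}) = E_i \,\Delta t_j
\;\Longrightarrow\;
g(Z_{ij}) \equiv \ln f^{-1}(Z_{ij}) = \ln E_i + \ln \Delta t_j
```

Rearranging the last expression gives ln(E_i) = g(Z_ij) - ln(Δt_j).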

To solve this problem, we construct a system of linear equations: we sample many points in each image and require that each sampled scene point has the same radiance across every exposure. Our goal is to discover what g is, so we can reliably construct a radiance map. At this point, our system of linear equations is underconstrained, so we add equations that force the second derivative of g to be roughly 0 at every pixel value. The last thing we add is a weighting function that weights contributions from pixels near the middle of the range of possible color values more heavily than contributions from pixels near the extremes: a pixel value close to 128 gives a fairly accurate representation of the radiance at that point in the scene, while values near 0 or 255 are close to under- or overexposed. Once we have all these equations, we solve for g and use it to reconstruct the radiance map, still taking the same mid-range weighting into account.
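As a concrete sketch of that least-squares system (not my actual implementation — function and parameter names here are made up, and it assumes 8-bit pixel values and the triangle-shaped mid-range weighting described above):

```python
import numpy as np

def solve_g(Z, log_t, smooth_lambda=100.0, n_levels=256):
    """Recover the log inverse response g and the log radiances of the
    sampled points. Z is (n_pixels x n_images) of sampled pixel values;
    log_t is the log exposure time of each image."""
    n_px, n_im = Z.shape
    n_rows = n_px * n_im + (n_levels - 2) + 1
    n_cols = n_levels + n_px
    A = np.zeros((n_rows, n_cols))
    b = np.zeros(n_rows)

    def w(z):
        # triangle weighting: peaks at mid-range, small near 0 and 255
        return np.minimum(z, (n_levels - 1) - z) + 1

    k = 0
    # data equations: w * (g(Z_ij) - ln E_i) = w * ln t_j
    for i in range(n_px):
        for j in range(n_im):
            z = Z[i, j]
            wij = w(z)
            A[k, z] = wij              # coefficient of g(Z_ij)
            A[k, n_levels + i] = -wij  # coefficient of ln(E_i)
            b[k] = wij * log_t[j]
            k += 1
    A[k, n_levels // 2] = 1.0  # pin g(128) = 0 to fix the overall scale
    k += 1
    # smoothness equations: second derivative of g is roughly 0
    for z in range(1, n_levels - 1):
        wz = w(z)
        A[k, z - 1] = smooth_lambda * wz
        A[k, z] = -2 * smooth_lambda * wz
        A[k, z + 1] = smooth_lambda * wz
        k += 1
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:n_levels], x[n_levels:]  # g over [0,255], ln(E) per sample
```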

Results: Radiance Map Construction

Here we have a plot of the g function for the garage image. You can also see a color visualization of the radiance map generated by the algorithm. I've also included my Durand version of the image, for comparison to the radiance map.

Algorithm: Local and Global Tone Mapping

For this algorithm, we follow a modified version of Durand's HDR tone mapping algorithm. The idea is to use a bilateral filter to separate the radiance map into "background" data and "detail" or "texture" data. Once we have these two separate sets of data, we scale the background data into the 0 to 1 range and then reapply the detail. We do this in the log domain, which avoids overflow and makes the calculations easier. We also convert the radiance map to grayscale and calculate the chrominance of each RGB channel. We use the grayscale intensity for the bilateral filtering and the rest of the calculation, and at the very end we reapply color by multiplying by the chrominance at each pixel. The last step is gamma compression to account for display gamma; in this case I used a gamma of 0.5 for all images. When scaling my background data, I assumed around 4 stops of dynamic range.
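A rough sketch of that pipeline in NumPy (names are hypothetical; the bilateral filter is passed in as a black box, and anchoring the compressed background at its maximum is one common choice, not necessarily exactly what my implementation does):

```python
import numpy as np

def durand_tone_map(radiance, bilateral, gamma=0.5, stops=4.0):
    """Modified Durand pipeline sketch. `radiance` is an HxWx3 linear
    radiance map; `bilateral` is any edge-preserving smoother that maps
    an HxW array to an HxW array."""
    eps = 1e-6
    intensity = radiance.mean(axis=2) + eps        # grayscale intensity
    chroma = radiance / intensity[..., None]       # per-channel chrominance
    log_i = np.log(intensity)                      # work in the log domain
    base = bilateral(log_i)                        # "background" layer
    detail = log_i - base                          # "detail"/"texture" layer
    # compress the background so it spans ~`stops` stops, anchored at its max
    scale = np.log(2.0 ** stops) / (base.max() - base.min() + eps)
    base_c = (base - base.max()) * scale
    out_i = np.exp(base_c + detail)                # recombine, leave log domain
    out = out_i[..., None] * chroma                # reapply color
    return np.clip(out, 0.0, 1.0) ** gamma         # gamma compression
```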

Bilateral filtering uses a spatial Gaussian to weight contributions from nearby pixels, but it also uses an intensity Gaussian to down-weight contributions from pixels whose intensities differ greatly from that of the center pixel. This preserves edges and detail: in high-gradient areas, the difference in intensities dominates the spatial closeness of two pixels, while in smoother areas the spatial Gaussian dominates, so noise there is reduced. In my algorithm, I used a cached spatial Gaussian with a radius of 20 pixels and a standard deviation of 15. I computed the intensity Gaussian on the fly, with a standard deviation of 0.1 × the range of log-intensity values.
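A brute-force sketch of that filter with the parameters quoted above (illustrative only; a real implementation would need to be vectorized or otherwise accelerated to be practical at a 20-pixel radius):

```python
import numpy as np

def bilateral(log_i, radius=20, sigma_s=15.0, sigma_r_frac=0.1):
    """Bilateral filter on a log-intensity image: cached spatial Gaussian
    (radius 20, sigma 15) times a range Gaussian with
    sigma = 0.1 * (range of log intensities), computed per pixel."""
    sigma_r = max(sigma_r_frac * (log_i.max() - log_i.min()), 1e-6)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # cached once
    pad = np.pad(log_i, radius, mode='edge')
    out = np.empty_like(log_i)
    H, W = log_i.shape
    for y in range(H):
        for x in range(W):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # range Gaussian: down-weights intensities far from the center
            rng = np.exp(-(patch - log_i[y, x])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out
```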

For the sake of comparison, I also implemented a global tone mapping operation in which each pixel is mapped as Z = L/(1+L), where Z is the pixel value and L is the radiance.
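This global operator is essentially a one-liner (the sketch below applies it per element; whether it runs on intensity or on each channel is an implementation choice not spelled out above):

```python
import numpy as np

def global_tone_map(L):
    """Global operator Z = L / (1 + L): compresses any non-negative
    radiance into the [0, 1) range, applied independently per element."""
    return L / (1.0 + L)
```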

Results: Global and Local Tone Mapping

Here is my result for the Window image, with the detail and bilaterally filtered results of the radiance map.

Below you'll find a simple directly scaled version, L/(1+L) version and Durand version of each of the test images.
Global Scale | Global L/(1+L) | Local with Durand Algorithm

Discussion

This algorithm seems to work reasonably well. The weakness of capturing multiple exposures shows in the garden and house examples, where misalignment between shots produces noticeable unintentional blurring. The images sometimes look a little washed out, but that is a matter of tuning the algorithm's parameters, including the bilateral filter's parameters and the way the background portion of each image is scaled.