Dynamic range is "the luminance range of a scene being photographed," and modern cameras often do a poor job of capturing the full dynamic range of a scene. Scenes with high dynamic range usually suffer from either over-exposure in the brighter regions or under-exposure in the darker regions. In this project, a pipeline has been implemented to reconstruct a high dynamic range image of a scene from a set of low dynamic range images of that scene captured under different exposures, so that the reconstructed image is better exposed in both the darker and the brighter regions. The pipeline consists of two steps: radiance map reconstruction and tone mapping. Radiance map reconstruction is based on Debevec and Malik 1997, and a local tone mapping method based on Durand 2002 is used for tone mapping. During tone mapping, a method based on Paris 2006 is used to speed up the bilateral filtering.

A radiance map is an image that represents the true radiance values of a scene. Radiance map reconstruction itself consists of two steps: 1) recovering, for each of the three color channels, the response curve that maps pixel values to log exposure values, and 2) mapping the observed pixel values and exposure times to radiance.

The intuition behind response curve recovery is that all the pixel values for a given color channel in an image are outputs of the same function f, which takes as input the product of the radiance at the corresponding point in the scene and the exposure time. The response curve g that we would like to recover is the inverse of f. As there are only 256 possible pixel values, we only need to find g's outputs for the 256 values 0-255. There are three constraints: 1) minimize the sum of squared differences between the outputs of g and the actual logs of the products of radiance and exposure time, 2) minimize the second derivative of g to make g smoother, and 3) fix g(128) to the same value for all three color channels so that the channels are aligned. We also assign different weights to different pixel values to favor those in the middle, as pixel values at the two extremes are more likely to be under- or over-exposed and therefore unreliable.
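The three constraints above can be stacked into one linear least-squares system, in the spirit of Debevec and Malik's gsolve. A minimal numpy sketch (the function and variable names here are illustrative, not taken from the report):

```python
import numpy as np

def gsolve(Z, ln_dt, lam, w):
    """Recover the log response curve g (256 values) and per-pixel log
    radiance from samples Z (num_pixels x num_images, values 0-255),
    log exposure times ln_dt, smoothness weight lam, and weights w[z]."""
    n = 256
    P, J = Z.shape
    # Rows: P*J data constraints + 1 anchor + (n - 2) smoothness constraints.
    A = np.zeros((P * J + n - 1, n + P))
    b = np.zeros(P * J + n - 1)
    k = 0
    # Data-fitting: w(Z_ij) * (g(Z_ij) - ln E_i) = w(Z_ij) * ln dt_j
    for i in range(P):
        for j in range(J):
            wij = w[Z[i, j]]
            A[k, Z[i, j]] = wij
            A[k, n + i] = -wij
            b[k] = wij * ln_dt[j]
            k += 1
    # Anchor the curve so the system has a unique solution: g(128) = 0
    A[k, 128] = 1.0
    k += 1
    # Smoothness: lam * w(z) * (g(z-1) - 2 g(z) + g(z+1)) = 0
    for z in range(1, n - 1):
        A[k, z - 1] = lam * w[z]
        A[k, z] = -2 * lam * w[z]
        A[k, z + 1] = lam * w[z]
        k += 1
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    return x[:n], x[n:]  # g, ln E
```

Anchoring g(128) = 0 in each channel also satisfies the third constraint, since all three channels then agree at the midpoint.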

The recovered response curves can then be used to compute the radiance map via ln E_i = g(Z_ij) - ln Δt_j, where E_i is the radiance at the i-th pixel, Z_ij is the i-th pixel value in image j, and Δt_j is the exposure time of image j. Note that the weighting function used in response curve recovery has been applied in the same manner here: each pixel's log radiance is a weighted average of the estimates from all exposures.
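The weighted average above can be sketched as follows for one color channel (a minimal illustration, assuming the recovered curve g is stored as a 256-entry lookup table):

```python
import numpy as np

def radiance_map(images, g, ln_dt, w):
    """Weighted per-pixel log radiance for one color channel.
    images: list of (H, W) uint8 arrays under different exposures,
    g: 256-entry recovered response curve (log exposure per pixel value),
    ln_dt: log exposure times, w: 256-entry weighting function."""
    num = np.zeros(images[0].shape, dtype=float)
    den = np.zeros(images[0].shape, dtype=float)
    for img, lt in zip(images, ln_dt):
        wi = w[img]                  # weight of each observed pixel value
        num += wi * (g[img] - lt)    # g(Z_ij) - ln(dt_j), weighted
        den += wi
    # Pixels saturated in every exposure get zero total weight; guard them.
    return num / np.maximum(den, 1e-8)
```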

The reconstructed radiance maps for some of the test scenes are shown below, together with the recovered response curves. For the first two scenes, the radiance map is displayed by linearly mapping the lower 0.1% of its dynamic range; for the latter two, the log radiance map is displayed.

An average-exposed input image | Radiance Map | Recovered response curves |

The reconstructed radiance map can then be used to produce a displayable image of the high dynamic range scene. Two baseline methods have been implemented for comparison with the bilateral filtering method. In the first (global scale), the values in the radiance map are simply compressed linearly into 0-255 to produce the result. In the second (global simple), each pixel p of the radiance map is first mapped to p / (p + 1) before the linear compression. The comparisons with the bilateral filtering method are shown in result 3.
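The two baselines are small enough to sketch directly (function names are my own):

```python
import numpy as np

def global_scale(rad):
    """Baseline 1: linearly compress radiance values into 0-255."""
    r = rad - rad.min()
    return np.round(255.0 * r / r.max()).astype(np.uint8)

def global_simple(rad):
    """Baseline 2: map each pixel p to p / (p + 1), then compress linearly."""
    return global_scale(rad / (rad + 1.0))
```

The p / (p + 1) curve squashes large radiance values toward 1, so mid-range pixels end up brighter than under a purely linear compression dominated by a few very bright pixels.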

In the bilateral filtering method, the intensity of the radiance map is computed and converted to the log domain. A bilateral filter is then applied to the log intensity to obtain the base, i.e. the large-scale structure of the image. In the implementation, the standard deviation of the spatial Gaussian is 10, while that of the intensity Gaussian is 4. Subtracting the base from the log intensity gives the detail layer. The detail layer is left intact while the base is compressed. Finally, gamma compression with factor 0.9 is applied to produce the final output. The base layer and the detail layer are shown below.
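The base/detail decomposition can be sketched as below. The brute-force bilateral filter and the Durand-style base compression are minimal illustrative versions: the target contrast, the wrap-around edge handling, and the helper names are assumptions, not taken from the report.

```python
import numpy as np

def bilateral(img, sigma_s=10.0, sigma_r=4.0, radius=20):
    """Brute-force bilateral filter on a 2D array (edges wrap for brevity)."""
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(img, (dy, dx), axis=(0, 1))
            w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                       - (shifted - img) ** 2 / (2 * sigma_r ** 2))
            out += w * shifted
            norm += w
    return out / norm

def durand_tonemap(rad, target_contrast=np.log(50.0), gamma=0.9):
    """Durand-style tone mapping: compress the base, keep the detail."""
    intensity = rad.mean(axis=2) + 1e-9          # (H, W, 3) -> (H, W)
    log_i = np.log(intensity)
    base = bilateral(log_i)                      # large-scale structure
    detail = log_i - base                        # detail = log intensity - base
    scale = target_contrast / max(base.max() - base.min(), 1e-9)
    log_out = base * scale + detail              # compress base only
    log_out -= log_out.max()                     # map peak intensity to 1
    out = rad / intensity[..., None] * np.exp(log_out)[..., None]
    return np.clip(out, 0.0, 1.0) ** gamma       # final gamma compression
```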

Intensity | Base after bilateral filtering | The detail layer |

Global scale | Global simple | Durand |

To speed up the bilateral filtering, an alternative to the brute-force approach has been implemented. The intuition behind it is that we can treat the image intensity as a third dimension and perform convolutions with a 3D Gaussian kernel. After this reformulation, the filtering result can be obtained by dividing the convolution of the Gaussian kernel with the data by the convolution of the kernel with the weights. To further reduce running time, the image is downsampled before the convolution and upsampled back afterwards. In the implementation, the sampling rate for the spatial domain is 5, and that for the intensity is 2.
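A sketch of this grid-based approach, in the spirit of Paris 2006: accumulate the data and the weights into a downsampled 3D (y, x, intensity) grid, blur both with a 3D Gaussian, divide, and interpolate the result back at each pixel. The helper names, and the use of scipy for the 3D convolution and the trilinear slicing, are my own assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def fast_bilateral(img, sigma_s=10.0, sigma_r=4.0, ss=5.0, sr=2.0):
    """Approximate bilateral filtering of a 2D array via a downsampled 3D
    grid; ss and sr are the spatial and intensity sampling rates."""
    H, W = img.shape
    imin = img.min()
    # Grid dimensions, with one cell of padding on each side
    gh = int((H - 1) / ss) + 3
    gw = int((W - 1) / ss) + 3
    gd = int((img.max() - imin) / sr) + 3
    data = np.zeros((gh, gw, gd))
    weights = np.zeros((gh, gw, gd))
    yy, xx = np.mgrid[0:H, 0:W]
    gy = np.round(yy / ss).astype(int) + 1
    gx = np.round(xx / ss).astype(int) + 1
    gz = np.round((img - imin) / sr).astype(int) + 1
    np.add.at(data, (gy, gx, gz), img)      # accumulate intensities
    np.add.at(weights, (gy, gx, gz), 1.0)   # accumulate counts
    sigmas = (sigma_s / ss, sigma_s / ss, sigma_r / sr)  # in grid cells
    data = gaussian_filter(data, sigmas)
    weights = gaussian_filter(weights, sigmas)
    smoothed = data / np.maximum(weights, 1e-10)
    # Slice: trilinear interpolation at each pixel's (y, x, intensity)
    coords = np.stack([yy / ss + 1, xx / ss + 1, (img - imin) / sr + 1])
    return map_coordinates(smoothed, coords.reshape(3, -1),
                           order=1).reshape(H, W)
```

The speedup comes from the convolutions running on the small grid rather than on the full-resolution image.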

The 3D method has a significantly reduced running time compared with the brute-force approach. With this implementation, every test image can be filtered in about one second, while the brute-force approach requires 20 to 100 seconds. The chart below shows the comparison, where the red bars are the running times for the brute-force approach; the running time of the fast bilateral filtering implementation is almost negligible in comparison.