In the real world, lighting spans a far larger dynamic range than modern cameras can capture in a single shot. To recover the real-world radiances, photographs are taken at multiple exposure times and then combined. First, the inverse response function g, which maps a pixel value to log exposure, is estimated.
[Figure: recovered response curves — g for garage, g for arch]
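For reference, recovering g this way is usually done with a Debevec–Malik style least-squares fit. Below is a minimal sketch of that solve; the function name `gsolve`, the sample matrix `Z`, and the smoothness weight `lam` are my own naming and not necessarily what the project code uses.

```python
import numpy as np

def gsolve(Z, log_t, lam, w):
    """Recover the inverse response curve g (pixel value -> log exposure).

    Z      : (N, P) uint8 samples, N pixel locations observed at P exposures
    log_t  : (P,) log exposure times
    lam    : smoothness weight
    w      : (256,) weighting function over pixel values
    """
    n = 256
    N, P = Z.shape
    A = np.zeros((N * P + n + 1, n + N))
    b = np.zeros(A.shape[0])

    # Data-fitting equations: g(Z_ij) - ln E_i = ln t_j, each weighted by w
    k = 0
    for i in range(N):
        for j in range(P):
            wij = w[Z[i, j]]
            A[k, Z[i, j]] = wij
            A[k, n + i] = -wij
            b[k] = wij * log_t[j]
            k += 1

    # Pin the curve by forcing g(128) = 0
    A[k, 128] = 1
    k += 1

    # Smoothness equations on the second derivative of g
    for z in range(1, n - 1):
        A[k, z - 1] = lam * w[z]
        A[k, z] = -2 * lam * w[z]
        A[k, z + 1] = lam * w[z]
        k += 1

    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:n], x[n:]   # g curve, log radiances of the sampled pixels
```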
This function is then used to compute the radiance of each pixel in each channel. The per-exposure radiance estimates are combined with a weighting that downweights pixel values near the two extremes, where the sensor clips, as in the sketch below.
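This is a sketch of that weighted combination, assuming a hat-shaped weighting and the g returned by a gsolve-style fit; the names here are illustrative.

```python
import numpy as np

# Hat weighting: near zero at the clipped extremes, largest mid-range
w = np.array([z if z <= 127 else 255 - z for z in range(256)], dtype=np.float64)

def log_radiance_map(images, log_t, g, w=w):
    """Fuse one channel of several exposures into a log-radiance map.

    images : list of (H, W) uint8 arrays, one per exposure
    log_t  : log exposure time for each image
    g      : (256,) inverse response curve (pixel value -> log exposure)
    """
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for img, lt in zip(images, log_t):
        wij = w[img]
        num += wij * (g[img] - lt)   # each exposure's estimate of ln(E)
        den += wij
    return num / np.maximum(den, 1e-8)
```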
At first I remapped only a single exposure, but areas where the radiance was clipped were lost. So instead, I combine all of the exposures, weighting mid-range values more heavily to compensate for over- and underexposed pixels. The following images show the resulting high-dynamic-range radiance maps displayed with global simple scaling and with global natural log scaling (see the sketch after these images).
[Figure: Bonsai, Garage, Window, Arch, and Mug scenes; top row: global simple scaling, bottom row: global natural log scaling]
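Roughly, the two global operators shown above amount to the following (a sketch; `E` is the linear radiance map, i.e. the exponential of the fused log-radiance map):

```python
import numpy as np

def simple_scale(E):
    # Linearly stretch the radiances to [0, 1]
    return (E - E.min()) / (E.max() - E.min())

def log_scale(E, eps=1e-8):
    # Compress with the natural log, then stretch to [0, 1]
    L = np.log(E + eps)
    return (L - L.min()) / (L.max() - L.min())
```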
Finally, below are results from Durand's algorithm, shown side by side with simple scaling. The difference is most obvious in the nearly underexposed and overexposed areas, for example at the bottom of the bonsai tree just above the pot.
Durand's algorithm separates color from intensity and applies a bilateral filter to the log intensity, giving a smooth base layer; the residual is a detail layer. The base layer is compressed to fit the display range while the detail layer is left alone, so when the color is merged back in, the final image shows nearly the original local contrast in the details even though the overall radiance range has been compressed to fit the display.
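Here is a minimal sketch of that pipeline, assuming OpenCV's bilateralFilter as the edge-preserving filter and a mean-of-channels luminance; the parameter values and names are illustrative rather than the project's actual code.

```python
import numpy as np
import cv2

def durand_tone_map(E, dR=5.0, sigma_r=0.4, gamma=0.5):
    """Durand-style tone mapping of a linear HDR image E with shape (H, W, 3).

    dR is the target dynamic range of the output in log10 units.
    """
    eps = 1e-8
    intensity = E.mean(axis=2) + eps            # luminance (simple channel average)
    chroma = E / intensity[..., None]           # color ratios, restored at the end

    log_I = np.log10(intensity).astype(np.float32)
    sigma_s = 0.02 * max(E.shape[:2])           # spatial sigma ~2% of image size
    base = cv2.bilateralFilter(log_I, -1, sigma_r, sigma_s)   # large-scale layer
    detail = log_I - base                       # what the filter smoothed away

    # Compress only the base layer to the target range; keep the detail as-is
    scale = dR / (base.max() - base.min())
    log_out = base * scale + detail
    log_out -= log_out.max()                    # brightest point maps to 1.0

    out = (10.0 ** log_out)[..., None] * chroma
    return np.clip(out ** gamma, 0.0, 1.0)      # gamma for display
```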
[Figure: Durand's algorithm vs. simple scaling, side by side for each scene]
I also tried different target dynamic ranges in Durand's algorithm (dR = 1 to 8). It appears that lower values cause the algorithm to have less of an effect, for example in the top left corner.
[Figure: Durand's results for dR = 1 through 8]
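The sweep above corresponds roughly to calling the sketch with different target ranges; the output file names here are made up.

```python
# Sweep the target dynamic range dR, as in the comparison above.
# durand_tone_map is the sketch shown earlier; E is the linear radiance map.
for dR in range(1, 9):
    mapped = durand_tone_map(E, dR=float(dR))
    cv2.imwrite(f"durand_dR{dR}.png",
                (mapped[..., ::-1] * 255).astype(np.uint8))  # RGB -> BGR for OpenCV
```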