In this project, the goal is to recover a real-world radiance map from a set of images captured with different exposure settings, and to use the recovered map to compose images with more detail than the originals. The original images lack detail in some regions because some pixels are under-exposed and others are over-exposed, a consequence of the limited dynamic range of the capture device. The recovered radiance map is related to the true scene radiance map by an unknown scale factor.
Radiance map visualization
The first step is to compute the inverse mapping g(z) and to reconstruct the world radiance map. I did this by solving the same linear system as in Debevec and Malik 1997, but in my Matlab code I removed the loops to speed up the computation. The table below shows a visualization of the function and the radiance map recovered for each of the test images.
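Once g is known, the log radiance at each pixel is a weighted average, over the exposures, of g(Z) minus the log exposure time. My actual code is Matlab; the sketch below is a hypothetical pure-Python version of this step for a single pixel location, using the hat-shaped weighting function from the paper.

```python
import math

def hat_weight(z, z_min=0, z_max=255):
    # Debevec-Malik "hat" weighting: trusts well-exposed, mid-range pixels most
    mid = 0.5 * (z_min + z_max)
    return z - z_min if z <= mid else z_max - z

def log_radiance(pixels, log_dts, g):
    """Recover ln(E) for one pixel location.
    pixels:  pixel values Z_j (0..255), one per exposure
    log_dts: ln(exposure time) for each exposure
    g:       inverse response curve as a 256-entry lookup table
    """
    num = den = 0.0
    for z, log_dt in zip(pixels, log_dts):
        w = hat_weight(z)
        num += w * (g[z] - log_dt)   # this exposure's estimate of ln(E)
        den += w
    return num / den if den > 0 else g[pixels[0]] - log_dts[0]
```

For a perfectly linear sensor, every well-exposed sample yields the same estimate of ln(E), and the weighted average recovers it exactly.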
Inverse mapping function g(z) | Radiance map
Global operator
Once the radiance map was recovered, we would like to map the world radiance values to a small range suitable for visualization. First I tried the global operator from Reinhard et al. 2002, but later I decided to implement the modified version presented in Drago et al. 2003, "Adaptive Logarithmic Mapping For Displaying High Contrast Scenes", because it includes a parameter b that can be adjusted according to the desired brightness of the final image. Additionally, the paper proposes a modified version of the gamma function, which I have applied to my images in order to replicate their results.
The equation of this operator is the
following:
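In the notation of Drago et al. 2003 (L_w is the world luminance, L_wmax its maximum, L_dmax the maximum display luminance, and b the bias parameter):

```latex
L_d \;=\; \frac{L_{d\max}\cdot 0.01}{\log_{10}(L_{w\max}+1)}
\cdot
\frac{\log(L_w+1)}{\log\!\Bigl(2+8\bigl(\tfrac{L_w}{L_{w\max}}\bigr)^{\log b/\log 0.5}\Bigr)}
```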
The next figure shows the effect of different values of the parameter b on the mapping. Note that clipping will occur for values of b < 0.85. The table that follows shows how the value of b affects the final images.
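My implementation is in Matlab; purely as an illustration, a minimal Python sketch of the operator (function name and defaults are my own) could look like this:

```python
import math

def drago_tonemap(lw, lw_max, b=0.85, ld_max=100.0):
    """Adaptive logarithmic mapping (Drago et al. 2003) for a single
    world-luminance value lw. With ld_max = 100 cd/m^2 the result is
    normalized so that lw == lw_max maps to 1.0.
    b is the bias parameter controlling overall brightness."""
    bias_exponent = math.log(b) / math.log(0.5)
    scale = (ld_max * 0.01) / math.log10(lw_max + 1.0)
    denom = math.log(2.0 + 8.0 * (lw / lw_max) ** bias_exponent)
    return scale * math.log(lw + 1.0) / denom
```

Lowering b brightens the mid-tones; as noted above, values of b below about 0.85 can push some outputs above 1, which then must be clipped.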
Local tone mapping
As the local tone mapping scheme, I implemented the simplified version of the method proposed in Durand 2002, as detailed in the project handout. The key idea is to split the image into three layers, base, detail, and color, and to process each one differently: the base intensity is compressed to match the display range, the details are added back to keep scene fidelity, and the colors are applied unmodified.
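A hypothetical Python sketch of the base-compression step follows (the split itself requires a bilateral filter, so here the base layer is passed in already filtered; names and defaults are my own, not from the handout):

```python
import math

def durand_compress(log_intensity, base, target_range=math.log(50.0)):
    """Compress the base layer's dynamic range and re-add the detail
    layer (simplified Durand 2002 step). Inputs are same-shaped nested
    lists: the log-intensity image and its bilateral-filtered base.
    target_range is the desired log dynamic range of the output base."""
    flat = [v for row in base for v in row]
    b_min, b_max = min(flat), max(flat)
    spread = b_max - b_min
    s = target_range / spread if spread > 0 else 1.0  # compression factor
    out = []
    for li_row, b_row in zip(log_intensity, base):
        # compressed base (anchored at the maximum) plus detail = li - b
        out.append([(b - b_max) * s + (li - b) for li, b in zip(li_row, b_row)])
    return out  # still in the log domain: exponentiate, then reapply color
```

Anchoring the compressed base at its maximum makes the brightest base value map to 0 in the log domain, i.e. to 1 after exponentiation.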
Fast bilateral filter
As extra credit, I implemented the Fast Bilateral Filter as described in Sylvain Paris's presentation. A straightforward implementation of a bilateral filter is very slow because, unlike a Gaussian blur, it cannot be represented as a separable kernel and applied as separate convolutions over rows and columns. To solve this issue, the Fast Bilateral Filter extends the domain with one extra dimension containing the range values (e.g., in the case of images, the domain is extended to a 3D volume with x, y, and intensity as axes). In this extended domain, the filter can be expressed as a separable kernel and applied along each axis independently, like other linear filters. Once the volume has been processed, the final step is to extract the filtered intensity values from it and build the filtered image.

To reduce the size of the extended representation in memory and to speed up the convolution, the original domain is downsampled by some factor. The final values are recovered by trilinear interpolation.
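To make the idea concrete, here is a small, unoptimized pure-Python sketch of such a grid filter (my actual implementation is Matlab; all names and parameter defaults below are my own): pixels are splatted into a downsampled (x, y, intensity) volume together with a homogeneous weight channel, the volume is blurred separably along each axis, and the result is sliced back out by trilinear interpolation.

```python
import math

def _gauss1d(sigma):
    # small normalized 1-D Gaussian kernel
    r = max(1, int(round(2 * sigma)))
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-r, r + 1)]
    s = sum(k)
    return [v / s for v in k], r

def _blur3d(vol, sigma):
    # separable Gaussian blur: one 1-D pass along each of the 3 axes
    k, r = _gauss1d(sigma)
    nx, ny, nz = len(vol), len(vol[0]), len(vol[0][0])
    for axis in range(3):
        out = [[[0.0] * nz for _ in range(ny)] for _ in range(nx)]
        for x in range(nx):
            for y in range(ny):
                for z in range(nz):
                    acc = 0.0
                    for t in range(-r, r + 1):
                        i, j, l = x, y, z
                        if axis == 0:
                            i = min(max(x + t, 0), nx - 1)
                        elif axis == 1:
                            j = min(max(y + t, 0), ny - 1)
                        else:
                            l = min(max(z + t, 0), nz - 1)
                        acc += k[t + r] * vol[i][j][l]
                    out[x][y][z] = acc
        vol = out
    return vol

def _trilerp(vol, x, y, z):
    # trilinear interpolation into a 3-D nested list
    x0, y0, z0 = int(x), int(y), int(z)
    x1 = min(x0 + 1, len(vol) - 1)
    y1 = min(y0 + 1, len(vol[0]) - 1)
    z1 = min(z0 + 1, len(vol[0][0]) - 1)
    fx, fy, fz = x - x0, y - y0, z - z0
    v = 0.0
    for xi, wx in ((x0, 1 - fx), (x1, fx)):
        for yi, wy in ((y0, 1 - fy), (y1, fy)):
            for zi, wz in ((z0, 1 - fz), (z1, fz)):
                v += wx * wy * wz * vol[xi][yi][zi]
    return v

def fast_bilateral(img, s_spatial=4, s_range=0.1, sigma=1.0):
    """Bilateral filtering via a downsampled (x, y, intensity) grid:
    splat pixels into the grid, blur it separably, then slice the
    result back out with trilinear interpolation."""
    h, w = len(img), len(img[0])
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    nx, ny = h // s_spatial + 2, w // s_spatial + 2
    nd = int((hi - lo) / s_range) + 2
    wi = [[[0.0] * nd for _ in range(ny)] for _ in range(nx)]  # intensity sums
    wt = [[[0.0] * nd for _ in range(ny)] for _ in range(nx)]  # pixel counts
    for i in range(h):
        for j in range(w):
            x, y = round(i / s_spatial), round(j / s_spatial)
            z = round((img[i][j] - lo) / s_range)
            wi[x][y][z] += img[i][j]
            wt[x][y][z] += 1.0
    wi, wt = _blur3d(wi, sigma), _blur3d(wt, sigma)
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            x, y, z = i / s_spatial, j / s_spatial, (img[i][j] - lo) / s_range
            out[i][j] = _trilerp(wi, x, y, z) / max(_trilerp(wt, x, y, z), 1e-12)
    return out
```

The homogeneous weight channel `wt` is what makes the downsampled blur a proper normalized average: dividing the sliced intensity sums by the sliced counts recovers the filtered value.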
It is known that for-loops are especially slow in Matlab, and I have done no special optimization to minimize loop usage in either of my two filter implementations. For instance, the standard filter implementation requires four nested loops to traverse all the image pixels and compute the two-dimensional convolution. The Fast Bilateral Filter implementation includes even more loops: two nested loops to traverse the image and build the volume, three nested loops to traverse the volume (which is traversed three times, once per axis convolution), and two more nested loops to extract the final values from the filtered volume and build the output image. However, as can be seen in the table below, the Fast Bilateral Filter implementation took less than a minute to filter the test image, whereas the standard implementation ran for six minutes. The improvement is about 6X despite the non-optimized implementation.
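For comparison, the standard four-nested-loop filter can be sketched as follows (again a hypothetical pure-Python version, not my Matlab code; names and defaults are my own):

```python
import math

def bilateral(img, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter: each output pixel is a normalized
    average of its neighbors, weighted by both spatial distance and
    intensity difference. img is a 2-D nested list of intensities."""
    h, w = len(img), len(img[0])
    r = int(2 * sigma_s)                       # spatial window radius
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = norm = 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        ws = math.exp(-(di * di + dj * dj) / (2 * sigma_s ** 2))
                        wr = math.exp(-((img[ni][nj] - img[i][j]) ** 2)
                                      / (2 * sigma_r ** 2))
                        acc += ws * wr * img[ni][nj]
                        norm += ws * wr
            out[i][j] = acc / norm
    return out
```

The range weight `wr` is what distinguishes this from a plain Gaussian blur: pixels across a strong edge contribute almost nothing, so edges survive the smoothing.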
In all my tests I used a down-sampling factor of 4 in the spatial coordinates and divided the range values into 10 bins.
|  | Standard bilateral filter | Fast bilateral filter |
| Execution time for "garage" test image | 359.62 seconds | 56.47 seconds |
Test image results
Linear mapping | Global operator | Local tone mapping
Discussion
From the result images above, it is clear that linear mapping is not enough to represent the scene radiance in the compressed image format, and that the other two methods deliver acceptable results. The only observation I would like to make about them is that neither is better than the other for all scenes. For instance, the "arch" image at the top has ghosts in the local tone mapping, whereas the other version looks great. Similarly, the "window" image colors look unreal in the right image. On the other hand, the outdoor images look washed out in the global operator column, but the local tone mapped ones look very good. Finally, I have not played with all the available parameters of both methods; I just chose values that worked fine in general and used them for all the images. Thus, these results are not the "best ones", but they are representative of the general case.