In this project, I implemented the Image Quilting algorithm from "Image Quilting for Texture Synthesis and Transfer", a SIGGRAPH 2001 paper by Alexei A. Efros and William T. Freeman.
Patch search
To create the synthetic image, we need to search for a suitable patch to add at each position. Scanning and testing every possible patch from the source texture can be inefficient and slow, especially if the texture is large. As an alternative to exhaustive search, we can select a subset of all possible patches and test only those. In my code, I sample patch centers from a uniform distribution each time a new patch is required; I usually sampled 10000 patches, but a smaller patch count gives good results too. Additionally, the stochastic nature of this process brings more diversity to the final texture than always selecting the best possible patch.
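The sampling step can be sketched as follows; this is an illustrative NumPy version (the function name and signature are mine, not the project's actual code), drawing patch corners uniformly at random:

```python
import numpy as np

def sample_patches(texture, patch_size, n_samples=10000, rng=None):
    """Sample candidate patch locations uniformly at random.

    `texture` is an (H, W, C) array. For simplicity this sketch samples
    top-left corners rather than centers; the two are equivalent up to
    an offset. Names and signature are assumptions, not the author's code.
    """
    rng = np.random.default_rng(rng)
    h, w = texture.shape[:2]
    ys = rng.integers(0, h - patch_size + 1, size=n_samples)
    xs = rng.integers(0, w - patch_size + 1, size=n_samples)
    return [texture[y:y + patch_size, x:x + patch_size] for y, x in zip(ys, xs)]
```

Each candidate is then scored against the overlap with the already-synthesized neighbors.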
Avoiding too many repetitions
The sampling process above introduces some randomness into the textures, but I found that in many cases the final result still has considerable patch repetition, making the image look unrealistic (e.g. in the toast). To further randomize the patch selection, I added a flag that, when enabled, makes the algorithm select the final patch randomly from a small set of the best patches found. If the flag is disabled, the algorithm always selects the best patch, the one with the minimum SSD cost. The set of best patches is chosen by taking the top-ranked patches from all the sampled ones. In my experiments, this procedure generally improves the final results, but it is not perfect: we still get some repetitions in the "toast" texture, and it sometimes slightly degrades structured textures like "apple". By adjusting the tolerance used to select the subset of best patches, we get a reasonable balance between the two extremes: too much or too little randomness.
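A minimal sketch of this flag-controlled selection, assuming a multiplicative tolerance on the minimum SSD cost (the helper name, the `tolerance` parameter, and its default are my assumptions, mirroring the behavior described above):

```python
import numpy as np

def pick_patch(candidates, costs, tolerance=0.1, randomize=True, rng=None):
    """Pick a patch among the best-ranked candidates.

    With `randomize` off, return the minimum-SSD patch. Otherwise choose
    uniformly among candidates whose cost is within (1 + tolerance) of
    the minimum, which trades a little quality for less repetition.
    """
    costs = np.asarray(costs, dtype=float)
    if not randomize:
        return candidates[int(np.argmin(costs))]
    rng = np.random.default_rng(rng)
    best = costs.min()
    allowed = np.flatnonzero(costs <= best * (1 + tolerance))
    return candidates[int(rng.choice(allowed))]
```

A larger `tolerance` admits more candidates and therefore more randomness; `tolerance=0` degenerates to always picking the best patch.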
Minimum cut
To find a minimum cut between patches, I reused the code from the previous project, which searches for the minimum-cost path with dynamic programming. First, a vertical cut is found and the new patch is merged with the patch to its left; then a horizontal cut between this new patch and the patch above is found.
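The vertical cut can be sketched with the standard seam-style dynamic program over the per-pixel overlap error (a generic illustration of the technique, not the author's reused code; the horizontal cut is the same computation on the transposed error):

```python
import numpy as np

def min_vertical_cut(error):
    """Minimum-cost vertical path through an overlap error surface.

    `error` holds the per-pixel squared difference over the overlap
    region (rows x cols). Each row's cut column may shift by at most
    one from the row above. Returns the cut column for every row.
    """
    h, w = error.shape
    cost = error.astype(float).copy()
    # Forward pass: accumulate the cheapest way to reach each pixel.
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()
    # Backtrack from the cheapest column in the last row.
    path = np.empty(h, dtype=int)
    path[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):
        j = path[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        path[i] = lo + int(np.argmin(cost[i, lo:hi]))
    return path
```

Pixels left of the cut keep the existing patch, pixels right of it take the new one, which hides the seam along the lowest-error contour.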
[Results table: Texture | Synthetic Random | Synthetic SSD | Synthetic SSD & Min-Cut]
Additional images
I tested the algorithm on the additional images shown below. The first two textures are unstructured and the result is close to perfect; we cannot see any artifacts in the synthetic image from SSD & Min-Cut. The next image, the bark, is semi-structured, and even though the result is good, it is not as good as the previous ones. Finally, the last image corresponds to a wooden fence: the algorithm does a good job of preserving the vertical lines, but small color changes reveal the seams between some of the patches.
[Results table: Texture | Synthetic Random | Synthetic SSD | Synthetic SSD & Min-Cut]
Texture transfer
Here we want to modify the texture synthesis procedure so that it creates textures that closely resemble a source image. This is done without copying patches from the source image; only patches from the texture image are chosen. Previously, patches were selected to minimize the difference in the overlap with neighboring patches. Now, we want to minimize both the overlap cost and the difference from the patch at the same location in the source image. Additionally, we want to minimize the difference between the newly selected patch and the existing patch at the same location in the image being synthesized. This way, by iterating the previous algorithm a few times, we can refine the solution to get a better result. Modifying the previous patch search functions to consider this new cost function is almost trivial. The table below shows both the source image and the final image with the new texture.
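The combined cost can be written as a weighted sum, in the spirit of the alpha blending used in the Efros-Freeman paper; the weight `alpha`, the argument names, and the exact form of the refinement term here are my assumptions, not the project's code:

```python
import numpy as np

def transfer_cost(overlap_ssd, patch, target_region, prev_patch=None, alpha=0.8):
    """Weighted cost for texture transfer (illustrative sketch).

    `overlap_ssd` is the usual seam SSD already computed against the
    synthesized neighbors; `target_region` is the source image at the
    same location. The optional `prev_patch` term ties the patch to the
    previous iteration's result, enabling iterative refinement.
    """
    cost = alpha * overlap_ssd + (1 - alpha) * np.sum((patch - target_region) ** 2)
    if prev_patch is not None:
        cost += (1 - alpha) * np.sum((patch - prev_patch) ** 2)
    return cost
```

Sweeping `alpha` trades texture fidelity (high `alpha`) against fidelity to the source image (low `alpha`).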
Simple image inpainting
I wanted to explore whether the texture transfer algorithm could be used for some simple image inpainting. For this purpose, I modified the previous code to read an image mask and to fill the selected region using patches from the rest of the image. The result, shown below, was not very good. The first issue I found was that the region to fill has to be initialized with some prior values to guide the algorithm toward the expected result; in my case, I initialized the empty region using colors from the border, so that the region is painted with patches matching the surroundings. The main issue was that the new patches have to be merged in all directions (top, bottom, left, and right), depending on their location in the image, unlike before, when they were only merged on the left and top.
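The border-color initialization could be done as sketched below: a crude stand-in, of my own design, that repeatedly fills each unknown pixel with the mean of its already-known 4-neighbors, so border colors diffuse inward:

```python
import numpy as np

def init_from_border(image, mask):
    """Initialize masked pixels from the surrounding known pixels.

    `mask` is True where the image must be inpainted. Sweeps over the
    unknown pixels, filling any that have at least one known 4-neighbor,
    until the hole is closed. Names and method are illustrative.
    """
    img = image.astype(float).copy()
    unknown = mask.astype(bool).copy()
    while unknown.any():
        filled_any = False
        for y, x in zip(*np.nonzero(unknown)):
            neighbors = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            vals = [img[i, j] for i, j in neighbors
                    if 0 <= i < img.shape[0] and 0 <= j < img.shape[1]
                    and not unknown[i, j]]
            if vals:
                img[y, x] = np.mean(vals, axis=0)
                unknown[y, x] = False
                filled_any = True
        if not filled_any:  # nothing left to fill (e.g. fully masked image)
            break
    return img
```

The transfer algorithm then treats this smoothed fill as the target image for the masked region.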