Image Analogies

Huilian Qiu (hqiu) - Dec 20 2012

Description

The goal of image analogies is to take two input images, A and A', and apply their relationship to a third image B in order to synthesize an image B'. In other words, we are given A : A' :: B : _ and the output completes the analogy A : A' :: B : B'.

Algorithm

The program takes in three images: A (source), A' (filtered source), and B (target). I reused the code from project 4 and modified it according to the Image Analogies paper by Hertzmann et al. The basic algorithm is: for each pixel (or patch) of B, find the best-matching patch in A and assign the corresponding pixel from A' to B'.

I also tried assigning the whole feature patch to B', to see how much the patch-based image analogies could be improved.
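The basic loop can be sketched as follows. This is a minimal, brute-force version for illustration (no approximate nearest-neighbor search or multi-scale pyramid as in the full paper), operating on single-channel luminance images and assigning a single pixel per match; the function names are hypothetical, not from the original code:

```python
import numpy as np

def best_match(patch, A_patches):
    # SSD (sum of squared differences) over all source patches; brute force
    d = ((A_patches - patch) ** 2).sum(axis=(1, 2))
    return int(np.argmin(d))

def image_analogy(A, Ap, B, size=5):
    """Pixel-assignment sketch: for each pixel of B, find the
    best-matching patch in A and copy the corresponding pixel
    of A' into B'. A, Ap, B are 2-D luminance arrays."""
    r = size // 2
    H, W = B.shape
    # pad so every pixel has a full size x size neighborhood
    Apad = np.pad(A, r, mode="edge")
    Bpad = np.pad(B, r, mode="edge")
    # collect every source patch with its center coordinate
    coords = [(y, x) for y in range(A.shape[0]) for x in range(A.shape[1])]
    A_patches = np.stack([Apad[y:y + size, x:x + size] for y, x in coords])
    Bp = np.zeros_like(B)
    for y in range(H):
        for x in range(W):
            q = Bpad[y:y + size, x:x + size]
            yy, xx = coords[best_match(q, A_patches)]
            Bp[y, x] = Ap[yy, xx]
    return Bp
```

Assigning a whole patch instead of one pixel only changes the last step: copy the patch of A' centered at (yy, xx) rather than the single value.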

Features

Features are the same patches that we used in project 4. However, since the paper notes that human eyes are more sensitive to changes in luminance than to changes in color, instead of using RGB or grayscale I converted the images to YIQ and used the Y channel (the luminance channel) as the feature. The paper stresses that feature selection is an open problem, so I also experimented with some other kinds of features. I used the luminance feature for filters and texture-by-numbers. For colorization, I also included edges as a feature, so that the algorithm can compare the pattern of the image in addition to its luminance. I found this useful when the images contain fine details, like leaves.
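As a concrete sketch, the RGB-to-YIQ conversion and the luminance feature can be written with the standard NTSC matrix (a minimal version; the exact coefficients vary slightly between references, and the helper names are illustrative):

```python
import numpy as np

# RGB -> YIQ conversion matrix (NTSC); the Y row is luminance
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def rgb_to_yiq(img):
    """img: H x W x 3 RGB in [0, 1]; returns H x W x 3 YIQ."""
    return img @ RGB2YIQ.T

def luminance_feature(img):
    """Y channel only, used for patch matching."""
    return rgb_to_yiq(img)[..., 0]
```

Only the output of `luminance_feature` enters the patch comparison; the I and Q (chrominance) channels are carried along separately.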

Texture by numbers

The feature size used in the paper was 5×5. Since I am experimenting with patch-based image analogies, I played around with the feature size. The feature should be big enough that there is an overlap region in which the program can compute how well two patches match and perform seam carving, but small enough to eliminate patch artifacts and make the whole image as smooth as possible. I found that larger features preserve more detail of the target image, but consistency between patches cannot be guaranteed; smaller features yield better results, yet the details cannot be preserved.

[Figure: A, A', B, with B' results for assigned pixel, assigned small patch, and assigned large patch]

Filters

The winning project of project 4 discussed the limitations of patch-based image analogies. One limitation is that the training images should be similar enough to the target image, so I tried my algorithm on the following data. The source image (flowers) is quite different from the target image (a scene from the shore); the texture and the colors are completely different. However, since I used the luminance channel as the feature, and when mapping the pixels back I took the luminance from A' and the color from B, the program still gave reasonable results.
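The recombination step described here (luminance from the matched A' pixels, chrominance from B) can be sketched as a YIQ channel swap; `transfer_luminance` is a hypothetical helper name, not from the original code:

```python
import numpy as np

# NTSC RGB -> YIQ matrix and its inverse for the return trip
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])
YIQ2RGB = np.linalg.inv(RGB2YIQ)

def transfer_luminance(src_rgb, target_rgb):
    """Take Y from src_rgb (the matched A' pixels) and I, Q
    (color) from target_rgb (B), then convert back to RGB."""
    src_yiq = src_rgb @ RGB2YIQ.T
    tgt_yiq = target_rgb @ RGB2YIQ.T
    out = tgt_yiq.copy()
    out[..., 0] = src_yiq[..., 0]      # overwrite luminance only
    return np.clip(out @ YIQ2RGB.T, 0.0, 1.0)
```

This is why the flowers-to-shore example still looks plausible: the matching is done on luminance alone, and B contributes all of the color.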

[Figure: A, A', B, with B' results for simple patch match, patch match preserving colors, and assigned pixel]

Colorization

For the colorization part, the program takes in two grayscale images, A and B, and a color image, A', then transfers color from A' to B' guided by B. The algorithm takes the luminance from B and the color from A'. The patch limitation is more obvious here: the mis-colored regions may be due to the patch size or to the absence of similar texture in A.

[Figure: A, A' (top); B, B' (bottom)]

Some other results:

[Figures: two additional A, A' / B, B' result sets]

In general, the results of this algorithm still show strong patch effects, whether assigning whole patches or single pixels, with or without seam carving. However, I managed to overcome some limitations of the patch-based algorithm and implemented several applications.