Copying and pasting an object from one image to another produces an immediately obvious seam between the images, even if the backgrounds are similar. A simple solution exploits a feature of human vision: we are just as sensitive (if not more so) to changes in color as we are to absolute color values. This project adjusts a target image's pixels so that their gradients (essentially, the change in pixel colors) match those of the source image within the area of the object being copied into the new image.
Given source, target, and mask images (all previously adjusted to the same size), we generate a new image such that outside the mask the pixel values match the target, and within the mask the pixel values are chosen so that their gradients match those of the source image. This can be formulated as a set of matrix equations: we'll ultimately be solving A * X = B three times, once for each color channel. A is a large, sparse matrix with a row and column for each pixel, and there are three different B column vectors, one for each color channel. These are derived as follows.
While iterating through all pixels of the image (a code sketch follows this list):
- If the current pixel lies outside the mask, place a 1 in A at the pixel's flattened index (its x and y position mapped to a single integer), used as both the row and column index. Also, for each channel, store in the corresponding B vector the pixel's value in the target image, again at that index. This ensures that the resulting X vector for each channel (i.e. each channel of the resulting image) has the same pixel values as the target image outside the mask.
- If the current pixel lies inside the mask, fill the pixel's row of A with the discrete Laplacian stencil (4 on the diagonal, -1 for each of its four neighbors), and store the Laplacian of the source image at that pixel in each B vector, so that:

  4*x(i,j) - x(i-1,j) - x(i+1,j) - x(i,j-1) - x(i,j+1) = 4*s(i,j) - s(i-1,j) - s(i+1,j) - s(i,j-1) - s(i,j+1)
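The write-up doesn't show code, so here is a minimal sketch of this assembly step in Python with SciPy sparse matrices; the function name build_system and the array arguments (source and target as H x W x 3 NumPy arrays in [0, 1], mask as an H x W boolean array) are assumptions for illustration, and the sketch assumes the mask does not touch the image border.

```python
import numpy as np
import scipy.sparse as sp

def build_system(source, target, mask):
    """Assemble the sparse A matrix and the per-channel B vectors."""
    H, W = mask.shape
    n = H * W
    idx = lambda i, j: i * W + j  # flatten a (row, col) pixel position to a single index

    A = sp.lil_matrix((n, n))
    B = np.zeros((n, 3))

    for i in range(H):
        for j in range(W):
            p = idx(i, j)
            if not mask[i, j]:
                # Outside the mask: pin the pixel to its target value.
                A[p, p] = 1.0
                B[p] = target[i, j]
            else:
                # Inside the mask: match the Laplacian of the source.
                # (Assumes the mask does not touch the image border.)
                A[p, p] = 4.0
                lap = 4.0 * source[i, j]
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    A[p, idx(i + di, j + dj)] = -1.0
                    lap -= source[i + di, j + dj]
                B[p] = lap
    return A.tocsr(), B
```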
Once we have the A matrix and B vectors as formulated above, it is simply a matter of left-dividing each channel's B vector by A (i.e. solving A \ B) to find X for the red, green, and blue channels. Then we can combine the channels and, voila: a seamlessly blended image.
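Continuing the sketch above, the per-channel solve (the SciPy equivalent of left division) might look like the following; again, the names are illustrative rather than the write-up's actual code.

```python
from scipy.sparse.linalg import spsolve

def blend(source, target, mask):
    """Solve A * X = B once per color channel and recombine the result."""
    A, B = build_system(source, target, mask)
    H, W = mask.shape
    channels = [spsolve(A, B[:, c]).reshape(H, W) for c in range(3)]
    return np.clip(np.dstack(channels), 0.0, 1.0)  # stack channels and clamp to [0, 1]
```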
Results
Following are results from some of the sample images provided, as well as my own experiments. Notice in some images that, comparing the source and blended images, there is a significant change in the color of the copied object, though we don't easily notice it without the source image as a reference. This is especially noticeable in the octopus's enhanced orangeness below.
As you can see with the rainbow above, a drawback of this approach is the complete loss of the target image's structural features within the mask. To counteract this, the gradients of the source and target images can be mixed within the mask, so that features of both are present in the resulting image. The optimal way to do this is to use the larger of the two gradients at each pixel, but in my implementation (with the Laplacian formulation used above, which is less well-constrained than its single-dimensional counterpart) this seems to push the resulting pixel values well outside their expected range (0-1). Thus, as a compromise, I take a simple average of the source and target gradients, with pleasant results for source features that need to remain partially transparent:
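As a sketch of that compromise, the source-only Laplacian used inside the mask in the assembly sketch above could be replaced by an average of the source and target Laplacians; the helper name mixed_laplacian is hypothetical, not the write-up's actual code.

```python
def mixed_laplacian(source, target, i, j):
    """Average of the source and target Laplacians at pixel (i, j)."""
    lap_src = 4.0 * source[i, j]
    lap_tgt = 4.0 * target[i, j]
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        lap_src -= source[i + di, j + dj]
        lap_tgt -= target[i + di, j + dj]
    return 0.5 * (lap_src + lap_tgt)  # simple average of the two gradients
```

Inside the mask, B[p] would then be set to mixed_laplacian(source, target, i, j) rather than the source-only value.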