Look at the image from very close, then from far away.

Image Filtering and Hybrid Images
CSCI 1430: Introduction to Computer Vision

Logistics

Overview

We will write an image convolution function (image filtering) and use it to create hybrid images! The technique was invented by Oliva, Torralba, and Schyns, and published at SIGGRAPH 2006. High frequency image content tends to dominate perception, but at a distance only low frequency (smooth) content is perceived. By blending the high frequency content of one image with the low frequency content of another, we can create a hybrid image that is perceived differently at different distances.

Image Filtering

This is a fundamental image processing tool (see Chapter 3.2 of Szeliski and the lecture materials to learn about image filtering, specifically about linear filtering). Common computer vision software packages have efficient functions to perform image filtering, but we will write our own from scratch via convolution.

Requirements / Rubric

Forbidden functions: Any function that filters, convolves, or correlates for you, e.g., numpy.convolve(), scipy.signal.convolve2d(), scipy.ndimage.convolve(), scipy.ndimage.correlate(). Further, any routines in packages not included in the 1430 virtual environment are forbidden.

Potentially useful functions: Basic numpy operations, e.g., addition numpy.add() (or simply +), element-wise multiplication numpy.multiply() (or simply *), summation numpy.sum(), flipping numpy.flip(), range clipping numpy.clip(), padding numpy.pad(), rotating numpy.rot90(), etc.
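Using only the basic numpy operations above, a from-scratch 2D convolution might look like the following. This is an illustrative sketch, not the starter code; the function name my_conv2d and the zero-padding choice are assumptions, and it handles only single-channel images.

```python
import numpy as np

def my_conv2d(image, kernel):
    """Convolve a 2D grayscale image with an odd-sized kernel ('same' output size).

    Illustrative sketch only; uses zero padding at the borders.
    """
    kh, kw = kernel.shape
    assert kh % 2 == 1 and kw % 2 == 1, "kernel dimensions should be odd"
    # Convolution flips the kernel; skipping this step would give correlation.
    flipped = np.flip(kernel)
    # Zero-pad so the output has the same size as the input.
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode='constant')
    out = np.zeros(image.shape, dtype=np.float64)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            # Element-wise multiply the kernel against each local window, then sum.
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out
```

The explicit double loop is slow but easy to verify; once it is correct, the inner window products can be vectorized.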

Hybrid Images

A hybrid image is the sum of a low-pass filtered version of a first image and a high-pass filtered version of a second image. For each image pair, we must tune a free parameter called the "cut-off frequency", which controls how much high frequency to remove from the first image and how much low frequency to leave in the second image. The paper suggests using two cut-off frequencies, one tuned for each image, and you are free to try this too. In the starter code, the cut-off frequency is controlled by changing the standard deviation of the Gaussian kernel used to construct a hybrid image.
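The construction can be sketched as follows, assuming a from-scratch convolution like the one this assignment asks for. All names here (gaussian_kernel, conv2d, make_hybrid) are illustrative, not the starter code's, and the 3-sigma kernel size is one common choice, not a requirement.

```python
import numpy as np

def gaussian_kernel(sigma):
    """2D Gaussian kernel; size chosen to cover roughly 3 sigma on each side."""
    k = int(2 * np.ceil(3 * sigma) + 1)
    ax = np.arange(k) - k // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return g / g.sum()  # normalize so blurring preserves brightness

def conv2d(image, kernel):
    """Minimal 2D convolution (grayscale, reflect padding). Sketch only."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode='reflect')
    flipped = np.flip(kernel)
    out = np.zeros(image.shape, dtype=np.float64)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out

def make_hybrid(image1, image2, sigma):
    """Low frequencies of image1 plus high frequencies of image2."""
    low = conv2d(image1, gaussian_kernel(sigma))
    # High-pass = original minus its low-pass version.
    high = image2 - conv2d(image2, gaussian_kernel(sigma))
    return np.clip(low + high, 0.0, 1.0)
```

A larger sigma removes more high frequency from the first image (a lower cut-off frequency); using separate sigmas for the two images corresponds to the paper's two-cut-off variant.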

We provide 5 pairs of aligned images which can be merged reasonably well into hybrid images. The alignment is important because it affects the perceptual grouping (read the paper for details). We encourage you to create additional examples, e.g., change of expression, morph between different objects, change over time, etc. For inspiration, please see the hybrid images project page.

Hybrid Image Example

For the example shown at the top of the page, the two original images look like this:

The low-pass (blurred) and high-pass versions of these images look like this:

The high frequency image is zero-mean with negative values, so it is visualized by adding 0.5. In the resulting visualization, bright values are positive and dark values are negative.

Adding the high and low frequencies together gives you the image at the top of this page. If you're having trouble seeing the multiple interpretations of the image, a useful way to visualize the effect is by progressively downsampling the hybrid image:
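A simplified, illustrative sketch of such a progressive downsample is below; it only returns the list of scaled copies, whereas vis_hybrid_image() in helpers.py also arranges them side by side. The naive every-other-pixel subsampling is an assumption for brevity (it does no anti-aliasing).

```python
import numpy as np

def downsample2x(image):
    """Naive 2x downsample by keeping every other pixel (no anti-aliasing)."""
    return image[::2, ::2]

def progressive_scales(image, num_scales=5):
    """Return [image, image/2, image/4, ...] to mimic viewing from farther away."""
    scales = [image]
    for _ in range(num_scales - 1):
        scales.append(downsample2x(scales[-1]))
    return scales
```

Viewing the smallest copies approximates standing far from the full-size hybrid image: only the low frequency content survives.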

The starter code provides a function vis_hybrid_image() in helpers.py to save and display such visualizations.

Credits

Python port by Seungchan Kim and Yuanning Hu. Assignment originally developed by James Hays based on a similar project by Derek Hoiem.