Overview from the project description: "The goal of this assignment is to write an image filtering function and use it to create hybrid images using a simplified version of the SIGGRAPH 2006 paper by Oliva, Torralba, and Schyns. Hybrid images are static images that change in interpretation as a function of the viewing distance. The basic idea is that high frequency tends to dominate perception when it is available, but, at a distance, only the low frequency (smooth) part of the signal can be seen. By blending the high frequency portion of one image with the low-frequency portion of another, you get a hybrid image that leads to different interpretations at different distances."
First, the algorithm determines whether the image is color or grayscale. If it is color, it splits the image into separate red, green, and blue matrices, which are padded and filtered separately and then re-combined. Each channel is padded on the top and bottom with the filter height divided by two (rounded down) rows, and on the left and right with the filter width divided by two (rounded down) columns. The filter then runs over the entire image (see Optimizations below for more details). The algorithm keeps track of how many pixels into the image it has advanced, so that ind2sub can convert that count into the (row, col) coordinates at which the newly computed pixel value is written into the output image.
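A minimal sketch of this setup step, assuming an input `img` and a kernel `filter`; the variable names and use of `padarray` (Image Processing Toolbox) are illustrative, not the exact code from the project:

```matlab
% Pad amounts: half the filter height/width, rounded down.
pad_h = floor(size(filter, 1) / 2);
pad_w = floor(size(filter, 2) / 2);

if size(img, 3) == 3
    % Color image: split into R, G, B matrices to pad/filter separately.
    channels = {img(:,:,1), img(:,:,2), img(:,:,3)};
else
    channels = {img};
end

% Pad each channel symmetrically (see Optimization 3 below).
for c = 1:numel(channels)
    channels{c} = padarray(channels{c}, [pad_h pad_w], 'symmetric');
end
```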
I added some optimizations to make the program run faster and to keep the final image looking good. All of my hybrid images take less than two minutes to compute!
1. The algorithm uses im2col on both the image (each filter-sized block of pixels becomes a column) and the filter (the entire filter becomes a single column). This way, a single "for" loop iterates over all the columns representing the image, and only one dot product (the filter column dotted with the sub-image column) is computed per iteration (see the sketch after this list).
2. Before filtering, the output image is pre-allocated as a zero matrix the same size as the input, so no new memory is allocated each time a pixel is written.
3. The image is padded by mirroring the pixels at its edges (symmetric padding) rather than with zeros, which is more accurate and avoids a dark border around the final image.
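A sketch of the im2col-based inner loop on one already-padded channel, assuming odd filter dimensions; variable names are illustrative:

```matlab
[m, n] = size(filter);
H      = size(padded, 1) - (m - 1);   % output height = original image height
W      = size(padded, 2) - (n - 1);   % output width  = original image width

cols   = im2col(padded, [m n], 'sliding');   % each filter-sized block -> one column
fcol   = filter(:);                          % the entire filter as one column
output = zeros(H, W);                        % pre-allocate the result (Optimization 2)

for k = 1:size(cols, 2)
    % One dot product per output pixel: filter column dotted with sub-image column.
    [row, col]       = ind2sub([H W], k);    % im2col orders its columns column-wise
    output(row, col) = fcol' * cols(:, k);
end
```

As a side note, the same columns would also admit a single matrix-vector product (`reshape(fcol' * cols, H, W)`) with no loop at all; the loop above mirrors the per-pixel description in the text.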
Here are some samples of an image with different filters applied.
Top Row: Identity Filter, Small Blur with Box Filter, Large Blur
Bottom Row: Oriented Filter (Sobel Operator), High Pass Filter (Discrete Laplacian), High Pass Filter Alternative
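For reference, kernels along these lines can be built with `fspecial` from the Image Processing Toolbox; the sizes and sigmas below are illustrative examples, not necessarily the ones used to generate the images:

```matlab
identity   = zeros(3); identity(2,2) = 1;    % identity filter
box_blur   = ones(3) / 9;                    % small blur with a box filter
large_blur = fspecial('gaussian', 25, 10);   % large Gaussian blur
sobel      = fspecial('sobel');              % oriented filter (Sobel operator)
laplacian  = fspecial('laplacian', 0.2);     % high-pass filter (discrete Laplacian)
% High-pass alternative: subtract a low-pass version from the image itself,
% e.g. high = img - imfilter(img, large_blur, 'symmetric');
```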
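A sketch of how the hybrid images below could be assembled, using MATLAB's built-in `imfilter` in place of the hand-written filtering function for brevity; treating the listed cutoff frequency as the Gaussian's sigma, and the kernel-size rule, are assumptions:

```matlab
% Hybrid image = low frequencies of image1 + high frequencies of image2.
cutoff   = 7;                                        % the "cutoff frequency" listed below
gaussian = fspecial('gaussian', 4 * cutoff + 1, cutoff);

low_freq  = imfilter(image1, gaussian, 'symmetric');            % keep only the smooth part
high_freq = image2 - imfilter(image2, gaussian, 'symmetric');   % remove the smooth part
hybrid    = low_freq + high_freq;                               % blend the two
```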
Hybrid Image 1: Dog and Cat, Cutoff Frequency: 7, Time to Run "proj1.m": 123.3700 seconds
Hybrid Image 2: Bird and Plane, Cutoff Frequency: 4, Time to Run "proj1.m": 66.9300 seconds
Hybrid Image 3: Bike and Motorcycle, Cutoff Frequency: 7, Time to Run "proj1.m": 96.1000 seconds
Hybrid Image 4: Fish and Submarine, Cutoff Frequency: 5, Time to Run "proj1.m": 66.7400 seconds
Hybrid Image 5: Einstein and Marilyn, Cutoff Frequency: 4, Time to Run "proj1.m": 32.3700 seconds