Image Filtering and Hybrid Images
I. Objectives
a) Understand image filtering
b) Understand image representation in frequency domain
c) Learn basics of Python image processing using OpenCV
II. Background
The goal of this assignment is to write an image filtering function and use it to
create hybrid images [1], following a simplified version of the SIGGRAPH 2006 paper [2] by
Oliva, Torralba, and Schyns. Hybrid images are static images whose interpretation changes
as a function of viewing distance.
The basic idea is that high frequency tends to dominate perception when it is available, but,
at a distance, only the low frequency (smooth) part of the signal can be seen. By blending
the high frequency portion of one image with the low-frequency portion of another, you
get a hybrid image that leads to different interpretations at different distances.
III. Details
a) Image Filtering
Image filtering is a fundamental image processing tool. Image filters are meant
to remove unwanted components of an image, such as noise, textures, or certain
frequency bands (high-pass/low-pass/band-pass). Here we focus on
the simplest convolution filters.
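As a concrete, purely illustrative example of what such kernels look like (the values below are assumptions for illustration, not part of the assignment): a normalized box kernel averages a pixel's neighborhood and so acts as a simple low-pass filter, while subtracting that blur from the identity kernel keeps only fine detail and so acts as a high-pass filter.

    import numpy as np

    # A 3x3 box kernel: averaging a pixel with its neighbors smooths the image (low-pass).
    box_kernel = np.ones((3, 3)) / 9.0

    # The identity kernel leaves the image unchanged when convolved with it.
    identity_kernel = np.zeros((3, 3))
    identity_kernel[1, 1] = 1.0

    # Identity minus blur keeps only what the blur removes, i.e. fine detail (high-pass).
    high_pass_kernel = identity_kernel - box_kernel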
OpenCV has many built-ins to perform image filtering, but you need to write your own
for this assignment. More specifically, you will implement my_imfilter().
The filtering process is to perform convolution between an input image (e.g., 6x8 pixels)
and a kernel (e.g., 3x3 pixels); the output is the convolution result, as illustrated in the
figure below.
[Figure: the input image convolved with the kernel, illustrated with zero, replicated, and symmetric padding of the borders.]
As specified in the skeleton code for my_imfilter(), your filtering algorithm must do the following.
(1) Support both grayscale and color images.
(4) Return a filtered image which is the same resolution (pixels) as the input image.
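Below is a minimal sketch of one possible way to structure my_imfilter(), assuming float images in the range 0~1 and a kernel with odd dimensions; it pads with zeros by hand, since numpy.pad(), numpy.convolve(), and cv2.filter2D() are not allowed (see Useful Hints). It illustrates the idea only and is not the required implementation.

    import numpy as np

    def my_imfilter(image, kernel):
        # Illustrative sketch: 2D convolution with manual zero padding.
        # Assumes `image` is a float array (HxW or HxWxC) and `kernel` is 2D with odd sides.
        if kernel.shape[0] % 2 == 0 or kernel.shape[1] % 2 == 0:
            raise ValueError("kernel must have odd dimensions")

        # Treat grayscale as single-channel so one code path handles both cases.
        is_gray = (image.ndim == 2)
        if is_gray:
            image = image[:, :, np.newaxis]

        kh, kw = kernel.shape
        ph, pw = kh // 2, kw // 2
        h, w, c = image.shape

        # Manual zero padding (numpy.pad is not allowed by the assignment).
        padded = np.zeros((h + 2 * ph, w + 2 * pw, c), dtype=image.dtype)
        padded[ph:ph + h, pw:pw + w, :] = image

        # Flip the kernel for true convolution (a no-op for symmetric kernels).
        flipped = kernel[::-1, ::-1]

        output = np.zeros_like(image)
        for ch in range(c):
            for i in range(h):
                for j in range(w):
                    window = padded[i:i + kh, j:j + kw, ch]
                    output[i, j, ch] = np.sum(window * flipped)

        # The output has the same resolution as the input.
        return output[:, :, 0] if is_gray else output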
b) Hybrid Images
A hybrid image is the sum of a low-pass filtered version of one image and a high-pass
filtered version of a second image. There is a free parameter, tuned for each image pair,
that controls how much high frequency to remove from the first image and how much low
frequency to leave in the second image; this is called the "cutoff frequency". The paper
suggests using two cutoff frequencies (one tuned for each image). In the skeleton code,
the cutoff frequency is controlled by changing the standard deviation of the Gaussian
filter used in constructing the hybrid images.
The high frequency image is actually zero-mean with negative values so it is visualized
by adding 0.5 (assuming images in range 0~1). In the resulting visualization, bright
values are positive and dark values are negative.
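As a rough sketch of this construction (assuming float images in the range 0~1; the function make_hybrid() and its parameters are illustrative assumptions, not part of the provided skeleton), the low frequencies come from Gaussian-blurring one image with your my_imfilter(), the high frequencies from subtracting a blurred copy of the second image from itself, and the hybrid is their sum:

    import cv2
    import numpy as np

    def make_hybrid(image1, image2, cutoff_std=7):
        # Build a 2D Gaussian kernel whose standard deviation plays the role of the
        # cutoff frequency. (cv2.getGaussianKernel only constructs the kernel; the
        # filtering itself is done by your my_imfilter().)
        ksize = 4 * cutoff_std + 1
        g = cv2.getGaussianKernel(ksize, cutoff_std)
        kernel = g @ g.T

        low_frequencies = my_imfilter(image1, kernel)            # smooth content of image1
        high_frequencies = image2 - my_imfilter(image2, kernel)  # fine detail of image2
        hybrid = np.clip(low_frequencies + high_frequencies, 0.0, 1.0)

        # The high-frequency image is zero-mean, so add 0.5 for visualization.
        high_vis = high_frequencies + 0.5
        return low_frequencies, high_vis, hybrid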
Adding the high and low frequency components together gives you the hybrid image.
If you have trouble seeing the multiple interpretations of the image, one method is to
progressively downsample the hybrid image and view the copies side by side.
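A rough sketch of such a side-by-side visualization follows (the function name and parameters are illustrative assumptions, not part of the skeleton; the hybrid image is assumed to be a float array in the range 0~1):

    import cv2
    import numpy as np

    def visualize_scales(hybrid, num_scales=5, scale_factor=0.5, pad=5):
        # Stack progressively downsampled copies side by side so the change in
        # interpretation with viewing distance is visible in a single figure.
        output = np.copy(hybrid)
        current = np.copy(hybrid)
        for _ in range(1, num_scales):
            current = cv2.resize(current, None, fx=scale_factor, fy=scale_factor,
                                 interpolation=cv2.INTER_LINEAR)
            # Pad the smaller copy to the original height (white background) and append.
            column = np.ones((hybrid.shape[0], current.shape[1] + pad) + hybrid.shape[2:],
                             dtype=hybrid.dtype)
            column[-current.shape[0]:, pad:, ...] = current
            output = np.concatenate((output, column), axis=1)
        return output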
c) TODOs
i.
ii.
iii.
iv.
d) Useful Hints
We provide: hybrid.py
You CANNOT use:
cv2.filter2D()
numpy.pad()
numpy.convolve()
IV. Submission
a) Upload package
i.
ii.
Readme file containing anything about the project that you want to tell the
TAs, including a brief introduction to the usage of your code
<your student ID>-Asgn1\README.txt
iii.
iv.
Your report must be in HTML. In the report, describe your algorithm and any
decisions you made to write your algorithms in a particular way. Show and
discuss your results. Discuss the efficiency of your algorithms and
highlight any extra credit you completed.
Place your HTML report in the subfolder html\; the home page should be
index.html
<your student ID>-Asgn1\html\index.html
v.
Compress the folder into <your student ID>-Asgn1.zip, and upload it to the
eLearning system.
Go to http://elearning.cuhk.edu.hk/
ii.
iii.
iv.
a) Rubric
i.
ii.
iii.
iv.
v.
vi.
vii.
viii.
b) Extra Credit
i.
10%: use the Fast Fourier Transform to accelerate your convolution filtering and
show comparisons.
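One hedged sketch of the idea, assuming a single-channel float image and a kernel with odd dimensions (NumPy's FFT routines are used; handling color images and benchmarking against the direct implementation are left to you):

    import numpy as np

    def fft_filter(image, kernel):
        # Convolution theorem: convolution in the spatial domain equals
        # pointwise multiplication in the frequency domain.
        h, w = image.shape
        kh, kw = kernel.shape
        # Pad both to the full linear-convolution size to avoid circular wrap-around.
        fh, fw = h + kh - 1, w + kw - 1
        f_image = np.fft.fft2(image, s=(fh, fw))
        f_kernel = np.fft.fft2(kernel, s=(fh, fw))
        full = np.real(np.fft.ifft2(f_image * f_kernel))
        # Crop the central region so the output matches the input resolution.
        top, left = kh // 2, kw // 2
        return full[top:top + h, left:left + w]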
b)
c)
d)
VII. References
[1] Hybrid image gallery: http://cvcl.mit.edu/hybrid_gallery/gallery.html
[2] Hybrid images:
http://cvcl.mit.edu/publications/OlivaTorralb_Hybrid_Siggraph06.pdf