CIS769
CIS769
CIS769
Ahmet M. Eskicioglu
Department of CIS
Brooklyn College
1
OVERVIEW
2
TEXTBOOKS
Required Textbook
R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall,
2002.
Recommended Textbooks
W. K. Pratt, Digital Image Processing, John Wiley and Sons, 2001.
T. Bose, Digital Signal and Image Processing, John Wiley and Sons,
2004.
I. J. Cox, M. L. Miller and J. A. Bloom, Digital Watermarking, Morgan
Kaufmann Publishers, 2001.
F. Halsall, Multimedia Communications, Addison-Wesley, 2001.
K. R. Rao, Zoran S. Bojkovic, Dragorad A. Milovanovic, Multimedia
Communication Systems, Prentice Hall, 2002.
R. Steinmetz, K. Nahrstedt, Multimedia Fundamentals, Prentice Hall,
2002.
M. Arnold, M. Schmucker, S. D. Wolthusen, Techniques and Applications
of Digital Watermarking and Content Protection, Artech House, 2003.
Mark S Drew, Ze-Nian Li, Fundamentals of Multimedia, Prentice Hall,
2004.
3
GRAPHIC FILES
4
EXAMPLE: BIT-MAPPED VS. OBJECT-ORIENTED
5
WHAT IS A DIGITAL IMAGE?
6
WHAT IS DIGITAL IMAGE PROCESSING?
7
IMAGE ENHANCEMENT
8
A SPATIAL DOMAIN EXAMPLE
9
A FREQUENCY DOMAIN EXAMPLE
Filtering in the frequency domain:
1. Multiply the input image by (-1)^(x+y)
2. Compute F(u,v)
3. Multiply F(u,v) by the filter H(u,v)
4. Compute the inverse DFT
5. Take the real part
6. Multiply the result by (-1)^(x+y)
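A minimal sketch of these six steps (assumed Python/numpy, not part of the original slides), using a Gaussian lowpass H as a stand-in filter; the image array f and the cutoff D0 are placeholders:

```python
import numpy as np

def freq_domain_filter(f, D0=30.0):
    """Filter image f in the frequency domain with a Gaussian lowpass H."""
    M, N = f.shape
    x, y = np.meshgrid(np.arange(M), np.arange(N), indexing='ij')
    # 1. Multiply by (-1)^(x+y) to center the transform
    fc = f * ((-1.0) ** (x + y))
    # 2. Compute the DFT F(u,v)
    F = np.fft.fft2(fc)
    # Gaussian lowpass H(u,v) = exp(-D^2/(2*D0^2)), centered at (M/2, N/2)
    D2 = (x - M / 2.0) ** 2 + (y - N / 2.0) ** 2
    H = np.exp(-D2 / (2.0 * D0 ** 2))
    # 3. Multiply F(u,v) by H(u,v)
    G = F * H
    # 4. Inverse DFT, 5. take the real part
    g = np.real(np.fft.ifft2(G))
    # 6. Multiply the result by (-1)^(x+y) to undo the centering
    return g * ((-1.0) ** (x + y))
```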
10
IMAGE RESTORATION
11
A SPATIAL DOMAIN EXAMPLE
12
COLOR IMAGE PROCESSING
13
RGB MODEL
14
PSEUDO COLOR
15
IMAGE COMPRESSION
Lossless compression
Huffman coding
Symbol Probability Code
a2 0.4 1
a6 0.3 00
a1 0.1 011
a4 0.1 0100
a3 0.06 01010
a5 0.04 01011
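A short check of this table (assumed Python, illustrative only): it verifies the prefix property and compares the average code length with the source entropy:

```python
import math

codes = {'a2': ('1', 0.4), 'a6': ('00', 0.3), 'a1': ('011', 0.1),
         'a4': ('0100', 0.1), 'a3': ('01010', 0.06), 'a5': ('01011', 0.04)}

# Prefix property: no code word is a prefix of another
words = [c for c, _ in codes.values()]
assert not any(a != b and b.startswith(a) for a in words for b in words)

avg_len = sum(p * len(c) for c, p in codes.values())          # average code length
entropy = -sum(p * math.log2(p) for _, p in codes.values())   # source entropy
print(f"average length = {avg_len:.2f} bits, entropy = {entropy:.2f} bits")
```

For this table the average length works out to 2.2 bits/symbol against an entropy of about 2.14 bits/symbol.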
Lossy compression
JPEG
JPEG 2000
16
IMAGE COMPRESSION
17
MORPHOLOGICAL IMAGE PROCESSING
18
IMAGE SEGMENTATION
19
LINE DETECTION
Suppose we are interested in finding all the lines in the wire-bond mask
that are one pixel thick and are oriented at -45°. Which one of the above
masks do we use?
Strongest response: threshold using the max value in the image.
20
REGION GROWING
21
IMAGE WATERMARKING
22
EXAMPLE IN THE DCT DOMAIN
W: watermark to be embedded.
X: sequence of pixel values.
XD and YD: row-concatenated DCT coefficients of X and Y.
a = scaling factor: determines the intensity of the watermark.
Y_D(i) = X_D(i)(1 + aW(i))
W*(i) = (1/a)(Z_D(i)/X_D(i) - 1)  ==>  S(W, W*) = (W · W*) / sqrt(W* · W*)
23
SCALING FACTOR a = 0.1, 0.5, 1.0, 5.0
Original image
24
CHAPTER 1: INTRODUCTION
25
BARTLANE CABLE PICTURE
TRANSMISSION SYSTEM
26
MORE TRANSATLANTIC TRANSMISSIONS
IN 1920’S
27
OTHER HISTORICAL DEVELOPMENTS
28
APPLICATIONS OF IMAGE PROCESSING
30
ELECTROMAGNETIC SPECTRUM
31
GAMMA-RAY
A star in the constellation Cygnus exploded about 15,000 years ago. The superheated stationary gas cloud (Cygnus Loop) glows in a spectacular array of colors.
Gamma radiation from a valve in a nuclear reactor.
32
X-RAY
X-rays are among the oldest sources of EM radiation used for imaging.
Chest X-ray
X-ray image of an
electronic circuit
board
Aortic angiogram
Cygnus Loop
Head CAT slice imaged in the X-ray
band
33
ULTRAVIOLET
Fluorescence microscopy is an excellent method for studying materials that can be made to
fluoresce (either in their natural form or when treated with chemicals capable of fluorescing).
34
VISIBLE AND INFRARED – LIGHT MICROSCOPE
Major uses: light microscopy, astronomy, remote sensing, industry, law enforcement.
Cholesterol
Organic superconductor
35
VISIBLE AND INFRARED – REMOTE SENSING
Remote sensing usually includes several bands in the visual and infrared regions.
LANDSAT obtains and transmits images of the Earth from space for purposes of monitoring environmental conditions.
Multispectral imaging: the differences between visual and infrared image features are quite noticeable.
36
VISIBLE AND INFRARED – WEATHER
OBSERVATION AND PREDICTION
37
INFRARED – HUMAN SETTLEMENTS
(THE AMERICAS)
38
INFRARED – HUMAN SETTLEMENTS
(OTHER PARTS OF THE WORLD)
39
VISIBLE – AUTOMATED VISUAL INSPECTION OF
MANUFACTURED GOODS
Pill container.
A circuit board controller (the black square is a missing component).
A bottle that is not filled up to an acceptable level.
A clear plastic part with an unacceptable number of air pockets.
A batch of cereal inspected for color and the presence of anomalies such as burned flakes.
An intraocular implant (replacement lens for the human eye).
40
VISIBLE – OTHER APPLICATIONS
Paper currency.
A thumb print.
Automated license plate reading.
41
MICROWAVE – IMAGING RADAR
Imaging radar
42
RADIO – MRI
43
CRAB PULSAR IMAGES
In the summer of 1054 A.D., Chinese astronomers reported that a star in the
constellation of Taurus suddenly became as bright as the full Moon. Fading
slowly, it remained visible for over a year. It is now understood that a spectacular
supernova explosion - the detonation of a massive star whose remains are now
visible as the Crab Nebula - was responsible for the apparition. The core of the
star collapsed to form a rotating neutron star or pulsar, one of the most exotic
objects known to 20th century astronomy. Like a cosmic lighthouse, the rotating
Crab pulsar generates beams of radio, visible, x-ray and gamma-ray energy
which, as the name suggests, produce pulses as they sweep across our view.
44
OTHER IMAGING MODALITIES:
ACOUSTIC IMAGING
45
OTHER IMAGING MODALITIES:
ULTRASOUND IMAGING
46
OTHER IMAGING MODALITIES:
ELECTRON MICROSCOPY
Electron microscopes
47
OTHER IMAGING MODALITIES:
COMPUTER-GENERATED OBJECTS
48
FUNDAMENTAL STEPS IN
DIGITAL IMAGE PROCESSING
49
WHAT ARE THE FUNDAMENTAL STEPS?
50
COMPONENTS OF AN
IMAGE PROCESSING SYSTEM
51
WHAT ARE THE COMPONENTS?
Choroid: contains a
network of blood vessels.
53
CONE AND ROD CELLS
54
DISTRIBUTION OF RODS AND CONES
55
IMAGE FORMATION IN THE EYE
56
BRIGHTNESS ADAPTATION
The range of light intensity levels to which the HVS can adapt is
enormous – on the order of 10^10!
57
BRIGHTNESS DISCRIMINATION
A classical experiment:
A subject looks at a flat, uniformly illuminated area (large enough to occupy the
entire field of view).
The intensity I can be varied.
∆I is added in the form of a short-duration flash that appears as a circle in the
middle.
If ∆I is not bright enough, the subject says “no.”
As ∆I gets stronger, the subject may say “yes.”
When ∆I is strong enough, the subject will say “yes” all the time.
Weber ratio: ∆Ic/ I, ∆Ic: increment of illumination discriminable 50% of the time with
background illumination I.
58
WEBER RATIO AS FUNCTION OF INTENSITY
59
PERCEIVED BRIGHTNESS: NOT A SIMPLE
FUNCTION OF INTENSITY – MACH BANDS
60
PERCEIVED BRIGHTNESS: NOT A SIMPLE FUNCTION
OF INTENSITY – SIMULTANEOUS CONTRAST
61
OPTICAL ILLUSIONS: OTHER EXAMPLES OF
HUMAN PERCEPTION PHENOMENA
The outline of a square is seen clearly although there are no lines defining such a figure.
A few lines are sufficient to give the illusion of a complete circle.
62
LIGHT AND ELECTROMAGNETIC SPECTRUM
Electromagnetic spectrum
The range of colors we perceive in visible light represents a
very small portion of the spectrum.
Radio waves with wavelengths billions of times longer.
Gamma rays with wavelengths billions of times smaller.
Wavelength, frequency and energy
λ = c/ν, c: speed of light (2.998x10^8 m/s)
E = hν, h: Planck’s constant
Electromagnetic waves can be visualized as
propagating sinusoidal waves with wavelength λ.
a stream of massless particles, each traveling in a wavelike
pattern and moving at the speed of light.
Each massless particle contains a bundle of energy called a photon.
Higher frequency electromagnetic phenomena carry more energy
per photon.
The visible band: 0.43 µm (violet) – 0.79 µm (red)
Six broad color regions: violet, blue, green, yellow, orange, red.
63
ELECTROMAGNETIC SPECTRUM
64
COLOR PERCEPTION
3 principal
sensor
arrangements
Illumination energy is
transformed into
digital images.
67
SINGLE SENSOR
Arrangement used in
high-precision scanning
68
SENSOR STRIPS
Typical arrangement in
most flat bed scanners
Basis for
medical and
industrial CAT
69
SENSOR ARRAYS
70
A SIMPLE IMAGE FORMATION MODEL
71
GRAY-SCALE IMAGES
Lmin ≤ l ≤ Lmax
Lmin= iminrmin
Lmax= imaxrmax
72
SAMPLING AND QUANTIZATION
73
GENERATING A DIGITAL IMAGE
74
IMAGE ACQUISITION WITH A SENSING ARRAY
75
DIGITAL IMAGE REPRESENTATION
An M x N digital image is represented as an array with elements f(0,0), ..., f(0,N-1), ..., f(M-1,0), ..., f(M-1,N-1).
Number of gray levels: L = 2^k
Number of bits required to store the image: b = M x N x k
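For example, a 1024 x 1024 image with k = 8 bits/pixel requires b = 1024 x 1024 x 8 = 8,388,608 bits, i.e., 1,048,576 bytes (1 MB).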
76
STORAGE REQUIREMENTS
77
SPATIAL AND GRAY LEVEL RESOLUTION
Sampling is performed by deleting rows and columns from the original image.
78
RESAMPLING INTO 1024X1024 PIXELS
79
256/128/64/32 GRAY LEVELS
In these images, the # of samples is constant but the # of gray levels was reduced from 256 to 32.
False contouring.
80
16/8/4/2 GRAY LEVELS
In these images, the # of samples is constant but the # of gray levels was reduced from 16 to 2.
81
ISOPREFERENCE CURVES
82
ZOOMING DIGITAL IMAGES
84
IMAGE ZOOMING: NEAREST NEIGHBOR
INTERPOLATION VS. BILINEAR INTERPOLATION
85
NEIGHBORS OF A PIXEL
4 diagonal neighbors
of p, ND(p)
pixel p at (x,y)
86
ADJACENCY, CONNECTIVITY
87
REGIONS, BOUNDARIES
88
DISTANCE MEASURES
89
LINEAR AND NONLINEAR OPERATIONS
90
CHAPTER 3: IMAGE ENHANCEMENT IN THE
SPATIAL DOMAIN
Input image → processed image
91
DEFINING A NEIGHBORHOOD
92
GRAY LEVEL TRANSFORMATION FUNCTION
93
BASIC GRAY LEVEL TRANSFORMATIONS
94
IMAGE NEGATIVES
s = L - 1 - r
Produces an equivalent of a photographic negative.
Enhances white or gray detail embedded in dark regions.
95
LOG TRANSFORMATIONS
8-bit display
Range: [0, 1.5x10^6]
96
POWER LAW TRANSFORMATIONS
s = c·r^γ or s = c·(r + ε)^γ, with c, γ > 0
Maps a narrow range of dark input values into a wider range of output values.
s = r^(1/2.5)
A variety of devices for image capture, printing and display respond according to a power law.
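A brief sketch (assumed Python/numpy) of the power-law transformation applied to an 8-bit image; c and gamma are free parameters:

```python
import numpy as np

def power_law(image, gamma, c=1.0):
    """s = c * r**gamma, with r normalized to [0, 1]."""
    r = image.astype(np.float64) / 255.0
    s = c * r ** gamma
    return np.uint8(np.clip(s * 255.0, 0, 255))

# gamma < 1 expands dark values (e.g., 0.4); gamma > 1 compresses them (e.g., 4.0)
```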
98
CONTRAST MANIPULATION
The original image is predominantly dark! An expansion of gray levels is desired.
s = r^0.6, s = r^0.4, s = r^0.3
99
CONTRAST MANIPULATION
s = r^4.0, s = r^5.0
100
PIECEWISE-LINEAR TRANSFORMATIONS
101
CONTRAST STRETCHING
Typical transformation: (r1,s1) and (r2,s2) control the shape of the function.
r1 = s1 and r2 = s2 → linear transformation
r1 = r2, s1 = 0, s2 = L - 1 → thresholding function
102
GRAY-LEVEL SLICING
Two approaches:
103
BIT-PLANE SLICING
104
HISTOGRAM PROCESSING
105
FOUR IMAGE TYPES
106
HISTOGRAM EQUALIZATION
sk = Σ_{j=0}^{k} nj / n , k = 0, 1, 2, ..., L - 1
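A minimal numpy sketch of this mapping (an assumption, not from the slides), with the cumulative sums scaled to the 8-bit output range:

```python
import numpy as np

def histogram_equalize(image, L=256):
    """s_k = sum_{j<=k} n_j / n, scaled to L-1 and applied as a lookup table."""
    hist = np.bincount(image.ravel(), minlength=L)
    cdf = np.cumsum(hist) / image.size          # running sum of n_j / n
    lut = np.round((L - 1) * cdf).astype(np.uint8)
    return lut[image]
```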
107
AN EXAMPLE: LENA
108
HISTOGRAM MATCHING
Histogram
equalization
Is the result
any good?
109
PROCEDURE FOR HISTOGRAM MATCHING
G(z)
G-1(s)
111
MAPPING FROM rk TO zk
112
LOCAL ENHANCEMENT WITH
HISTOGRAM PROCESSING
Histogram equalization
Histogram matching Global methods for overall enhancement
113
HISTOGRAM STATISTICS
FOR IMAGE ENHANCEMENT
Input Output
114
ENHANCEMENT USING ARITHMETIC/LOGIC
OPERATIONS
Arithmetic operations
Subtraction: most useful in image enhancement
Addition
Multiplication: used as a masking operation
Division: multiplication of one image by the reciprocal of the other
Logical operations
AND, OR, NOT: used for masking
For these operations, gray-scale pixel values are processed as strings of binary numbers.
Frequently used in conjunction with morphological operations.
115
AND/OR MASKS
116
IMAGE SUBTRACTION
117
MASK MODE RADIOGRAPHY
The net effect of subtracting the mask from each sample: areas that are
different appear as enhanced detail in the output image.
118
IMAGE AVERAGING
119
GALAXY PAIR NGC 3314
Totally useless!
Averaging
120
DIFFERENCE IMAGES AND THEIR HISTOGRAMS
121
BASICS OF SPATIAL FILTERING
122
3x3 MASK
123
FILTERING AT THE BORDERS
124
SMOOTHING SPATIAL FILTERS
g(x,y) = [ Σ_{s=-a}^{a} Σ_{t=-b}^{b} w(s,t) f(x+s, y+t) ] / [ Σ_{s=-a}^{a} Σ_{t=-b}^{b} w(s,t) ]
Order-statistics filters
Nonlinear spatial filters
Best-known example: Median filter
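A small illustrative sketch (assumed Python) of a weighted-average filter and a median filter; border handling is ignored for brevity:

```python
import numpy as np

def smooth(f, w):
    """Weighted-average filtering: g = sum(w * neighborhood) / sum(w)."""
    a, b = w.shape[0] // 2, w.shape[1] // 2
    g = np.zeros_like(f, dtype=np.float64)
    for x in range(a, f.shape[0] - a):
        for y in range(b, f.shape[1] - b):
            region = f[x - a:x + a + 1, y - b:y + b + 1]
            g[x, y] = np.sum(w * region) / np.sum(w)
    return g

def median_filter(f, size=3):
    """Order-statistics (median) filtering over a size x size window."""
    a = size // 2
    g = f.astype(np.float64).copy()
    for x in range(a, f.shape[0] - a):
        for y in range(a, f.shape[1] - a):
            g[x, y] = np.median(f[x - a:x + a + 1, y - a:y + a + 1])
    return g
```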
125
TWO 3X3 SMOOTHING FILTERS
126
FIVE DIFFERENT FILTER SIZES
127
BLURRING AND THRESHOLDING
128
ORDER-STATISTICS FILTERS
129
EFFECT OF AVERAGING AND MEDIAN FILTERS
130
SHARPENING SPATIAL FILTERS
131
1ST AND 2ND DERIVATIVES
132
∇
133
IMPLEMENTATIONS OF THE LAPLACIAN
134
SHARPENED NORTH POLE OF THE MOON
0 -1 0
-1 5 -1
0 -1 0
135
TWO LAPLACIAN MASKS
136
UNSHARP MASKING & HIGH-BOOST FILTERING
Unsharp masking: fs(x,y) = f(x,y) - f̄(x,y), where f̄ is a blurred version of f
High-boost filtering: fhb(x,y) = A·f(x,y) - f̄(x,y)
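A compact sketch (assumed Python) of high-boost filtering, using a naive box blur as the blurred version f̄; A = 1 reduces to unsharp masking:

```python
import numpy as np

def box_blur(f, k=3):
    """Naive k x k box blur (border pixels left unchanged)."""
    a = k // 2
    g = f.astype(np.float64).copy()
    for x in range(a, f.shape[0] - a):
        for y in range(a, f.shape[1] - a):
            g[x, y] = f[x - a:x + a + 1, y - a:y + a + 1].mean()
    return g

def high_boost(f, A=1.2):
    """f_hb(x,y) = A*f(x,y) - f_blurred(x,y)."""
    return A * f.astype(np.float64) - box_blur(f)
```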
137
AN APPLICATION OF BOOST FILTERING
-1 -1 -1
-1 A+8 -1
-1 -1 -1
Laplacian with A = 0
138
1ST DERIVATIVES OF ENHANCEMENT – THE
GRADIENT
∇f ≈ |Gx| + |Gy|
139
APPLICATION OF SOBEL OPERATORS
140
COMBINING SPATIAL ENHANCEMENT METHODS
141
CHAPTER 4: IMAGE ENHANCEMENT IN THE
FREQUENCY DOMAIN
F(u) = ∫_{-∞}^{∞} f(x) e^{-j2πux} dx
f(x) = ∫_{-∞}^{∞} F(u) e^{j2πux} du
143
FOURIER TRANSFORM IN 2 VARIABLES
F(u,v) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} f(x,y) e^{-j2π(ux+vy)} dx dy
f(x,y) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} F(u,v) e^{j2π(ux+vy)} du dv
144
ONE-DIMENSIONAL DFT AND ITS INVERSE
F(u) = (1/M) Σ_{x=0}^{M-1} f(x) e^{-j2πux/M}, u = 0, 1, ..., M-1
f(x) = Σ_{u=0}^{M-1} F(u) e^{j2πux/M}, x = 0, 1, ..., M-1
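A direct transcription of this DFT pair (assumed Python; numpy.fft would be used in practice), keeping the 1/M factor on the forward transform as above:

```python
import numpy as np

def dft_1d(f):
    """F(u) = (1/M) * sum_x f(x) exp(-j*2*pi*u*x/M)."""
    M = len(f)
    x = np.arange(M)
    return np.array([np.sum(f * np.exp(-2j * np.pi * u * x / M)) / M
                     for u in range(M)])

def idft_1d(F):
    """f(x) = sum_u F(u) exp(j*2*pi*u*x/M)."""
    M = len(F)
    u = np.arange(M)
    return np.array([np.sum(F * np.exp(2j * np.pi * u * x / M))
                     for x in range(M)])

f = np.random.rand(16)
assert np.allclose(f, idft_1d(dft_1d(f)).real)   # round trip recovers f
```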
145
FREQUENCY DOMAIN &
FREQUENCY COMPONENTS
e^{jθ} = cos θ + j sin θ
cos(-θ) = cos θ
The domain over which the values of F(u) range is called the frequency domain.
Glass prism: a physical device that separates light into its various color components.
146
FOURIER TRANSFORM
IN POLAR COORDINATES
F(u) = |F(u)| e^{-jφ(u)},
where |F(u)| = [R²(u) + I²(u)]^{1/2} (magnitude) and φ(u) = tan⁻¹[I(u)/R(u)] (phase angle)
P(u) = |F(u)|² = R²(u) + I²(u) (power spectrum)
147
A ONE-DIMENSIONAL EXAMPLE
M = 1024
A=1
K = 16
148
SAMPLES IN SPATIAL AND FREQUENCY
DOMAINS
149
TWO DIMENSIONAL DFT AND ITS INVERSE
F(u,v) = (1/MN) Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x,y) e^{-j2π(ux/M + vy/N)}, u = 0, 1, ..., M-1, v = 0, 1, ..., N-1 (frequency variables)
f(x,y) = Σ_{u=0}^{M-1} Σ_{v=0}^{N-1} F(u,v) e^{j2π(ux/M + vy/N)}, x = 0, 1, ..., M-1, y = 0, 1, ..., N-1 (spatial variables)
Polar coordinates: |F(u,v)| = [R²(u,v) + I²(u,v)]^{1/2}, φ(u,v) = tan⁻¹[I(u,v)/R(u,v)]
150
SHIFTING OF ORIGIN
ℑ[f(x,y)(-1)^{x+y}] = F(u - M/2, v - N/2)
F(0,0): the dc component
Δu = 1/(MΔx) and Δv = 1/(NΔy)
The spectrum is symmetric.
151
CENTERING THE SPECTRUM
152
RELATIONSHIP BETWEEN THE DFT COMPONENTS
AND SPATIAL CHARACTERISTICS OF AN IMAGE
2 principal features:
Corresponds to the
long protrusion.
153
BASIC STEPS IN
FREQUENCY DOMAIN FILTERING
Multiply the input image by (-1)^{x+y} to center its transform.
Pre/post-processing: cropping to even dimensions, gray-level scaling, conversion to floating point on input, conversion to 8-bit format on output, etc.
Each component of H multiplies both the real and imaginary parts of the corresponding component of F.
The transform and the filtered transform are complex; a real filter H is a zero-phase-shift filter, and the imaginary components of the output are set to zero.
154
FILTERING IN THE FREQUENCY DOMAIN
G(u,v) = H(u,v)F(u,v)
H(u,v): real, zero-phase-shift filters
Zero-phase-shift filters do not change the phase angle.
In general, the components of F(u,v) are complex
quantities.
Filtered image = ℑ−1 [G(u,v)]
In general, the inverse DFT is complex.
If the input image F and the filter H are real, the imaginary
components of inverse DFT should all be zero.
Due to round-off errors, the inverse DFT has parasitic
imaginary components. These are ignored.
155
AN INTRODUCTORY EXAMPLE OF FILTERING
H(u,v) = 0 if (u,v) = (M/2, N/2); 1 otherwise
This filter sets F(0,0) = 0 (the dc term of the centered transform).
156
LOW-PASS AND HIGH-PASS FILTERS
Low-pass filter: A filter that attenuates high frequencies while passing low frequencies.
High-pass filter: A filter that attenuates low frequencies while passing high frequencies.
blurring
Circularly
symmetric
sharpening
157
ADDING A CONSTANT TO A HIGH-PASS FILTER
A constant is added to the HP filter so that it does not completely eliminate F(0,0).
158
CONVOLUTION
f(x,y) * h(x,y) = (1/MN) Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} f(m,n) h(x-m, y-n)
Convolution theorem:
f(x,y) * h(x,y) ⇔ F(u,v) H(u,v)
f(x,y) h(x,y) ⇔ F(u,v) * H(u,v)
159
IMPULSE FUNCTION OF STRENGTH A
Definition: Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} s(x,y) A·δ(x-x0, y-y0) = A·s(x0, y0), where A is a real constant.
Unit impulse located at the origin: Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} s(x,y) δ(x,y) = s(0,0)
160
FOURIER TRANSFORM PAIRS
f(x,y) * h(x,y) ⇔ F(u,v) H(u,v)
δ(x,y) * h(x,y) ⇔ ℑ[δ(x,y)] H(u,v)
(1/MN) h(x,y) ⇔ (1/MN) H(u,v)
h(x,y) ⇔ H(u,v)
Hence, filters in the spatial and frequency domains
constitute a Fourier transform pair.
161
GAUSSIAN FILTERS
H(u) = A e^{-u²/2σ²}
162
LOWPASS AND HIGHPASS GAUSSIAN FILTERS
H(u) = A e^{-u²/2σ1²} - B e^{-u²/2σ2²}
h(x) = √(2π) σ1 A e^{-2π²σ1²x²} - √(2π) σ2 B e^{-2π²σ2²x²}
163
SMOOTHING AND SHARPENING FILTERS
Smoothing filters
Ideal lowpass filters (very sharp)
Butterworth lowpass filters
Gaussian lowpass filters (very smooth)
Sharpening filters
Ideal highpass filters
Butterworth highpass filters
Gaussian highpass filters
164
2-D IDEAL LOWPASS FILTERS
H(u,v) = 1 if D(u,v) ≤ D0; 0 if D(u,v) > D0
D0: a specified nonnegative quantity (the cutoff frequency).
165
2-D IDEAL LOWPASS FILTERS
Cutoff frequency
166
2-D IDEAL LOWPASS FILTERS AS A FUNCTION
OF CUTOFF FREQUENCIES
Total image power: PT = Σ_{u=0}^{M-1} Σ_{v=0}^{N-1} P(u,v), where P(u,v) = |F(u,v)|²
A circle of radius r encloses α percent of the power: α = 100 [ Σ_u Σ_v P(u,v) / PT ], where the summation is taken over the values of (u,v) that lie inside the circle or on its boundary.
Radius (pixels)   % of power
5                 92.0
15                94.6
30                96.4
80                98
230               99.5
167
ILP FILTERING WITH RADII 5,15,30,80,230
Ideal lowpass filters are not very practical, but they can be implemented on a computer to study their behavior.
Less blurring as the cutoff radius increases.
168
BLURRING AS A CONVOLUTION PROCESS
Gray-scale profile of a
horizontal scan line
through the center
of the spatial filter.
Gray-scale profile of a
diagonal scan line
through the center
of the filtered image.
169
BUTTERWORTH LOWPASS FILTERS
H(u,v) = 1 / (1 + [D(u,v)/D0]^{2n})
170
BLPF WITH ORDERS 1 THROUGH 4
171
BUTTERWORTH FILTERING WITH RADII 5,15,30,80,230
172
BLPFS OF ORDER 1,2,5,20
To facilitate comparisons, additional enhancement with a gamma transformation was applied to all images.
173
GAUSSIAN LOWPASS FILTERS
H(u,v) = e^{-D²(u,v)/2σ²}, where σ is a measure of the spread of the Gaussian curve.
D(u,v) = [(u - M/2)² + (v - N/2)²]^{1/2}
A = 1 (to be consistent with the other filters)
The inverse Fourier transform of the Gaussian lowpass filter is also Gaussian.
174
GLPFS WITH DIFFERENT σ VALUES
σ = D0
175
GAUSSIAN FILTERING WITH RADII 5,15,30,80,230
no ringing
176
PRACTICAL APPLICATION OF LPF:
MACHINE PERCEPTION
177
PRACTICAL APPLICATION OF LPF:
PRINTING & PUBLISHING
178
PRACTICAL APPLICATION OF LPF:
PROCESSING SATELLITE AND AERIAL IMAGES
179
SHARPENING FREQUENCY DOMAIN FILTERS
Hhp(u,v) = 1 - Hlp(u,v)
Sharpening filters:
180
3 TYPES OF SHARPENING FILTERS
181
CORRESPONDING SPATIAL DOMAIN FILTERS
182
IDEAL HIGHPASS FILTERS
H(u,v) = 0 if D(u,v) ≤ D0; 1 if D(u,v) > D0
183
BUTTERWORTH HIGHPASS FILTERS
H(u,v) = 1 / (1 + [D0/D(u,v)]^{2n})
184
GAUSSIAN HIGHPASS FILTERS
H(u,v) = 1 - e^{-D²(u,v)/2D0²}
The results are smoother than with the previous 2 filters.
185
CHAPTER 5: IMAGE RESTORATION
186
A MODEL OF IMAGE
DEGRADATION/RESTORATION PROCESS
g(x,y) = h(x,y) * f(x,y) + η(x,y)
G(u,v) = H(u,v) F(u,v) + N(u,v)
187
NOISE MODELS
188
NOISE PROBABILITY DENSITY FUNCTIONS
189
MODELING A BROAD RANGE OF NOISE
CORRUPTIONS
190
A TEST PATTERN
191
NOISY IMAGES AND THEIR HISTOGRAMS:
GAUSSIAN, RAYLEIGH, AND GAMMA
The parameters of
the noise were
chosen in each
case so that the
histogram
corresponding
to the 3 gray levels
in the test pattern
would start to
merge.
192
NOISY IMAGES AND THEIR HISTOGRAMS:
EXPONENTIAL, UNIFORM, AND IMPULSE
The parameters of
the noise were
chosen in each
case so that the
histogram
corresponding
to the 3 gray levels
in the test pattern
would start to
merge.
193
PERIODIC NOISE
194
ESTIMATION OF NOISE PARAMETERS
Periodic noise
Inspection of the Fourier spectrum
Inspection of the image (possible only in simple cases)
Automated analysis
Noise spikes are exceptionally pronounced.
Some knowledge is available about the general location of the
frequency components.
Noise PDFs
Parameters may be partially known from sensor specs
Imaging system available
Capture a set of images of flat environments.
Images are available
Crop small patches of reasonably constant gray level.
Obtain the histogram.
Compute mean and variance.
Gaussian PDF: Completely determined by the mean and variance.
Impulse noise: actual probability of occurrence of white and black
pixels is needed.
Others: Use the mean and variance to solve for a and b.
195
ESTIMATION OF PDF PARAMETERS
FROM SMALL PATCHES
196
SPATIAL DOMAIN FILTERING
FOR ADDITIVE NOISE
g(x,y) = f(x,y) + η(x,y) (additive noise)
Mean filters
Order-statistics filters
Median filter
Max & min filters
Midpoint filter
Alpha-trimmed mean filter
Adaptive filters
Adaptive local noise reduction filter
Adaptive median filter
197
MEAN FILTERS
1
fˆ ( x , y ) =
mn
∑ g ( s, t )
( s , t )∈S xy
Arithmetic mean
1
⎡ ⎤ mn
Geometric mean
ˆf ( x , y ) = ⎢
∏ g (s, t )⎥ (comparable to arithmetic
⎣⎢ ( s ,t )∈ S xy ) ⎦⎥
mean but tends to lose less
image detail)
mn
fˆ ( x , y ) = Harmonic mean
(works well for salt noise but
1
∑
( s , t )∈ S xy g (s, t )
fails for pepper noise. OK for
other types of noise as well)
Q=0 Ö arithmetic mean
Q=-1 Ö harmonic mean
∑ g ( s, t )
( s ,t )∈S xy
Q +1
Contraharmonic mean
(+Q: eliminates pepper noise
fˆ ( x, y ) =
∑ g ( s, t ) Q -Q: eliminates salt noise
Order not simultaneously!)
( s ,t )∈S xy of the filter
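A sketch (assumed Python) of the four mean filters evaluated on one m x n neighborhood; zeros in the data would need special handling for the geometric and harmonic means:

```python
import numpy as np

def mean_filters(window, Q=1.5):
    """window: an m x n neighborhood S_xy of noisy gray levels g(s,t)."""
    g = window.astype(np.float64).ravel()
    mn = g.size
    arithmetic = g.sum() / mn
    geometric = np.exp(np.log(g).mean())                      # loses less detail than arithmetic
    harmonic = mn / np.sum(1.0 / g)                           # good for salt noise
    contraharmonic = np.sum(g ** (Q + 1)) / np.sum(g ** Q)    # +Q removes pepper, -Q removes salt
    return arithmetic, geometric, harmonic, contraharmonic
```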
198
ARITHMETIC AND GEOMETRIC MEAN FILTERS
mean=0, variance=400
199
SPATIAL FILTERING FOR ADDITIVE NOISE
Better job of
cleaning the
background at the
expense of blurring
the dark areas!
200
CONTRAHARMONIC FILTERING
WITH THE WRONG SIGN
201
ORDER-STATISTICS FILTERS
Median filter: f̂(x,y) = median_{(s,t)∈Sxy} {g(s,t)} (effective for bipolar and unipolar impulse noise)
Midpoint filter: f̂(x,y) = (1/2)[ max_{(s,t)∈Sxy} {g(s,t)} + min_{(s,t)∈Sxy} {g(s,t)} ] (works best for randomly distributed noise)
Alpha-trimmed mean filter: f̂(x,y) = (1/(mn - d)) Σ_{(s,t)∈Sxy} gr(s,t)
d = 0 → arithmetic mean; d = (mn-1)/2 → median; other d: useful for multiple types of noise
202
3 PASSES OF MEDIAN FILTER
FOR IMPULSE NOISE
1ST pass
Significant
Improvement!
203
MAX & MIN FILTERS FOR PEPPER NOISE
204
REDUCTION OF NOISE
WITH 4 TYPES OF FILTERS
205
ADAPTIVE FILTERS
206
ADAPTIVE, LOCAL NOISE REDUCTION FILTER
f̂(x,y) = g(x,y) - (ση²/σL²)[g(x,y) - mL]
Definitions: ση² is the variance of the noise, mL is the local mean, and σL² is the local variance.
Best results: noise reduction is comparable but the restored image is much sharper!
208
ADAPTIVE MEDIAN FILTER
Level A:
A1 = zmed - zmin
A2 = zmed - zmax
If A1 > 0 and A2 < 0, go to level B
Else increase the window size
If window size ≤ Smax, repeat level A
Else output zxy
Level B (to check if zxy is an impulse):
B1 = zxy - zmin
B2 = zxy - zmax
If B1 > 0 and B2 < 0, output zxy
Else output zmed
3 main purposes:
1. To remove impulsive noise
2. To provide smoothing of non-impulsive noise
3. To reduce distortion (e.g., excessive thinning or thickening of object boundaries)
209
A SIMPLE EXAMPLE ADAPTIVE MEDIAN FILTER
center
point 10 20 20 zmin = 10
20 15 20 zmax = 100
20 25 100 zmed = 20
A1 = 20-10 = 10
A2 = 20-100 = -80
A1 > 0 & A2 < 0: zmin < zmed < zmax Hence, zmed cannot be an impulse.
Go to level B
B1 = 15-10 = 5
B2 = 15-100 = -85
B1 > 0 & B2 < 0: zmin < zxy < zmax Hence, zxy cannot be an impulse.
Output zmed = 20
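A sketch (assumed Python) of the same logic for a single pixel; the full raster scan and border handling are omitted, and the window indexing assumes (x, y) is far enough from the image edges:

```python
import numpy as np

def adaptive_median_pixel(image, x, y, s_max=7):
    """Return the filter output at (x, y), growing the window up to s_max."""
    size = 3
    while True:
        a = size // 2
        window = image[x - a:x + a + 1, y - a:y + a + 1]
        z_min, z_max, z_med = window.min(), window.max(), np.median(window)
        z_xy = image[x, y]
        if z_min < z_med < z_max:            # level A: z_med is not an impulse
            if z_min < z_xy < z_max:         # level B: z_xy is not an impulse
                return z_xy
            return z_med
        size += 2                            # otherwise grow the window
        if size > s_max:
            return z_xy
```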
210
COMPARISON OF MEDIAN AND ADAPTIVE
MEAN FILTERS
Noise reduction is
comparable but the filter
preserved sharpness!
211
FREQUENCY DOMAIN FILTERING
FOR PERIODIC NOISE
212
BANDREJECT FILTERS
Ideal bandreject filter:
H(u,v) = 1 if D(u,v) < D0 - W/2; 0 if D0 - W/2 ≤ D(u,v) ≤ D0 + W/2; 1 if D(u,v) > D0 + W/2
Butterworth bandreject filter of order n:
H(u,v) = 1 / (1 + [ D(u,v)W / (D²(u,v) - D0²) ]^{2n})
Gaussian bandreject filter:
H(u,v) = 1 - e^{-(1/2)[(D²(u,v) - D0²)/(D(u,v)W)]²}
W: the width of the band; D0: its radial center.
A principal application: noise removal in situations where the general location of the noise
components in the frequency domain is approximately known.
213
APPLICATION OF A BANDREJECT FILTER
Restoration
is evident!
Hbp(u,v) = 1 - Hbr(u,v)
Performs the
opposite operation
of a bandreject
filter.
215
APPLICATION OF A BANDPASS FILTER
Generated by:
• using the bandpass filter corresponding to
the bandreject filter in the previous
example
• taking the inverse transform
216
NOTCH FILTERS
Ideal notch reject filter:
H(u,v) = 0 if D1(u,v) ≤ D0 or D2(u,v) ≤ D0; 1 otherwise
Butterworth notch reject filter of order n:
H(u,v) = 1 / (1 + [ D0² / (D1(u,v) D2(u,v)) ]^{n})
Gaussian notch reject filter:
H(u,v) = 1 - e^{-(1/2)[D1(u,v) D2(u,v) / D0²]}
D1(u,v) = [(u - M/2 - u0)² + (v - N/2 - v0)²]^{1/2}
D2(u,v) = [(u - M/2 + u0)² + (v - N/2 + v0)²]^{1/2}
218
NOTCH PASS FILTERS
Hnp(u,v) = 1 - Hnr(u,v)
Performs the
opposite operation
of a notch reject
filter.
219
APPLICATION OF A NOTCH PASS FILTER
220
LINEAR, POSITION-INVARIANT DEGRADATIONS
222
ESTIMATION BY IMAGE OBSERVATION
223
ESTIMATION BY EXPERIMENTATION
224
ESTIMATION BY MODELING
H(u,v) = e^{-k(u² + v²)^{5/6}}
A degradation model
based on the
k = 0.0025 physical
characteristics of
atmospheric
turbulence.
Sometimes, Gaussian
LPF is used to model
mild, uniform
k = 0.001 k = 0.00025 blurring.
225
AN EXAMPLE OF MODELING
226
INVERSE FILTERING
G(u,v) = H(u,v) F(u,v) + N(u,v)
⇒ F̂(u,v) = F(u,v) + N(u,v)/H(u,v)
Degradation function: H(u,v) = e^{-k[(u - M/2)² + (v - N/2)²]^{5/6}} with k = 0.0025
228
MINIMUM MEAN SQUARE ERROR (WIENER)
FILTERING
229
DERIVATION OF THE WIENER FILTER
Minimize e² = E{(f - f̂)²}
⇒ F̂(u,v) = [ (1/H(u,v)) · |H(u,v)|² / (|H(u,v)|² + Sη(u,v)/Sf(u,v)) ] G(u,v)
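A sketch (assumed Python) of the Wiener filter with the ratio Sη/Sf replaced by a constant K chosen interactively, as in the examples that follow:

```python
import numpy as np

def wiener_filter(G, H, K=0.01):
    """F_hat = [ (1/H) * |H|^2 / (|H|^2 + K) ] * G, elementwise."""
    H2 = np.abs(H) ** 2
    return (np.conj(H) / (H2 + K)) * G   # algebraically equal to (1/H)*|H|^2/(|H|^2 + K)

# Usage: G = np.fft.fft2(degraded); f_hat = np.real(np.fft.ifft2(wiener_filter(G, H)))
```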
230
AN EXAMPLE OF WIENER FILTERING
K was chosen
interactively to yield
the best possible visual
results.
231
AN EXAMPLE OF WIENER FILTERING
Noise variance is
reduced by one
order of magnitude
Noise variance is
reduced by five
orders of magnitude
H(u,v) = e^{-k[(u - M/2)² + (v - N/2)²]^{5/6}}, where k = 0.0025.
233
COLOR FUNDAMENTALS
234
COLOR SPECTRUM
235
CHROMATIC LIGHT
236
VISIBLE SPECTRUM
237
ABSORPTION OF LIGHT BY THE CONES
238
PRIMARY AND SECONDARY COLORS OF LIGHT
239
COLOR TV
240
CIE CHROMATICITY DIAGRAM
Any point within the diagram represents some mixture of spectrum colors.
Points on the boundary are pure colors in the visible spectrum.
A straight line segment joining any 2 points defines all the different color variations that can be obtained by mixing these 2 colors additively.
Any color inside the triangle can be produced by various combinations of the corner colors.
241
COLOR MODELS
242
RGB MODEL
Images represented with the RGB color model have 3 component images: a red, a green, and a blue component.
If 8 bits are used for each pixel, we have a 24-bit RGB image.
It is assumed that all color values are normalized; each color is represented by a point in or on the unit cube.
243
RGB 24-BIT COLOR CUBE
244
COLOR PLANES
A color image is
acquired using the
process in reverse
order.
R = 127
G = 0-255
B = 0-255
245
SAFE RGB COLORS
246
RGB SAFE-COLOR CUBE
247
CMY & CMYK COLOR MODELS
Most devices (color printers, copiers, etc.) that deposit color pigments on
paper require CMY data input or perform an internal RGB to CMY
conversion.
RGB to CMY conversion (all color values are in the range [0,1]):
C = 1 - R (light reflected from a surface coated with pure cyan does not contain red)
M = 1 - G (light reflected from a surface coated with pure magenta does not contain green)
Y = 1 - B (light reflected from a surface coated with pure yellow does not contain blue)
248
HSI COLOR MODEL
249
CONCEPTUAL RELATIONSHIP BETWEEN RGB
AND HSI COLOR MODELS
The intensity component of a color point can be determined by passing a plane perpendicular to the intensity axis (the gray-scale axis from (0,0,0) to (1,1,1)) and containing the point.
As the plane moves up and down, the boundaries defined by the intersection of each plane with the faces of the cube have either a triangular or a hexagonal shape.
Conclusion: the H, S, and I values required to form the HSI space can be obtained from the RGB cube.
250
HUE AND SATURATION IN THE HSI MODEL
View obtained by looking at the RGB cube down its gray-scale axis.
It is not unusual to see HSI planes defined in terms of a hexagon, a triangle, or even a circle.
251
TRIANGULAR AND CIRCULAR COLOR PLANES
IN THE HSI MODEL
mid-point of the
vertical intensity axis
252
CONVERTING COLORS FROM RGB TO HSI
H = θ if B ≤ G; 360 - θ if B > G,
with θ = cos⁻¹{ (1/2)[(R - G) + (R - B)] / [(R - G)² + (R - B)(G - B)]^{1/2} }
H can be normalized to the range [0,1] by dividing all the values by 360.
S = 1 - 3 min(R, G, B) / (R + G + B)
I = (R + G + B) / 3
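A per-pixel sketch (assumed Python) of this conversion; R, G, B are normalized to [0, 1], H is returned in degrees, and a small eps guards against division by zero:

```python
import numpy as np

def rgb_to_hsi(R, G, B, eps=1e-10):
    num = 0.5 * ((R - G) + (R - B))
    den = np.sqrt((R - G) ** 2 + (R - B) * (G - B)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    H = theta if B <= G else 360.0 - theta
    I = (R + G + B) / 3.0
    S = 1.0 - 3.0 * min(R, G, B) / (R + G + B + eps)
    return H, S, I

print(rgb_to_hsi(1.0, 0.0, 0.0))   # pure red -> H = 0, S = 1, I = 1/3
```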
253
CONVERTING COLORS FROM HSI TO RGB
B = I(1 - S), R = I[1 + S cos H / cos(60° - H)], G = 3I - (R + B)
R = I(1 - S), G = I[1 + S cos H / cos(60° - H)], B = 3I - (R + G)
G = I(1 - S), B = I[1 + S cos H / cos(60° - H)], R = 3I - (G + B)
(one set of formulas for each 120° sector of hue)
254
PSEUDO-COLOR IMAGE PROCESSING
255
INTENSITY SLICING
A different color is
assigned to each side of
the plane.
Algorithm:
[0, L-1]: the gray scale
l0: black; lL-1: white
P planes: l1, l2, ..., lP
f(x,y) = ck if f(x,y) ∈ Vk
256
INTENSITY SLICING INTO 8 COLORS
257
INTENSITY SLICING INTO 2 COLORS
258
INTENSITY SLICING INTO MULTIPLE COLORS
Average monthly
rainfall over a
period of 3 years
Much
easier to
interpret
259
3 INDEPENDENT COLOR TRANSFORMATIONS
260
PSEUDO-COLOR ENHANCEMENT: AN EXAMPLE
261
COMBINATION OF SEVERAL MONOCHROME
IMAGES INTO A SINGLE COLOR COMPOSITE
262
3 MONOCHROME IMAGES ARE COMBINED
Visible red; visible green.
The first 3 images are combined into an RGB image.
The red component was replaced with the infrared image.
263
COMBINING IMAGES FROM A SPACECRAFT
One way to combine the sensed image data is by how they show differences in surface chemical composition.
This image was obtained by combining several of the sensor images from the Galileo spacecraft.
Bright red depicts material newly ejected from an active volcano; the surrounding yellow materials are older sulfur deposits.
An analysis of individual images would not convey similar information.
264
BASICS OF FULL-COLOR IMAGE PROCESSING
266
COLOR-SPACE COMPONENTS OF A
FULL-COLOR IMAGE
CMYK
RGB
HSI
267
MODIFIED INTENSITY OF THE FULL-COLOR
IMAGE
268
COLOR COMPLEMENTS
Complements are analogous to gray-scale negatives: they are useful for enhancing detail embedded in dark regions of a color image.
269
COLOR SLICING
The width of the cube and the radius of the sphere were determined
interactively.
270
AN EXAMPLE OF COLOR SLICING
271
HISTOGRAM PROCESSING
It is generally unwise to
histogram equalize the
components of a color
image independently.
This results in erroneous
color.
c̄(x,y) = (1/K) Σ_{(x,y)∈Sxy} c(x,y),
with components (1/K) Σ_{(x,y)∈Sxy} R(x,y), (1/K) Σ_{(x,y)∈Sxy} G(x,y), and (1/K) Σ_{(x,y)∈Sxy} B(x,y).
Smoothing by neighborhood averaging can be carried out using either individual color planes or the RGB color vectors.
273
AN RGB IMAGE AND ITS COLOR PLANES
274
HSI PLANES
275
SMOOTHED IMAGES
276
COLOR IMAGE SHARPENING
∇²[c(x,y)] = [∇²R(x,y), ∇²G(x,y), ∇²B(x,y)]
The Laplacian of a full-color image can be obtained by computing the Laplacian of each component plane separately.
277
SHARPENED IMAGES
278
COLOR SEGMENTATION IN HSI COLOR SPACE
HSI space:
279
COLOR SEGMENTATION IN RGB COLOR SPACE
Classify each pixel in the given image according to the specified range.
280
AN EXAMPLE OF COLOR SEGMENTATION
IN RGB COLOR SPACE
281
COLOR EDGE DETECTION
So, computing the gradient on individual planes and then using the results to form a color image will lead to erroneous results.
Direction of the maximum rate of change: θ = (1/2) tan⁻¹[ 2gxy / (gxx - gyy) ]
Value of the rate of change: F(θ) = { (1/2)[ (gxx + gyy) + (gxx - gyy) cos 2θ + 2gxy sin 2θ ] }^{1/2}
282
AN EXAMPLE OF COLOR EDGE DETECTION
USING 2 APPROACHES
The edge detail is more complete with the vector approach.
Both approaches yielded reasonable results. Is the extra detail worth the added computational burden of the vector approach?
283
COMPONENT GRADIENT IMAGES
284
NOISE IN COLOR IMAGES
Gaussian noise
Rayleigh noise
Erlang noise
Exponential noise
Uniform noise
Impulse (salt and pepper) noise
285
GAUSSIAN NOISE IN A COLOR IMAGE
286
NOISY RGB IMAGE CONVERTED TO HSI
The intensity plane is slightly smoother than any of the 3 noisy RGB planes.
I = (R + G + B) / 3
Image averaging reduces random noise!
287
ONE NOISY RGB CHANNEL
AFFECTS ALL HSI PLANES
288
EXCEPTIONS FOR COLOR VECTOR
PROCESSING
• Vector ordering
• Some of the filters based on the ordering concept
289
COLOR IMAGE COMPRESSION
Compression reduces the amount of data required to represent a digital image.
Compressed with JPEG 2000: the compressed image contains only 1 data bit for every 230 bits of data in the original image.
290
CHAPTER 7: IMAGE COMPRESSION
291
MULTIMEDIA STORAGE REQUIREMENTS
(WINDOW OF 640X480 PIXELS)
Text
2 bytes for every 8x8 pixel character
# of characters/page = (640x480)/(8x8) = 4,800 characters
Storage/screen page = 4,800x2 = 9,600 bytes = 9.4 KB
Vector images
A typical image with 500 lines
Each line is defined by its coordinates in x & y directions,
and by an 8-bit attribute field.
Coordinates in the x direction require 10 bits: log2(640).
Coordinates in the y direction require 9 bits: log2(480).
Bits/line = 9 + 10 + 9 + 10 + 8 = 46 bits.
Storage/screen page = 500x46/8 = 2,875 bytes = 2.8 KB
Bit-mapped images
256 different colors
Storage/screen page = 640x480x1 = 307,200 bytes = 300 KB
292
MULTIMEDIA STORAGE REQUIREMENTS
(WINDOW OF 640X480 PIXELS)
293
JPEG
294
FOUR MODES OF OPERATIONS
Sequential baseline
A simple and efficient algorithm.
Adequate for most applications.
The image is scanned in a raster scan fashion, left-to-right and top-to-bottom.
Progressive
The image is encoded in multiple scans at the same spatial
resolution.
Hierarchical
The image is encoded at multiple spatial resolutions.
Lower resolution images may be displayed without having to
decompress the image at a higher spatial resolution.
Can be implemented using sequential, progressive, or lossless
modes.
Lossless
The image is encoded to guarantee exact recovery of every sample
value.
Compression efficiency is inherently lower than those of lossy
methods.
295
SEQUENTIAL BASELINE
Block diagram: the color components are divided into 8x8 blocks; each block goes through the DCT, quantization (using quantization tables), and entropy coding (DPCM for the DC coefficients, RLC for the AC coefficients, using coding tables); the output stream carries a header, the tables, and the coded data.
296
DCT
F(u,v) = (2/N) C(u)C(v) Σ_{i=0}^{N-1} Σ_{j=0}^{N-1} f(i,j) cos[(2i+1)uπ/2N] cos[(2j+1)vπ/2N]
Inverse DCT
f̃(i,j) = (2/N) Σ_{u=0}^{N-1} Σ_{v=0}^{N-1} C(u)C(v) F(u,v) cos[(2i+1)uπ/2N] cos[(2j+1)vπ/2N]
C(x) = 1/√2 if x = 0, else 1 if x > 0
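A direct, unoptimized transcription of this DCT pair (assumed Python; the normalization here is the symmetric 2/N form implied by C(x)); the round-trip assertion checks that the pair is consistent:

```python
import numpy as np

def C(x):
    return 1.0 / np.sqrt(2.0) if x == 0 else 1.0

def dct2(f):
    """F(u,v) = (2/N) C(u)C(v) sum_ij f(i,j) cos((2i+1)u*pi/2N) cos((2j+1)v*pi/2N)."""
    N = f.shape[0]
    F = np.zeros((N, N))
    i = np.arange(N).reshape(-1, 1)
    j = np.arange(N).reshape(1, -1)
    for u in range(N):
        for v in range(N):
            basis = np.cos((2 * i + 1) * u * np.pi / (2 * N)) * \
                    np.cos((2 * j + 1) * v * np.pi / (2 * N))
            F[u, v] = (2.0 / N) * C(u) * C(v) * np.sum(f * basis)
    return F

def idct2(F):
    """f(i,j) = (2/N) sum_uv C(u)C(v) F(u,v) cos((2i+1)u*pi/2N) cos((2j+1)v*pi/2N)."""
    N = F.shape[0]
    f = np.zeros((N, N))
    u = np.arange(N).reshape(-1, 1)
    v = np.arange(N).reshape(1, -1)
    cu = np.array([C(k) for k in range(N)]).reshape(-1, 1)
    cv = cu.reshape(1, -1)
    for i in range(N):
        for j in range(N):
            basis = np.cos((2 * i + 1) * u * np.pi / (2 * N)) * \
                    np.cos((2 * j + 1) * v * np.pi / (2 * N))
            f[i, j] = (2.0 / N) * np.sum(cu * cv * F * basis)
    return f

block = np.random.randint(0, 256, (8, 8)).astype(np.float64) - 128   # level-shifted 8x8 block
assert np.allclose(block, idct2(dct2(block)))                        # exact round trip
```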
297
A CASE STUDY: TEST IMAGE
298
A CASE STUDY: AN 8X8 BLOCK
52 55 61 66 70 61 64 73
63 59 66 90 109 85 69 72
67 61 68 104 126 88 68 70
79 65 60 70 77 68 58 75
85 71 64 59 55 61 65 83
87 79 69 68 65 76 78 94
299
A CASE STUDY: LEVEL SHIFTING
n = 8 ⇒ 2^{n-1} = 128; 128 is subtracted from each pixel (e.g., -49 -63 -68 -58 -51 -65 -70 -53).
300
A CASE STUDY: APPLICATION OF DCT
7 -21 -62 9 11 -7 -6 6
-50 13 35 -15 -9 6 0 3
11 -8 -13 -2 -1 1 -4 1
-10 1 3 -3 -1 0 2 -1
-4 -1 2 -1 2 -3 1 -2
-1 -1 -1 -2 -1 -1 0 -1
301
A CASE STUDY: NORMALIZATION MATRIX
Normalization matrix Z(u,v) (first six rows shown):
16 11 10 16 24 40 51 61
12 12 14 19 26 58 60 55
14 13 16 24 40 57 69 56
14 17 22 29 51 87 80 62
18 22 37 56 68 109 103 77
24 35 55 64 81 104 113 92
302
A CASE STUDY: QUANTIZATION
-26 -3 -6 2 2 0 0 0
1 -2 -4 0 0 0 0 0
-3 1 5 -1 -1 0 0 0
DCT coefficients -4 1 2 -1 0 0 0 0
are quantized
using the formula
1 0 0 0 0 0 0 0
⎡ T (u , v) ⎤
Tˆ (u , v) = round ⎢ ⎥
⎣ Z (u , v) ⎦ 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
38 consecutive
zeros! 0 0 0 0 0 0 0 0
303
A CASE STUDY: ZIGZAG REORDERING
[-26 -3 1 -3 -2 -6 2 -4 1 -4 1 1 5 0 2 0 0 -1 2 0 0 0 0 0 -1 -1 EOB]
a special EOB
Huffman code word
indicates that the
remainder of the
coefficients are
zeros.
304
A CASE STUDY: PREPARATION FOR
ENTROPY CODING
305
A CASE STUDY: DPCM ON DC COEFFICIENTS
306
A CASE STUDY: RLC ON AC COEFFICIENTS
307
A CASE STUDY: CODING CATEGORIES
308
A CASE STUDY: DC BASE CODES
309
A CASE STUDY: AC BASE CODES
310
A CASE STUDY: AC BASE CODES
311
A CASE STUDY: COMPLETELY CODED ARRAY
1010110 0100 001 0100 0101 100001 0110 100011 001 100011 001
[-26 -3 1 -3 -2 -6 2 -4 1 -4 1 1 5 0 2 0 0 -1 2 0 0 0 0 0 -1 -1 EOB]
312
A CASE STUDY: DECOMPRESSION BEGINS
-26 -3 -6 2 2 0 0 0
1 -2 -4 0 0 0 0 0
-3 1 5 -1 -1 0 0 0
-4 1 2 -1 0 0 0 0
1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
313
A CASE STUDY: DENORMALIZATION
T̃(u,v) = T̂(u,v) Z(u,v)
Denormalized array (rows shown):
12 -24 -56 0 0 0 0 0
-56 17 44 -29 0 0 0 0
18 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
A CASE STUDY: INVERSE DCT
315
A CASE STUDY: LEVEL SHIFTING
58 64 67 64 59 62 70 78
56 55 67 89 98 88 74 69
83 69 59 60 61 61 67 78
93 81 67 62 69 80 84 84
316
A CASE STUDY: DIFFERENCE
52 55 61 66 70 61 64 73 58 64 67 64 59 62 70 78
63 59 66 90 109 85 69 72 56 55 67 89 98 88 74 69
79 65 60 70 77 68 58 75 76 57 56 74 75 57 57 74
85 71 64 59 55 61 65 83 83 69 59 60 61 61 67 78
87 79 69 68 65 76 78 94 93 81 67 62 69 80 84 84
-6 -9 -6 2 11 -1 -6 -5
7 4 -1 1 11 -3 -5 3
2 9 -2 -6 -3 -12 -14 9
-6 7 0 -4 -5 -9 -7 1
-7 8 4 -1 11 4 3 -2
3 8 4 -4 2 11 1 1
2 2 5 -1 -6 0 -2 5
-6 -2 2 6 -4 -4 -6 10
317
JPEG 2000
318
JPEG 2000: BLOCK DIAGRAM
Block diagram: image data → DC level shifting → DWT → quantization → entropy coding → compressed data.
319
1-DIM DISCRETE WAVELET TRANSFORM (DWT)
Wφ(j0, k) = (1/√M) Σ_x f(x) φ_{j0,k}(x) : approximation coefficients
Wψ(j, k) = (1/√M) Σ_x f(x) ψ_{j,k}(x) : detail coefficients
f(x) = (1/√M) Σ_k Wφ(j0, k) φ_{j0,k}(x) + (1/√M) Σ_{j=j0}^{∞} Σ_k Wψ(j, k) ψ_{j,k}(x)
320
1-DIM FAST WAVELET TRANSFORM (FWT)
321
2-DIM DISCRETE WAVELET TRANSFORM (DWT)
ψ^H(x,y), ψ^V(x,y), ψ^D(x,y) : 2-dim wavelets
ψ^D(x,y) = ψ(x)ψ(y) : measures variations along diagonals
322
2-DIM DISCRETE WAVELET TRANSFORM (DWT)
φ_{j,m,n}(x,y) = 2^{j/2} φ(2^j x - m, 2^j y - n)
ψ^i_{j,m,n}(x,y) = 2^{j/2} ψ^i(2^j x - m, 2^j y - n), i = {H, V, D}
Wφ(j0, m, n) = (1/√(MN)) Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x,y) φ_{j0,m,n}(x,y)
W^i_ψ(j, m, n) = (1/√(MN)) Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x,y) ψ^i_{j,m,n}(x,y)
f(x,y) = (1/√(MN)) Σ_m Σ_n Wφ(j0, m, n) φ_{j0,m,n}(x,y) + (1/√(MN)) Σ_{i=H,V,D} Σ_{j=j0}^{∞} Σ_m Σ_n W^i_ψ(j, m, n) ψ^i_{j,m,n}(x,y)
Normally, j0 = 0 and M = N = 2^J.
Summations are performed over x = 0, 1, 2, ..., M-1, j = 0, 1, 2, ..., J-1, and m, n = 0, 1, ..., 2^j - 1.
323
2-DIM FAST WAVELET TRANSFORM (FWT)
324
A FWT EXAMPLE
128x128
computer-
generated 1st level
image decomposition
325
CHAPTER 8: MORPHOLOGICAL IMAGE PROCESSING
326
A ∈ Z2.
a = (a1, a2) ∈ A.
Empty set: the set with no elements.
w = (w1,w2) & C = {w|w = -d, d ∈ D}: C is the set of elements w
s.t. w is formed by multiplying each of the 2 coordinates of all
the elements of D by -1.
A ⊆ B: A is a subset of B.
C = A U B: C is the union of A and B.
D = A ∩ B: D is the intersection of A and B.
A ∩ B = ∅: A and B are disjoint sets.
Ac = {w|w ∉ A}: The complement of A.
A – B = {w|w ∈ A, w ∉ B} = A ∩ Bc : The difference of A and B.
B̂ = {w|w = -b, b ∈ B}: Reflection of B.
(A)z = {c|c = a + z, a ∈ A}: Translation of A by z = (z1,z2)
327
BASIC SET OPERATIONS
328
3 BASIC LOGICAL OPERATIONS
Example:
p q p XOR q
0 0 0
0 1 1
1 0 1
1 1 0
329
LOGICAL OPERATIONS BETWEEN
BINARY IMAGES
p q p AND q
0 0 0
0 1 0
1 0 0
1 1 1
330
DILATION
A ⊕ B = {w ∈ Z² | w = a + b, a ∈ A, b ∈ B}
A ⊕ B = ∪_{b∈B} (A)b
331
2 EXAMPLES OF DILATION
332
AN APPLICATION OF DILATION:
BRIDGING GAPS
333
EROSION
A ⊖ B = {z | (B)z ⊆ A} : Erosion of A by B.
The set of all points z s.t. B, translated by z, is contained in A.
Alternative definitions of erosion
A ⊖ B = {w ∈ Z² | w + b ∈ A, ∀ b ∈ B}
A ⊖ B = ∩_{b∈B} (A)-b
Duality: (A ⊖ B)^c = A^c ⊕ B̂
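A small sketch (assumed Python/numpy) of binary dilation and erosion built from the translate-and-union / translate-and-intersect definitions above; the structuring element is a list of (row, col) offsets, and shifts wrap around at the borders, which is fine for objects away from the edges:

```python
import numpy as np

def translate(A, dz):
    """(A)_z : shift the binary image A by offset dz = (dr, dc)."""
    return np.roll(A, dz, axis=(0, 1))

def dilate(A, B):
    """A dilated by B = union of (A)_b over b in B."""
    out = np.zeros_like(A)
    for b in B:
        out |= translate(A, b)
    return out

def erode(A, B):
    """A eroded by B = intersection of (A)_{-b} over b in B."""
    out = np.ones_like(A)
    for b in B:
        out &= translate(A, (-b[0], -b[1]))
    return out

A = np.zeros((7, 7), dtype=np.uint8); A[2:5, 2:5] = 1
B = [(0, 0), (0, 1), (1, 0), (0, -1), (-1, 0)]        # 3x3 cross
print(dilate(A, B).sum(), erode(A, B).sum())          # dilation grows A, erosion shrinks it
```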
334
2 EXAMPLES OF EROSION
335
AN APPLICATION OF EROSION
Erosion Dilation
Structure element: Structure element:
13X13 pixels 13X13 pixels
(all 1’s) (all 1’s)
336
OPENING & CLOSING
337
GEOMETRIC INTERPRETATION OF OPENING
the union
of all
translates
of B that
fit into A.
338
GEOMETRIC INTERPRETATION OF CLOSING
339
OPENING & CLOSING OPERATIONS
Erosion followed by dilation: the opening of A by B.
Outward-pointing corners were rounded; inward-pointing corners were not affected.
340
PROPERTIES SATISFIED BY
OPENING & CLOSING
341
A MORPHOLOGICAL FILTER:
OPENING FOLLOWED BY CLOSING
Morphological operations can be used to construct filters.
The noise manifests itself as light elements on a dark background and as dark elements on the light components of the fingerprint.
The dark spots increased in size.
The hit-or-miss transform:
Set A is the union of 3 disjoint sets.
W - X: the local background of X w.r.t. W.
A ⊖ X: the set of all locations of the origin of X at which X found a match (hit) in A.
A^c ⊖ (W - X): the set of all locations of the origin of (W - X) at which (W - X) found a match (hit) in A^c.
Generalized notation: B = (B1, B2), with B1 = X and B2 = W - X.
A ∗ B = (A ⊖ B1) ∩ (A^c ⊖ B2)
343
SOME BASIC MORPHOLOGICAL ALGORITHMS
Boundary extraction
Region filling
Convex hull
Thinning
Thickening
Skeletons
Pruning
344
BOUNDARY EXTRACTION
1’s
345
AN APPLICATION OF BOUNDARY EXTRACTION
The boundary is 1 pixel thick.
Structuring element (3x3, all 1's):
1 1 1
1 1 1
1 1 1
The object consists of 1's, the background of 0's.
346
REGION FILLING
A subset whose elements are 8-connected boundary points of a region.
The objective is to fill the entire region with 1's.
X0 = p
Xk = (Xk-1 ⊕ B) ∩ A^c, k = 1, 2, 3, ...
347
AN APPLICATION OF REGION FILLING
348
EXTRACTION OF CONNECTED COMPONENTS
Y is a connected component in A.
X0 = p
Xk = (Xk-1 ⊕ B) ∩ A, k = 1, 2, 3, ...
Y = Xk (when Xk = Xk-1)
349
AN APPLICATION OF
CONNECTED COMPONENT EXTRACTION
4 of the
connected
components
are dominant
in size.
350
CONVEX HULL
351
CONVEX HULL: AN EXAMPLE
B^i, i = 1, 2, 3, 4 : the 4 structuring elements
X^i_0 = A (initial points)
X^i_k = (X^i_{k-1} ∗ B^i) ∪ A, i = 1, 2, 3, 4, and k = 1, 2, 3, ...
D^i = X^i_conv, where "conv" indicates convergence: X^i_k = X^i_{k-1}
C(A) = ∪_{i=1}^{4} D^i
352
SHORTCOMING OF THE ALGORITHM FOR
OBTAINING THE CONVEX HULL
Shortcoming of the algorithm: the convex hull can grow beyond the
min dimensions required to guarantee convexity.
353
EXTENSIONS TO GRAY SCALE IMAGES
354
GRAY SCALE DILATION
Gray scale (analogous to the binary case): (s - x), (t - y) ∈ Df ; (x, y) ∈ Db
Binary: the 2 sets have to overlap by at least 1 element.
355
1-D EXAMPLE OF GRAY SCALE DILATION
(f ⊕ b)(s) = max{ f(s - x) + b(x) | (s - x) ∈ Df ; x ∈ Db }
Unlike the binary case, f, not b, is shifted.
Conceptually, f sliding by b is no different from b sliding by f.
The actual mechanics of gray-scale dilation are easier to visualize if b is the function that slides past f.
At each position of b, the value of dilation at that point is the max of f + b in the interval spanned by b.
356
GRAY SCALE EROSION
• Bright details that are smaller in area than the structuring element are
reduced, with the degree of reduction determined by the gray level
values surrounding the bright detail, and by the shape and values of b.
357
1-D EXAMPLE OF GRAY SCALE EROSION
(f ⊖ b)(s) = min{ f(s + x) - b(x) | (s + x) ∈ Df ; x ∈ Db }
Unlike the binary case, f, not b, is shifted.
Conceptually, f sliding by b is no different from b sliding by f.
At each position of b, the value of erosion at that point is the min of f - b in the interval spanned by b.
(f ⊖ b)^c(s,t) = (f^c ⊕ b̂)(s,t)
Gray-scale dilation and erosion are duals w.r.t. function complementation and reflection.
f^c = -f(x,y) and b̂ = b(-x,-y)
358
AN EXAMPLE OF GRAY SCALE
DILATION & EROSION
359
OPENING AND CLOSING FOR
GRAY SCALE IMAGES
360
GEOMETRIC INTERPRETATION OF
OPENING & CLOSING
To simplify the
illustration, a scan line of
a gray scale image is
shown as a continuous
function.
361
OPENING AND CLOSING OF
A GRAY SCALE IMAGE
Original image
f ∘ b = (f ⊖ b) ⊕ b
f • b = (f ⊕ b) ⊖ b
Opening is generally used to remove small light details from an image while leaving the overall gray levels and larger bright features relatively undisturbed.
Closing is generally used to remove dark details from an image while leaving bright features relatively undisturbed.
362
MORPHOLOGICAL SMOOTHING
363
MORPHOLOGICAL GRADIENT
g = (f ⊕ b) - (f ⊖ b) (dilation minus erosion)
364
TOP-HAT TRANSFORMATION
Original image h = f − ( f o b)
365
TEXTURAL SEGMENTATION
Procedure:
366
GRANULOMETRY
Granulometry: a field that deals principally with determining the size distribution of particles in an image.
Procedure:
1. Opening operations with structuring elements of increasing size are performed on the original image.
2. The difference between the original image and its opening is computed after each pass with a different
structuring element.
3. At the end of the process, these differences are normalized, and then used to construct a histogram of
particle size distribution.
The approach is based on the idea that opening operations of a particular size have the most effect on
regions of the input image that contain particles of similar size. Thus, a measure of the relative number of
such particles is obtained by computing the difference between the input and output images.
Light objects of 3
different sizes.
367
CHAPTER 9: IMAGE SEGMENTATION
368
DETECTION OF DISCONTINUITIES
Mask coefficients w1, w2, ..., w9 and the corresponding image gray levels z1, z2, ..., z9.
369
POINT DETECTION
A point is detected at the location on which the mask is centered if |R| ≥ T, T > 0.
A single
black pixel
embedded
within the
porosity.
Mask responses
R1 R2 R3 R4
At a certain point, |Ri| > |Rj|, for all j ≠ i: the point is more
likely associated with a line in the direction of mask i.
371
AN EXAMPLE OF LINE DETECTION
We are interested in finding all the lines that are 1 pixel thick and are oriented at -45°.
T = the max value in the image; the absolute value of the result is thresholded.
372
EDGE DETECTION
373
MODELING OF AN EDGE
374
DETAIL NEAR AN EDGE
376
DEFINITION OF AN EDGE POINT
377
THE GRADIENT
∇f = [Gx² + Gy²]^{1/2} (not preferred)
∇f ≈ |Gx| + |Gy| (more attractive)
Direction: α(x,y) = tan⁻¹(Gy/Gx)
2x2 filters: Gx = (z9 - z5), Gy = (z8 - z6)
3x3 filters: Gx = (z7 + z8 + z9) - (z1 + z2 + z3), Gy = (z3 + z6 + z9) - (z1 + z4 + z7)
or Gx = (z7 + 2z8 + z9) - (z1 + 2z2 + z3), Gy = (z3 + 2z6 + z9) - (z1 + 2z4 + z7)
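A sketch (assumed Python) of the 3x3 Sobel masks from the last pair of expressions, applied with a simple sliding-window loop and combined as |Gx| + |Gy|:

```python
import numpy as np

SOBEL_X = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.float64)   # bottom row minus top row
SOBEL_Y = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)   # right column minus left column

def apply_mask(f, w):
    """Slide the 3x3 mask w over f (borders skipped)."""
    g = np.zeros_like(f, dtype=np.float64)
    for x in range(1, f.shape[0] - 1):
        for y in range(1, f.shape[1] - 1):
            g[x, y] = np.sum(w * f[x - 1:x + 2, y - 1:y + 2])
    return g

def sobel_gradient(f):
    gx = apply_mask(f, SOBEL_X)
    gy = apply_mask(f, SOBEL_Y)
    return np.abs(gx) + np.abs(gy)      # gradient magnitude approximation
```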
378
DIAGONAL EDGE DETECTION
379
AN EXAMPLE OF HORIZONTAL & VERTICAL
EDGE DETECTION
380
PRINCIPLE EDGE DETECTION
381
EMPHASIS ON DIAGONAL EDGE DETECTION
382
THE LAPLACIAN
3x3 neighborhood gray levels z1, ..., z9 and the corresponding Laplacian masks.
383
ROLE OF THE LAPLACIAN IN SEGMENTATION
The Laplacian of h: ∇²h(r) = -[(r² - σ²)/σ⁴] e^{-r²/2σ²} (Laplacian of a Gaussian)
384
THE LAPLACIAN OF A GAUSSIAN (LoG)
The second derivative is a linear operation: convolving an image with ∇2h is the
same as convolving the image with h first and then computing the Laplacian of the
result.
A mask that
approximates ∇2h.
385
COMPARISON OF 2 APPROACHES
FOR EDGE FINDING
Sobel gradient
387
LOCAL PROCESSING
The objective is to find rectangles whose sizes make them candidates for license plates (E = 25, A = 15).
Single threshold, for an image composed of light objects on a dark background: f(x,y) > T: an object point; f(x,y) < T: a background point.
Multiple thresholds, for an image composed of 2 types of light objects on a dark background: T1 < f(x,y) < T2: (x,y) ∈ object class 1; f(x,y) > T2: (x,y) ∈ object class 2; f(x,y) < T1: (x,y) ∈ the background.
390
THRESHOLDING FUNCTION
391
THE ROLE OF ILLUMINATION
Histogram of r(x,y)
i(x,y)
i(x,y)r(x,y)
Difficult to segment
392
BASIC GLOBAL THRESHOLDING
For example, in
industrial
applications,
illumination can
be controlled.
393
AUTOMATIC DETERMINATION OF T
394
AN EXAMPLE OF AUTOMATIC
DETERMINATION OF T
Note the
clear valley
T0 = 0
T = 125
After 3 iterations: T = 125.4
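The iterations above presumably follow the usual scheme: split the image at T, average the two class means, and repeat until T stabilizes. A sketch (assumed Python) under that assumption:

```python
import numpy as np

def iterative_threshold(image, tol=0.5):
    """Split at T, average the two class means, repeat until T stabilizes."""
    T = image.mean()                      # initial estimate
    while True:
        above = image[image > T]
        below = image[image <= T]
        T_new = 0.5 * (above.mean() + below.mean())
        if abs(T_new - T) < tol:
            return T_new
        T = T_new
```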
395
BASIC ADAPTIVE THRESHOLDING
396
AN EXAMPLE OF BASIC
ADAPTIVE THRESHOLDING
The global threshold is manually placed in the valley of the histogram.
Subimages not containing a boundary: σ² < 75 (almost unimodal histogram).
Subimages containing a boundary: σ² > 100 (clearly bimodal histogram).
For each subimage with σ² > 100, T is automatically determined, with T0 midway between the min and max gray levels.
398
THRESHOLDS BASED ON SEVERAL VARIABLES
399
AN EXAMPLE OF COLOR SEGMENTATION
The color image has 3 16-level RGB components.
Thresholding about one of the histogram clusters corresponding to facial tones.
Thresholding about a cluster close to the red axis.
401
REGION GROWING
Several cracks and porosities. Histogram of (a).
404
QUADTREE REPRESENTATION
A quadtree: a tree in
which nodes have exactly
4 descendants
405
AN EXAMPLE OF REGION
SPLITTING AND MERGING
P(Ri) = TRUE if at least 80% of the pixels in Ri have the property |zj – mi| ≤ 2σi
407
CHAPTER 10: DIGITAL IMAGE WATERMARKING
Block diagram: the watermark W and key k are embedded into image I to produce the watermarked image Iw; after distribution and possible attacks, the received image Iw* goes through extraction/detection using the key k.
409
CLASSIFICATION
410
POPULAR TRANSFORMS
411
DCT DOMAIN WATERMARKING
Watermark embedding
W: watermark to be embedded.
X: sequence of pixel values.
XD and YD: row-concatenated DCT coefficients of X and Y.
a = scaling factor: Determines the intensity of the watermark.
Y_D(i) = X_D(i)(1 + aW(i))
Watermark extraction
W*: extracted version of the watermark.
Z_D: possibly forged watermarked image.
W*(i) = (1/a)(Z_D(i)/X_D(i) - 1)  ==>  S(W, W*) = (W · W*) / sqrt(W* · W*)
T = user-defined threshold.
If S > T, image is authentic.
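A sketch (assumed Python) of this embedding/detection scheme on a 1-D coefficient sequence; the 2-D DCT of the image, the coefficient selection, and the choice of T are left out, and the similarity is the Cox-style correlation assumed above:

```python
import numpy as np

def embed(X_D, W, a=0.1):
    """Y_D(i) = X_D(i) * (1 + a*W(i))."""
    return X_D * (1.0 + a * W)

def extract(Z_D, X_D, a=0.1):
    """W*(i) = (1/a) * (Z_D(i)/X_D(i) - 1)."""
    return (Z_D / X_D - 1.0) / a

def similarity(W, W_star):
    """S(W, W*) = (W . W*) / sqrt(W* . W*)."""
    return np.dot(W, W_star) / np.sqrt(np.dot(W_star, W_star))

rng = np.random.default_rng(0)
X_D = rng.normal(100.0, 20.0, 1000)       # stand-in for row-concatenated DCT coefficients
W = rng.normal(0.0, 1.0, 1000)            # watermark sequence
Y_D = embed(X_D, W)
print(similarity(W, extract(Y_D, X_D)))                                  # high S for the marked image
print(similarity(W, extract(embed(X_D, rng.normal(0, 1, 1000)), X_D)))   # low S for a different mark
```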
412
DCT DOMAIN WATERMARKING
S>T
413
SCALING FACTOR a = 0.1, 0.5, 1.0, 5.0
Original image
414
CONCLUSIONS
415
DWT DOMAIN WATERMARKING
DWT decomposition with 2 levels: subbands LL2, HL2, LH2, HH2 (level 2) and HL1, LH1, HH1 (level 1).
416
MULTIPLE WATERMARKING
IN THE DWT DOMAIN - FIRST PAPER
418
ALGORITHM
SR = S/(S + D)
S: # of matching pixel values in the compared images.
D: # of different pixel values in the compared images.
Similarity ratios for the example extractions: 1.000, 0.977, 0.985, 0.986.
420
ATTACKS – 1ST LEVEL
JPEG 50 (Matlab): 33.11 Gaussian Noise [0 0.001] Intensity Adj. ([0 0.8],[0 1]) Crop Rewatermark
(Matlab): 29.74 (Matlab): 18.87 (Matlab): 11.88 (Matlab): 38.51
JPEG 25 (Matlab): 31.27 Rescale 512 -> 256 -> 512 Gamma Correction 1.5 Pixelate 2 (mosaic) Collusion
(Matlab): 19.81 (Matlab): 17.90 (Photoshop): 30.13 (Matlab): 45.35
421
RECOVERED WATERMARKS – 1ST LEVEL
JPEG Quality 75 Blur (3,3) Histogram Equalization Rotate 20° (Matlab) Sharpen (Photoshop)
JPEG Quality 50 Gaussian Noise [0 0.001] Intensity Adj. ([0 0.8],[0 1]) Crop Rewatermark
JPEG Quality 25 Rescale 512 -> 256 -> 512 Gamma correction 1.5 Pixelate 2 (mosaic) Collusion
422
ATTACKS – 2nd LEVEL
JPEG 50 Gaussian Noise [0 0.001] Intensity Adj. ([0 0.8],[0 1]) Crop Rewatermark
(Matlab): 33.04 (Matlab): 29.73 (Matlab): 18.88 (Matlab): 11.88 ( Matlab): 37.76
JPEG 25 Rescale 512 -> 256 -> 512 Gamma Correction 1.5 Pixelate 2 (mosaic) Collusion
(Matlab): 31.22 (Matlab): 19.80 (Matlab): 17.89 (Photoshop): 30.08 (Matlab): 44.08
423
RECOVERED WATERMARKS – 2nd LEVEL
JPEG Quality 50 Gaussian Noise [0 0.001] Intensity Adj. ([0 0.8],[0 1]) Crop Rewatermark
JPEG Quality 25 Rescale 512 -> 256 -> 512 Gamma correction 1.5 Pixelate 2 (mosaic) Collusion
424
CONCLUSIONS
425
DWT-SVD DOMAIN WATERMARKING
Subbands of the 2-level DWT decomposition: LL2, HL2, LH2, HH2, HL1, LH1, HH1.
Watermark embedding
1. Using DWT, decompose the cover image into 4 subbands: LL, HL, LH, and HH.
2. Apply SVD to each subband image: A^k = U_A^k Σ^k V_A^{kT}, k = 1, ..., 4.
3. Apply SVD to the visual watermark: W = U_W Σ_W V_W^T
Watermark extraction
1. Decompose the watermarked cover image into 4 subbands: LL, HL, LH, and HH.
2. Apply SVD to each subband image: A^{*k} = U_A^k Σ^{*k} V_A^{kT}
3. Extract the singular values from each subband: λ^k_{Wi} = (λ^{*k}_i - λ^k_i) / α_k, i = 1, ..., n
4. Construct the four visual watermarks using the singular vectors: W^k = U_W Σ^k_W V_W^T
427
COVER IMAGE AND VISUAL WATERMARK
428
ATTACKS
Blur 5x5 (xnview) Noise 0.3 (xnview) Pixelate 2 (mosaic) (Photoshop) JPEG 30:1 (xnview)
JPEG2000 50:1 (xnview) Sharpen 80 (xnview) Rescale 256 (xnview) Rotate 20° (Photoshop)
Crop on both sides (Photoshop) Contrast -25 (Photoshop) Histogram Equalization (Photoshop) Gamma correction 0.60 (ImageReady)
429
EXTRACTED WATERMARKS
430
EXTRACTED WATERMARKS
431
BEST EXTRACTIONS
Gaussian Blur 5x5 Gaussian Noise 0.3 Pixelate 2 (mosaic) JPEG 30:1
Crop on both sides Contrast -20 Histogram Equalization Gamma Correction 0.60
432
PURE SVD DOMAIN WATERMARKING
Gaussian Blur 5x5 Gaussian Noise 0.3 Pixelate 2 (mosaic) JPEG 30:1
Crop on both sides Contrast -20 Histogram Equalization Gamma Correction 0.60
433
CONCLUSIONS
434
DFT DOMAIN WATERMARKING
435
EMBEDDING MULTIPLE CIRCULAR WATERMARKS
IN THE DFT DOMAIN
436
MAJOR DISADVANTAGE OF CIRCULAR WATERMARKS
437
TEST IMAGE
438
ATTACKS
Cropping
Scaling
439
DETECTION
Decision rule
H0: the image is watermarked with W if c > T
H1: the image is not watermarked with W if c < T
Threshold T = (µ0 + µ1)/2
µ0: the expected value of the Gaussian probability density function (pdf) associated with hypothesis H0
µ1: the expected value of the Gaussian probability density function (pdf) associated with hypothesis H1
Detection anomalies
False positives: detecting the watermark in an unmarked image
False negatives: not detecting the watermark in a marked image
440
EXPERIMENTAL RESULTS:
THRESHOLDS AND FALSE NEGATIVES
Radius = 96 Radius = 32
T % T %
JPEG 0.086 48 0.228 12
Gaussian noise 0.110 37 0.206 18
blurring 0.120 51 0.228 13
resizing 0.093 55 0.227 13
histogram equalization 0.272 1 0.267 14
contrast adjustment 0.273 0 0.232 11
gamma correction 0.271 0 0.231 11
scaling 0.251 1 0.233 11
rotation 0.142 35 0.174 42
cropping 0.154 21 0.150 34
441
EXPERIMENTAL RESULTS:
THRESHOLDS AND FALSE POSITIVES
Radius = 96 Radius = 32
T % T %
JPEG 0.086 40 0.228 7
Gaussian noise 0.110 24 0.206 13
blurring 0.120 41 0.228 8
resizing 0.093 45 0.227 8
histogram equalization 0.272 0 0.267 4
contrast adjustment 0.273 0 0.232 6
gamma correction 0.271 0 0.231 6
scaling 0.251 0 0.233 7
rotation 0.142 23 0.174 26
cropping 0.154 8 0.150 31
442
CONCLUSIONS
443
AN EFFECTIVE BLIND WATERMARKING SYSTEM
FOR COLOR IMAGES
445
EXTRACTION
Compute the DFT of the NxN watermarked (and possibly attacked) image.
Move the origin to the center.
Obtain the magnitudes of DFT coefficients.
Divide the NxN matrix of magnitudes into four (N/2)x(N/2) matrices Mul, Mur, Mll,
Mlr.
Use the three frequency bands and the embedding locations defined in the
embedding process: low, middle, and high.
In each band, if a > b then bit = 0 else bit = 1.
446
EMBEDDING THE WATERMARK
INTO LUMINANCE LAYER
A 12x12 watermark (WM) is embedded by modifying every 4th DFT magnitude: in the low frequency area with strength p = 120%, and in the mid frequency area with p = 60-90%; the embedding locations lie between (20,20) and (180,180) in the Mul and Mur quadrants.
WATERMARKING A FULL COLOR IMAGE
USING MATLAB
Cover image DFT coefficient magnitudes Watermarked image DFT coefficient magnitudes
448
EXTRACTIONS FROM UNATTACKED
LUMINANCE LAYER
SR = 0.986
SR = 1.000
449
EXTRACTIONS FROM UNWATERMARKED
LUMINANCE LAYER
SR = 0.521
SR = 0.458
450
EXTRACTIONS FROM CROPPED
LUMINANCE LAYER
SR = 0.910
SR = 0.944
Cropping
SR = 0.826
451
EXTRACTIONS FROM HISTOGRAM EQUALIZED
LUMINANCE LAYER
SR = 0.896
SR = 0.979
Histogram equalization
SR = 0.972
452
EXTRACTIONS FROM LOW PASS FILTERED
LUMINANCE LAYER
SR = 0.465
SR = 0.625
453
EXTRACTIONS FROM GAMMA CORRECTED
LUMINANCE LAYER
SR = 0.958
SR = 0.972
454
EXTRACTIONS FROM GAUSSIAN NOISY
LUMINANCE LAYER
SR = 0.535
SR = 0.806
455
EXTRACTIONS FROM INTENSITY ADJUSTED
LUMINANCE LAYER
SR = 0.958
SR = 0.993
456
EXTRACTIONS FROM JPEG COMPRESSED
LUMINANCE LAYER
SR = 0.458
SR = 0.486
457
EXTRACTIONS FROM RESIZED
LUMINANCE LAYER
SR = 0.549
SR = 0.667
458
EXTRACTIONS FROM ROTATED
LUMINANCE LAYER
SR = 0.618
SR = 0.924
Rotation (5°)
SR = 0.931
459
EXTRACTIONS FROM SCALED
LUMINANCE LAYER
SR = 0.819
SR = 0.958
SR = 0.972
SR = 0.986
SR = 1.000
SR = 1.000
461
EXTRACTIONS FROM COLLUDED
LUMINANCE LAYER
SR = 0.736
SR = 0.701
SR = 0.938 SR = 1.000
SR = 1.000 SR = 1.000
SR = 1.000
464
CONCLUSIONS
465