
Module-3

IMAGE RESTORATION
Syllabus
Restoration: Noise models, Restoration in the Presence of Noise Only using Spatial
Filtering and Frequency Domain Filtering, Linear, Position-Invariant Degradations,
Estimating the Degradation Function, Inverse Filtering, Minimum Mean Square Error
(Wiener) Filtering, Constrained Least Squares Filtering.
[Text: Chapter 5: Sections 5.2 to 5.9] L1, L2, L3
Introduction
The principal goal of restoration techniques is to improve an image in some predefined sense.
Restoration is an objective process: it attempts to recover an image that has been degraded
by using a priori knowledge of the degradation phenomenon. Thus, restoration techniques are
oriented toward modeling the degradation and applying the inverse process in order to
recover the original image.
Image restoration vs. image enhancement:
• Enhancement:
  • largely a subjective process.
  • A priori knowledge about the degradation is not a must (sometimes no
    degradation is involved).
  • Procedures are heuristic and take advantage of the psychophysical
    aspects of the human visual system.
• Restoration:
  • more of an objective process.
  • Images are assumed to be degraded.
  • Tries to recover the images by using knowledge about the
    degradation.
• Image restoration: recover an image that has been degraded by using a priori knowledge
  of the degradation phenomenon.
• Model the degradation and apply the inverse process in order to recover the original
  image.
• Image enhancement: "improve" an image subjectively.
• Image restoration: remove distortion from an image to get back to the "original"; an objective
  process.
Why Image Restoration?
Image restoration aims to recover the original image by removing noise and blur. Image blur is
difficult to avoid in many situations: in photography, motion blur is caused by camera shake; in
radar imaging, blur arises from the imaging system response, and so on. Image noise is an unwanted
signal introduced by the sensor (e.g., thermal or electrical noise) or by environmental conditions
such as rain or snow.

A Model of the Image Degradation / Restoration Process:
As the figure shows, the degradation process is modeled as a degradation function that, together
with an additive noise term, operates on an input image f(x, y) to produce a degraded image g(x, y).
Given g(x, y), some knowledge about the degradation function H, and the additive noise term, the
objective of restoration is to obtain an estimate of the original image that is as close as
possible to the original input image.

If H is a linear, position-invariant process, then the degraded image is given in the spatial
domain by

g(x, y) = h(x, y) * f(x, y) + η(x, y)

where h(x, y) is the spatial representation of the degradation function and the symbol * indicates
convolution. Convolution in the spatial domain is analogous to multiplication in the frequency
domain, so the model of the degraded image is given in the frequency domain by

G(u, v) = H(u, v) F(u, v) + N(u, v)

where the terms in capital letters are the Fourier transforms of the corresponding terms of the
previous spatial domain expression.
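As a quick illustration of this model, the following minimal NumPy sketch applies an assumed
frequency-domain degradation function H (laid out like the output of np.fft.fft2, i.e., DC term at
index [0, 0]) to a float image f and adds Gaussian noise; the names f, H, and noise_sigma are
illustrative, not from the notes.

import numpy as np

def degrade(f, H, noise_sigma=0.01, seed=0):
    """Simulate g(x, y) = h(x, y) * f(x, y) + eta(x, y) via G = H F + N."""
    rng = np.random.default_rng(seed)
    F = np.fft.fft2(f)                                  # F(u, v)
    g = np.real(np.fft.ifft2(H * F))                    # spatial result of H(u, v) F(u, v)
    return g + rng.normal(0.0, noise_sigma, f.shape)    # additive noise term eta(x, y)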
Noise Models
Noise Sources :
The principal sources of noise in digital images arise during image acquisition and/or
transmission.
Image acquisition:
e.g., light levels, sensor temperature, etc.
Sensor performance is affected by environmental conditions during image acquisition, and by
the quality of sensing elements.
Transmission:
Interference in the channel used for transmission.
e.g., lightning or other atmospheric disturbance in wireless network.
Spatial and frequency properties of Noise:

When the Fourier spectrum of noise is constant, the noise is usually called white noise.
With the exception of spatially periodic noise, we assume that the noise is independent of
spatial coordinates and that it is uncorrelated with respect to the image itself.

Important Noise Probability Density Functions:

The statistical behavior of the intensity values in the noise component is of concern in the
analysis of the image restoration model. These values may be considered random variables,
characterized by a probability density function (PDF). The following are the most common PDFs
found in image processing applications.
Gaussian Noise:
Gaussian noise is characterized by two parameters, μ (mean) and σ² (variance). Its PDF is

p(z) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-(z-\mu)^2 / 2\sigma^2}

Approximately 70% of the values of z fall in the range [(μ − σ), (μ + σ)], and about 95% of the
values of z fall in the range [(μ − 2σ), (μ + 2σ)].
Rayleigh noise:
The PDF of Rayleigh noise is given by

p(z) = \begin{cases} \frac{2}{b}(z-a)\, e^{-(z-a)^2/b} & \text{for } z \ge a \\ 0 & \text{for } z < a \end{cases}

The mean and variance of this density are given by

\mu = a + \sqrt{\pi b / 4} \quad \text{and} \quad \sigma^2 = \frac{b(4-\pi)}{4}

The parameters a and b can be obtained from the mean and variance.

Erlang (Gamma) noise:
The PDF of Erlang (Gamma) noise is given by

p(z) = \begin{cases} \frac{a^b z^{b-1}}{(b-1)!} e^{-az} & \text{for } z \ge 0 \\ 0 & \text{for } z < 0 \end{cases}

where a > 0, b is a positive integer, and "!" indicates factorial.
The mean and variance of this density are given by

\mu = b/a \quad \text{and} \quad \sigma^2 = \frac{b}{a^2}

The parameters a and b can be obtained from the mean and variance.

Exponential noise:
The PDF of exponential noise is given by

p(z) = \begin{cases} a e^{-az} & \text{for } z \ge 0 \\ 0 & \text{for } z < 0 \end{cases}

The mean and variance of this density are given by

\mu = 1/a \quad \text{and} \quad \sigma^2 = \frac{1}{a^2}

This is a special case of the Erlang PDF with b = 1.

Uniform noise:
The PDF of uniform noise is given by

p(z) = \begin{cases} \frac{1}{b-a} & \text{if } a \le z \le b \\ 0 & \text{otherwise} \end{cases}

The mean and variance of this density are given by

\mu = \frac{a+b}{2} \quad \text{and} \quad \sigma^2 = \frac{(b-a)^2}{12}
Impulse (Salt-and-Pepper) Noise:
The PDF of (bipolar) impulse noise is given by

p(z) = \begin{cases} P_a & \text{for } z = a \\ P_b & \text{for } z = b \\ 0 & \text{otherwise} \end{cases}

If b > a, gray level b will appear as a light dot, while level a will appear as a dark dot.
If either P_a or P_b is zero, the impulse noise is called unipolar.

The above PDFs provide useful tools for modeling a broad range of noise corruption situations,
as summarized below.
Gaussian noise arises in an image due to factors such as electronic circuit noise and sensor
noise caused by poor illumination and/or high temperature.

The Rayleigh density is helpful in characterizing noise phenomena in range imaging.
The exponential and gamma densities find application in laser imaging.
Impulse noise is found in situations where quick transients, such as faulty switching, take place
during imaging.
The uniform density is useful as the basis for numerous random number generators that are used in
simulations.
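The following minimal NumPy sketch draws samples from these noise models so that their histograms
can be compared against the PDFs above; the parameter names (mu, sigma, a, b, Pa, Pb) follow the
notes, while the helper functions themselves are only illustrative.

import numpy as np

def noise_samples(shape, kind, rng=None, **p):
    """Draw noise samples of the requested type (Gaussian, Rayleigh, Erlang, exponential, uniform)."""
    rng = rng or np.random.default_rng()
    if kind == "gaussian":
        return rng.normal(p["mu"], p["sigma"], shape)
    if kind == "rayleigh":                              # shifted Rayleigh: offset a, scale sqrt(b/2)
        return p["a"] + rng.rayleigh(np.sqrt(p["b"] / 2.0), shape)
    if kind == "erlang":                                # gamma with integer shape b and rate a
        return rng.gamma(p["b"], 1.0 / p["a"], shape)
    if kind == "exponential":
        return rng.exponential(1.0 / p["a"], shape)
    if kind == "uniform":
        return rng.uniform(p["a"], p["b"], shape)
    raise ValueError("unknown noise type: " + kind)

def add_salt_pepper(img, Pa=0.05, Pb=0.05, rng=None):
    """Corrupt a float image in [0, 1] with bipolar impulse noise."""
    rng = rng or np.random.default_rng()
    out = img.copy()
    r = rng.random(img.shape)
    out[r < Pa] = 0.0                                   # pepper (dark) impulses with probability Pa
    out[r > 1.0 - Pb] = 1.0                             # salt (light) impulses with probability Pb
    return out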
Noisy images and their histograms:
Figure 3.3 shows a test pattern used to illustrate the characteristic noise PDFs, and Figure 3.4
shows the test pattern after the addition of the six types of noise described above.

Periodic noise:
▪ Periodic noise in an image arises typically from electrical or electromechanical
interference during image acquisition.
▪ It can be observed by visual inspection both in the spatial domain and frequency
domain.
▪ It is a type of spatially dependent noise
▪ Periodic noise can be reduced significantly via frequency domain filtering

Estimation of Noise Parameters:
Periodic noise:
▪ Parameters can be estimated by inspection of the Fourier spectrum of the image.
▪ Periodic noise tends to produce frequency spikes that can be detected by visual
analysis.
Noise PDFs:
▪ Parameters may be known from sensor specifications.
▪ If the imaging sensor is available, capture a set of images of flat, uniformly illuminated
  surfaces and study the noise in them.
▪ If only noisy images are available, the parameters of the PDF involved can be estimated from
  small patches of reasonably constant regions of the noisy images.

Consider a subimage denoted by S, and let p_S(z_i), i = 0, 1, ..., L-1, denote the probability
estimates of the intensities of the pixels in S. The mean and variance of the pixels in S are

\bar{z} = \sum_{i=0}^{L-1} z_i\, p_S(z_i)
\quad\text{and}\quad
\sigma^2 = \sum_{i=0}^{L-1} (z_i - \bar{z})^2\, p_S(z_i)
The shape of the histogram identifies the closest PDF match.
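A minimal sketch of this estimation step, assuming a noisy image array `noisy` and a hand-picked
flat region; the slice indices in the usage example are purely illustrative.

import numpy as np

def estimate_noise_stats(noisy, region):
    """Estimate noise mean, variance and histogram from a roughly constant patch S = noisy[region]."""
    patch = noisy[region]
    hist, _ = np.histogram(patch, bins=256)   # the histogram shape suggests which PDF fits best
    return patch.mean(), patch.var(), hist

# example usage: mean, var, hist = estimate_noise_stats(noisy, (slice(40, 80), slice(100, 140)))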

Restoration in the Presence of Noise Only - Spatial Filtering


The degraded image in the spatial domain and in the frequency domain is given, respectively, by
g(x, y) = h(x, y)*f(x, y) + η(x, y)
and
G(u, v) = H(u,v)F(u, v) + N(u, v)
When the only degradation present in the image is noise, then the above equations become
g(x, y) = f(x, y) + η(x, y)
and
G(u, v) = F(u, v) + N(u, v)

Mean Filters
In this topic the noise reduction capabilities of various spatial filters are discussed.
Arithmetic mean filter:
The simplest mean filter is the arithmetic mean filter. Let S_xy represent the set of coordinates
in a rectangular subimage window of size m × n, centered at (x, y).

This filter computes the average value of the corrupted image g(x, y) in the area defined by S_xy.
The value of the restored image at point (x, y) is the arithmetic mean computed over the pixels in
the region defined by S_xy:

\hat{f}(x, y) = \frac{1}{mn} \sum_{(s,t) \in S_{xy}} g(s, t)

Here, the noise is reduced as a result of blurring.


Geometric mean filters:
An image restored using this filter is given by the expression

\hat{f}(x, y) = \left[ \prod_{(s,t) \in S_{xy}} g(s, t) \right]^{1/mn}

Generally, a geometric mean filter achieves smoothing comparable to the arithmetic mean
filter, but it tends to lose less image detail in the process.
Harmonic mean filter:
The harmonic mean filter operation is given by the expression

\hat{f}(x, y) = \frac{mn}{\sum_{(s,t) \in S_{xy}} \frac{1}{g(s, t)}}

It works well for salt noise but fails for pepper noise. It also does well with other types of
noise, such as Gaussian noise.
Contraharmonic mean filter:
This filter yields a restored image based on the expression

\hat{f}(x, y) = \frac{\sum_{(s,t) \in S_{xy}} g(s, t)^{Q+1}}{\sum_{(s,t) \in S_{xy}} g(s, t)^{Q}}

where Q is called the order of the filter.

Positive Q works for pepper noise; negative Q works for salt noise.
Q = 0 gives the arithmetic mean filter, and Q = -1 gives the harmonic mean filter.
The contraharmonic filter is well suited for reducing the effects of salt-and-pepper noise, with
Q > 0 for pepper noise and Q < 0 for salt noise, but it cannot remove both simultaneously.
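A minimal NumPy/SciPy sketch of the contraharmonic mean filter of order Q, assuming a non-negative
float image g; since uniform_filter returns local means, the common mn factor cancels in the
ratio. Q = 0 reproduces the arithmetic mean filter and Q = -1 the harmonic mean filter, as noted
above.

import numpy as np
from scipy.ndimage import uniform_filter

def contraharmonic_mean(g, size=3, Q=1.5, eps=1e-8):
    g = g.astype(float) + eps                       # avoid 0 raised to a negative power
    num = uniform_filter(g ** (Q + 1), size=size)   # local mean of g^(Q+1)
    den = uniform_filter(g ** Q, size=size)         # local mean of g^Q
    return num / den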

Order-Statistic Filters:
Order-statistic filters are spatial filters whose response is based on ordering (ranking) the
values of the pixels contained in the image area encompassed by the filter. The ranking result
determines the response of the filter. Some additional order-statistic filters are discussed here.
Median filter:
In this method the value of a pixel is replaced by the median of the intensity levels in the
neighborhood of that pixel.
▪ The median represents the 50th percentile of a ranked set of numbers.

\hat{f}(x, y) = \underset{(s,t) \in S_{xy}}{\text{median}} \{ g(s, t) \}

It has excellent noise reduction capabilities for certain types of random noise, with considerably
less blurring than linear smoothing filters of similar size. It is effective in the presence of
both bipolar and unipolar impulse noise.
Max and min filters:
The max filter uses the 100th percentile of a ranked set of numbers:

\hat{f}(x, y) = \max_{(s,t) \in S_{xy}} \{ g(s, t) \}

It is good for removing pepper noise present in the image.

The min filter uses the 0th percentile of a ranked set of numbers:

\hat{f}(x, y) = \min_{(s,t) \in S_{xy}} \{ g(s, t) \}

It is good for removing salt noise present in the image.


Midpoint filter:
The midpoint filter computes the midpoint between the maximum and minimum values in the area
encompassed by the filter:

\hat{f}(x, y) = \frac{1}{2} \left[ \max_{(s,t) \in S_{xy}} \{ g(s, t) \} + \min_{(s,t) \in S_{xy}} \{ g(s, t) \} \right]

This filter combines order statistics and averaging filter. It works best for randomly distributed
noise, like Gaussian or uniform noise.
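A minimal SciPy sketch of these order-statistic filters (median, max, min, and midpoint) over an
m × n window; g is an assumed grayscale float image.

from scipy.ndimage import median_filter, maximum_filter, minimum_filter

def order_statistic_filters(g, size=3):
    med = median_filter(g, size=size)     # 50th percentile: bipolar/unipolar impulse noise
    mx = maximum_filter(g, size=size)     # 100th percentile: removes pepper noise
    mn = minimum_filter(g, size=size)     # 0th percentile: removes salt noise
    mid = 0.5 * (mx + mn)                 # midpoint: randomly distributed (Gaussian, uniform) noise
    return med, mx, mn, mid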
Alpha-Trimmed Mean Filter:

We delete the d/2 lowest and the d/2 highest intensity values of g(s, t) in the neighborhood S_xy,
and let g_r(s, t) represent the remaining mn - d pixels.

A filter formed by averaging these remaining pixels is called an alpha-trimmed mean filter:

\hat{f}(x, y) = \frac{1}{mn - d} \sum_{(s,t) \in S_{xy}} g_r(s, t)

where d ranges from 0 to mn - 1. When d = 0, this reduces to the arithmetic mean filter; when
d = mn - 1, it becomes the median filter.
It is useful in situations involving multiple types of noise like a combination of salt-and-pepper
and Gaussian.
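A minimal sketch of the alpha-trimmed mean filter: each m × n window is sorted, the d/2 lowest and
d/2 highest values are dropped, and the rest are averaged. g is an assumed float image and d an
even trimming parameter with 0 <= d < m*n.

import numpy as np
from scipy.ndimage import generic_filter

def alpha_trimmed_mean(g, m=3, n=3, d=4):
    def trimmed_average(window):                 # window arrives as a flat array of m*n values
        w = np.sort(window)
        return w[d // 2 : w.size - d // 2].mean()
    return generic_filter(g, trimmed_average, size=(m, n))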
Adaptive filters
The behavior of an adaptive filter changes based on the statistical characteristics of the image
inside the filter region defined by the m × n rectangular window S_xy. Their performance is
superior to that of the mean filters and order-statistic filters discussed so far, at the cost of
increased filter complexity.
Adaptive, Local Noise Reduction Filter:
The simplest statistical measures of a random variable are its mean and variance. The mean gives a
measure of average intensity in the region over which it is computed, and the variance gives a
measure of contrast in that region. The filter operates on a local region S_xy, and its response
at the center point (x, y) of S_xy is based on four quantities:
(a) g(x, y), the value of the noisy image at (x, y);
(b) σ_η², the variance of the noise corrupting f(x, y) to form g(x, y);
(c) m_L, the local mean of the pixels in S_xy;
(d) σ_L², the local variance of the pixels in S_xy.

The desired behavior of the filter is:
(a) if σ_η² is zero, the filter should return simply the value of g(x, y);
(b) if the local variance σ_L² is high relative to σ_η², the filter should return a value close
    to g(x, y);
(c) if the two variances are equal, the filter should return the arithmetic mean value of the
    pixels in S_xy.

An adaptive expression for obtaining \hat{f}(x, y) based on these assumptions is

\hat{f}(x, y) = g(x, y) - \frac{\sigma_\eta^2}{\sigma_L^2} \left[ g(x, y) - m_L \right]
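A minimal sketch of this adaptive, local noise-reduction filter, assuming the noise variance
noise_var is known (or estimated as described earlier) and g is the noisy float image; the ratio
is clipped at 1 so the filter never over-corrects where the local variance falls below the noise
variance.

import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_local_noise_filter(g, noise_var, size=7):
    m_L = uniform_filter(g, size=size)                     # local mean over S_xy
    var_L = uniform_filter(g * g, size=size) - m_L ** 2    # local variance over S_xy
    ratio = np.minimum(noise_var / np.maximum(var_L, 1e-12), 1.0)
    return g - ratio * (g - m_L)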
Adaptive Median Filter:
The standard median filter is effective for removing salt-and-pepper noise provided the density of
the impulse noise is not too large. The adaptive median filter can handle impulse noise of higher
density, and it preserves image detail while smoothing non-impulse noise.
Notation:
z_min = minimum intensity value in S_xy
z_max = maximum intensity value in S_xy
z_med = median intensity value in S_xy
z_xy  = intensity value at coordinates (x, y)
S_max = maximum allowed size of S_xy
The adaptive median-filtering algorithm works in two stages:
Stage A:
  A1 = z_med - z_min;  A2 = z_med - z_max
  If A1 > 0 AND A2 < 0, go to stage B;
  else increase the window size.
  If the window size ≤ S_max, repeat stage A; else output z_med.
Stage B:
  B1 = z_xy - z_min;  B2 = z_xy - z_max
  If B1 > 0 AND B2 < 0, output z_xy; else output z_med.

This filter has three main purposes:
1. To remove salt-and-pepper (impulse) noise.
2. To provide smoothing of other noise that may not be impulsive.
3. To reduce distortion such as excessive thinning or thickening of object boundaries.
The purpose of stage A is to determine whether the median filter output, z_med, is an impulse
(black or white) or not. If the condition z_min < z_med < z_max holds, then z_med cannot be an
impulse. In this case we go to stage B and test whether the point in the center of the window,
z_xy, is itself an impulse. If the condition B1 > 0 AND B2 < 0 is true, then z_min < z_xy < z_max,
so z_xy cannot be an impulse and the filter outputs the unchanged value z_xy, preserving detail.

If the condition B1 > 0 AND B2 < 0 is false, then either z_xy = z_min or z_xy = z_max. In either
case the value of the pixel is an extreme value, and the algorithm outputs the median value z_med,
which we know from stage A is not a noise impulse.
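A minimal (unoptimized, pixel-by-pixel) sketch of the two-stage adaptive median algorithm
described above; g is an assumed 2-D grayscale array and s_max the maximum odd window size.

import numpy as np

def adaptive_median(g, s_max=7):
    pad = s_max // 2
    padded = np.pad(g, pad, mode="reflect")
    out = g.astype(float)
    for y in range(g.shape[0]):
        for x in range(g.shape[1]):
            size = 3
            while True:
                r = size // 2
                win = padded[y + pad - r : y + pad + r + 1, x + pad - r : x + pad + r + 1]
                z_min, z_max, z_med = win.min(), win.max(), np.median(win)
                if z_min < z_med < z_max:                   # stage A: z_med is not an impulse
                    z_xy = g[y, x]
                    out[y, x] = z_xy if z_min < z_xy < z_max else z_med   # stage B
                    break
                size += 2                                   # otherwise grow the window
                if size > s_max:
                    out[y, x] = z_med                       # window limit reached: output z_med
                    break
    return out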

Periodic Noise Reduction by Frequency Domain Filtering
The basic idea
Periodic noise can be analyzed and filtered effectively using frequency domain techniques. The
basic idea is that periodic noise appears as concentrated bursts of energy in the Fourier
transform, at locations corresponding to the frequencies of the periodic interference.
Approach
• A selective filter is used to isolate the noise.
• Band reject, band pass, and notch filters are used as tools for periodic noise reduction.
• Band reject filters remove or attenuate a band of frequencies about the origin of the
  Fourier transform.
Band reject filtering is used for noise removal in applications where the general locations of the
noise components in the frequency domain are approximately known. An example is an image corrupted
by additive periodic noise that can be approximated as a 2-D sinusoidal function. The Fourier
transform of a sine consists of two impulses that are mirror images of each other about the origin
of the transform, so the sinusoidal noise components appear as symmetric pairs of bright dots in
the Fourier transform of the corrupted image.
Band reject Filters
The transfer functions of the ideal, Butterworth, and Gaussian band reject filters are given
below; their perspective plots are shown in Figure 3.9.

Ideal band reject filter:


Band reject filters remove or attenuate a band of frequencies about the origin of the Fourier
Transform.
The transfer function of the ideal band reject filter is given by

H(u, v) = \begin{cases} 1 & \text{if } D(u, v) < D_0 - \frac{W}{2} \\ 0 & \text{if } D_0 - \frac{W}{2} \le D(u, v) \le D_0 + \frac{W}{2} \\ 1 & \text{if } D(u, v) > D_0 + \frac{W}{2} \end{cases}

where D(u, v) is the distance from the origin of the centered frequency rectangle, W is the width
of the rejected band of frequencies, and D_0 is its radial center.
Similarly, the transfer function of the Butterworth band reject filter of order n is given by the
expression

H(u, v) = \frac{1}{1 + \left[ \frac{D(u, v)\, W}{D^2(u, v) - D_0^2} \right]^{2n}}

The transfer function of the Gaussian band reject filter is given by the expression

H(u, v) = 1 - e^{-\frac{1}{2} \left[ \frac{D^2(u, v) - D_0^2}{D(u, v)\, W} \right]^2}
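A minimal NumPy sketch that builds the three band-reject transfer functions on a centered P × Q
frequency rectangle; D0, W, and the Butterworth order n are assumed parameters, and the result is
meant to multiply a centered (fftshifted) spectrum.

import numpy as np

def band_reject(P, Q, D0, W, kind="gaussian", n=2):
    u = np.arange(P) - P / 2
    v = np.arange(Q) - Q / 2
    V, U = np.meshgrid(v, u)
    D = np.sqrt(U ** 2 + V ** 2) + 1e-8             # distance from the center of the rectangle
    if kind == "ideal":
        H = np.ones((P, Q))
        H[np.abs(D - D0) <= W / 2] = 0.0            # reject the band of width W around D0
    elif kind == "butterworth":
        H = 1.0 / (1.0 + (D * W / (D ** 2 - D0 ** 2 + 1e-8)) ** (2 * n))
    else:                                           # "gaussian"
        H = 1.0 - np.exp(-0.5 * ((D ** 2 - D0 ** 2) / (D * W)) ** 2)
    return H

The corresponding band-pass (or notch-pass) function is simply 1 - H, as described in the next
subsections.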

Band-pass Filters
A band-pass filter performs the opposite operation of a band-reject filter. Its transfer function
is obtained from the corresponding band-reject transfer function:

H_bp(u, v) = 1 - H_br(u, v)

Performing straight band-pass filtering on an image generally removes too much image detail.
However, band-pass filtering is quite useful in isolating the effects on an image caused by
selected frequency bands.

Notch Filters
A notch filter rejects (or passes) frequencies in predefined neighborhoods about a center
frequency. Notch filters appear in symmetric pairs about the origin because the Fourier transform
of a real-valued image is symmetric.
The transfer function of a notch pass filter is

H_NP(u, v) = 1 - H_NR(u, v)

where H_NP(u, v) is the transfer function of the notch pass filter corresponding to the notch
reject filter with transfer function H_NR(u, v). This filter reduces the noise in the image
without introducing appreciable blurring.

Figure 3.12 shows the 3-D plots of the ideal, Butterworth, and Gaussian notch (reject) filters,
which reduce the noise in the image without blurring it.

Optimum Notch Filter
When several interference components are present, the methods discussed in the preceding sections
are not always acceptable because they may remove too much image information.

Interference components generally are not single-frequency bursts; instead, they tend to have
broad skirts that carry information about the interference pattern, and these skirts are not
always easily detectable. An image and its interference pattern can be seen in Figure 3.14.
The filtering method described here reduces the effect of such degradations.


The method discussed here is optimum in the sense that it minimizes local variances of the
restored estimate \hat{f}(x, y).
Procedure for restoration in the presence of multiple periodic interference:
• Isolate the principal contributions of the interference pattern.
• Subtract a variable, weighted portion of the pattern from the corrupted image.
The first step is to extract the principal frequency components of the interference pattern by
placing a notch pass filter H_NP(u, v) at the location of each spike:

N(u, v) = H_NP(u, v) G(u, v)

where G(u, v) is the Fourier transform of the corrupted image. After a filter has been selected,
the corresponding interference pattern in the spatial domain is obtained from the expression

\eta(x, y) = \mathcal{F}^{-1}\{ H_NP(u, v) G(u, v) \}
In this section, we try to improve the restored image by introducing a modulation (weighting)
function w(x, y):

\hat{f}(x, y) = g(x, y) - w(x, y)\,\eta(x, y)    ...(1)

Here the weighting function w(x, y) is assumed to be constant within a neighborhood of size
(2a+1) by (2b+1) about a point (x, y). Eqn. (1) is an estimate of f(x, y), and we optimize its
performance by minimizing the local variance of the restored image at the position (x, y):

\sigma^2(x, y) = \frac{1}{(2a+1)(2b+1)} \sum_{s=-a}^{a} \sum_{t=-b}^{b} \left[ \hat{f}(x+s, y+t) - \bar{\hat{f}}(x, y) \right]^2    ...(2)

where the average value of Eqn. (1) over the neighborhood is

\bar{\hat{f}}(x, y) = \frac{1}{(2a+1)(2b+1)} \sum_{s=-a}^{a} \sum_{t=-b}^{b} \hat{f}(x+s, y+t)

Points on or near the edge of the image can be treated by considering partial neighborhoods or by
padding the border with 0s. Substituting Eqn. (1) into Eqn. (2) yields

\sigma^2(x, y) = \frac{1}{(2a+1)(2b+1)} \sum_{s=-a}^{a} \sum_{t=-b}^{b} \Big\{ \big[ g(x+s, y+t) - w(x+s, y+t)\,\eta(x+s, y+t) \big] - \big[ \bar{g}(x, y) - \overline{w(x, y)\eta(x, y)} \big] \Big\}^2    ...(3)

Assuming that w(x, y) remains essentially constant over the neighborhood gives the approximations

w(x+s, y+t) = w(x, y) \quad \text{for } -a \le s \le a \text{ and } -b \le t \le b    ...(4)

and

\overline{w(x, y)\eta(x, y)} = w(x, y)\,\bar{\eta}(x, y)    ...(5)

in the neighborhood. With these approximations, Eqn. (3) becomes

\sigma^2(x, y) = \frac{1}{(2a+1)(2b+1)} \sum_{s=-a}^{a} \sum_{t=-b}^{b} \Big\{ \big[ g(x+s, y+t) - w(x, y)\,\eta(x+s, y+t) \big] - \big[ \bar{g}(x, y) - w(x, y)\,\bar{\eta}(x, y) \big] \Big\}^2    ...(6)

To minimize \sigma^2(x, y), we solve

\frac{\partial \sigma^2(x, y)}{\partial w(x, y)} = 0    ...(7)

for w(x, y), which gives

w(x, y) = \frac{\overline{g(x, y)\eta(x, y)} - \bar{g}(x, y)\,\bar{\eta}(x, y)}{\overline{\eta^2}(x, y) - \bar{\eta}^2(x, y)}    ...(8)

To obtain the restored image, we compute w(x, y) from Eqn. (8) and then use Eqn. (1).

Linear, Position-Invariant Degradations

The input-output relationship in the above figure before the restoration stage is expressed as

g(x, y) = H[f(x, y)] + \eta(x, y)

The operator H is linear if

H[a f_1(x, y) + b f_2(x, y)] = a H[f_1(x, y)] + b H[f_2(x, y)]

where f_1 and f_2 are any two input images. If a = b = 1, the above equation becomes

H[f_1(x, y) + f_2(x, y)] = H[f_1(x, y)] + H[f_2(x, y)]

which is the property of additivity.

An operator having the input-output relationship g(x, y) = H[f(x, y)] is said to be position
invariant if

H[f(x - \alpha, y - \beta)] = g(x - \alpha, y - \beta)

for any f(x, y) and any \alpha and \beta. Using the sifting property of the impulse, f(x, y) can
be written as

f(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(\alpha, \beta)\,\delta(x - \alpha, y - \beta)\, d\alpha\, d\beta

Assume for a moment that \eta(x, y) = 0. If H is a linear operator,

g(x, y) = H[f(x, y)]
        = H\left[ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(\alpha, \beta)\,\delta(x - \alpha, y - \beta)\, d\alpha\, d\beta \right]
        = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} H[f(\alpha, \beta)\,\delta(x - \alpha, y - \beta)]\, d\alpha\, d\beta
        = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(\alpha, \beta)\, H[\delta(x - \alpha, y - \beta)]\, d\alpha\, d\beta

because f(\alpha, \beta) is independent of x and y. If, in addition, H is position invariant, its
impulse response satisfies

H[\delta(x - \alpha, y - \beta)] = h(x - \alpha, y - \beta)

so that

g(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(\alpha, \beta)\, h(x - \alpha, y - \beta)\, d\alpha\, d\beta

which is the convolution of f and h.

In the presence of additive noise, if H is a linear, position-invariant operator,

g(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(\alpha, \beta)\, h(x - \alpha, y - \beta)\, d\alpha\, d\beta + \eta(x, y)
        = h(x, y) * f(x, y) + \eta(x, y)

and, in the frequency domain,

G(u, v) = H(u, v) F(u, v) + N(u, v)

Many types of degradation can be approximated by linear, position-invariant processes, so the
extensive tools of linear system theory become available. In this situation, restoration amounts
to image deconvolution.
Estimating the Degradation Function
Three principal ways to estimate the degradation function
1. Observation
2. Experimentation
3. Mathematical Modeling.

Estimation by Image Observation:
Let the observed sub-image be denoted by g_s(x, y), and let the processed (estimated) sub-image be
denoted by \hat{f}_s(x, y). Then, assuming that the effect of noise is negligible because a
strong-signal area has been selected, it follows that

H_s(u, v) = \frac{G_s(u, v)}{\hat{F}_s(u, v)}

From the characteristics of this function we can then construct the full degradation function
H(u, v) on a larger scale, but having the same basic shape.
Estimation by Experimentation:
If equipment similar to the equipment used to acquire the degraded image is available, it is
possible in principle to obtain an accurate estimate of the degradation.
Images similar to the degraded image can be acquired with various system settings until they
are degraded as closely as possible to the image we wish to restore.
An impulse is simulated by a bright dot of light, as bright as possible to reduce the effect of
noise. The estimated degradation function is then

H(u, v) = \frac{G(u, v)}{A}

where G(u, v) is the Fourier transform of the observed (degraded impulse) image and A is a
constant describing the strength of the impulse.
Mathematical Modeling:
Environmental conditions that cause degradation can sometimes be modeled mathematically. A
degradation model based on the physical characteristics of atmospheric turbulence is

H(u, v) = e^{-k(u^2 + v^2)^{5/6}}

where k is a constant that depends on the nature of the turbulence. With the exception of the 5/6
power on the exponent, this equation has the same form as the Gaussian lowpass filter.
A degradation model can also be derived from basic principles. Consider, for example, an image
blurred by uniform linear motion between the image and the sensor during image acquisition.
Suppose that an image f(x, y) undergoes planar motion, and that x_0(t) and y_0(t) are the
time-varying components of motion in the x- and y-directions, respectively. Assuming the optical
imaging process is otherwise perfect and T is the duration of the exposure, the blurred image is

g(x, y) = \int_{0}^{T} f[x - x_0(t), y - y_0(t)]\, dt

where T is the exposure time. Taking the Fourier transform of the blurred image,

G(u, v) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x, y)\, e^{-j2\pi(ux + vy)}\, dx\, dy
        = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \left[ \int_{0}^{T} f[x - x_0(t), y - y_0(t)]\, dt \right] e^{-j2\pi(ux + vy)}\, dx\, dy

Reversing the order of integration,

G(u, v) = \int_{0}^{T} \left[ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f[x - x_0(t), y - y_0(t)]\, e^{-j2\pi(ux + vy)}\, dx\, dy \right] dt
        = \int_{0}^{T} F(u, v)\, e^{-j2\pi[u x_0(t) + v y_0(t)]}\, dt
        = F(u, v) \int_{0}^{T} e^{-j2\pi[u x_0(t) + v y_0(t)]}\, dt

so the degradation function is

H(u, v) = \int_{0}^{T} e^{-j2\pi[u x_0(t) + v y_0(t)]}\, dt

Suppose that the image undergoes uniform linear motion in the x-direction only, at a rate given by
x_0(t) = at/T. Then

H(u, v) = \int_{0}^{T} e^{-j2\pi u x_0(t)}\, dt = \int_{0}^{T} e^{-j2\pi u a t / T}\, dt = \frac{T}{\pi u a} \sin(\pi u a)\, e^{-j\pi u a}

If the image undergoes uniform linear motion in both the x- and y-directions, at rates
x_0(t) = at/T and y_0(t) = bt/T, then

H(u, v) = \int_{0}^{T} e^{-j2\pi[u x_0(t) + v y_0(t)]}\, dt = \int_{0}^{T} e^{-j2\pi(ua + vb)t/T}\, dt = \frac{T}{\pi(ua + vb)} \sin[\pi(ua + vb)]\, e^{-j\pi(ua + vb)}

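The closed-form result above can be turned into a transfer-function array directly; the following
minimal sketch uses np.sinc (defined as sin(pi x)/(pi x)) and produces a centered H(u, v), so it
should be ifftshifted before multiplying an unshifted FFT. M, N, a, b, and T are assumed
parameters.

import numpy as np

def motion_blur_H(M, N, a=0.1, b=0.1, T=1.0):
    u = np.arange(M) - M / 2                        # centered frequency coordinates
    v = np.arange(N) - N / 2
    V, U = np.meshgrid(v, u)
    s = U * a + V * b                               # ua + vb
    return T * np.sinc(s) * np.exp(-1j * np.pi * s)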
Inverse Filtering
The simplest approach to restoration is direct inverse filtering, in which an estimate
\hat{F}(u, v) of the transform of the original image is obtained by dividing the transform of the
degraded image, G(u, v), by the degradation function:

\hat{F}(u, v) = \frac{G(u, v)}{H(u, v)}

Since G(u, v) = H(u, v) F(u, v) + N(u, v), it follows that

\hat{F}(u, v) = F(u, v) + \frac{N(u, v)}{H(u, v)}

This expression tells us two things:
1. We cannot recover the undegraded image exactly, because N(u, v) is not known.
2. If the degradation function has zero or very small values, the ratio N(u, v)/H(u, v) could
   easily dominate the estimate \hat{F}(u, v).
One approach to get around the zero or small-value problem is to limit the filter frequencies to
values near the origin, where H(u, v) is usually largest.
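One hedged way to realize this "frequencies near the origin" workaround is to multiply the inverse
filter by a smooth low-pass mask (a Butterworth-shaped cutoff is used in the sketch below). g is
the degraded image and H an assumed centered degradation function of the same shape; cutoff and
order are illustrative values.

import numpy as np

def limited_inverse_filter(g, H, cutoff=40.0, order=10):
    M, N = g.shape
    u = np.arange(M) - M / 2
    v = np.arange(N) - N / 2
    V, U = np.meshgrid(v, u)
    D = np.sqrt(U ** 2 + V ** 2)
    lowpass = 1.0 / (1.0 + (D / cutoff) ** (2 * order))    # ~1 near the origin, ~0 far away
    G = np.fft.fftshift(np.fft.fft2(g))                    # centered spectrum of g
    F_hat = lowpass * G / (H + 1e-8)                       # G/H applied only near the origin
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_hat)))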
Minimum Mean Square Error (Wiener) Filtering
This approach incorporates both the degradation function and the statistical characteristics of
noise into the restoration process. The objective is to find an estimate \hat{f} of the
uncorrupted image f such that the mean square error between them is minimized. This error measure
is given by

e^2 = E\{ (f - \hat{f})^2 \}

where E\{\cdot\} denotes the expected value of the argument. It is assumed that the noise and the
image are uncorrelated.
The minimum of the error function is given in the frequency domain by the expression

\hat{F}(u, v) = \left[ \frac{H^*(u, v)\, S_f(u, v)}{S_f(u, v)\, |H(u, v)|^2 + S_\eta(u, v)} \right] G(u, v)
             = \left[ \frac{H^*(u, v)}{|H(u, v)|^2 + S_\eta(u, v) / S_f(u, v)} \right] G(u, v)
             = \left[ \frac{1}{H(u, v)} \cdot \frac{|H(u, v)|^2}{|H(u, v)|^2 + S_\eta(u, v) / S_f(u, v)} \right] G(u, v)

where
H(u, v) is the degradation function,
H^*(u, v) is the complex conjugate of H(u, v),
|H(u, v)|^2 = H^*(u, v) H(u, v),
S_\eta(u, v) = |N(u, v)|^2 is the power spectrum of the noise, and
S_f(u, v) = |F(u, v)|^2 is the power spectrum of the undegraded image.

G(u, v) is the transform of the degraded image, and the restored image in the spatial domain is
obtained by the inverse Fourier transform of the frequency-domain estimate \hat{F}(u, v). If the
noise is zero, the noise power spectrum vanishes and the Wiener filter reduces to the inverse
filter.

A number of useful measures are based on the power spectra of the noise and of the undegraded
image. The first is the signal-to-noise ratio (SNR):

\text{SNR} = \frac{\sum_{u=0}^{M-1} \sum_{v=0}^{N-1} |F(u, v)|^2}{\sum_{u=0}^{M-1} \sum_{v=0}^{N-1} |N(u, v)|^2}

This ratio gives a measure of the level of information-bearing signal power relative to the level
of noise power, and it is an important metric for characterizing the performance of restoration
algorithms.


The mean square error given above in statistical form can also be approximated by a summation
involving the original and restored images:

\text{MSE} = \frac{1}{MN} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \left[ f(x, y) - \hat{f}(x, y) \right]^2

When the power spectra of the noise and of the undegraded image are not known or cannot be
estimated, the Wiener filter

\hat{F}(u, v) = \left[ \frac{1}{H(u, v)} \cdot \frac{|H(u, v)|^2}{|H(u, v)|^2 + S_\eta(u, v) / S_f(u, v)} \right] G(u, v)

is approximated by the expression

\hat{F}(u, v) = \left[ \frac{1}{H(u, v)} \cdot \frac{|H(u, v)|^2}{|H(u, v)|^2 + K} \right] G(u, v)

where K is a specified constant.
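A minimal sketch of this parametric Wiener filter with a constant K, using the identity
(1/H)|H|^2/(|H|^2 + K) = H*/(|H|^2 + K); g and a centered H are assumed inputs, and K is typically
chosen interactively in practice.

import numpy as np

def wiener_filter(g, H, K=0.01):
    G = np.fft.fftshift(np.fft.fft2(g))             # centered spectrum of the degraded image
    H2 = np.abs(H) ** 2
    F_hat = np.conj(H) / (H2 + K) * G               # Wiener estimate of F(u, v)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_hat)))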

Constrained Least Squares Filtering


In the Wiener filter, the power spectra of the undegraded image and of the noise must be known.
Although a constant estimate of their ratio is sometimes useful, it is not always suitable.
Constrained least squares filtering requires knowledge of only the mean and variance of the noise.

From the definition of convolution, and using vector-matrix notation, the degradation model

g(x, y) = f(x, y) * h(x, y) + \eta(x, y)

can be written in matrix form as

\mathbf{g} = \mathbf{H}\mathbf{f} + \boldsymbol{\eta}

The matrix \mathbf{H} is highly sensitive to noise. One way to alleviate this sensitivity is to
base optimality of restoration on a measure of smoothness, such as the second derivative of the
image (the Laplacian). It is desired to find the minimum of a criterion function C, defined as

C = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \left[ \nabla^2 f(x, y) \right]^2

subject to the constraint

\| \mathbf{g} - \mathbf{H}\hat{\mathbf{f}} \|^2 = \| \boldsymbol{\eta} \|^2

where \|\mathbf{w}\|^2 = \mathbf{w}^T \mathbf{w} is the Euclidean vector norm and \hat{\mathbf{f}}
is the estimate of the undegraded image.
The frequency domain solution to this optimization problem is given by the expression

\hat{F}(u, v) = \left[ \frac{H^*(u, v)}{|H(u, v)|^2 + \gamma\, |P(u, v)|^2} \right] G(u, v)

where \gamma is a parameter that is adjusted (adaptively) so that the constraint is satisfied, and
P(u, v) is the Fourier transform of the Laplacian operator

p(x, y) = \begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix}

If \gamma = 0, the above equation reduces to inverse filtering.
A procedure for computing \gamma by iteration is as follows. Define the residual vector

\mathbf{r} = \mathbf{g} - \mathbf{H}\hat{\mathbf{f}}

It can be shown that \phi(\gamma) = \mathbf{r}^T\mathbf{r} = \|\mathbf{r}\|^2 is a monotonically
increasing function of \gamma. We want to adjust \gamma so that

\|\mathbf{r}\|^2 = \|\boldsymbol{\eta}\|^2 \pm a    ...(1)

where a is an accuracy factor. The iteration is:
1. Specify an initial value of \gamma.
2. Compute \|\mathbf{r}\|^2.
3. Stop if (1) is satisfied; otherwise return to step 2 after increasing \gamma if
   \|\mathbf{r}\|^2 < \|\boldsymbol{\eta}\|^2 - a, or decreasing \gamma if
   \|\mathbf{r}\|^2 > \|\boldsymbol{\eta}\|^2 + a. Use the new value of \gamma to recompute

\hat{F}(u, v) = \left[ \frac{H^*(u, v)}{|H(u, v)|^2 + \gamma\, |P(u, v)|^2} \right] G(u, v)

In order to use this algorithm, we need the quantities \|\mathbf{r}\|^2 and \|\boldsymbol{\eta}\|^2.
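A minimal sketch of constrained least squares filtering for a fixed gamma, together with the
residual norm ||r||^2 that drives the iterative adjustment of gamma; g and a centered H of the
same shape are assumed inputs. In an outer loop (not shown), gamma would be increased or decreased
until ||r||^2 matches the estimate of ||eta||^2 within the accuracy factor a.

import numpy as np

def cls_filter(g, H, gamma):
    M, N = g.shape
    p = np.zeros((M, N))
    p[:3, :3] = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]        # Laplacian kernel p(x, y), zero-padded
    P = np.fft.fftshift(np.fft.fft2(p))                      # centered P(u, v)
    G = np.fft.fftshift(np.fft.fft2(g))
    F_hat = np.conj(H) / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2) * G
    f_hat = np.real(np.fft.ifft2(np.fft.ifftshift(F_hat)))
    # residual r = g - h * f_hat, computed through the frequency domain
    Hf = H * np.fft.fftshift(np.fft.fft2(f_hat))
    r = g - np.real(np.fft.ifft2(np.fft.ifftshift(Hf)))
    return f_hat, np.sum(r ** 2)                             # restored image and ||r||^2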

Question Bank
1) With necessary equations and graphs, explain any four noise probability density functions.
2) Explain the minimum mean square error filtering method of restoring images.
3) Explain how image degradation is estimated using (i) observation and (ii) mathematical modeling.
4) Explain the order-statistic filters used for restoring images in the presence of noise.
5) Explain the following noise models: i) Gaussian noise ii) Rayleigh noise iii) Impulse noise
   iv) Uniform noise.
6) Explain the inverse filter and the Wiener filter with the help of equations. Explain the
   advantages of the Wiener filter over the inverse filter.
7) Define the process of image restoration. How is restoration different from enhancement?
8) What are adaptive filters? Explain the adaptive mean filter and its advantages.
9) What are order-statistic filters? List any four such filters.
10) Explain the basic model of the image restoration process. Explain any four important noise
    probability density functions.
11) Explain minimum mean square error (Wiener) filtering in image processing.
12) Explain the adaptive mean filter and list its advantages.
13) With necessary mathematical equations, explain estimating the degradation function by modeling.

Digital Image Processing

Module-4

Wavelets:
Background, Multiresolution Expansions.

Morphological Image Processing:


Preliminaries, Erosion and Dilation, Opening and
Closing, The Hit-or-Miss Transforms, Some Basic
Morphological Algorithms.

[Text: Chapter 7: Sections 7.1 and 7.2, Chapter 9: Sections 9.1 to 9.5]

L1, L2, L3

Basic Morphological Algorithms:
1 – Boundary Extraction
2 – Hole Filling
3 – Extraction of Connected Components
4 – Convex Hull
5 – Thinning
6 – Thickening
7 – Skeletons.

1. Boundary Extraction: The boundary of a set A can be obtained by first eroding A by a
structuring element B and then taking the set difference between A and its erosion:

\beta(A) = A - (A \ominus B)

where B is a suitable structuring element. Figure 9.13 illustrates the mechanics of boundary
extraction: it shows a simple binary object, a structuring element B, and the result of using the
above equation. The 3×3 structuring element of 1s shown in Fig. 9.13(b) is the one most commonly
used; using a 5×5 structuring element of 1s would result in a boundary between 2 and 3 pixels
thick.
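A minimal sketch of boundary extraction on a binary NumPy array A, using SciPy's binary erosion
with the 3×3 structuring element of 1s mentioned above:

import numpy as np
from scipy.ndimage import binary_erosion

def boundary(A):
    B = np.ones((3, 3), dtype=bool)                 # 3x3 structuring element of 1s
    return A & ~binary_erosion(A, structure=B)      # beta(A) = A - (A eroded by B)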

2. Hole Filling:
A hole may be defined as a background region surrounded by a connected border of
foreground pixels.
Let A denote a set whose elements are 8-connected boundaries, each boundary enclosing a
background region (i.e., a hole). Given a point in each hole, the objective is to fill all the
holes with 1s.
1. Form an array X_0 of 0s (the same size as the array containing A), except at the locations in
   X_0 corresponding to the given point in each hole, which are set to 1.
2. Compute X_k = (X_{k-1} \oplus B) \cap A^c,  k = 1, 2, 3, ...
Stop the iteration when X_k = X_{k-1}; the union of X_k and A then contains all the filled holes
and their boundaries (see the sketch below).
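A minimal sketch of this iteration on a binary array A, with seed an assumed (row, col) point
inside the hole:

import numpy as np
from scipy.ndimage import binary_dilation

def fill_hole(A, seed):
    B = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=bool)   # cross-shaped SE
    X = np.zeros_like(A, dtype=bool)
    X[seed] = True                                   # X_0: 1 only at the given point in the hole
    while True:
        X_next = binary_dilation(X, structure=B) & ~A    # (X_{k-1} dilated by B) AND A^c
        if np.array_equal(X_next, X):                # converged: X_k == X_{k-1}
            return A | X                             # union with A gives the filled set
        X = X_next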
Example:

3. Extraction of Connected Components:
• This algorithm extracts a connected component by selecting a point on a binary object A.
• It works similarly to hole filling, but this time the intersection is taken with the object A
  itself instead of its complement.
• Extraction of connected components from a binary image is central to many automated image
  analysis applications.
• Let A be a set containing one or more connected components, and form an array X_0 (of the same
  size as the array containing A) whose elements are 0s, except at each location known to
  correspond to a point in a connected component of A, which is set to 1. The following iterative
  procedure accomplishes the extraction (a sketch follows below):

X_k = (X_{k-1} \oplus B) \cap A,  k = 1, 2, 3, ...

where B is a suitable structuring element; iterate until X_k = X_{k-1}.
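A minimal sketch of this component-extraction iteration, with A a binary array and seed an assumed
point inside the component of interest:

import numpy as np
from scipy.ndimage import binary_dilation

def extract_component(A, seed):
    B = np.ones((3, 3), dtype=bool)                  # SE giving 8-connectivity
    X = np.zeros_like(A, dtype=bool)
    X[seed] = True                                   # X_0: 1 only at the known point
    while True:
        X_next = binary_dilation(X, structure=B) & A  # (X_{k-1} dilated by B) AND A
        if np.array_equal(X_next, X):                 # converged: the whole component found
            return X
        X = X_next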

Example:
This shows automated inspection of chicken breast that contains a bone fragment. The original
image is thresholded, and the algorithm above gives the number of pixels in each of the connected
components. From these counts we can tell whether the food contains bone fragments large enough to
be a hazard.

4. Convex Hull
A set A is said to be convex if the straight line segment joining any two points in A lies
entirely within A. The convex hull H of an arbitrary set S is the smallest convex set containing S.

Let B^i, i = 1, 2, 3, 4, represent the four structuring elements. The procedure consists of
implementing the equation

X_k^i = (X_{k-1}^i \circledast B^i) \cup A,  i = 1, 2, 3, 4 and k = 1, 2, 3, ...

with X_0^i = A, where \circledast denotes the hit-or-miss transform. When the procedure converges,
i.e., X_k^i = X_{k-1}^i, let D^i = X_k^i. The convex hull of A is then

C(A) = \bigcup_{i=1}^{4} D^i

Fig. 9.19 illustrates the procedure given in above two equations.

Figure 9.19(a) shows the structuring elements used to extract the convex hull. The origin of
each element is at its center. The X entries indicate “don’t care” conditions. This means that a
structuring element is said to have found a match in A if the 3x3 region of A under the
structuring element mask at that location matches the pattern of the mask.

5. Thinning:
The thinning of a set A by a structuring element B is defined as

A \otimes B = A - (A \circledast B) = A \cap (A \circledast B)^c

A more useful expression for thinning A symmetrically is based on a sequence of structuring
elements

\{B\} = \{B^1, B^2, B^3, ..., B^n\}

where B^i is a rotated version of B^{i-1}. The thinning of A by the sequence of structuring
elements \{B\} is then

A \otimes \{B\} = ((...((A \otimes B^1) \otimes B^2)...) \otimes B^n)

6. Thickening:
Thickening is the morphological dual of thinning and is defined by the expression

A \odot B = A \cup (A \circledast B)

As in thinning, thickening can also be defined as a sequential operation:

A \odot \{B\} = ((...((A \odot B^1) \odot B^2)...) \odot B^n)

The structuring elements used for thickening have the same form as those used in thinning, but
with all 1s and 0s interchanged. However, a separate algorithm for thickening is seldom used in
practice; the usual procedure is to thin the background of the set in question and then complement
the result. In other words, to thicken a set A, we form C = A^c, thin C, and then form C^c.
Figure 9.22 illustrates this procedure. Depending on the nature of A, this procedure may result in
some disconnected points, so thickening by this method usually requires a simple post-processing
step to remove them.

• We will notice in Example 9.22(c) that the thinned background forms a boundary for the
  thickening process; this feature does not occur in the direct implementation of thickening.
• This is one of the reasons for using background thinning to accomplish thickening.

7. Skeletons:
As Fig. 9.23 shows, the notion of a skeleton S(A) of a set A is defined intuitively. We deduce
from this figure that:
a. If z is a point of S(A) and (D)_z is the largest disk centered at z and contained in A, one
   cannot find a larger disk containing (D)_z and included in A; the disk (D)_z is called a
   "maximum disk".
b. The disk (D)_z touches the boundary of A at two or more different places.

The skeleton of A can be expressed in terms of erosions and openings:

S(A) = \bigcup_{k=0}^{K} S_k(A)

with

S_k(A) = (A \ominus kB) - (A \ominus kB) \circ B

where B is a structuring element and (A \ominus kB) indicates k successive erosions of A:

(A \ominus kB) = ((...((A \ominus B) \ominus B) \ominus ...) \ominus B

k times, and K is the last iterative step before A erodes to an empty set; in other words,

K = \max\{ k \mid (A \ominus kB) \ne \varnothing \}

In conclusion, S(A) can be obtained as the union of the skeleton subsets S_k(A). A can also be
reconstructed from the subsets S_k(A) by using the equation

A = \bigcup_{k=0}^{K} (S_k(A) \oplus kB)

where (S_k(A) \oplus kB) denotes k successive dilations of S_k(A); that is,

(S_k(A) \oplus kB) = ((...((S_k(A) \oplus B) \oplus B) \oplus ...) \oplus B

Example:

8. Pruning:
a. Thinning and skeletonizing tend to leave parasitic components
b. Pruning methods are essential complement to thinning and
skeletonizing procedures

Fig. 9.25(a) shows the skeleton of a hand-printed "a". Thinning of an input set A with a sequence
of structuring elements designed to detect only end points achieves the desired result. That is,
let

X_1 = A \otimes \{B\}

where \{B\} = \{B^1, B^2, B^3, ..., B^n\} denotes the structuring element sequence shown in
Figs. 9.25(b) and (c). The × in Fig. 9.25(b) signifies a "don't care" condition, in the sense that
it does not matter whether the pixel in that location has a value of 0 or 1.

Applying the thinning equation to A three times yields the set X_1 in Fig. 9.25(d). The next step
is to restore the character to its original form, but with the parasitic branches removed. This
requires forming a set X_2 containing all end points in X_1:

X_2 = \bigcup_{k=1}^{8} (X_1 \circledast B^k)

where the B^k are the same end-point detectors shown in Figs. 9.25(b) and (c). The next step is
dilation of the end points three times, using set A as a delimiter:

X_3 = (X_2 \oplus H) \cap A

where H is a 3×3 structuring element of 1s. The union of X_3 and X_1 yields the desired result,

X_4 = X_1 \cup X_3

shown in Fig. 9.25(g).
9. Morphological Reconstruction:
Morphological reconstruction involves two images and a structuring element:
a. One image, called the marker, contains the starting points for the transformation.
b. The other image, called the mask, constrains the transformation.
c. The structuring element is used to define connectivity.
Morphological Reconstruction: Geodesic Dilation:
Let F denote the marker image and G the mask image, with F \subseteq G. The geodesic dilation of
size 1 of the marker image with respect to the mask, denoted D_G^{(1)}(F), is defined as

D_G^{(1)}(F) = (F \oplus B) \cap G

The geodesic dilation of size n of the marker image F with respect to G, denoted D_G^{(n)}(F), is
defined as

D_G^{(n)}(F) = D_G^{(1)}\left[ D_G^{(n-1)}(F) \right]

with D_G^{(0)}(F) = F.

Morphological Reconstruction: Geodesic Erosion:

Let F denote the marker image and G the mask image. The geodesic erosion of size 1 of the marker
image with respect to the mask, denoted E_G^{(1)}(F), is defined as

E_G^{(1)}(F) = (F \ominus B) \cup G

The geodesic erosion of size n of the marker image F with respect to G, denoted E_G^{(n)}(F), is
defined as

E_G^{(n)}(F) = E_G^{(1)}\left[ E_G^{(n-1)}(F) \right]

with E_G^{(0)}(F) = F.

Morphological Reconstruction by Dilation:

Morphological reconstruction by dilation of a mask image G from a marker image F, denoted
R_G^D(F), is defined as the geodesic dilation of F with respect to G, iterated until stability is
achieved; that is,

R_G^D(F) = D_G^{(k)}(F)

with k such that D_G^{(k)}(F) = D_G^{(k+1)}(F).
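A minimal sketch of reconstruction by dilation for binary images: the geodesic dilation
(F dilated by B) AND G is iterated until it stops changing. F (marker) and G (mask) are assumed
binary arrays with F contained in G.

import numpy as np
from scipy.ndimage import binary_dilation

def reconstruct_by_dilation(F, G):
    B = np.ones((3, 3), dtype=bool)                  # SE defining 8-connectivity
    X = F.copy()
    while True:
        X_next = binary_dilation(X, structure=B) & G  # geodesic dilation of size 1
        if np.array_equal(X_next, X):                 # stability: D^(k) == D^(k+1)
            return X
        X = X_next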

Morphological Reconstruction by Erosion:

Morphological reconstruction by erosion of a mask image G from a marker image F, denoted
R_G^E(F), is defined as the geodesic erosion of F with respect to G, iterated until stability is
achieved; that is,

R_G^E(F) = E_G^{(k)}(F)

with k such that E_G^{(k)}(F) = E_G^{(k+1)}(F).


Opening by Reconstruction:

The opening by reconstruction of size n of an image F is defined as the reconstruction by dilation
of F from the erosion of size n of F; that is,

O_R^{(n)}(F) = R_F^D\big[ (F \ominus nB) \big]

where (F \ominus nB) indicates n successive erosions of F by B. Figure 9.29 shows an example of
opening by reconstruction.

Filling Holes:

Let I(x, y) denote a binary image and suppose that we form a marker image F that is 0 everywhere,
except at the image border, where it is set to 1 - I; that is,

F(x, y) = \begin{cases} 1 - I(x, y) & \text{if } (x, y) \text{ is on the border of } I \\ 0 & \text{otherwise} \end{cases}

Then

H = \big[ R_{I^c}^D(F) \big]^c

is a binary image equal to I with all holes filled.

Fig. 9.30(a) shows a simple image I containing one hole, and Fig. 9.30(b) shows its complement
I^c. Since I^c is used as an AND mask, we are protecting all foreground pixels (including the wall
around the hole) from changing during the iterations of the procedure.
Fig. 9.30(c) is the array F formed according to the equation for F(x, y) above.
Fig. 9.30(d) is F dilated with a 3×3 SE whose elements are all 1s, and Fig. 9.30(e) shows the
geodesic dilation of F using I^c as the mask.
Another iteration yields the same result which, when complemented as required by the equation for
H, gives the result in Fig. 9.30(f). The hole is now filled and the rest of image I is unchanged.
The operation H ∩ I^c yields an image containing 1-valued pixels at the locations corresponding to
the holes in I, as shown in Fig. 9.30(g).
Example:

Border Clearing:
An algorithm for removing objects that touch (i.e., are connected to) the border is a useful tool
because it can be used to screen images so that only complete objects remain for further
processing, or to signal that partial objects are present in the field of view. Define a marker
image F that equals I on the border of the image and is 0 elsewhere:

F(x, y) = \begin{cases} I(x, y) & \text{if } (x, y) \text{ is on the border of } I \\ 0 & \text{otherwise} \end{cases}

The border-clearing procedure first computes the reconstruction by dilation R_I^D(F), which
extracts the objects touching the border, and then forms the difference

X = I - R_I^D(F)

which contains only the objects that do not touch the border.

Example:
Fig. 9.32(a) shows the reconstruction R_I^D(F) obtained using a 3×3 structuring element of all 1s,
and Fig. 9.32(b) shows the image X computed using the above equation.

Fig. 9.33 summarizes the basic types of structuring elements used in the various morphological
processes discussed.

******************

