
UNIT II IMAGE ENHANCEMENT 9

Spatial Domain: Gray level transformations – Histogram processing – Basics of Spatial Filtering – Smoothing and Sharpening Spatial Filtering. Frequency Domain: Introduction to Fourier Transform – Smoothing and Sharpening frequency domain filters – Ideal, Butterworth and Gaussian filters, Homomorphic filtering, Color image enhancement

Image enhancement approaches fall into two broad categories: spatial domain methods and frequency domain methods. The term spatial domain refers to the image plane itself, and approaches in this category are based on direct manipulation of pixels in an image. Frequency domain processing techniques are based on modifying the Fourier transform of an image. Enhancing an image provides better contrast and more visible detail compared to the non-enhanced image. Image enhancement has many applications: it is used to enhance medical images, images captured in remote sensing, satellite images, and so on. The term spatial domain refers to the aggregate of pixels composing an image. Spatial domain methods are procedures that operate directly on these pixels. Spatial domain processes are denoted by the expression
g(x,y) = T[f(x,y)]
where f(x, y) is the input image, g(x, y) is the processed image, and T is an operator on f,
defined over some neighborhood of (x, y).

The principal approach in defining a neighborhood about a point (x, y) is to use a square or rectangular subimage area centered at (x, y), as the figure shows. The center of the subimage is moved from pixel to pixel starting, say, at the top left corner. The operator T is applied at each location (x, y) to yield the output, g, at that location. The process utilizes only the pixels in the area of the image spanned by the neighborhood.
The simplest form of T is when the neighborhood is of size 1*1 (that is, a single pixel). In this
case, g depends only on the value of f at (x, y), and T becomes a gray-level (also called an
intensity or mapping) transformation function of the form
s=T(r)
where r is the gray level of the input pixel and s is the gray level of the output pixel. T is a transformation function that maps each value of r to a value of s.
For example, if T(r) has the form shown in Fig. 2.2(a), the effect of this transformation would
be to produce an image of higher contrast than the original by darkening the levels below m
and brightening the levels above m in the original image. In this technique, known as contrast
stretching, the values of r below m are compressed by the transformation function into a narrow
range of s, toward black. The opposite effect takes place for values of r above m.
In the limiting case shown in Fig. 2.2(b), T(r) produces a two-level (binary) image. A mapping
of this form is called a thresholding function.
One of the principal approaches in this formulation is based on the use of so-called masks
(also referred to as filters, kernels, templates, or windows). Basically, a mask is a small (say,
3*3) 2-D array, in which the values of the mask coefficients determine the nature of the
process, such as image sharpening. Enhancement techniques based on this type of approach
often are referred to as mask processing or filtering.

Fig. 2.2 Gray level transformation functions for contrast enhancement


Image enhancement can be done through gray level transformations which are discussed
below.
BASIC GRAY LEVEL TRANSFORMATIONS
1. Image negative
2. Log transformations
3. Power law transformations
4. Piecewise-Linear transformation functions

LINEAR TRANSFORMATION:
Linear transformation includes the simple identity and negative transformations.
The identity transformation is shown by a straight line. In this transformation, each value of the input image is mapped directly to the same value of the output image, so the input and output images are identical, hence the name identity transformation. It is shown below:

Fig. Linear transformation between input and output


NEGATIVE TRANSFORMATION:
The second linear transformation is the negative transformation, which is the inverse of the identity transformation. In the negative transformation, each value of the input image is subtracted from L-1 and mapped onto the output image.
IMAGE NEGATIVE:
The image negative with gray level value in the range of [0, L-1] is obtained by negative
transformation given by S = T(r) or
S = L -1 – r
where r is the gray level value at pixel (x,y) and L is the number of gray levels in the image, so L-1 is the largest gray level.
The result is a photographic negative. It is useful for enhancing white details embedded in dark regions of an image.
The overall graph of these transformations is shown below.
Fig. Some basic gray-level transformation functions used for image enhancement (output gray level s versus input gray level r)
In this case the following transformation has been applied:
S = (L – 1) – r
Since the input image of Einstein is an 8 bpp image, the number of levels in this image is 256. Putting L = 256 in the equation, we get
S = 255 – r
So, each pixel value is subtracted from 255 to produce the result image. The lighter pixels become dark and the darker pixels become light, which yields the image negative.
It has been shown in the graph below.

Fig. Negative transformations
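As a quick illustration, the negative can be computed in a single NumPy operation. This is a minimal sketch, assuming an 8-bit grayscale image stored as a NumPy array; the names img and negative are illustrative, not from the text.

```python
import numpy as np

def negative(img):
    # s = (L - 1) - r with L = 256 for an 8-bit image
    return 255 - img

# Example on a tiny 2x2 "image" (values stay inside [0, 255])
img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
print(negative(img))  # [[255 191], [127 0]]
```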


LOGARITHMIC TRANSFORMATIONS:
Logarithmic transformation contains two types of transformations: the log transformation and the inverse log transformation.
LOG TRANSFORMATIONS:
The log transformation can be defined by the formula
S = c log(r + 1)
where S and r are the pixel values of the output and input image and c is a constant. The value 1 is added to each pixel value of the input image because if there is a pixel intensity of 0 in the image, then log(0) is undefined; adding 1 makes the minimum argument of the logarithm at least 1.
During log transformation, the dark pixels in an image are expanded compared to the higher pixel values, while the higher pixel values are compressed. This results in the image enhancement described below.
ANOTHER WAY TO REPRESENT LOG TRANSFORMATIONS:
The log transformation enhances details in the darker regions of an image at the expense of detail in brighter regions:
s = c log(1 + r)
where c is a constant and r ≥ 0.
The shape of the curve shows that this transformation maps a narrow range of low gray level values in the input image into a wider range of output values. The opposite is true for high gray level values of the input image.

Fig. Log Transformation Curve input vs output
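A minimal NumPy sketch of the log transformation, with c chosen so the output spans the full 8-bit range. The image is assumed to be an 8-bit grayscale array with at least one nonzero pixel; the function name is illustrative.

```python
import numpy as np

def log_transform(img):
    # s = c * log(1 + r); c scales the output to span [0, 255]
    r = img.astype(np.float64)
    c = 255.0 / np.log(1.0 + r.max())  # assumes at least one nonzero pixel
    return (c * np.log(1.0 + r)).astype(np.uint8)
```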

POWER – LAW TRANSFORMATIONS:


Power-law transformations include two further transformations: the nth power and nth root transformations. These transformations can be given by the expression
S = C r^γ
The symbol γ is called gamma, due to which this transformation is also known as the gamma transformation. Here C and γ are positive constants.
Variation in the value of γ varies the enhancement of the image. Different display devices / monitors have their own gamma correction, which is why they display images at different intensities.
Sometimes the equation is written as S = C (r + ε)^γ to account for an offset (that is, a measurable output when the input is zero).
Plots of S versus r for various values of γ are shown in the figure. As in the case of the log transformation, power-law curves with fractional values of γ map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of input levels. Unlike the log function, however, we notice here a family of possible transformation curves obtained simply by varying γ.
In the figure, curves generated with values of γ > 1 have exactly the opposite effect of those generated with values of γ < 1. The equation S = C (r + ε)^γ reduces to the identity transformation when C = γ = 1.

Fig. Plot of the equation S = C r^γ for various values of γ


(C =1 in all cases)
This type of transformation is used for enhancing images for different types of display devices. The gamma of different display devices is different. For example, the gamma of a CRT lies between 1.8 and 2.5, which means the image displayed on a CRT is dark.
Varying gamma (γ) yields the family of possible transformation curves
S = C r^γ
where C and γ are positive constants. In the plot of S versus r for various values of γ:
γ > 1 compresses dark values and expands bright values;
γ < 1 expands dark values and compresses bright values (similar to the log transformation).
When C = γ = 1, the equation reduces to the identity transformation.
CORRECTING GAMMA:
To correct for a display gamma of, say, 2.5, the image is pre-distorted with the inverse exponent:
S = C r^(1/2.5)
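A minimal sketch of gamma correction in NumPy, assuming an 8-bit grayscale input normalized to [0, 1] before the power law is applied; the default gamma of 2.5 follows the CRT example above, and the function name is illustrative.

```python
import numpy as np

def gamma_correct(img, gamma=2.5, c=1.0):
    # Pre-distort with exponent 1/gamma so the display's gamma cancels out
    r = img.astype(np.float64) / 255.0      # normalize r to [0, 1]
    s = c * np.power(r, 1.0 / gamma)        # s = c * r^(1/gamma)
    return np.clip(s * 255.0, 0, 255).astype(np.uint8)
```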
PIECEWISE-LINEAR TRANSFORMATION FUNCTIONS
A complementary approach to the methods discussed above is to use piecewise linear functions. The principal advantage of piecewise linear functions over the types of functions discussed so far is that the form of piecewise functions can be arbitrarily complex. Their principal disadvantage is that their specification requires considerably more user input.
Contrast Stretching
One of the simplest piecewise linear functions is a contrast-stretching transformation. Low-
contrast images can result from poor illumination, lack of dynamic range in the imaging sensor,
or even wrong setting of a lens aperture during image acquisition.
S= T(r)
Figure 1(a) shows a typical transformation used for contrast stretching. The locations of points (r1, s1) and (r2, s2) control the shape of the transformation function. If r1 = s1 and r2 = s2, the transformation is a linear function that produces no changes in gray levels. If r1 = r2, s1 = 0 and s2 = L-1, the transformation becomes a thresholding function that creates a binary image, as in the thresholding figure.
Intermediate values of (r1, s1) and (r2, s2) produce various degrees of spread in the gray levels of the output image, thus affecting its contrast. In general, r1 ≤ r2 and s1 ≤ s2 is assumed so that the function is single valued and monotonically increasing.

Fig. 1 Contrast Stretching.


(a) Form of transformation function
(b) A low-contrast image
(c) Result of contrast stretching
(d) Result of thresholding
Figure 1(b) shows an 8-bit image with low contrast.
Fig. 1(c) shows the result of contrast stretching, obtained by setting (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L-1), where rmin and rmax denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L-1].
Finally, Fig. 1(d) shows the result of using the thresholding function defined previously, with r1 = r2 = m, the mean gray level in the image. The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
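A minimal NumPy sketch of the piecewise-linear stretch above, built from the three line segments through (r1, s1) and (r2, s2). It assumes an 8-bit grayscale array and 0 < r1 ≤ r2 < 255; the function name is illustrative.

```python
import numpy as np

def contrast_stretch(img, r1, s1, r2, s2):
    # Piecewise-linear mapping through (r1, s1) and (r2, s2);
    # assumes 0 < r1 <= r2 < 255 and s1 <= s2 (single valued, increasing)
    s = np.interp(img.astype(np.float64), [0, r1, r2, 255], [0, s1, s2, 255])
    return s.astype(np.uint8)

# Full-range stretch from the text: (r1, s1) = (rmin, 0), (r2, s2) = (rmax, 255)
# out = contrast_stretch(img, img.min(), 0, img.max(), 255)
```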
Gray-level Slicing
Highlighting a specific range of gray levels in an image often is desired. Applications include
enhancing features such as masses of water in satellite imagery and enhancing flaws in X-ray
images. There are several ways of doing level slicing, but most of them are variations of two
basic themes. One approach is to display a high value for all gray levels in the range of interest
and a low value for all other gray levels.
This transformation, shown in Fig. 2(a), produces a binary image. The second approach, based on the transformation shown in Fig. 2(b), brightens the desired range of gray levels but preserves the background and gray-level tonalities in the image. Figure 2(c) shows a gray-scale image, and Fig. 2(d) shows the result of using the transformation in Fig. 2(a). Variations of the two transformations shown in the figure are easy to formulate.
Fig. 2 (a) This transformation highlights range [A,B] of gray levels and reduces all others to a
constant level
(b) This transformation highlights range [A,B] but preserves all other levels.
(c) A gray-scale image
(d) Result of using the transformation in (a).
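Both slicing approaches reduce to a masked assignment in NumPy. A minimal sketch, assuming an 8-bit grayscale array and a range of interest [a, b]; the function names are illustrative.

```python
import numpy as np

def slice_binary(img, a, b):
    # Approach 1: high value for levels in [a, b], low value elsewhere
    return np.where((img >= a) & (img <= b), 255, 0).astype(np.uint8)

def slice_preserve(img, a, b):
    # Approach 2: brighten [a, b] but preserve all other gray levels
    out = img.copy()
    out[(img >= a) & (img <= b)] = 255
    return out
```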
BIT-PLANE SLICING
Instead of highlighting gray-level ranges, highlighting the contribution made to total image
appearance by specific bits might be desired. Suppose that each pixel in an image is represented
by 8 bits. Imagine that the image is composed of eight 1-bit planes, ranging from bit-plane 0
for the least significant bit to bit plane 7 for the most significant bit. In terms of 8-bit bytes,
plane 0 contains all the lowest order bits in the bytes comprising the pixels in the image and
plane 7 contains all the high-order bits.
The figure shows the various bit planes for the image. Note that the higher-order bits (especially the top four) contain the majority of the visually significant data. The other bit planes contribute the more subtle details in the image. Separating a digital image into its bit planes is useful for analyzing the relative importance of each bit of the image, a process that aids in determining the adequacy of the number of bits used to quantize each pixel.
In terms of bit-plane extraction for an 8-bit image, it is not difficult to show that the (binary) image for bit-plane 7 can be obtained by processing the input image with a thresholding gray-level transformation function that (1) maps all levels in the image between 0 and 127 to one level (for example, 0); and (2) maps all levels between 128 and 255 to another (for example, 255).
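A minimal sketch of both views, assuming an 8-bit grayscale NumPy array: a generic bit-plane extractor via shifting and masking, and the equivalent thresholding formulation for plane 7. The function names are illustrative.

```python
import numpy as np

def bit_plane(img, k):
    # Extract bit-plane k (0 = least significant, 7 = most significant),
    # scaled to {0, 255} so the binary plane is visible when displayed
    return (((img >> k) & 1) * 255).astype(np.uint8)

def plane7_by_threshold(img):
    # Equivalent thresholding view for plane 7: 0-127 -> 0, 128-255 -> 255
    return np.where(img >= 128, 255, 0).astype(np.uint8)
```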
HISTOGRAM PROCESSING
The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function of
the form
H(rk)=nk
where rk is the kth gray level and nk is the number of pixels in the image having the level rk.
A normalized histogram is given by the equation
P(rk)=nk/n for k=0,1,2,…..,L-1
P(rk) gives the estimate of the probability of occurrence of gray level rk. The sum of all
components of a normalized histogram is equal to 1.
The histogram plots are simple plots of H(rk)=nk versus rk.
In the dark image the components of the histogram are concentrated on the low (dark) side of
the gray scale. In case of bright image, the histogram components are biased towards the high
side of the gray scale. The histogram of a low contrast image will be narrow and will be
centered towards the middle of the gray scale.
The components of the histogram in the high contrast image cover a broad range of the gray
scale. The net effect of this will be an image that shows a great deal of gray levels details and
has high dynamic range.
HISTOGRAM EQUALIZATION:
Histogram equalization is a common technique for enhancing the appearance of images.
Suppose we have an image which is predominantly dark. Then its histogram would be skewed towards the lower end of the grey scale, and all the image detail would be compressed into the dark end of the histogram. If we could stretch out the grey levels at the dark end to produce a more uniformly distributed histogram, the image would become much clearer.
Let r denote the gray levels of the image to be enhanced, treated as a continuous quantity. The range of r is [0, 1], with r = 0 representing black and r = 1 representing white. The transformation function is of the form
S = T(r) where 0 ≤ r ≤ 1
It produces a level s for every pixel value r in the original image.

The transformation function is assumed to fulfill two conditions: (a) T(r) is single valued and monotonically increasing in the interval 0 ≤ r ≤ 1; and (b) 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1. The transformation function should be single valued so that the inverse transformation exists. The monotonically increasing condition preserves the increasing order from black to white in the output image. The second condition guarantees that the output gray levels will be in the same range as the input levels. The gray levels of the image may be viewed as random variables in the interval [0, 1]. The most fundamental descriptor of a random variable is its probability density function (PDF). Let Pr(r) and Ps(s) denote the probability density functions of the random variables r and s respectively. A basic result from elementary probability theory states that if Pr(r) and T(r) are known and T^-1(s) satisfies condition (a), then the probability density function Ps(s) of the transformed variable is given by the formula

Ps(s) = Pr(r) |dr/ds|
Thus, the PDF of the transformed variable s is determined by the gray-level PDF of the input image and by the chosen transformation function.
A transformation function of particular importance in image processing is

s = T(r) = ∫_0^r Pr(w) dw

This is the cumulative distribution function (CDF) of r. For discrete gray levels, the equalization transformation becomes

s_k = T(r_k) = Σ_{j=0}^{k} P_r(r_j) = Σ_{j=0}^{k} n_j / n, for k = 0, 1, ..., L-1

where L is the total number of possible gray levels in the image.
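A minimal NumPy sketch of the discrete equalization above, with the CDF scaled by (L-1) to map the result back to the display range. It assumes an 8-bit grayscale array; the function name is illustrative.

```python
import numpy as np

def equalize(img, L=256):
    hist = np.bincount(img.ravel(), minlength=L)    # n_k for each level r_k
    cdf = hist.cumsum() / img.size                  # running sum of p_r(r_j)
    lut = np.round((L - 1) * cdf).astype(np.uint8)  # s_k = (L-1) * CDF(r_k)
    return lut[img]                                 # apply mapping as a lookup
```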
BASICS OF SPATIAL FILTERING– SMOOTHING AND SHARPENING SPATIAL
FILTERING
Spatial filtering is an example of a neighborhood operation: the operation is performed on the values of the image pixels in the neighborhood together with the corresponding values of a subimage that has the same dimensions as the neighborhood. This subimage is called a filter, mask, kernel, template or window; the values in the filter subimage are referred to as coefficients rather than pixels. Spatial filtering operations are performed directly on the pixel values (amplitude/gray scale) of the image.
The process consists of moving the filter mask from point to point in the image. At each point
(x,y) the response is calculated using a predefined relationship.
For linear spatial filtering the response is given by a sum of products of the filter coefficient
and the corresponding image pixels in the area spanned by the filter mask.
The result R of linear filtering with a 3x3 filter mask at point (x,y) in the image is

R = w(-1,-1) f(x-1, y-1) + w(-1,0) f(x-1, y) + ... + w(0,0) f(x, y) + ... + w(1,1) f(x+1, y+1)

which is the sum of products of the mask coefficients with the corresponding pixels directly under the mask.
The coefficient w(0,0) coincides with the image value f(x,y), indicating that the mask is centered at (x,y) when the computation of the sum of products takes place. For a mask of size m x n, we assume m = 2a+1 and n = 2b+1, where a and b are nonnegative integers. This means all masks are of odd size.
In general, linear filtering of an image f of size M x N with a filter mask of size m x n is given by the expression

g(x,y) = Σ_{s=-a}^{a} Σ_{t=-b}^{b} w(s,t) f(x+s, y+t)

where a = (m-1)/2 and b = (n-1)/2.


To generate a complete filtered image this equation must be applied for x = 0, 1, 2, ..., M-1 and y = 0, 1, 2, ..., N-1. Thus the mask processes all the pixels in the image.
The process of linear filtering is similar to the frequency domain concept called convolution. For this reason, linear spatial filtering often is referred to as convolving a mask with an image, and filter masks are sometimes called convolution masks. Listing the mask coefficients as w's and the image gray levels under them as z's, the response can be written
R = w1 z1 + w2 z2 + ... + wmn zmn
where the w's are mask coefficients, the z's are the values of the image gray levels corresponding to those coefficients, and mn is the total number of coefficients in the mask.
An important point in implementing neighborhood operations for spatial filtering is the issue
of what happens when the center of the filter approaches the border of the image.
There are several ways to handle this situation.
i) Limit the excursion of the center of the mask so that it stays at least (n-1)/2 pixels from the border. The resulting filtered image will be smaller than the original, but all the pixels will be processed with the full mask.
ii) Filter all pixels only with the section of the mask that is fully contained in the image. This creates bands of pixels near the border that are processed with a partial mask.
iii) Pad the image by adding rows and columns of 0's, or pad by replicating rows and columns. The padding is removed at the end of the process. A sketch of this approach follows.
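A minimal NumPy sketch of linear spatial filtering with zero padding (option iii). The double loop mirrors the sum-of-products formula directly rather than using an optimized library routine; img is assumed to be a 2-D grayscale array, and the function name is illustrative.

```python
import numpy as np

def linear_filter(img, mask):
    # Slide an m x n mask over the image and take the sum of products at
    # each location; zero padding (option iii above) handles the borders.
    m, n = mask.shape
    a, b = (m - 1) // 2, (n - 1) // 2
    f = np.pad(img.astype(np.float64), ((a, a), (b, b)))  # pad with 0's
    g = np.zeros(img.shape, dtype=np.float64)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            g[x, y] = np.sum(mask * f[x:x + m, y:y + n])
    return g
```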
SMOOTHING SPATIAL FILTERS
These filters are used for blurring and noise reduction. Blurring is used in preprocessing steps such as removal of small details from an image prior to object extraction, and bridging of small gaps in lines or curves.
Smoothing Linear Filters
The output of a smoothing linear spatial filter is simply the average of the pixels contained in the neighborhood of the filter mask. These filters are also called averaging filters or low pass filters.
The operation is performed by replacing the value of every pixel in the image by the average of the gray levels in the neighborhood defined by the filter mask. This process reduces sharp transitions in gray levels in the image.
A major application of smoothing is noise reduction, but because edges are also characterized by sharp transitions in gray level, smoothing filters have the undesirable side effect that they blur edges. Smoothing also reduces false contouring, an artifact caused by using an insufficient number of gray levels in the image.
Irrelevant details, meaning details that are not of interest, can also be removed by these kinds of filters. A spatial averaging filter in which all coefficients are equal is sometimes referred to as a "box filter". A weighted average filter is one in which pixels are multiplied by different coefficients, as sketched below.
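Two standard 3x3 smoothing masks as NumPy arrays: the box filter and a common weighted-average mask. Both are normalized so their coefficients sum to 1; the application line reuses the linear_filter sketch from the previous section and assumes img is defined.

```python
import numpy as np

# 3x3 box filter: all coefficients equal, normalized to sum to 1
box = np.ones((3, 3)) / 9.0

# 3x3 weighted average: the center pixel carries the largest weight
weighted = np.array([[1, 2, 1],
                     [2, 4, 2],
                     [1, 2, 1]], dtype=np.float64) / 16.0

# smoothed = linear_filter(img, box)   # linear_filter from the sketch above
```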
Order Statistics Filter
These are nonlinear spatial filters whose response is based on ordering (ranking) the pixels contained in the image area encompassed by the filter, and then replacing the value of the center pixel with the value determined by the ranking result.
The best example of this category is the median filter. In this filter the value of the center pixel is replaced by the median of the gray levels in the neighborhood of that pixel. Median filters are quite popular because, for certain types of random noise, they provide excellent noise-reduction capabilities, with considerably less blurring than linear smoothing filters. These filters are particularly effective in the case of impulse or salt-and-pepper noise, so called because of its appearance as white and black dots superimposed on an image. The median ξ of a set of values is such that half the values in the set are less than or equal to ξ and half are greater than or equal to it. In order to perform median filtering at a point in an image, first sort the values of the pixel in question and its neighbors, determine their median, and assign this value to that pixel.
Order-statistics filters are spatial filters whose response is based on ordering (ranking) the
pixels contained in the image area encompassed by the filter. The response of the filter at any
point is determined by the ranking result.
Median filter
The best-known order-statistics filter is the median filter, which, as its name implies, replaces the value of a pixel by the median of the gray levels in the neighborhood of that pixel:

f̂(x,y) = median_{(s,t) ∈ Sxy} {g(s,t)}

The original value of the pixel is included in the computation of the median. Median filters are quite popular because, for certain types of random noise, they provide excellent noise-reduction capabilities, with considerably less blurring than linear smoothing filters of similar size. Median filters are particularly effective in the presence of both bipolar and unipolar impulse noise. In fact, the median filter yields excellent results for images corrupted by this type of noise.
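A minimal NumPy sketch of the median filter, assuming an 8-bit grayscale array and an odd window size; borders are handled by replication here, which is one choice among the padding options discussed earlier. The function name is illustrative.

```python
import numpy as np

def median_filter(img, size=3):
    # Replace each pixel by the median of its size x size neighborhood
    # (the pixel itself is included); borders are replicated, not zeroed.
    a = size // 2
    f = np.pad(img, a, mode='edge')
    g = np.empty_like(img)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            g[x, y] = np.median(f[x:x + size, y:y + size])
    return g
```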
Max and min filters
Although the median filter is by far the order-statistics filter most used in image processing, it is by no means the only one. The median represents the 50th percentile of a ranked set of numbers, but the reader will recall from basic statistics that ranking lends itself to many other possibilities. For example, using the 100th percentile results in the so-called max filter, given by:

f̂(x,y) = max_{(s,t) ∈ Sxy} {g(s,t)}

This filter is useful for finding the brightest points in an image. Also, because pepper noise has very low values, it is reduced by this filter as a result of the max selection process in the subimage area Sxy. The 0th percentile filter is the min filter:

f̂(x,y) = min_{(s,t) ∈ Sxy} {g(s,t)}
SHARPENING SPATIAL FILTERS
The principal objective of sharpening is to highlight fine detail in an image or to enhance detail that has been blurred, either in error or as a natural effect of a particular method of image acquisition.
The applications of image sharpening range from electronic printing and medical imaging to industrial inspection and autonomous guidance in military systems.
As smoothing can be achieved by integration, sharpening can be achieved by spatial differentiation. The strength of the response of a derivative operator is proportional to the degree of discontinuity of the image at the point at which the operator is applied. Thus image differentiation enhances edges and other discontinuities and deemphasizes areas with slowly varying gray levels.
It is a common practice to approximate the magnitude of the gradient by using absolute values instead of squares and square roots.
A basic definition of the first order derivative of a one dimensional function f(x) is the difference

∂f/∂x = f(x+1) - f(x)

Similarly, we can define the second order derivative as the difference

∂²f/∂x² = f(x+1) + f(x-1) - 2f(x)

THE LAPLACIAN
The second order derivative is calculated using the Laplacian. It is the simplest isotropic filter. Isotropic filters are those whose response is independent of the direction of the discontinuities in the image to which the operator is applied.
The Laplacian of a two dimensional function f(x,y) is defined as

∇²f = ∂²f/∂x² + ∂²f/∂y²

The partial second order derivative in the x-direction is

∂²f/∂x² = f(x+1, y) + f(x-1, y) - 2f(x, y)

and similarly in the y-direction

∂²f/∂y² = f(x, y+1) + f(x, y-1) - 2f(x, y)

The digital implementation of the two-dimensional Laplacian is obtained by summing the two components:

∇²f = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)

This equation can be represented using any one of several equivalent masks.
The Laplacian highlights gray-level discontinuities in an image and deemphasizes regions of slowly varying gray levels. This tends to produce images with edge lines on a dark, featureless background. The background can be recovered by adding the original and Laplacian images.
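A minimal sketch of Laplacian sharpening, using the center -4 mask that corresponds to the digital Laplacian above (one of several equivalent masks) and reusing the linear_filter sketch from the spatial filtering section. Subtracting rather than adding the Laplacian is correct here because the mask's center coefficient is negative; the names are illustrative.

```python
import numpy as np

# One common digital Laplacian mask (center -4, 4-neighbors 1)
laplacian_mask = np.array([[0,  1, 0],
                           [1, -4, 1],
                           [0,  1, 0]], dtype=np.float64)

def laplacian_sharpen(img):
    # With a negative center coefficient, the sharpened image is
    # g(x,y) = f(x,y) - Laplacian(x,y)
    lap = linear_filter(img, laplacian_mask)   # sketch from the spatial
    g = img.astype(np.float64) - lap           # filtering section above
    return np.clip(g, 0, 255).astype(np.uint8)
```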
The derivative of a digital function is defined in terms of differences. Any first derivative
definition
(1) Must be zero in flat segments (areas of constant gray level values)
(2) Must be nonzero at the onset of a gray level step or ramp
(3) Must be nonzero along ramps.
Any second derivative definition
(1) Must be zero in flat areas
(2) Must be nonzero at the onset and end of a gray level step or ramp
(3) Must be zero along ramps of constant slope.
BASIS OF FILTERING IN FREQUENCY DOMAIN
Basic steps of filtering in the frequency domain:
i) Multiply the input image by (-1)^(x+y) to center the transform
ii) Compute F(u,v), the Fourier transform of the image
iii) Multiply F(u,v) by a filter function H(u,v)
iv) Compute the inverse DFT of the result of (iii)
v) Obtain the real part of the result of (iv)
vi) Multiply the result in (v) by (-1)^(x+y)

H(u,v) is called a filter because it suppresses certain frequencies in the image while leaving others unchanged.
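A minimal NumPy sketch of steps (i)-(vi), assuming img is a 2-D grayscale array and H is a transfer function of the same size centered at (M/2, N/2); the function name is illustrative.

```python
import numpy as np

def freq_filter(img, H):
    # H is an M x N transfer function centered at (M/2, N/2)
    f = img.astype(np.float64)
    M, N = f.shape
    x, y = np.meshgrid(np.arange(M), np.arange(N), indexing='ij')
    f = f * (-1.0) ** (x + y)     # (i)   center the transform
    F = np.fft.fft2(f)            # (ii)  forward DFT
    G = H * F                     # (iii) apply the filter
    g = np.fft.ifft2(G)           # (iv)  inverse DFT
    g = np.real(g)                # (v)   keep the real part
    return g * (-1.0) ** (x + y)  # (vi)  undo the centering
```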
FILTERS
SMOOTHING FREQUENCY DOMAIN FILTERS
Edges and other sharp transitions in the gray levels of an image contribute significantly to the high frequency content of its Fourier transform. Hence smoothing is achieved in the frequency domain by attenuating a specified range of high frequency components in the transform of a given image.
The basic model of filtering in the frequency domain is
G(u,v) = H(u,v)F(u,v)
where F(u,v) is the Fourier transform of the image to be smoothed. The objective is to find a filter function H(u,v) that yields G(u,v) by attenuating the high frequency components of F(u,v).
There are three types of low pass filters
1. Ideal
2. Butterworth
3. Gaussian
IDEAL LOW PASS FILTER
 It is the simplest of all the three filters.
 It cuts off all high frequency components of the Fourier transform that are at a distance greater than a specified distance D0 from the origin of the (centered) transform.
 It is called a two-dimensional ideal low pass filter (ILPF) and has the transfer function

H(u,v) = 1 if D(u,v) ≤ D0; H(u,v) = 0 if D(u,v) > D0

where D0 is a specified nonnegative quantity and D(u,v) is the distance from point (u,v) to the center of the frequency rectangle.
If the size of the image is M x N, the filter will also be of the same size, so the center of the frequency rectangle is (u,v) = (M/2, N/2) because of the centered transform, and

D(u,v) = [(u - M/2)² + (v - N/2)²]^(1/2)

Because this is the ideal case, all frequencies inside the circle are passed without any attenuation, whereas all frequencies outside the circle are completely attenuated.
For an ideal low pass filter cross section, the point of transition between H(u,v) = 1 and H(u,v) = 0 is called the cutoff frequency.
One way to establish a set of standard cutoff frequency loci is to compute circles that enclose specified amounts of the total image power PT:

PT = Σ_{u=0}^{M-1} Σ_{v=0}^{N-1} P(u,v), where P(u,v) = |F(u,v)|²

If the transform has been centered, a circle of radius r with origin at the center of the frequency rectangle encloses α percent of the power, where

α = 100 [Σ_u Σ_v P(u,v) / PT]

and the summation is taken over points inside the circle.

For example:
r = 5: α = 92%; the most blurred image, because all sharp details are removed.
r = 15: α = 94.6%.
r = 30: α = 96.4%.
r = 80: α = 98%; maximum ringing, with only 2% of the power removed.
r = 230: α = 99.5%; very slight blurring, with only 0.5% of the power removed.

The ILPF is not suitable for practical usage, but it can be implemented in any computer system.
Visualization: Ideal Low Pass Filter

Fig: Ideal Low Pass Filter 3-D view and 2-D view and line graph
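A minimal sketch of the ILPF transfer function, plus (as comments) the power computation described above and an application via the freq_filter sketch from earlier; img is assumed to be a 2-D grayscale array, and the names are illustrative.

```python
import numpy as np

def ideal_lowpass(M, N, D0):
    # H(u,v) = 1 inside the circle of radius D0, 0 outside
    u, v = np.meshgrid(np.arange(M), np.arange(N), indexing='ij')
    D = np.sqrt((u - M / 2) ** 2 + (v - N / 2) ** 2)
    return (D <= D0).astype(np.float64)

# Percent of total image power enclosed by radius D0 (centered spectrum):
# P = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
# alpha = 100.0 * P[ideal_lowpass(*P.shape, D0) > 0].sum() / P.sum()
# blurred = freq_filter(img, ideal_lowpass(*img.shape, D0))
```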
BUTTERWORTH LOW PASS FILTER
It has a parameter called the filter order. For high values of the filter order it approaches the form of the ideal filter, whereas for low filter order values it approaches a Gaussian filter. It may thus be viewed as a transition between the two extremes.
The transfer function of a Butterworth low pass filter (BLPF) of order n with cutoff frequency at distance D0 from the origin is defined as

H(u,v) = 1 / [1 + [D(u,v)/D0]^(2n)]

The most commonly used value of n is 2.


Unlike the ILPF, the BLPF does not have a sharp discontinuity that establishes a clear cutoff between passed and filtered frequencies. Defining a cutoff frequency is therefore the main concern with these filters. This filter gives a smooth transition in blurring as a function of increasing cutoff frequency. A Butterworth filter of order 1 has no ringing; ringing increases as a function of filter order (higher orders lead to negative values in the corresponding spatial filter).
GAUSSIAN LOW PASS FILTER
The transfer function of a Gaussian low pass filter (GLPF) is

H(u,v) = e^(-D²(u,v) / 2D0²)

where D(u,v) is the distance of point (u,v) from the center of the transform and D0 is the specified cutoff frequency.
The filter has the important characteristic that its inverse Fourier transform is also Gaussian, so a GLPF produces no ringing.
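Minimal sketches of the Butterworth and Gaussian low-pass transfer functions, sharing a helper for the distance D(u,v); the function names are illustrative, and any of these can be passed to the freq_filter sketch from earlier.

```python
import numpy as np

def distances(M, N):
    # D(u,v): distance of each point from the center of the frequency rectangle
    u, v = np.meshgrid(np.arange(M), np.arange(N), indexing='ij')
    return np.sqrt((u - M / 2) ** 2 + (v - N / 2) ** 2)

def butterworth_lowpass(M, N, D0, n=2):
    # H = 1 / (1 + (D/D0)^(2n)): smooth transition, little ringing
    return 1.0 / (1.0 + (distances(M, N) / D0) ** (2 * n))

def gaussian_lowpass(M, N, D0):
    # H = exp(-D^2 / (2 D0^2)): no ringing at all
    D = distances(M, N)
    return np.exp(-(D ** 2) / (2.0 * D0 ** 2))
```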

SHARPENING FREQUENCY DOMAIN FILTERS


Image sharpening can be achieved by a high pass filtering process, which attenuates the low frequency components without disturbing the high frequency information. These filters are radially symmetric and completely specified by a cross section.
If we have the transfer function of a low pass filter, the corresponding high pass filter can be obtained using the equation
Hhp(u,v) = 1 - Hlp(u,v)
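This relation makes high-pass construction a one-liner given the low-pass sketches above. The sizes and cutoff below are illustrative values, not from the text.

```python
import numpy as np

# Illustrative size and cutoff; the low-pass sketches above are reused
M, N, D0 = 256, 256, 30

ideal_hp = 1.0 - ideal_lowpass(M, N, D0)
butterworth_hp = 1.0 - butterworth_lowpass(M, N, D0, n=2)
gaussian_hp = 1.0 - gaussian_lowpass(M, N, D0)

# sharpened = freq_filter(img, gaussian_hp)   # pipeline sketch from earlier
```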
IDEAL HIGH PASS FILTER
This filter is the opposite of the ideal low pass filter and has a transfer function of the form

H(u,v) = 0 if D(u,v) ≤ D0; H(u,v) = 1 if D(u,v) > D0
BUTTERWORTH HIGH PASS FILTER


The transfer function of a Butterworth high pass filter (BHPF) of order n is given by the equation

H(u,v) = 1 / [1 + [D0/D(u,v)]^(2n)]
GAUSSIAN HIGH PASS FILTER


The transfer function of a Gaussian high pass filter (GHPF) is given by the equation

H(u,v) = 1 - e^(-D²(u,v) / 2D0²)
HOMOMORPHIC FILTERING
Homomorphic filters are widely used in image processing for compensating the effect of nonuniform illumination in an image. Pixel intensities in an image represent the light reflected from the corresponding points in the objects. According to the illumination-reflectance image model, an image f(x,y) may be characterized by two components:
(1) the amount of source light incident on the scene being viewed, and
(2) the amount of light reflected by the objects in the scene.
These portions of light are called the illumination and reflectance components, and are denoted i(x,y) and r(x,y) respectively. The functions i(x,y) and r(x,y) combine multiplicatively to give the image function f(x,y):
f(x,y) = i(x,y) · r(x,y)
where 0 < i(x,y) < ∞ and 0 < r(x,y) < 1.
Homomorphic filters are used in situations where the image is subjected to multiplicative interference or noise, as depicted in the equation above. We cannot easily use this product to operate separately on the frequency components of illumination and reflectance, because the Fourier transform of f(x,y) is not separable; that is, F[f(x,y)] ≠ F[i(x,y)] · F[r(x,y)].
We can separate the two components by taking the logarithm of both sides:
ln f(x,y) = ln i(x,y) + ln r(x,y)
Taking Fourier transforms on both sides,
F[ln f(x,y)] = F[ln i(x,y)] + F[ln r(x,y)], that is, F(x,y) = I(x,y) + R(x,y),
where F, I and R are the Fourier transforms of ln f(x,y), ln i(x,y), and ln r(x,y) respectively.
The function F represents the Fourier transform of the sum of two images: a low-frequency illumination image and a high-frequency reflectance image. If we now apply a filter with a transfer function that suppresses low-frequency components and enhances high-frequency components, we can suppress the illumination component and enhance the reflectance component. Taking the inverse transform of the filtered result gives the sum of the processed log-components, and taking the anti-logarithm then yields the enhanced image
f'(x,y) = i'(x,y) · r'(x,y)
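A minimal sketch of the whole homomorphic pipeline (log, DFT, filter, inverse DFT, anti-log). The specific filter shape used here, a Gaussian-based high-emphasis function with low-frequency gain gamma_l < 1 and high-frequency gain gamma_h > 1, is a common choice but an assumption on my part, as are the parameter values; img is assumed to be a 2-D grayscale array.

```python
import numpy as np

def homomorphic(img, D0=30.0, gamma_l=0.5, gamma_h=2.0, c=1.0):
    f = np.log1p(img.astype(np.float64))       # ln f = ln i + ln r
    F = np.fft.fftshift(np.fft.fft2(f))        # centered spectrum
    M, N = img.shape
    u, v = np.meshgrid(np.arange(M), np.arange(N), indexing='ij')
    D2 = (u - M / 2) ** 2 + (v - N / 2) ** 2
    # Gaussian high-emphasis filter: ~gamma_l at low frequencies
    # (illumination), ~gamma_h at high frequencies (reflectance)
    H = (gamma_h - gamma_l) * (1.0 - np.exp(-c * D2 / D0 ** 2)) + gamma_l
    g = np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))
    return np.expm1(g)                         # anti-logarithm
```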

COLOR IMAGE ENHANCEMENT


Color image enhancement: an image enhancement technique that consists of changing the colors of an image or assigning colors to various parts of an image.
1. Color code regions of an image based on frequency content.
2. The Fourier transform of the image is modified independently by three filters to produce three images that serve as the Fourier transforms of the R, G, B components of a color image.
3. Additional processing can be any image enhancement algorithm, such as histogram equalization.
4. Take the inverse transforms to obtain the three color components, which are combined into the enhanced color image. A sketch of this pipeline follows.
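A minimal sketch of steps 1, 2 and 4 above: one grayscale image is filtered by three different frequency-domain transfer functions, and the three inverse transforms become the R, G, B channels. The choice of filters (for example low-, band- and high-pass) and the function name are illustrative assumptions.

```python
import numpy as np

def frequency_pseudocolor(img, filters):
    # filters: three centered transfer functions, one per R, G, B channel,
    # e.g. low-pass, band-pass and high-pass to color-code frequency content
    F = np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))
    planes = []
    for H in filters:
        g = np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))
        planes.append(np.clip(g, 0, 255).astype(np.uint8))
    return np.dstack(planes)   # stack into an M x N x 3 color image
```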
