DSIP Lab Manual
LAB MANUAL
Subject: Digital Signal and Image Processing
Class: B.E.    Sem: VII
AY: 2019-20
7. Implementation of Contrast Stretching, Dynamic Range Compression & Bit Plane Slicing
10. Implementation of Edge Detection using Sobel and Prewitt Masks
3. Sine Wave:
The sine function operates element-wise on arrays. Its domain and range include complex values.
4. Cosine wave:
The cosine function operates element-wise on arrays. Its domain and range include complex values.
5. Exponential Signal:
The exponential function is an elementary function that operates element-wise on arrays. Its domain includes complex numbers: for z = x + iy,
e^z = e^x (cos y + i sin y)
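The listings for these signals are not reproduced in this copy of the manual; a minimal illustrative sketch (the sampling interval, frequency and growth rate are assumed values) would be:
% Sketch: generating sine, cosine and exponential signals (illustrative)
t = 0:0.01:1;                 % time axis: 1 s sampled every 0.01 s (assumed)
xs = sin(2*pi*5*t);           % 5 Hz sine wave
xc = cos(2*pi*5*t);           % 5 Hz cosine wave
xe = exp(2*t);                % growing exponential signal
subplot(3,1,1); plot(t,xs); title('Sine wave');
subplot(3,1,2); plot(t,xc); title('Cosine wave');
subplot(3,1,3); plot(t,xe); title('Exponential signal');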
% Generating Unit Step Signal
t= -5:1:5;
y=[zeros(1,5),ones(1,6)];
subplot(2,2,1);
stem(t,y);
title('Unit Step Signal');
xlabel('time');
ylabel('Amplitude');
Output:
Experiment No: 2
Convolution:
Convolution is a concept that extends to all systems that are both linear and
time-invariant (LTI). The idea of discrete-time convolution is exactly the same as that of
continuous-time convolution. For this reason, it may be useful to look at both versions to
help your understanding of this extremely important concept. Recall that convolution is a
very powerful tool in determining a system's output from knowledge of an arbitrary input
and the system's impulse response. It is also helpful to view convolution graphically and to experiment with it, for example using the interactive applets available online, which offer different perspectives on this important concept.
Convolution Sum
For an LTI system with input x[n] and impulse response h[n], the output is
y[n] = Σ_{k=−∞}^{∞} x[k] h[n−k]        (2.1)
     = Σ_{k=−∞}^{∞} h[k] x[n−k]        (2.2)
by a simple change of variables. The sum in Eqs. (2.1) and (2.2) is called the convolution sum of the sequences x[n] and h[n], and is represented compactly as y[n] = x[n] * h[n].
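The listing of the convolution program is not reproduced here; a minimal sketch using MATLAB's built-in conv, consistent with the prompts in the output shown below, would be:
% Sketch: convolution of two sequences (illustrative)
x = input('Enter the first sequences ');
h = input(' Enter the second sequences ');
y = conv(x,h);                          % convolution sum of x[n] and h[n]
subplot(3,1,1); stem(x); title(' First sequence ');
subplot(3,1,2); stem(h); title(' Second sequence ');
subplot(3,1,3); stem(y); title(' Convolution of above sequences ');
disp(y);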
Output:
Enter the first sequences [1 2 3 4]
Enter the second sequences [2 3 5]
Correlation:
Correlation (often measured as a correlation coefficient) indicates the
strength and direction of a linear relationship between two random variables. That is in
contrast with the usage of the term in colloquial speech, which denotes any relationship,
not necessarily linear. In general statistical usage, correlation or co-relation refers to the
departure of two random variables from independence. In this broad sense there are
several coefficients, measuring the degree of correlation, adapted to the nature of the
data.
Auto-correlation: correlation of a signal with itself, given by
r_xx[l] = Σ_{n} x[n] x[n−l]
Cross-correlation: correlation of one signal with another signal, given by
r_xy[l] = Σ_{n} x[n] y[n−l]
Algorithm:
If x(n) and h(n) are finite-length signals, their cross-correlation is r_xh[l] = Σ_{n} x[n] h[n−l]. Unlike in convolution, folding (time reversal) of h(n) does not occur in cross-correlation.
Code:
% Program to find correlation of two signals
x=input ('Enter the first sequences ');
h=input(' Enter the second sequences ');
subplot(2,2,1);
stem(x);
title(' First sequence ');
subplot(2,2,2);
stem(h);
title(' Second sequence ');
c=xcorr(x,h);
subplot(2,1,2);
stem(c);
title(' Cross correlation of above sequences ');
disp(c);
Columns 1 through 5
0.0000 5.0000 13.0000 23.0000 33.0000
Columns 6 through 7
18.0000 8.0000
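As a cross-check (not part of the manual's listing), cross-correlation can also be obtained as convolution with the time-reversed second sequence, which is exactly the statement that folding of h(n) does not occur in correlation:
% Cross-check: correlation expressed through convolution (illustrative)
x = [1 2 3 4]; h = [2 3 5];
c1 = xcorr(x,h)              % [0 5 13 23 33 18 8] (xcorr zero-pads the shorter sequence)
c2 = conv(x,fliplr(h))       % [5 13 23 33 18 8]   (same values without the padding zero)
r  = xcorr(x);               % auto-correlation: correlation of x with itself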
Theory:
The discrete-time Fourier transform (DTFT) provides the frequency-domain representation for absolutely summable sequences. However, it is a function of the continuous variable ω and is defined for infinite-length sequences, which makes it impractical. In other words, the discrete-time Fourier transform is not a numerically computable transform.
Therefore we turn our attention to a numerically computable transform. It is obtained by sampling the discrete-
time Fourier transform in the frequency domain. When applied to finite-duration sequences, this gives us a
new transform, called the Discrete Fourier Transform (or DFT). The DFT is the ultimate numerically
computable Fourier transform for arbitrary finite-duration sequences.
X(k) = Σ_{n=0}^{N−1} x(n) W_N^{nk},   0 ≤ k ≤ N−1        (1)
where W_N = e^{−j2π/N}. Note that the DFT X(k) is also an N-point sequence, that is, it is not defined outside 0 ≤ k ≤ N−1.
The inverse discrete Fourier transform (IDFT) of an N-point DFT X(k) is given by (once again, x(n) is not defined outside 0 ≤ n ≤ N−1)
x(n) = (1/N) Σ_{k=0}^{N−1} X(k) W_N^{−nk},   0 ≤ n ≤ N−1        (2)
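As an aside, equation (2) can be verified with a short loop. The sketch below is illustrative only and is not part of the experiment program that follows; it assumes X already holds the N-point DFT computed below.
% Sketch: inverse DFT of an N-point sequence X (illustrative)
N = length(X);
xr = zeros(1,N);
for n = 0:N-1
for k = 0:N-1
xr(n+1) = xr(n+1) + X(k+1)*exp(1j*2*pi/N*n*k);   % accumulate X(k)*W_N^(-nk)
end
end
xr = xr/N;        % recovers the original x(n) up to round-off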
x = [2 3 -1 4];
N = length(x);
X = zeros(N,1);    % pre-allocate the DFT output
for k = 0:N-1
for n = 0:N-1
X(k+1) = X(k+1) + x(n+1)*exp(-1j*2*pi/N*n*k);   % accumulate x(n)*W_N^(nk)
end
end
t = 0:N-1;
subplot(311)
stem(t,x);
xlabel('Time (s)');
ylabel('Amplitude');
title('Time domain - Input sequence')
subplot(312)
stem(t,abs(X))
xlabel('Frequency');
ylabel('|X(k)|');
title('Frequency domain - Magnitude response')
subplot(313)
stem(t,angle(X))
xlabel('Frequency');
ylabel('Phase');
title('Frequency domain - Phase response')
X          % display the DFT values X(k)
angle(X)   % display the phase
X =
8.0000 + 0.0000i
3.0000 + 1.0000i
-6.0000 - 0.0000i
3.0000 - 1.0000i
ans =
0
0.3218
-3.1416
-0.3218
The response X[k] is what we expected; it matches exactly the values calculated by hand.
The plots are:
Conclusion: Thus we have studied the DFT.
Experiment No.5
Theory:
In the direct DFT computation seen earlier, the computational cost is high, and we want to reduce it. This can be done with the FFT, or fast Fourier transform. The FFT is simply an algorithm for computing the discrete Fourier transform with a reduced number of operations. One advantage of the FFT is that it allows efficient implementation of operations such as FIR filtering. Mathematically, the transform computed by the FFT is
X[k] = Σ_{n=0}^{N−1} x[n] W_N^{nk}
Let us take an example to understand it better. We consider eight points, x0 to x7, and choose the even-indexed terms in one group and the odd-indexed terms in the other. A diagrammatic view of this split is shown below.
Here, points x0, x2, x4 and x6 are grouped into one category and, similarly, points x1, x3, x5 and x7 into another. Each group can be split again into groups of two, and the computation proceeds on these smaller groups. Let us see how this splitting into two helps the computation.
X[k] = Σ_{r=0}^{N/2−1} x[2r] W_N^{2rk} + Σ_{r=0}^{N/2−1} x[2r+1] W_N^{(2r+1)k}
     = Σ_{r=0}^{N/2−1} x[2r] W_{N/2}^{rk} + W_N^{k} Σ_{r=0}^{N/2−1} x[2r+1] W_{N/2}^{rk}
     = G[k] + W_N^{k} H[k]
Initially, we took an eight-point sequence, but later we broke that one into two parts G[k] and H[k].
G[k] stands for the even part whereas H[k] stands for the odd part. If we want to realize it through a
diagram, then it can be shown as below −
From the above figure, we can see that
W_8^4 = −1
W_8^5 = −W_8^1
W_8^6 = −W_8^2
W_8^7 = −W_8^3
Similarly, the final values can be written as follows −
X[4] = G[0] − H[0]
X[5] = G[1] − W_8^1 H[1]
X[6] = G[2] − W_8^2 H[2]
X[7] = G[3] − W_8^3 H[3]
The above is a periodic series. The limitation at this stage is that the sequence has only been broken down to 4-point DFTs. Let us therefore break the structure down further; we then obtain structures like the following.
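A minimal MATLAB sketch of one decimation-in-time stage for N = 8, checked against the built-in fft (illustrative only; the test sequence is the one used in the example below):
% Sketch: one radix-2 decimation-in-time stage, N = 8 (illustrative)
x = [2 1 -1 -3 0 1 2 1];       % example sequence from this experiment
N = length(x);
G = fft(x(1:2:end));           % N/2-point DFT of the even-indexed samples
H = fft(x(2:2:end));           % N/2-point DFT of the odd-indexed samples
k = 0:N/2-1;
W = exp(-1j*2*pi*k/N);         % twiddle factors W_N^k
X = [G + W.*H, G - W.*H];      % X[k] = G[k]+W_N^k H[k], X[k+N/2] = G[k]-W_N^k H[k]
max(abs(X - fft(x)))           % should be close to zero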
Example
Consider the sequence x[n]={ 2,1,-1,-3,0,1,2,1}. Calculate the FFT.
N=4;
x=[2 3 -1 4];
t=0:N-1;
subplot(311)
stem(t,x);
xlabel('Time (s)');
ylabel('Amplitude');
title('Input sequence')
subplot(312);
stem(0:N-1,abs(fft(x)));
xlabel('Frequency');
ylabel('|X(k)|');
title('Magnitude Response');
subplot(313);
stem(0:N-1,angle(fft(x)));
xlabel('Frequency');
ylabel('Phase');
title('Phase Response');
X = fft(x).'        % display the DFT values X(k)
angle(X)            % display the phase
X =
8.0000 + 0.0000i
3.0000 + 1.0000i
-6.0000 - 0.0000i
3.0000 - 1.0000i
ans =
0
0.3218
-3.1416
-0.3218
The response X[k] is what we expected; it matches exactly the values calculated by hand.
The plots are:
Conclusion: Thus we have studied the FFT.
Experiment No.6
A: Image negation:
The transfer function is defined as
s = T(r) = (L − 1) − r
where L is the total number of gray levels available.
Application: it is useful when we have to analyze small white details embedded in a dark background, such as X-ray images.
Program:
clear all;
a=imread('Cameraman.tif');
figure(1),imshow(a);
[row col]=size(a);
b=max(a);
l=max(b);
for i=1:row
for j=1:col
p(i,j)=l-a(i,j);
end
end
figure(2),imshow(uint8(p));
Output:
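For a uint8 image the same negative can also be obtained in one line with the Image Processing Toolbox function imcomplement; this is shown only as a cross-check of the loop above:
% Cross-check of the negative transform (illustrative)
a = imread('Cameraman.tif');
p = imcomplement(a);           % equivalent to 255 - a for a uint8 image
figure, imshow(p);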
B. Gray level slicing:
It is used to highlight specific range of gray levels. One approach is to display a high
value for all gray levels in the range of interest and a low value for all other gray levels.
In terms of equation,
s = x,  if A ≤ r ≤ B
    y,  otherwise
This is called gray-level slicing without background since, except for the highlighted object, the entire background is destroyed.
The second approach is to brighten the desired range
but preserve the background.
In terms of equation,
s = x,  if A ≤ r ≤ B
    r,  otherwise
Program:
clear all
clc
r=imread('cameraman.tif');
s=r;
z=double(r);
[row col]=size(r);
for i=1:1:row
for j=1:1:col
if ((z(i,j)>50)&&(z(i,j)<150))
z(i,j)=255;
else
z(i,j)=r(i,j);
end
end
end
subplot(2,1,1);
imshow(r);
title('original image');
subplot(2,1,2);
imshow(uint8(z));
title('Image after slicing')
Output:
C. Bit plane slicing:
Program:
clear all;
close all;
a=imread('cameraman.tif');
figure(1),imshow(a);
[row col]=size(a);
k=input('enter bit plane no');
for i=1:row
for j=1:col
a1=bitget(a(i,j),k);
if a1==1
p1(i,j)=255;
else
p1(i,j)=0;
end
end
end
figure(2),imshow(uint8(p1));
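The same idea extends to all eight planes of a uint8 image. The short sketch below is illustrative and not part of the manual's listing:
% Sketch: display all eight bit planes of a uint8 image (illustrative)
a = imread('cameraman.tif');
for k = 1:8
plane = bitget(a,k);           % k-th bit of every pixel (0 or 1)
subplot(2,4,k);
imshow(logical(plane));        % show the plane as a binary image
title(['Bit plane ' num2str(k)]);
end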
The light intensities in an image often do not use the whole available range and are mostly concentrated in a narrow band. Images with such poor intensity distributions can be improved with a process known as histogram equalization, which in essence redistributes the intensity values.
The function p(rk) represents the fraction of the total number of pixels with gray value rk.
Histogram provides a global description of the appearance of the image.
If we consider the gray values in the image as realizations of a random variable R, with
some probability density, histogram provides an approximation to this probability
density. In other words,
Pr(R = r_k) ≈ p(r_k)
Histogram Equalization
The histogram equalization is an approach to enhance a given image. The approach is to
design a transformation T(.) such that the gray values in the output is uniformly
distributed in [0, 1]. Let us assume for the moment that the input image to be enhanced
has continuous gray values, with r = 0 representing black and r = 1 representing white.
We need to design a gray value transformation s = T(r), based on the histogram of the
input image, which will enhance the image.
We consider the gray values in the input image and output image as random variables in
the interval [0, 1].
Let pin(r) and pout(s) denote the probability density of the Gray values in the input and
output images.
· If pin(r) and T(r) are known, and r = T⁻¹(s) satisfies condition 1, we can write (a result from probability theory):
pout(s) = pin(r) |dr/ds|, evaluated at r = T⁻¹(s)
· One way to enhance the image is to design a transformation T(.) such that the gray values in the output are uniformly distributed in [0, 1], i.e. pout(s) = 1 for 0 ≤ s ≤ 1.
· In terms of histograms, the output image will have all gray values in "equal proportion".
· The required transformation is s = T(r) = ∫_0^r pin(w) dw. Note that this is the cumulative distribution function (CDF) of pin(r) and satisfies the previous two conditions.
From the previous equation and using the fundamental theorem of calculus,
ds/dr = pin(r)
· The output probability density function is uniform, regardless of the input. Thus, using a transformation function equal to the CDF of the input gray values r, we can obtain an image with uniform gray values. This usually results in an enhanced image, with an increase in the dynamic range of pixel values. The following program implements histogram equalization.
Example
Original image and its histogram
Program:
%%histogram equalization%%
clear all
clc
a=imread('cameraman.tif');
a=double(a);
big=max((max(a)));
[row,col]=size(a);
c=row*col;
h=zeros(1,300);
z=zeros(1,300);
for n=1:1:row
for m=1:1:col
if (a(n,m)==0)
a(n,m)=1;
end
end
end
for n=1:1:row
for m=1:1:col
t=a(n,m);
h(t)=h(t)+1;
end
end
pdf=h/c;
cdf(1)=pdf(1);
for x=2:1:big
cdf(x)=pdf(x)+cdf(x-1);
end
new=round(cdf*big);
new=new+1;
for p=1:1:row
for q=1:1:col
temp=a(p,q);
b(p,q)=new(temp);
t=b(p,q);
z(t)=z(t)+1;
end
end
b=b-1;
subplot(2,2,1)
imshow(uint8(a))
title('Original image')
subplot(2,2,2)
bar(h)
title('Histogram of original Image')
subplot(2,2,3)
imshow(uint8(b))
title('Equilized Image')
subplot(2,2,4)
bar(z)
title('Histogram of Equilized Image')
Output:
[Figure: original image, histogram of original image, equalized image, histogram of equalized image]
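As a cross-check (not part of the manual's listing), the Image Processing Toolbox function histeq performs the same equalization:
% Cross-check with the built-in histogram equalization (illustrative)
I = imread('cameraman.tif');
J = histeq(I);                 % equalizes over 64 output levels by default
subplot(2,2,1), imshow(I), title('Original image');
subplot(2,2,2), imhist(I), title('Histogram of original image');
subplot(2,2,3), imshow(J), title('Equalized image');
subplot(2,2,4), imhist(J), title('Histogram of equalized image');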
Experiment No. 9
There are two types of masks used for averaging: (1) the standard average filter and (2) the weighted average filter.
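The manual's program below applies a median filter to an image corrupted with salt & pepper noise; for the averaging masks described above, a minimal illustrative sketch (assuming the standard 1/9 mask and MATLAB's conv2) is:
% Sketch: 3x3 standard averaging filter (illustrative)
I = imread('cameraman.tif');
a = double(I);
w = ones(3)/9;                 % standard average mask; a weighted mask differs only in w
b = conv2(a,w,'same');         % filter the image
figure, imshow(uint8(b));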
Program:
clear all
I = imread('cameraman.tif');
J=imnoise(I,'salt & pepper',0.02); %adding noise
a=double(J);
b=a;
[row,col]=size(a);
for x=2:1:row-1
for y=2:1:col-1
a1=[a(x-1,y-1) a(x-1,y) a(x-1,y+1) a(x,y-1) a(x,y) a(x,y+1) a(x+1,y-1) a(x+1,y) a(x+1,y+1)];
a2=sort(a1);
med=a2(5); %the fifth value is the median
b(x,y)=med;
end
end
figure(1)
imshow(uint8(J))
figure(2)
imshow(uint8(b))
Output:
The 3×3 standard averaging mask is (1/9) × [1 1 1; 1 1 1; 1 1 1]. The program below instead applies the high-pass mask [−1 −1 −1; −1 8 −1; −1 −1 −1].
The sum of the coefficients is zero. So, when the mask is over an area of constant gray
level, the output of the filter is zero. The filter reduces the average gray level of the image
to zero. It means, there are some negative gray levels in the output of the filter. So, some
form of scaling is required. The problem with this filter is that only sharp transitions such
as edges and boundaries are dominant in the output image and all constant regions are
destroyed.
f=imread('cameraman.tif');
a=double(f);
w=[-1 -1 -1; -1 8 -1; -1 -1 -1];
[row,col]=size(a);
for x=2:1:row-1;
for y=2:1:col-1
a1(x,y)=[w(1)*a(x-1,y-1)+w(2)*a(x-1,y)+w(3)*a(x-1,y+1)+w(4)*a(x,y-1)+w(5)*a(x,y)+w(6)*a(x,y+1)+w(7)*a(x+1,y-1)+w(8)*a(x+1,y)+w(9)*a(x+1,y+1)];
end
end
figure(1)
imshow(uint8(a))
figure(2)
imshow(uint8(a1))
Output:
Experiment No. 10
The key difference between a boundary and an edge is that, the boundary forms a closed
path whereas edge need not. An ideal edge is a set of connected pixels, each of which is
located at the step transition in gray level. In practice, most of the times, edges are
blurred due to some or other reasons. So, edges are more closely modeled as having a
ramp like profile. An edge pixel is any pixel contained in the ramp and an edge would be
a set of such pixels that are connected.
The first order derivative is positive at the points of transition into and out of the ramp as
we move from left to right along the profile, and it is constant for points in the ramp and
zero in the area of constant gray levels.
The second order derivative is positive at the transition associated with the dark side of
the edge, negative at the transition with the light side of the edge, and zero along the
ramp and constant gray level.
The magnitude of the first order derivative can be used to detect the presence of the edge
at a point in an image. Similarly, the sign of the second derivative can be used to
determine whether an edge pixel is on the dark or light side of an edge.
There are two properties of second order derivative.
(i) it produces two values for every edge.
(ii) An imaginary straight line joining the extreme positive and negative values of
the second order derivative would cross zero near the mid point of the edge.
This zero crossing property is useful for locating the centers of the thick
edges.
Both the derivatives are very sensitive to the noise.
The first order derivatives are computed using the Gradient and second order derivatives
are obtained using the Laplacian.
The gradient of f(x, y) is the vector [Gx, Gy]ᵀ, and its magnitude,
mag(∇f) = sqrt(Gx² + Gy²) ≈ |Gx| + |Gy|,
gives the maximum rate of change of f(x,y) per unit distance in the direction of the Gradient.
The direction of the Gradient is
α(x, y) = tan⁻¹(Gy / Gx),
where the angle is measured with respect to the x-axis.
The direction of edge at (x,y) is perpendicular to the direction of the Gradient at that
point.
The Robert‟s, Perwitt‟s or Sobel‟s operators can be used to approximate the magnitude of
the Gradient.
Prewitts Operator:
mag(∇f) ≈ |(z7+z8+z9) − (z1+z2+z3)| + |(z3+z6+z9) − (z1+z4+z7)|
The difference between 3rd and 1st row of 3x3 mask approximates the derivative in x
direction and the difference between 3rd and 1st columns approximates the derivative in y
direction.
−1 −1 −1        −1 0 1
 0  0  0        −1 0 1
 1  1  1        −1 0 1
These masks are called Prewitt‟s operators.
Sobel Operator
mag(∇f) ≈ |(z7+2z8+z9) − (z1+2z2+z3)| + |(z3+2z6+z9) − (z1+2z4+z7)|
−1 −2 −1        −1 0 1
 0  0  0        −2 0 2
 1  2  1        −1 0 1
These masks are called Sobel operators.
Robert’s Operator:
The most common method of differentiation in IP application is the gradient. For a
function f(x,y) the gradient of f at (x,y) is defined as a vector,
∇f = [Gx; Gy] = [∂f/∂x; ∂f/∂y]
The magnitude of this vector is
mag(∇f) = sqrt(Gx² + Gy²) ≈ |Gx| + |Gy|.
Consider a 3×3 neighbourhood of gray levels
z1 z2 z3
z4 z5 z6
z7 z8 z9
where the z's denote the gray levels. The gradient at z5 can be approximated in many ways; the simplest is
mag(∇f) ≈ |z5 − z6| + |z5 − z8|.
This equation can be implemented using the following masks of size 2×2.
1 −1        1 0
0  0       −1 0
These masks are called Robert‟s gradient operators.
Program:
%Program for Prewitts Operator(Edge Detection)
clear all
clc
aa=imread('cameraman.tif');
a=double(aa);
[r c]=size(a);
w1=[-1 -1 -1; 0 0 0; 1 1 1];
w2=[-1 0 1; -1 0 1; -1 0 1];
for x=2:r-1
for y=2:c-1
a1(x,y)=[w1(1)*a(x-1,y-1)+w1(2)*a(x-1,y)+w1(3)*a(x-1,y+1)+w1(4)*a(x,y-1)+w1(5)*a(x,y)+w1(6)*a(x,y+1)+w1(7)*a(x+1,y-1)+w1(8)*a(x+1,y)+w1(9)*a(x+1,y+1)];
a2(x,y)=[w2(1)*a(x-1,y-1)+w2(2)*a(x-1,y)+w2(3)*a(x-1,y+1)+w2(4)*a(x,y-1)+w2(5)*a(x,y)+w2(6)*a(x,y+1)+w2(7)*a(x+1,y-1)+w2(8)*a(x+1,y)+w2(9)*a(x+1,y+1)];
end
end
a3=abs(a1)+abs(a2);
figure(1)
imshow(uint8(a3))
Output:
Program:
%Program for Sobel Operator(Edge Detection)
clear all
clc
aa=imread('cameraman.tif');
a=double(aa);
[r c]=size(a);
w1=[-1 -2 -1; 0 0 0; 1 2 1];
w2=[-1 0 1; -2 0 2; -1 0 1];
for x=2:r-1
for y=2:c-1
a1(x,y)=[w1(1)*a(x-1,y-1)+w1(2)*a(x-1,y)+w1(3)*a(x-1,y+1)+w1(4)*a(x,y-1)+w1(5)*a(x,y)+w1(6)*a(x,y+1)+w1(7)*a(x+1,y-1)+w1(8)*a(x+1,y)+w1(9)*a(x+1,y+1)];
a2(x,y)=[w2(1)*a(x-1,y-1)+w2(2)*a(x-1,y)+w2(3)*a(x-1,y+1)+w2(4)*a(x,y-1)+w2(5)*a(x,y)+w2(6)*a(x,y+1)+w2(7)*a(x+1,y-1)+w2(8)*a(x+1,y)+w2(9)*a(x+1,y+1)];
end
end
a3=abs(a1)+abs(a2);
figure(1)
imshow(uint8(a3))
Output:
Program:
%Program for Roberts Operator(Edge Detection)
clear all
clc
aa=imread('cameraman.tif');
a=double(aa);
[r c]=size(a);
w1=[1 0; 0 -1];
w2=[0 1; -1 0];
for x=2:r-1
for y=2:c-1
a1(x,y)=[w1(1)*a(x,y)+w1(2)*a(x,y+1)+w1(3)*a(x+1,y)+w1(4)*a(x+1,y+1)];
a2(x,y)=[w2(1)*a(x,y)+w2(2)*a(x,y+1)+w2(3)*a(x+1,y)+w2(4)*a(x+1,y+1)];
end
end
a3=abs(a1)+abs(a2);
figure(1)
imshow(uint8(a3))
Output:
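The Image Processing Toolbox also provides these operators directly through the edge function; the calls below are shown only as a cross-check of the hand-written loops above:
% Cross-check with the built-in edge function (illustrative)
I = imread('cameraman.tif');
Ep = edge(I,'prewitt');        % Prewitt gradient masks
Es = edge(I,'sobel');          % Sobel gradient masks
Er = edge(I,'roberts');        % Roberts cross-gradient masks
figure
subplot(1,3,1), imshow(Ep), title('Prewitt');
subplot(1,3,2), imshow(Es), title('Sobel');
subplot(1,3,3), imshow(Er), title('Roberts');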