DIGITAL SIGNAL PROCESSING

COURSE PLAN (UNIT / TOPIC / NO. OF CLASSES)

UNIT I: Introduction to DSP & Realization of Digital Filters (19 classes)
- Discrete Time Signals & Sequences - 2
- Linear Shift Invariant Systems - 2
- Stability & Causality - 1
- Linear Constant Coefficient Difference Equations - 2
- Frequency Domain Representation of Discrete Time Signals & Systems - 2
- Applications of Z-Transforms - 1
- Solution of Difference Equations of Digital Filters - 2
- System Function & Stability Criterion - 2
- Frequency Response of Stable Systems - 1
- Realization of Digital Filters: Direct, Canonic, Cascade and Parallel Forms - 4

UNIT II: Discrete Fourier Series & Fast Fourier Transforms (19 classes)
- DFS Representation of Periodic Sequences, Properties of DFS - 3
- Discrete Fourier Transform: Properties of DFT - 2
- Linear Convolution of Sequences using DFT - 1
- Computation of DFT: Overlap-Add, Overlap-Save Methods - 2
- Relation between DTFT, DFS, DFT, Z-Transform - 1
- FFT: Radix-2 Decimation-in-Time - 4
- FFT: Radix-2 Decimation-in-Frequency - 4
- Inverse FFT and FFT with General Radix N - 2

UNIT III: IIR Digital Filters (8 classes)
- Analog Filter Approximations: Butterworth and Chebyshev - 2
- Design of IIR Digital Filters from Analog Filters - 2
- Step and Impulse Invariant Techniques - 2
- Bilinear Transformation Method, Spectral Transformations - 2

UNIT IV: FIR Digital Filters (8 classes)
- Characteristics of FIR Digital Filters, Frequency Response - 3
- Design of FIR Filters: Fourier Method - 1
- Digital Filters using Window Techniques - 2
- Frequency Sampling Technique - 1
- Comparison of FIR and IIR Filters - 1

UNIT V: Multirate Digital Signal Processing & Finite Word Length Effects
- Introduction, Down Sampling, Decimation - 2
- Up Sampling, Interpolation, Sampling Rate Conversion - 2
- Finite Word Length Effects: Limit Cycles, Overflow Oscillations - 2
- Round-Off Noise in IIR Digital Filters - 1
- Computational Output Round-Off Noise - 1
- Methods to Prevent Overflow - 1
UNIT 1
INTRODUCTION
1.1INTRODUCTION TO DIGITAL SIGNAL PROCESSING:
SIGNAL: A signal is defined as any physical quantity that varies with time, space or another
independent variable.
SYSTEM: A system is defined as a physical device that performs an operation on a signal.
SIGNAL PROCESSING: System is characterized by the type of operation that performs on
the signal. Such operations are referred to as signal processing. This type of processing by
Digital systems is called DIGITAL SIGNAL PROCESSING.
Advantages of DSP
5. Cheaper to implement.
6. Small size.
7. Several filters need several boards in analog, whereas in digital same DSP processor is
used for many filters.
Disadvantages of DSP
1. When analog signal is changing very fast, it is difficult to convert digital form
(beyond 100KHz range)
2. w=1/2 Sampling rate.
3. Finite word length problems.
4. When the signal is weak, within a few tenths of mill volts, we cannot amplify the signal
after it is digitized.
5. DSP hardware is more expensive than general purpose microprocessors & micro
controllers.
6. Dedicated DSP can do better than general purpose DSP.
Applications of DSP
1. Filtering.
2. Speech synthesis in which white noise (all frequency components present to the same
level) is filtered on a selective frequency basis in order to get an audio signal.
3. Speech compression and expansion for use in radio voice communication.
4. Speech recognition.
5. Signal analysis.
6. Image processing: filtering, edge effects, enhancement.
7. PCM used in telephone communication.
8. High speed MODEM data communication using pulse modulation systems such as FSK,
QAM etc. MODEM transmits high speed (1200-19200 bits per second) over a band
limited (3-4 KHz) analog telephone wire line.
DISCRETE TIME SIGNAL: A signal that has values at discrete instants of time which is
obtained by sampling a continuous time signal.
1. Graphical representation
2. Functional representation
3. Tabular representation
4. Sequence representation
Graphical representation
x(n) = { 1 for n = −1;  2 for n = 0, 1;  0.5 for n = 2;  1.5 for n = 3;  0 elsewhere }
Sequence representation: The signal is represented as sequence with time origin indicated by
symbol ↑.
Unit impulse (unit sample) sequence: it is defined as
δ(n) = 1 for n = 0, and δ(n) = 0 for n ≠ 0
Exponential signal: it is represented as x(n) = aⁿ
Sinusoidal signal:
It is represented as x(n) = A cos(ω₀n + Φ)
Energy and power signals: For a discrete time signal x(n) the energy E is given by E = ∑_{n=−∞}^{∞} |x(n)|², and the average power P is given by P = lim_{N→∞} (1/(2N+1)) ∑_{n=−N}^{N} |x(n)|².
the smallest value of n for which the signal is periodic is called fundamental period.
Example: Show that the exponential sequence x(n)=ejw0n is periodic if w0/2π is rational
number.
Causal and non-causal signals: A signal x(n) is said to be causal if its value is zero for n < 0; otherwise the signal is non-causal. A signal that is zero for all n ≥ 0 is called an anti-causal signal.
To test if any given system is time invariant first apply a sequence x(n) and find y(n). Now delay
the input sequence by k samples and find the output sequence.
Note: A linear time invariant system satisfies both linearity and time invariance property.
If the input to the system is unit impulse i.e x(n)=δ(n) then the output of the system is called
impulse response denoted by h(n).
h(n)=T[δ(n)]
For an LTI system, if the input and the impulse response are known, then the output y(n) is given by
y(n) = ∑_{k=−∞}^{∞} x(k) h(n−k)
The above equation states that the output is the convolution sum of the input sequence x(n) and the impulse response h(n), represented as
y(n) = x(n)*h(n)
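A minimal sketch of the convolution sum in Python (NumPy is assumed here, since the notes do not prescribe a tool); the sequences x and h are arbitrary illustrative values, and numpy.convolve is used only as a cross-check.

```python
import numpy as np

def convolution_sum(x, h):
    """Direct evaluation of y(n) = sum_k x(k) h(n-k) for finite-length sequences."""
    N = len(x) + len(h) - 1          # length of the linear convolution
    y = np.zeros(N)
    for n in range(N):
        for k in range(len(x)):
            if 0 <= n - k < len(h):  # only terms where h(n-k) is defined
                y[n] += x[k] * h[n - k]
    return y

x = np.array([1.0, 2.0, 0.5])        # illustrative input sequence
h = np.array([1.0, -1.0, 0.25])      # illustrative impulse response
print(convolution_sum(x, h))
print(np.convolve(x, h))             # should agree with the direct sum
```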
An LTI system is said to be stable if it produces a bounded output sequence for every bounded input sequence. If the input is bounded but the output is unbounded, the system is unstable. The necessary and sufficient condition for stability is
∑_{n=−∞}^{∞} |h(n)| < ∞
Generally a causal system is a system whose output depends on only past and present
values of input. The output of an LTI system is given by
y(n) = ∑_{k=−∞}^{∞} h(k) x(n−k)
     = ∑_{k=−∞}^{−1} h(k) x(n−k) + ∑_{k=0}^{∞} h(k) x(n−k)
     = … + h(−2)x(n+2) + h(−1)x(n+1) + h(0)x(n) + h(1)x(n−1) + …
Since the output of a causal system does not depend on future inputs, the terms involving x(n+1), x(n+2), … are dropped and y(n) reduces to
y(n) = h(0)x(n) + h(1)x(n−1) + … = ∑_{k=0}^{∞} h(k) x(n−k)
Therefore an LTI system is causal if and only if its impulse response is zero for negative values
of n.
There are different methods of analyzing the behavior or response of LTI system.
Direct solution of the difference equation: the input-output relation of an LTI system is governed by a constant coefficient difference equation of the form
y(n) = −∑_{k=1}^{N} a_k y(n−k) + ∑_{k=0}^{M} b_k x(n−k)
Mathematically the direct solution of above equation can be obtained to determine the response
of the system.
y(n)=x(n)*h(n)
Z-transform: By the convolution property of the Z-transform, the Z-transform of the convolution of the input and the impulse response is equal to the product of their individual Z-transforms,
i.e. Z[𝑥(n)*h(n)]=X(Z)H(Z)
but y(n)=x(n)*h(n)
so Z[y(n)]= X(Z)H(Z)
therefore y(n) = Z⁻¹[X(Z)H(Z)]
i.e. the response y(n) of an LTI system is obtained by taking the inverse Z-transform of the product X(Z)H(Z). Conversely, if the transfer function of the system is known, then we can determine the impulse response of the system by taking the inverse Z-transform of the transfer function,
i.e. h(n) = Z⁻¹[H(Z)] = Z⁻¹[Y(Z)/X(Z)]
The Fourier transform gives an effective representation of signals and systems in the frequency domain. The Fourier transform of a discrete time signal is given by
X(ω) = ∑_{n=−∞}^{∞} x(n) e^{−jωn}
where ω is the frequency, which varies continuously from 0 to 2π. The magnitude of X(ω) gives the frequency spectrum of x(n).
Y(w)=X(w)H(w)
Consider a periodic sequence x(n) with period N; it can be expressed as a discrete Fourier series
x(n) = ∑_{k=0}^{N−1} C_k e^{j2πkn/N}
The values C_k, k = 0, 1, 2, …, N−1, are called the discrete spectrum of x(n). Each C_k appears at the frequency ω_k = 2πk/N.
A sufficient condition for the existence of the DTFT of a sequence x(n) is that it be absolutely summable, i.e. ∑_{n=−∞}^{∞} |x(n)| < ∞.
The Z-transform has imaginary and real parts like the Fourier transform. A plot of the imaginary part versus the real part is called the Z-plane (also called the complex Z-plane).
The poles and zeros of discrete LTI systems are plotted in the complex Z-plane. The
stability of LTI systems can also be determined from pole-zero plot.
Z-transform:
The Z-transform of a sequence x(n) is X(Z) = ∑_{n=−∞}^{∞} x(n) z⁻ⁿ, written as the transform pair x(n) ⟷ X(Z).
Example: Find the Z-transform and ROC of x(n) = aⁿu(n) + bⁿu(−n−1).
Solution: Given x(n) = aⁿu(n) + bⁿu(−n−1)
X(Z) = ∑_{n=0}^{∞} aⁿ z⁻ⁿ + ∑_{n=−∞}^{−1} bⁿ z⁻ⁿ
     = [1 + (a/z) + (a/z)² + …] + ∑_{n=1}^{∞} (b⁻¹z)ⁿ
     = [1 + (a/z) + (a/z)² + …] + [∑_{n=0}^{∞} (b⁻¹z)ⁿ − 1]
     = [1 + (a/z) + (a/z)² + …] + [1 + (z/b) + (z/b)² + … − 1]
     = 1/(1 − a/z) + 1/(1 − z/b) − 1,   for |a/z| < 1 and |z/b| < 1
ROC: |z| > |a| and |z| < |b|, i.e. |a| < |z| < |b|
X(Z) = z/(z − a) + b/(b − z) − 1
     = z/(z − a) + z/(b − z)
The Nth order system or digital filters are described by a general form of linear constant
coefficient difference equation as
∑_{k=0}^{N} a_k y(n−k) = ∑_{k=0}^{M} b_k x(n−k)
Taking a₀ = 1:   y(n) = −∑_{k=1}^{N} a_k y(n−k) + ∑_{k=0}^{M} b_k x(n−k)
The system function H(Z) can be expressed as a ratio of two polynomials, H(Z) = B(Z)/A(Z), where A(Z) is the denominator polynomial that determines the poles of H(Z). Suppose the input signal x(n) has a rational z-transform X(Z),
i.e. X(Z) = N(Z)/Q(Z)
Y(Z) = H(Z) X(Z) = B(Z)N(Z) / [A(Z)Q(Z)]
Suppose that system contains simple poles P1, P2, P3,… PN and Z-transform of the input signal
contains poles q1, q2, q3,… qL, where Pk ≠ qm for all k = 1,2,…N and m=1,2,…L.
A necessary and sufficient condition for a linear time invariant system to be BIBO stable is
∑_{n=−∞}^{∞} |h(n)| < ∞
In turn, this condition implies that H(Z) must contain the unit circle within its ROC. Since
H(Z) = ∑_{n=−∞}^{∞} h(n) z⁻ⁿ,   we have   |H(Z)| ≤ ∑_{n=−∞}^{∞} |h(n)| |z⁻ⁿ|
When evaluated on the unit circle (|z| = 1),
|H(Z)| ≤ ∑_{n=−∞}^{∞} |h(n)|
Hence if the system is BIBO stable, the unit circle is contained in the ROC of H(Z). This can also be stated as: "A linear time-invariant system is BIBO stable if and only if the ROC of the system function includes the unit circle."
For a causal system with poles at z = b_k, k = 1, 2, …, M, the impulse response is a sum of terms of the form a_k (b_k)ⁿ u(n), so that
∑_{n=0}^{∞} |a_k (b_k)ⁿ| = |a_k| ∑_{n=0}^{∞} |b_k|ⁿ
For this sum to be finite, the magnitude of each term must be less than unity, i.e. each |b_k| < 1, where b_k is a pole, i.e. |z| < 1. So all poles of the system must lie inside the unit circle for the system to be stable.
When the denominator polynomial of the transfer function of the system is of high order and cannot be factorized, it is not possible to find the poles of the system, and consequently we cannot decide whether the system is stable or not. In such cases stability can be decided by using the Schur-Cohn stability test.
Example: H(Z) = 1 / (1 − (7/4)z⁻¹ − (1/2)z⁻²). Consider only the denominator polynomial; here the order of the denominator polynomial is 2, so denote the polynomial as D₂(Z) = 1 − (7/4)z⁻¹ − (1/2)z⁻².
Let k₂ = −1/2, so |k₂| = 1/2. If |k₂| is greater than or equal to 1, the system is unstable.
If |k₂| is less than 1, then find k₁ by forming the reverse polynomial R₂(Z), from which D₁(Z) is obtained.
Here |k₂| < 1, so form the reverse polynomial R₂(Z) = −1/2 − (7/4)z⁻¹ + z⁻², from which D₁(Z) = [D₂(Z) − k₂R₂(Z)] / (1 − k₂²) = 1 − (7/2)z⁻¹, giving k₁ = −7/2.
Here |k₁| > 1, so the system is unstable. In general, D_{N−1}(Z) is obtained from D_N(Z) and R_N(Z) using the recursive equation D_{N−1}(Z) = [D_N(Z) − k_N R_N(Z)] / (1 − k_N²).
Example: Find the stability of the following transfer function:
H(Z) = (Z² + Z + 1) / (Z⁴ + 2Z³ + 3Z² + 4Z + 6)
Solution: Given H(Z) = (Z² + Z + 1) / (Z⁴ + 2Z³ + 3Z² + 4Z + 6)
       = (Z⁻² + Z⁻³ + Z⁻⁴) / (1 + 2Z⁻¹ + 3Z⁻² + 4Z⁻³ + 6Z⁻⁴)
The discrete time Fourier transform and the Z-transform are used to obtain the frequency response of discrete time systems. If we set z = e^{jωT}, i.e. evaluate the z-transform around the unit circle, we get the Fourier transform of the system with sampling time period T.
H(ω) is the frequency response of the system; its modulus gives the magnitude response and its phase gives the phase response.
Example: Calculate the frequency response of the LTI system described by
y(n) + (1/4) y(n−1) = x(n) − x(n−1)
Solution: Given y(n) + (1/4) y(n−1) = x(n) − x(n−1)
Taking the Fourier transform on both sides:
Y(e^{jω}) + (1/4) e^{−jω} Y(e^{jω}) = X(e^{jω}) − e^{−jω} X(e^{jω})
Y(e^{jω}) [1 + (1/4) e^{−jω}] = X(e^{jω}) (1 − e^{−jω})
H(e^{jω}) = Y(e^{jω}) / X(e^{jω}) = (1 − e^{−jω}) / (1 + (1/4) e^{−jω})
|H(e^{jω})| = |1 − cos ω + j sin ω| / |1 + (1/4) cos ω − (j/4) sin ω| = 2 sin(ω/2) / (1.0625 + 0.5 cos ω)^{1/2}
Phase response: ∠H(e^{jω}) = tan⁻¹[sin ω / (1 − cos ω)] − tan⁻¹[−0.25 sin ω / (1 + 0.25 cos ω)]
The magnitude |H(e^{jω})| and phase ∠H(e^{jω}) can then be tabulated at ω = 0, π/6, π/4 and π/3.
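The tabulated values can be obtained numerically; a possible sketch (one of several ways, using scipy.signal.freqz) evaluates H(e^{jω}) = (1 − e^{−jω})/(1 + 0.25 e^{−jω}) at the listed frequencies.

```python
import numpy as np
from scipy.signal import freqz

# H(e^jw) = (1 - e^-jw) / (1 + 0.25 e^-jw): numerator b, denominator a
b = [1.0, -1.0]
a = [1.0, 0.25]

w = np.array([0.0, np.pi/6, np.pi/4, np.pi/3])   # note |H| = 0 at w = 0
_, H = freqz(b, a, worN=w)

for wi, Hi in zip(w, H):
    print(f"w = {wi:.4f}: |H| = {abs(Hi):.4f}, phase = {np.degrees(np.angle(Hi)):.2f} deg")
```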
A digital filter transfer function can be realized in a variety of ways. There are two types of realization: 1. Recursive 2. Non-recursive.
For a recursive realization the current output y(n) is a function of past outputs and of past and present inputs. This form corresponds to an Infinite Impulse Response (IIR) digital filter. For a non-recursive realization the current output sample y(n) is a function of only past and present inputs. This form corresponds to a Finite Impulse Response (FIR) digital filter.
An IIR filter can be realized in many forms: direct form I, direct form II (canonic), cascade and parallel forms; a realization sketch is given below.
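As a concrete illustration of a recursive (IIR) realization, the following sketch implements a direct form II structure sample by sample. The coefficient values are arbitrary placeholders (not from the notes), and scipy.signal.lfilter is used only to cross-check the result.

```python
import numpy as np
from scipy.signal import lfilter

def direct_form_ii(b, a, x):
    """Direct form II: w(n) = x(n) - sum a_k w(n-k);  y(n) = sum b_k w(n-k)."""
    M = max(len(a), len(b))
    b = np.concatenate([b, np.zeros(M - len(b))])
    a = np.concatenate([a, np.zeros(M - len(a))])
    state = np.zeros(M - 1)                         # [w(n-1), w(n-2), ...]
    y = np.zeros(len(x))
    for n, xn in enumerate(x):
        wn = xn - np.dot(a[1:], state)              # feedback path (assumes a[0] = 1)
        y[n] = b[0] * wn + np.dot(b[1:], state)     # feedforward path
        state = np.concatenate(([wn], state[:-1]))  # shift the single delay line
    return y

b = [1.0, 0.5]                            # placeholder numerator coefficients
a = [1.0, -0.4]                           # placeholder denominator coefficients, a[0] = 1
x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # unit impulse input
print(direct_form_ii(b, a, x))
print(lfilter(b, a, x))                   # reference result
```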
UNIT 2
DISCRETE FOURIER SERIES
2.1 DFS REPRESENTATION OF PERIODIC SEQUENCE:
1.LINEARITY OF DFS:
2. SHIFT OF A SEQUENCE:
4.TIME REVERSAL:
5.TIME SCALING:
6.DIFFERENCE:
7.ACCUMULATION:
1.LINEARITY:
4.DUALITY:
5.SYMMETRY PROPERTIES:
6.CIRCULAR CONVOLUTION:
Relation between DTFT, DFS, DFT and Z-transform: the relation is given by z = e^{jω}, i.e. the DTFT is the Z-transform evaluated on the unit circle.
DFT:
FFT:
INTRODUCTION:
In this section we present several methods for computing the DFT efficiently. In view of the
importance of the DFT in various digital signal processing applications, such as linear filtering,
correlation analysis, and spectrum analysis, its efficient computation is a topic that has received
considerable attention by many mathematicians, engineers, and applied scientists.
From this point, we change the notation that X(k), instead of y(k) in previous sections,
represents the Fourier coefficients of x(n).
Basically, the computational problem for the DFT is to compute the sequence {X(k)} of N complex-valued numbers given another sequence of data {x(n)} of length N, according to the formula
X(k) = ∑_{n=0}^{N−1} x(n) W_N^{kn},   k = 0, 1, …, N−1,   where W_N = e^{−j2π/N}
In general, the data sequence x(n) is also assumed to be complex valued. Similarly, the IDFT becomes
x(n) = (1/N) ∑_{k=0}^{N−1} X(k) W_N^{−kn},   n = 0, 1, …, N−1
Since DFT and IDFT involve basically the same type of computations, our discussion of
efficient computational algorithms for the DFT applies as well to the efficient computation of the
IDFT.
We observe that for each value of k, direct computation of X(k) involves N complex multiplications (4N real multiplications) and N−1 complex additions (4N−2 real additions). Consequently, computing all N values of the DFT requires N² complex multiplications and N² − N complex additions.
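A minimal sketch of this direct computation (Python/NumPy assumed) makes the N² cost explicit: the double loop below performs N complex multiplications for each of the N output values, and numpy.fft.fft is used only as a cross-check.

```python
import numpy as np

def direct_dft(x):
    """Direct O(N^2) evaluation of X(k) = sum_n x(n) W_N^{kn}, W_N = exp(-j2*pi/N)."""
    N = len(x)
    W = np.exp(-2j * np.pi / N)
    X = np.zeros(N, dtype=complex)
    for k in range(N):                 # N output values ...
        for n in range(N):             # ... each needing N complex multiplications
            X[k] += x[n] * W ** (k * n)
    return X

x = np.arange(8, dtype=float)          # illustrative data sequence
print(direct_dft(x))
print(np.fft.fft(x))                   # library FFT for comparison
```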
Direct computation of the DFT is basically inefficient primarily because it does not exploit the symmetry and periodicity properties of the phase factor W_N. In particular, these two properties are:
W_N^{k+N/2} = −W_N^k   (symmetry)
W_N^{k+N} = W_N^k   (periodicity)
The computationally efficient algorithms described in this section, known collectively as fast Fourier transform (FFT) algorithms, exploit these two basic properties of the phase factor.
Let us consider the computation of the N = 2^v point DFT by the divide-and-conquer approach. We split the N-point data sequence into two N/2-point data sequences f1(n) and f2(n), corresponding to the even-numbered and odd-numbered samples of x(n), respectively, that is,
Thus f1(n) and f2(n) are obtained by decimating x(n) by a factor of 2, and hence the resulting
FFT algorithm is called a decimation-in-time algorithm.
Now the N-point DFT can be expressed in terms of the DFT's of the decimated sequences as
follows:
But W_N² = W_{N/2}. With this substitution, the equation can be expressed as
where F1(k) and F2(k) are the N/2-point DFTs of the sequences f1(m) and f2(m), respectively.
Since F1(k) and F2(k) are periodic with period N/2, we have F1(k+N/2) = F1(k) and F2(k+N/2) = F2(k). In addition, the factor W_N^{k+N/2} = −W_N^k. Hence the equation may be expressed as
We observe that the direct computation of F1(k) requires (N/2)² complex multiplications. The same applies to the computation of F2(k). Furthermore, there are N/2 additional complex multiplications required to compute W_N^k F2(k). Hence the computation of X(k) requires 2(N/2)² + N/2 = N²/2 + N/2 complex multiplications. This first step results in a reduction of the number of multiplications from N² to N²/2 + N/2, which is about a factor of 2 for N large.
By computing N/4-point DFTs, we would obtain the N/2-point DFTs F1(k) and F2(k) from the
relations
The decimation of the data sequence can be repeated again and again until the resulting sequences are reduced to one-point sequences. For N = 2^v, this decimation can be performed v = log₂N times. Thus the total number of complex multiplications is reduced to (N/2)log₂N. The number of complex additions is N log₂N.
An important observation is concerned with the order of the input data sequence after it is decimated (v−1) times. For example, if we consider the case where N = 8, the first decimation yields the sequence x(0), x(2), x(4), x(6), x(1), x(3), x(5), x(7), and the second decimation results in the sequence x(0), x(4), x(2), x(6), x(1), x(5), x(3), x(7). This shuffling of the input data sequence has a well-defined order, as can be ascertained from observing Figure TC.3.5, which illustrates the decimation of the eight-point sequence.
Page 49
DIGITAL SIGNAL PROCESSING
Example: Find the FFT of the sequence x(n) = {1, 2, 3, 4} using the DIT-FFT algorithm.
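The notes work this example with butterfly diagrams; as a cross-check, the recursive sketch below implements the radix-2 decimation-in-time idea (split into even and odd samples, combine with twiddle factors). Applying it to x(n) = {1, 2, 3, 4} gives X(k) = {10, −2+2j, −2, −2−2j}, which matches numpy.fft.fft.

```python
import numpy as np

def fft_dit(x):
    """Recursive radix-2 decimation-in-time FFT (length of x must be a power of 2)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    F1 = fft_dit(x[0::2])                             # DFT of even-indexed samples
    F2 = fft_dit(x[1::2])                             # DFT of odd-indexed samples
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors W_N^k
    return np.concatenate([F1 + W * F2,               # X(k)       = F1(k) + W_N^k F2(k)
                           F1 - W * F2])              # X(k + N/2) = F1(k) - W_N^k F2(k)

x = [1, 2, 3, 4]
print(fft_dit(x))        # expected: [10, -2+2j, -2, -2-2j]
print(np.fft.fft(x))
```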
Now, let us split (decimate) X(k) into the even- and odd-numbered samples. Thus we obtain
The computational procedure above can be repeated through decimation of the N/2-point
DFTs X(2k) and X(2k+1). The entire process involves v = log2N stages of decimation, where
each stage involves N/2 butterflies of the type shown in Figure TC.3.7. Consequently, the
computation of the N-point DFT via the decimation-in-frequency FFT requires (N/2)log2N
complex multiplications and Nlog2N complex additions, just as in the decimation-in-time
algorithm. For illustrative purposes, the eight-point decimation-in-frequency algorithm is given
in Figure.
Example: Find the FFT of the sequence x(n) = {1, 2, 3, 4} using the DIF-FFT algorithm.
Unit 3
IIR FILTERS
Basically a digital filter is a linear time invariant discrete time system. The terms infinite impulse response (IIR) and finite impulse response (FIR) are used to distinguish filter types. FIR filters are of the non-recursive type, where the present output sample depends only on present and previous input samples. IIR filters are of the recursive type, where the present output depends on present and past input samples as well as past output samples.
FREQUENCY SELECTIVE FILTERS:
A filter is one which rejects unwanted frequencies from the input and allows the desired
frequencies to obtain the required shape of output signal. The range of frequencies that are
passed through the filter is called pass band and those frequencies that are blocked are called stop
band. The filters are of different types :
1.Low Pass Filter 2.High Pass Filter 3.Band Pass Filter 4.Band Reject Filter
Fig (a): Magnitude response of analog LPF Fig (b): Magnitude response of digital LPF
where ω_p is the pass band frequency in radians.
An analog filter is described by the transfer function
H(s) = N(s)/D(s) = ∑_{i=0}^{M} a_i s^i / (1 + ∑_{i=1}^{N} b_i s^i)
where H(s) is the Laplace transform of the impulse response h(t), and N ≥ M must be satisfied.
For a stable analog filter the poles of H(s) lie in the left half of the s-plane.
The two types of analog filters we design are: 1. Butterworth filter 2. Chebyshev filter.
We can get the magnitude squared function of the normalized Butterworth filter (1 rad/sec cut-off frequency) as
|H(jΩ)|² = 1 / (1 + Ω^{2N}),   N = 1, 2, 3, …
Substituting Ω = s/j, i.e. using |H(jΩ)|² = H(jΩ)H(−jΩ) = H(s)H(−s)|_{s=jΩ}:
H(s)H(−s) = 1 / (1 + (s/j)^{2N}) = 1 / (1 + (−1)^N s^{2N}) = 1 / (1 + (−s²)^N)
At the stop band edge Ω_s the attenuation is α_s:
20 log|H(jΩ_s)| = −α_s = −10 log[1 + ε² (Ω_s/Ω_p)^{2N}]
0.1 α_s = log[1 + ε² (Ω_s/Ω_p)^{2N}]
(Ω_s/Ω_p)^{2N} = (10^{0.1α_s} − 1) / (10^{0.1α_p} − 1)
Taking log on both sides,
N = log √[(10^{0.1α_s} − 1) / (10^{0.1α_p} − 1)] / log(Ω_s/Ω_p)
Round off N to the next higher integer:
N ≥ log √[(10^{0.1α_s} − 1) / (10^{0.1α_p} − 1)] / log(Ω_s/Ω_p)
N ≥ log(λ/ε) / log(Ω_s/Ω_p),   where λ² = 10^{0.1α_s} − 1 and ε² = 10^{0.1α_p} − 1
For simplicity let A = λ/ε and k = Ω_p/Ω_s (the transition ratio).
Therefore, the order of the low pass Butterworth analog filter is N ≥ log A / log(1/k).
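A small sketch of this order formula with illustrative specifications (α_p = 3 dB at Ω_p = 1000 rad/s, α_s = 40 dB at Ω_s = 2000 rad/s; these numbers are not from the notes). scipy.signal.buttord is shown only as a cross-check; both give N = 7 for these values.

```python
import numpy as np
from scipy.signal import buttord

alpha_p, alpha_s = 3.0, 40.0        # passband / stopband attenuations in dB (assumed)
omega_p, omega_s = 1000.0, 2000.0   # edge frequencies in rad/s (assumed)

eps2 = 10 ** (0.1 * alpha_p) - 1    # epsilon^2
lam2 = 10 ** (0.1 * alpha_s) - 1    # lambda^2
A = np.sqrt(lam2 / eps2)            # A = lambda / epsilon
k = omega_p / omega_s               # transition ratio

N = np.log10(A) / np.log10(1 / k)   # N >= log(A) / log(1/k)
print("Required order:", int(np.ceil(N)))

# Cross-check with SciPy's analog Butterworth order selection
N_scipy, _ = buttord(omega_p, omega_s, alpha_p, alpha_s, analog=True)
print("scipy.signal.buttord:", N_scipy)
```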
The magnitude squared response of the analog low pass Type I Chebyshev filter of order N is given by
|H(Ω)|² = 1 / [1 + ε² T_N²(Ω/Ω_p)]
where T_N(Ω) is the Chebyshev polynomial of order N:
T_N(Ω) = cos(N cos⁻¹ Ω),   |Ω| ≤ 1
       = cosh(N cosh⁻¹ Ω),   |Ω| > 1
The polynomial can be derived via the recurrence relation
T_r(Ω) = 2Ω T_{r−1}(Ω) − T_{r−2}(Ω),  r ≥ 2,  with T₀(Ω) = 1 and T₁(Ω) = Ω.
The magnitude squared response of the analog low pass Type II or inverse Chebyshev filter of order N is given by
|H(Ω)|² = 1 / [1 + ε² {T_N(Ω_s/Ω_p) / T_N(Ω_s/Ω)}²]
The Type I filter is equiripple in the passband and monotonic in the stopband; the Type II filter is equiripple in the stopband and monotonic in the passband.
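The recurrence above is easy to evaluate numerically; the short sketch below (NumPy assumed) generates T_N by the recurrence and checks that it agrees with cos(N cos⁻¹ Ω) inside |Ω| ≤ 1.

```python
import numpy as np

def cheb_T(N, w):
    """Evaluate the Chebyshev polynomial T_N(w) via T_r = 2*w*T_{r-1} - T_{r-2}."""
    T_prev = np.ones_like(w, dtype=float)          # T_0(w) = 1
    T_curr = np.asarray(w, dtype=float)            # T_1(w) = w
    if N == 0:
        return T_prev
    for _ in range(2, N + 1):
        T_prev, T_curr = T_curr, 2 * w * T_curr - T_prev
    return T_curr

w = np.linspace(-1, 1, 5)
for N in range(4):
    # inside |w| <= 1 the recurrence must agree with cos(N * acos(w))
    assert np.allclose(cheb_T(N, w), np.cos(N * np.arccos(w)))
print(cheb_T(4, np.array([0.0, 0.5, 1.0, 2.0])))   # T_4 at a few points, incl. |w| > 1
```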
An IIR digital filter can be designed from an analog filter using one of the following techniques:
1. IMPULSE INVARIANCE
2. STEP INVARIANCE
3. BILINEAR TRANSFORMATION
The impulse invariance method is the simplest method used for designing IIR filters. Important features of this method are:
1. In the impulse invariance method, an analog filter is converted into a digital filter by making the unit sample response of the digital filter a sampled version of the impulse response of the analog filter. The sampled signal is obtained by putting t = nT, hence h(n) = h_a(nT), n = 0, 1, 2, …, where h(n) is the unit sample response of the digital filter and T is the sampling interval.
2. But the main disadvantage of this method is that it does not correspond to a simple algebraic mapping of the S-plane to the Z-plane. The mapping from analog frequency to digital frequency is many-to-one: the segments (2k−1)π/T ≤ Ω ≤ (2k+1)π/T of the jΩ axis are all mapped onto the unit circle −π ≤ ω ≤ π. This takes place because of sampling.
3. Frequency aliasing is a second disadvantage of this method. Because of frequency aliasing, the frequency response of the resulting digital filter will not be identical to the original analog frequency response.
4. Because of these factors, its application is limited to the design of low-frequency filters like LPFs or a limited class of band pass filters.
RELATIONSHIP BETWEEN Z PLANE AND S PLANE
In impulse invariant method the IIR filter is designed such that unit impulse response h(n)
of digital filter is the sampled version of the impulse response of analog filter.
The Z-transform of the IIR filter is given by
H(Z) = ∑_{n=0}^{∞} h(n) z⁻ⁿ
H(Z)|_{z=e^{sT}} = ∑_{n=0}^{∞} h(n) e^{−sTn}
Z is represented as re^{jω} in polar form, and the relationship between the Z-plane and the S-plane is given by z = e^{sT}, where s = σ + jΩ:
z = e^{sT} = e^{(σ + jΩ)T} = e^{σT} · e^{jΩT}
Comparing this with the polar form, we have r = e^{σT} and ω = ΩT.
Here we have three condition
1) If σ = 0 then r=1
2) If σ < 0 then 0 < r < 1
3) If σ > 0 then r> 1
Thus
1) Left side of s-plane is mapped inside the unit circle.
2) Right side of s-plane is mapped outside the unit circle.
3) jΩ axis is in s-plane is mapped on the unit circle.
H(Z) = ∑_{k=1}^{N} c_k / (1 − e^{p_k T} Z⁻¹)
STEPS TO DESIGN A DIGITAL FILTER USING THE IMPULSE INVARIANCE METHOD:
1. For the given specifications, find Ha(s), the transfer function of the analog filter.
2. Select the sampling rate of the digital filter.
3. Express the analog filter transfer function as the sum of single-pole filters:
Ha(s) = ∑_{k=1}^{N} c_k / (s − p_k)
4. Compute the z-transform of the digital filter by using the formula
H(z) = ∑_{k=1}^{N} c_k / (1 − e^{p_k T} z⁻¹)
A short design sketch follows.
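A sketch of these steps for a placeholder analog transfer function Ha(s) = 1/((s+1)(s+2)) and an assumed sampling interval T = 0.1 s (neither value comes from the notes): scipy.signal.residue gives the residues and poles, each pole p_k is mapped to e^{p_k T}, and the resulting digital impulse response is checked against the sampled analog impulse response ha(nT).

```python
import numpy as np
from scipy.signal import residue, lfilter

T = 0.1                                   # assumed sampling interval
b_s, a_s = [1.0], [1.0, 3.0, 2.0]         # placeholder Ha(s) = 1/((s+1)(s+2))

# Step 3: partial fractions  Ha(s) = sum_k c_k / (s - p_k)
c, p, _ = residue(b_s, a_s)               # residues c_k and analog poles p_k

# Step 4: H(z) = sum_k c_k / (1 - e^{p_k T} z^-1); impulse response of each section
delta = np.zeros(20); delta[0] = 1.0      # unit impulse
h = sum(lfilter([ck], [1.0, -np.exp(pk * T)], delta) for ck, pk in zip(c, p)).real

# Impulse invariance check: h(n) should equal ha(nT) = e^{-nT} - e^{-2nT}
n = np.arange(20)
print(np.allclose(h, np.exp(-n * T) - np.exp(-2 * n * T)))   # expect True
```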
STEP INVARIANT TECHNIQUE:
The step response y(t) is defined as the output of an LTI system due to a unit step input signal x(t) = u(t). Then
X(s) = 1/s   and   Y(s) = X(s)H(s) = H(s)/s
We know that a digital filter is equivalent to an analog filter in the sense of time domain
invariance, if equivalent input yield equivalent outputs.
Therefore the sampled input to digital filter is x(nT)=x(n)=u(n)Then
X(z) = 1/(1 − z⁻¹)   and   y(n) = y(nT).
H(z) = Y(z)/X(z) = (1 − z⁻¹) Y(z)
BILINEAR TRANSFORMATION: the analog and digital frequencies are related by
Ω = (2/T) · sin ω / (1 + cos ω)
Upon simplification, we get Ω = (2/T) tan(ω/2).
WARPING EFFECT
Let Ω and ω represent the frequency variables in the analog filter and the derived digital filter respectively.
Ω = (2/T) tan(ω/2)
For small values of ω,
Ω ≈ (2/T)(ω/2) = ω/T,   i.e.   ω = ΩT
For low frequencies the relation between ω and Ω is linear; as a result the digital filter has the same amplitude response as the analog filter. For high frequencies, however, the relation between ω and Ω becomes nonlinear and distortion is introduced in the frequency scale of the digital filter relative to that of the analog filter. This is known as the warping effect.
The influence of the warping effect on the amplitude response can be seen by considering an analog filter with a number of pass bands centered at regular intervals. The derived digital filter will have the same number of pass bands, but the centre frequencies and bandwidths of the higher-frequency pass bands tend to be reduced disproportionately.
The influence of the warping effect on the phase response is similar: for an analog filter with a linear phase response, the phase response of the derived digital filter will be nonlinear.
Prewarping
The warping effect can be eliminated by prewarping the analog filter. This is done by finding the prewarped analog frequencies using the formula Ω = (2/T) tan(ω/2).
Therefore we have Ω_p = (2/T) tan(ω_p/2) and Ω_s = (2/T) tan(ω_s/2).
STEPS TO DESIGN DIGITAL FILTER USING BILINEAR TRANSFORM
TECHNIQUE:
1. From the given specifications, find the prewarped analog frequencies using the formula Ω = (2/T) tan(ω/2).
2. Using the analog frequencies, find H(s) of the analog filter.
3. Select the sampling rate of the digital filter, call it T sec/sample.
4. Substitute s = (2/T) · (1 − z⁻¹)/(1 + z⁻¹) into the analog transfer function H(s) to obtain the digital filter transfer function H(z). A design sketch follows.
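A sketch of this flow for a placeholder first-order analog prototype Ha(s) = Ωc/(s + Ωc), with an assumed sampling period T = 1 and an assumed digital cutoff of 0.2π rad/sample: prewarp the cutoff, then apply scipy.signal.bilinear.

```python
import numpy as np
from scipy.signal import bilinear, freqz

T = 1.0                                    # sampling period (assumption)
wc_digital = 0.2 * np.pi                   # desired digital cutoff (assumption)

# Step 1: prewarp the digital edge frequency
Wc = (2.0 / T) * np.tan(wc_digital / 2.0)

# Step 2: analog prototype Ha(s) = Wc / (s + Wc)  (first-order low pass)
b_s, a_s = [Wc], [1.0, Wc]

# Step 4: bilinear transformation s = (2/T)(1 - z^-1)/(1 + z^-1)
b_z, a_z = bilinear(b_s, a_s, fs=1.0 / T)

# The digital filter should be 3 dB down at wc_digital
w, H = freqz(b_z, a_z, worN=[wc_digital])
print(20 * np.log10(abs(H[0])))            # approximately -3.01 dB
```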
SPECTRAL TRANSFORMATIONS:
IN THE ANALOG DOMAIN: An analog low pass filter can be converted into an analog high pass, band stop, band pass or another low pass filter as given below.
IN DIGITAL DOMAIN:
A digital low pass filter can be converted into a digital High Pass, Band Stop, Band Pass
or another Low Pass digital filter as given below
Low pass to low pass:
α = sin[(ω_p − ω_p′)/2] / sin[(ω_p + ω_p′)/2]
where ω_p = pass band frequency of the low pass filter and ω_p′ = pass band frequency of the new filter.
Low pass to high pass:
z⁻¹ → −(z⁻¹ + α) / (1 + αz⁻¹),   where α = −cos[(ω_p + ω_p′)/2] / cos[(ω_p − ω_p′)/2]
ω_p = pass band frequency of the low pass filter, ω_p′ = pass band frequency of the high pass filter.
Low pass to band pass:
α = cos[(ω_u + ω_l)/2] / cos[(ω_u − ω_l)/2]   and   k = cot[(ω_u − ω_l)/2] · tan(ω_p/2)
PROBLEMS
1.
2.
Solution:
UNIT 4
FIR FILTERS
4.1 INTRODUCTION
FIR filters can be easily designed to have perfectly linear phase. These filters can be realized both recursively and non-recursively. There is greater flexibility to control the shape of their magnitude response. Errors due to round-off noise are less severe in FIR filters, mainly because feedback is not used.
1. FIR filters always provide a linear phase response. This means that signals in the pass band suffer no dispersion; hence when the user wants no phase distortion, FIR filters are preferable over IIR. Phase distortion always degrades the system performance. In various applications like speech processing and data transmission over long distances, FIR filters are preferable due to this characteristic.
2. FIR filters are inherently stable compared with IIR filters due to their non-feedback nature.
3. Quantization noise can be made negligible in FIR filters, and sharp cut-off FIR filters can be designed easily.
4. A disadvantage of FIR filters is that they need a higher order than IIR filters to obtain a similar magnitude response.
A system is stable only if it produces a bounded output for every bounded input. This is the stability definition for any system.
Here the impulse response h(n) = {b0, b1, b2, …} of the FIR filter is a finite sequence of bounded coefficients. Thus y(n) is bounded if the input x(n) is bounded. This means an FIR system produces a bounded output for every bounded input. Hence FIR systems are always stable.
The various methods used for FIR filter design are as follows:
1. Fourier Series method
2. Windowing Method
3. DFT method
4. Frequency sampling Method. (IFT Method)
GIBBS PHENOMENON:
1. In the Fourier series method, the limits of the summation index are −∞ to ∞, but the filter must have a finite number of terms. Hence the limits of the summation index are changed to −Q to Q, where Q is some finite integer. This type of truncation may result in poor convergence of the series. Abrupt truncation of the infinite series is equivalent to multiplying the infinite series by a rectangular sequence, so at points of discontinuity some oscillation may be observed in the resulting response.
2. Consider the example of an LPF having the desired frequency response Hd(ω) as shown in the figure. The oscillations or ringing take place near the band edge of the filter.
3. This oscillation or ringing is generated because of the side lobes in the frequency response W(ω) of the window function. This oscillatory behavior is called the "Gibbs phenomenon".
Windowing is the quickest method for designing an FIR filter. A window function simply truncates the ideal impulse response, which is non-causal and infinitely long, to obtain a causal, finite-length FIR approximation. Smoother window functions provide higher out-of-band rejection in the filter response.
However, this smoothness comes at the cost of wider transition bands. Each windowing method attempts to minimize the width of the main lobe (peak) of the window's frequency response while also minimizing its side lobes (ripple).
Rectangular Window: This is the most basic of the windowing methods. It does not require any operations because its values are either 1 or 0. It creates an abrupt discontinuity that results in sharp roll-offs but large ripples.
Triangular Window: The computational simplicity of this window, a simple convolution of two
rectangle windows, and the lower sidelobes make it a viable alternative to the rectangular
window.
Kaiser Window: This windowing method is designed to generate a sharp central peak. It has
reduced side lobes and transition band is also narrow. Thus commonly used in FIR filter design.
Hamming Window: This windowing method generates a moderately sharp central peak. Its
ability to generate a maximally flat response makes it convenient for speech processing filtering.
Hanning Window: This windowing method generates a maximally flat filter design.
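As an illustration of window-based FIR design, the sketch below builds a length-31 linear-phase low-pass filter by multiplying the ideal (truncated) impulse response with a Hamming window; the cutoff and length are arbitrary choices, not taken from the notes. scipy.signal.firwin produces an equivalent result and is used as a cross-check.

```python
import numpy as np
from scipy.signal import firwin

N = 31                       # filter length (odd -> Type I linear phase); arbitrary choice
wc = 0.4 * np.pi             # desired cutoff in rad/sample; arbitrary choice
n = np.arange(N) - (N - 1) / 2.0

# Ideal low-pass impulse response hd(n) = sin(wc n)/(pi n), truncated and shifted
hd = (wc / np.pi) * np.sinc(wc / np.pi * n)

# Hamming window tapers the abrupt truncation (reduces Gibbs ripple)
h = hd * np.hamming(N)

# Cross-check against firwin (cutoff given as a fraction of the Nyquist frequency)
h_ref = firwin(N, wc / np.pi, window="hamming", scale=False)
print(np.allclose(h, h_ref))     # expect True
```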
Filters can be designed from their pole-zero plots. The following two constraints should be imposed while designing the filters.
1. All poles should be placed inside the unit circle in order for the filter to be stable. However, zeros can be placed anywhere in the z-plane. FIR filters are all-zero filters and hence are always stable. IIR filters are stable only when all poles of the filter are inside the unit circle.
2. All complex poles and zeros occur in complex conjugate pairs in order for the filter
coefficients to be real.
In the design of low pass filters, the poles should be placed near the unit circle at points
corresponding to low frequencies ( near ω=0)and zeros should be placed near or on unit circle at
points corresponding to high frequencies (near ω=Π). The opposite is true for high pass filters.
PROBLEMS
Solution:
UNIT-5
MULTIRATE SIGNAL PROCESSING
INTRODUCTION:
Multirate means "multiple sampling rates". A multirate DSP system uses multiple sampling rates within the system. Whenever a signal at one rate has to be used by a system that expects a different rate, the rate has to be increased or decreased, and some processing is required to do so. Therefore "multirate DSP" really refers to the art or science of changing sampling rates.
The most immediate reason is when you need to pass data between two systems which use incompatible sampling rates. For example, professional audio systems use a 48 kHz rate, but consumer CD players use 44.1 kHz; when audio professionals transfer their recorded music to CDs, they need to do a rate conversion. But the most common reason is that multirate DSP can greatly increase processing efficiency (even by orders of magnitude!), which reduces DSP system cost. This makes the subject of multirate DSP vital to all professional DSP practitioners.
APPLICATIONS:
1. Used in A/D and D/A converters.
2. Used to change the rate of a signal. When two devices that operate at different rates are to be
interconnected, it is necessary to use a rate changer between them.
3. In transmultiplexers
4. In speech processing to reduce the storage space or the transmitting rate of speech data.
5. Filter banks and wavelet transforms depend on multi rate methods.
DOWN SAMPLING:
y(m) = x(mM)
where y(m) is the down-sampled sequence, obtained by taking one sample from the data sequence x(n) for every M samples (discarding M − 1 samples out of every M samples). As an example, if the original sequence with sampling period T = 0.1 second (sampling rate = 10 samples per second) is given by
Consider x(n):8 7 4 8 9 6 4 2 –2 –5 –7 –7 –6 –4 …
and we down sample the data sequence by a factor of 3, we obtain the down sampled sequence
as
y(m):8 8 4 –5 –6 … ,
with the resultant sampling period T = 3 × 0.1 = 0.3 second (the sampling rate now is 3.33
samples per second).
From the Nyquist sampling theorem, it is known that aliasing can occur in the down sampled
signal due to the reduced sampling rate. After down sampling by a factor of M, the new sampling
period becomes MT, and therefore the new sampling frequency is fsM = 1/(MT) = fs/M, so that the new folding frequency is fsM/2 = fs/(2M).
This tells us that after down sampling by a factor of M, the new folding frequency will be
decreased M times. If the signal to be down sampled has frequency components larger than the
new folding frequency, f > fs/(2M), aliasing noise will be introduced into the down sampled data.
To overcome this problem, it is required that the original signal x(n) be processed by a low pass filter H(z) before down sampling, which should have a stop band edge at fs/(2M) Hz. The corresponding normalized stop band edge frequency is π/M radians.
In this way, before down sampling, we can guarantee that the maximum frequency of the filtered
signal satisfies fmax < fs/(2M),
such that no aliasing noise is introduced after down sampling. A general block diagram of
decimation is given in Figure, where the filtered output in terms of the z-transform can be written
as W(z) = H(z)X(z),
where X(z) is the z-transform of the sequence to be decimated,x(n), and H(z) is the lowpass filter
transfer function. After anti-aliasing filtering, the down sampled signal y(m) takes its value from
the filter output as: y(m) = w(mM).
The process of reducing the sampling rate by a factor of 3 is shown in Figure The corresponding
spectral plots for x(n),w(n), and y(m) in general are shown in Figure
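A sketch of decimation by M = 3 following the block diagram described above: an anti-aliasing low pass filter with cutoff fs/(2M), then keeping every Mth sample. The FIR filter here is a simple windowed-sinc design chosen for illustration; the data sequence is the one used in the notes.

```python
import numpy as np
from scipy.signal import firwin, lfilter

M = 3                                            # down-sampling factor
x = np.array([8, 7, 4, 8, 9, 6, 4, 2, -2, -5, -7, -7, -6, -4], dtype=float)

# Anti-aliasing low pass filter with cutoff fs/(2M), i.e. 1/M in Nyquist units
h = firwin(numtaps=31, cutoff=1.0 / M)

w = lfilter(h, 1.0, x)                           # W(z) = H(z) X(z)
y = w[::M]                                       # y(m) = w(mM)
print(y)

# Down-sampling without filtering (as in the notes' example): y(m) = x(mM)
print(x[::M])                                    # [ 8.  8.  4. -5. -6.]
```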
UP-SAMPLER:
y(m) = x(m/L) for m = 0, ±L, ±2L, …, and y(m) = 0 otherwise.
x(n) : 8 8 4 –5 –6 …
After up sampling the data sequence x(n) by a factor of 3 (adding L– 1 zeros for each sample),
we have the up sampled data sequence w(m) as:
w(m): 8 0 0 8 0 0 4 0 0 –5 0 0 –6 0 0 …
The next step is to smooth the up sampled data sequence via an interpolation filter. The process
is illustrated in Figure
Similar to the down sampling case, assuming that the data sequence has a current sampling period of T, the Nyquist frequency is given by fmax = fs/2. After up sampling by a factor of L, the new sampling period becomes T/L, and thus the new sampling frequency is fsL = L·fs.
This indicates that after up sampling, the spectral replicas originally centered at fs, 2fs, … are included in the frequency range from 0 Hz to the new Nyquist limit L·fs/2 Hz, as shown in the figure. To remove these spectral replicas (images), an interpolation filter with a stop band edge of fs/2 Hz must be attached; the corresponding normalized stop band edge is π/L radians.
After filtering via the interpolation filter, we achieve the desired spectrum for y(n), as shown in Figure 5.2.b. Note that since the interpolation filter removes the high-frequency images introduced by the up sampling operation, it is essentially an anti-imaging low pass filter.
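A sketch of interpolation by L = 3 following the description above: insert L − 1 zeros between samples, then apply an anti-imaging low pass filter with cutoff π/L (with gain L to restore the amplitude). The data sequence is the one used in the notes; the FIR filter length is an arbitrary choice.

```python
import numpy as np
from scipy.signal import firwin, lfilter

L = 3                                        # up-sampling factor
x = np.array([8, 8, 4, -5, -6], dtype=float)

# Insert L-1 zeros after each sample: w(m) = x(m/L) for m a multiple of L, else 0
w = np.zeros(L * len(x))
w[::L] = x
print(w)     # [ 8. 0. 0. 8. 0. 0. 4. 0. 0. -5. 0. 0. -6. 0. 0.]

# Anti-imaging (interpolation) filter: cutoff 1/L in Nyquist units of the new rate;
# gain L compensates for the energy spread over the inserted zeros
h = L * firwin(numtaps=31, cutoff=1.0 / L)
y = lfilter(h, 1.0, w)                       # smoothed (interpolated) output
print(y[:10])
```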
The anti-imaging filter and the anti-aliasing filter operate at the same sampling rate and hence can be replaced by a single low pass filter with cut-off frequency
ωc = min(π/I, π/D)
It is important to note that, in order to preserve the spectral characteristics of x(n), the interpolation has to be performed first and the decimation performed next.
Example: Show that the upsampler and down sampler are time variant systems.
Consider a factor of L upsampler defined by
y(n) = x(n/L)
The o/p due to delayed i/p is
y( n, k) = x(n/L - k)
the delayed output is
y(n-k) = x[(n-k)/L]
y(n ,k ) ≠ y(n-k)
therefore up sampler is a time variant systems.
Similarly for down sampler
Y(n) = x(nM)
y(n,k) = x(nM-k)
y(n-k) = x(M(n-k))
y(n ,k ) ≠ y(n-k)
Therefore down sampler is a time variant systems.
In digital signal processing, (B+1)-bit fixed-point numbers are usually represented as two's complement signed fractions in the format b₀.b₋₁b₋₂…b₋B, where b₀ is the sign bit and the number range is −1 ≤ X < 1. The advantage of this representation is that the product of two numbers in the range from −1 to 1 is another number in the same range. Floating-point numbers are represented as
X = (−1)^s · m · 2^c
where s is the sign bit, m is the mantissa, and c is the characteristic or exponent. To make the representation of a number unique, the mantissa is normalized so that 0.5 ≤ m < 1.
Although floating-point numbers are always represented in the form above, the way in which this representation is actually stored in a machine may differ. Since m ≥ 0.5, it is not necessary to store the 2⁻¹-weight bit of m, which is always set. Therefore, in practice numbers are usually stored as
X = (−1)^s (0.5 + f) 2^c
where f is an unsigned fraction in the range 0 ≤ f < 0.5. Most floating-point processors now use the IEEE Standard 754 32-bit floating-point format for storing numbers. According to this standard the exponent is stored as an unsigned integer p, where
p = c + 126
so that X = (−1)^s (0.5 + f) 2^{p−126}, where s is the sign bit, f is a 23-bit unsigned fraction in the range 0 ≤ f < 0.5, and p is an 8-bit unsigned integer in the range 0 < p < 255. The total number of bits is 1 + 23 + 8 = 32. For example, in IEEE format 3/4 is written (−1)⁰(0.5 + 0.25)2⁰, so s = 0, p = 126, and f = 0.25. The value X = 0 is a unique case and is represented by all bits zero (i.e., s = 0, f = 0, and p = 0). Although the 2⁻¹-weight mantissa bit is not actually stored, it does exist, so the mantissa effectively has 24 bits plus a sign bit.
In fixed-point arithmetic, a multiply doubles the number of significant bits. For example, the
product of the two 5-b numbers 0.0011 and 0.1001 is the 10-b number 00.000 110 11. The extra
bit to the left of the decimal point can be discarded without introducing any error. However, the
least significant four of the remaining bits must ultimately be discarded by some form of
quantization so that the result can be stored to 5 b for use in other calculations. In the example
above this results in 0.0010 (quantization by rounding) or 0.0001(quantization by truncating).
When a sum of products calculation is performed, the quantization can be performed either after
each multiply or after all products have been summed with double length precision.
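The sketch below reproduces this 5-bit example in Python: quantize_round and quantize_trunc keep B fractional bits by rounding and by truncation toward minus infinity (two's complement style) respectively.

```python
import numpy as np

def quantize_round(x, B):
    """Round x to the nearest multiple of 2^-B (quantization by rounding)."""
    return np.round(x * 2.0**B) / 2.0**B

def quantize_trunc(x, B):
    """Two's complement truncation: discard low-order bits (round toward -infinity)."""
    return np.floor(x * 2.0**B) / 2.0**B

# The 5-bit example from the text: 0.0011 (3/16) times 0.1001 (9/16)
prod = (3 / 16) * (9 / 16)                  # = 27/256 = 0.00011011 in binary
print(quantize_round(prod, 4))              # 0.125  -> 0.0010
print(quantize_trunc(prod, 4))              # 0.0625 -> 0.0001
```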
Since rounding selects the quantized value nearest the unquantized value, it gives a value which is never more than ±Δ/2 away from the exact value, where Δ = 2⁻B is the quantization step. If we denote the rounding error by ε_r, then −Δ/2 ≤ ε_r ≤ Δ/2.
Truncation simply discards the low-order bits, giving a quantized value that is always less than or equal to the exact value, so −Δ < ε_t ≤ 0.
Magnitude truncation chooses the nearest quantized value that has a magnitude less than or equal to the exact value, so −Δ < ε_mt < Δ.
The error resulting from quantization can be modeled as a random variable uniformly distributed over the appropriate error range. Therefore, calculations with roundoff error can be considered error-free calculations that have been corrupted by additive white noise. The mean of this noise for rounding is
m_εr = E{ε_r} = 0
where E{·} represents the operation of taking the expected value of a random variable. Similarly, the variance of the noise for rounding is
σ²_εr = E{(ε_r − m_εr)²} = Δ²/12
To determine the roundoff noise at the output of a digital filter we will assume that the noise due
to a quantization is stationary, white, and uncorrelated with the filter input, output, and internal
variables. This assumption is good if the filter input changes from sample to sample in a sufficiently complex manner. It is not valid for zero or constant inputs, for which the effects of rounding are analyzed from a limit cycle perspective.
To satisfy the assumption of a sufficiently complex input, roundoff noise in digital filters is often calculated for the case of a zero-mean white noise filter input signal x(n) of variance σ²_x. This simplifies calculation of the output roundoff noise because expected values of the form E{x(n)x(n−k)} are zero for k ≠ 0 and give σ²_x when k = 0. This approach to analysis has been found to give estimates of the output roundoff noise that are close to the noise actually observed for other input signals.
Another assumption that will be made in calculating roundoff noise is that the product of two quantization errors is zero. To justify this assumption, consider the case of a 16-bit fixed-point processor. In this case a quantization error is of the order 2⁻¹⁵, while the product of two quantization errors is of the order 2⁻³⁰, which is negligible by comparison.
If a linear system with impulse response g(n) is excited by white noise with mean m_x and variance σ²_x, the output is noise of mean
m_y = m_x ∑_{n=−∞}^{∞} g(n)
and variance
σ²_y = σ²_x ∑_{n=−∞}^{∞} g²(n)
Therefore, if g(n) is the impulse response from the point where a roundoff takes place to the filter output, the contribution of that roundoff to the variance (mean square value) of the output roundoff noise is given by the expression above with σ²_x replaced by the variance of the roundoff noise. If there is more than one source of roundoff error in the filter, it is assumed that the errors are uncorrelated, so the output noise variance is simply the sum of the contributions from each source.
A limit cycle, sometimes referred to as a multiplier roundoff limit cycle, is a low-level oscillation that can exist in an otherwise stable filter as a result of the nonlinearity associated with rounding (or truncating) internal filter calculations. Limit cycles require recursion to exist and do not occur in non-recursive FIR filters. As an example of a limit cycle, consider the second-order filter realized by
y(n) = Q_r{0.875 y(n−1) − 0.625 y(n−2) + x(n)}
where Q_r{·} represents quantization by rounding. This is a stable filter with poles at 0.4375 ± j0.6585. Consider the implementation of this filter with 4-bit (3 bits and a sign bit) two's complement fixed-point arithmetic, zero initial conditions (y(−1) = y(−2) = 0), and an impulse input x(n) applied at n = 0. The following sequence is obtained.
Notice that while the input is zero except for the first sample, the output oscillates with
amplitude 1/8 and period 6. Limit cycles are primarily of concern in fixed-point recursive filters.
As long as floating-point filters are realized as the parallel or cascade connection of first- and
second-order sub filters, limit cycles will generally not be a problem since limit cycles are
practically not observable in first and second-order systems implemented with 32-bit floating-
point arithmetic . It has been shown that such systems must have an extremely small margin of
stability for limit cycles to exist at anything other than underflow levels, which occur at extremely small amplitudes. There are at least three ways of dealing with limit cycles when fixed-
point arithmetic is used. One is to determine a bound on the maximum limit cycle amplitude,
expressed as an integral number of quantization steps . It is then possible to choose a word length
that makes the limit cycle amplitude acceptably low. Alternately, limit cycles can be prevented
by randomly rounding calculations up or down. However, this approach is complicated to
implement. The third approach is to properly choose the filter realization structure and then
quantize the filter calculations using magnitude truncation . This approach has the disadvantage
of producing more round off noise than truncation or rounding .
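A simulation sketch of the limit cycle example above. The difference equation y(n) = Qr{0.875 y(n−1) − 0.625 y(n−2) + x(n)} is consistent with the stated poles 0.4375 ± j0.6585; the impulse amplitude of 3/8 is an assumed value (the tabulated output sequence is not reproduced in this text). With rounding to multiples of 1/8, the output settles into the amplitude-1/8, period-6 oscillation described earlier.

```python
import numpy as np

def Qr(v, step=1/8):
    """Quantization by rounding to the nearest multiple of the quantization step."""
    return np.round(v / step) * step

N = 30
x = np.zeros(N)
x[0] = 3/8                      # assumed impulse amplitude (not stated explicitly in the notes)
y = np.zeros(N)

for n in range(N):
    y1 = y[n-1] if n >= 1 else 0.0          # zero initial conditions
    y2 = y[n-2] if n >= 2 else 0.0
    y[n] = Qr(0.875*y1 - 0.625*y2 + x[n])   # rounded second-order recursion

print(y)   # after a short transient: ..., 1/8, 1/8, 0, -1/8, -1/8, 0, 1/8, ... (period 6)
```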
With fixed-point arithmetic it is possible for filter calculations to overflow. This happens when
two numbers of the same sign add to give a value having magnitude greater than one. Since
numbers with magnitude greater than one are not representable, the result overflows. For
example, the two's complement numbers 0.101 (5/8) and 0.100 (4/8) add to give 1.001, which is the two's complement representation of −7/8.
The overflow characteristic of two's complement arithmetic can be represented as R{X}, where
R{X} = X − 2 for X ≥ 1,   R{X} = X + 2 for X < −1,   and R{X} = X otherwise,
i.e. the result wraps around within the interval [−1, 1).
An overflow oscillation, sometimes also referred to as an adder overflow limit cycle, is a high-
level oscillation that can exist in an otherwise stable fixed-point filter due to the gross
nonlinearity associated with the overflow of internal filter calculations .Like limit cycles,
overflow oscillations require recursion to exist and do not occur in non recursive FIR filters.
Overflow oscillations also do not occur with floating-point arithmetic due to the virtual
impossibility of overflow.
Quantization:
Total number of bits in x is reduced by using two methods namely Truncation and Rounding.
These are known as quantization Processes.
The quantized signal is stored in a b-bit register; since several nearby input values map to the same digital equivalent, an error is introduced. This is termed the input quantization error.
The multiplication of a b-bit number with another b-bit number results in a 2b-bit number, but it must be stored in a b-bit register. This is termed the product quantization error.
The analog-to-digital mapping of the filter coefficients (coefficient quantization) introduces error because stable poles located close to the jΩ axis may be mapped to unstable poles in the digital domain.
If the input is made zero, the output should decay to zero, but due to the quantization effect the system may keep oscillating within a certain band of values (zero-input limit cycles).
Overflow error occurs in addition due to the fact that the sum of two numbers may result in
overflow. To avoid overflow error saturation arithmetic is used.
Dead band:
The range of output values between which the system oscillates when the input is removed is termed the dead band of the filter. The output may settle to a fixed positive value or may oscillate between a positive and a negative value.
Signal scaling:
The inputs to the summer are scaled before the addition is executed so that no overflow occurs after the addition. A scaling factor s₀ is multiplied with the inputs to avoid overflow.