Real-Time
Digital Signal Processing
Real-Time Digital Signal Processing. Sen M Kuo, Bob H Lee
Copyright © 2001 John Wiley & Sons Ltd
ISBNs: 0-470-84137-0 (Hardback); 0-470-84534-1 (Electronic)
Real-Time
Digital Signal Processing
Implementations, Applications, and
Experiments with the TMS320C55X
Sen M Kuo
Northern Illinois University, DeKalb, Illinois, USA
Bob H Lee
Texas Instruments, Inc., Schaumburg, Illinois, USA
All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in
any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except under the
terms of the Copyright Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright
Licensing Agency, 90 Tottenham Court Road, London, W1P 9HE, UK, without the permission in writing of the
Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a
computer system, for exclusive use by the purchaser of the publication.
Neither the authors nor John Wiley & Sons Ltd accept any responsibility or liability for loss or damage occasioned to
any person or property through using the material, instructions, methods or ideas contained herein, or acting or
refraining from acting as a result of such use. The authors and Publisher expressly disclaim all implied warranties,
including merchantability or fitness for any particular purpose. There will be no duty on the authors or Publisher to
correct any errors or defects in the software.
Designations used by companies to distinguish their products are often claimed as trademarks. In all instances where
John Wiley & Sons is aware of a claim, the product names appear in initial capital or capital letters. Readers,
however, should contact the appropriate companies for more complete information regarding trademarks and
registration.
A catalogue record for this book is available from the British Library
Preface
Real-time digital signal processing (DSP) using general-purpose DSP processors is very
challenging work in today's engineering fields. It promises an effective way to design,
experiment, and implement a variety of signal processing algorithms for real-world
applications. With DSP penetrating into various applications, the demand for high-
performance digital signal processors has expanded rapidly in recent years. Many
industrial companies are currently engaged in real-time DSP research and development.
It is increasingly important for today's students and practicing engineers to master not only the theory of DSP, but also the skills of real-time DSP system design and implementation.
This book offers readers a hands-on approach to understanding real-time DSP
principles, system design and implementation considerations, real-world applications,
as well as many DSP experiments using MATLAB, C/C++, and the TMS320C55x. This
is a practical book about DSP and using digital signal processors for DSP applications.
This book is intended as a text for senior/graduate level college students with emphasis
on real-time DSP implementations and applications. This book can also serve as a
desktop reference for practicing engineers and embedded system programmers to learn
DSP concepts and to develop real-time DSP applications at work. We use a practical
approach that avoids a lot of theoretical derivations. Many useful DSP textbooks with
solid mathematical proofs are listed at the end of each chapter. To efficiently develop a
DSP system, the reader must understand DSP algorithms as well as basic DSP chip
architecture and programming. It is helpful to have several manuals and application
notes on the TMS320C55x from Texas Instruments at http://www.ti.com.
The DSP processor we will use as an example in this book is the TMS320C55x, the
newest 16-bit fixed-point DSP processor from Texas Instruments. To effectively illustrate
real-time DSP concepts and applications, MATLAB will be introduced for analysis and
filter design, C will be used for implementing DSP algorithms, and Code Composer Studio (CCS) for the TMS320C55x is integrated into lab experiments, projects, and applications. To efficiently utilize the advanced DSP architecture for fast software development and maintenance, the mixing of C and assembly programs is emphasized.
Chapter 1 reviews the fundamentals of real-time DSP functional blocks, DSP hard-
ware options, fixed- and floating-point DSP devices, real-time constraints, algorithm
development, selection of DSP chips, and software development. In Chapter 2, we
introduce the architecture and assembly programming of the TMS320C55x. Chapter
3 presents some fundamental DSP concepts in time domain and practical considerations
for the implementation of digital filters and algorithms on DSP hardware. Readers who
are familiar with these DSP fundamentals should be able to skip through some of these
sections. However, most notations used throughout the book will be defined in this
chapter. In Chapter 4, the Fourier series, the Fourier transform, the z-transform, and
the discrete Fourier transforms are introduced. Frequency analysis is extremely helpful
Availability of Software
The MATLAB, C, and assembly programs that implement many DSP examples and
applications are listed in the book. These programs along with many other programs
for DSP implementations and lab experiments are available in the software package
at http://www.ceet.niu.edu/faculty/kuo/books/rtdsp.html and http://pages.prodigy.net/
sunheel/web/dspweb.htm. Several real-world data files for some applications introduced
in the book also are included in the software package. The list of files in the software
package is given in Appendix D. It is not critical that you have this software as you read the book, but it will help you to gain insight into the implementation of DSP algorithms, and it will be required for the experiments in the last section of each chapter. Some of these
experiments involve minor modification of the example code. By examining, studying and
modifying the example code, the software can also be used as a prototype for other practical
applications. Every attempt has been made to ensure the correctness of the code. We would
appreciate readers bringing to our attention (kuo@ceet.niu.edu) any coding errors so that
we can correct and update the codes available in the software package on the web.
Acknowledgments
We are grateful to Maria Ho and Christina Peterson at Texas Instruments, and Naomi
Fernandes at Math Works, who provided the necessary support to write the book in a
short period. The first author thanks many of his students who have taken his DSP courses,
Senior Design Projects, and Master Thesis courses. He is indebted to Gene Frentz, Dr.
Qun S. Lin, and Dr. Panos Papamichalis of Texas Instruments, John Kronenburger of
Tellabs, and Santo LaMantia of Shure Brothers, for their support of DSP activities at
Northern Illinois University. He also thanks Jennifer Y. Kuo for the proofreading of the
book. The second author wishes to thank Robert DeNardo, David Baughman, and Chuck
Brokish of Texas Instruments, for their valuable inputs, help, and encouragement during
the course of writing this book. We would like to thank Peter Mitchell, editor at Wiley, for
his support of this project. We would also like to thank the staff at Wiley for the final preparation
of the book. Finally, we thank our parents and families for their endless love, encourage-
ment, and the understanding they have shown during the whole time.
Sen M. Kuo and Bob H. Lee
1
Introduction to Real-Time
Digital Signal Processing
1. Flexibility. Functions of a DSP system can be easily modified and upgraded with
software that has implemented the specific algorithm for using the same hardware.
One can design a DSP system that can be programmed to perform a wide variety of
tasks by executing different software modules. For example, a digital camera may
be easily updated (reprogrammed) from using JPEG (joint photographic experts group) image processing to a higher quality JPEG2000 image without actually
changing the hardware. In an analog system, however, the whole circuit design
would need to be changed.
3. Reliability. The memory and logic of DSP hardware do not deteriorate with age. Therefore the field performance of DSP systems will not drift with changing environmental conditions or aged electronic components as their analog counterparts do. However, the data size (wordlength) determines the accuracy of a DSP system. Thus the system performance might be different from the theoretical expectation.
With the rapid evolution in semiconductor technology in the past several years, DSP
systems have a lower overall cost compared to analog systems. DSP algorithms can be
developed, analyzed, and simulated using high-level language and software tools such as
C/C++ and MATLAB (matrix laboratory). The performance of the algorithms can be
verified using a low-cost general-purpose computer such as a personal computer (PC).
Therefore a DSP system is relatively easy to develop, analyze, simulate, and test.
There are limitations, however. For example, the bandwidth of a DSP system is
limited by the sampling rate and hardware peripherals. The initial design cost of a
DSP system may be expensive, especially when large bandwidth signals are involved.
For real-time applications, DSP algorithms are implemented using a fixed number of
bits, which results in a limited dynamic range and produces quantization and arithmetic
errors.
There are two types of DSP applications – non-real-time and real-time. Non-real-time
signal processing involves manipulating signals that have already been collected and
digitized. This may or may not represent a current action and the need for the result
is not a function of real time. Real-time signal processing places stringent demands
on DSP hardware and software design to complete predefined tasks within a certain
time frame. This chapter reviews the fundamental functional blocks of real-time DSP
systems.
The basic functional blocks of DSP systems are illustrated in Figure 1.1, where a real-world analog signal is converted to a digital signal, processed by DSP hardware in digital form, and converted back into an analog signal.

Figure 1.1 Basic functional block diagram of a real-time DSP system: input channels, DSP hardware, and output channels (DAC, reconstruction filter, amplifier), with connections to other digital systems

Each of the functional blocks in
Figure 1.1 will be introduced in the subsequent sections. For some real-time applica-
tions, the input data may already be in digital form and/or the output data may not need
to be converted to an analog signal. For example, the processed digital information may
be stored in computer memory for later use, or it may be displayed graphically. In other
applications, the DSP system may be required to generate signals digitally, such as
speech synthesis used for cellular phones or pseudo-random number generators for
CDMA (code division multiple access) systems.
In this book, a time-domain signal is denoted with a lowercase letter. For example, x(t) in Figure 1.1 is used to name an analog signal of x with a relationship to time t. The time variable t takes on a continuum of values between −∞ and +∞. For this reason we say x(t) is a continuous-time signal. In this section, we first discuss how to convert analog signals into digital signals so that they can be processed using DSP hardware. The process of changing an analog signal to a digital signal is called analog-to-digital (A/D) conversion. An A/D converter (ADC) is usually used to perform the signal conversion.
Once the input digital signal has been processed by the DSP device, the result, y(n), is still in digital form, as shown in Figure 1.1. In many DSP applications, we need to reconstruct the analog signal after the digital processing stage. In other words, we must convert the digital signal y(n) back to the analog signal y(t) before it is passed to an appropriate device. This process is called digital-to-analog (D/A) conversion, typically performed by a D/A converter (DAC). One example would be CD (compact disk) players, for which the music is in a digital form. The CD players reconstruct the analog waveform that we listen to. Because of the complexity of the sampling and synchronization processes, the cost of an ADC is usually considerably higher than that of a DAC.
For example, a microphone can be used to pick up sound signals. The sensor output, x′(t), is amplified by an amplifier with gain value g. The amplified signal is

x(t) = g x′(t).    (1.2.1)

The gain value g is determined such that x(t) has a dynamic range that matches the ADC. For example, if the peak-to-peak range of the ADC is ±5 volts (V), then g may be set so that the amplitude of the signal x(t) to the ADC is scaled between ±5 V. In practice, it is very difficult to set an appropriate fixed gain because the level of x′(t) may be unknown and changing with time, especially for signals with a larger dynamic range such as speech. Therefore an automatic gain controller (AGC) with time-varying gain determined by DSP hardware can be used to effectively solve this problem.
As shown in Figure 1.1, the ADC converts the analog signal x(t) into the digital signal sequence x(n). Analog-to-digital conversion, commonly referred to as digitization, consists of the sampling and quantization processes as illustrated in Figure 1.2. The sampling process depicts a continuously varying analog signal as a sequence of values. The basic sampling function can be done with a 'sample and hold' circuit, which maintains the sampled level until the next sample is taken. The quantization process approximates a waveform by assigning an actual number to each sample. Therefore an ADC consists of two functional blocks – an ideal sampler (sample and hold) and a quantizer (including an encoder). Analog-to-digital conversion carries out the following steps:

1. The bandlimited signal x(t) is sampled at uniformly spaced instants of time, nT, where n is a positive integer and T is the sampling period in seconds. This sampling process converts an analog signal into a discrete-time signal, x(nT), with continuous amplitude value.

2. The amplitude of each discrete-time sample is quantized into one of the 2^B levels, where B is the number of bits the ADC uses to represent each sample. The discrete amplitude levels are represented (or encoded) into distinct binary words x(n) with a fixed wordlength B. This binary sequence, x(n), is the digital signal for DSP hardware.
Figure 1.2 Block diagram of an A/D converter (an ideal sampler followed by a quantizer and encoder)
The reason for making this distinction is that each process introduces different distor-
tions. The sampling process brings in aliasing or folding distortions, while the encoding
process results in quantization noise.
1.2.3 Sampling
An ideal sampler can be considered as a switch that is periodically opened and closed every T seconds, and

fs = 1/T,    (1.2.2)
where fs is the sampling frequency (or sampling rate) in hertz (Hz, or cycles per second). The intermediate signal, x(nT), is a discrete-time signal with a continuous value (a number has infinite precision) at discrete time nT, n = 0, 1, . . ., ∞, as illustrated in Figure 1.3. The signal x(nT) is an impulse train with values equal to the amplitude of x(t) at time nT. The analog input signal x(t) is continuous in both time and amplitude. The sampled signal x(nT) is continuous in amplitude, but it is defined only at discrete points in time. Thus the signal is zero except at the sampling instants t = nT.
In order to represent an analog signal x(t) by a discrete-time signal x(nT) accurately, two conditions must be met:

1. The analog signal, x(t), must be bandlimited by the bandwidth of the signal fM.

2. The sampling frequency, fs, must be at least twice the maximum frequency component fM in the analog signal x(t). That is,

fs ≥ 2fM.    (1.2.3)
This is Shannon's sampling theorem. It states that when the sampling frequency is greater than twice the highest frequency component contained in the analog signal, the original signal x(t) can be perfectly reconstructed from the discrete-time samples x(nT). The sampling theorem provides a basis for relating a continuous-time signal x(t) with
Figure 1.3 Example of analog signal x(t) and discrete-time signal x(nT)
the discrete-time signal x(nT) obtained from the values of x(t) taken T seconds apart. It also provides the underlying theory for relating operations performed on the sequence to equivalent operations on the signal x(t) directly.
The minimum sampling frequency fs = 2fM is the Nyquist rate, while fN = fs/2 is the Nyquist frequency (or folding frequency). The frequency interval [−fs/2, fs/2] is called the Nyquist interval. When an analog signal is sampled at sampling frequency fs, frequency components higher than fs/2 fold back into the frequency range [0, fs/2]. This undesired effect is known as aliasing. That is, when a signal is sampled in violation of the sampling theorem, image frequencies are folded back into the desired frequency band. Therefore the original analog signal cannot be recovered from the sampled data. This undesired distortion can be clearly explained in the frequency domain, which will be discussed in Chapter 4. Another potential degradation is due to timing jitter on the sampling pulses for the ADC. This can be negligible if a higher precision clock is used.
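To make the folding effect concrete, the short C program below computes where an out-of-band sinusoid appears after sampling. It is only an illustrative sketch; the 8 kHz rate and the test frequencies are arbitrary choices, not an example from the book's software package.

#include <math.h>
#include <stdio.h>

/* Fold an input frequency f (Hz) into the baseband [0, fs/2] to show
   where it appears after sampling at fs (Hz). */
double aliased_frequency(double f, double fs)
{
    double r = fmod(f, fs);                /* fold by multiples of fs */
    if (r < 0.0) r += fs;
    return (r <= fs/2.0) ? r : fs - r;     /* reflect about fs/2      */
}

int main(void)
{
    double fs = 8000.0;                    /* assumed 8 kHz sampling  */
    printf("5000 Hz appears at %.0f Hz\n", aliased_frequency(5000.0, fs));
    printf("3000 Hz appears at %.0f Hz\n", aliased_frequency(3000.0, fs));
    return 0;
}

A 5 kHz component sampled at 8 kHz shows up at 3 kHz, inside the desired band, which is exactly the distortion the anti-aliasing filter described next is meant to prevent.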
For most practical applications, the incoming analog signal x(t) may not be bandlimited. Thus the signal has significant energies outside the highest frequency of interest, and may contain noise with a wide bandwidth. In other cases, the sampling
rate may be pre-determined for a given application. For example, most voice commu-
nication systems use an 8 kHz (kilohertz) sampling rate. Unfortunately, the maximum
frequency component in a speech signal is much higher than 4 kHz. Out-of-band signal
components at the input of an ADC can become in-band signals after conversion
because of the folding over of the spectrum of signals and distortions in the discrete
domain. To guarantee that the sampling theorem defined in Equation (1.2.3) can be
fulfilled, an anti-aliasing filter is used to band-limit the input signal. The anti-aliasing
filter is an analog lowpass filter with the cut-off frequency of

fc ≤ fs/2.    (1.2.4)
Ideally, an anti-aliasing filter should remove all frequency components above the
Nyquist frequency. In many practical systems, a bandpass filter is preferred in order
to prevent undesired DC offset, 60 Hz hum, or other low frequency noises. For example,
a bandpass filter with passband from 300 Hz to 3200 Hz is used in most telecommunica-
tion systems.
Since anti-aliasing filters used in real applications are not ideal filters, they cannot
completely remove all frequency components outside the Nyquist interval. Any fre-
quency components and noises beyond half of the sampling rate will alias into the
desired band. In addition, since the phase response of the filter may not be linear, the
components of the desired signal will be shifted in phase by amounts not proportional to
their frequencies. In general, the steeper the roll-off, the worse the phase distortion
introduced by a filter. To accommodate practical specifications for anti-aliasing filters,
the sampling rate must be higher than the minimum Nyquist rate. This technique is
known as oversampling. When a higher sampling rate is used, a simple low-cost anti-
aliasing filter with minimum phase distortion can be used.
Example 1.1: Given a sampling rate for a specific application, the sampling period can be determined by (1.2.2).

(c) In audio CDs, the sampling rate is fs = 44.1 kHz, thus T = 1/44 100 seconds ≈ 22.676 μs.
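The same calculation can be written as a few lines of C; the program below simply evaluates (1.2.2) for the CD sampling rate used in part (c). The printout format is our own choice.

#include <stdio.h>

int main(void)
{
    double fs = 44100.0;            /* CD sampling rate in Hz           */
    double T  = 1.0/fs;             /* sampling period from Eq. (1.2.2) */
    printf("T = %.3f microseconds\n", T*1.0e6);   /* about 22.676 us    */
    return 0;
}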
In the previous sections, we assumed that the sample values x(nT) are represented exactly with infinite precision. An obvious constraint of physically realizable digital systems is that sample values can only be represented by a finite number of bits. The fundamental distinction between discrete-time signal processing and DSP is the wordlength. The former assumes that discrete-time signal values x(nT) have infinite wordlength, while the latter assumes that digital signal values x(n) have only a limited wordlength of B bits.
We now discuss a method of representing the sampled discrete-time signal x(nT) as a binary number that can be processed with DSP hardware. This is the quantizing and encoding process. As shown in Figure 1.3, the discrete-time signal x(nT) has an analog amplitude (infinite precision) at time t = nT. To process or store this signal with DSP hardware, the discrete-time signal must be quantized to a digital signal x(n) with a finite number of bits. If the wordlength of an ADC is B bits, there are 2^B different values (levels) that can be used to represent a sample. The entire continuous amplitude range is divided into 2^B subranges. Amplitudes of the waveform that are in the same subrange are assigned the same amplitude value. Therefore quantization is a process that represents an analog-valued sample x(nT) with its nearest level that corresponds to the digital signal x(n). The discrete-time signal x(nT) is a sequence of real numbers using infinite bits, while the digital signal x(n) represents each sample value by a finite number of bits which can be stored and processed using DSP hardware.
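The rounding-to-the-nearest-level rule described above can be sketched in C as follows. The bipolar input range of ±1 V and the 3-bit wordlength used in main() are arbitrary illustrations; they are not tied to any particular ADC.

#include <math.h>
#include <stdio.h>

/* Quantize a sample x in [-vref, vref) to one of 2^B uniform levels and
   return the signed level index (a B-bit two's-complement code). */
int quantize(double x, double vref, int B)
{
    int    levels = 1 << B;                 /* 2^B quantization levels */
    double step   = 2.0*vref/levels;        /* quantization step size  */
    int    code   = (int)floor(x/step + 0.5);   /* nearest level       */

    if (code >  levels/2 - 1) code =  levels/2 - 1;   /* clamp to range */
    if (code < -levels/2)     code = -levels/2;
    return code;
}

int main(void)
{
    /* 3-bit example: 8 levels across +/-1 V, so the step is 0.25 V.    */
    printf("x =  0.30 V -> code %d\n", quantize( 0.30, 1.0, 3));
    printf("x = -0.80 V -> code %d\n", quantize(-0.80, 1.0, 3));
    return 0;
}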
The quantization process introduces errors that cannot be removed. For example, we can use two bits to define four equally spaced levels (00, 01, 10, and 11) to classify the signal into four subranges as illustrated in Figure 1.4. In this figure, the symbol 'o' represents the discrete-time signal x(nT), and the symbol '•' represents the digital signal x(n).
In Figure 1.4, the difference between the quantized number and the original value is defined as the quantization error, which appears as noise in the output. It is also called quantization noise. The quantization noise is assumed to be a random variable that is uniformly distributed within a quantization interval. If a B-bit quantizer is used, the signal-to-quantization-noise ratio (SNR) is approximated by (this will be derived in Chapter 3)

SNR ≈ 6B dB.    (1.2.5)
Figure 1.4 Two-bit quantization of the signal x(t), showing the four quantization levels (00, 01, 10, 11) and the quantization errors at the sampling instants 0, T, 2T, 3T
This is a theoretical maximum. When real input signals and converters are used, the
achievable SNR will be less than this value due to imperfections in the fabrication of
A/D converters. As a result, the effective number of bits may be less than the number
of bits in the ADC. However, Equation (1.2.5) provides a simple guideline for determin-
ing the required bits for a given application. For each additional bit, a digital signal has
about a 6-dB gain in SNR. For example, a 16-bit ADC provides about 96 dB SNR. The
more bits used to represent a waveform sample, the smaller the quantization noise will
be. If we had an input signal that varied between 0 and 5 V, using a 12-bit ADC, which has 4096 (2^12) levels, the least significant bit (LSB) would correspond to 1.22 mV resolution. An 8-bit ADC with 256 levels can only provide up to 19.5 mV resolution.
Obviously with more quantization levels, one can represent the analog signal more
accurately. The problems of quantization and their solutions will be further discussed in
Chapter 3.
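The numbers quoted above are easy to reproduce. The sketch below prints the LSB size for a 0 to 5 V input range and the approximate 6 dB-per-bit SNR of (1.2.5) for a few wordlengths; as noted, real converters will do somewhat worse.

#include <stdio.h>

int main(void)
{
    double full_scale = 5.0;          /* input range of 0 to 5 V         */
    int    bits[3] = { 8, 12, 16 };

    for (int i = 0; i < 3; i++) {
        int    B      = bits[i];
        long   levels = 1L << B;                     /* 2^B levels       */
        double lsb    = full_scale/(double)levels;   /* LSB size in V    */
        double snr    = 6.0*B;                       /* Eq. (1.2.5)      */
        printf("%2d-bit ADC: %5ld levels, LSB = %6.2f mV, SNR ~ %3.0f dB\n",
               B, levels, lsb*1000.0, snr);
    }
    return 0;
}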
If the uniform quantization scheme shown in Figure 1.4 can adequately represent
loud sounds, most of the softer sounds may be pushed into the same small value. This
means soft sounds may not be distinguishable. To solve this problem, a quantizer whose
quantization step size varies according to the signal amplitude can be used. In practice,
the non-uniform quantizer uses a uniform step size, but the input signal is compressed
first. The overall effect is identical to the non-uniform quantization. For example, the
logarithm-scaled input signal, rather than the input signal itself, will be quantized. After
processing, the signal is reconstructed at the output by expanding it. The process of
compression and expansion is called companding (compressing and expanding). For example, the μ-law (used in North America and parts of Northeast Asia) and A-law (used in Europe and most of the rest of the world) companding schemes are used in most digital communications.
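A minimal sketch of μ-law companding on normalized samples (|x| ≤ 1) is given below, using the standard value μ = 255. Real CODECs implement a segmented 8-bit approximation of this curve; the continuous formula here is only for illustration.

#include <math.h>
#include <stdio.h>

#define MU 255.0        /* mu-law constant used in North American systems */

/* Compress a normalized sample x in [-1, 1] (logarithmic scaling). */
double mulaw_compress(double x)
{
    double sign = (x < 0.0) ? -1.0 : 1.0;
    return sign*log(1.0 + MU*fabs(x))/log(1.0 + MU);
}

/* Expand a compressed value y back to the linear domain. */
double mulaw_expand(double y)
{
    double sign = (y < 0.0) ? -1.0 : 1.0;
    return sign*(pow(1.0 + MU, fabs(y)) - 1.0)/MU;
}

int main(void)
{
    double x = 0.01;                   /* a soft (small) sample          */
    double y = mulaw_compress(x);      /* boosted before quantization    */
    printf("x = %.4f, compressed = %.4f, expanded = %.4f\n",
           x, y, mulaw_expand(y));
    return 0;
}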
As shown in Figure 1.1, the input signal to DSP hardware may be a digital signal
from other DSP systems. In this case, the sampling rate of digital signals from other
digital systems must be known. The signal processing techniques called interpolation or
decimation can be used to increase or decrease the existing digital signals' sampling
rates. Sampling rate changes are useful in many applications such as interconnecting
DSP systems operating at different rates. A multirate DSP system uses more than one
sampling frequency to perform its tasks.
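Decimation by an integer factor M can be sketched as below: the signal is lowpass filtered (a crude moving average stands in for a proper anti-aliasing filter) and every Mth result is kept, so the output rate becomes fs/M. The factor, the filter, and the test data are illustrative only; interpolation works the other way around, inserting samples and then lowpass filtering.

#include <stdio.h>

#define N 12            /* number of input samples (illustrative)       */
#define M  3            /* decimation factor                            */

int main(void)
{
    int x[N] = { 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22 };
    int y[N/M];

    /* Crude lowpass (3-point average) followed by keeping every Mth
       sample; the output sampling rate becomes fs/M.                    */
    for (int n = 0; n < N/M; n++) {
        int k = n*M;
        int sum = x[k];
        if (k >= 1) sum += x[k-1];
        if (k >= 2) sum += x[k-2];
        y[n] = sum/3;
    }

    for (int n = 0; n < N/M; n++)
        printf("y[%d] = %d\n", n, y[n]);
    return 0;
}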
Most commercial DACs are zero-order-hold devices, which means they convert the binary input to an analog level and then simply hold that value for T seconds until the next sampling instant. Therefore the DAC produces a staircase-shaped analog waveform y′(t), which is shown as a solid line in Figure 1.5. The reconstruction (anti-imaging and smoothing) filter shown in Figure 1.1 smoothes the staircase-like output signal generated by the DAC. This analog lowpass filter may be the same as the anti-aliasing filter with cut-off frequency fc ≤ fs/2, which has the effect of rounding off the corners of the staircase signal and making it smoother, which is shown as a dotted line in Figure 1.5.
High quality DSP applications, such as professional digital audio, require the use of
reconstruction filters with very stringent specifications.
From the frequency-domain viewpoint (will be presented in Chapter 4), the output of
the DAC contains unwanted high frequency or image components centered at multiples
of the sampling frequency. Depending on the application, these high-frequency compon-
ents may cause undesired side effects. Take an audio CD player for example. Although
the image frequencies may not be audible, they could overload the amplifier and cause
inter-modulation with the desired baseband frequency components. The result is an
unacceptable degradation in audio signal quality.
The ideal reconstruction filter has a flat magnitude response and linear phase in the
passband extending from the DC to its cut-off frequency and infinite attenuation in
the stopband. The roll-off requirements of the reconstruction filter are similar to those
of the anti-aliasing filter. In practice, switched capacitor filters are preferred because of
their programmable cut-off frequency and physical compactness.
There are two basic ways of connecting A/D and D/A converters to DSP devices: serial
and parallel. A parallel converter receives or transmits all the B bits in one pass, while
the serial converters receive or transmit B bits in a serial data stream. Converters with
parallel input and output ports must be attached to the DSP's address and data buses,
which are also attached to many different types of devices. With different memory
devices (RAM, EPROM, EEPROM, or flash memory) at different speeds hanging on
DSP's data bus, driving the bus may become a problem. Serial converters can be
connected directly to the built-in serial ports of DSP devices. This is why many practical
DSP systems use serial ADCs and DACs.
Many applications use a single-chip device called an analog interface chip (AIC) or
coder/decoder (CODEC), which integrates an anti-aliasing filter, an ADC, a DAC, and a
reconstruction filter all on a single piece of silicon. Typical applications include modems,
speech systems, and industrial controllers. Many standards that specify the nature of the
CODEC have evolved for the purposes of switching and transmission. These devices
usually use a logarithmic quantizer, i.e., A-law or μ-law, which must be converted into a linear format for processing. The availability of inexpensive companded CODECs justifies their use as front-end devices for DSP systems. DSP chips implement this format
conversion in hardware or in software by using a table lookup or calculation.
The most popular commercially available ADCs are successive approximation, dual
slope, flash, and sigma-delta. The successive-approximation ADC produces a B-bit
output in B cycles of its clock by comparing the input waveform with the output of a
digital-to-analog converter. This device uses a successive-approximation register to split
the voltage range in half in order to determine where the input signal lies. According to
the comparator result, one bit will be set or reset each time. This process proceeds
from the most significant bit (MSB) to the LSB. The successive-approximation type of
ADC is generally accurate and fast at a relatively low cost. However, its ability to follow
changes in the input signal is limited by its internal clock rate, so that it may be slow to
respond to sudden changes in the input signal.
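The bit-by-bit search performed by a successive-approximation converter can be modeled with a few lines of C. The comparator and the internal DAC are idealized floating-point operations here, and the 8-bit wordlength and 5 V reference are arbitrary; the sketch only illustrates the register logic described above.

#include <stdio.h>

/* Model a B-bit successive-approximation conversion of vin, assuming a
   unipolar input range of [0, vref). One bit is decided per clock cycle. */
unsigned int sar_adc(double vin, double vref, int B)
{
    unsigned int code = 0;

    for (int bit = B - 1; bit >= 0; bit--) {         /* MSB to LSB       */
        unsigned int trial = code | (1u << bit);     /* try this bit     */
        double dac_out = vref*(double)trial/(double)(1u << B);
        if (vin >= dac_out)                          /* comparator test  */
            code = trial;                            /* keep the bit set */
    }
    return code;
}

int main(void)
{
    unsigned int code = sar_adc(3.1, 5.0, 8);        /* 3.1 V, 8 bits    */
    printf("code = %u (ideal value is about %d)\n",
           code, (int)(3.1/5.0*256.0));
    return 0;
}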
The dual-slope ADC uses an integrator connected to the input voltage and a reference
voltage. The integrator starts at zero condition, and it is charged for a limited time. The
integrator is then switched to a known negative reference voltage and charged in the
opposite direction until it reaches zero volts again. At the same time, a digital counter
starts to record the clock cycles. The number of counts required for the integrator
output voltage to get back to zero is directly proportional to the input voltage. This
technique is very precise and can produce ADCs with high resolution. Since the
integrator is used for input and reference voltages, any small variations in temperature
and aging of components have little or no effect on these types of converters. However,
they are very slow and generally cost more than successive-approximation ADCs.
A voltage divider made by resistors is used to set reference voltages at the flash ADC
inputs. The major advantage of a flash ADC is its speed of conversion, which is simply
the propagation delay time of the comparators. Unfortunately, a B-bit ADC needs 2^B − 1 comparators and laser-trimmed resistors. Therefore commercially available flash ADCs usually have fewer bits.
The block diagram of a sigma-delta ADC is illustrated in Figure 1.6. Sigma-delta ADCs use a 1-bit quantizer with a very high sampling rate. Thus the requirements for an
anti-aliasing filter are significantly relaxed (i.e., the lower roll-off rate and smaller flat
response in passband). In the process of quantization, the resulting noise power is spread
evenly over the entire spectrum. As a result, the noise power within the band of interest is
lower. In order to match the output frequency with the system and increase its resolution,
a decimator is used. The advantages of the sigma-delta ADCs are high resolution and good noise characteristics at a competitive price because they use digital filters.
Figure 1.6 Block diagram of a sigma-delta ADC: the difference (delta) between the analog input and the 1-bit DAC feedback is integrated (sigma), quantized by a 1-bit ADC, and the resulting 1-bit stream is converted to a B-bit digital output by a decimator
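The loop in Figure 1.6 can be simulated with a first-order model, as sketched below: the difference between the input and the 1-bit DAC output is integrated, quantized to one bit, and the bit stream is averaged by a simple decimator. The single integrator, the averaging decimator, and the oversampling ratio of 64 are simplifications for illustration; practical sigma-delta converters use higher-order loops and multistage decimation filters.

#include <stdio.h>

#define OSR 64          /* oversampling ratio (illustrative)             */

int main(void)
{
    double x = 0.3;             /* constant analog input in [-1, 1]      */
    double integrator = 0.0;
    double feedback   = 0.0;
    int    sum = 0;

    /* Produce one decimated output: run the 1-bit loop OSR times and
       average the bit stream.                                            */
    for (int n = 0; n < OSR; n++) {
        integrator += x - feedback;              /* delta, then sigma     */
        int bit = (integrator >= 0.0) ? 1 : 0;   /* 1-bit quantizer       */
        feedback = bit ? 1.0 : -1.0;             /* 1-bit feedback DAC    */
        sum += bit ? 1 : -1;
    }
    printf("decimated output = %.3f (input was %.3f)\n",
           (double)sum/OSR, x);
    return 0;
}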
As shown in Figure 1.1, the processing of the digital signal x(n) is carried out using the DSP hardware. Although it is possible to implement DSP algorithms on any digital
DSP hardware. Although it is possible to implement DSP algorithms on any digital
computer, the throughput (processing rate) determines the optimum hardware plat-
form. Four DSP hardware platforms are widely used for DSP applications:
3. digital building blocks (DBB) such as multiplier, adder, program controller, and
Figure 1.7 Different memory architectures: (a) Harvard architecture, and (b) von Neumann architecture
standard development tools support, and the lack of reprogramming flexibility sometimes outweigh their benefits.
Digital building blocks offer a more general-purpose approach to high-speed DSP
design. These components, including multipliers, arithmetic logic units (ALUs), sequen-
cers, etc., are joined together to build a custom DSP architecture for a specific applica-
tion. Performance can be significantly higher than general-purpose DSP devices.
However, the disadvantages are similar to those of the special-purpose DSP devices –
lack of standard design tools, extended design cycles, and high component cost.
General architectures for computers and microprocessors fall into two categories:
Harvard architecture and von Neumann architecture. Harvard architecture has a
separate memory space for the program and the data, so that both memories can be
accessed simultaneously, see Figure 1.7(a). The von Neumann architecture assumes that
there is no intrinsic difference between the instructions and the data, and that the
instructions can be partitioned into two major fields containing the operation command
and the address of the operand. Figure 1.7(b) shows the memory architecture of the von
Neumann model. Most general-purpose microprocessors use the von Neumann archi-
tecture. Operations such as add, move, and subtract are easy to perform. However,
complex instructions such as multiplication and division are slow since they need a
series of shift, addition, or subtraction operations. These devices do not have the
architecture or the on-chip facilities required for efficient DSP operations. They may
be used when a small amount of signal processing work is required in a much larger
system. Their real-time DSP performance does not compare well with even the cheaper
general-purpose DSP devices, and they would not be a cost-effective solution for many
DSP tasks.
A DSP chip (digital signal processor) is basically a microprocessor whose architecture
is optimized for processing specific operations at high rates. DSP chips with architec-
tures and instruction sets specifically designed for DSP applications have been launched
by Texas Instruments, Motorola, Lucent Technologies, Analog Devices, and many
other companies. The rapid growth and the exploitation of DSP semiconductor tech-
nology are not a surprise, considering the commercial advantages in terms of the fast,
flexible, and potentially low-cost design capabilities offered by these devices. General-
purpose-programmable DSP chip developments are supported by software develop-
ment tools such as C compilers, assemblers, optimizers, linkers, debuggers, simulators,
and emulators. Texas Instruments' TMS320C55x, a programmable, high efficiency, and
ultra low-power DSP chip, will be discussed in the next chapter.
of the DSP device itself. Floating-point DSP chips also allow the efficient use of
the high-level C compilers and reduce the need to identify the system's dynamic range.
A limitation of DSP systems for real-time applications is that the bandwidth of the
system is limited by the sampling rate. The processing speed determines the rate at
which the analog signal can be sampled. For example, a real-time DSP system demands
that the signal processing time, tp , must be less than the sampling period, T, in order to
complete the processing task before the new sample comes in. That is,
tp < T.    (1.3.1)
This real-time constraint limits the highest frequency signal that can be processed by a DSP system. This is given as

fM ≤ fs/2 < 1/(2tp).    (1.3.2)
It is clear that the longer the processing time tp , the lower the signal bandwidth fM .
Although new and faster DSP devices are introduced, there is still a limit to the
processing that can be done in real time. This limit becomes even more apparent when
system cost is taken into consideration. Generally, the real-time bandwidth can be
increased by using faster DSP chips, simplified DSP algorithms, optimized DSP pro-
grams, and parallel processing using multiple DSP chips, etc. However, there is still a trade-off between cost and system performance, with many applications simply not being economical at present.
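A quick numeric check of (1.3.1) and (1.3.2): given an estimated per-sample processing time tp, the sketch below reports the sampling rate and signal bandwidth that can be supported in real time. The 20 μs value of tp is an arbitrary example.

#include <stdio.h>

int main(void)
{
    double tp = 20.0e-6;            /* assumed processing time: 20 us    */
    double fs_max = 1.0/tp;         /* need T > tp, so fs < 1/tp         */
    double fM_max = fs_max/2.0;     /* bandwidth limit from Eq. (1.3.2)  */

    printf("tp = %.1f us -> fs below %.1f kHz, fM below %.1f kHz\n",
           tp*1.0e6, fs_max/1000.0, fM_max/1000.0);
    return 0;
}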
The algorithm for a given application is initially described using difference equations or
signal-flow block diagrams with symbolic names for the inputs and outputs. In docu-
menting the algorithm, it is sometimes helpful to further clarify which inputs and
outputs are involved by means of a data flow diagram. The next stage of the develop-
ment process is to provide more details on the sequence of operations that must be
performed in order to derive the output from the input. There are two methods for
characterizing the sequence of steps in a program: flowcharts or structured descriptions.
Figure 1.8 DSP system design flow: the application is partitioned into software (architecture, coding and debugging) and hardware (schematic, prototype) development paths, driven by the DSP algorithms
Figure 1.9 DSP software development on a general-purpose computer: MATLAB or C/C++ DSP software running with input and output data files, signal generators, ADC/DAC, and links to other computers
At the algorithm development stage, we most likely work with high-level DSP tools
(such as MATLAB or C/C++) that enable algorithmic-level system simulations. We
then migrate the algorithm to software, hardware, or both, depending on our specific
needs. A DSP application or algorithm can be first simulated using a general-purpose
computer, such as a PC, so that it can be analyzed and tested off-line using simulated
input data. A block diagram of general-purpose computer implementation is illustrated
in Figure 1.9. The test signals may be internally generated by signal generators or
digitized from an experimental setup based on the given application. The program uses
the stored signal samples in data file(s) as input(s) to produce output signals that will be
saved in data file(s).
1. Using high-level languages such as MATLAB, C/C++, or other DSP software packages can significantly save algorithm and software development time. In addition, C programs are portable to different DSP hardware platforms.
3. Input/output operations based on disk files are simple to implement and the behav-
iors of the system are easy to analyze.
A choice of DSP chip from many available devices requires a full understanding of the
processing requirements of the DSP system under design. The objective is to select the
device that meets the project's time-scales and provides the most cost-effective solution.
Some decisions can be made at an early stage based on computational power, resolu-
tion, cost, etc. In real-time DSP, the efficient flow of data into and out of the processor
is also critical. However, these criteria will probably still leave a number of candidate
devices for further analysis. For high-volume applications, the cheapest device that can
do the job should be chosen. For low- to medium-volume applications, there will be a
trade-off between development time, development tool cost, and the cost of the DSP
device itself. The likelihood of having higher-performance devices with upwards-
compatible software in the future is also an important factor.
When processing speed is at a premium, the only valid comparison between devices is
on an algorithm-implementation basis. Optimum code must be written for both devices
and then the execution time must be compared. Other important factors are memory size
and peripheral devices, such as serial and parallel interfaces, which are available on-chip.
In addition, a full set of development tools and supports are important for DSP chip
selection, including:
2. Commercially available DSP boards for software development and testing before
the target DSP hardware is available.
The four common measures of good DSP software are reliability, maintainability,
extensibility, and efficiency. A reliable program is one that seldom (or never) fails.
Since most programs will occasionally fail, a maintainable program is one that is easy
to fix. A truly maintainable program is one that can be fixed by someone other than
the original programmer. In order for a program to be truly maintainable, it must be
portable on more than one type of hardware. An extensible program is one that can
be easily modified when the requirements change, new functions need to be added, or
new hardware features need to be exploited. An efficient DSP program will use the
processing capabilities of the target hardware to minimize execution time.
A program is usually tested in a finite number of ways much smaller than the number
of input data conditions. This means that a program can be considered reliable only
after years of bug-free use in many different environments. A good DSP program often
contains many small functions with only one purpose, which can be easily reused by
other programs for different purposes. Programming tricks should be avoided at all
costs as they will often not be reliable and will almost always be difficult for someone
else to understand even with lots of comments. In addition, use variable names that are
meaningful in the context of the program.
As shown in Figure 1.8, the hardware and software design can be conducted at the same time for a given DSP application. Since there are many interdependent factors between hardware and software, the ideal DSP designer will be a true 'system' engineer, capable of understanding issues with both hardware and software. The cost of hardware has gone down dramatically in recent years. The majority of the cost of a DSP solution now resides in software development. This section discusses some issues regarding software development.
The software life cycle involves the completion of a software project: the project
definition, the detailed specification, coding and modular testing, integration, and
maintenance. Software maintenance is a significant part of the cost of a software
system. Maintenance includes enhancing the software, fixing errors identified as the
software is used, and modifying the software to work with new hardware and software.
It is essential to document programs thoroughly with titles and comment statements
because this greatly simplifies the task of software maintenance.
As discussed earlier, good programming technique plays an essential part in successful DSP applications. A structured and well-documented approach to programming
should be initiated from the beginning. It is important to develop an overall specifica-
tion for signal processing tasks prior to writing any program. The specification includes
the basic algorithm/task description, memory requirements, constraints on the program
size, execution time, etc. Specification review is an important component of the software
development process. A thoroughly reviewed specification can catch mistakes before
code is written and reduce the risk of code rework at the system integration stage. The
potential use of subroutines for repetitive processes should also be noted. A flow
diagram will be a very helpful design tool to adopt at this stage. Program and data
blocks should be allocated to specific tasks that optimize data access time and address-
ing functions.
A software simulator or a hardware platform can be used for testing DSP code.
Software simulators run on a host computer to mimic the behavior of a DSP chip. The
simulator is able to show memory contents, all the internal registers, I/O, etc., and the
effect on these after each instruction is performed. Input/output operations are simu-
lated using disk files, which require some format conversion. This approach confines the development process to software design only. Full real-time emulators are normally
used when the software is to be tested on prototype target hardware.
Writing and testing DSP code is a highly iterative process. With the use of a simulator
or an evaluation board, code may be tested regularly as it is written. Writing code in
modules or sections can help this process, as each module can be tested individually,
with a greater chance of the whole system working at the system integration stage.
There are two commonly used methods in developing software for DSP devices: an assembly program or a C/C++ program. Assembly language is one step removed from the machine code actually used by the processor. Programming in assembly language gives the engineers full control of processor functions, thus resulting in the most efficient program for mapping the algorithm by hand. However, this is a very time-consuming and laborious task, especially for today's highly parallel DSP architectures. A C program is easier for software upgrades and maintenance. However, the machine code generated by a C compiler is inefficient in both processing speed and memory usage. Recently, DSP manufacturers have improved C compiler efficiency dramatically.
Often the ideal solution is to work with a mixture of C and assembly code. The overall
program is controlled by C code and the run-time critical loops are written in assembly
language. In a mixed programming environment, an assembly routine may be either
called as a function, or in-line coded into the C program. A library of hand-optimized
functions may be built up and brought into the code when required. The fundamentals
of C language for DSP applications will be introduced in Appendix C, while the
assembly programming for the TMS320C55x will be discussed in Chapter 2. Mixed C
and assembly programming will be introduced in Chapter 3. Alternatively, there are
many high-level system design tools that can automatically generate an implementation
in software, such as C and assembly language.
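The mixed-programming style can be sketched as follows: the control code stays in C, and a time-critical routine is declared with a C-callable prototype so that a hand-optimized assembly version can later replace it at link time. The function name and prototype below are hypothetical, and a plain C reference version is supplied so the sketch is complete; the actual C55x assembly interface is introduced in Chapters 2 and 3.

/* Control code in C; the loop below is a candidate for replacement by a
   hand-optimized assembly routine with the same (hypothetical) prototype. */
#include <stdio.h>

extern void vec_scale(int *y, const int *x, int gain, unsigned int n);

/* C reference version, used until an assembly implementation is linked in. */
void vec_scale(int *y, const int *x, int gain, unsigned int n)
{
    for (unsigned int i = 0; i < n; i++)
        y[i] = gain*x[i];
}

int main(void)
{
    int x[4] = { 1, -2, 3, -4 };
    int y[4];

    vec_scale(y, x, 0x20, 4);       /* same gain value as in exp1.c        */
    printf("%d %d %d %d\n", y[0], y[1], y[2], y[3]);
    return 0;
}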
Software tools are computer programs that have been written to perform specific
operations. Most DSP operations can be categorized as being either analysis tasks
or filtering tasks. Signal analysis deals with the measurement of signal properties.
MATLAB is a powerful environment for signal analysis and visualization, which are
critical components in understanding and developing a DSP system. Signal filtering,
such as removal of unwanted background noise and interference, is usually a time-
domain operation. C programming is an efficient tool for performing signal filtering
and is portable over different DSP platforms.
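As a small instance of time-domain filtering in portable C, the sketch below smooths a noisy sequence with a moving-average (FIR) filter. The filter length and the test data are arbitrary illustrations, not part of the book's software package.

#include <stdio.h>

#define NTAPS 4         /* moving-average filter length (illustrative)    */
#define NSAMP 8         /* number of input samples                        */

int main(void)
{
    double x[NSAMP] = { 1.0, 1.2, 0.9, 1.1, 5.0, 1.0, 1.1, 0.9 };
    double y[NSAMP];

    /* y[n] is the average of the NTAPS most recent input samples.         */
    for (int n = 0; n < NSAMP; n++) {
        double acc = 0.0;
        for (int k = 0; k < NTAPS; k++)
            if (n - k >= 0)
                acc += x[n-k];
        y[n] = acc/NTAPS;
    }

    for (int n = 0; n < NSAMP; n++)
        printf("y[%d] = %.3f\n", n, y[n]);
    return 0;
}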
In general, there are two different types of data files: binary files and ASCII (text)
files. A binary file contains data stored in a memory-efficient binary format, whereas an
ASCII file contains information stored in ASCII characters. A binary file may be
viewed as a sequence of characters, each addressable as an offset from the first position
in the file. The system does not add any special characters to the data except null
characters appended at the end of the file. Binary files are preferable for data that is
going to be generated and used by application programs. ASCII files are necessary if the
Figure 1.10 Program development flow: a C program (source) is translated by the C compiler into machine code (object), combined with libraries by the linker/loader, and executed with data to produce the program output
The Code Composer Studio (CCS) is a useful utility that allows users to create, edit, build, debug, and analyze DSP programs. The CCS development environment supports several Texas Instruments DSP processors, including the TMS320C55x. For building
applications, the CCS provides a project manager to handle the programming tasks.
For debugging purposes, it provides breakpoint, variable watch, memory/register/stack
viewing, probe point to stream data to and from the target, graphical analysis, execution
profiling, and the capability to display mixed disassembled and C instructions. One
important feature of the CCS is its ability to create and manage large projects from a
graphic-user-interface environment. In this section, we will use a simple sinewave
example to introduce the basic built-in editing features, major CCS components, and
the use of the C55x development tools. We will also demonstrate simple approaches to
software development and debugging process using the TMS320C55x simulator. The
CCS version 1.8 was used in this book.
Installation of the CCS on a PC or a workstation is detailed in the Code Composer
Studio Quick Start Guide [8]. If the C55x simulator has not been installed, use the
CCS setup program to configure and set up the TMS320C55x simulator. We can start
the CCS setup utility, either from the Windows start menu, or by clicking the Code
Composer Studio Setup icon. When the setup dialogue box is displayed as shown in
Figure 1.11(a), follow these steps to set up the simulator:
– Choose Install a Device Driver and select the C55x simulator device driver, tisimc55.dvr, for the TMS320C55x simulator. The C55x simulator will appear in the middle window named Available Board/Simulator Types if the installation is successful, as shown in Figure 1.11(b).

– Drag the C55x simulator from the Available Board/Simulator Types window to the System Configuration window and save the change. When the system configuration is completed, the window label will be changed to Available Processor Types as shown in Figure 1.11(c).
This experiment introduces the basic features to build a project with the CCS. The
purposes of the experiment are to:
Let us begin with the simple sinewave example to get familiar with the TMS320C55x
simulator. In this book, we assume all the experiment files are stored on a disk in the
computer's A drive to make them portable for users, especially for students who may
share the laboratory equipment.
Figure 1.11 CCS setup dialogue boxes: (a) install the C55x simulator driver, (b) drag the C55x simulator to the system configuration window, and (c) save the configuration
The best way to learn a new software tool is by using it. This experiment is partitioned
into the following six steps:
#define BUF_SIZE 40

const int sineTable[BUF_SIZE] =
{0x0000,0x000f,0x001e,0x002d,0x003a,0x0046,0x0050,0x0059,
 0x005f,0x0062,0x0063,0x0062,0x005f,0x0059,0x0050,0x0046,
 0x003a,0x002d,0x001e,0x000f,0x0000,0xfff1,0xffe2,0xffd3,
 0xffc6,0xffba,0xffb0,0xffa7,0xffa1,0xff9e,0xff9d,0xff9e,
 0xffa1,0xffa7,0xffb0,0xffba,0xffc6,0xffd3,0xffe2,0xfff1};

int in_buffer[BUF_SIZE];
int out_buffer[BUF_SIZE];
int Gain;

void main()
{
    int i,j;

    Gain = 0x20;

    while (1)
    {   /* <- set profile point on this line */
        for (i = BUF_SIZE-1; i >= 0; i--)
        {
            j = BUF_SIZE-1-i;
            out_buffer[j] = 0;
            in_buffer[j] = 0;
        }
        for (i = BUF_SIZE-1; i >= 0; i--)
        {
            j = BUF_SIZE-1-i;
            in_buffer[i] = sineTable[i];      /* <- set breakpoint */
            in_buffer[i] = 0 - in_buffer[i];
            out_buffer[j] = Gain*in_buffer[i];
        }
    }   /* <- set probe and profile points on this line */
}
pre-calculated sinewave values from a table, negates, and stores the values in a
reversed order to an output buffer. Note that the program exp1.c is included in
the experimental software package.
However, it is recommended that we create this program with the editor to get
familiar with the CCS editing functions.
– the working directory. For programs written in C language, it requires using the
run-time support library, rts55.lib for DSP system initialization. This can be
done by selecting Libraries under Category in the Linker dialogue box, and enter
the C55x run-time support library, rts55.lib. We can also specify different
directories to store the output executable file and map file. Figure 1.13 shows an
example of how to set the search paths for compiler, assembler, or linker.
The CCS has extended traditional DSP code generation tools by integrating a set of
editing, emulating, debugging, and analyzing capabilities in one entity. In this section of
the experiment, we will introduce some DSP program building steps and software
debugging capabilities including:
For a more detailed description of the CCS features and sophisticated configuration
settings, please refer to Code Composer Studio User's Guide [7].
Like most editors, the standard tool bar in Figure 1.12 allows users to create and open
files, cut, copy, and paste texts within and between files. It also has undo and re-do
capabilities to aid file editing. Finding or replacing texts can be done within one file or in
different files. The CCS built-in context-sensitive help menu is also located in the
standard toolbar menu. More advanced editing features are in the edit toolbar menu,
refer to Figure 1.12. It includes mark to, mark next, find match, and find next open
parenthesis capabilities for C programs. The features of out-indent and in-indent can be
used to move a selected block of text horizontally. There are four bookmarks that allow
users to create, remove, edit, and search bookmarks.
26 INTRODUCTION TO REAL-TIME DIGITAL SIGNAL PROCESSING
The project environment contains a C compiler, assembler, and linker for users to
build projects. The project toolbar menu (see Figure 1.12) gives users different choices
while working on projects. The compile only, incremental build, and build all functions
allow users to build the program more efficiently. Breakpoints permit users to set stop
points in the program and halt the DSP whenever the program executes at those
breakpoint locations. Probe points are used to transfer data files in and out of pro-
grams. The profiler can be used to measure the execution time of the program. It
provides program execution information, which can be used to analyze and identify
critical run-time blocks of the program. Both the probe point and profile will be
discussed in detail in the next section.
The debug toolbar menu illustrated in Figure 1.12 contains several step operations:
single step, step into a function, step over a function, and step out from a function back
to its caller function. It can also perform the run-to-cursor operation, which is a very
convenient feature that allows users to step through the code. The next three hot
buttons in the debug tool bar are run, halt, and animate. They allow users to execute,
stop, and animate the program at anytime. The watch-windows are used to monitor
variable contents. DSP CPU registers, data memory, and stack viewing windows
provide additional information for debugging programs. More custom options are
available from the pull-down menus, such as graphing data directly from memory
locations.
When we are developing and testing programs, we often need to check the values of
variables during program execution. In this experiment, we will apply debugging
settings such as breakpoints, step commands, and watch-window to understand the
CCS. The experiment can be divided into the following four steps.
and click the Toggle Breakpoint toolbar button (or press <F9>).
CPU register by double clicking on it. Right click on the CPU Register window
and select Allow Docking. We can now move and resize the window. Try
to change the temporary register T0 and accumulator AC0 to T0 = 0x1234 and AC0 = 0x56789ABC.
– On the CCS menu bar click Tools → Command Window to add the Command Window. We can resize and dock it as in the previous step. The command window will appear each time we rebuild the project.

– We can customize the CCS display and settings using the workspace feature. To save a workspace, click File → Workspace → Save Workspace and give the workspace a name. When we restart CCS, we can reload that workspace by clicking File → Workspace → Load Workspace and selecting the proper workspace filename.

– Click View → Dis-Assembly on the menu bar to see the disassembly window. Every time we reload an executable file, the disassembly window will appear automatically.
4. Resource monitoring:
– From View → Watch Window, open the Watch Window area. At run time, this area shows the values of watched variables. Right-click on the Watch Window area and choose Insert New Expression from the pop-up list. Type the output buffer name, out_buffer, into the expression box and click OK, then expand the out_buffer to view each individual element of the buffer.

– From View → Memory, open a memory window and enter the starting address of the in_buffer in the data page to view the data in the input and output buffers. Since global variables are defined globally, we can use the variable name as the address for memory viewing.

– From View → Graph → Time/Frequency, open the Graphic Property dialogue. Set the display parameters as shown in Figure 1.14. The CCS allows the user to plot data directly from memory by specifying the memory location and its length.
– Set a breakpoint on the line of the following C statement:

in_buffer[i] = sineTable[i];
Start animation execution, and view CPU registers, in_buffer and out_buffer data
in both the watch-window and the memory window. Figure 1.15 shows one instant
snapshot of the animation. The yellow arrow represents the current program counter
location, and the red dot shows where the breakpoint is set. The data and register values
shown in red are the ones that have just been updated.
The probe point is a useful tool for algorithm development, such as simulating real-time
input and output operations. When a probe point is reached, the CCS can either read a
selected amount of data samples from a file on the host PC to DSP memory on the
target, or write processed data samples to the host PC. In the following experiment, we
will learn how to set up a probe point to transfer data from the example program to a
file on the host computer.
– Set the probe point at the end of the while{} loop, on the line of the closing bracket,
as follows:
while(1)
{
... ...
} /* <- set probe point on this line */
where the data in the output buffer is ready to be transferred out. Put the cursor on
that line and click Toggle Probe Point. A blue dot on the left indicates that the probe point
is set (refer to Figure 1.15).
– From File→File I/O, open the file I/O dialogue box and select the File Output tab.
From the Add File tab, enter exp1_out.dat as the file name, then select Open. Use
the output variable name, out_buffer, as the address and 40 (BUF_SIZE) as the
length of the data block, so that 40 data samples are transferred from the buffer to
the host computer every time the probe point is reached. Now select the Add Probe Point
tab to connect the probe point with the output file exp1_out.dat as shown in
Figure 1.16.
– Restart the program. After execution, we can view the data file exp1_out.dat
using the built-in editor by issuing the File→Open command. If we want to view
or edit the data file using other editors/viewers, we need to exit the CCS or
disconnect the file from the File I/O.
An example data file is shown in Table 1.4. The first line contains the header
information in TI Hexadecimal format, which uses the syntax illustrated in Figure 1.17.
For the example given in Table 1.4, the data is stored in hexadecimal format with the
address of out_buffer at 0xa8 on the data page and each block containing 40 (0x28) data
values. If we want to use a probe point to connect an input data file to the program, we will
need to include a header in the same hexadecimal format in the input data file.
The profiler can be used to measure the execution statistics of specific segments of
the code. This feature gives users immediate feedback on the program's performance.
Figure 1.16 Connect probe point to a file: (a) set up probe point address and length, and
(b) connect probe point with a file
Table 1.4 An example data file
1651 1 a8 1 28
0x01E0
0x03C0
0x05A0
0x0740
0x08C0
0x0A00
0x0B20
0x0BE0
...
It is a very useful tool for analyzing and optimizing DSP code for large complex
projects. In the following experiment, we use the profiling features of the CCS to obtain
statistics about code execution time.
– Open the project exp1 and load the file exp1.out. Open the source file exp1.c
and identify the line numbers in the source code where we would like to set profile
points. For demonstration purposes, we will profile the entire code within the
while{} loop in the experiment. The profile points are set at lines 32 and 46 as
shown below:
while (1)
{ /* <- set profile point here */
... ...
} /* <- set profile point here */
– From the Profiler menu, select Start New Session to open the profile window. Click the
Create Profile Area hot button, and in the Manual Profile Area Creation dialogue
box (see Figure 1.18), enter the starting and ending line numbers. At the same
time, make sure that Source File Lines and the Generic type are selected. Finally,
click on the Ranges tab to switch to the window that displays the range of the code
segments we just selected.
– Run the program and record the cycle counts shown in the profile status
window.
The CCS uses the General Extension Language (GEL) to extend its functions. GEL is a
useful tool for automated testing and workspace customization. The Code
Composer Studio User's Guide [7] provides a detailed description of GEL functions. In
this experiment, we will use a simple example to introduce it.
Create a file called Gain.gel and type the simple GEL code listed in Table
1.5. From the CCS, load this GEL file from File→Load GEL and bring the Gain
control slider shown in Figure 1.19 out from GEL→Gain Control. While
animating the program using the CCS, we can change the gain by moving the slider up
and down.
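As a rough sketch, a GEL file of this kind could look like the following (the slider
parameter name gainParameter and the global C variable gain are assumptions used
here for illustration, not the exact contents of Table 1.5):

menuitem "Gain Control"
slider Gain(0, 10, 1, 1, gainParameter)   /* min, max, increment, page increment */
{
    gain = gainParameter;                 /* update the global variable used by the program */
}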
References
[1] A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing, Englewood Cliffs, NJ:
Prentice-Hall, 1989.
[2] S. J. Orfanidis, Introduction to Signal Processing, Englewood Cliffs, NJ: Prentice-Hall, 1996.
[3] J. G. Proakis and D. G. Manolakis, Digital Signal Processing – Principles, Algorithms, and Applica-
tions, 3rd Ed., Englewood Cliffs, NJ: Prentice-Hall, 1996.
[4] A. Bateman and W. Yates, Digital Signal Processing Design, New York: Computer Science Press,
1989.
[5] S. M. Kuo and D. R. Morgan, Active Noise Control Systems – Algorithms and DSP Implementa-
tions, New York: Wiley, 1996.
[6] J. H. McClellan, R. W. Schafer, and M. A. Yoder, DSP First: A Multimedia Approach, 2nd Ed.,
Englewood Cliffs, NJ: Prentice-Hall, 1998.
[7] Texas Instruments, Inc., Code Composer Studio User's Guide, Literature no. SPRU32, 1999.
[8] Texas Instruments, Inc., Code Composer Studio Quick Start Guide, Literature no. SPRU368A, 1999.
Exercises
Part A
Part B
4. From the Option menu, set the CCS to automatically load the program after the
project has been built.
5. To reduce the number of mouse clicks, many pull-down menu items have been mapped
to hot buttons on the standard, advanced edit, project management, and debug
toolbars. Some functions, however, are still not associated with any hot
buttons. Use the Option menu to create shortcut keys for the following menu items:
(a) map Go Main in the debug menu to Alt+M (Alt key and M key),
(b) map Reset in the debug menu to Alt+R,
(c) map Restart in the debug menu to Alt+S, and
(d) map File reload in the file menu to Ctrl+R.
6. After having loaded a program into the simulator and enabled Source/ASM mixed
display mode from View→Mixed Source/ASM, what is shown in the CCS source display
window besides the C source code?
7. How do we change the format of the data displayed in the watch window from integer
format to hex, long, and floating-point formats?
8. What does File→Workspace do? Try the save and reload workspace commands.
9. Besides using file I/O with the probe point, data values in a block of memory
space can also be stored to a file. Try the File→Data→Store and File→Data→Load
commands.
10. Using the Edit→Memory command, we can manipulate (edit, copy, and fill) system memory:
(a) open a memory window to view out_buffer,
(b) fill out_buffer with the data value 0x5555, and
(c) copy the constant sineTable[] to out_buffer.
11. Use the CCS context-sensitive on-line help menu to find the TMS320C55x CPU diagram,
and name all the buses and processing units.
2
Introduction to TMS320C55x
Digital Signal Processor
Digital signal processors with architecture and instructions specifically designed for
DSP applications have been launched by Texas Instruments, Motorola, Lucent Tech-
nologies, Analog Devices, and many other companies. DSP processors are widely used
in areas such as communications, speech processing, image processing, biomedical
devices and equipment, power electronics, automotive, industrial electronics, digital
instruments, consumer electronics, multimedia systems, and home appliances.
To efficiently design and implement DSP systems, we must have a solid knowledge of
DSP algorithms as well as a basic concept of processor architecture. In this chapter, we
will introduce the architecture and assembly programming of the Texas Instruments
TMS320C55x fixed-point processor.
2.1 Introduction
The C55x processor is designed for low power consumption, optimum performance,
and high code density. Its dual multiply–accumulate (MAC) architecture provides twice
the cycle efficiency for computing vector products, the fundamental operation of digital
signal processing, and its scalable instruction length significantly improves the code
density. In addition, the C55x is source code compatible with the C54x. This greatly
reduces the migration cost from the popular C54x based systems to the C55x systems.
Some essential features of the C55x device are listed below:
. 64-byte instruction buffer queue that works as a program cache and efficiently
implements block repeat operations.
. Two 17-bit by 17-bit MAC units can execute dual multiply-and-accumulate oper-
ations in a single cycle.
. A 40-bit arithmetic and logic unit (ALU) performs high precision arithmetic and
logic operations with an additional 16-bit ALU performing simple arithmetic
operations parallel to the main ALU.
. Eight extended auxiliary registers for data addressing plus four temporary data
registers to ease data processing requirements.
The C55x CPU consists of four processing units: an instruction buffer unit (IU), a
program flow unit (PU), an address-data flow unit (AU), and a data computation unit
(DU). These units are connected to 12 different address and data buses as shown in
Figure 2.1.
Instruction buffer unit (IU): This unit fetches instructions from the memory into the
CPU. The C55x is designed for optimum execution time and code density. The instruc-
tion set of the C55x varies in length. Simple instructions are encoded using eight bits
Figure 2.2 Simplified block diagram of the C55x instruction buffer unit
(one byte), while more complicated instructions may contain as many as 48 bits (six
bytes). For each clock cycle, the IU can fetch four bytes of program code via its 32-bit
program-read data bus. At the same time, the IU can decode up to six bytes of program.
After four program bytes are fetched, the IU places them into the 64-byte instruction
buffer. At the same time, the decoding logic decodes an instruction of one to six bytes
previously placed in the instruction decoder as shown in Figure 2.2. The decoded
instruction is passed to the PU, the AU, or the DU.
The IU improves the efficiency of the program execution by maintaining a constant
stream of instruction flow between the four units within the CPU. If the IU is able to
hold a segment of the code within a loop, the program execution can be repeated many
times without fetching additional code. Such a capability not only improves the loop
execution time, but also reduces power consumption by reducing program accesses
to the memory. Another advantage is that the instruction buffer can hold multiple
instructions that are used in conjunction with conditional program flow control. This
can minimize the overhead caused by program flow discontinuities such as conditional
calls and branches.
Program flow unit (PU): This unit controls DSP program execution flow. As illus-
trated in Figure 2.3, the PU consists of a program counter (PC), four status registers, a
program address generator, and a pipeline protection unit. The PC tracks the C55x
program execution every clock cycle. The program address generator produces a 24-bit
address that covers 16 Mbytes of program space. Since most instructions will be exe-
cuted sequentially, the C55x utilizes pipeline structure to improve its execution effi-
ciency. However, instructions such as branches, call, return, conditional execution, and
interrupt will cause a non-sequential program address switch. The PU uses a dedicated
pipeline protection unit to prevent program flow from any pipeline vulnerabilities
caused by a non-sequential execution.
Address-data flow unit (AU): The address-data flow unit serves as the data access
manager for the data read and data write buses. The block diagram illustrated in Figure
2.4 shows that the AU generates the data-space addresses for data read and data write.
It also shows that the AU consists of eight 23-bit extended auxiliary registers (XAR0–
XAR7), four 16-bit temporary registers (T0–T3), a 23-bit extended coefficient data
pointer (XCDP), and a 23-bit extended stack pointer (XSP). It has an additional 16-
bit ALU that can be used for simple arithmetic operations. The temporary registers may
be utilized to expand compiler efficiency by minimizing the need for memory access. The
AU allows two address registers and a coefficient pointer to be used together for
processing dual-data and one coefficient in a single clock cycle. The AU also supports
up to five circular buffers, which will be discussed later.
Data computation unit (DU): The DU handles data processing for most C55x
applications. As illustrated in Figure 2.5, the DU consists of a pair of MAC units, a
40-bit ALU, four 40-bit accumulators (AC0, AC1, AC2, and AC3), a barrel shifter,
rounding and saturation control logic. There are three data-read data buses that
allow two data paths and a coefficient path to be connected to the dual-MAC units
simultaneously. In a single cycle, each MAC unit can perform a 17-bit multiplication
Figure 2.3 Simplified block diagram of the C55x program flow unit
Figure 2.4 Simplified block diagram of the C55x address-data flow unit
Figure 2.5 Simplified block diagram of the C55x data computation unit
and a 40-bit addition or subtraction operation with a saturation option. The ALU can
perform 40-bit arithmetic, logic, rounding, and saturation operations using the four
accumulators. It can also be used to achieve two 16-bit arithmetic operations in both the
upper and lower portions of an accumulator at the same time. The ALU can accept
immediate values from the IU as data and communicate with other AU and PU
registers. The barrel shifter may be used to perform a data shift in the range of 2^-32
(shift right by 32 bits) to 2^31 (shift left by 31 bits).
As illustrated in Figure 2.1, the TMS320C55x has one 32-bit program data bus, five 16-
bit data buses, and six 24-bit address buses. The program buses include a 32-bit
program-read data bus (PB) and a 24-bit program-read address bus (PAB). The PAB
carries the program memory address to read the code from the program space. The unit
of program address is in bytes. Thus the addressable program space is in the range of
0x000000 to 0xFFFFFF (16 Mbytes).
The C55x uses a unified program and data memory configuration. All 16 Mbytes
of memory are available as program or data space. The program space is used for
instructions and the data space is used for general-purpose storage and CPU memory
mapped registers. The I/O space is separated from the program/data space, and is used
for duplex communication with peripherals. When the CPU fetches instructions from
the program space, the C55x address generator uses the 24-bit program-read address
bus. The program code is stored in byte units. When the CPU accesses data space, the
C55x address generator masks the least-significant-bit (LSB) of the data address since
data stored in memory is in word units. The 16 Mbytes memory map is shown in Figure
2.6. Data space is divided into 128 data pages (0–127). Each page has 64 K words. The
memory block from address 0 to 0x5F in page 0 is reserved for memory mapped
registers (MMRs).
The manufacturers of DSP processors typically provide a set of software tools for the
user to develop efficient DSP software. The basic software tools include an assembler,
linker, C compiler, and simulator. As discussed in Section 1.4, DSP programs can be
written in either C or assembly language. Developing C programs for DSP applications
requires less time and effort than developing assembly programs. However,
the run-time efficiency and the program code density of the C programs are generally
worse than those of the assembly programs. In practice, high-level language tools such
Figure 2.6 TMS320C55x program space and data space memory map
as MATLAB and C are used in early development stages to verify and analyze the
functionality of the algorithms. Due to real-time constraints and/or memory limitations,
part (or all) of the C functions have to be replaced with assembly programs.
In order to execute the designed DSP algorithms on the target system, the C or
assembly programs must first be translated into binary machine code and then linked
together to form an executable code for the target DSP hardware. This code conversion
process is carried out using the software development tools illustrated in Figure 2.7.
The TMS320C55x software development tools include a C compiler, an assembler,
a linker, an archiver, a hex conversion utility, a cross-reference utility, and an absolute
lister. The debugging tools can either be a simulator or an emulator. The C55x C
compiler generates assembly code from the C source files. The assembler translates
assembly source files, either hand-coded by the engineers or generated by the C com-
piler, into machine language object files. The assembly tools use the common object file
format (COFF) to facilitate modular programming. Using COFF allows the program-
mer to define the system's memory map at link time. This maximizes performance by
enabling the programmer to link the code and data objects into specific memory
locations. The archiver allows users to collect a group of files into a single archived
file. The linker combines object files and libraries into a single executable COFF object
module. The hex conversion utility converts a COFF object file into a format that can
be downloaded to an EPROM programmer.
In this section, we will briefly describe the C compiler, assembler, and linker. A full
description of these tools can be found in the user's guides [2,3].
Figure 2.7 TMS320C55x software development flow
2.3.1 C Compiler
As mentioned in Chapter 1, C language is the most popular high-level tool for evaluating
DSP algorithms and developing real-time software for practical applications. The
TMS320C55x C compiler translates the C source code into the TMS320C55x assembly
source code first. The assembly code is then given to the assembler for generating machine
code. The C compiler can generate either a mnemonic assembly code or algebraic
assembly code. Table 2.1 gives an example of the mnemonic and algebraic assembly
code generated by the C55x compiler. In this book, we will introduce only the widely used
mnemonic assembly language. The C compiler package includes a shell program, code
optimizer, and C-to-ASM interlister. The shell program supports automatic compiling,
assembling, and linking of modules. The optimizer improves run-time and code density efficiency
of the C source files. The C-to-ASM interlister inserts the original comments in C source
code into the compiler's output assembly code, so the user can view the corresponding
assembly instructions generated by the compiler for each C statement.
The C55x compiler supports American National Standards Institute (ANSI) C and its
run-time-support library. The run-time support library, rts55.lib, includes functions
to support string operation, memory allocation, data conversion, trigonometry, and
exponential manipulations. The CCS introduced in Section 1.5 has made using the DSP
development tools (compiler, assembler, and linker) easier by providing default setting
parameters and prompting for the options. It is still beneficial for the user to understand
how to use these tools individually, and to set parameters and options from the command
line correctly.

Table 2.1 An example of C code and the C55x compiler generated assembly code
We can invoke the C compiler from a PC or workstation shell by entering the
following command:
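For example, the compiler shell provided with the TI C55x code generation tools is named
cl55, and its general form can be sketched as follows (the option names here are placeholders
for the categories described below):

cl55 [-compiler_options] filenames [-z [link_options] [object_files]]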
The filenames can be one or more C program source files, assembly source files,
object files, or a combination of these files. If we do not supply an extension, the
compiler assumes the default extension as .c, .asm, or .obj. The -z option enables
the linker, while the -c option disables the linker. The link_options set up the way
the linker processes the object files at link time. The object_files are additional
object files for the linker to add to the target file at link time. The compiler options
have the following categories:
1. The options that control the compiler shell, such as the -g option that generates
symbolic debug information for debugging code.
2. The options that control the parser, such as the -ps option that sets the strict ANSI
C mode for C.
3. The options that are C55x specific, such as the -ml option that sets the large
memory model.
4. The options that control the optimization, such as the -o0 option that sets the
register optimization.
5. The options that change the file naming conventions and specify the directories,
such as the -eo option that sets the default object file extension.
6. The options that control the assembler, such as the -al option that creates assem-
bly language listing files.
7. The options that control the linker, such as the -ar option that generates a re-
locatable output module.
There are a number of options in each of the above categories. Refer to the
TMS320C55x Optimizing C Compiler User's Guide [3] for detailed information on
how to use these options.
The options are preceded by a hyphen and are not case sensitive. All the single-letter
options can be combined together; i.e., the options -g, -k, and -s are the same as
setting the compiler options as -gks. The two-letter options can also be combined if
they have the same first letter. For example, setting the three options -pl, -pk, and -pi
is the same as setting the options as -plki.
C language lacks specific DSP features, especially those of fixed-point data oper-
ations that are necessary for many DSP algorithms. To improve compiler efficiency for
real-time DSP applications, the C55x compiler provides a method to add in-line assem-
bly language routines directly into the C program. This allows the programmer to write
highly efficient assembly code for the time-critical sections of a program. Intrinsics are
another improvement, allowing users to substitute DSP arithmetic operations with assembly
intrinsic operators. We will introduce more compiler features in Section 2.7 when we
present the mixing of C and assembly programs. In this chapter, we emphasize assembly
language programming.
2.3.2 Assembler
The assembler translates processor-specific assembly language source files (in ASCII
text) into binary COFF object files for specific DSP processors. Source files can contain
assembler directives, macro directives, and instructions. Assembler directives are used to
control various aspects of the assembly process such as the source file listing format,
data alignment, section content, etc. Binary object files contain separate blocks (called
sections) of code or data that can be loaded into memory space.
Assembler directives are used to control the assembly process and to enter data
into the program. Assembly directives can be used to initialize memory, define global
variables, set conditional assembly blocks, and reserve memory space for code and data.
Some of the most important C55x assembler directives are described below:
.BSS directive: The .bss directive reserves space in the uninitialized .bss section for
data variables. It is usually used to allocate data into RAM for run-time variables such
as I/O buffers. For example,
.bss xn_buffer, size_in_words
where the xn_buffer points to the first location of the reserved memory space, and the
size_in_words specifies the number of words to be reserved in the .bss section. If
we do not specify uninitialized data sections, the assembler will put all the uninitialized
data into the .bss section.
.DATA directive: The .data directive tells the assembler to begin assembling the
source code into the .data section, which usually contains data tables or pre-initialized
variables such as sinewave tables. The data sections are word addressable.
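As a brief sketch (the table name and values below are purely illustrative, and the .word
directive used here is described later in this section), a pre-initialized table could be
defined as follows:

         .data
sine_tbl .word 0, 0x1000, 0x2000, 0x3000   ; pre-initialized data table in the .data section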
.SECT directive: The .sect directive defines a section and tells the assembler to
begin assembling source code or data into that section. It is often used to separate long
programs into logical partitions. It can separate the subroutines from the main pro-
gram, or separate constants that belong to different tasks. For example,
SOFTWARE DEVELOPMENT TOOLS 45
.sect "section_name"
assigns the code to the user-defined memory section called section_name. Code
from different source files with the same section name is placed together.
.USECT directive: The .usect directive reserves space in an uninitialized section. It is similar
to the .bss directive. It allows the placement of data into user-defined sections instead
of the .bss section. It is often used to separate large data sections into logical partitions,
such as separating the transmitter data variables from the receiver data variables. The
syntax of .usect directive is
symbol .usect "section_name", size_in_words
where symbol is the variable, or the starting address of a data array, which will be
placed into the section named section_name. In the latter case, the size_in_words
defines the number of words in the array.
.TEXT directive: The .text directive tells the assembler to begin assembling source
code into the .text section, which normally contains executable code. This is the
default section for program code. If we do not specify a program code section, the
assembler will put all the programs into the .text section.
The directives .bss, .sect, .usect, and .text are used to define the memory
sections. The following directives are used to initialize constants.
.INT (.WORD) directive: The .int (or .word) directive places one or more 16-bit
integer values into consecutive words in the current section. This allows users to
initialize memory with constants. For example,
data1 .word 0x1234
data2 .int 1010111b
where the symbols data1 and data2 must appear in the first column.
.SET (.EQU) directive: The .set (or .equ) directive equates a constant value to a
symbol. The symbolic name used in the program will be replaced with the
constant by the assembler during assembly time, thus allowing programmers to write
more readable programs. The .set and .equ directives can be used interchangeably,
and do not produce object code.
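For example (re-using the buffer size constant from Chapter 1 purely as an illustration):

BUF_SIZE .set 40     ; every occurrence of BUF_SIZE is replaced by 40 at assembly time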
The assembler is used to convert assembly language source code to COFF format
object files for the C55x processor. The following command invokes the C55x mne-
monic assembler:
masm55 [input_file [object_file [list_file]]] [-options]
The input_file is the name of the assembly source file, with the default extension
.asm. The object_file is the name of the object file that the assembler creates. The
assembler uses the source file's name with the default extension .obj for the object
file unless specified otherwise. The list_file is the name of the list file that the
assembler creates. The assembler will use the source file's name and .lst as the default
extension for the list file. The assembler will not generate list files unless the option -l
is set.
The options identify the assembler options. Some commonly used assembler
options are:
. The -l option tells the assembler to create a listing file showing where the program
and the variables are allocated.
. The -s option puts all symbols defined in the source code into the symbol table so
the debugger may access them.
. The -c option makes the case insignificant in symbolic names. For example, -c
makes the symbols ABC and abc equivalent.
. The -i option specifies a directory where the assembler can find included files such
as those following the .copy and .include directives.
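Putting these together, a hypothetical invocation that assembles a source file named
exp2.asm, generates a listing file, and keeps the symbols for the debugger could be:

masm55 exp2.asm -l -s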
2.3.3 Linker
The linker is used to combine multiple object files into a single executable program for
the target DSP hardware. It resolves external references and performs code relocation to
create the executable code. The C55x linker handles various requirements of different
object files and libraries as well as target system memory configurations. For a specific
hardware configuration, the system designers need to provide the memory mapping
specifications for the linker. This task can be accomplished by using a linker command
file. The Texas Instruments visual linker is also a very useful tool that displays memory
usage directly.
The linker commands support expression assignment and evaluation, and provide
the MEMORY and SECTION directives.
memory configuration for the given target system. We can also combine object file
sections, allocate sections into specific memory areas, and define or redefine global
symbols at link time.
We can use the following command to invoke the C55x linker from the host system:
lnk55 [-options] filename_1, . . . , filename_n
The filename list (filename_1, . . . , filename_n) consists of object files created
by the assembler, linker command files, or archive libraries. The default extension for
object files is .obj; any other extension must be explicitly specified. The options can
be placed anywhere on the command line to control different linking operations. For
example, the -o filename option can be used to specify the output executable file
name. If we do not provide the output file name, the default executable file name is
a.out. Some of the most common linker options are:
. The -ar option produces a re-locatable executable object file. The linker generates
an absolute executable code by default.
. The -e entry_point option defines the entry point for the executable module. This
will be the address of the first operation code in the program after power up or reset.
We can put the filenames and options inside the linker command file, and then invoke
the linker from the command line by specifying the command file name as follows:
lnk55 command_file.cmd
The linker command file is especially useful when we frequently invoke the linker with
the same information. Another important feature of the linker command file is that it
allows users to apply the MEMORY and SECTION directives to customize the pro-
gram for different hardware configurations. A linker command file is an ASCII text file
and may contain one or more of the following items:
. Linker options to control the linker as given from the command line of the shell
program.
. The MEMORY and SECTION directives define the target memory configuration
and information on how to map the code sections into different memory spaces.
The linker command file we used for the experiments in Chapter 1 is listed in Table
2.2. The first portion of the command file uses the MEMORY directive to identify the
range of memory blocks that physically exist in the target hardware. Each memory
block has a name, starting address, and block length. The address and length are given
in bytes. For example, the data memory is given a name called RAM, and it starts at the
byte address of hexadecimal 0x100, with a size of hexadecimal 0x1FEFF bytes.
The SECTIONS directive provides different code section names for the linker to
allocate the program and data into each memory block. For example, the program in
the .text section can be loaded into the memory block ROM. The attributes inside the
parentheses are optional and set memory access restrictions. These attributes are R (readable),
W (writable), X (executable), and I (the memory space can be initialized).
There are several additional options that can be used to initialize the memory using
linker command files [2].
Table 2.2 Example of a linker command file used for the C55x simulator
MEMORY
{
RAM (RWIX) : origin = 0100h, length = 01FEFFh /* Data memory */
RAM2 (RWIX) : origin = 040100h, length = 040000h /* Program memory */
ROM (RIX) : origin = 020100h, length = 020000h /* Program memory */
VECS (RIX) : origin = 0FFFF00h, length = 00100h /* Reset vector */
}
SECTIONS
{
vectors > VECS /* Interrupt vector table */
.text > ROM /* Code */
.switch > RAM /* Switch table info */
.const > RAM /* Constant data */
.cinit > RAM2 /* Initialization tables */
.data > RAM /* Initialized data */
.bss > RAM /* Global & static variables */
.stack > RAM /* Primary system stack */
}
As illustrated in Figure 2.8, the Code Composer Studio (CCS) provides an interface to the
C55x simulator (SIM), DSP starter kit (DSK), evaluation module (EVM), or in-circuit
emulator (XDS). The CCS supports both C and assembly programs.
The C55x simulator is available for PC and workstations, making it easy and
inexpensive to develop DSP software and to evaluate the performance of the processor
before designing any hardware. It accepts the COFF files and simulates the instructions
of the program as if the code were running on the target DSP hardware. The C55x
simulator enables the users to single-step through the program, and observe the con-
tents of the CPU registers, data and I/O memory locations, and the current DSP states
of the status registers. The C55x simulator also provides profiling capabilities that tell
users the amount of time spent in one portion of the program relative to another. Since
all the functions of the TMS320C55x are performed on the host computer, the simula-
tion may be slow, especially for complicated DSP applications. Real world signals can
only be digitized and then later fed into a simulator as test data. In addition, the timing
of the algorithm under all possible input conditions cannot be tested using a simulator.
As introduced in Section 1.5, the various display windows and the commands of the
CCS provide most debugging needs. Through the CCS, we can load the executable object
code, display a disassembled version of the code along with the original source code, and
Figure 2.8 CCS software development and debugging flow using the simulator, DSK, EVM, or XDS
view the contents of the registers and the memory locations. The data in the registers and
the memory locations can be modified manually. The data can be displayed in hexadeci-
mal, decimal integer, or floating-point formats. The execution of the program can be
controlled by single-stepping through the code, running to the cursor, or applying breakpoints.
DSK and EVM are development boards with the C55x processor. They can be used for
real-time analysis of DSP algorithms, code logic verification, and simple application
tests. The XDS allows breakpoints to be set at a particular point in a program to examine
the registers and the memory locations in order to evaluate the real-time results using a
DSP board. Emulators allow the DSP software to run at full-speed in a real-time
environment.
The TMS320C55x assembly program statements may be separated into four ordered
fields. The basic syntax expression for a C55x assembly statement is
[label] [:] mnemonic [operand list] [;comment]
The elements inside the brackets are optional. Statements must begin with a label, blank,
asterisk, or semicolon. Each field must be separated by at least one blank. For ease of
reading and maintenance, it is strongly recommended that we use meaningful mnemonics
for labels, variables, and subroutine names, etc. An example of a C55x assembly state-
ment is shown in Figure 2.9. In this example, the auxiliary register, AR1, is initialized to a
constant value of 2.
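Following that description, such a statement has the form shown below (the exact comment
text is written out here for illustration):

start    mov #2, AR1    ; initialize AR1 to 2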
Label field: A label can contain up to 32 alphanumeric characters (A–Z, a–z, 0–9, _ ,
and $). It associates a symbolic address with a unique program location. The line that is
labeled in the assembly program can then be referenced by the defined symbolic name.
This is useful for modular programming and branch instructions. Labels are optional,
but if used, they must begin in column 1. Labels are case sensitive and must start with an
alphabetic letter. In the example depicted in Figure 2.9, the symbol start is a label and
is placed in the first column.
Mnemonic field: The mnemonic field can contain a mnemonic instruction, an assem-
bler directive, macro directive, or macro call. The C55x instruction set supports both
To explain the different addressing modes of the C55x, Table 2.3 lists the move
instruction (mov) with different syntax.
As illustrated in Table 2.3, each addressing mode uses one or more operands. Some of
the operand types are explained as follows:
. Smem means a data word (16-bit) from data memory, I/O memory, or MMRs.
. Lmem means a long data word (32-bit) from either data memory space or
MMRs.
. Xmem and Ymem are used by an instruction to perform two 16-bit data memory
accesses simultaneously.
Instruction Description
1. mov #k, dst Load the 16-bit signed constant k to the destination
register dst
2. mov src, dst Load the content of source register src to the
destination register dst
3. mov Smem, dst Load the content of memory location Smem to the
destination register dst
4. mov Xmem, Ymem, ACx The content of Xmem is loaded into the lower part of
ACx while the content of Ymem is sign extended and
loaded into upper part of ACx
5. mov dbl(Lmem), pair(TAx) Load the upper 16-bit data and lower 16-bit data from
Lmem to TAx and TA(x+1), respectively
6. amov #k23, xdst Load the effective address of k23 (23-bit constant) into
extended destination register (xdst)
There are four types of direct addressing modes: data-page pointer (DP) direct, stack
pointer (SP) direct, register-bit direct, and peripheral data-page pointer (PDP) direct.
The DP direct mode uses the main data page specified by the 23-bit extended data-
page pointer (XDP). Figure 2.10 shows the generation of a DP direct address. The upper
seven bits of the XDP (DPH) determine the main data page (0–127). The lower 16 bits
of the XDP (DP) define the starting address in the data page selected by the DPH. The
instruction contains the seven-bit offset in the data page (@x) that directly points to the
variable x (Smem). The data-page registers DPH, DP, and XDP can be loaded by the
mov instruction as
mov #k7, DPH ; Load DPH with a 7-bit constant k7
mov #k16, DP ; Load DP with a 16-bit constant k16
These instructions initialize the data pointer DPH and DP, respectively, using the
assembly code syntax, mov #k,dst, given in Table 2.3. The first instruction loads
the high portion of the extended data-page pointer, DPH, with a 7-bit constant k7 to set
up the main data page. The second instruction initializes the starting address of the
data-page pointer. The following is an example that initializes the DPH and DP
pointers:
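(The page number and starting address below are arbitrary, illustrative choices.)

mov #3, DPH        ; select main data page 3
mov #0x0100, DP    ; data-page starting address 0x0100 within that page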
Figure 2.10 Generation of the DP direct address: the XDP consists of DPH (7 bits) and DP
(16 bits), and the 7-bit offset @x in the instruction is added to DP
Indirect addressing modes using index and displacement are the most powerful and
commonly used addressing modes. There are four types of indirect addressing
modes. The AR indirect mode uses one of the eight auxiliary registers as a pointer
to data memory, I/O space, and MMRs. The dual-AR indirect mode uses two
auxiliary registers for dual data memory access. The coefficient data pointer
(CDP) indirect mode uses the CDP to point to data memory space. The coefficient-
dual-AR indirect mode uses the CDP and the dual-AR indirect modes for generating
three addresses. The coefficient-dual-AR indirect mode will be discussed later along
with pipeline parallelism.
The indirect addressing is the most frequently used addressing mode because it
provides powerful pointer update/modification schemes. Several pointer modification
schemes are listed in Table 2.4.
The AR indirect addressing mode uses an auxiliary register (AR0–AR7) to point to
data memory space. The upper seven-bit of the extended auxiliary register (XAR) points
to the main data page, while the lower 16-bit points to a data location on that page. Since
the I/O space address is limited to a 16-bit range, the upper portion of the XAR must be
set to zero when accessing I/O space. The next example uses indirect addressing mode,
where AR0 is used as the address pointer, and the instruction loads the data
content stored in data memory pointed by AR0 to the destination register AC0.
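Using the mov Smem, dst form from Table 2.3, such an instruction can be written as:

mov *AR0, AC0    ; load the data word pointed at by AR0 into AC0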
The dual-AR indirect addressing mode allows two data-memory accesses through
the auxiliary registers AR0–AR7. It can access two 16-bit data in memory using the
syntax, mov Xmem, Ymem, ACx given in Table 2.3. The next example performs dual
16-bit data load with AR2 and AR3 as the data pointers to Xmem and Ymem, respect-
ively. The data pointed at by AR3 is sign-extended to 24-bit, loaded into the upper
portion of the destination register AC0(39:16), and the data pointed at by AR2
is loaded into the lower portion of AC0(15:0). The data pointers AR2 and AR3 are
also updated.
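A sketch of such an instruction is given below (the post-increment modifiers reflect the
pointer updates described above):

mov *AR2+, *AR3+, AC0    ; *AR2 -> AC0(15:0), *AR3 sign extended -> AC0(39:16)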
The extended coefficient data pointer (XCDP) is the concatenation of the CDPH (the
upper 7-bit) and the CDP (the lower 16-bit). The CDP indirect addressing mode uses
the upper 7-bit to define the main data page and the lower 16-bit to point to the data
memory location within the specified data page. For the I/O space, only the 16-bit
address is used. An example of using the CDP indirect addressing mode is given as
follows:
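(The pre-increment-by-constant modifier *+CDP(#2) shown below is one plausible way to
write this instruction.)

mov *+CDP(#2), AC3    ; CDP = CDP + 2, then load the coefficient at *CDP into AC3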
In this example, CDP is the pointer that contains the address of the coefficient in data
memory with an offset. This instruction increments the CDP pointer by 2 first, then
loads a coefficient pointed by the updated coefficient pointer to the destination register
AC3.
The memory can also be addressed using absolute addressing modes in either k16 or k23
absolute addressing modes. The k23 absolute mode specifies an address as a 23-bit
unsigned constant. The following example loads the data content at address 0x1234 on
main data page 1 into the temporary register, T2, where the symbol, *( ), represents the
absolute address mode.
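Such an instruction can be sketched as follows, where main data page 1 and offset 0x1234
combine into the 23-bit address 0x011234:

mov *(#0x011234), T2    ; load the data at absolute address 0x011234 into T2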
The k16 absolute addressing mode uses the operand *abs(#k16), where k16 is a
16-bit unsigned constant. The DPH (7-bit) is forced to 0 and concatenated with the
unsigned constant k16 to form a 23-bit data-space memory address. The I/O absolute
addressing mode uses the operand port(#k16). The absolute address can also be the
variable name such as the variable, x, in the following example:
mov *(x), AC0
This instruction loads the accumulator AC0 with a content of variable x. When using
absolute addressing mode, we do not need to worry about what is loaded into the data-
page pointer. The drawback of the absolute address is that it uses more code space to
represent the 23-bit address.
The absolute, direct, and indirect addressing modes introduced above can be used to
address MMRs. The MMRs are located in the data memory from address 0x0 to
0x5F on the main data page 0 as shown in Figure 2.6. To access the MMRs using
the k16 absolute operand, the DPH must be set to zero. The following example uses the
absolute addressing mode to load the 16-bit content of the AR2 into the temporary
register T2:
For the MMR direct addressing mode, the DP addressing mode must be selected. The
example given next uses direct addressing mode to load the content of the lower portion of
the accumulator AC0(15:0), into the temporary register T0. When the mmap() qualifier
for the MMR direct addressing mode is used, it forces the data address generator to act as
if the access is made to the main data page 0. That is, XDP = 0.
Accessing the MMRs using indirect addressing mode is the same as addressing the data
memory space. The address pointer can be either an auxiliary register or a CDP. Since
the MMRs are all located on data page 0, the XAR and XCDP must be initialized to
page 0 by setting all upper 7-bit to zero. The following instructions load the content of
AC0 into T1 and T2 temporary registers:
amov #AC0H, XAR6
mov *AR6-, T2
mov *AR6+, T1
In this example, the first instruction loads the effective address of the upper portion of the
accumulator AC0 (AC0H, located at address 0x9 of page 0) to the extended auxiliary
register XAR6. That is, XAR6 = 0x000009. The second instruction uses AR6 as a pointer
to copy the content of AC0H into the T2 register, and then the pointer decrements by 1 to
point to the lower portion of AC0 (AC0L, located at address 0x8 of page 0). The third
instruction copies the content of AC0L into the register T1 and modifies AR6 to point to
AC0H again.
Both direct and indirect addressing modes can be used to address one bit or a pair of bits
of a specific register. The direct addressing mode uses a bit offset to access a particular
register's bit. The offset is the number of bits counting from the least significant bit (LSB),
i.e., bit 0. The bit test instruction will update the test condition bits, TC1 and TC2, of the
status register ST0. The instruction of register-bit direct addressing mode is shown in the
next example.
Using the indirect addressing modes to specify register bit(s) can be done as follows:
mov #2, AR4 ; AR4 contains the bit offset 2
bset *AR4, AC3 ; Set the AC3 bit pointed by AR4 to 1
btstp *AR4, AC1 ; Test AC1 bit-pair pointed by AR4
The register bit-addressing mode supports only the bit test, bit set, bit clear, and bit
complement instructions in conjunction with the accumulators (AC0–AC3), auxiliary
registers (AR0–AR7), and temporary registers (T0–T3).
Circular addressing mode provides an efficient method for accessing data buffers
continuously without having to reset the data pointers. After accessing data, the data
buffer pointer is updated in a modulo fashion. That is, when the pointer reaches the
end of the buffer, it will wrap back to the beginning of the buffer for the next iteration.
Auxiliary registers (AR0–AR7) and the CDP can be used as circular pointers in
indirect addressing mode. The following steps are commonly used to set up circular
buffers:
1. Initialize the most significant 7-bit extended auxiliary register (ARnH or CDPH) to
select the main data page for a circular buffer. For example, mov #k7, AR2H.
2. Initialize the 16-bit circular pointer (ARn or CDP). The pointer can point to any
memory location within the buffer. For example, mov #k16, AR2 (the initialization
of the address pointer in the example of steps 1 and 2 can also be done using the
amov #k23, XAR2 instruction).
3. Initialize the 16-bit circular buffer starting address register (BSA01, BSA23, BSA45,
BSA67, or BSAC) associated with the auxiliary registers. For example, mov #k16,
BSA23, if AR2 (or AR3) is used as the circular addressing pointer register. The main
data page concatenated with the content of this register defines the 23-bit starting
address of the circular buffer.
4. Initialize the data buffer size register (BK03, BK47, or BKC). When using AR0–
AR3 (or AR4–AR7) as the circular pointer, BK03 (or BK47) should be initialized.
The instruction, mov #16, BK03, sets up a circular buffer of 16 elements for the
auxiliary registers, AR0–AR3.
5. Enable the circular buffer configuration by setting the appropriate bit in the status
register ST2. For example, the instruction bset AR2LC enables AR2 for circular
addressing.
Refer to the TMS320C55x DSP CPU Reference Guide [1] for details on circular
addressing mode. The following example demonstrates how to initialize a four integer
circular buffer, COEFF[4], and how the circular addressing mode accesses data in the
buffer:
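(The sketch below follows the five steps above; it assumes the buffer resides on main data
page 0 and that AR2 holds an index relative to the buffer start held in BSA23, and the
access pattern at the end is illustrative.)

         .bss COEFF, 4       ; reserve a 4-word buffer named COEFF
         mov  #0, AR2H       ; step 1: select main data page 0
         mov  #0, AR2        ; step 2: start at the first element of the buffer
         mov  #COEFF, BSA23  ; step 3: buffer starting address for AR2/AR3
         mov  #4, BK03       ; step 4: circular buffer size of four words
         bset AR2LC          ; step 5: enable circular addressing for AR2
         rpt  #7             ; access the buffer eight times;
         mov  *AR2+, T0      ; AR2 wraps back to the start after the fourth access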
Since the circular addressing uses the indirect addressing modes, the circular pointers
can be updated using the modifications listed in Table 2.4. The use of circular buffers for
FIR filtering will be introduced in detail in Chapter 5.
The pipeline technique has been widely used by many DSP manufacturers to improve
processor performance. The pipeline execution breaks a sequence of operations into
smaller segments and executes these smaller pieces in parallel. The TMS320C55x uses
the pipelining mechanism to efficiently execute its instructions to reduce the overall
execution time.
Separated by the instruction buffer unit, the pipeline operation is divided into two
independent pipelines: the program fetch pipeline and the program execution pipeline
(see Figure 2.12). The program fetch pipeline consists of the following three stages (it
uses three clock cycles):
PA (program address): The C55x instruction unit places the program address on the
program-read address bus (PAB).
PM (program memory address stable): The C55x requires one clock cycle for its
program memory address bus to be stabilized before that memory can be read.
PB (program fetch from program data bus): In this stage, four bytes of the program
code are fetched from the program memory via the 32-bit program data-read bus (PB).
The code is placed into the instruction buffer queue (IBQ). For every clock cycle, the IU
will fetch four bytes to the IBQ. The numbers on the top of the diagram represent the
CPU clock cycle.
At the same time, the seven-stage execution pipeline performs the fetch, decode,
address, access, read, and execution sequence independent of the program fetch pipe-
line. The C55x program execution pipeline stages are summarized as follows:
F (fetch): In the fetch stage, an instruction is fetched from the IBQ. The size of the
instruction can be one byte for simple operations, or up to six bytes for more complex
operations.
D (decode): During the decoding process, decode logic gets one to six bytes from the
IBQ and decodes these bytes into an instruction or an instruction pair under the parallel
operation. The decode logic will dispatch the instruction to the program flow unit (PU),
address flow unit (AU), or data computation unit (DU).
AD (address): In this stage, the AU calculates data memory addresses using its data-
address generation unit (DAGEN), modifies pointers if required, and computes the
program-space address for PC-relative branching instructions.
AC (access cycles 1 and 2): The first cycle is used for the C55x CPU to send
the address for read operations to the data-read address buses (BAB, CAB, and
DAB), or transfer an operand to the CPU via the C-bus (CB). The second access
cycle is inserted to allow the address lines to be stabilized before the memory is read.
R (read): In the read stage, the data and operands are transferred to the CPU via the
CB for the Ymem operand, the B-bus (BB) for the Cmem operand, and the D-bus (DB)
for the Smem or the Xmem operands. For the Lmem operand read, both the CB and the
DB will be used. The AU will generate the address for the operand write and send the
address to the data-write address buses (EAB and FAB).
X (execute): Most data processing work is done in this stage. The ALU inside the AU
and the ALU inside the DU perform data processing execution, store an operand via
the F-bus (FB), or store a long operand via the E-bus and F-bus (EB and FB).
The C55x pipeline diagram illustrated in Figure 2.12 explains how the C55x pipeline
works. It is clear that the execution pipeline is full after seven cycles and every execution
cycle that follows will complete an instruction. If the pipeline is always full, this technique
increases the processing speed seven times. However, the pipeline flow efficiency is based
on the sequential execution of instruction. When a disturbing execution such as a branch
instruction occurs, the sudden change of the program flow breaks the pipeline sequence.
Under such circumstances, the pipeline will be flushed and will need to be refilled. This is
called a pipeline breakdown. The use of the IBQ can minimize the impact of a pipeline
breakdown. Proper use of conditional execution instructions to replace branch instructions can
also reduce pipeline breakdowns.
The parallelism of the TMS320C55x uses the processor's multiple-bus architecture, dual
MAC units, and separated PU, AU, and DU. The C55x supports two parallel process-
ing types ± implied and explicit. The implied parallel instructions are the built-in
instructions. They use the symbol of parallel columns, `::', to separate the pair of
instructions that will be processed in parallel. The explicit parallel instructions are the
user-built instructions. They use the symbol of parallel bars, `||', to indicate the pair of
parallel instructions. These two types of parallel instructions can be used together to
form a combined parallel instruction. The following examples show the user-built, built-
in, and combined parallel instructions. Each example is carried out in just one clock
cycle.
User-built:
mpym *AR1, *AR2, AC0       ; User-built parallel instruction
|| and AR4, T1             ; Using DU and AU
Built-in:
mac *AR0+, *CDP+, AC0      ; Built-in parallel instruction
:: mac *AR1+, *CDP+, AC1   ; Using dual-MAC units
Some of the restrictions when using parallel instructions are summarized as follows:
. For either the user-built or the built-in parallelism, only two instructions can be
executed in parallel, and these two instructions must not exceed six bytes.
. When addressing memory space, only the indirect addressing mode is allowed.
. Parallelism is allowed between and within execution units, but there cannot be any
hardware resources conflicts between units, buses, or within the unit itself.
There are several restrictions that define the parallelism within each unit when applying
parallelism to assembly coding. The detailed descriptions are given in the TMS320C55x
DSP Mnemonic Instruction Set Reference Guide [4].
The PU, AU, and DU can all be involved in parallel operations. Understanding the
register files in each of these units will help to be aware of the potential conflicts when
using the parallel instructions. Table 2.5 lists some of the registers in PU, AU, and
DU.
The parallel instructions used in the following example are incorrect because the
second instruction uses the direct addressing mode:
mov *AR2, AC0
|| mov T1, @x
We can correct the problem by replacing the direct addressing mode, @x, with an
indirect addressing mode, *AR1, so both memory accesses are using indirect addressing
mode as follows:
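mov *AR2, AC0     ; indirect addressing for the first operand
|| mov T1, *AR1   ; @x replaced by the indirect operand *AR1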
Consider the following example where the first instruction loads the content of AC0
that resides inside the DU to the auxiliary register AR2 inside the AU. The second
instruction attempts to use the content of AC3 as the program address for a function
call. Because there is only one link between AU and DU, when both instructions try to
access the accumulators in the DU via the single link, it creates a conflict.
mov AC0, AR2
|| call AC3
To solve the problem, we can change the subroutine call from call by accumulator to
call by address as follows:
mov AC0, AR2
|| call my_func
This is because the instruction, call my_func, only needs the PU.
The coefficient-dual-AR indirect addressing mode is used to perform operations with
dual-AR indirect addressing mode. The coefficient indirect addressing mode supports
three simultaneous memory-accesses (Xmem, Ymem, and Cmem). The finite impulse
response (FIR) filter (to be introduced in Chapter 3) is an application that can
effectively use coefficient indirect addressing mode. The following code segment is an
example of using the coefficient indirect addressing mode:
mpy *AR1, *CDP, AC2     ; AR1 pointer to data X1
:: mpy *AR2, *CDP, AC3  ; AR2 pointer to data X2
|| rpt #6               ; Repeat the following 7 times
mac *AR1, *CDP, AC2     ; AC2 has accumulated result
:: mac *AR2, *CDP, AC3  ; AC3 has another result
In this example, the memory buffers (Xmem and Ymem) are pointed at by AR1 and AR2,
respectively, while the coefficient array is pointed at by CDP. The multiplication results
are added with the contents in the accumulators AC2 and AC3, and the final results are
stored back to AC2 and AC3.
Instructions used to perform addition (ADD), subtraction (SUB), and multiplication (MPY)
are arithmetic instructions. Most arithmetic operations can be executed conditionally.
The combination of these basic arithmetic operations produces another powerful subset
of instructions such as the multiply–accumulation (MAC) and multiply–subtraction
(MAS) instructions. The C55x also supports extended precision arithmetic such as
add-with-carry, subtract-with-borrow, signed/signed, signed/unsigned, and unsigned/
unsigned arithmetic instructions. In the following example, the multiplication instruc-
tion, mpym, multiplies the data pointed at by AR1 and CDP, and the multiplication
product is stored in the accumulator AC0. After the multiplication, both pointers
(AR1 and CDP) are updated.
In the next example, the macmr40 instruction uses AR1 and AR2 as data pointers
and performs the multiply-accumulate operation. At the same time, the instruction also
carries out the following operations:
1. The keyword 'r' produces a rounded result in the high portion of the accumulator
AC3. After rounding, the lower portion of AC3(15:0) is cleared.
2. 40-bit overflow detection is enabled by the keyword '40'. If overflow is detected,
the result in accumulator AC3 will be saturated to its 40-bit maximum value.
3. The option 'T3 = *AR1' loads the data pointed at by AR1 into the temporary
register T3 for later use.
4. Finally, AR1 and AR2 are incremented by one to point to the next data location in
memory space.
Logic operation instructions such as AND, OR, NOT, and XOR (exclusive-OR) on data
values are widely used in program decision-making and execution flow control. They
are also found in many applications such as error correction coding in data commu-
nications. For example, the instruction and #0xf, AC0 clears all bits in the
accumulator AC0 except the four least significant bits.
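The same masking idea can be expressed in C. The following minimal sketch is only an illustrative analogue of the assembly instruction above; the function name is hypothetical and not part of the C55x tools.

#include <stdint.h>

/* Illustrative C analogue of the masking example above: clear all bits of a
   32-bit accumulator value except the four least significant bits. */
uint32_t mask_low_nibble(uint32_t ac0)
{
    return ac0 & 0xF;   /* same effect as: and #0xf, AC0 */
}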
The bit manipulation instructions act on an individual bit or a pair of bits of a register
or data memory. These types of instructions consist of bit clear, bit set, and bit test to a
specified bit (or a pair of bits). Similar to logic operations, the bit manipulation
instructions are often used with logic operations in supporting decision-making pro-
cesses. In the following example, the bit clear instruction clears the carry bit (bit 11) of
the status register ST0.
The move instruction is used to copy data values between registers, memory locations,
register to memory, or memory to register. For example, to initialize the upper portion
of the 32-bit accumulator AC1 with a constant and zero out the lower portion of the
AC1, we can use the instruction mov #k16, AC1, where the 16-bit constant k16 is first shifted
left by 16 bits and then loaded into the upper portion of the accumulator AC1(31:16), while
the lower portion of the accumulator AC1(15:0) is zero filled. The 16-bit constant that
follows the # can be any signed number.
1. The unsigned data content in AC0 is shifted to the left according to the content in
the temporary register T2.
3. The data value in AC0 may be saturated if the left-shift or the rounding process
causes the result in AC0 to overflow.
4. The final result after left shifting, rounding, and possibly saturation, is stored into the
data memory pointed at by the pointer AR1.
The program flow control instructions are used to control the execution flow of the
program, including branching (B), subroutine call (CALL), loop operation (RPTB),
return to caller (RET), etc. All these instructions can be either conditionally or uncondi-
tionally executed. For example,
callcc my_routine, TC1
is the conditional instruction that will call the subroutine my_routine only if the test
control bit TC1 of the status register ST0 is set. Conditional branch (BCC) and condi-
tional return (RETCC) can be used to control the program flow according to certain
conditions.
The conditional execution instruction, xcc, can be implemented in either condi-
tional execution or partial conditional execution. In the following example, the
conditional execution instruction tests the TC1 bit. If TC1 is set, the instruction,
mov *AR1, AC0, will be executed, and both AC0 and AR1 are updated. If the
condition is false, AC0 and AR1 will not be changed. Conditional execution instruction
xcc allows for the conditional execution of one instruction or two parallel instructions.
The label is used for readability, especially when two parallel instructions are
used.
In addition to conditional execution, the C55x also provides the capability of partially
conditional execution of an instruction. An example of partial conditional execution is
given as follows:
When the condition is true, both AR1 and AC0 will be updated. However, if the
condition is false, the execution phase of the pipeline will not be carried out. Since the
first operand (the address pointer AR1) is updated in the read phase of the pipeline, AR1
will be updated whether or not the condition is true, while the accumulator AC0 will
remain unchanged at the execution phase. That is, the instruction is only partially
executed.
Many real-time DSP applications require repeated executions of some instructions
such as filtering processes. These arithmetic operations may be located inside nested
loops. If the number of data processing instructions in the inner loop is small, the
percentage of overhead for loop control may be very high. The loop control instruc-
tions, such as testing and updating the loop counter(s), pointer(s), and branches back to
the beginning of the loop to execute the loop again, impose a heavy overhead for the
processor. To minimize the loop overhead, the C55x includes built-in hardware for zero-
overhead loop operations.
The single-repeat instruction (RPT) repeats the following single-cycle instruction or two
single-cycle instructions that are executed in parallel. For example,
rpt #N-1 ; Repeat next instruction N times
instruction_A
The number, N-1, is loaded into the single-repeat counter (RPTC) by the RPT
instruction. The following instruction_A will be executed N times.
The block-repeat instruction (RPTB) forms a loop that repeats a block of instructions.
It supports a nested loop with an inner loop being placed inside an outer loop. Block-
repeat operations use the block-repeat counters BRC0 and BRC1. For example,
mov #N-1, BRC0 ; Repeat outer loop N times
mov #M-1, BRC1 ; Repeat inner loop M times
rptb outloop-1 ; Repeat outer loop up to outloop
mpy *AR1, *CDP, AC0
mpy *AR2, *CDP, AC1
rptb inloop-1 ; Repeat inner loop up to inloop
mac *AR1, *CDP, AC0
mac *AR2, *CDP, AC1
inloop ; End of inner loop
outloop ; End of outer loop
The above example uses two repeat instructions to control a nested repetitive oper-
ation. The block-repeat structure
rptb label_name-1
(more instructions . . . )
label_name
executes a block of instructions between the rptb instruction and the end label
label_name. The maximum number of instructions that can be used inside a block-
repeat loop is limited to 64 Kbytes of code. Because of the pipeline scheme, the minimum
cycles within a block-repeat loop are two. The maximum number of times that a loop can
be repeated is limited to 65,536 (2^16) because of the 16-bit block-repeat counters.
As discussed in Chapter 1, mixed C and assembly programs are used for many
DSP applications. C code provides the ease of maintenance and portability, while
assembly code has the advantages of run-time efficiency and code density. We can
develop C functions and assembly routines, and use them together. In this section, we
will introduce how to interface C with assembly programs and review the guidelines of
the C function calling conventions for the TMS320C55x.
The assembly routines called by a C function can have arguments and return values
just like C functions. The following guidelines are used for writing the C55x assembly
code that is callable by C functions.
Naming convention: Use the underscore `_' as a prefix for all variables and routine
names that will be accessed by C functions. For example, use _my_asm_func as the name
of an assembly routine called by a C function. If a variable is defined in an assembly
routine, it must use the underscore prefix for a C function to access it, such as _my_var. The
prefix `_' is used by the C compiler only. When we access assembly routines or variables in
C, we don't need to use the underscore as a prefix. For example, the following C program
calls the assembly routine using the name my_asm_func without the underscore:
extern int my_asm_func( ); /* Reference an assembly function */
void main( )
{
int c; /* Define local variable */
c = my_asm_func( ); /* Call the assembly function */
}
Variable definition: The variables that are accessed by both C and assembly routines
must be defined as global variables using the directive .global, .def, or .ref by the
assembler.
Compiler mode: By using the C compiler, the C55x CPL (compiler mode) bit is
automatically set for using stack-pointer (SP) relative addressing mode when entering
an assembly routine. The indirect addressing modes are preferred under this configur-
ation. If we need to use direct addressing modes to access data memory in a C callable
assembly routine, we must change to DP (data-page) direct addressing mode. This can
be done by clearing the CPL bit. However, before the assembly routine returns to its C
caller function, the CPL bit must be restored. The bit clear and bit set instructions,
bclr CPL and bset CPL, can be used to reset and set the CPL bit in the status register
ST1, respectively. The following code can be used to check the CPL bit, turn CPL bit off
if it is set, and restore the CPL bit before returning to the caller.
btstclr #14, *(ST1), TC1 ; Turn off CPL bit if it is set
(more instructions . . . )
xcc continue, TC1 ; TC1 is set if we turned CPL bit off
bset CPL ; Turn CPL bit on
continue
ret
If the arguments are pointers to data memory, they are treated as data pointers. If the
argument can fit into a 16-bit register such as int and char, it is considered to be 16-bit
data. Otherwise, it is considered 32-bit data. The arguments can also be structures. A
structure of two words (32 bits) or less is treated as a 32-bit data argument and is passed
using a 32-bit register. For structures larger than two words, the arguments are passed
by reference. The C compiler will pass the address of a structure as a pointer, and this
pointer is treated like a data argument.
For a subroutine call, the arguments are assigned to registers in the order that the
arguments are listed in the function. They are placed in the following registers according
to their data type, in an order shown in Table 2.6.
Note in Table 2.6 the overlap between AR registers used for data pointers and the
registers used for 16-bit data. For example, if T0 and T1 hold 16-bit data arguments,
and AR0 already holds a data pointer argument, a third 16-bit data argument would be
placed into AR1. See the second example in Figure 2.13. If the registers of the appro-
priate type are not available, the arguments are passed onto the stack. See the third
example in Figure 2.13.
For example, for the prototype int func(int i1, long l2, int *p3) (the first example in Figure 2.13), the return value uses T0, i1 is passed in T0, l2 in AC0, and p3 in AR0.
Return values: The calling function/routine collects the return value from the called
function/subroutine. A 16-bit integer data is returned by the register T0 and a 32-bit
data is returned in the accumulator AC0. A data pointer is returned in (X)AR0. When
the called routine returns a structure, the structure is on the local stack.
Register use and preservation: When making a function call, the register assignments
and preservations between the caller and called functions are strictly defined. Table 2.7
describes how the registers are preserved during a function call. The called function
must save the contents of the save-on-entry registers (T2, T3, AR5, AR6, and AR7) if
it will use these registers. The calling function must push the contents of any other
save-on-call registers onto the stack if these registers' contents are needed after the
function/subroutine call. A called function can freely use any of the save-on-call
registers (AC0-AC2, T0, T1, and AR0-AR4) without saving their values. More detailed
descriptions can be found in the TMS320C55x Optimizing C Compiler User's
Guide [3].
We have introduced the TMS320C55x assembly language and several addressing modes.
Experiments given in this section will help readers use different addressing modes for writing
assembly code. We also introduced the C function interfacing with assembly routines
and we will explore the C-assembly interface first.
In this experiment, we will learn how to write C-callable assembly routines. The
following example illustrates a C function main, which calls an assembly routine to
perform a summation, and returns the result back to the C main function. The C
program exp2a.c is listed as follows:
extern int sum(int *); /* Assembly routine sum */
int x[2] = {0x1234, 0x4321}; /* Define x[] as global array */
int s; /* Define s as global variable */
void main( )
{
s = sum(x); /* Call assembly routine _sum */
}
The assembly routine exp2_sum.asm is listed as follows:
.global _sum
_sum
mov *AR0+, AC0 ; AC0 = x[0]
add *AR0, AC0  ; AC0 = x[0] + x[1]
mov AC0, T0 ; Return value in T0
ret ; Return to calling function
where the label _sum defines the entry point of the assembly subroutine, and the
directive .global declares the assembly routine _sum as a global function.
Perform the following steps for Experiment 2A:
3. Write exp2_sum.asm based on the assembly sample code given above and save it
in A:\Experiment2.
4. Copy the linker command file, exp1.cmd, from previous experiment, rename it to
exp2.cmd and save it to A:\Experiment2.
7. Compile and debug the code, then load exp2a.out and issue the Go-Main
command.
8. Watch and record the changes in the AC0, AR0, and T0 in the CPU register
window. Watch the memory locations 's' and 'x' and record when the content of the result
's' is updated. Why?
10. Examine the assembly code exp2a.asm that the C compiler generated. How is the
return value passed to the C calling function?
11. If we define the result `s' inside the function main( ), from which location can we
view its value? Why?
In Section 2.4, we introduced six C55x addressing modes. In the second part of the
experiments, we will write assembly routines to exercise different addressing modes to
understand how each of these addressing modes work. The assembly routines for this
experiment are called by a C function.
2. Since the array Ai[8] is defined in the C function, the assembly routine references it
using the directive .global (or .ref ).
2. The instruction btstclr #14, *(ST1), TC1 tests the CPL (compiler mode) bit (bit
14) of the status register ST1. The compiler mode will be set if this routine is called
by a C function. If the test is true, the test flag bit, TC1 (bit 13 of status register ST0)
will be set, and the instruction clears the CPL bit. This is necessary for using DP
direct addressing mode instead of SP direct addressing mode. At the end of the code
section, the conditional execution instruction, xcc continue, TC1, is used to set
the CPL bit if TC1 was set.
4. The sum-of-product operation (dot product) is one of the most commonly used
functions by DSP systems. A dot product of two vectors of length L can be
expressed as
Y = \sum_{i=0}^{L-1} A_i X_i = A_0 X_0 + A_1 X_1 + \cdots + A_{L-1} X_{L-1},

where the vectors Ai and Xi are one-dimensional arrays of length L. There are many
different ways to access the elements of the arrays, such as direct, indirect, and
absolute addressing modes. In the following experiment, we will write a subroutine
int exp2b_3(int *Ai, int *Xi) to perform the dot product using indirect
addressing mode, and store the returned value in the variable result in data
memory. The code example is given as follows:
; Assume AR0 and AR1 are pointing to Ai and Xi
mpym *AR0+, *AR1+, AC0 ; Multiply Ai[0] and Xi[0]
mpym *AR0+, *AR1+, AC1 ; Multiply Ai[1] and Xi[1]
add AC1, AC0           ; Accumulate the partial result
mpym *AR0+, *AR1+, AC1 ; Multiply Ai[2] and Xi[2]
add AC1, AC0           ; Accumulate the partial result
(more instructions . . . )
mov AC0, T0
4. In the program, arrays Ai and Xi are defined as global arrays in exp2.c. The
Ai and Xi arrays have the same data values as given previously. The return value is
passed to the calling function by T0.
5. Write an assembly routine int exp2b_4(int *Ai, int *Xi) using the indirect
addressing mode in conjunction with parallel instructions and repeat instructions
to improve the code density and efficiency. The following is an example of the
code:
mpym *AR0+, *AR1+, AC0 ; Multiply Ai[0] and Xi[0]
|| rpt #6              ; Multiply and accumulate the rest
macm *AR0+, *AR1+, AC0
4. The auxiliary registers, AR0 and AR1, are used as data pointers to array Ai and
array Xi, respectively. The instruction macm performs the multiply-and-accumulate
operation. The parallel bars || indicate the parallel operation of two instructions.
The repeat instruction, rpt #K, will repeat the following instruction K+1 times.
8. Open the memory watch window to watch how the arrays Ai and Xi are initialized
in data memory by the assembly routine exp2b_1.asm and exp2b_2.asm.
9. Open the CPU registers window to see how the dot product is computed by
exp2b_3.asm, and exp2b_4.asm.
10. Use the profile capability learned from the experiments given in Chapter 1 to
measure the run-time of the sum-of-product operations and compare the cycle
difference between the routines exp2b_3.asm and exp2b_4.asm.
11. Use the map file to compare the assembly program code sizes of the routines
exp2b_3.asm and exp2b_4.asm. Note that the program size is given in bytes.
References
[1] Texas Instruments, Inc., TMS320C55x DSP CPU Reference Guide, Literature no. SPRU371A,
2000.
[2] Texas Instruments, Inc., TMS320C55x Assembly Language Tools User's Guide, Literature no.
SPRU380, 2000.
[3] Texas Instruments, Inc., TMS320C55x Optimizing C Compiler User's Guide, Literature no.
SPRU281, 2000.
[4] Texas Instruments, Inc., TMS320C55x DSP Mnemonic Instruction Set Reference Guide, Literature
no. SPRU374A, 2000.
[5] Texas Instruments, Inc., TMS320C55x DSP Algebraic Instruction Set Reference Guide, Literature
no. SPRU375, 2000.
[6] Texas Instruments, Inc., TMS320C55x Programmer's Reference Guide, Literature no. SPRU376,
2000.
Exercises
1. Check the following examples to determine if these are correct parallel instructions. If not,
correct the problems.
(a) mov *AR1, AC1
:: add @x, AR2
(b) mov AC0, dbl(*AR2)
:: mov dbl(*(AR1+T0)), AC2
(c) mpy *AR1, *AR2, AC0
:: mpy *AR3, *AR2, AC1
|| rpt #127
2. Given a memory block and XAR0, XDP, and T0 as shown in Figure 2.14. Determine the
contents of AC0, AR0, and T0 after the execution of the following instructions:
(a) mov *(#x2), AC0
(b) mov @(x x1), AC0
(c) mov @(x x0x80), AC0
(d) mov *AR0, AC0
(e) mov *(AR0+T0), AC0
(f) mov *AR0(T0), AC0
(g) mov *AR0(#-1), AC0
(h) mov *AR0(#2), AC0
(i) mov *AR0(#0x80), AC0
3. C functions are defined as follows. Use Table 2.8 to show how the C compiler passes
parameters for each of the following functions:
[Figure 2.14: data memory block and the contents of XAR0, XDP, and T0 (T0 = 0x0004) used in Exercise 2.]
3 DSP Fundamentals and Implementation Considerations
The derivation of discrete-time systems is based on the assumption that the signal and
system parameters have infinite precision. However, most digital systems, filters, and
algorithms are implemented on digital hardware with finite wordlength. Therefore DSP
implementation with fixed-point hardware requires special attention because of the
potential quantization and arithmetic errors, as well as the possibility of overflow.
These effects must always be taken into consideration in DSP system design and
implementation.
This chapter presents some fundamental DSP concepts in time domain and practical
considerations for the implementation of digital filters and algorithms on DSP hard-
ware. Sections 3.1 and 3.2 briefly review basic time-domain DSP issues. Section 3.3
introduces probability and random processes, which are useful in analyzing the finite-
precision effects in the latter half of the chapter and adaptive filtering in Chapter 8. The
rigorous treatment of these subjects can be found in other DSP textbooks listed in the
references. Readers who are familiar with these DSP fundamentals should be able to skip
through some of these sections. However, most notations used throughout the book will
be defined in this chapter.
There are several ways to describe signals. For example, signals encountered in com-
munications are classified as deterministic or random. Deterministic signals are used
for testing purposes and for mathematically describing certain phenomena. Random
signals are information-bearing signals such as speech. Some deterministic signals will be
introduced in this section, while random signals will be discussed in Section 3.3.
As discussed in Chapter 1, a digital signal is a sequence of numbers \{x(n), -\infty < n < \infty\}, where n is the time index. The unit-impulse sequence, with only one non-zero value at n = 0, is defined as

\delta(n) = \begin{cases} 1, & n = 0 \\ 0, & n \neq 0, \end{cases}    (3.1.1)

where \delta(n) is also called the Kronecker delta function. This unit-impulse sequence is very useful for testing and analyzing the characteristics of DSP systems, which will be discussed in Section 3.1.3.
The unit-step sequence is defined as

u(n) = \begin{cases} 1, & n \geq 0 \\ 0, & n < 0. \end{cases}    (3.1.2)

This signal is very convenient for describing a causal (or right-sided) signal x(n) that is defined for n \geq 0. Causal signals are the most commonly encountered signals in real-time DSP systems.
Sinusoidal signals (sinusoids or sinewaves) are the most important sine (or cosine)
signals that can be expressed in a simple mathematical formula. They are also good
models for real-world signals. The analog sinewave can be expressed as
x(t) = A\sin(\Omega t + \phi) = A\sin(2\pi f t + \phi),    (3.1.3)

where A is the amplitude,

\Omega = 2\pi f    (3.1.4)

is the frequency in radians per second (rad/s), f is the frequency in Hz, and \phi is the phase shift (initial phase at origin t = 0) in radians.
When the analog sinewave defined in (3.1.3) is connected to the DSP system shown in Figure 1.1, the digital signal x(n) available to the DSP hardware is the causal sinusoidal signal

x(n) = A\sin(\Omega nT + \phi), \quad n = 0, 1, \ldots, \infty
     = A\sin(\Omega nT + \phi)\,u(n)
     = A\sin(2\pi f nT + \phi)\,u(n),    (3.1.5)

where T is the sampling period in seconds. This causal sequence can also be expressed as

x(n) = A\sin(\omega n + \phi)\,u(n),    (3.1.6)

where

\omega = \Omega T = \frac{\Omega}{f_s}    (3.1.7)

and

F = fT = \frac{f}{f_s}.    (3.1.8)
Example 3.1: Generate 64 samples of a sine signal with A = 2, f = 1000 Hz, and fs = 8 kHz using MATLAB. Since F = f/fs = 0.125, we have ω = 2πF = 0.25π. From Equation (3.1.6), we need to generate x(n) = 2 sin(ωn), for n = 0, 1, ..., 63. These sinewave samples can be generated and plotted by the following MATLAB script:

n = [0:63];
omega = 0.25*pi;
xn = 2*sin(omega*n);
plot(n, xn);
A DSP system (or algorithm) performs prescribed operations on digital signals. In some
applications, we view a DSP system as an operation performed on an input signal, x(n),
in order to produce an output signal, y(n), and express the general relationship between
x(n) and y(n) as

y(n) = T[x(n)],    (3.1.9)

where T denotes the computational process for transforming the input signal, x(n), into
the output signal, y(n). A block diagram of the DSP system defined in (3.1.9) is
illustrated in Figure 3.1.
The processing of digital signals can be described in terms of combinations of certain
fundamental operations on signals. These operations include addition (or subtraction),
multiplication, and time shift (or delay). A DSP system consists of the interconnection
of three basic elements: adders, multipliers, and delay units.
Two signals, x1(n) and x2(n), can be added as illustrated in Figure 3.2, where

y(n) = x_1(n) + x_2(n)    (3.1.10)
is the adder output. With more than two inputs, the adder could be drawn as a multi-
input adder, but the additions are typically done two inputs at a time in digital hard-
ware. The addition operation of Equation (3.1.10) can be implemented as the following
C55x code using direct addressing mode:
mov @x1n, AC0 ; AC0 = x1(n)
add @x2n, AC0 ; AC0 = x1(n) + x2(n) = y(n)
A given signal can be multiplied by a constant, a, as illustrated in Figure 3.3, where
x(n) is the multiplier input, a represents the multiplier coefficient, and
y(n) = a\,x(n)    (3.1.11)

is the multiplier's output. The multiply operation of Equation (3.1.11) can be imple-
mented by the following C55x code using indirect addressing mode:
amov #alpha, XAR1 ; AR1 points to alpha (a)
amov #xn, XAR2    ; AR2 points to x(n)
mpy *AR1, *AR2, AC0 ; AC0 = a*x(n) = y(n)
The sequence {x(n)} can be shifted (delayed) in time by one sampling period, T, as illustrated in Figure 3.4. The box labeled z^{-1} represents the unit delay, x(n) is the input signal, and

y(n) = x(n-1)    (3.1.12)

is the output signal, which is the input signal delayed by one unit (a sampling period). In fact, the signal x(n-1) is actually the stored signal x(n) one sampling period
(T seconds) before the current time. Therefore the delay unit is very easy to implement
in a digital system, but is difficult to implement in an analog system. A delay by more
than one unit can be implemented by cascading several delay units in a row. Therefore
an L-unit delay requires L memory locations configured as a first-in first-out buffer,
which can also be implemented as a circular buffer (will be discussed in Chapter 5) in
memory.
There are several ways to implement delay operations on the TMS320C55x. The
following code uses a delay instruction to move the contents of the addressed data
memory location into the next higher address location:
amov #xn, XAR1 ; AR1 points to x(n)
delay *AR1 ; Contents of x(n) are copied to x(n-1)
These three basic building blocks can be connected to form a block diagram repre-
sentation of a DSP system. The input±output (I/O) description of a DSP system consists
of mathematical expressions with addition, multiplication, and delays, which explicitly
define the relationship between the input and output signals. DSP algorithms are closely
related to block diagram realizations of the I/O difference equations. For example,
consider a simple DSP system described by the difference equation

y(n) = a\,x_1(n) + a\,x_2(n).    (3.1.13)

The block diagram of the system using the three basic building blocks is sketched in
Figure 3.5(a). Note that the difference equation (3.1.13) and the block diagram show
exactly how the output signal y(n) is computed in the DSP system for a given input
signal, x(n).
The DSP algorithm shown in Figure 3.5(a) requires two multiplications and one
addition to compute the output sample y(n). A simple algebraic simplification may be used to rewrite (3.1.13) as

y(n) = a\,[x_1(n) + x_2(n)],    (3.1.14)

which requires only one multiplication and one addition, as shown in Figure 3.5(b).
Figure 3.5 Block diagrams of DSP systems: (a) direct realization described in (3.1.13), and
(b) simplified implementation given in (3.1.14)
When the multiplier coefficient a is a power of 2, such as 0.25 (1/4), we can use a shift operation instead of the multiplication. The following example uses the absolute addressing mode:

mov *(x1n)#-2, AC0 ; AC0 = 0.25*x1(n)
add *(x2n)#-2, AC0 ; AC0 = 0.25*x1(n) + 0.25*x2(n)

where the right-shift option, #-2, shifts the contents of x1n and x2n to the right by 2 bits (equivalent to dividing them by 4) before they are used.
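The same shift-for-scale idea can be written in C. This is a minimal sketch, not the book's code; the function name is hypothetical and it assumes two's complement arithmetic, where an arithmetic right shift by 2 bits divides a sample by 4.

#include <stdint.h>

/* Illustrative C version of the shift-based scaling above:
   y(n) = 0.25*x1(n) + 0.25*x2(n) computed with two shifts and an add. */
int16_t quarter_sum(int16_t x1n, int16_t x2n)
{
    return (int16_t)((x1n >> 2) + (x2n >> 2));
}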
If the input signal to the DSP system is the unit-impulse sequence \delta(n) defined in (3.1.1), then the output signal, h(n), is called the impulse response of the system. The impulse response plays a very important role in the study of DSP systems. For example, consider a digital system with the I/O equation

y(n) = b_0 x(n) + b_1 x(n-1) + b_2 x(n-2).    (3.1.15)

The impulse response of the system can be obtained by applying the unit-impulse sequence \delta(n) to the input of the system. The outputs are the impulse response coefficients computed as follows:
h(0) = y(0) = b_0 \cdot 1 + b_1 \cdot 0 + b_2 \cdot 0 = b_0
h(1) = y(1) = b_0 \cdot 0 + b_1 \cdot 1 + b_2 \cdot 0 = b_1
h(2) = y(2) = b_0 \cdot 0 + b_1 \cdot 0 + b_2 \cdot 1 = b_2
h(3) = y(3) = b_0 \cdot 0 + b_1 \cdot 0 + b_2 \cdot 0 = 0
...
Therefore the impulse response of the system defined in (3.1.15) is \{b_0, b_1, b_2, 0, 0, \ldots\}.
The I/O equation given in (3.1.15) can be generalized as the difference equation with
L parameters, expressed as
y(n) = b_0 x(n) + b_1 x(n-1) + \cdots + b_{L-1} x(n-L+1) = \sum_{l=0}^{L-1} b_l\,x(n-l).    (3.1.16)
Substituting x(n) = \delta(n) into (3.1.16), the output is the impulse response expressed as

h(n) = \sum_{l=0}^{L-1} b_l\,\delta(n-l) = \begin{cases} b_n, & n = 0, 1, \ldots, L-1 \\ 0, & \text{otherwise.} \end{cases}    (3.1.17)
Therefore the length of the impulse response is L for the difference equation defined in
(3.1.16). Such a system is called a finite impulse response (FIR) system (or filter). The
impulse response coefficients, b_l, l = 0, 1, \ldots, L-1, are called filter coefficients
(weights or taps). The FIR filter coefficients are identical to the impulse response
coefficients. Table 3.2 shows the relationship of the FIR filter impulse response h(n)
and its coefficients bl .
As shown in (3.1.17), the system described in (3.1.16) has a finite number of non-zero
impulse response coefficients b_l, l = 0, 1, \ldots, L-1. The signal-flow diagram of the
Table 3.2 The FIR filter impulse response h(n) and the filter coefficients b_l

b_l        b_0     b_1     b_2    ...   b_{L-1}
n       x(n)  x(n-1)  x(n-2)  ...  x(n-L+1)   y(n) = h(n)
0         1      0       0    ...     0        h(0) = b_0
1         0      1       0    ...     0        h(1) = b_1
2         0      0       1    ...     0        h(2) = b_2
...      ...    ...     ...   ...    ...          ...
L-1       0      0       0    ...     1        h(L-1) = b_{L-1}
L         0      0       0    ...     0            0
[Figure 3.6: signal-flow diagram of the FIR filter, a tapped delay line of z^{-1} units whose outputs are weighted by b_0, b_1, ..., b_{L-1} and summed to form y(n).]
system described by the I/O Equation (3.1.16) is illustrated in Figure 3.6. The string
of z^{-1} units is called a tapped-delay-line, as each z^{-1} corresponds to a delay of
one sampling period. The parameter, L, is the order (length) of the FIR filter. The
design and implementation of FIR filters (transversal filters) will be discussed in
Chapter 5.
The moving (running) average filter is a simple example of an FIR filter. Averaging is
used whenever data fluctuates and must be smoothed prior to interpretation. Consider
an L-point moving-average filter defined as

y(n) = \frac{1}{L}\left[x(n) + x(n-1) + \cdots + x(n-L+1)\right] = \frac{1}{L}\sum_{l=0}^{L-1} x(n-l),    (3.2.1)
where each output signal y(n) is the average of L consecutive input signal samples. The summation operation that adds all the samples of x(n) between 1 and L can be implemented using the MATLAB statement:

yn = sum(xn(1:L));
The moving-average filter of (3.2.1) can also be computed recursively as

y(n) = y(n-1) + \frac{1}{L}\left[x(n) - x(n-L)\right].    (3.2.2)
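A small C sketch of the recursive moving average in (3.2.2) is given below. The buffer layout, the fixed window length, and the function name are illustrative assumptions, not code from the text; the caller keeps the previous output y(n-1).

#define WIN 8                     /* window length L, assumed for illustration */

static float xbuf[WIN + 1];       /* xbuf[0] = x(n), ..., xbuf[WIN] = x(n-L)   */

/* Update the moving average per (3.2.2): y(n) = y(n-1) + (x(n) - x(n-L))/L. */
float moving_average(float yn_1, float xn)
{
    int i;
    for (i = WIN; i > 0; i--)     /* shift the delay line by one sample */
        xbuf[i] = xbuf[i - 1];
    xbuf[0] = xn;
    return yn_1 + (xbuf[0] - xbuf[WIN]) / (float)WIN;
}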
The strength of a digital signal may be expressed in terms of peak value, energy, and
power. The peak value of deterministic signals is the maximum absolute value of the
signal. That is,
M_x = \max_n \{|x(n)|\}.    (3.2.3)
[Figure 3.7: a window of L samples at time n, spanning x(n-L+1) through x(n).]
The maximum value of the array xn can be found using the MATLAB function
Mx = max(xn);
The energy of the signal x(n) is defined as

E_x = \sum_n |x(n)|^2.    (3.2.4)

The power of the signal x(n) is defined as

P_x = \lim_{L\to\infty} \frac{1}{L}\sum_{n=0}^{L-1} |x(n)|^2.    (3.2.5)
A periodic signal with period L satisfies

x(n) = x(n + kL),    (3.2.6)

where k is an integer and L is the period in samples. Any one period of L samples completely defines a periodic signal. From Figure 3.7, the power of x(n) can be computed by

P_x = \frac{1}{L}\sum_{l=n-L+1}^{n} |x(l)|^2 = \frac{1}{L}\sum_{l=0}^{L-1} |x(n-l)|^2.    (3.2.7)

For example, a real-valued sinewave of amplitude A defined in (3.1.6) has the power P_x = 0.5A^2.
In most real-time applications, the power estimate of real-valued signals at time n can
be expressed as
\hat{P}_x(n) = \frac{1}{L}\sum_{l=0}^{L-1} x^2(n-l).    (3.2.8)

Note that this power estimate uses L samples from the most recent sample at time n back to the oldest sample at time n-L+1, as shown in Figure 3.7. Following the derivation of (3.2.2), we have the recursive power estimator

\hat{P}_x(n) = \hat{P}_x(n-1) + \frac{1}{L}\left[x^2(n) - x^2(n-L)\right].    (3.2.9)
or

\hat{P}_x(n) = \left(1 - \frac{1}{L}\right)\hat{P}_x(n-1) + \frac{1}{L}\,x^2(n),    (3.2.10)

which can be written as

\hat{P}_x(n) = (1-\alpha)\hat{P}_x(n-1) + \alpha\,x^2(n),    (3.2.11a)

where

\alpha = \frac{1}{L}.    (3.2.11b)
This is the most effective and widely used recursive algorithm for power estimation
because only three multiplication operations and two memory locations are needed. For
example, (3.2.11a) can be implemented by the C statement
pxn = (1.0-alpha)*pxn + alpha*xn*xn;

where alpha = 1/L as defined in (3.2.11b). This C statement shows that we need three
multiplications and only two memory locations for xn and pxn.
For stationary signals, a larger L (longer window) or smaller a can be used for
obtaining a better average. However, a smaller L (shorter window) should be used
for non-stationary signals for better results. In many real-time applications, the square of the signal, x^2(n), used in (3.2.10) and (3.2.11a) can be replaced with its absolute value |x(n)| in order to further reduce the computation. This efficient power estimator will be further
analyzed in Chapter 4 using the z-transform.
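The magnitude-based variant mentioned above can be sketched in C as follows. The constant ALPHA and the function name are assumptions made for illustration; px holds the running estimate.

#include <math.h>

#define ALPHA 0.125f              /* alpha = 1/L with L = 8, for illustration */

/* Low-cost estimator: the squared sample in (3.2.11a) is replaced by |x(n)|. */
float update_abs_estimate(float px, float xn)
{
    return (1.0f - ALPHA) * px + ALPHA * fabsf(xn);
}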
The output signal y(n) of a linear time-invariant DSP system can be expressed as the linear convolution of the input signal with the impulse response of the system. That is,

y(n) = x(n) * h(n) = \sum_{k=-\infty}^{\infty} x(k)\,h(n-k) = \sum_{k=-\infty}^{\infty} h(k)\,x(n-k),    (3.2.12)

where * denotes the linear convolution operation and the operation defined in (3.2.12) is called the convolution sum. The input signal, x(n), is convolved with the impulse response, h(n), to yield the output signal, y(n). We will discuss the computation of linear convolution in detail in Chapter 5.
As shown in (3.2.12), the I/O description of a DSP system consists of mathematical
expressions, which define the relationship between the input and output signals. The
exact internal structure of the system is either unknown or ignored. The only way to
interact with the system is by using its input and output terminals as shown in Figure
3.8. The system is assumed to be a `black box'. This block diagram representation is a
very effective way to depict complicated DSP systems.
A digital system is called a causal system if and only if

h(n) = 0 \quad \text{for } n < 0.    (3.2.13)

A causal system is one that does not provide a response prior to input application. For a
causal system, the limits on the summation of the Equation (3.2.12) can be modified to
reflect this restriction as
y(n) = \sum_{k=0}^{\infty} h(k)\,x(n-k).    (3.2.14)
Thus the output signal y(n) of a causal system at time n depends only on the present and past input signals, and does not depend on future input signals.
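A minimal C sketch of the causal convolution sum (3.2.14), restricted to an impulse response that is zero outside 0 <= k <= L-1, is shown below. The array layout and names are illustrative assumptions; samples before n = 0 are taken as zero.

/* Compute y(n) = sum_{k=0}^{L-1} h(k) x(n-k) for a causal input sequence. */
float conv_output(const float *x, int n, const float *h, int L)
{
    float y = 0.0f;
    int k;
    for (k = 0; k < L; k++)
        if (n - k >= 0)            /* causal input: x(n-k) = 0 for n-k < 0 */
            y += h[k] * x[n - k];
    return y;
}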
Consider a causal system that has a finite impulse response of length L. That is,
h(n) = \begin{cases} 0, & n < 0 \\ b_n, & 0 \leq n \leq L-1 \\ 0, & n \geq L. \end{cases}    (3.2.15)

Substituting this equation into (3.2.14), the output signal can be expressed identically to Equation (3.1.16). Therefore the FIR filter output can be calculated as the convolution of the input sequence with the coefficients (or impulse response) of the filter.
A digital filter can be classified as either an FIR filter or an infinite impulse response
(IIR) filter, depending on whether or not the impulse response of the filter is of
finite or infinite duration. Consider the I/O difference equation of the digital system
expressed as
y(n) = b\,x(n) + a\,y(n-1),    (3.2.16)

where each output signal y(n) depends on the current input signal x(n) and the previous output signal y(n-1). Assuming that the system is causal, i.e., y(n) = 0 for n < 0, and letting x(n) = \delta(n), the output signals y(n) are computed as

y(0) = b, \quad y(1) = a\,y(0) = ab, \quad y(2) = a\,y(1) = a^2 b, \quad \ldots
In general, we have

h(n) = b\,a^n, \quad n \geq 0.    (3.2.17)

This system has an infinite impulse response h(n) if the coefficients a and b are non-zero. This system is called an IIR system (or filter). In theory, we can calculate an IIR filter output y(n) using either the convolution equation (3.2.14) or the I/O difference equation (3.2.16). However, it is not computationally feasible to use (3.2.14) with the impulse response h(n) given in (3.2.17), because we cannot deal with an infinite number of impulse response coefficients. Therefore we must use an I/O difference equation such as the one defined in (3.2.16) for computing the IIR filter output in practical applications.
The I/O equation of the IIR system given in (3.2.16) can be generalized with the difference equation

y(n) = \sum_{l=0}^{L-1} b_l\,x(n-l) + \sum_{m=1}^{M} a_m\,y(n-m),    (3.2.18)

where the coefficients b_l and a_m determine the characteristics of the filter. The recursive power estimator (3.2.11a), with \alpha defined in (3.2.11b), is an example of a simple first-order IIR filter. Design and implementation of IIR filters will be further discussed in Chapter 6.
In Section 3.1, we treated signals as deterministic signals, which are known exactly and repeatable (such as a sinewave). However, the signals encountered in practice are often random signals such as speech and interference (noise). These random (stochastic) processes can best be described by certain probability concepts. In this section, we will briefly
introduce the concept of probability, followed by random variables and random signal
processing.
An experiment that has at least two possible outcomes is fundamental to the concept of
probability. The set of all possible outcomes in any given experiment is called the sample
space S. An event A is defined as a subset of the sample space S. The probability of
event A is denoted by P(A). Letting A be any event defined on a sample space S, we have
0 \leq P(A) \leq 1    (3.3.1)

and

P(S) = 1.    (3.3.2)
For example, consider the experiment of rolling a fair die N times (N \to \infty); we have S = \{1, 2, 3, 4, 5, 6\} and P(A) = 1/6 for each outcome A.
A random variable, x, is defined as a function that maps all elements from the sample
space S into points on the real line. A random variable is a number whose value depends
on the outcome of an experiment. Given an experiment defined by a sample space with
elements A, we assign to every A a real number x = x(A) according to certain rules. Considering the rolling of a fair die N times and assigning an integer number to each face of the die, we have a discrete random variable that can take any one of the discrete values from 1 to 6.
The cumulative probability distribution function of a random variable x is defined as

F(X) = P(x \leq X),    (3.3.3)

which has the following properties:

F(-\infty) = 0,    (3.3.4a)
F(\infty) = 1,    (3.3.4b)
0 \leq F(X) \leq 1.    (3.3.4c)
The probability density function of the random variable x is defined as

f(X) = \frac{dF(X)}{dX}.    (3.3.5)

It follows that

F(X) = \int_{-\infty}^{X} f(x)\,dx    (3.3.6c)

and

P(X_1 < x \leq X_2) = F(X_2) - F(X_1) = \int_{X_1}^{X_2} f(X)\,dX.    (3.3.6d)
Note that both F(X ) and f(X ) are non-negative functions. The knowledge of these two
functions completely defines the random variable x.
Example 3.2: Consider a random variable x that has the probability density function

f(X) = \begin{cases} 0, & x < X_1 \text{ or } x > X_2 \\ a, & X_1 \leq x \leq X_2. \end{cases}

Since the total area under f(X) must be equal to 1, we have

a = \frac{1}{X_2 - X_1}.
If a random variable x is equally likely to take on any value between the two limits X1
and X2 and cannot assume any value outside that range, it is said to be uniformly
distributed in the range [X1 , X2 ]. As shown in Figure 3.9, a uniform density function is
defined as
f(X) = \begin{cases} \dfrac{1}{X_2 - X_1}, & X_1 \leq X \leq X_2 \\ 0, & \text{otherwise.} \end{cases}    (3.3.7)

[Figure 3.9: the uniform density function, equal to 1/(X_2 - X_1) between X_1 and X_2 and zero elsewhere.]
This uniform density function will be used to analyze quantization noise in Section 3.4.
If x is a discrete random variable that can take on any one of the discrete values X_i, i = 1, 2, \ldots as the result of an experiment, we define

p_i = P(x = X_i).    (3.3.8)
We can use certain statistics associated with random variables. These statistics
are often more meaningful from a physical viewpoint than the probability density
function. For example, the mean and the variance are used to reveal sufficient
features of random variables. The mean (expected value) of a random variable x is
defined as
m_x = E[x] = \begin{cases} \displaystyle\int_{-\infty}^{\infty} X f(X)\,dX, & \text{continuous-time case} \\[4pt] \displaystyle\sum_i X_i\,p_i, & \text{discrete-time case.} \end{cases}    (3.3.9)
For example, the possible outcomes of rolling a fair die and their probabilities are

X_i:   1     2     3     4     5     6
p_i:  1/6   1/6   1/6   1/6   1/6   1/6

so the mean value is

m_x = \sum_{i=1}^{6} p_i X_i = \frac{1}{6}(1 + 2 + 3 + 4 + 5 + 6) = 3.5.
The variance of x, which is a measure of the spread about the mean, is defined as
\sigma_x^2 = E\left[(x - m_x)^2\right] = \begin{cases} \displaystyle\int_{-\infty}^{\infty} (X - m_x)^2 f(X)\,dX, & \text{continuous-time case} \\[4pt] \displaystyle\sum_i p_i\,(X_i - m_x)^2, & \text{discrete-time case,} \end{cases}    (3.3.10)
where x - m_x is the deviation of x from the mean value m_x. The mean of the squared deviation indicates the average dispersion of the distribution about the mean m_x. The positive square root \sigma_x of the variance is called the standard deviation of x. The MATLAB function std calculates the standard deviation. The statement

s = std(x);
computes the standard deviation of the elements in the vector x.
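The fair-die example above can be checked numerically with a short C program. This is only a worked verification of (3.3.9) and (3.3.10) for that example; the program itself is not from the text.

#include <math.h>
#include <stdio.h>

int main(void)
{
    double p = 1.0 / 6.0, mx = 0.0, var = 0.0;
    int xi;
    for (xi = 1; xi <= 6; xi++) mx += p * xi;                 /* mean, (3.3.9)     */
    for (xi = 1; xi <= 6; xi++) var += p * (xi - mx) * (xi - mx); /* variance, (3.3.10) */
    printf("mean = %.4f, variance = %.4f, std = %.4f\n",
           mx, var, sqrt(var));   /* prints 3.5000, 2.9167, 1.7078 */
    return 0;
}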
The variance defined in (3.3.10) can be expressed as

\sigma_x^2 = E[x^2] - m_x^2.    (3.3.11)

We call E[x^2] the mean-square value of x. Thus the variance is the difference between the mean-square value and the square of the mean value. That is, the variance is the expected value of the square of the random variable after the mean has been removed. The expected value of the square of a random variable is equivalent to the notion of power. If the mean value is equal to 0, then the variance is equal to the mean-square value. For a zero-mean random variable x, i.e., m_x = 0, we have

\sigma_x^2 = E[x^2].

For the uniformly distributed random variable defined in Example 3.2, the mean value is

m_x = E[x] = \int_{-\infty}^{\infty} X f(X)\,dX = \frac{1}{X_2 - X_1}\int_{X_1}^{X_2} X\,dX = \frac{X_1 + X_2}{2}.    (3.3.13)
The mean value of the sum of random variables equals the sum of their mean values. The correlation of x and y is denoted by E[xy]. In general, E[xy] \neq E[x]E[y]. However, if x and y are uncorrelated, then

E[xy] = E[x]E[y].

Consider a random variable y that is the sum of N random variables x_i, that is,

y = x_1 + x_2 + \cdots + x_N = \sum_{i=1}^{N} x_i.    (3.3.17)
[Figure 3.10: the Gaussian probability density function f(y), centered at m_y with peak value 1/(\sigma_y\sqrt{2\pi}).]
The probability density function f(Y) becomes a Gaussian (normal) distribution function (normal curve) as N \to \infty. That is,

f(Y) = \frac{1}{\sigma_y\sqrt{2\pi}}\,e^{-(y-m_y)^2/2\sigma_y^2} = \frac{1}{\sigma_y\sqrt{2\pi}}\,e^{-\frac{1}{2}\left(\frac{y-m_y}{\sigma_y}\right)^2},    (3.3.18)

where m_y = \sum_{i=1}^{N} m_i and \sigma_y = \sqrt{\sum_{i=1}^{N} \sigma_i^2}. A graphical representation of this probability density function is shown in Figure 3.10.
The basic element in digital hardware is the two-state (binary) device that contains one
bit of information. A register (or memory unit) containing B bits of information is called
a B-bit word. There are several different methods of representing numbers and carrying
out arithmetic operations. In fixed-point arithmetic, the binary point has a fixed loca-
tion in the register. In floating-point arithmetic, it does not. In general, floating-point
processors are more expensive and slower than fixed-point devices. In this book, we
focus on widely used fixed-point implementations.
A B-bit fixed-point number can be interpreted as either an integer or a fractional
number. It is better to limit the fixed-point representation to fractional numbers because
it is difficult to reduce the number of bits representing an integer. In fixed-point
fractional implementation, it is common to assume that the data is properly scaled so
that their values lie between -1 and 1. When multiplying these normalized fractional
numbers, the result (product) will always be less than one.
A given fractional number x has the fixed-point representation illustrated in Figure 3.11:

x = b_0 \,.\, b_1 b_2 \ldots b_{M-1} b_M,

where the binary point lies between the sign-bit b_0 and b_1, and M is the number of data (magnitude) bits. The most significant bit,

b_0 = \begin{cases} 0, & x \geq 0 \text{ (positive number)} \\ 1, & x < 0 \text{ (negative number),} \end{cases}    (3.4.1)

represents the sign of the number. It is called the sign-bit. The remaining M bits give the magnitude of the number. The rightmost bit, b_M, is called the least significant bit (LSB). The wordlength is B = M + 1 bits, i.e., each data point is represented by B-1 magnitude bits and one sign-bit.
As shown in Figure 3.11, the decimal value of a positive B-bit binary fractional
number x can be expressed as
(x)_{10} = b_1 2^{-1} + b_2 2^{-2} + \cdots + b_M 2^{-M} = \sum_{m=1}^{M} b_m 2^{-m}.    (3.4.2)

For example, the largest positive 16-bit fractional number (b_0 = 0 and b_m = 1 for m = 1, 2, \ldots, 15) has the decimal value

x = \sum_{m=1}^{15} 2^{-m} = \frac{2^{-1} - 2^{-16}}{1 - 2^{-1}} = 1 - 2^{-15} = 0.999969.
The negative numbers can be represented using three different formats: the sign-
magnitude, the 1's complement, and the 2's complement. Fixed-point DSP devices
usually use the 2's complement to represent negative numbers because it allows the
processor to perform addition and subtraction using the same hardware. A positive
number (b0 0) is represented as a simple binary value, while a negative number
(b0 1) is represented using the 2's complement format. With the 2's complement
form, a negative number is obtained by complementing all the bits of the positive binary
number and then adding a 1 to the least significant bit. Table 3.4 shows an example of 3-
bit binary fractional numbers using the 2's complement format and their corresponding
decimal values.
In general, the decimal value of a B-bit binary fractional number can be calculated as
(x)_{10} = -b_0 + \sum_{m=1}^{M} b_m 2^{-m}.    (3.4.3)
Table 3.4 Example of 3-bit binary fractional numbers in 2's complement format and their corresponding decimal values

Binary number   000    001    010    011     100      101      110      111
Decimal value   0.00   0.25   0.50   0.75   -1.00    -0.75    -0.50    -0.25
Therefore the range of a B-bit binary fractional number is

-1 \leq x \leq 1 - 2^{-M}.    (3.4.4)
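Interpreting a 16-bit two's complement word as a Q15 fraction, as in (3.4.3), can be sketched in C; dividing the signed integer by 2^15 gives the same result as the bit-by-bit sum. The function name is illustrative only.

#include <stdint.h>

/* Q15 interpretation per (3.4.3): 0x7FFF -> 0.999969, 0x8000 -> -1.0 */
double q15_to_double(int16_t x)
{
    return (double)x / 32768.0;
}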
Example 3.4: The following are some examples of the Q15 format data used for
C55x assembly programming. The directives .set and .equ have the same
functions that assign a value to a symbolic name. They do not require memory
space. The directives .word and .int are used to initialize memory locations
with particular data values represented in binary, hexadecimal, or integer format.
Each data is treated as a 16-bit value and separated by a comma.
ONE        .set  32767      ; 1 - 2^{-15} = 0.999969 in integer
ONE_HALF   .set  0x4000     ; 0.5 in hexadecimal
ONE_EIGHTH .equ  1000h      ; 1/8 in hexadecimal
MINUS_ONE  .equ  0xffff     ; -1 in hexadecimal
COEFF      .int  0ff00h     ; COEFF = -0x100
ARRAY      .word 2048, 2048 ; ARRAY = [0.0625, 0.0625]
Fixed-point arithmetic is often used with DSP hardware for real-time processing
because it offers fast operation and relatively economical implementation. Its draw-
backs include a small dynamic range (the range of numbers that can be represented) and
low accuracy. Roundoff errors exist only for multiplication. However, the addition may
cause an accumulator overflow. These problems will be discussed in detail in the
following sections.
1. Quantization errors
a. Input quantization
b. Coefficient quantization
2. Arithmetic errors
a. Roundoff (truncation) noise
b. Overflow
The limit cycle oscillation is another phenomenon that may occur when implementing a
feedback system such as an IIR filter with finite-precision arithmetic. The output of the
system may continue to oscillate indefinitely while the input remains 0. This can happen
because of quantization errors or overflow.
This section briefly analyzes finite-precision effects in DSP systems using fixed-point
arithmetic, and presents methods for confining these effects to acceptable levels.
The ADC shown in Figure 1.2 converts a given analog signal x(t) into digital form x(n).
The input signal is first sampled to obtain the discrete-time signal x(nT). Each x(nT)
value is then encoded using B-bit wordlength to obtain the digital signal x(n), which
consists of M magnitude bits and one sign-bit as shown in Figure 3.11. As discussed in
Section 3.4, we assume that the signal x(n) is scaled such that -1 \leq x(n) < 1. Thus the full-scale range of fractional numbers is 2. Since the quantizer employs B bits, the number of quantization levels available for representing x(nT) is 2^B. Thus the spacing between two successive quantization levels is

\Delta = \frac{2}{2^B} = 2^{-(B-1)} = 2^{-M}.    (3.5.1)
[Figure 3.12: quantization of the analog signal x(t) using a 3-bit quantizer; the quantization levels 000-011 are spaced \Delta apart, and e(n) is the quantization error.]

As illustrated in Figure 3.12, the discrete-time signal x(T) is rounded to 010, since the real value is below the middle line
between 010 and 011, while x(2T) is rounded to 011 since the value is above the middle
line.
The quantization error (noise), e(n), is the difference between the discrete-time signal,
x(nT), and the quantized digital signal, x(n). The error due to quantization can be
expressed as

e(n) = x(n) - x(nT),    (3.5.2)

which is bounded by

|e(n)| \leq \frac{\Delta}{2}.    (3.5.3)
Thus the quantization noise generated by an ADC depends on the quantization interval.
The presence of more bits results in a smaller quantization step, therefore it produces
less quantization noise.
From (3.5.2), we can view the ADC output as being the sum of the quantizer input
x(nT) and the error component e(n). That is,

x(n) = Q[x(nT)] = x(nT) + e(n),    (3.5.4)

where Q[\cdot] denotes the quantization operation. The nonlinear operation of the quantizer
is modeled as a linear process that introduces an additive noise e(n) to the discrete-time
signal x(nT) as illustrated in Figure 3.13. Note that this model is not accurate for low-
amplitude slowly varying signals.
For an arbitrary signal with fine quantization (B is large), the quantization error e(n) may be assumed to be uncorrelated with the digital signal x(n), and can be assumed to be random noise that is uniformly distributed in the interval [-\Delta/2, \Delta/2]. From (3.3.13), we can show that

E[e(n)] = \frac{-\Delta/2 + \Delta/2}{2} = 0.    (3.5.5)
[Figure 3.13: linear model of the quantization process, where the noise e(n) is added to x(nT) to produce x(n).]
That is, the quantization noise e(n) has zero mean. From (3.3.14) and (3.5.1), we can
show that the variance
\sigma_e^2 = \frac{\Delta^2}{12} = \frac{2^{-2B}}{3}.    (3.5.6)
Therefore the larger the wordlength, the smaller the input quantization error.
If the quantization error is regarded as noise, the signal-to-noise ratio (SNR) can be
expressed as
SNR = \frac{\sigma_x^2}{\sigma_e^2} = 3 \cdot 2^{2B}\,\sigma_x^2,    (3.5.7)

where \sigma_x^2 denotes the variance of the signal, x(n). Usually, the SNR is expressed in decibels (dB) as

SNR = 10\log_{10}\left(\frac{\sigma_x^2}{\sigma_e^2}\right) = 10\log_{10}\left(3 \cdot 2^{2B}\,\sigma_x^2\right)
    = 10\log_{10}3 + 20B\log_{10}2 + 10\log_{10}\sigma_x^2
    = 4.77 + 6.02B + 10\log_{10}\sigma_x^2.    (3.5.8)
This equation indicates that for each additional bit used in the ADC, the converter
provides about 6-dB signal-to-quantization-noise ratio gain. When using a 16-bit ADC
(B = 16), the SNR is about 96 dB. Another important fact of (3.5.8) is that the SNR is proportional to \sigma_x^2. Therefore we want to keep the power of the signal as large as possible. This is an important consideration when we discuss scaling issues in Section 3.6.
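The dB expression (3.5.8) is easy to evaluate directly. The following small C helper is only an illustrative sketch: for B = 16 the dominant 6.02B term alone is about 96 dB, matching the figure quoted above, while the full expression with a full-scale sinewave (sigma_x^2 = 0.5) gives roughly 98 dB.

#include <math.h>

/* SNR in dB per (3.5.8): 4.77 + 6.02*B + 10*log10(sigma_x^2). */
double adc_snr_db(int B, double sigma_x2)
{
    return 4.77 + 6.02 * B + 10.0 * log10(sigma_x2);
}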
In digital audio applications, quantization errors arising from low-level signals are
referred to as granulation noise. It can be eliminated using dither (low-level noise) added
to the signal before quantization. However, dithering reduces the SNR. In many applica-
tions, the inherent analog audio components (microphones, amplifiers, or mixers) noise
may already provide enough dithering, so adding additional dithers may not be necessary.
If the digital filter is a linear system, the effect of the input quantization noise alone on
the output may be computed. For example, for the FIR filter defined in (3.1.16), the
variance of the output noise due to the input quantization noise may be expressed as
\sigma_{y,e}^2 = \sigma_e^2 \sum_{l=0}^{L-1} b_l^2.    (3.5.9)
This noise is relatively small when compared with other numerical errors and is deter-
mined by the wordlength of ADC.
When implementing a digital filter, the filter coefficients are quantized to the word-
length of the DSP hardware so that they can be stored in the memory. The filter
coefficients, b_l and a_m, of the digital filter defined by (3.2.18) are determined by a filter
design package such as MATLAB for given specifications. These coefficients are usually
represented using the floating-point format and have to be encoded using a finite
number of bits for a given fixed-point processor. Let b_l' and a_m' denote the quantized
values corresponding to b_l and a_m, respectively. The difference equation that can
actually be implemented becomes
y(n) = \sum_{l=0}^{L-1} b_l'\,x(n-l) + \sum_{m=1}^{M} a_m'\,y(n-m).    (3.5.10)
This means that the performance of the digital filter implemented on the DSP hardware
will be slightly different from its design specification. Design and implementation of
digital filters for real-time applications will be discussed in Chapter 5 for FIR filters and
Chapter 6 for IIR filters.
If the wordlength B is not large enough, there will be undesirable effects. The
coefficient quantization effects become more significant when tighter specifications
are used. This generally affects IIR filters more than it affects FIR filters. In many
applications, it is desirable for a pole (or poles) of IIR filters to lie close to the unit circle.
Coefficient quantization can cause serious problems if the poles of desired filters are too
close to the unit circle because those poles may be shifted on or outside the unit circle
due to coefficient quantization, resulting in an unstable implementation. Such undesir-
able effects due to coefficient quantization are far more pronounced when high-order
systems (where L and M are large) are directly implemented since a change in the value
of a particular coefficient can affect all the poles. If the poles are tightly clustered for a
lowpass or bandpass filter with narrow bandwidth, the poles of the direct-form realiza-
tion are sensitive to coefficient quantization errors. The greater the number of clustered
poles, the greater the sensitivity.
The coefficient quantization noise is also affected by the different structures for the
implementation of digital filters. For example, the direct-form implementation of IIR
filters is more sensitive to coefficient quantization errors than the cascade structure
consisting of sections of first- or second-order IIR filters. This problem will be further
discussed in Chapter 6.
As shown in Figure 3.3 and (3.1.11), we may need to compute the product y(n) = a\,x(n) in a DSP system. Assuming the wordlength associated with a and x(n) is B bits, the multiplication yields a 2B-bit product y(n). For example, a 16-bit number times another 16-bit number produces a 32-bit product. In most applications, this product may
16-bit number will produce a 32-bit product. In most applications, this product may
have to be stored in memory or output as a B-bit word. The 2B-bit product can be either
truncated or rounded to B bits. Since truncation causes an undesired bias effect, we
should restrict our attention to the rounding case.
In C programming, rounding a real number to an integer number can be implemented
by adding 0.5 to the real number and then truncating the fractional part. For example,
the following C statement
y (int)(x+0.5);
rounds the real number x to the nearest integer y. As shown in Example 3.5, MATLAB
provides the function round for rounding a real number.
In TMS320C55x implementation, the CPU rounds the operands enclosed by the
rnd( ) expression qualifier. For example,
mov rnd(HI(AC0)), *AR1
This instruction will round the content of the high portion of AC0(31:16), and the
rounded 16-bit value is stored in the memory location pointed at by AR1. Another key
word, R (or r), when used with the operation code, also performs rounding operation on
the operands. The following is an example that rounds the product of AC0 and AC1
and stores the rounded result in the upper portion of the accumulator AC1(31:16), while
the lower portion of the accumulator AC1(15:0) is cleared:
mpyr AC0, AC1
The process of rounding a 2B-bit product to B bits is very similar to that of quantiz-
ing discrete-time samples using a B-bit quantizer. Similar to (3.5.4), the nonlinear
operation of product roundoff can be modeled as the linear process shown in Figure
3.13. That is,
y(n) = Q[a\,x(n)] = a\,x(n) + e(n),

where a\,x(n) is the 2B-bit product and e(n) is the roundoff noise due to rounding the 2B-bit product to B bits. The roundoff noise is a uniformly distributed random process in the
interval defined in (3.5.3). Thus it has a zero-mean and its power is defined in (3.5.6).
It is important to note that most commercially available fixed-point DSP devices such
as the TMS320C55x have double-precision (2B-bit) accumulator(s). As long as the
program is carefully written, it is quite possible to ensure that rounding occurs only at
the final stage of calculation. For example, consider the computation of FIR filter
output given in (3.1.16). We can keep all the temporary products, b_l\,x(n-l) for
l = 0, 1, \ldots, L-1, in the double-precision accumulator. Rounding is only performed
when computation is completed and the sum of products is saved to memory with B-bit
wordlength.
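The practice just described can be sketched in C. This is only an illustrative Q15 example, not the book's code: a wide accumulator stands in for the on-chip accumulator, and rounding (plus a simple saturation) is applied once, when the sum is stored as a 16-bit word.

#include <stdint.h>

/* FIR sum with one rounding step at the end, assuming Q15 data and coefficients. */
int16_t fir_q15(const int16_t *x, const int16_t *b, int L)
{
    int64_t acc = 0;                  /* wide accumulator (stand-in for 40-bit ACx) */
    int l;
    for (l = 0; l < L; l++)
        acc += (int32_t)b[l] * x[l];  /* 16x16 -> 32-bit Q30 products               */
    acc += 1 << 14;                   /* add 0.5 LSB for rounding                   */
    acc >>= 15;                       /* back to Q15                                */
    if (acc > 32767)  acc = 32767;    /* saturate the final B-bit result            */
    if (acc < -32768) acc = -32768;
    return (int16_t)acc;
}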
Most commercially available DSP devices (such as the TMS320C55x) have mechanisms
that protect against overflow and indicate if it occurs. Saturation arithmetic prevents a
result from overflowing by keeping the result at a maximum (or minimum for an
underflow) value. Saturation logic is illustrated in Figure 3.14 and can be expressed as
y = \begin{cases} 1 - 2^{-M}, & x \geq 1 - 2^{-M} \\ x, & -1 \leq x < 1 \\ -1, & x < -1, \end{cases}    (3.6.1)
where x is the original addition result and y is the saturated adder output. If the adder is
under saturation mode, the undesired overflow can be avoided since the 32-bit accu-
mulator fills to its maximum (or minimum) value, but does not roll over. Similar to the
previous example, when 3-bit hardware with saturation arithmetic is used, the addition
result of x1 x2 is 011, or 0.75 in decimal value. Compared with the correct answer 1,
there is an error of 0.25. However, the result is much better than the hardware without
saturation arithmetic.
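The saturation logic of (3.6.1) can be expressed for 32-bit addition with a small C sketch; the function name is illustrative and the clamping limits come from stdint.h.

#include <stdint.h>

/* Saturating 32-bit addition: clamp instead of wrapping on overflow. */
int32_t sat_add32(int32_t a, int32_t b)
{
    int64_t s = (int64_t)a + b;
    if (s > INT32_MAX) return INT32_MAX;
    if (s < INT32_MIN) return INT32_MIN;
    return (int32_t)s;
}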
Saturation arithmetic has a similar effect to `clipping' the desired waveform. This is a
nonlinear operation that will add undesired nonlinear components into the signal. There-
fore saturation arithmetic can only be used to guarantee that overflow will not occur. It is
not the best, nor the only solution, for solving overflow problems.
As mentioned earlier, the TMS320C55x supports the data saturation logic in the data
computation unit (DU) to prevent data computation from overflowing. The logic is
enabled when the overflow mode bit (SATD) in status register ST1 is set (SATD = 1).
When the mode is set, the accumulators are loaded with either the largest positive 32-bit
value (0x00 7FFF FFFF) or the smallest negative 32-bit value (0xFF 8000 0000) if the
result overflows. The overflow mode bit can be set with the instruction
bset SATD
The TMS320C55x provides overflow flags that indicate whether or not an arithmetic
operation has exceeded the capability of the corresponding register. The overflow flag
ACOVx (x = 0, 1, 2, or 3) is set to 1 when an overflow occurs in the corresponding
accumulator ACx. The corresponding overflow flag will remain set until a reset is
performed or when a status bit clear instruction is implemented. If a conditional
instruction that tests overflow status (such as a branch, a return, a call, or a conditional
execution) is executed, the overflow flag is cleared. The overflow flags can be tested and
cleared using instructions.
The most effective technique in preventing overflow is by scaling down the magnitudes
of signals at certain nodes in the DSP system and then scaling the result back up to the
original level. For example, consider the simple FIR filter illustrated in Figure 3.15(a).
Let x(n) = 0.8 and x(n − 1) = 0.6; the filter output is y(n) = 1.2. When this filter is
implemented on fixed-point DSP hardware without saturation arithmetic, undesired
overflow occurs and we get a negative number as the result.
As illustrated in Figure 3.15(b), the scaling factor β < 1 can be used to scale
down the input signal and prevent overflow. For example, when β = 0.5 is used,
we have x(n) = 0.4 and x(n − 1) = 0.3, and the result is y(n) = 0.6. This effectively
prevents the hardware overflow. Note that β = 0.5 can be implemented by right shifting
the data by 1 bit.
If the signal x(n) is scaled by β, the corresponding signal variance changes to β²σx².
Thus the signal-to-quantization-noise ratio given in (3.5.8) changes to

SNR = 10 log10(β²σx²/σe²) = 4.77 + 6.02B + 10 log10 σx² + 20 log10 β.    (3.6.2)

Since we perform fractional arithmetic, β < 1 is used to scale down the input signal. The
term 20 log10 β has a negative value. Thus scaling down the amplitude of the signal
reduces the SNR. For example, when β = 0.5, 20 log10 β = −6.02 dB, thus reducing
the SNR of the input signal by about 6 dB. This is equivalent to losing 1 bit in
representing the signal.
Figure 3.15 Block diagram of simple FIR filters: (a) without scaling, and (b) with scaling factor β
Therefore we have to keep signals as large as possible without overflow. In the FIR
filter given in Figure 3.6, a scaling factor β can be applied to the input signal x(n) to
prevent overflow during the computation of y(n) defined in (3.1.16). The value of the signal
y(n) can be bounded as

|y(n)| = |β Σ_{l=0}^{L−1} bl x(n − l)| ≤ β Mx Σ_{l=0}^{L−1} |bl|,    (3.6.3)
where Mx is the peak value of x(n) defined in (3.2.3). Therefore we can ensure that
|y(n)| < 1 by choosing

β ≤ 1 / (Mx Σ_{l=0}^{L−1} |bl|).    (3.6.4)
Note that the input signal is bounded by |x(n)| ≤ 1, thus Mx ≤ 1. The sum Σ_{l=0}^{L−1} |bl|
can be calculated using the MATLAB statement

bsum = sum(abs(b));

where b is the coefficient vector.
Scaling the input by the scaling factor given in (3.6.4) guarantees that overflow never
occurs in the FIR filter. However, this constraint on β is overly conservative for most
signals of practical interest. We can use a more relaxed condition

β ≤ 1 / (Mx √(Σ_{l=0}^{L−1} bl²)).    (3.6.5)
Other scaling factors may be used based on the frequency response of the filter,
which will be discussed in Chapter 4. Assuming that the reference signal is narrowband,
overflow can be avoided for all sinusoidal signals if the input is scaled by the maximum
magnitude response of the filter. This scaling factor is perhaps the easiest to use,
especially for IIR filters. It involves calculating the magnitude response and then
selecting its maximum value.
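The two bounds can be computed directly from the coefficient vector. The following C sketch evaluates (3.6.4) and (3.6.5) for an illustrative set of coefficients, assuming Mx = 1; the coefficient values are arbitrary and not taken from any experiment in this book.

#include <math.h>
#include <stdio.h>

/* Compute the scaling bounds of (3.6.4) and (3.6.5) for an FIR filter,
   assuming the peak input value Mx = 1.                               */
int main(void)
{
    double b[] = {0.25, 0.5, 0.75, 0.5, 0.25};   /* example coefficients */
    int L = sizeof(b) / sizeof(b[0]);
    double sum_abs = 0.0, sum_sq = 0.0;
    int l;

    for (l = 0; l < L; l++) {
        sum_abs += fabs(b[l]);      /* sum of |b(l)|  */
        sum_sq  += b[l] * b[l];     /* sum of b(l)^2  */
    }

    printf("beta <= %f  (worst-case bound, Eq. 3.6.4)\n", 1.0 / sum_abs);
    printf("beta <= %f  (relaxed bound,    Eq. 3.6.5)\n", 1.0 / sqrt(sum_sq));
    return 0;
}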
An IIR filter designed by a filter design package such as MATLAB may have some of
its filter coefficients greater than 1.0. To implement a filter with coefficients larger than
1.0, we can also scale the filter coefficients instead of changing the incoming signal. One
common solution is to use a different Q format instead of the Q15 format to represent
the filter coefficients. After the filtering operation is completed, the filter output needs
to be scaled back to the original signal level. This issue will be discussed in the C55x
experiment given in Section 3.8.5.
The TMS320C55x provides four 40-bit accumulators, as introduced in Chapter 2.
Each accumulator is split into three parts as illustrated in Figure 3.16. The guard bits
are used as a head margin for computations. These guard bits prevent overflow in
iterative computations such as the FIR filtering of length L = 256 defined in (3.1.16).
Implementation Procedure for Real-Time Applications
The digital filters and algorithms can be implemented on a DSP chip such as the
TMS320C55x following a four-stage procedure to minimize the amount of time spent
on finite wordlength analysis and real-time debugging. Figure 3.17 shows a flowchart of
this procedure.
In the first stage, algorithm design and study is performed on a general-purpose
computer in a non-real-time environment using a high-level MATLAB or C program
with floating-point coefficients and arithmetic. This stage produces an `ideal' system.
In the second stage, we develop the C (or MATLAB) program in a way that emulates
the same sequence of operations that will be implemented on the DSP chip, using the
same parameters and state variables. For example, we can define the data samples and
filter coefficients as 16-bit integers to mimic the wordlength of 16-bit DSP chips. The
program is carefully redesigned and restructured, tailoring it to the architecture, the I/O timing
structure, and the memory constraints of the DSP device. This program can also serve
as a detailed outline for the DSP assembly language program or may be compiled using
the DSP chip's C compiler. This stage produces a `practical' system.
The quantization errors due to fixed-point representation and arithmetic can be
evaluated using the simulation technique illustrated in Figure 3.18. The testing data
x(n) is applied to both the ideal system designed in stage 1 and the practical system
developed in stage 2. The output difference, e(n), between these two systems is due to
finite-precision effects. We can re-optimize the structure and algorithm of the practical
system in order to minimize finite-precision errors.
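The comparison of Figure 3.18 can be emulated on a PC with a few lines of C. The sketch below runs the same FIR filter twice, once in double precision (the `ideal' system) and once with the coefficients and data rounded to 16 bits (the `practical' system), and records the largest difference e(n). The filter, test signal, and rounding helper are our own illustrative choices, not code from the book's software package.

#include <math.h>
#include <stdio.h>

#define L  5
#define N  64
#define PI 3.14159265358979

/* Quantize a value in (-1, 1) to B = 16 bits (Q15) and back to double. */
static double q15(double x)
{
    return floor(x * 32768.0 + 0.5) / 32768.0;
}

int main(void)
{
    double b[L] = {0.1, 0.2, 0.4, 0.2, 0.1};    /* 'ideal' coefficients    */
    double bq[L];                               /* quantized coefficients  */
    double x[N], e, maxerr = 0.0;
    int n, l;

    for (l = 0; l < L; l++) bq[l] = q15(b[l]);
    for (n = 0; n < N; n++) x[n] = 0.5 * sin(0.1 * PI * n);

    for (n = 0; n < N; n++) {
        double y_ideal = 0.0, y_fixed = 0.0;
        for (l = 0; l <= n && l < L; l++) {
            y_ideal += b[l]  * x[n - l];          /* stage-1 'ideal' system     */
            y_fixed += bq[l] * q15(x[n - l]);     /* stage-2 'practical' system */
        }
        e = y_ideal - q15(y_fixed);               /* round the final sum only   */
        if (fabs(e) > maxerr) maxerr = fabs(e);
    }
    printf("maximum finite-precision error e(n): %e\n", maxerr);
    return 0;
}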
The third stage develops the DSP assembly programs (or mixes C programs with
assembly routines) and tests the programs on a general-purpose computer using a DSP
software simulator (CCS with simulator or EVM) with test data from a disk file. This test
data is either a shortened version of the data used in stage 2, which can be generated
internally by the program or read in as digitized data emulating a real application. Output
from the simulator is saved as another disk file and is compared to the corresponding
output of the C program in the second stage. Once a one-to-one agreement is obtained
between these two outputs, we are assured that the DSP assembly program is essentially
correct.
Figure 3.17 Flowchart of the implementation procedure (algorithm analysis and C program implementation; rewrite the C program to emulate the DSP device; real-time testing in the target system)

Figure 3.18 Evaluation of finite-precision effects: the same input x(n) drives the ideal system and the practical system, and the output difference e(n) measures the finite-precision errors
The final stage downloads the compiled (or assembled) and linked program into the
target hardware (such as EVM) and brings it to a real-time operation. Thus the real-
time debugging process is primarily constrained to debugging the I/O timing structure
and testing the long-term stability of the algorithm. Once the algorithm is running, we
can again `tune' the parameters of the systems in a real-time environment.
To experiment with input quantization effects, we shift the input samples to the right
and then evaluate the shifted samples. By altering the number of bits shifted, we
can obtain an output stream that corresponds to a wordlength of 14 bits, 12 bits, and so
on. The example given in Table 3.5 simulates A/D converters of different wordlengths.
Instead of shifting the samples, we mask out the least significant 4 (or 8, or 10) bits of
each sample, resulting in 12-bit (8-bit or 6-bit) data having amplitude comparable to the
16-bit data.
1. Copy the C function exp3a.c and the linker command file exp3.cmd from the
software package to the A:\Experiment3 directory, and create the project exp3a to simulate
16-, 12-, 8-, and 6-bit A/D converters. Use the run-time support library rts55.lib
and build the project.
2. Use the CCS graphic display function to plot all four output buffers: out16,
out12, out8, and out6. Examples of the plots and graphic settings are shown in
Figure 3.19 and Figure 3.20, respectively.
3. Compare the graphic results of each output stream, and describe the differences
between waveforms represented by different wordlength.
#define BUF_SIZE 40
const int sineTable[BUF_SIZE] =
{0x0000, 0x01E0, 0x03C0, 0x05A0, 0x0740, 0x08C0, 0x0A00, 0x0B20,
 0x0BE0, 0x0C40, 0x0C60, 0x0C40, 0x0BE0, 0x0B20, 0x0A00, 0x08C0,
 0x0740, 0x05A0, 0x03C0, 0x01E0, 0x0000, 0xFE20, 0xFC40, 0xFA60,
 0xF8C0, 0xF740, 0xF600, 0xF4E0, 0xF420, 0xF3C0, 0xF3A0, 0xF3C0,
 0xF420, 0xF4E0, 0xF600, 0xF740, 0xF8C0, 0xFA60, 0xFC40, 0x0000 };
int out16[BUF_SIZE];   /* 16-bit output sample buffer */
int out12[BUF_SIZE];   /* 12-bit output sample buffer */
int out8[BUF_SIZE];    /* 8-bit output sample buffer  */
int out6[BUF_SIZE];    /* 6-bit output sample buffer  */
void main()
{
    int i;
    for (i = 0; i < BUF_SIZE; i++)
    {
        out16[i] = sineTable[i];           /* 16-bit data      */
        out12[i] = sineTable[i] & 0xfff0;  /* Mask off 4 bits  */
        out8[i]  = sineTable[i] & 0xff00;  /* Mask off 8 bits  */
        out6[i]  = sineTable[i] & 0xfc00;  /* Mask off 10 bits */
    }
}
Figure 3.19 Quantizing 16-bit data (top-left) into 12-bit (bottom-left), 8-bit (top-right), and
6-bit (bottom-right)
4. Find the mean and variance of quantization noise for the 12-, 8-, and 6-bit A/D
converters.
There are many practical applications from cellular phones to MP3 players that process
speech (audio) signals using DSP. To understand the quantization effects of speech
signals, we use a digitized speech file, timit1.asc, as the input for this experiment. An
experiment code for this experiment, exp3b.c, is listed in Table 3.6.
1. Refer to the program listed in Table 3.6. Write a C function called exp3b.c to
simulate 16, 12, 8, and 4 bits A/D converters (or copy the file exp3b.c from the
software package). Use the digitized speech file timit1.asc (included in the soft-
ware package) as the input signal for the experiment. Create the project exp3b, add
exp3b.c and exp3.cmd into the project.
2. Use CCS probe points to connect disk files as described in Chapter 1. In this experi-
ment, we use a probe point to connect the input speech to the variable named indata.
We also connect four output variables out16, out12, out8, and out4 to generate
quantized output files. As mentioned in Chapter 1, we need to add a header line to the
input data file, timit1.asc. The header information is formatted in the following
syntax using hexadecimal numbers:
1651 2 C4 1 1
where
. the starting address is the address of the data variable where we want to connect
the input file to, such as C4 in the above example
3. Invoke the CCS and set probe points for input and outputs in exp3b.c. Use
probe points to output the speech signals with wordlength of 16, 12, 8, and 4 bits
to data files. Because the output files generated by CCS probe points have a
header line, we need to remove this header. If we want to use MATLAB to listen
to the output files, we have to set the CCS output file format to integer. We can
load the output file out12.dat and listen to it using the following MATLAB
commands:
load out12.dat; % Read data file
soundsc(out12, 8000, 16); % Play at 8 kHz
Listen to the quantization effects between the files with different wordlength.
As discussed in Section 3.6, overflow may occur when DSP algorithms perform accumu-
lations such as FIR filtering. When the number exceeds the maximum capacity of an
accumulator, overflow occurs. Sometimes an overflow occurs when data is transferred to
memory even though the accumulator does not overflow. This is because the C55x
accumulators (AC0–AC3) have 40 bits, while the memory space is usually defined as a
16-bit word. There are several ways to handle the overflow. As introduced in Section 3.6,
the C55x has a built-in overflow-protection unit that will saturate the data value if
overflow occurs.
In this experiment, we will use an assembly routine, ovf_sat.asm (included in
the software package), to evaluate the results with and without overflow protection.
Table 3.7 lists a portion of ovf_sat.asm.
In the program, the following code repeatedly adds the constant 0x140 to AC0:
rptblocal add_loop_end-1
add #0x140<<#16, AC0
mov hi(AC0), *AR5+
add_loop_end
The updated value is stored at the buffer pointed at by AR5. The content of AC0 will
grow larger and larger and eventually the accumulator will overflow. When the over-
flow occurs, a positive number in AC0 suddenly becomes negative. However, when the
C55x saturation mode is enabled, the overflowed positive number will be limited to
0x7FFFFFFF.
.def _ovftest
.bss buff, (0x100)
.bss buff1, (0x100)
;
; Code start
;
_ovftest
bclr SATD ; Clear saturation bit if set
xcc start, T0 != #0 ; If T0 != 0, set saturation bit
bset SATD
start
pshboth XAR5 ; Save XAR5
... ... ; Some instructions omitted here
mov #0, AC0
mov #0x80-1, BRC0 ; Initialize loop counts for addition
amov #buff+0x80, XAR5 ; Initialize buffer pointer
rptblocal add_loop_end-1
add #0x140<<#16, AC0 ; Use AC0 as a ramp-up counter
mov hi(AC0), *AR5+ ; Save the counter to buffer
add_loop_end
... ... ; Some instructions omitted here
mov #0x100-1, BRC0 ; Init loop counts for sinewave
amov #buff1, XAR5 ; Initialize buffer pointer
mov mmap(@AR0), BSA01 ; Initialize base register
mov #40, BK03 ; Set buffer size to 40
mov #20, AR0 ; Start with an offset of 20
bset AR0LC ; Activate circular buffer
rptblocal sine_loop_end-1
mov *ar0+ << #16, AC0 ; Get sine value into high AC0
sfts AC0, #9 ; Scale the sine value
mov hi(AC0), *AR5+ ; Save scaled value
sine_loop_end
mov #0, T0 ; Return 0 if no overflow
xcc set_ovf_flag, overflow(AC0)
mov #1, T0 ; Return 1 if overflow detected
set_ovf_flag
bclr AR0LC ; Reset circular buffer bit
bclr SATD ; Reset saturation bit
popboth XAR5 ; Restore AR5
ret
The second portion of the code stores the left-shifted sinewave values to data memory
locations. Without saturation protection, this shift will cause some of the shifted values
to overflow.
1. Create the project exp3c that uses exp3c.c and ovf_sat.asm (included in the
software package) for this experiment.
2. Use the graphic function to display the sinewave (top) and the ramp counter
(bottom) as shown in Figure 3.22.
Figure 3.22 C55x data saturation example: (a) without saturation protection, and (b) with
saturation protection enabled
Filters are widely used in DSP applications. The C55x implementation of FIR (or IIR)
filters often uses 16-bit numbers to represent the filter coefficients. Due to the quantization
of the coefficients, a filter implemented on fixed-point hardware will not have exactly the
same response as the filter obtained from a filter design package, such as
MATLAB, which uses floating-point numbers to represent coefficients. Since filter
design will be discussed in Chapters 5 and 6, we only briefly describe the fourth-order
IIR filter used in this experiment.
Table 3.8 shows an assembly language program that implements a fourth-order IIR
lowpass filter. This filter is designed for 8 kHz sampling frequency with cut-off fre-
quency 1800 Hz. The routine, _init_iir4, initializes the memory locations of x and y
buffers to 0. The IIR filter routine, _iir4, filters the input signal. The coefficient data
pointer (CDP) is used to point to the filter coefficients. The auxiliary registers, AR5 and
AR6, are pointing to the x and y data buffers, respectively. After each sample is
processed, both the x and y buffers are updated by shifting the data in the tapped-
delay-line.
1. Write a C function, exp3d.c, to call the routine _iir4() to perform the lowpass filtering
operation. The initialization needs to be done only once, while the routine _iir4()
will be called to filter every sample. These files are also included in the software
package.
2. The filter coefficient quantization effects can be observed by modifying the MASK
value defined in the assembly routine _iir4(); masking a coefficient keeps only its most
significant bits, as sketched after this list. Adjust the MASK to generate 12-, 10-,
and 8-bit quantized coefficients. Interface the C function exp3d.c with a signal
source and use either the simulator (or EVM) to observe the quantization effects due
to the limited wordlength representing the filter coefficients.
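The masking idea can be previewed in plain C before touching the assembly routine. The sketch below ANDs a Q15 coefficient with the same kind of MASK values used in the experiment to keep only its 12, 10, or 8 most significant bits; the coefficient value shown is only an example of ours.

#include <stdio.h>

/* Quantize a Q15 coefficient by masking off its low-order bits,
   as done with the MASK value in the experiment.               */
static short quantize(short coef, unsigned short mask)
{
    return (short)(coef & mask);
}

int main(void)
{
    short c = 0x6AB3;                       /* example Q15 coefficient */
    printf("16-bit: %04X\n", (unsigned short)c);
    printf("12-bit: %04X\n", (unsigned short)quantize(c, 0xFFF0));
    printf("10-bit: %04X\n", (unsigned short)quantize(c, 0xFFC0));
    printf(" 8-bit: %04X\n", (unsigned short)quantize(c, 0xFF00));
    return 0;
}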
Table 3.8 List of C55x assembly program for a fourth-order IIR filter
For many DSP applications, signals and system parameters, such as filter coefficients,
are usually normalized to the range of −1 to 1 using fractional numbers. In Section
3.4, we introduced the fixed-point representation of fractional numbers, and in Section
3.6, we discussed the overflow problems and presented some solutions. The experiment in
this section uses a polynomial approximation of sinusoidal functions as an example to
understand fixed-point arithmetic operations and overflow control.
The cosine and sine functions can be expressed as the infinite power (Taylor) series
expansion
cos(θ) = 1 − θ²/2! + θ⁴/4! − θ⁶/6! + ···,    (3.8.1a)

sin(θ) = θ − θ³/3! + θ⁵/5! − θ⁷/7! + ···,    (3.8.1b)
where θ is in radians and `!' represents the factorial operation. The accuracy of the
approximation depends on how many terms are used in the series. Usually more terms
are needed to provide reasonable accuracy for larger values of θ. However, in real-time
DSP applications, only a limited number of terms can be used. Using a function
approximation approach such as the Chebyshev approximation, cos(θ) and sin(θ) can
be computed as
where the value θ is defined in the first quadrant, that is, 0 ≤ θ < π/2. For θ in the
other quadrants, the following properties can be used to transfer it to the first quadrant:
and
The C55x assembly routine given in Table 3.9 synthesizes the sine and cosine functions,
and can be used to calculate these functions for angles θ from −180° to 180°.
As shown in Figure 3.11, data in the Q15 format is within the range defined in (3.4.4).
Since the absolute value of the largest coefficient given in this experiment is 5.325196, we
cannot use the Q15 format to represent this number. To properly represent the coefficients,
we have to scale the coefficients, or use a different Q format that represents both
fractional numbers and integers. We can achieve this by assuming the binary
point to be three bits further to the right. This is called the Q12 format, which has one
sign bit, three integer bits, and 12 fraction bits, as illustrated in Figure 3.23(a). The
Q12 format covers the range −8 to 8. In the given example, we use the Q12 format
for all the coefficients, and map the angle −π ≤ θ ≤ π to a signed 16-bit number
(0x8000 ≤ x ≤ 0x7FFF), as shown in Figure 3.23(b).
When the sine_cos subroutine is called, a 16-bit mapped angle (the function
argument) is passed to the assembly routine in register T0 following the C calling
convention described in Chapter 2. The quadrant information is tested and stored in
TC1 and TC2. If TC1 (bit 14) is set, the angle is located in either quadrant II or IV. We
use the 2's complement to convert the angle to the first or third quadrant. We mask out
the sign bit to calculate the third-quadrant angle in the first quadrant, and the negation
changes the fourth-quadrant angle to the first quadrant. Therefore the angle to be
Figure 3.23 Scaled fixed-point number representation: (a) Q formats (Q15: s.xxxxxxxxxxxxxxx, Q12: siii.xxxxxxxxxxxx), and (b) mapping of angle values to a 16-bit signed integer (0x0000 = 0°, 0x3FFF = 90°, 0x7FFF = 180°, 0x8000 = −180°)
calculated is always located in the first quadrant. Because we use the Q12 format
coefficients, the computed result needs to be left shifted 3 bits to become the Q15
format.
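As a plain-C illustration of the Q12 format discussed above, the following sketch represents the coefficient 5.325196 in Q12, multiplies it by a Q15 sample, and converts the result back to Q15 (the left shift by 3 bits mentioned in the text). The helper functions are ours and are not part of the sine_cos routine.

#include <stdio.h>

/* Convert between floating point and the Q12 format (1 sign bit,
   3 integer bits, 12 fraction bits), which covers the range -8 to 8. */
static short float_to_q12(double x) { return (short)(x * 4096.0 + (x >= 0.0 ? 0.5 : -0.5)); }
static double q12_to_float(short q) { return (double)q / 4096.0; }

int main(void)
{
    /* The largest coefficient in this experiment, 5.325196, does not fit
       in Q15 but is easily represented in Q12.                           */
    short c_q12 = float_to_q12(5.325196);
    short x_q15 = 0x1000;                  /* 0.125 in Q15                        */
    long  prod  = (long)c_q12 * x_q15;     /* Q12 * Q15 product (Q27)             */
    short y_q12 = (short)(prod >> 15);     /* keep the intermediate result in Q12 */
    short y_q15 = (short)(y_q12 << 3);     /* left shift 3 bits: Q12 -> Q15       */

    printf("5.325196 in Q12  : 0x%04X (%f)\n", (unsigned short)c_q12, q12_to_float(c_q12));
    printf("0.125 * 5.325196 : %f (from Q15 result 0x%04X)\n",
           (double)y_q15 / 32768.0, (unsigned short)y_q15);
    return 0;
}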
2. In the above implementation of sine approximation, what will the following C55x
assembly instructions do? What may happen to the approximation result if we do
not set these control bits?
(a) .bset FRCT
(b) .bset SATD
(c) .bclr FRCT
(d) .bclr SATD
Exercises
Part A
1. Compute the impulse response h(n) for n ≥ 0 of the digital systems defined by the following
I/O difference equations:
(a) y(n) = x(n) + 0.75y(n − 1)
(b) y(n) = 0.5y(n − 1) + 2x(n) + x(n − 1)
(c) y(n) = 2x(n) + 0.75x(n − 1) + 1.5x(n − 2)
2. Construct detailed flow diagrams for the three digital systems defined in Problem 1.
3. Similar to the signal-flow diagram of the FIR filter shown in Figure 3.6, construct a detailed
signal-flow diagram for the IIR filter defined in (3.2.18).
4. A discrete-time system is time invariant (or shift invariant) if its input–output characteristics
do not change with time. Otherwise the system is time varying. A digital system with input
signal x(n) is time invariant if and only if the output signal
for any time shift k, i.e., when an input is delayed (shifted) by k, the output is delayed by the
same amount. Show that the system defined in (3.2.18) is a time-invariant system if the
coefficients am and bl are constant.
5. A linear system is a system that satisfies the superposition principle, which states that the
response of the system to a weighted sum of signals is equal to the corresponding weighted
sum of the responses of the system to each of the individual input signals. That is, a system is
linear if and only if
for any arbitrary input signals x1(n) and x2(n), and for any arbitrary constants a1 and a2. If
the input is the sum (superposition) of two or more scaled sequences, we can find the output
due to each sequence acting alone and then add (superimpose) the separate scaled outputs.
Check whether the following systems are linear or nonlinear:
(a) y(n) = 0.5x(n) + 0.75y(n − 1)
(b) y(n) = x(n)x(n − 1) + 0.5y(n − 1)
(c) y(n) = 0.75x(n) + x(n)y(n − 1)
(d) y(n) = 0.5x(n) + 0.25x²(n)
6. Show that a real-valued sinewave of amplitude A defined in (3.1.6) has the power
Px = 0.5A².
7. Equation (3.3.12) shows that the power is equal to the variance for a zero-mean random
variable. Show that if the mean of the random variable x is mx, the power of x is given by
Px = mx² + σx².
8. f(x) = (λ/2) e^{−λ|x|}.
9. Find the fixed-point 2's complement representation with B = 8 for the decimal numbers
0.152 and 0.738. Round the binary numbers to 6 bits and compute the corresponding
roundoff errors.
10. If the quantization process uses truncation instead of rounding, show that the truncation
error, e(n) = x(n) − xT(n), will be in the interval −Δ < e(n) < 0. Assuming that the truncation
error is uniformly distributed in the interval (−Δ, 0), compute the mean and the variance
of e(n).
11. Identify the various types of finite wordlength effects that can occur in a digital filter defined
by the I/O equation (3.2.18).
(b) Assume the DSP hardware has a 4-bit wordlength (B = 4). Compute y(n) for
n = 0, 1, 2, 3, ..., ∞. In this case, show that y(n) oscillates between ±0.125 for n ≥ 2.
(c) Repeat part (b) but use wordlength B = 5. Show that the output y(n) oscillates between
±0.0625 for n ≥ 3.
Part B
13. Generate and plot (20 samples) the following sinusoidal signals using MATLAB:
(a) A = 1, f = 100 Hz, and fs = 1 kHz
(b) A = 1, f = 400 Hz, and fs = 1 kHz
(c) Discuss the difference of the results between (a) and (b)
(d) A = 1, f = 600 Hz, and fs = 1 kHz
(e) Compare and explain the results of (b) and (d).
14. Generate 1024 samples of pseudo-random numbers with mean 0 and variance 1 using the
MATLAB function rand. Then use MATLAB functions mean, std, and hist to verify the
results.
15. Generate 1024 samples of sinusoidal signal at frequency 1000 Hz, amplitude equal to unity,
and the sampling rate is 8000 Hz. Mix the generated sinewave with the zero-mean pseudo-
random number of variance 0.2.
16. Write a C program to implement the moving-average filter defined in (3.2.2). Test the filter
using the corrupted sinewave generated in Problem 15 as input for different L. Plot both the
input and output waveforms. Discuss the results related to the filter order L.
17. Given the difference equations in Problem 1, calculate and plot the impulse response
h(n), n = 0, 1, ..., 127, using MATLAB.
18. Assuming that P̂x(0) = 1, use MATLAB to estimate the power of the x(n) generated in Problem
15 using the recursive power estimator given in (3.2.11). Plot P̂x(n) for n = 0, 1, ..., 1023.
Part C
19. Using EVM (or other DSP boards) to conduct the quantization experiment in real-time:
(a) Generate an analog input signal, such as a sinewave, using a signal generator. Display both
the input and output channels of the DSP on an oscilloscope. Assume the
ADC has 16-bit resolution and adjust the amplitude of the input signal to the full scale of
the ADC without clipping the waveform. Vary the number of bits (by shifting out or
masking) to 14, 12, 10, etc. to represent the signal and output the signal to the DAC.
Observe the output waveform using the oscilloscope.
(b) Replace the input source with a microphone, radio line output, or CD player, and send
DSP output to a loudspeaker for audio play back. Vary the number of bits (16, 12, 8, 4,
etc.) for the output signal, and listen to the output sound. Depending upon the type of
loudspeaker being used, we may need to use an amplifier to drive the loudspeaker.
20. Implement the following square-root approximation equation in C55x assembly language:
√x ≈ 0.2075806 + 1.454895x − 1.34491x² + 1.106812x³ − 0.536499x⁴ + 0.1121216x⁵.
21. Write a C55x assembly function to implement the following inverse square-root approximation
equation:
1/√x ≈ 1.84293985 − 2.57658958x + 2.11866164x² − 0.67824984x³.
This equation approximates an input variable in the range 0.5 ≤ x ≤ 1. Use this approximation
equation to compute 1/√x in the following table:
Note that 1/√x will result in a number greater than 1.0. Try to use the Q14 data format. That is,
use 0x3FFF for 1 and 0x2000 for 0.5, and scale back to Q15 after the calculation.
4
Frequency Analysis
Any periodic signal, x(t), can be represented as the sum of an infinite number of
harmonically related sinusoids and complex exponentials. The basic mathematical
representation of periodic signal x(t) with period T0 (in seconds) is the Fourier series
defined as
x(t) = Σ_{k=−∞}^{∞} ck e^{jkΩ0t},    (4.1.1)
where ck is the Fourier series coefficient, and Ω0 = 2π/T0 is the fundamental frequency
(in radians per second). The Fourier series describes a periodic signal in terms of an infinite
number of sinusoids. The sinusoidal component of frequency kΩ0 is known as the kth harmonic.
The kth Fourier coefficient, ck, is expressed as

ck = (1/T0) ∫_{T0} x(t) e^{−jkΩ0t} dt.    (4.1.2)
This integral can be evaluated over any interval of length T0. For an odd function, it is
easier to integrate from 0 to T0. For an even function, integration from −T0/2 to T0/2
is commonly used. The term with k = 0 is referred to as the DC component because
c0 = (1/T0) ∫_{T0} x(t) dt equals the average value of x(t) over one period.
Example 4.1: The waveform of the rectangular pulse train shown in Figure 4.1 is a
periodic signal with period T0, and can be expressed as

x(t) = A for kT0 − τ/2 ≤ t ≤ kT0 + τ/2, and x(t) = 0 otherwise.    (4.1.3)
For the rectangular pulse train with a fixed period T0, the effect of decreasing τ is to
spread the signal power over the frequency range. On the other hand, when τ is fixed but
the period T0 increases, the spacing between adjacent spectral lines decreases.
Figure 4.1 Rectangular pulse train x(t) with amplitude A, pulse width τ, and period T0
A periodic signal has infinite energy and finite power, which is defined by Parseval's
theorem as
Px = (1/T0) ∫_{T0} |x(t)|² dt = Σ_{k=−∞}^{∞} |ck|².    (4.1.5)
Since jck j2 represents the power of the kth harmonic component of the signal, the total
power of the periodic signal is simply the sum of the powers of all harmonics.
The complex-valued Fourier coefficients, ck, can be expressed as

ck = |ck| e^{jφk}.    (4.1.6)

A plot of |ck| versus the frequency index k is called the amplitude (magnitude) spectrum,
and a plot of φk versus k is called the phase spectrum. If the periodic signal x(t) is real
valued, it is easy to show that c0 is real valued and that ck and c−k are complex
conjugates. That is,

ck = c*₋ₖ,  |c₋ₖ| = |ck|  and  φ₋ₖ = −φk.    (4.1.7)

Therefore the amplitude spectrum is an even function of the frequency Ω, and the phase
spectrum is an odd function of Ω for a real-valued periodic signal.
If we plot |ck|² as a function of the discrete frequencies kΩ0, we can show that the
power of the periodic signal is distributed among the various frequency components.
This plot is called the power density spectrum of the periodic signal x(t). Since the power
in a periodic signal exists only at discrete values of frequencies kΩ0, the signal has a line
spectrum. The spacing between two consecutive spectral lines is equal to the fundamental
frequency Ω0.
Example 4.2: Consider the output of an ideal oscillator as the perfect sinewave
expressed as

x(t) = sin(2πf0t),  f0 = Ω0/2π.

We can then calculate the Fourier series coefficients using Euler's formula
(Appendix A.3) as

sin(2πf0t) = (1/2j)(e^{j2πf0t} − e^{−j2πf0t}) = Σ_{k=−∞}^{∞} ck e^{jk2πf0t}.

We have

ck = 1/2j for k = 1,  ck = −1/2j for k = −1,  and  ck = 0 otherwise.    (4.1.8)
This equation indicates that there is no power in any of the harmonics k ≠ ±1.
Therefore Fourier series analysis is a useful tool for determining the quality
(purity) of a sinusoidal signal.
We have shown that a periodic signal has a line spectrum and that the spacing between
two consecutive spectral lines is equal to the fundamental frequency Ω0 = 2π/T0. The
number of frequency components increases as T0 is increased, whereas the envelope of
the magnitude of the spectral components remains the same. If we increase the period
without limit (i.e., T0 → ∞), the line spacing tends toward 0. The discrete frequency
components converge into a continuum of frequency components whose magnitudes
have the same shape as the envelope of the discrete spectra. In other words, when the
period T0 approaches infinity, the pulse train shown in Figure 4.1 reduces to a single
pulse, which is no longer periodic. Thus the signal becomes non-periodic and its
spectrum becomes continuous.
In real applications, most signals such as speech signals are not periodic. Consider a
signal that is not periodic (Ω0 → 0 or T0 → ∞); the number of exponential components
in (4.1.1) tends toward infinity and the summation becomes an integration over the entire
continuous range (−∞, ∞). Thus (4.1.1) can be rewritten as

x(t) = (1/2π) ∫_{−∞}^{∞} X(Ω) e^{jΩt} dΩ.    (4.1.9)
This integral is called the inverse Fourier transform. Similarly, (4.1.2) can be rewritten
as

X(Ω) = ∫_{−∞}^{∞} x(t) e^{−jΩt} dt,    (4.1.10)
which is called the Fourier transform (FT) of x(t). Note that time functions
are represented using lowercase letters, and the corresponding frequency functions are
denoted using capital letters. A sufficient condition for a function x(t) to possess
a Fourier transform is

∫_{−∞}^{∞} |x(t)| dt < ∞.    (4.1.11)
Example 4.3: Calculate the Fourier transform of the function x(t) = e^{−at}u(t), where
a > 0 and u(t) is the unit step function. From (4.1.10), we have

X(Ω) = ∫_{−∞}^{∞} e^{−at}u(t) e^{−jΩt} dt = ∫_{0}^{∞} e^{−(a+jΩ)t} dt = 1/(a + jΩ).
The Fourier transform X(Ω) is also called the spectrum of the analog signal x(t). The
spectrum X(Ω) is a complex-valued function of the frequency Ω, and can be expressed as

X(Ω) = |X(Ω)| e^{jφ(Ω)},    (4.1.12)

where |X(Ω)| is the magnitude spectrum of x(t), and φ(Ω) is the phase spectrum of x(t).
In the frequency domain, |X(Ω)|² reveals the distribution of energy with respect to
frequency and is called the energy density spectrum of the signal. When x(t) is any finite-energy
signal, its energy is
Ex = ∫_{−∞}^{∞} |x(t)|² dt = (1/2π) ∫_{−∞}^{∞} |X(Ω)|² dΩ.    (4.1.13)
This is called Parseval's theorem for finite energy signals, which expresses the principle
of conservation of energy in time and frequency domains.
For a function x(t) defined over a finite interval T0, i.e., x(t) = 0 for |t| > T0/2, the
Fourier series coefficients ck can be expressed in terms of X(Ω) using (4.1.2) and (4.1.10) as

ck = (1/T0) X(kΩ0).    (4.1.14)

For a given finite-interval function, its Fourier transform at a set of equally spaced
points on the Ω-axis is specified exactly by the Fourier series coefficients. The distance
between adjacent points on the Ω-axis is 2π/T0 radians.
If x(t) is a real-valued signal, we can show from (4.1.9) and (4.1.10) that
It follows that
Therefore the amplitude spectrum |X(Ω)| is an even function of Ω, and the phase
spectrum is an odd function.
If the time signal x(t) is a delta function δ(t), its Fourier transform can be calculated as

X(Ω) = ∫_{−∞}^{∞} δ(t) e^{−jΩt} dt = 1.    (4.1.17)
This indicates that the delta function has frequency components at all frequencies. In
fact, the narrower the time waveform, the greater the range of frequencies where the
signal has significant frequency components.
Some useful functions and their Fourier transforms are summarized in Table 4.1. We
may find the Fourier transforms of other functions using the Fourier transform proper-
ties listed in Table 4.2.
Table 4.1 Some useful functions and their Fourier transforms

x(t)                              X(Ω)
δ(t)                              1
δ(t − t0)                         e^{−jΩt0}
1                                 2πδ(Ω)
e^{−at}u(t), a > 0                1/(a + jΩ)
e^{jΩ0t}                          2πδ(Ω − Ω0)
sin(Ω0t)                          jπ[δ(Ω + Ω0) − δ(Ω − Ω0)]
cos(Ω0t)                          π[δ(Ω − Ω0) + δ(Ω + Ω0)]
sgn(t) = 1 for t ≥ 0,             2/(jΩ)
         −1 for t < 0
y(t) = e^{−a|t|},  a > 0.

y(t) = x(t) + x(−t),

where

x(t) = e^{−at}u(t),  a > 0.

From Table 4.1, we have X(Ω) = 1/(a + jΩ). From Table 4.2, we have
Y(Ω) = X(Ω) + X(−Ω). This results in

Y(Ω) = 1/(a + jΩ) + 1/(a − jΩ) = 2a/(a² + Ω²).
Continuous-time signals and systems are commonly analyzed using the Fourier trans-
form and the Laplace transform (will be introduced in Chapter 6). For discrete-time
systems, the transform corresponding to the Laplace transform is the z-transform. The
z-transform yields a frequency-domain description of discrete-time signals and systems,
and provides a powerful tool in the design and implementation of digital filters. In this
section, we will introduce the z-transform, discuss some important properties, and show
its importance in the analysis of linear time-invariant (LTI) systems.
The z-transform (ZT) of a digital signal, x(n), −∞ < n < ∞, is defined as the power
series

X(z) = Σ_{n=−∞}^{∞} x(n) z^{−n},    (4.2.1)
where X(z) represents the z-transform of x(n). The variable z is a complex variable, and
can be expressed in polar form as

z = re^{jθ},    (4.2.2)
converges. The region on the complex z-plane in which the power series converges is
called the region of convergence (ROC).
As discussed in Section 3.1, the signal x(n) encountered in most practical applications
is causal. For this type of signal, the two-sided z-transform defined in (4.2.1) becomes a
one-sided z-transform expressed as

X(z) = Σ_{n=0}^{∞} x(n) z^{−n}.    (4.2.3)
Clearly, if x(n) is causal, the one-sided and two-sided z-transforms are equivalent.

x(n) = a^n u(n).

X(z) = Σ_{n=−∞}^{∞} a^n z^{−n} u(n) = Σ_{n=0}^{∞} (az^{−1})^n.

X(z) = 1/(1 − az^{−1})  if |az^{−1}| < 1.

X(z) = z/(z − a),  |z| > |a|.

There is a zero at the origin z = 0 and a pole at z = a. The ROC and the pole–zero
plot are illustrated in Figure 4.2 for 0 < a < 1, where `×' marks the position of the
pole and `o' denotes the position of the zero. The ROC is the region outside
the circle of radius a. Therefore the ROC is always bounded by a circle since the
convergence condition is on the magnitude |z|. A causal signal is characterized by
an ROC that is outside the maximum pole circle and does not contain any pole.
The properties of the z-transform are extremely useful for the analysis of discrete-time
LTI systems. These properties are summarized as follows:
Figure 4.2 Pole, zero, and ROC (shaded area) on the z-plane
1. Linearity. The z-transform is a linear operation:

   ZT[a1x1(n) + a2x2(n)] = a1X1(z) + a2X2(z),

   where a1 and a2 are constants, and X1(z) and X2(z) are the z-transforms of the
   signals x1(n) and x2(n), respectively. This linearity property can be generalized for
   an arbitrary number of signals.

2. Time shifting. The z-transform of the shifted (delayed) signal y(n) = x(n − k) is

   Y(z) = ZT[x(n − k)] = z^{−k}X(z),

   where the minus sign corresponds to a delay of k samples. This delay property states
   that the effect of delaying a signal by k samples is equivalent to multiplying its
   z-transform by a factor of z^{−k}. For example, ZT[x(n − 1)] = z^{−1}X(z). Thus the unit
   delay z^{−1} in the z-domain corresponds to a time shift of one sampling period in the
   time domain.
Some of the commonly used signals and their z-transforms are summarized in
Table 4.3.
where C denotes a closed contour in the ROC of X(z) taken in a counterclockwise
direction. Several methods are available for finding the inverse z-transform. We will
discuss the three most commonly used methods – long division, partial-fraction expansion,
and the residue method.
Given the z-transform X(z) of a causal sequence, it can be expanded into an infinite
series in z^{−1} or z by long division. To use the long-division method, we express X(z) as
the ratio of two polynomials such as

X(z) = B(z)/A(z) = (Σ_{l=0}^{L−1} bl z^{−l}) / (Σ_{m=0}^{M} am z^{−m}),    (4.2.9)
where A(z) and B(z) are expressed in either descending powers of z or ascending powers
of z^{−1}. Dividing B(z) by A(z) yields a series of negative powers of z if a positive-time
sequence is indicated by the ROC. If a negative-time function is indicated, we express
X(z) as a series of positive powers of z. The method will not work for a sequence defined
where

X(z) = (1 + 2z^{−1} + z^{−2}) / (1 − z^{−1} + 0.3561z^{−2}).

x(0) = b0/a0 = 1,
x(1) = [b1 − x(0)a1]/a0 = 3,
x(2) = [b2 − x(1)a1 − x(0)a2]/a0 = 3.6439,
...

This yields the time-domain signal x(n) = {1, 3, 3.6439, ...} obtained from long
division.
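The long-division recursion used above is easy to automate. The following C sketch expands X(z) = B(z)/A(z) into its power series using the same recursion, with the numerator and denominator coefficients of the example; extending the loop produces as many samples of x(n) as desired.

#include <stdio.h>

#define NB 3
#define NA 3
#define N  8

/* Expand X(z) = B(z)/A(z) into a power series in z^(-1) by long division,
   i.e., x(n) = [ b(n) - sum_{k=1..n} a(k) x(n-k) ] / a(0).                */
int main(void)
{
    double b[NB] = {1.0, 2.0, 1.0};          /* numerator of the example   */
    double a[NA] = {1.0, -1.0, 0.3561};      /* denominator of the example */
    double x[N];
    int n, k;

    for (n = 0; n < N; n++) {
        double acc = (n < NB) ? b[n] : 0.0;
        for (k = 1; k <= n && k < NA; k++)
            acc -= a[k] * x[n - k];
        x[n] = acc / a[0];
        printf("x(%d) = %.4f\n", n, x[n]);   /* prints 1, 3, 3.6439, ...   */
    }
    return 0;
}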
X(z) = c0 + Σ_{l=1}^{L−1} cl/(1 − pl z^{−1}) = c0 + Σ_{l=1}^{L−1} cl z/(z − pl),    (4.2.12)

where the pl are the distinct poles of X(z) and the cl are the partial-fraction coefficients. The
coefficient cl associated with the pole pl may be obtained with

cl = [(z − pl) X(z)/z]_{z=pl}.    (4.2.13)
If the order of the numerator B(z) is less than that of the denominator A(z) in (4.2.9),
that is, L − 1 < M, then c0 = 0. If L − 1 ≥ M, then X(z) must first be reduced by long
division, with the numerator and denominator polynomials written in descending powers
of z^{−1}, in order to make L − 1 < M.
X(z) = z^{−1} / (1 − 0.25z^{−1} − 0.375z^{−2}).

X(z) = z/(z² − 0.25z − 0.375) = z/[(z − 0.75)(z + 0.5)] = c1 z/(z − 0.75) + c2 z/(z + 0.5).

c1 = [1/(z + 0.5)]_{z=0.75} = 0.8

and

c2 = [1/(z − 0.75)]_{z=−0.5} = −0.8.

Thus we have

X(z) = 0.8z/(z − 0.75) − 0.8z/(z + 0.5).

The overall inverse z-transform x(n) is the sum of the two inverse z-transforms.
From entry 3 of Table 4.3, we obtain
The MATLAB function residuez finds the residues, poles, and direct terms of the
partial-fraction expansion of B(z)/A(z) given in (4.2.9). Assuming that the numerator
and denominator polynomials are in ascending powers of z^{−1}, the function

[c, p, g] = residuez(b, a);

finds the partial-fraction expansion coefficients cl and the poles pl in the returned
vectors c and p, respectively. The vector g contains the direct (or polynomial) terms of
the rational function in z^{−1} if L − 1 ≥ M. The vectors b and a contain the coefficients
of the polynomials B(z) and A(z), respectively.
If X(z) contains one or more multiple-order poles, the partial-fraction expansion must
include extra terms of the form Σ_{j=1}^{m} gj/(z − pl)^j for an mth-order pole at z = pl. The
coefficients gj may be obtained with

gj = [1/(m − j)!] {d^{m−j}/dz^{m−j} [(z − pl)^m X(z)/z]}_{z=pl}.    (4.2.14)
X(z) = (z² + z)/(z − 1)².

X(z)/z = g1/(z − 1) + g2/(z − 1)².

g1 = {d/dz [(z − 1)² X(z)/z]}_{z=1} = {d/dz (z + 1)}_{z=1} = 1,

g2 = [(z − 1)² X(z)/z]_{z=1} = (z + 1)|_{z=1} = 2.

Thus

X(z) = z/(z − 1) + 2z/(z − 1)².
Thus the inversion integral in (4.2.8) can be easily evaluated using Cauchy's residue
theorem, expressed as

x(n) = (1/2πj) ∮_C X(z)z^{n−1} dz = Σ [residues of X(z)z^{n−1} at the poles of X(z)z^{n−1} within C].    (4.2.16)
where m is the order of the pole at z = pl. For a simple pole, Equation (4.2.17) reduces to

R_{z=pl} = [(z − pl) X(z)z^{n−1}]_{z=pl}.    (4.2.18)
X(z) = 1/[(z − 1)(z − 0.5)],

we have

X(z)z^{n−1} = z^{n−1}/[(z − 1)(z − 0.5)].

For n = 0, the function X(z)z^{n−1} has an additional pole at the origin:

X(z)z^{n−1} = 1/[z(z − 1)(z − 0.5)].
We have discussed three methods for obtaining the inverse z-transform. A limitation
of the long-division method is that it does not lead to a closed-form solution. However,
it is simple and lends itself to software implementation. Because of its recursive nature,
care should be taken to minimize possible accumulation of numerical errors when the
number of data points in the inverse z-transform is large. Both the partial-fraction-
expansion and the residue methods lead to closed-form solutions. The main disadvan-
tage with both methods is the need to factor the denominator polynomial, which is done
by finding the poles of X(z). If the order of X(z) is high, finding the poles of X(z) may be
a difficult task. Both methods may also involve high-order differentiation if X(z)
contains multiple-order poles. The partial-fraction-expansion method is useful in gen-
erating the coefficients of parallel structures for digital filters. Another application of z-
transforms and inverse z-transforms is to solve linear difference equations with constant
coefficients.
Consider the discrete-time LTI system illustrated in Figure 3.8. The system output is
computed by the convolution sum defined as y(n) = x(n) * h(n). Using the convolution
property and letting ZT[x(n)] = X(z) and ZT[y(n)] = Y(z), we have

Y(z) = X(z)H(z),    (4.3.1)

where H(z) = ZT[h(n)] is the z-transform of the impulse response of the system. The
frequency-domain representation of an LTI system is illustrated in Figure 4.3.
The transfer (system) function H(z) of an LTI system may be expressed in terms of the
system's input and output. From (4.3.1), we have

H(z) = Y(z)/X(z) = ZT[h(n)] = Σ_{n=−∞}^{∞} h(n)z^{−n}.    (4.3.2)
Therefore the transfer function of the LTI system is the ratio of the two
polynomials Y(z) and X(z). If the input x(n) is the unit impulse δ(n), the z-transform
of such an input is unity (i.e., X(z) = 1), and the corresponding output is Y(z) = H(z).
Figure 4.3 A block diagram of an LTI system in both the time domain and the z-domain

One of the main applications of the z-transform in filter design is that the z-transform
can be used to create alternative filters that have exactly the same input–output
behavior. An important example is the cascade or parallel connection of two or more
systems, as illustrated in Figure 4.4. In the cascade (series) interconnection, the output
of the first system, y1
n, is the input of the second system, and the output of the second
system, y(n), is the overall system output. From Figure 4.4(a), we have
Thus
Therefore the overall transfer function of the cascade of the two systems is

H(z) = Y(z)/X(z) = H1(z)H2(z).    (4.3.3)
Since multiplication is commutative, H1(z)H2(z) = H2(z)H1(z), the two systems can be
cascaded in either order to obtain the same overall system response. The overall impulse
response of the system is h(n) = h1(n) * h2(n).
Similarly, the overall impulse response and the transfer function of the parallel
connection of the two LTI systems shown in Figure 4.4(b) are given by
h(n) = h1(n) + h2(n) and H(z) = H1(z) + H2(z), respectively.
Figure 4.4 Interconnect of digital systems: (a) cascade form, and (b) parallel form
Example 4.10: The following LTI system has the transfer function

H(z) = 1 + 2z^{−1} − z^{−3}.

H(z) = (1 + z^{−1})(1 + z^{−1} − z^{−2}) = H1(z)H2(z).

Thus the overall system H(z) can be realized as the cascade of the first-order
system H1(z) = 1 + z^{−1} and the second-order system H2(z) = 1 + z^{−1} − z^{−2}.
The general I/O difference equation of an FIR filter is given in (3.1.16). Taking the
z-transform of both sides, we have

H(z) = Y(z)/X(z) = b0 + b1z^{−1} + ··· + b_{L−1}z^{−(L−1)} = Σ_{l=0}^{L−1} bl z^{−l}.    (4.3.8)
The signal-flow diagram of the FIR filter is shown in Figure 3.6. FIR filters can be
implemented using the I/O difference equation given in (3.1.16), the transfer function
defined in (4.3.8), or the signal-flow diagram illustrated in Figure 3.6.
Similarly, taking the z-transform of both sides of the IIR filter defined in (3.2.18)
yields
By rearranging the terms, we can derive the transfer function of an IIR filter as

H(z) = Y(z)/X(z) = (Σ_{l=0}^{L−1} bl z^{−l}) / (1 + Σ_{m=1}^{M} am z^{−m}) = B(z)/A(z),    (4.3.10)

where B(z) = Σ_{l=0}^{L−1} bl z^{−l} and A(z) = 1 + Σ_{m=1}^{M} am z^{−m}. Note that if all am = 0, the IIR filter
given in (4.3.10) is equivalent to the FIR filter described in (4.3.8).
The block diagram of the IIR filter defined in (4.3.10) is illustrated in Figure
4.5, where A(z) and B(z) are FIR filters of the form shown in Figure 3.6. The numerator
coefficients bl and the denominator coefficients am are referred to as the feedforward
and feedback coefficients of the IIR filter defined in (4.3.10), respectively. A more detailed
signal-flow diagram of an IIR filter is illustrated in Figure 4.6, assuming that M = L − 1. IIR
filters can be implemented using the I/O difference equation expressed in (3.2.18), the
transfer function given in (4.3.10), or the signal-flow diagram shown in Figure 4.6.
Factoring the numerator and denominator polynomials of H(z), Equation (4.3.10) can
be further expressed as the rational function
Figure 4.5 IIR filter H(z) consists of two FIR filters A(z) and B(z)

Figure 4.6 Detailed signal-flow diagram of an IIR filter
H(z) = (b0/a0) z^{M−L+1} [Π_{l=1}^{L−1} (z − zl)] / [Π_{m=1}^{M} (z − pm)],    (4.3.11)

H(z) = b0 [Π_{l=1}^{M} (z − zl)] / [Π_{m=1}^{M} (z − pm)] = b0 (z − z1)(z − z2)···(z − zM) / [(z − p1)(z − p2)···(z − pM)].    (4.3.12)
The roots of the numerator polynomial are called the zeros of the transfer function H(z).
In other words, the zeros of H(z) are the values of z for which H(z) = 0, i.e., B(z) = 0.
Thus H(z) given in (4.3.12) has M zeros at z = z1, z2, ..., zM. The roots of the denominator
polynomial are called the poles, and there are M poles at z = p1, p2, ..., pM. The
poles of H(z) are the values of z for which H(z) = ∞. The LTI system described in
(4.3.12) is a pole–zero system, while the system described in (4.3.8) is an all-zero system.
The poles and zeros of H(z) may be real or complex, and some poles and zeros may be
identical. When they are complex, they occur in complex-conjugate pairs to ensure that
the coefficients am and bl are real.
Example 4.11: Consider the simple moving-average filter given in (3.2.1). Taking
the z-transform of both sides, we have

Y(z) = (1/L) Σ_{l=0}^{L−1} z^{−l} X(z).

Using the geometric series defined in Appendix A.2, the transfer function of the
filter can be expressed as

H(z) = Y(z)/X(z) = (1/L) Σ_{l=0}^{L−1} z^{−l} = (1/L) (1 − z^{−L})/(1 − z^{−1}).    (4.3.13)
Y(z) = z^{−1}Y(z) + (1/L)[X(z) − z^{−L}X(z)].

Taking the inverse z-transform of both sides and rearranging the terms, we obtain

y(n) = y(n − 1) + (1/L)[x(n) − x(n − L)].

This is an effective way of deriving (3.2.2) from (3.2.1).
zk = e^{j(2π/L)k},  k = 0, 1, ..., L − 1.    (4.3.14)

Therefore there are L zeros on the unit circle |z| = 1. Similarly, the poles of H(z) are
determined by the roots of the denominator z^{L−1}(z − 1). Thus there are L − 1 poles at
the origin z = 0 and one pole at z = 1. A pole–zero diagram of H(z) given in (4.3.13) for
L = 8 on the complex plane is illustrated in Figure 4.7. The pole–zero diagram provides
an insight into the properties of a given LTI system.
Describing the z-transform H(z) in terms of its poles and zeros will require finding the
roots of the denominator and numerator polynomials. For higher-order polynomials,
finding the roots is a difficult task. To find poles and zeros of a rational function H(z),
we can use the MATLAB function roots on both the numerator and denominator
polynomials. Another useful MATLAB function for analyzing the transfer function is
zplane(b, a), which displays the pole–zero diagram of H(z).
Example 4.12: Consider the IIR filter with the transfer function

H(z) = 1/(1 − z^{−1} + 0.9z^{−2}).

We can plot the pole–zero diagram using the following MATLAB script:

b = [1]; a = [1, -1, 0.9];
zplane(b, a);

Similarly, we can plot Figure 4.7 using the following MATLAB script:

b = [1, 0, 0, 0, 0, 0, 0, 0, -1]; a = [1, -1];
zplane(b, a);
Figure 4.7 Pole–zero diagram of the moving-average filter H(z) given in (4.3.13) for L = 8

As shown in Figure 4.7, the system has a single pole at z = 1, which is at the same
location as one of the eight zeros. This pole is canceled by the zero at z = 1. In this case,
the pole–zero cancellation occurs in the system transfer function itself. Since the system
output Y(z) = X(z)H(z), the pole–zero cancellation may occur in the product of the system
transfer function H(z) with the z-transform of the input signal X(z). By proper selection
of the zeros of the system transfer function, it is possible to suppress one or more poles of
of the zeros of the system transfer function, it is possible to suppress one or more poles of
the input signal from the output of the system, or vice versa. When the zero is located
very close to the pole but not exactly at the same location to cancel the pole, the system
response has a very small amplitude.
The portion of the output y(n) that is due to the poles of X(z) is called the forced
response of the system. The portion of the output that is due to the poles of H(z) is
called the natural response. If a system has all its poles within the unit circle, then its
natural response dies down as n → ∞, and this is referred to as the transient response. If
the input to such a system is a periodic signal, then the corresponding forced response is
called the steady-state response.
Consider the recursive power estimator given in (3.2.11) as an LTI system H(z) with
input w(n) = x²(n) and output y(n) = P̂x(n). As illustrated in Figure 4.8, Equation
(3.2.11) can be rewritten as

y(n) = (1 − α)y(n − 1) + αw(n).

Taking the z-transform of both sides, we obtain the transfer function that describes this
efficient power estimator as

H(z) = Y(z)/W(z) = α/[1 − (1 − α)z^{−1}].    (4.3.15)

This is a simple first-order IIR filter with a zero at the origin and a pole at z = 1 − α. A
pole–zero plot of H(z) given in (4.3.15) is illustrated in Figure 4.9. Note that α = 1/L
results in 1 − α = (L − 1)/L, which is slightly less than 1. When L is large, i.e., for a longer
window, the pole is closer to the unit circle.
Figure 4.9 Pole–zero plot of the first-order power estimator H(z) given in (4.3.15)
An LTI system H(z) is stable if and only if all of its poles are inside the unit circle. That is,

|pm| < 1 for all m.    (4.3.16)

In this case, lim_{n→∞} h(n) = 0. In other words, an LTI system is stable if and only if the
unit circle is inside the ROC of H(z).
h(n) = a^n,  n ≥ 0.

When |a| > 1, i.e., the pole at z = a is outside the unit circle, we have

lim_{n→∞} h(n) → ∞,

that is, an unstable system. However, when |a| < 1, the pole is inside the unit circle, and
we have

lim_{n→∞} h(n) → 0,

which corresponds to a stable system.
The frequency response of a digital system can be readily obtained from its transfer
function. If we set z = e^{jω} in H(z), we have

H(z)|_{z=e^{jω}} = Σ_{n=−∞}^{∞} h(n)z^{−n}|_{z=e^{jω}} = Σ_{n=−∞}^{∞} h(n)e^{−jωn} = H(ω).    (4.3.17)

Thus the frequency response of the system is obtained by evaluating the transfer
function on the unit circle |z| = |e^{jω}| = 1. As summarized in Table 3.1, the digital
frequency ω = 2πf/fs is in the range −π ≤ ω ≤ π.
The characteristics of the system can be described using the frequency response H(ω).
In general, H(ω) is a complex-valued function. It can be expressed in polar form as

H(ω) = |H(ω)| e^{jφ(ω)},    (4.3.18)

where |H(ω)| is the magnitude (or amplitude) response and φ(ω) is the phase shift
(phase response) of the system at frequency ω. The magnitude response |H(ω)| is an
even function of ω, and the phase response φ(ω) is an odd function of ω. We therefore only
need to know these two functions over the frequency region 0 ≤ ω ≤ π. The quantity
|H(ω)|² is referred to as the squared-magnitude response. The value |H(ω0)| of a
given H(ω) is called the system gain at frequency ω0.
y(n) = (1/2)[x(n) + x(n − 1)],  n ≥ 0

is a first-order FIR filter. Taking the z-transform of both sides and rearranging
the terms, we obtain

H(z) = (1/2)(1 + z^{−1}).

H(ω) = (1/2)(1 + e^{−jω}) = (1/2)(1 + cos ω) − j(1/2) sin ω,

|H(ω)|² = {Re[H(ω)]}² + {Im[H(ω)]}² = (1/2)(1 + cos ω),

φ(ω) = tan^{−1}(Im[H(ω)]/Re[H(ω)]) = −tan^{−1}[sin ω/(1 + cos ω)].
H(z) = 1/(1 − z^{−1} + 0.9z^{−2}).    (4.3.19b)

The MATLAB script to analyze the magnitude and phase responses of this IIR
filter is listed (exam4_15.m in the software package) as follows:

b = [1]; a = [1, -1, 0.9];
[H, w] = freqz(b, a, 128);
magH = abs(H); angH = angle(H);
subplot(2, 1, 1), plot(magH), subplot(2, 1, 2), plot(angH);

The MATLAB function abs(H) returns the absolute values of the elements of H,
and angle(H) returns the phase angles in radians.
A simple but useful method of obtaining a rough sketch of the frequency response of an LTI
system is based on the geometric evaluation of its pole–zero diagram. For example,
consider a second-order IIR filter expressed as

H(z) = (b0 + b1z^{−1} + b2z^{−2})/(1 + a1z^{−1} + a2z^{−2}) = (b0z² + b1z + b2)/(z² + a1z + a2).    (4.3.20)

The roots of the quadratic equation

z² + a1z + a2 = 0    (4.3.21)

are the poles of the filter, which may be either real or complex. For complex poles,

p1 = re^{jθ} and p2 = re^{−jθ},    (4.3.22)

where r is the radius of the pole and θ is the angle of the pole. Therefore Equation (4.3.21)
becomes

(z − re^{jθ})(z − re^{−jθ}) = z² − 2r cos θ · z + r² = 0.    (4.3.23)

The filter behaves as a digital resonator for r close to unity. The system with a pair of
complex-conjugate poles as given in (4.3.22) is illustrated in Figure 4.10.
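Given the feedback coefficients a1 and a2 of (4.3.20), the pole radius and angle in (4.3.22) follow directly from (4.3.23), since a2 = r² and a1 = −2r cos θ. The C sketch below performs this calculation for an illustrative pair of coefficients; the numerical values are our own choice.

#include <math.h>
#include <stdio.h>

/* Recover the pole radius r and angle theta of (4.3.22) from the
   denominator z^2 + a1*z + a2 in (4.3.23): a2 = r^2, a1 = -2*r*cos(theta).
   Assumes a complex-conjugate pole pair, i.e., a1*a1 < 4*a2.              */
int main(void)
{
    double a1 = -1.0, a2 = 0.9;          /* example feedback coefficients */
    double r, theta;

    if (a1 * a1 >= 4.0 * a2) {
        printf("poles are real, not a resonator\n");
        return 0;
    }
    r     = sqrt(a2);
    theta = acos(-a1 / (2.0 * r));
    printf("r = %.4f, theta = %.4f rad (%.1f degrees)\n",
           r, theta, theta * 180.0 / acos(-1.0));
    return 0;
}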
Figure 4.10 A pair of complex-conjugate poles, p1 = re^{jθ} and p2 = re^{−jθ}, on the z-plane

Figure 4.11 Geometric evaluation of the magnitude response from the pole–zero diagram
H(z) = b0(z − z1)(z − z2)/[(z − p1)(z − p2)].    (4.3.25)

H(ω) = b0(e^{jω} − z1)(e^{jω} − z2)/[(e^{jω} − p1)(e^{jω} − p2)].    (4.3.26)
|H(ω)| = |b0| (U1U2)/(V1V2),    (4.3.27)
where U1 and U2 represent the distances from the zeros z1 and z2 to the point z = e^{jω},
and V1 and V2 are the distances from the poles p1 and p2 to the same point, as illustrated in
Figure 4.11. The complete magnitude response can be obtained by evaluating |H(ω)| as
the point z = e^{jω} moves from ω = 0 to ω = π on the unit circle. As the point z moves closer to
the pole p1, the length of the vector V1 decreases, and the magnitude response increases.
When the pole p1 is close to the unit circle, V1 becomes very small when z lies on the same
radial line as the pole p1 (ω = θ), and the magnitude response has a peak at this resonant
frequency. The closer r is to unity, the sharper the peak. The digital resonator is an
elementary bandpass filter with its passband centered at the resonant frequency θ. On
the other hand, as the point z moves closer to the zero z1, the zero vector U1 decreases, as
does the magnitude response. The magnitude response thus exhibits a peak at the pole angle,
whereas it falls to a valley at the zero.
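The geometric evaluation of (4.3.27) is easy to carry out numerically. The following C sketch assumes a resonator with r = 0.95, θ = π/4 and zeros at z = ±1 (illustrative choices of ours, not an example from the text) and prints the product of zero distances divided by the product of pole distances over a grid of frequencies; the peak appears near ω = θ, as described above.

#include <math.h>
#include <stdio.h>
#include <complex.h>

/* Geometric evaluation of |H(w)| as in (4.3.27): product of distances
   from the zeros to e^{jw} divided by the product of distances from
   the poles, evaluated on a grid of frequencies (b0 = 1 assumed).     */
int main(void)
{
    double r = 0.95, theta = acos(-1.0) / 4.0;     /* assumed resonator */
    double complex p1 = r * cexp(I * theta);       /* complex pole pair */
    double complex p2 = conj(p1);
    double complex z1 = -1.0, z2 = 1.0;            /* assumed zeros     */
    int k;

    for (k = 0; k <= 8; k++) {
        double w = k * acos(-1.0) / 8.0;           /* 0 <= w <= pi      */
        double complex ejw = cexp(I * w);
        double U1 = cabs(ejw - z1), U2 = cabs(ejw - z2);
        double V1 = cabs(ejw - p1), V2 = cabs(ejw - p2);
        printf("w = %5.3f  |H(w)| = %8.4f\n", w, (U1 * U2) / (V1 * V2));
    }
    return 0;
}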
x(n) = Σ_{k=0}^{N−1} ck e^{jk(2π/N)n},    (4.4.1)

ck = (1/N) Σ_{n=0}^{N−1} x(n) e^{−jk(2π/N)n}.    (4.4.2)

c_{k+iN} = ck,    (4.4.3)
where i is an integer. Thus the spectrum of a periodic signal with period N is a periodic
sequence with the same period N. A single period, with frequency index
k = 0, 1, ..., N − 1, corresponds to the frequency range 0 ≤ f < fs or 0 ≤ F < 1.
Similar to the case of analog aperiodic signals, the frequency analysis of discrete-time
aperiodic signals involves the Fourier transform of the time-domain signal. In previous
sections, we have used the z-transform to obtain the frequency characteristics of discrete
signals and systems. As shown in (4.3.17), the z-transform becomes the Fourier transform
when evaluated on the unit circle z = e^{jω}. Similar to (4.1.10), the Fourier transform
of a discrete-time signal x(n) is defined as

X(ω) = Σ_{n=−∞}^{∞} x(n) e^{−jωn}.    (4.4.4)
This is called the discrete-time Fourier transform (DTFT) of the discrete-time signal
x(n).
It is clear that X(ω) is a complex-valued continuous function of the frequency ω, and
X(ω) is periodic with period 2π. That is,

X(ω + 2π) = X(ω).    (4.4.5)

Thus the frequency range of a discrete-time signal is unique over the range (−π, π) or
(0, 2π). For real-valued x(n), X(ω) is complex-conjugate symmetric. That is,

X(−ω) = X*(ω).    (4.4.6)
Consider an LTI system H(z) with input x(n) and output y(n). From (4.3.1), letting
z = e^{jω}, we can express the output spectrum of the system in terms of its frequency
response and the input spectrum. That is,

Y(ω) = H(ω)X(ω),

where X(ω) and Y(ω) are the DTFTs of the input x(n) and output y(n), respectively.
Similar to (4.3.18), we can express X(ω) and Y(ω) as

X(ω) = |X(ω)| e^{jφx(ω)}

and

Y(ω) = |Y(ω)| e^{jφy(ω)}.

Therefore the output magnitude spectrum |Y(ω)| is the product of the magnitude
response |H(ω)| and the input magnitude spectrum |X(ω)|. The output phase
spectrum φy(ω) is the sum of the system phase response φ(ω) and the input
phase spectrum φx(ω).
For example, if the input signal is a sinusoidal signal at frequency ω0 expressed as
where |H(ω0)| is the system amplitude gain at frequency ω0 and φ(ω0) is the phase shift
of the system at frequency ω0. Therefore it is clear that the sinusoidal steady-state
response has the same frequency as the input, but its amplitude and phase angle are
determined by the system's magnitude response |H(ω)| and phase response φ(ω) at the
given frequency ω0.
As discussed in Section 4.1, let x(t) be an analog signal, and let X( f ) be its Fourier
transform, defined as

X( f ) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt,    (4.4.15)

where f is the frequency in Hz. Sampling x(t) with sampling period T yields the
discrete-time signal x(n). Similar to (4.4.4), the DTFT of x(n) can be expressed as

X(F) = Σ_{n=−∞}^{∞} x(n) e^{−j2πFn}.    (4.4.16)
The periodic sampling imposes a relationship between the independent variables t and
n in the signals x(t) and x(n), namely

t = nT = n/fs.    (4.4.17)
X(F) = (1/T) Σ_{k=−∞}^{∞} X( f − kfs).    (4.4.18)

This equation states that X(F) is the sum of all the repeated copies of X( f ), scaled by 1/T
and frequency shifted to kfs. It also states that X(F) is a periodic function with
period 1/T = fs. This periodicity is necessary because the spectrum X(F) of the discrete-time
signal x(n) is periodic with period F = 1 or f = fs. Assume that a continuous-time
signal x(t) is bandlimited to fM, i.e.,

|X( f )| = 0 for | f | ≥ fM,    (4.4.19)
Figure 4.12 Spectrum replication caused by sampling: (a) spectrum of the analog bandlimited signal x(t), (b) sampling theorem is satisfied, and (c) overlap of spectral components
The effect of sampling is that it extends the spectrum of X( f ) repeatedly on both sides
of the f-axis, as shown in Figure 4.12. When the sampling rate fs is selected to be greater
than 2fM, i.e., if fM < fs/2, the spectrum X( f ) is preserved in X(F ) as shown in Figure
4.12(b). Therefore when fs ≥ 2fM, we have

X(F) = (1/T) X( f ) for |F| ≤ 1/2 or | f | ≤ fN,    (4.4.20)

where fN = fs/2 is called the Nyquist frequency. In this case, there is no aliasing, and the
spectrum of the discrete-time signal is identical (within the scale factor 1/T) to the
spectrum of the analog signal within the fundamental frequency range | f | ≤ fN or
|F| ≤ 1/2. The analog signal x(t) can be recovered from the discrete-time signal x(n)
by passing it through an ideal lowpass filter with bandwidth fM and gain T. This
fundamental result is the sampling theorem defined in (1.2.3). This sampling theorem
states that a bandlimited analog signal x(t) with its highest frequency (bandwidth) being
fM can be uniquely recovered from its digital samples, x(n), provided that the sampling
rate fs ≥ 2fM.
However, if the sampling rate is selected such that fs < 2fM , the shifted replicas of
X( f ) will overlap in X(F), as shown in Figure 4.12(c). This phenomenon is called
aliasing, since the frequency components in the overlapped region are corrupted when
the signal is converted back to the analog form. As discussed in Section 1.1, we used an
analog lowpass filter with cut-off frequency less than fN before the A/D converter in
order to prevent aliasing. The goal of filtering is to remove signal components that may
corrupt desired signal components below fN . Thus the lowpass filter is called the
antialiasing filter.
Consider two sinewaves of frequencies f1 = 1 Hz and f2 = 5 Hz that are sampled at
fs = 4 Hz, rather than at 10 Hz according to the sampling theorem. The analog wave-
forms are illustrated in Figure 4.13(a), while their digital samples and reconstructed
waveforms are illustrated in Figure 4.13(b). As shown in the figures, we can reconstruct
the original waveform from the digital samples for the sinewave of frequency f1 = 1 Hz.
However, for the original sinewave of frequency f2 = 5 Hz, the reconstructed signal
is identical to the sinewave of frequency 1 Hz. Therefore f1 and f2 are said to be aliased to
one another, i.e., they cannot be distinguished by their discrete-time samples.
In general, the aliasing frequency f2 related to f1 for a given sampling frequency fs can
be expressed as

f2 = ifs ± f1, i ≥ 1.    (4.4.21)

For this example with fs = 4 Hz and f1 = 1 Hz, the frequencies that alias to f1 are

f2 = 4i ± 1 = 3, 5, 7, 9, . . . , i = 1, 2, 3, . . .    (4.4.22)
The folding phenomenon can be illustrated as the aliasing diagram shown in Figure
4.14. From the aliasing diagram, it is apparent that when aliasing occurs, frequency
components in x(t) that are higher than fN will fold over into the region 0 ≤ f ≤ fN.
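A minimal C sketch of this folding effect (not taken from the book; the loop length and print format are arbitrary choices) samples the 1 Hz and 5 Hz sinewaves at fs = 4 Hz and prints the two sample sequences, which agree to within rounding:

#include <math.h>
#include <stdio.h>
#define PI 3.1415926536

int main(void)
{
    const double fs = 4.0, f1 = 1.0, f2 = 5.0;
    int n;
    for (n = 0; n < 8; n++)
    {
        double x1 = sin(2.0*PI*f1*n/fs);   /* samples of the 1 Hz sinewave */
        double x2 = sin(2.0*PI*f2*n/fs);   /* samples of the 5 Hz sinewave */
        printf("n=%d  x1=% .4f  x2=% .4f\n", n, x1, x2);
    }
    return 0;
}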
Figure 4.13 Example of the aliasing phenomenon: (a) original analog waveforms and digital
samples for f1 = 1 Hz and f2 = 5 Hz, and (b) digital samples of f1 = 1 Hz and f2 = 5 Hz and
reconstructed waveforms
Figure 4.14 Aliasing diagram for fs = 4 Hz ( fN = 2 Hz), showing that f2 = 3, 5, 7, . . . Hz fold
onto f1 = 1 Hz
The discrete Fourier transform (DFT) of an N-point sequence x(n) is defined as

X(k) = Σ_{n=0}^{N−1} x(n) e^{−j(2π/N)kn},   k = 0, 1, . . . , N − 1,    (4.4.23)

where n is the time index, k is the frequency index, and X(k) is the kth DFT coefficient.
The inverse discrete Fourier transform (IDFT) is defined as

x(n) = (1/N) Σ_{k=0}^{N−1} X(k) e^{j(2π/N)kn},   n = 0, 1, . . . , N − 1.    (4.4.24)
Equation (4.4.23) is called the analysis equation for calculating the spectrum from the
signal, and (4.4.24) is called the synthesis equation used to reconstruct the signal from its
spectrum. This pair of DFT and IDFT equations holds for any discrete-time signal that
is periodic with period N.
When we define the twiddle factor as

W_N = e^{−j2π/N},    (4.4.25)

the DFT in (4.4.23) can be expressed as

X(k) = Σ_{n=0}^{N−1} x(n) W_N^{kn},   k = 0, 1, . . . , N − 1,    (4.4.26)

and the IDFT in (4.4.24) as

x(n) = (1/N) Σ_{k=0}^{N−1} X(k) W_N^{−kn},   n = 0, 1, . . . , N − 1.    (4.4.27)

Note that W_N is the Nth root of unity since e^{−j2π} = 1. Because the W_N^{kn} are N-periodic, the
DFT coefficients are N-periodic. The scalar 1/N that appears in the IDFT in (4.4.24) does
not appear in the DFT. However, if we had chosen to define the DFT with the scalar 1/N,
it would not have appeared in the IDFT. Both forms of these definitions are equivalent.
In this M-file, the special character ' (prime or apostrophe) denotes the transpose
of a matrix. The script exam4_16.m (included in the software package) with the
following statements:

n = [0:127]; N = 128;
xn = 1.5*sin(0.2*pi*n+0.25*pi);
Xk = dft(xn, N);
semilogy(abs(Xk));
axis([0 63 0 120]);

will display the magnitude spectrum of the sinewave x(n) in logarithmic scale, and the
x-axis shows only the range from 0 to π.
The DFT and IDFT play an important role in many DSP applications including linear
filtering, correlation analysis, and spectrum analysis. To compute one of the X(k)
coefficients in (4.4.23), we need N complex multiplications and N − 1 complex add-
itions. To generate N coefficients, we need N² multiplications and N² − N additions.
The DFT can be manipulated to obtain a very efficient algorithm to compute it.
Efficient algorithms for computing the DFT are called the fast Fourier transform
(FFT) algorithms, which require a number of operations proportional to N log2 N
rather than N². The development, implementation, and application of the FFT will be
further discussed in Chapter 7.
MATLAB provides the built-in function fft(x), or fft(x, N), to compute the DFT
of the signal vector x. If the argument N is omitted, then the length of the DFT is the
length of x. When the sequence length is a power of 2, a high-speed radix-2 FFT
algorithm is employed. The MATLAB function fft(x, N) performs an N-point FFT. If
the length of x is less than N, then x is padded with zeros at the end. If the length of x is
greater than N, fft truncates the sequence x and performs the DFT of the first N samples
only. MATLAB also provides ifft(x) to compute the IDFT of the vector x, and
ifft(x, N) to calculate the N-point IDFT.
The function fft(x, N) generates N DFT coefficients X(k) for k = 0, 1, . . . , N − 1.
The Nyquist frequency ( fN = fs/2) corresponds to the frequency index k = N/2. The
frequency resolution of the N-point DFT is

Δf = fs/N.    (4.4.28)
The frequency corresponding to the frequency index k is

fk = kfs/N,   k = 0, 1, . . . , N − 1.    (4.4.29)

Since the magnitude spectrum |X(k)| is an even function of k, we only need to display
the spectrum for 0 ≤ k ≤ N/2 (or 0 ≤ ωk ≤ π).
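The following short C sketch (not from the book; the 8 kHz sampling rate and the step of 16 bins are assumptions made for illustration) applies (4.4.28) and (4.4.29) to map DFT bin indices to their analysis frequencies:

#include <stdio.h>

int main(void)
{
    const double fs = 8000.0;   /* sampling rate in Hz (assumed) */
    const int N = 128;          /* DFT length                    */
    int k;

    printf("frequency resolution = %.2f Hz\n", fs/N);      /* (4.4.28) */
    for (k = 0; k <= N/2; k += 16)      /* only 0 <= k <= N/2 is needed */
        printf("bin %3d -> %7.2f Hz\n", k, k*fs/N);         /* (4.4.29) */
    return 0;
}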
Example 4.17: By considering the sinewave given in Example 3.1, we can generate
the time-domain signal and show the magnitude spectrum of the signal by using the
following MATLAB script (exam4_17.m in the software package):

N = 256;
n = [0:N-1];
omega = 0.25*pi;
xn = 2*sin(omega*n);
Xk = fft(xn, N);             % Perform FFT
absXk = abs(Xk);             % Compute magnitude spectrum
plot(absXk(1:(N/2)));        % Plot from 0 to pi

The phase response can be obtained using the MATLAB function phase =
angle(Xk), which returns the phase angles in radians of the elements of the complex
vector Xk.
4.5 Applications
In this section, we will introduce two examples of using frequency analysis techniques
for designing simple notch filters and analyzing room acoustics.
A notch filter contains one or more deep notches (nulls) in its magnitude response. To
create a null in the frequency response at frequency ω0, we simply introduce a pair of
complex-conjugate zeros on the unit circle at the angle ω0. That is, at

z = e^{±jω0}.    (4.5.1)

The transfer function of this second-order FIR notch filter is therefore

H(z) = (1 − e^{jω0} z^{−1})(1 − e^{−jω0} z^{−1}) = 1 − 2cos(ω0) z^{−1} + z^{−2}.
Figure 4.15 Magnitude response of a notch filter with zeros only, for ω0 = 0.2π
Obviously, the second-order FIR notch filter has a relatively wide bandwidth, which
means that other frequency components around the null are severely attenuated. To
reduce the bandwidth of the null, we may introduce poles into the system. Suppose that
we place a pair of complex-conjugate poles at

z_p = r e^{±jθ0},    (4.5.3)

where r and θ0 are the radius and angle of the poles, respectively. The transfer function for the
resulting filter is

H(z) = [(1 − e^{jω0} z^{−1})(1 − e^{−jω0} z^{−1})] / [(1 − r e^{jθ0} z^{−1})(1 − r e^{−jθ0} z^{−1})].    (4.5.4)
Figure 4.16 Magnitude response of a notch filter with zeros and poles, ω0 = θ0 = 0.2π and
r = 0.85
Figure 4.17 Magnitude response of the notch filter with both zeros and poles, ω0 = θ0 = 0.2π
and different values of r (r = 0.75, 0.85, 0.95)
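As a sketch of how such a notch filter could be realized in C (this is not the book's code; notch_init and notch_run are hypothetical helper names, and the direct-form I structure is one of several possible choices), the zeros of (4.5.1) and the poles of (4.5.3) with θ0 = ω0 give the second-order section below:

#include <math.h>

typedef struct {
    double b0, b1, b2;      /* numerator coefficients                  */
    double a1, a2;          /* denominator coefficients (a0 = 1)       */
    double x1, x2, y1, y2;  /* filter states                           */
} notch_t;

/* hypothetical helper: set up coefficients from the notch frequency w0
   (radians per sample) and the pole radius r (0 < r < 1)              */
static void notch_init(notch_t *f, double w0, double r)
{
    f->b0 = 1.0;  f->b1 = -2.0*cos(w0);  f->b2 = 1.0;
    f->a1 = -2.0*r*cos(w0);              f->a2 = r*r;
    f->x1 = f->x2 = f->y1 = f->y2 = 0.0;
}

/* process one input sample through (4.5.4) in direct form I */
static double notch_run(notch_t *f, double x)
{
    double y = f->b0*x + f->b1*f->x1 + f->b2*f->x2
             - f->a1*f->y1 - f->a2*f->y2;
    f->x2 = f->x1;  f->x1 = x;
    f->y2 = f->y1;  f->y1 = y;
    return y;
}

A larger r gives a narrower notch, as Figure 4.17 suggests, at the cost of a longer transient.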
Figure 4.18 A sound source and a receiver in a room
The room transfer function (RTF) includes the characteristics of the direct sound and all reflected sounds
(reverberations) in the room.
An efficient model is required to represent the RTF with a few parameters for
reducing memory and computation requirements. The first method for modeling an
RTF is an all-zero model as defined in (4.3.8), with the coefficients corresponding to the
impulse response of the RTF in the time domain. The all-zero model can be realized
with an FIR filter. When the reverberation time is 500 ms, the FIR filter needs 4000
coefficients (at 8 kHz sampling rate) to represent the RTF. Furthermore, the RTF varies
due to changes in the source and receiver positions.
The pole-zero model defined in (4.3.10) can also be used to model the RTF. From a
physical point of view, poles represent resonances, and zeros represent time delays and
anti-resonances. Because the poles can represent a long impulse response caused by
resonances with fewer parameters than the zeros, the pole-zero model seems to match a
physical RTF better than the all-zero model. Because the acoustic poles corresponding
to the resonance properties are invariant, the pole-zero model that has constant poles
and variable zeros is cost effective.
It is also possible to use an all-pole modeling of room responses to reduce the
equalizer length. The all-pole model of RTF can be expressed as
H(z) = 1/[1 + A(z)] = 1/[1 + Σ_{m=1}^{M} a_m z^{−m}].    (4.5.5)
Acoustic poles correspond to the resonances of a room and do not change even if the
source and receiver positions change or people move. This technique can be applied to
dereverberation of recorded signals, acoustic echo cancellation, etc. In this section, we
show how the MATLAB functions are used to model and analyze the room acoustics.
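A minimal C sketch of filtering an excitation through the all-pole model (4.5.5) is given below. It is not part of the book's software package; the function name allpole_filter is hypothetical, and the array a[] is assumed to hold the M coefficients a_1, . . . , a_M (for example, the trailing coefficients returned by MATLAB's lpc after the leading 1):

#define M_ORDER 120   /* model order used in the text */

/* y(n) = x(n) - sum_{m=1}^{M} a[m-1]*y(n-m), assuming y(n) = 0 for n < 0 */
void allpole_filter(const double *x, double *y, int len, const double a[M_ORDER])
{
    int n, m;
    for (n = 0; n < len; n++)
    {
        double acc = x[n];
        for (m = 1; m <= M_ORDER && m <= n; m++)
            acc -= a[m-1]*y[n-m];        /* feedback through past outputs */
        y[n] = acc;
    }
}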
To evaluate the room transfer function, impulse responses of a rectangular room
(246 × 143 × 111 cubic inches) were measured using the maximum-length sequence
technique. The original data is sampled at 48 kHz, which is then bandlimited to
100-400 Hz and decimated to 1 kHz. An example of room impulse response is shown
in Figure 4.19, which is generated using the following MATLAB script:
load imp.dat;
plot(imp(1:1000)), title('Room impulse response');
xlabel('Time'), ylabel('Amplitude');
Figure 4.19 An example of the measured room impulse response (amplitude versus time in samples)
Figure 4.20 Magnitude response (in dB) of the room transfer function from 0 to 500 Hz
where the room impulse response samples are stored in the ASCII file imp.dat. Both
the MATLAB script imprtf.m and the data file imp.dat are included in the software
package.
We can easily evaluate the magnitude response of the room transfer function using
the MATLAB script magrtf.m available in the software package. The magnitude
response is shown in Figure 4.20.
MATLAB provides a powerful function a = lpc(x, M) to estimate the coefficients
a_m of an Mth-order all-pole IIR filter. A user-written MATLAB function all_pole.m
that shows the magnitude responses of the measured and modeled RTF is given in the
software package. This MATLAB function can be invoked by using the following
commands:
load imp.dat;
all_pole(imp, 120);
The impulse response of the RTF is modeled by the all-pole model defined in (4.5.5) by
using the MATLAB function a = lpc(imp_x, pole_number), where the pole
number M is selected as 120. In order to evaluate the accuracy of the model, the
MATLAB function freqz(1, a, leng, 1000) is used to compute the frequency
compared with the measured magnitude response of RTF from the measured room
impulse response. It is shown that the all-pole model matches the peaks better than the
valleys. Note that the higher the model order M, the better the model match can be
obtained.
A pole-zero model for the RTF can be estimated by using Prony's method as follows:

[b, a] = prony(imp_x, nb, na);

where b and a are vectors containing the estimated numerator and denominator
coefficients, and nb and na are the orders of the numerator b and denominator a.
where the subscripts r and i denote the real and imaginary parts of the complex variables.
Equation (4.6.1) can be rewritten as

X(k) = Σ_{n=0}^{N−1} { [x_r(n)W_r + x_i(n)W_i] + j[x_i(n)W_r − x_r(n)W_i] }

for k = 0, 1, . . . , N − 1, where

W_r = cos(2πkn/N)   and   W_i = sin(2πkn/N).
The C program listed in Table 4.4 uses two arrays, Xin[2*N] and Xout[2*N], to
represent the complex (real and imaginary) input and output data samples. The
input samples for the experiment are generated using the MATLAB script listed in
Table 4.5.
#include <math.h>
#define N 128                 /* DFT length used in the experiments */
#define PI 3.1415926536

void dft(float Xin[], float Xout[])
{
    int n, k, j;
    float angle;
    float Xr[N], Xi[N];
    float W[2];

    for (k = 0; k < N; k++)
    {
        Xr[k] = 0;
        Xi[k] = 0;
        for (j = 0, n = 0; n < N; n++)
        {
            angle = (2.0*PI*k*n)/N;
            W[0] = cos(angle);
            W[1] = sin(angle);
            Xr[k] = Xr[k] + Xin[j]*W[0] + Xin[j+1]*W[1];
            Xi[k] = Xi[k] + Xin[j+1]*W[0] - Xin[j]*W[1];
            j += 2;
        }
        Xout[2*k]   = Xr[k];   /* store real and imaginary parts in pairs */
        Xout[2*k+1] = Xi[k];
    }
}
This program generates 128 data samples. The data is then represented using the
Q15 format for the experiments. They are stored in the data files input.dat and
input.inc, and are included in the software package. The data file input.dat is
used by the C program exp4a.c for Experiment 4A, and the data file input.inc
is used by the assembly routine exp4b.asm for Experiment 4B.
The sine-cosine generator we implemented for the experiments in Chapter 3 can be used
to generate the twiddle factors for computing the DFT. Recall the assembly function
sine_cos.asm developed in Section 3.8.5. This assembly routine is written as a C-
callable function that follows the C55x C-calling convention. There are two arguments
passed to the function as sine_cos(angle, Wn). The first argument is passed
through the C55x temporary register T0 containing the input angle in radians. The
second argument is passed by the auxiliary register AR0 as a pointer Wn to the memory
locations where the results of sin(angle) and cos(angle) will be stored upon return.
The following C example shows how to use the assembly sine-cosine generator inside
nested loops to generate the twiddle factors:
#define N 128
#define TWOPIN (0x7FFF>>6)    /* 2*pi/N for N = 128, with pi represented by 0x7FFF */

int n, k, angle;
int Wn[2];                    /* Wn[0] = cos(angle), Wn[1] = sin(angle) */

for (k = 0; k < N; k++)
{
    for (n = 0; n < N; n++)
    {
        angle = TWOPIN*k*n;
        sine_cos(angle, Wn);
    }
}
The assembly code that calls the subroutine sine_cos is listed as follows:
In Chapter 2, we introduced how to write nested loops using block-repeat and single-
repeat instructions. Since the inner loop of the C code dft() contains multiple instruc-
tions, we will use the block-repeat instruction (rptb) to implement both the inner
and outer loops. The C55x has two block-repeat counters, registers BRC0 and BRC1.
When implementing nested loops, the repeat counter BRC1 must be used as the inner-
loop counter, while BRC0 should be used as the outer-loop counter. Such an
arrangement allows the C55x to automatically reload the inner-loop repeat counter
BRC1 every time the outer-loop counter is updated. The following is an example of
using BRC0 and BRC1 for nested loops that each repeat N times:

    mov #(N-1), BRC0
    mov #(N-1), BRC1
    rptb outer_loop-1
    (more outer loop instructions...)
    rptb inner_loop-1
    (more inner loop instructions...)
inner_loop
    (more outer loop instructions...)
outer_loop
As defined in Figure 3.23 of Section 3.8.5, the fixed-point representation of the value π for the
sine-cosine generator is 0x7FFF. The angle used to generate the twiddle factors for the
DFT of N = 128 can be expressed using the constant TWOPIN, which represents 2π/N:

N       .set  128
TWOPIN  .set  (0x7FFF>>6)     ; 2*PI/N, N = 128
        .bss  Wn, 2           ; Wn[0] = Wr, Wn[1] = Wi
        .bss  angle, 1        ; Angle for sine-cosine function

        mov   #(N-1), BRC0    ; Repeat counter for outer-loop
        mov   #(N-1), BRC1    ; Repeat counter for inner-loop
        mov   #0, T2          ; k = T2 = 0
For the experiment, the complex data and twiddle factor vectors are arranged
in order of the real and the imaginary pairs. That is, the input array
Xin[2N] = {Xr, Xi, Xr, Xi, . . .} and the twiddle factor Wn[2] = {Wr, Wi}. The computa-
tion of (4.6.3) is implemented in C as follows:

Xr[k] = 0;
Xi[k] = 0;
for (n = 0; n < N; n++)
{
    /* Wr and Wi are the twiddle-factor components for the current k and n */
    Xr[k] = Xr[k] + Xin[2*n]*Wr + Xin[2*n+1]*Wi;
    Xi[k] = Xi[k] + Xin[2*n+1]*Wr - Xin[2*n]*Wi;
}
Because the DFT function accumulates N intermediate results, the possible overflow
during computation should be considered. The instruction masm40 enables the use of
accumulator guard bits that allow the intermediate multiply-accumulate result to be
handled in a 40-bit accumulator. Finally, we can put all the pieces together to complete
the routine, as listed in Table 4.6.
/* Experiment 4A exp4a.c */
#include "input.dat"
#define N 128

extern void dft_128(int *, int *);
extern void mag_128(int *, int *);

int Xin[2*N];
int Xout[2*N];
int Spectrum[N];

void main()
{
    int i, j;

    for (j = 0, i = 0; i < N; i++)
    {
        Xin[j++] = input[i];    /* Get real sample      */
        Xin[j++] = 0;           /* Imaginary sample = 0 */
    }
    dft_128(Xin, Xout);         /* DFT routine          */
    mag_128(Xout, Spectrum);    /* Compute spectrum     */
}
We will complete the DFT routine of N = 128 and test it in this section. The C program
listed in Table 4.7 calls the assembly routine dft_128() to compute the 128-point
DFT.
The data file, input.dat, is an ASCII file that contains 128 points of data sampled
at 8 kHz, and is available in the software package. First, the program composes the
complex input data array Xin[2*N] by zero-filling the imaginary parts. Then the DFT
is carried out by the subroutine dft_128(). The 128 complex DFT samples are stored
in the output data array Xout[2*N]. The subroutine mag_128() at the end of the
program is used to compute the squared magnitude spectrum of the 128 complex DFT
samples from the array Xout[2*N]. The magnitude is then stored in the array called
Spectrum[N], which will be used for graphic display later. The assembly routine,
mag_128.asm, is listed in Table 4.8.
1. Write the C program exp4a.c based on the example (or copy from the software
package) that will complete the following tasks:
(a) Compose the complex input samples in Xin[].
(b) Call the subroutine dft_128() to perform the DFT.
(c) Call the subroutine mag_128() to compute the squared magnitude spectrum of the DFT.
2. Write the assembly routine dft_128.asm for the DFT, and write the assembly
routine mag_128.asm for computing the magnitude spectrum (or copy these files
from the software package).
3. Test and debug the programs. Plot the magnitude spectrum (Spectrum [N]) and
the input samples as shown in Figure 4.21.
4. Profile the DFT routine and record the program and data memory usage. Also,
record the clock cycles used for each subroutine.
Figure 4.21 The plots of the time-domain input signal (top), the input spectrum (middle), which
shows three peaks located at frequencies 0.5 kHz, 1 kHz, and 2 kHz, and the DFT result (bottom)
1. Initialize extended stack pointer XSP and extended system stack pointer XSSP.
2. Turn the sign extension mode on for arithmetic operations.
3. Turn the 40-bit accumulator mode off, and set the default as 32-bit mode.
4. Turn DU and AU saturate mode off.
5. Turn off the fractional mode.
6. Turn off the circular addressing mode.
; vectors.asm
.def rsv
.ref start
.sect "vectors"
rsv .ivec start
where the assembly directive .ivec defines the starting address of the C55x interrupt
vector table.
Interrupts are hardware- or software-driven signals that cause the C55x to suspend
its current program and execute an interrupt service routine (ISR). Once the interrupt
is acknowledged, the C55x executes the branch instruction at the corresponding
interrupt vector table to perform an ISR. There are 32 interrupts, and each interrupt
vector occupies 64 bits (four 16-bit words) in the C55x vector table. The first 32 bits contain the 24-bit
program address of the ISR. The second 32 bits can hold ISR instructions; this 32-bit code
will be executed before branching to the ISR. The label start is the entry point of
our experiment program. At power up (or reset), the C55x program counter will be
pointing to the first entry of the interrupt vector table, which is a branch instruction to the
label start to begin executing the program. Since the vectors are fixed for the C55x,
we need to map the address of interrupt-vector table .ivec to the program memory at
address 0xFFFF00. The linker command file can be used to map the address of the
vector table.
Because we do not use boot.asm, we are responsible for setting up the system before
we can begin to perform our experiment. The stack pointer must be correctly set before
any subroutine calls (or branch to ISR) can be made. Some of the C55x operation states/
modes should also be set accordingly. The following example shows some of the settings
at the beginning of our program:
stk_size  .set   0x100
stack     .usect ".stack", stk_size
sysstack  .usect ".stack", stk_size

          .def   start
          .sect  ".text"
start
          bset   SATD
          bset   SATA
          bset   SXMD
          bclr   C54CM
          bclr   CPL
          amov   #(stack+stk_size), XSP
          mov    #(sysstack+stk_size), SSP
The label start in the code defines the entry point of the program. Since it will also
be used by the vectors.asm, it needs to be defined as a global label. The first three
bit-set instructions (bset) set up the saturation mode for both the DU and AU and
the sign extension mode. The next two bit-clear instructions (bclr) turn off the
C54x compatibility mode and the C compiler mode. The last two move instructions
(amov/mov) initialize the stack pointers. In this example, the stack size is defined as
0x100 long and starts in the section named .stack in the data memory. When
subroutine calls occur, the 24-bit program counter PC(23:0) will split into two portions.
The stack pointer SP is used for storing the lower 16 bits of the program counter,
PC(15:0), and the system stack pointer SSP is used for the upper 8 bits,
PC(23:16).
In this experiment, we wrote an assembly program exp4b.asm listed in Table 4.9 to
replace the main()function in the C program exp4a.c.
2. Use the .usect "indata" directive for the array Xin[256], .usect "outdata" for
Xout[256] and Spectrum[128], and use .sect ".code" for the program section of
the assembly routine exp4b.asm. Create a linker command file exp4b.cmd and
add the above sections. The code section .code in the program memory starts at
address 0x20400 with a length of 4096 bytes. The indata section starts in the data
memory at word address 0x8000, and its length is 256 words. The outdata section
starts in the data memory with starting address of 0x08800 and has a length of 512
words.
3. Test and debug the programs, verify the memory locations for sections .code,
indata, and outdata, and compare the DFT results with experiment results
obtained in Section 4.6.3.
Exercises
Part A
2. Similar to Example 4.2, compute the Fourier series coefficients for the impulse-train signal

   x(t) = Σ_{k=−∞}^{∞} δ(t − kT0).
(d) x(t) = 1.

(a) x(n) = 1, n ≥ 0.
(b) x(n) = e^{−an}, n ≥ 0.
(c) x(n) = a^n for n = 1, 2, . . . , ∞, and x(n) = 0 for n ≤ 0.
(d) x(n) = sin(ωn), n ≥ 0.
5. The z-transform of an N-periodic sequence can be expressed in terms of the z-transform of its
   first period. That is, if

   x1(n) = x(n) for 0 ≤ n ≤ N − 1, and x1(n) = 0 elsewhere,

   denotes the sequence of the first period of the periodic sequence x(n), show that

   X(z) = X1(z)/(1 − z^{−N}),   |z^N| > 1,

   where

   X1(z) = Σ_{n=0}^{N−1} x(n) z^{−n}.

   Apply this result to the periodic sequence x(n) = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1, . . .}.
where a > 0. Find X(z) and plot the poles and zeros of X(z).

(a) X(z) = z/(2z² − 3z + 1),   |z| < 1/2.
(b) X(z) = z/(3z² − 4z + 1).
(c) X(z) = 2z²/[(z − 1)(z − 2)²].
10. Using the residue method, find the inverse z-transform of the following functions:

(a) X(z) = (z² + z)/(z − 1)².
(b) X(z) = (z² + 2z)/(z − 0.6)³.
(c) X(z) = 1/[(z − 0.4)(z − 1)²].
11. The first-order trapezoidal integration rule in numerical analysis is described by the I/O
    difference equation

    y(n) = (1/2)[x(n) + x(n − 1)] + y(n − 1),   n ≥ 0.

(a) H(z) = z(z − 1)/[(z² − z + 1)(z − 0.8)].
14. In Figure 4.4(b), let H1(z) and H2(z) be the transfer functions of the two first-order IIR
    filters defined as

    y(n) = ((1 − a)/2)[x(n) + x(n − 1)] + a y(n − 1),   n ≥ 0,
16. Consider a moving average filter defined in (3.2.1). Find the magnitude response |H(ω)| and
    the phase response φ(ω).
Part B
19. Compute ck given in (4.1.4) for A = 1, T0 = 0.1, and τ = 0.05, 0.01, 0.001, and 0.0001.
    Use the MATLAB function stem to plot ck for k = 0, 1, . . . , 20.
20. Repeat Problem 19 for A = 1, τ = 0.001 and T0 = 0.005, 0.001, and 0.01.
Part C
21. The assembly program, dft_128.asm, can be further optimized. Use parallel instructions
    to improve the DFT performance. Profile the optimized code, and compare the cycle counts
    against the profile data obtained in the experiment given in Section 4.6.3.
22. Find the clock rate of the TMS320C55x device, and use the profile data to calculate the total
    time the DFT routine spends computing the 128-point DFT. Can this DFT routine be used for
    a real-time application? Why?
23. Why does the C55x DSP have two stack pointers, XSP and XSSP, and how are these
pointers initialized? The C55x also uses RETA and CFCT during subroutine calls. What
are these registers? Create and use a simple assembly program example to describe how the
RETA, CFCT, XSP, and XSSP react to a nested subroutine call (hint: use references from
the CCS help menu).
24. Modify the experiments given in Section 4.6.4 to understand how the linker works:
    (a) Move all the programs in the .text section (exp4b.asm, dft_128.asm, and
        mag_128.asm) to a new section named .sect "dft_code", which starts at the
        program memory address 0x020400. Adjust the section length if necessary.
    (b) Put all the data variables (in exp4b.asm, dft_128.asm, and mag_128.asm) under the
        section named .usect "dft_vars", which starts at the data memory address
        0x08000. Again, adjust the section length if necessary.
(c) Build and run the project. Examine memory segments in the map file. How does the
linker handle each program and data section?
5
Design and Implementation
of FIR Filters
A filter is a system that is designed to alter the spectral content of input signals in
a specified manner. Common filtering objectives include improving signal quality,
extracting information from signals, or separating signal components that have been
previously combined. A digital filter is a mathematical algorithm implemented in hard-
ware, firmware, and/or software that operates on a digital input signal to produce a
digital output signal for achieving filtering objectives. A digital filter can be classified
as being linear or nonlinear, time invariant or varying. This chapter is focused on the
design and implementation of linear, time-invariant (LTI) finite impulse response
(FIR) filters. The time-invariant infinite impulse response (IIR) filters will be discussed
in Chapter 6, and the time-varying adaptive filters are introduced in Chapter 8.
The process of deriving the digital filter transfer function H(z) that satisfies the
given set of specifications is called digital filter design. Although many applications
require only simple filters, the design of more complicated filters requires the use of
sophisticated techniques. A number of computer-aided design tools (such as MATLAB)
are available for designing digital filters. Even though such tools are widely available,
we should understand the basic characteristics of digital filters and be familiar with the
techniques used for implementing digital filters. Many DSP books devote substantial
efforts to the theory of designing digital filters, especially approximation methods,
reflecting the considerable work that has been done for calculating and optimizing filter
coefficients.
A digital filter is said to be linear if the output due to the application of the input

x(n) = a1 x1(n) + a2 x2(n)

is equal to

y(n) = a1 y1(n) + a2 y2(n),

where a1 and a2 are arbitrary constants, and y1(n) and y2(n) are the filter outputs due to
the application of the inputs x1(n) and x2(n), respectively. The important property of
linearity is that in the computation of y(n) due to x(n), we may decompose x(n) into a
summation of simpler components xi(n). We then compute the response yi(n) due to each
input xi(n). The summation of the yi(n) will be equal to the output y(n). This property is
also called superposition.
A time-invariant system is a system that remains unchanged over time. A digital filter
is time-invariant if the output due to the application of the delayed input x(n − m) is equal
to the delayed output y(n − m), where m is a positive integer. It means that if the input
signal is the same, the output signal will always be the same no matter what instant the
input signal is applied. It also implies that the characteristics of a time-invariant filter
will not change over time.
A digital filter is causal if the output of the filter at time n0 does not depend on the
input applied after n0 . It depends only on the input applied at and before n0 . On the
contrary, the output of a non-causal filter depends not only on the past input, but also
on the future input. This implies that a non-causal filter is able to predict the input that
will be applied in the future. This is impossible for any real physical filter.
Linear, time-invariant filters are characterized by magnitude response, phase
response, stability, rise time, settling time, and overshoot. Magnitude response specifies
the gains (amplify, pass, or attenuate) of the filter at certain frequencies, while phase
response indicates the amount of phase changed by the filter at different frequencies.
Magnitude and phase responses determine the steady-state response of the filter. For an
instantaneous change in input, the rise time specifies an output-changing rate. The
settling time describes an amount of time for the output to settle down to a stable
value, and the overshoot shows if the output exceeds the desired output value. The rise
time, the settling time, and the overshoot specify the transient response of the filter in
the time domain.
A digital filter is stable if, for every bounded input signal, the filter output is bounded.
A signal x(n) is bounded if its magnitude |x(n)| does not go to infinity. A digital filter
with the impulse response h(n) is stable if and only if

Σ_{n=0}^{∞} |h(n)| < ∞.    (5.1.3)
Since an FIR filter has only a finite number of non-zero h(n), the FIR filter is always
stable. Stability is critical in DSP implementations because it guarantees that the filter
output will never grow beyond bounds, thus avoiding numerical overflow in computing
the convolution sums.
As mentioned earlier, filtering is a process that passes certain frequency components
in a signal through the system and attenuates other frequency components. The range of
frequencies that is allowed to pass through the filter is called the passband, and the
range of frequencies that is attenuated by the filter is called the stopband. If a filter is
defined in terms of its magnitude response, there are four different types of filters:
lowpass, highpass, bandpass, and bandstop filters. Each ideal filter is characterized by a
passband over which frequencies are passed unchanged (except with a delay) and a
stopband over which frequencies are rejected completely. The two-level shape of the
magnitude response gives these filters the name brickwall. Ideal filters help in analyzing
and visualizing the processing of actual filters employed in signal processing. Achieving
an ideal brickwall characteristic is not feasible, but ideal filters are useful for concep-
tualizing the impact of filters on signals.
As discussed in Chapter 3, there are two basic types of digital filters: FIR filters and
IIR filters. An FIR filter of length L can be represented with its impulse response h(n)
that has only L non-zero samples. That is, h(n) = 0 for all n ≥ L. An FIR filter is also
n 0 for all n L. An FIR filter is also
called a transversal filter. Some advantages and disadvantages of FIR filters are sum-
marized as follows:
1. Because there is no feedback of past outputs as defined in (3.1.16), the FIR filters
are always stable. That is, a bounded input results in a bounded output. This
inherent stability is also manifested in the absence of poles in the transfer function
as defined in (4.3.8), except possibly at the origin.
2. The filter has finite memory because it 'forgets' all inputs before the (L − 1)th
   previous one.
3. The design of linear phase filters can be guaranteed. In applications such as audio
signal processing and data transmission, linear phase filters are preferred since they
avoid phase distortion.
4. The finite-precision errors (discussed in Chapter 3) are less severe in FIR filters than
in IIR filters.
5. FIR filters can be easily implemented on most DSP processors such as the
TMS320C55x introduced in Chapter 2.
6. A relatively higher order FIR filter is required to obtain the same characteristics as
compared with an IIR filter. Thus more computations are required, and/or longer
time delay may be involved in the case of FIR filters.
The range of frequencies that is passed without attenuation is the passband of the filter, and the range of frequencies
that is attenuated is the stopband. Thus the magnitude response of an ideal filter is given
by |H(ω)| = 1 in the passband and |H(ω)| = 0 in the stopband. Note that the frequency
response H(ω) of a digital filter is a periodic function of ω, and the magnitude response
|H(ω)| of a digital filter with real coefficients is an even function of ω. Therefore the
digital filter specifications are given only for the range 0 ≤ ω ≤ π.
The magnitude response of an ideal lowpass filter is illustrated in Figure 5.1(a). The
regions 0 ≤ ω ≤ ωc and ω > ωc are referred to as the passband and stopband, respec-
tively. The frequency that separates the passband and stopband is called the cut-off
frequency ωc. An ideal lowpass filter has magnitude response |H(ω)| = 1 in the fre-
quency range 0 ≤ ω ≤ ωc and has |H(ω)| = 0 for ω > ωc. Thus a lowpass filter passes all
low-frequency components below the cut-off frequency and attenuates all high-fre-
quency components above ωc. Lowpass filters are generally used when the signal
components of interest are in the range of DC to the cut-off frequency, but other higher
frequency components (or noise) are present.
The magnitude response of an ideal highpass filter is illustrated in Figure 5.1(b). The
regions ω ≥ ωc and 0 ≤ ω < ωc are referred to as the passband and stopband, respec-
tively. A highpass filter passes all high-frequency components above the cut-off fre-
quency ωc and attenuates all low-frequency components below ωc. As discussed in
Chapter 1, highpass filters can be used to eliminate DC offset, 60 Hz hum, and other
low frequency noises.
The magnitude response of an ideal bandpass filter is illustrated in Figure 5.1(c). The
regions ω < ωa and ω > ωb are referred to as the stopband, and the region ωa ≤ ω ≤ ωb
is the passband. The frequencies ωa and ωb are called the lower cut-off frequency and
the upper cut-off frequency, respectively.
Figure 5.1 Magnitude response of ideal filters: (a) lowpass, (b) highpass, (c) bandpass, and
(d) bandstop
In practice, we cannot achieve the infinitely sharp cutoff implied by the ideal filters
shown in Figure 5.1. This will be shown later by considering the impulse response of the
ideal lowpass filter that is non-causal and hence not physically realizable. Instead we
must compromise and accept a more gradual cutoff between passband and stopband, as
well as specify a transition band between the passband and stopband. The design is
based on magnitude response specifications only, so the phase response of the filter is
not controlled. Whether this is important depends on the application. Realizable filters
do not exhibit the flat passband or the perfect linear phase characteristic. The deviation
of jH
!j from unity (0 dB) in the passband is called magnitude distortion, and the
deviation from the linear phase of the phase response H
! is called phase distortion.
The characteristics of digital filters are often specified in the frequency domain. For
frequency-selective filters, the magnitude response specifications of a digital filter are
often given in the form of tolerance (or ripple) schemes. In addition, a transition band is
specified between the passband and the stopband to permit the magnitude drop off
smoothly. A typical magnitude response of lowpass filter is shown in Figure 5.2. The
dotted horizontal lines in the figure indicate the tolerance limits. In the passband, the
magnitude response has a peak deviation δp, and in the stopband, it has a maximum
deviation δs. The frequencies ωp and ωs are the passband edge (cut-off) frequency and
the stopband edge frequency, respectively.
Figure 5.2 Magnitude response and tolerance scheme of a typical lowpass filter, showing the
passband ripple δp (Ap), the stopband attenuation δs (As), the passband edge ωp, the cut-off
frequency ωc, the stopband edge ωs, and the transition band
In the passband and the stopband, the magnitude response is required to satisfy

1 − δp ≤ |H(ω)| ≤ 1 + δp,   0 ≤ ω ≤ ωp,    (5.1.4)

and

|H(ω)| ≤ δs,   ωs ≤ ω ≤ π.    (5.1.5)

The stopband ripple (or attenuation) describes the maximum gain (or minimum
attenuation) for signal components above ωs.
Passband and stopband deviations may be expressed in decibels. The peak passband
ripple, δp, and the minimum stopband attenuation, δs, in decibels are given as

Ap = 20 log10[(1 + δp)/(1 − δp)] dB    (5.1.6)

and

As = −20 log10(δs) dB.    (5.1.7)

Thus we have
δp = (10^{Ap/20} − 1)/(10^{Ap/20} + 1)    (5.1.8)

and

δs = 10^{−As/20}.    (5.1.9)
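A small C sketch (not from the book; the 0.5 dB and 60 dB specifications are arbitrary example values) that applies (5.1.6)-(5.1.9) to convert between the decibel specifications and the linear deviations δp and δs is shown below:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double Ap = 0.5;     /* peak passband ripple in dB (assumed specification)      */
    double As = 60.0;    /* minimum stopband attenuation in dB (assumed specification) */

    /* (5.1.8) and (5.1.9): decibel specifications to linear deviations */
    double dp = (pow(10.0, Ap/20.0) - 1.0)/(pow(10.0, Ap/20.0) + 1.0);
    double ds = pow(10.0, -As/20.0);

    /* (5.1.6) and (5.1.7): back to decibels as a consistency check */
    printf("dp = %g, ds = %g\n", dp, ds);
    printf("Ap = %g dB, As = %g dB\n",
           20.0*log10((1.0 + dp)/(1.0 - dp)), -20.0*log10(ds));
    return 0;
}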
The transition band is the area between the passband edge frequency !p and the
stopband edge frequency !s . The magnitude response decreases monotonically from the
passband to the stopband in this region. Generally, the magnitude in the transition band
is left unspecified. The width of the transition band determines how sharp the filter is. It
is possible to design filters that have minimum ripple over the passband, but a certain
level of ripple in this region is commonly accepted in exchange for a faster roll-off of
gain in the transition band. The stopband is chosen by the design specifications.
Generally, the smaller δp and δs are, and the narrower the transition band, the more
complicated (higher order) the designed filter becomes.
An example of a narrow bandpass filter is illustrated in Figure 5.3. The center
frequency ωm is the point of maximum gain (or maximum attenuation for a notch
filter). If a logarithmic scale is used for frequency, such as in many audio applications, the
center frequency at the geometric mean is expressed as

ωm = √(ωa ωb),    (5.1.10a)
Figure 5.3 Magnitude response of a narrow bandpass filter with center frequency ωm, cut-off
frequencies ωa and ωb, and gain 1/√2 at the cut-off frequencies
where ωa and ωb are the lower and upper cut-off frequencies, respectively. The
bandwidth is the difference between the two cut-off frequencies for a bandpass filter.
That is,

BW = ωb − ωa.    (5.1.10b)

The cut-off frequencies are the 3 dB frequencies at which

|H(ωa)| = |H(ωb)| = 1/√2 ≈ 0.707.    (5.1.11)

Another way of describing a resonator (or notch) filter is the quality factor defined as

Q = ωm/(2πBW).    (5.1.12)
The group delay of the filter is defined as

Td(ω) = −dφ(ω)/dω,    (5.1.13)

where φ(ω) is the phase response of the filter. A linear phase response has the form

φ(ω) = −αω.

These equations show that for a filter with a linear phase, the group delay Td(ω) given in
(5.1.13) is a constant α for all frequencies. This filter avoids phase distortion because all
sinusoidal components in the input are delayed by the same amount. A filter with a
nonlinear phase will cause a phase distortion in the signal that passes through it. This is
because the frequency components in the signal will each be delayed by a different
amount, thereby altering their harmonic relationships. Linear phase is important in data
communications, audio, and other applications where the temporal relationships
between different frequency components are critical.
The specifications on the magnitude and phase (or group delay) of H(ω) are based on
the steady-state response of the filter. Therefore they are called the steady-state speci-
fications. The speed of the response concerns the rate at which the filter reaches the
steady-state response. The transient performance is defined for the response right after
the application of an input signal. A well-designed filter should have a fast response, a
small rise time, a small settling time, and a small overshoot.
In theory, both the steady-state and transient performance should be considered in
the design of a digital filter. However, it is difficult to consider these two specifications
simultaneously. In practice, we first design a filter to meet the magnitude specifications.
Once this filter is obtained, we check its phase response and transient performance. If
they are satisfactory, the design is completed. Otherwise, we must repeat the design
process. Once the transfer function has been determined, we can obtain a realization of
the filter. This will be discussed later.
The signal-flow diagram of the FIR filter is shown in Figure 3.6. As discussed in
Chapter 3, the general I/O difference equation of FIR filter is expressed as
y(n) = b0 x(n) + b1 x(n − 1) + · · · + b_{L−1} x(n − L + 1) = Σ_{l=0}^{L−1} b_l x(n − l),    (5.2.1)
where bl are the impulse response coefficients of the FIR filter. This equation describes
the output of the FIR filter as a convolution sum of the input with the impulse response
of the system. The transfer function of the FIR filter defined in (5.2.1) is given by
H(z) = b0 + b1 z^{−1} + · · · + b_{L−1} z^{−(L−1)} = Σ_{l=0}^{L−1} b_l z^{−l}.    (5.2.2)
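A minimal C sketch of block FIR filtering that computes (5.2.1) directly is shown below; it is not the book's experiment code, the function name fir_filter is hypothetical, and the input is assumed to be causal (x(n) = 0 for n < 0):

/* y(n) = sum_{l=0}^{L-1} b[l]*x(n-l) for n = 0, 1, ..., num_samples-1 */
void fir_filter(const float *x, float *y, int num_samples, const float *b, int L)
{
    int n, l;
    for (n = 0; n < num_samples; n++)
    {
        float acc = 0.0f;
        for (l = 0; l < L && l <= n; l++)
            acc += b[l]*x[n-l];          /* convolution sum of (5.2.1) */
        y[n] = acc;
    }
}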
As discussed in Section 3.2.2, the output of the linear system defined by the impulse
response h(n) for an input signal x(n) can be expressed as
y(n) = x(n) * h(n) = Σ_{l=−∞}^{∞} h(l) x(n − l).    (5.2.3)
Thus the output of the LTI system at any given time is the sum of the input samples
convoluted by the impulse response coefficients of the system. The output at time n0 is
given as
y(n0) = Σ_{l=−∞}^{∞} h(l) x(n0 − l).    (5.2.4)
Assuming that n0 is positive, the process of computing the linear convolution involves
the following four steps:
1. Folding. Fold x(l) about l = 0 to obtain x(−l).
2. Shifting. Shift x(−l) to the right by n0 samples to obtain x(n0 − l).
3. Multiplication. Multiply h(l) by x(n0 − l) to obtain the products h(l) x(n0 − l) for
   all l.
4. Summation. Sum all the products to obtain the output y(n0) at time n0.

Repeat steps 2-4 in computing the output of the system at other time instants n0.
This general procedure of computing convolution sums can be applied to (5.2.1) for
calculating the FIR filter output y(n). As defined in (3.2.15), the impulse response of the
FIR filter is
(
0, l<0
h
l bl , 0l<L
5:2:5
0, l L.
If the input signal is causal, the general linear convolution equation defined in (5.2.3)
can be simplified to (5.2.1). Note that the convolution of the length-M input with the
length-L impulse response results in a length (L + M − 1) output.
For example, consider an FIR filter with L = 4 coefficients. The output of the filter is

y(n) = Σ_{l=0}^{3} b_l x(n − l),   n ≥ 0.

This yields

n = 0,  y(0) = b0 x(0),
n = 1,  y(1) = b0 x(1) + b1 x(0),
n = 2,  y(2) = b0 x(2) + b1 x(1) + b2 x(0),
n = 3,  y(3) = b0 x(3) + b1 x(2) + b2 x(1) + b3 x(0).

In general, we have

y(n) = b0 x(n) + b1 x(n − 1) + b2 x(n − 2) + b3 x(n − 3),   n ≥ 3.
As shown in Figure 5.4, the input sequence is flipped around (folding) and then
shifted to the right over the filter coefficients. At each time instant, the output value is
the sum of products of overlapped coefficients with the corresponding input data
aligned below it. This flip-and-slide form of linear convolution can be illustrated in
Figure 5.5. Note that shifting x(l) to the right is equivalent to shifting b_l to the left by one
unit at each sampling period.
As shown in Figure 5.5, the input sequence is extended by padding L − 1 zeros to its
right. At time n = 0, the only non-zero product comes from b0 and x(0), which are time
aligned. It takes the filter L − 1 iterations before it is completely overlapped with the
input sequence. The first L − 1 outputs correspond to the transient behavior of the FIR
filter. For n ≥ L − 1, the filter aligns over the non-zero portion of the input sequence.
That is, the signal buffer of the FIR filter is full and the filter is in the steady state. If the
input is a finite-length sequence of M samples, there are L + M − 1 output samples and
the last L − 1 samples also correspond to transients.
Figures 5.4 and 5.5 (graphical illustrations of the flip-and-slide linear convolution of the
input sequence with the coefficients b0, b1, b2, and b3, showing the overlapped products
b_l x(n − l) at n = 0, 1, 2, and n ≥ 3)
A multiband filter has more than one passband and stopband. A special case of the
multiband filter is the comb filter. A comb filter has evenly spaced zeros, with the shape
of the magnitude response resembling a comb in order to block frequencies that are
integral multiples of a fundamental frequency. A difference equation of a comb filter is
given as

y(n) = x(n) − x(n − L),    (5.2.6)

where the number of delays L is an integer. The transfer function of this multiplier-free
FIR filter is

H(z) = 1 − z^{−L} = (z^L − 1)/z^L.    (5.2.7)

Thus the comb filter has L poles at the origin (trivial poles) and L zeros equally spaced
on the unit circle at

z_l = e^{j(2π/L)l},   l = 0, 1, . . . , L − 1.    (5.2.8)
Example 5.3: The zeros and the frequency response of a comb filter can be
computed and plotted using the following MATLAB script for L = 8:

b = [1 0 0 0 0 0 0 0 -1];
zplane(b, 1);
freqz(b, 1, 128);

The zeros on the z-plane are shown in Figure 5.6(a) and the characteristic comb
shape is shown in Figure 5.6(b). The center of the passband lies halfway between
the zeros of the response, that is, at the frequencies (2l + 1)π/L, l = 0, 1, . . . , L − 1.
Because there is not a large attenuation in the stopband, the comb filter can
only be used as a crude bandstop filter to remove harmonics at frequencies

ω_l = 2πl/L,   l = 0, 1, . . . , L − 1.    (5.2.9)
Comb filters are useful for passing or eliminating specific frequencies and their
harmonics. Periodic signals have harmonics, and using a comb filter is more efficient
than having individual filters for each harmonic. For example, the constant humming
sound produced by large transformers located in electric utility substations is com-
posed of even-numbered harmonics (120 Hz, 240 Hz, 360 Hz, etc.) of the 60 Hz power
frequency. When a desired signal is corrupted by the transformer noise, a comb filter
with notches at the multiples of 120 Hz can be used to eliminate the undesired harmonics.
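A minimal C sketch of such a comb filter (not from the book; the function name and the zero initial history are assumptions) implements y(n) = x(n) − x(n − L), which matches the transfer function (5.2.7). Choosing L = fs/120 (with fs a multiple of 120 Hz) places the notches at multiples of 120 Hz for the transformer-noise example:

void comb_filter(const float *x, float *y, int num_samples, int L)
{
    int n;
    for (n = 0; n < num_samples; n++)
    {
        float delayed = (n >= L) ? x[n-L] : 0.0f;   /* x(n-L), zero history assumed */
        y[n] = x[n] - delayed;
    }
}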
We can selectively cancel one or more zeros in a comb filter with corresponding poles.
Canceling the zero provides a passband, while the remaining zeros provide attenuation
for a stopband. For example, we can add a pole at z = 1. Thus the transfer function
given in (5.2.7) is changed to
Figure 5.6 Zeros of a simple comb filter (L = 8) and its frequency response: (a) zeros, and
(b) magnitude (top) and phase (bottom) responses
H(z) = (1 − z^{−L})/(1 − z^{−1}).    (5.2.10)
This is a lowpass filter with passband centered at z = 1, where the pole-zero cancella-
tion occurs. Since the pole at z = 1 is canceled by the zero at z = 1, the system defined
by (5.2.10) is still an FIR filter. Note that canceling the zero at z = 1 produces a
lowpass filter, canceling the zeros at z = ±j produces a bandpass filter, and canceling
the zero at z = −1 produces a highpass filter.
Applying the scaling factor 1/L to (5.2.10), the transfer function becomes

H(z) = (1/L) (1 − z^{−L})/(1 − z^{−1}).    (5.2.11)
This is the moving-average filter introduced in Chapter 3 with the I/O difference
equation expressed as
y(n) = (1/L)[x(n) − x(n − L)] + y(n − 1) = (1/L) Σ_{l=0}^{L−1} x(n − l).    (5.2.12)
The moving-average filter is a very simple lowpass filtering operation that passes the
zero-frequency (or the mean) component. However, this type of filter has disadvantages:
the passband cut-off frequency is a function of L and the sampling rate fs, and the
stopband attenuation is fixed by L.
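A minimal C sketch of the recursive form of (5.2.12) is shown below; it is not the book's code, the function name is hypothetical, and zero initial conditions are assumed:

/* y(n) = y(n-1) + [x(n) - x(n-L)]/L, with x(n) = 0 for n < 0 and y(-1) = 0 */
void moving_average(const float *x, float *y, int num_samples, int L)
{
    int n;
    float prev_y = 0.0f;                  /* y(n-1) */
    for (n = 0; n < num_samples; n++)
    {
        float oldest = (n >= L) ? x[n-L] : 0.0f;
        prev_y = prev_y + (x[n] - oldest)/(float)L;
        y[n] = prev_y;
    }
}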
For example, consider the simple two-point averaging filter

y(n) = (1/2)[x(n) + x(n − 1)],   n ≥ 0.

Its transfer function is

H(z) = (1/2)(1 + z^{−1}),

and its frequency response is

H(ω) = (1/2)(1 + e^{−jω}) = (1/2) e^{−jω/2} (e^{jω/2} + e^{−jω/2}) = e^{−jω/2} cos(ω/2).

Therefore, we have

|H(ω)|² = cos²(ω/2) = (1/2)[1 + cos(ω)]

and the phase response

φ(ω) = −ω/2,

which is a linear function of ω.
In many practical applications, it is required that a digital filter has a linear phase. In
particular, it is important for phase-sensitive signals such as speech, music, images, and
data transmission where nonlinear phase would give unacceptable frequency distortion.
FIR filters can be designed to obtain exact linear phase.
If L is an odd number, we define M = (L − 1)/2. If we define h_l = b_{l+M}, then (5.2.1)
can be written as
B(z) = Σ_{l=0}^{2M} b_l z^{−l} = Σ_{l=−M}^{M} b_{l+M} z^{−(l+M)} = z^{−M} Σ_{l=−M}^{M} h_l z^{−l} = z^{−M} H(z),    (5.2.13)

where

H(z) = Σ_{l=−M}^{M} h_l z^{−l}.    (5.2.14)
If the coefficients satisfy the symmetry condition

h_l = h_{−l},   l = 0, 1, . . . , M,    (5.2.15)

the frequency response of B(z) can be expressed as

B(ω) = e^{−jωM} H(ω) = e^{−jωM} [h_0 + 2 Σ_{l=1}^{M} h_l cos(ωl)].    (5.2.16)
If h_l is real, then H(ω) is a real function of ω. If H(ω) ≥ 0, then the phase of B(ω) is
equal to

φ(ω) = −ωM,    (5.2.17)

which is a linear function of ω. However, if H(ω) < 0, then the phase of B(ω) is equal to
π − ωM. Thus, if there are sign changes in H(ω), there are corresponding 180° phase
shifts in B(ω), and B(ω) is only piecewise linear. However, it is still simple to refer to the
filter as having linear phase.
If h_l has the anti-symmetry property expressed as

h_l = −h_{−l},   l = 0, 1, . . . , M,    (5.2.18)

then the frequency response is

H(ω) = −j 2 Σ_{l=1}^{M} h_l sin(ωl).    (5.2.19)

If h_l is real, then H(ω) is purely imaginary and the phase of B(ω) is a linear function of ω.
The filter order L is assumed to be an odd integer in the above derivations. If L is an
even integer and M = L/2, then the derivations of (5.2.16) and (5.2.19) have to be
modified slightly. In conclusion, an FIR filter has linear phase if its coefficients satisfy
the following (positive) symmetric condition:

b_l = b_{L−1−l},   l = 0, 1, . . . , L − 1,    (5.2.20)

or the (negative) anti-symmetric condition:

b_l = −b_{L−1−l},   l = 0, 1, . . . , L − 1.    (5.2.21)
There are four types of linear phase FIR filters, depending on whether L is even or
odd and whether b_l has positive or negative symmetry, as illustrated in Figure 5.7. The
group delay of a symmetric (or anti-symmetric) FIR filter is Td(ω) = L/2, which
corresponds to the midpoint of the FIR filter. The frequency response of the type I
Figure 5.7 Coefficients of the four types of linear phase FIR filters: (a) type I: L even (L = 8),
positive symmetry, (b) type II: L odd (L = 7), positive symmetry, (c) type III: L even (L = 8),
negative symmetry, and (d) type IV: L odd (L = 7), negative symmetry
(L even, positive symmetry) filter is always 0 at the Nyquist frequency. This type of filter
is unsuitable for a highpass filter. Type III (L even, negative symmetry) and type IV (L odd,
negative symmetry) filters introduce a 90° phase shift, thus they are often used to design
Hilbert transformers. Their frequency response is always 0 at the DC frequency, making
them unsuitable for lowpass filters. In addition, the type III response is always 0 at the
Nyquist frequency, also making it unsuitable for a highpass filter.
The symmetry (or anti-symmetry) property of a linear-phase FIR filter can be
exploited to reduce the total number of multiplications to almost half. Consider the
realization of an FIR filter with an even length L and a positive symmetric impulse response
as given in (5.2.20). Equation (5.2.2) can be combined as

H(z) = b0 (1 + z^{−(L−1)}) + b1 (z^{−1} + z^{−(L−2)}) + · · · + b_{L/2−1} (z^{−(L/2−1)} + z^{−L/2}).    (5.2.22)

The corresponding I/O equation groups each pair of samples that share the same coefficient as

y(n) = b0 [x(n) + x(n − L + 1)] + b1 [x(n − 1) + x(n − L + 2)] + · · ·
       + b_{L/2−1} [x(n − L/2 + 1) + x(n − L/2)],    (5.2.23)

or, more compactly,

y(n) = Σ_{l=0}^{L/2−1} b_l [x(n − l) + x(n − L + 1 + l)].    (5.2.24)
As shown in (5.2.23) and Figure 5.8, the number of multiplications is cut in half by
adding the pair of samples, then multiplying the sum by the corresponding coefficient.
Figure 5.8 Signal-flow diagram of the symmetric FIR filter realization given in (5.2.24)
The trade-off is that instead of accessing data linearly through the same buffer with a
single pointer, we need two address pointers that point at both ends for x(n − l) and
x(n − L + 1 + l). The TMS320C55x provides two special instructions for implementing
n L 1 l. The TMS320C55x provides two special instructions for implementing
the symmetric and anti-symmetric FIR filters efficiently. In Section 5.6, we will demon-
strate how to use the symmetric FIR instructions for experiments.
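In plain C (rather than the C55x symmetric FIR instructions discussed in Section 5.6), the folded computation of (5.2.24) could be sketched as follows; the buffer layout with the oldest sample at index 0 and the function name sym_fir are assumptions:

/* x[0] = x(n-L+1) (oldest), ..., x[L-1] = x(n) (newest); L is assumed even */
float sym_fir(const float *x, const float *b, int L)
{
    int l;
    float acc = 0.0f;
    for (l = 0; l < L/2; l++)
        acc += b[l]*(x[L-1-l] + x[l]);   /* add the pair x(n-l) + x(n-L+1+l), then multiply */
    return acc;
}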
There are applications where data is already collected and stored for later processing,
i.e., the processing is not done in real time. In these cases, the `current' time n can be
located arbitrarily as the data is processed, so that the current output of the filter may
depend on past, current, and future input values. Such a filter is `non-realizable' in real
time, but is easy to implement for the stored data. The non-causal filter has the I/O
equation
y(n) = Σ_{l=−L1}^{L2} b_l x(n − l),    (5.2.25)

and the transfer function is

H(z) = Σ_{l=−L1}^{L2} b_l z^{−l}.    (5.2.26)
Some typical applications of non-causal filters are the smoothing filters, the interpola-
tion filters, and the inverse filters. A simple example of a non-causal filter is a Hanning
filter with coefficients {0.25, 0.5, 0.25} for smoothing estimated pitch in speech pro-
cessing.
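A minimal C sketch of this three-point Hanning smoother applied to stored data (not from the book; the end-point handling with zero padding is an assumption) is:

/* y(n) = 0.25*x(n+1) + 0.5*x(n) + 0.25*x(n-1), using x = 0 outside the record */
void hanning_smooth(const float *x, float *y, int num_samples)
{
    int n;
    for (n = 0; n < num_samples; n++)
    {
        float prev = (n > 0) ? x[n-1] : 0.0f;
        float next = (n < num_samples-1) ? x[n+1] : 0.0f;
        y[n] = 0.25f*next + 0.5f*x[n] + 0.25f*prev;
    }
}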
The FIR transfer function given in (5.2.2) can also be factored into second-order
sections and expressed in the cascade form

H(z) = b0 (1 + b11 z^{−1} + b12 z^{−2})(1 + b21 z^{−1} + b22 z^{−2}) · · · (1 + bM1 z^{−1} + bM2 z^{−2})
     = b0 Π_{m=1}^{M} (1 + bm1 z^{−1} + bm2 z^{−2})
     = b0 H1(z) H2(z) · · · HM(z),    (5.2.28)

where M = (L − 1)/2 if L is odd and M = L/2 if L is even. Thus the higher order H(z)
given in (5.2.2) is broken up and can be implemented in cascade form as illustrated in
Figure 5.9. Splitting the filter in this manner reduces roundoff errors, which may be
critical for some applications. However, the direct form is more efficient for implemen-
tation on most commercially available DSP processors such as the TMS320C55x.
The output y(n) is a linear combination of a finite number of inputs {x(n), x(n − 1),
. . . , x(n − L + 1)} and L coefficients {b_l, l = 0, 1, . . . , L − 1}, which can be represented
as the tables illustrated in Figure 5.10. In order to compute the output at any time, we
simply have to multiply the corresponding values in each table and sum the results.
That is,

y(n) = b0 x(n) + b1 x(n − 1) + · · · + b_{L−1} x(n − L + 1).    (5.2.29)

In FIR filtering, the coefficient values are constant, but the data in the signal buffer
changes every sampling period, T. That is, the x(n) value at time n becomes x(n − 1) in
the next sampling period, then x(n − 2), etc., until it simply drops off the end of the
delay chain.
Figure 5.9 A cascade structure of FIR filter: (a) overall structure, and (b) flow diagram of
second-order FIR section
Figure 5.10 Coefficient and signal buffer tables for the FIR filter, holding b0, . . . , b_{L−1} and
x(n), . . . , x(n − L + 1)
The signal buffer is refreshed in every sampling period in the fashion illustrated in
Figure 5.11, where the oldest sample x(n − L + 1) is discarded and other signals are
shifted one location to the right in the buffer. A new sample (from ADC in real-time
application) is inserted to the memory location labeled as x(n). The FIR filtering
operation that computes y(n) using (5.2.29) is then performed. The process of refreshing
the signal buffer shown in Figure 5.11 requires intensive processing time if the operation
is not implemented by the DSP hardware.
The most efficient method for handling a signal buffer is to load the signal samples
into a circular buffer, as illustrated in Figure 5.12(a). Instead of shifting the data
forward while holding the buffer addresses fixed, the data is kept fixed and the addresses
are shifted backwards (counterclockwise) in the circular buffer. The beginning of the
signal sample, x(n), is pointed at with a pointer and the previous samples are loaded
sequentially from that point in a clockwise direction. As we receive a new sample, it is
placed at the position x(n) and our filtering operation defined in (5.2.29) is performed.
After calculating the output y(n), the pointer is moved counterclockwise one position to
point at x(n − L + 1) and we wait for the next input signal. The next input at time
n + 1 is written to the x(n − L + 1) position, and is referred to as x(n) for the next
iteration. This is permitted because the old x(n − L + 1) signal dropped off the end of
our delay chain after the previous calculation as shown in Figure 5.11. The circular
buffer implementation of a signal buffer, or a tapped-delay-line, is very efficient. The
update is carried out by adjusting the address pointer without physically shifting any
data in memory. It is especially useful in implementing a comb filter when L is large,
since we only need to access two adjacent samples x(n) and x(n − L) in the circular
Figure 5.12 Circular buffers for the FIR filter: (a) circular buffer for holding the signals for FIR
filtering, where the pointer to x(n) is updated in the counterclockwise direction, and (b) circular buffer for
the FIR filter coefficients, where the pointer always points to b0 at the beginning of filtering
It is also used in sinewave generators and wavetable sound synthesis, where a stored waveform can be generated periodically by cycling over the circular buffer.
Figure 5.12(b) shows a circular buffer for the FIR filter coefficients. The circular buffer allows the coefficient pointer to wrap around when it reaches the end of the coefficient buffer. That is, the pointer moves from b_{L−1} back to b_0, so that the filtering always starts with the first coefficient.
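The following C sketch illustrates the circular-buffer update just described; it is not the book's code, and the function and variable names are illustrative.

/* FIR filtering with a circular signal buffer, a sketch of Figure 5.12(a).
   x[] holds the last L samples, index points at the position of x(n).   */
float fir_circ(float xn, const float h[], float x[], int *index, int L)
{
    float y = 0.0f;
    int   i, j = *index;

    x[j] = xn;                              /* overwrite the oldest sample with x(n) */
    for (i = 0; i < L; i++)                 /* y(n) = sum of h(i)*x(n-i)             */
    {
        y += h[i] * x[j];
        j = (j == 0) ? (L - 1) : (j - 1);   /* step backwards to the next older sample */
    }
    *index = (*index + 1) % L;              /* next write position = oldest sample   */
    return y;
}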
These five steps are not necessarily independent, and they may be conducted in a
different order. Specification of filter characteristics and realization of desired filters
were discussed in Section 5.2. In this section, we focus on designing FIR filters for given
specifications.
There are several methods for designing FIR filters. The methods discussed in this section are the Fourier series (window) method and the frequency-sampling method. The Fourier series method offers a very simple and flexible way of computing FIR filter coefficients, but it does not allow the designer adequate control over the filter parameters. The main attraction of the frequency-sampling method is that it allows recursive realization of FIR filters, which can be computationally efficient. However, it lacks flexibility in specifying or controlling the filter parameters.
With the availability of efficient and easy-to-use filter design programs such as MATLAB, the Parks–McClellan algorithm is now widely used in industry for FIR filter design. The Parks–McClellan algorithm should be the method of first choice for most practical applications.
The basic idea of the Fourier series method is to design an FIR filter that approximates the desired frequency response of the filter by calculating its impulse response. This method utilizes the fact that the frequency response H(ω) of a digital filter is a periodic function of ω with period 2π. Thus it can be expanded in a Fourier series as
H(\omega) = \sum_{n=-\infty}^{\infty} h(n)\, e^{-j\omega n},    (5.3.1)
where
h(n) = \frac{1}{2\pi} \int_{-\pi}^{\pi} H(\omega)\, e^{j\omega n}\, d\omega, \quad -\infty < n < \infty.    (5.3.2)
This equation shows that the impulse response h(n) is double-sided and has infinite length. If H(ω) is an even function in the interval |ω| ≤ π, we can show that (see exercise problem)
h(n) = \frac{1}{\pi} \int_{0}^{\pi} H(\omega)\cos(\omega n)\, d\omega, \quad n \ge 0,    (5.3.3)
and
h(-n) = h(n), \quad n > 0.    (5.3.4)
For a given desired frequency response H(ω), the corresponding impulse response (filter coefficients) h(n) can be calculated for a non-recursive filter if the integral (5.3.2) or (5.3.3) can be evaluated. However, in practice there are two problems with this simple design technique. First, the impulse response for a filter with any sharpness in its frequency response is infinitely long, and working with an infinite number of coefficients is not practical. Second, because of the negative values of n, the resulting filter is non-causal and thus is not realizable for real-time applications.
A finite-duration impulse response {h′(n)} of length L = 2M + 1 that is the best approximation (in the minimum mean-square error sense) to the ideal infinite-length impulse response can be obtained simply by truncation. That is,
h'(n) = \begin{cases} h(n), & -M \le n \le M \\ 0, & \text{otherwise.} \end{cases}    (5.3.5)
Note that in this definition we assume L to be an odd number; otherwise M will not be an integer. On the unit circle we have z = e^{jω}, and the system transfer function is expressed as
H'(z) = \sum_{n=-M}^{M} h'(n)\, z^{-n}.    (5.3.6)
It is clear that this filter is not physically realizable in real time, since the filter must produce an output that is advanced in time with respect to the input.
A causal FIR filter can be derived by delaying the h′(n) sequence by M samples. That is, by shifting the time origin to the left of the vector and re-indexing the coefficients as
b'_l = h'(l - M), \quad l = 0, 1, \ldots, L-1,    (5.3.7)
we obtain the transfer function
B'(z) = \sum_{l=0}^{L-1} b'_l\, z^{-l}.    (5.3.8)
B'(z) = z^{-M} H'(z)    (5.3.9)
and
B'(\omega) = e^{-j\omega M} H'(\omega).    (5.3.10)
Since |e^{−jωM}| = 1, we have
|B'(\omega)| = |H'(\omega)|.    (5.3.11)
This causal filter has the same magnitude response as that of the non-causal filter. If h(n) is real, then H′(ω) is a real function of ω (see exercise problem). As discussed in Section 5.2.3, if H′(ω) ≥ 0, the phase of B′(ω) is equal to −Mω; if H′(ω) < 0, the phase of B′(ω) is equal to π − Mω. Therefore the phase of B′(ω) is a linear function of ω, and the transfer function B′(z) has a constant group delay.
Example 5.5: The ideal lowpass filter of Figure 5.1(a) has frequency response
H(\omega) = \begin{cases} 1, & |\omega| \le \omega_c \\ 0, & \text{otherwise.} \end{cases}    (5.3.12)
From (5.3.3), the corresponding impulse response is
h(n) = \frac{1}{\pi}\int_{0}^{\omega_c} \cos(\omega n)\, d\omega = \frac{\sin(\omega_c n)}{\pi n} = \frac{\omega_c}{\pi}\,\mathrm{sinc}\!\left(\frac{\omega_c n}{\pi}\right),    (5.3.13a)
where
\mathrm{sinc}(x) = \frac{\sin(\pi x)}{\pi x}.    (5.3.13b)
Taking the limit as n → 0, we have h(0) = ω_c/π.
Example 5.6: Design a lowpass FIR filter with the frequency response
H(f) = \begin{cases} 1, & 0 \le f \le 1\ \text{kHz} \\ 0, & 1\ \text{kHz} < f \le 4\ \text{kHz} \end{cases}
when the sampling rate is 8 kHz. The duration of the impulse response is limited to 2.5 ms.
Since 2MT = 0.0025 seconds and T = 0.000125 seconds, we obtain M = 10. Thus the actual filter has 21 coefficients. From Table 3.1, 1 kHz corresponds to ω_c = 0.25π. From (5.3.13), we have
h(n) = 0.25\,\mathrm{sinc}\!\left(\frac{0.25\pi n}{\pi}\right), \quad n = 0, \pm 1, \ldots, \pm 10.
The transfer function of the causal filter is
B'(z) = \sum_{l=0}^{20} b'_l\, z^{-l}.
Example 5.7: Design a lowpass filter of cut-off frequency ω_c = 0.4π with filter length L = 41 and L = 61.
When L = 41, M = (L − 1)/2 = 20. From (5.3.15), the designed impulse response is given by
b'_l = 0.4\,\mathrm{sinc}\!\left(\frac{0.4\pi (l - 20)}{\pi}\right), \quad l = 0, 1, \ldots, 40.
The magnitude responses are computed and plotted in Figure 5.13 using the
MATLAB script exam5_7.m given in the software package.
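A small C sketch of the truncated-sinc computation used in Examples 5.6 and 5.7 is given below. It is an illustration only (the book's designs use the MATLAB scripts mentioned above), and the function name is illustrative.

#include <math.h>

/* Truncated-sinc lowpass design:
   b[l] = (wc/pi)*sinc((wc/pi)*(l - M)), l = 0,...,L-1, M = (L-1)/2,
   where sinc(x) = sin(pi*x)/(pi*x) and wc is the cutoff in radians.  */
void sinc_lowpass(double b[], int L, double wc)
{
    const double PI = 3.141592653589793;
    int    M = (L - 1) / 2;            /* delay that makes the filter causal */
    double c = wc / PI;                /* e.g., 0.4 for wc = 0.4*pi          */
    int    l;

    for (l = 0; l < L; l++)
    {
        double x = c * (l - M);        /* argument of sinc(x)                */
        b[l] = (l == M) ? c : c * sin(PI * x) / (PI * x);
    }
}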
As shown in Figure 5.13, the causal FIR filter obtained by simply truncating the impulse response coefficients of the desired filter exhibits an oscillatory behavior (or ripples) in its magnitude response. As the length of the filter is increased, the number of ripples in both the passband and the stopband increases, and the width of each ripple decreases. The ripples become narrower, but their heights remain almost constant. The largest ripple occurs near the transition discontinuity, and its amplitude is independent of L. This undesired effect is called the Gibbs phenomenon. It is an unavoidable consequence of having an abrupt discontinuity (truncation) of the impulse response in the time domain.
The truncation operation described in (5.3.5) can be considered as multiplication of the infinite-length sequence {h(n)} by the rectangular window sequence {w(n)}. That is,
h'(n) = h(n)\, w(n),    (5.3.16)
where the rectangular window is defined as
w(n) = \begin{cases} 1, & -M \le n \le M \\ 0, & \text{otherwise.} \end{cases}    (5.3.17)
Figure 5.13 Magnitude responses of lowpass filters designed by the Fourier series method: (a) L = 41, and (b) L = 61
In the frequency domain, (5.3.16) corresponds to
H'(\omega) = \frac{1}{2\pi} \int_{-\pi}^{\pi} H(\theta)\, W(\omega - \theta)\, d\theta,    (5.3.18)
where W(ω) is the DTFT of w(n) defined in (5.3.17). Thus the designed filter H′(ω) will be a smeared version of the desired filter H(ω).
Equation (5.3.18) shows that H′(ω) is obtained by the convolution of the desired frequency response H(ω) with the rectangular window's frequency response W(ω). If
W(\omega) = 2\pi\,\delta(\omega),    (5.3.19)
we have the desired result H′(ω) = H(ω). Equation (5.3.19) implies that if W(ω) is a very narrow pulse centered at ω = 0, such as a delta function, then H′(ω) will approximate H(ω) very closely. From Table 4.1, this condition requires the optimum window
3. Their frequency responses W(ω) have a narrow mainlobe and small sidelobes, as suggested by (5.3.19).
For the rectangular window defined in (5.3.17), the frequency response is
W(\omega) = \sum_{n=-M}^{M} e^{-j\omega n}    (5.3.21a)
= \frac{\sin[(2M+1)\omega/2]}{\sin(\omega/2)}.    (5.3.21b)
A plot of W(ω) is illustrated in Figure 5.14 for M = 8 and 20. The MATLAB script fig5_14.m that generated this figure is available in the software package. The frequency response W(ω) has a mainlobe centered at ω = 0. All the other ripples in the frequency response are called sidelobes. The magnitude response |W(ω)| has its first zero where (2M + 1)ω/2 = π, that is, at ω = 2π/(2M + 1). Therefore the width of the mainlobe is 4π/(2M + 1). From (5.3.21a), it is easy to show that the magnitude of the mainlobe is |W(0)| = 2M + 1. The first sidelobe is located approximately at frequency ω₁ = 3π/(2M + 1) with magnitude |W(ω₁)| ≈ 2(2M + 1)/3π for M ≫ 1. The ratio of the mainlobe magnitude to the first sidelobe magnitude is therefore
\left|\frac{W(0)}{W(\omega_1)}\right| \approx \frac{3\pi}{2} \approx 13.5\ \text{dB}.    (5.3.22)
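As a quick arithmetic check of (5.3.22):

20\log_{10}\!\left(\frac{3\pi}{2}\right) = 20\log_{10}(4.712) \approx 13.5\ \text{dB}.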
Figure 5.14 Frequency response of the rectangular window for M = 8 (top) and 20 (bottom)
As ω increases toward the Nyquist frequency, π, the denominator grows larger. This attenuates the higher-frequency numerator terms, resulting in the damped sinusoidal function shown in Figure 5.14.
As M increases, the width of the mainlobe decreases, as desired. However, the area under each lobe remains constant, while the width of each lobe decreases with an increase in M. This implies that with increasing M, the ripples in H′(ω) around the point of discontinuity occur more closely together, but with no decrease in amplitude.
The rectangular window has an abrupt transition to 0 outside the range −M ≤ n ≤ M, which causes the Gibbs phenomenon in the magnitude response of the windowed filter's impulse response. The Gibbs phenomenon can be reduced either by using a window that tapers smoothly to 0 at each end, or by providing a smooth transition from the passband to the stopband. A tapered window causes the height of the sidelobes to diminish and the mainlobe width to increase, resulting in a wider transition at the discontinuity. This effect is often referred to as leakage or smearing.
A large number of tapered windows have been developed and optimized for different applications. In this section, we restrict our discussion to four commonly used windows of length L = 2M + 1; that is, w(n), n = 0, 1, ..., L − 1, symmetric about its middle, n = M. Two parameters that predict the performance of a window in FIR filter design are its mainlobe width and its relative sidelobe level. To ensure a fast transition from the passband to the stopband, the window should have a small mainlobe width. On the other hand, to reduce the passband and stopband ripples, the area under the sidelobes should be small. Unfortunately, there is a trade-off between these two requirements.
The Hann (Hanning) window function is one period of the raised cosine function defined as
w(n) = 0.5\left[1 - \cos\!\left(\frac{2\pi n}{L-1}\right)\right], \quad n = 0, 1, \ldots, L-1.    (5.3.23)
Note that the Hanning window has an actual length of L − 2, since the two end values given by (5.3.23) are zero. The window coefficients can be generated by the MATLAB built-in function
w = hanning(L);
which returns the L-point Hanning window function in the array w. Note that the MATLAB window functions generate coefficients indexed as w(n), n = 1, ..., L. The window is shown in Figure 5.15 (top), and the magnitude response of the Hanning window is shown in the bottom of Figure 5.15. The MATLAB script han.m is included in the software package. For a large L, the peak-to-sidelobe ratio is approximately 31 dB, an improvement of 17.5 dB over the rectangular window. However, since the width of the transition band corresponds roughly to the mainlobe width, it is more than twice that resulting from the rectangular window shown in Figure 5.14.
The Hamming window function is defined as
w(n) = 0.54 - 0.46\cos\!\left(\frac{2\pi n}{L-1}\right), \quad n = 0, 1, \ldots, L-1,    (5.3.24)
Figure 5.15 Hanning window function (top) and its magnitude response (bottom), L = 41
which also corresponds to a raised cosine, but with different weights for the constant and cosine terms. The Hamming window does not taper the end values to 0, but rather to 0.08. MATLAB provides the Hamming window function as
w = hamming(L);
This window function and its magnitude response are shown in Figure 5.16, and the MATLAB script ham.m is given in the software package.
The mainlobe width is about the same as for the Hanning window, but there is an additional 10 dB of stopband attenuation (41 dB). Because it provides low ripple over the passband and good stopband attenuation, the Hamming window is usually more appropriate for FIR filter design than the Hanning window.
Example 5.8: Design a lowpass filter of cut-off frequency ω_c = 0.4π and length L = 61 using the Hamming window. Using a MATLAB script (exam5_8.m in the software package) similar to the one used in Example 5.7, we plot the magnitude response in Figure 5.17. Compared with Figure 5.13(b), we observe that the ripples produced by the rectangular window design are virtually eliminated in the Hamming window design. The trade-off for eliminating the ripples is a loss of resolution, which shows up as an increased transition width.
The Blackman window function is defined as
w(n) = 0.42 - 0.5\cos\!\left(\frac{2\pi n}{L-1}\right) + 0.08\cos\!\left(\frac{4\pi n}{L-1}\right), \quad n = 0, 1, \ldots, L-1.    (5.3.25)
Figure 5.16 Hamming window function (top) and its magnitude response (bottom), L = 41
Figure 5.18 Blackman window function (top) and its magnitude response (bottom), L = 41
The Kaiser window function is defined as
w(n) = \frac{I_0\!\left(\beta\sqrt{1 - \left[(n - M)/M\right]^2}\right)}{I_0(\beta)}, \quad n = 0, 1, \ldots, L-1,    (5.3.26a)
where
I_0(x) = 1 + \sum_{k=1}^{\infty}\left[\frac{(x/2)^k}{k!}\right]^2    (5.3.26b)
is the zero-order modified Bessel function of the first kind. In practice, it is sufficient to keep only the first 25 terms in the summation of (5.3.26b). Because I₀(0) = 1, the Kaiser window has the value 1/I₀(β) at the end points n = 0 and n = L − 1, and is symmetric about its middle, n = M. This is a useful and very flexible family of window functions. MATLAB provides the Kaiser window function as
w = kaiser(L, beta);
The window function and its magnitude response are shown in Figure 5.19 for L = 41 and β = 8, using the MATLAB script ksw.m given in the software package. The Kaiser window is nearly optimum in the sense of having the most energy in its mainlobe for a given peak sidelobe level. For a given stopband attenuation, this translates into the sharpest possible transition width. The Kaiser window can provide different transition widths for the same L; the parameter β determines the trade-off between the mainlobe width and the peak sidelobe level.
As shown in (5.3.26), the Kaiser window is more complicated to generate, but the window coefficients are computed only once during the filter design. Since the window is applied to each filter coefficient at design time, windowing does not affect the run-time complexity of the designed FIR filter.
Although δ_p and δ_s can be specified independently as given in (5.1.8) and (5.1.9), FIR filters designed by all of these windows have equal passband and stopband ripples. Therefore we must design the filter based on the smaller of the two ripples, expressed as
\delta = \min(\delta_p, \delta_s).    (5.3.27)
The designed filter will have passband and stopband ripples equal to δ. The value of δ can be expressed on a dB scale as the attenuation
A = -20\log_{10}\delta.    (5.3.28)
In practice, the design is usually based on the stopband ripple, i.e., δ = δ_s. This is because any reasonably good choices for the passband and stopband attenuations (such as A_p = 0.1 dB and A_s = 60 dB) will result in δ_s < δ_p.
Figure 5.19 Kaiser window function (top) and its magnitude response (bottom), L = 41 and β = 8
The main limitation of the Hanning, Hamming, and Blackman windows is that they produce a fixed value of δ; they limit the achievable passband and stopband attenuation to certain specific values. The Kaiser window does not suffer from this limitation, because it depends on two parameters, L and β, to achieve any desired value of ripple δ or attenuation A. For most practical applications, A ≥ 50 dB. The Kaiser window parameters are determined in terms of the filter specification δ and the transition width Δf as follows [5]:
\beta = 0.1102\,(A - 8.7), \quad A \ge 50,    (5.3.29)
and
L = \frac{(A - 7.59)\, f_s}{14.36\,\Delta f} + 1.    (5.3.30)
Example 5.9: Design a lowpass filter using the Kaiser window with the following specifications: f_s = 10 kHz, f_pass = 2 kHz, f_stop = 2.5 kHz, A_p = 0.1 dB, and A_s = 80 dB.
From (5.1.8), δ_p = (10^{0.1/20} − 1)/(10^{0.1/20} + 1) = 0.0058. From (5.1.9), δ_s = 10^{−80/20} = 0.0001. Thus we choose δ = δ_s = 0.0001 from (5.3.27), and A = −20 log₁₀ δ = 80 = A_s from (5.3.28). The shaping parameter is computed from (5.3.29) as β = 0.1102(80 − 8.7) ≈ 7.86, and the required filter length from (5.3.30) is
L = \frac{(80 - 7.59)\,10}{14.36\,(2.5 - 2)} + 1 = 101.85.
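The arithmetic of Example 5.9 can be reproduced with the short C sketch below. It is illustrative only: the β computation uses the form of (5.3.29) given above (an assumption about the exact published formula), and the length estimate follows (5.3.30) as printed.

#include <math.h>
#include <stdio.h>

int main(void)
{
    double fs = 10000.0, fpass = 2000.0, fstop = 2500.0;
    double Ap = 0.1, As = 80.0;

    double dp = (pow(10.0, Ap/20.0) - 1.0) / (pow(10.0, Ap/20.0) + 1.0);
    double ds = pow(10.0, -As/20.0);
    double d  = (dp < ds) ? dp : ds;                   /* (5.3.27)           */
    double A  = -20.0 * log10(d);                      /* (5.3.28)           */
    double df = fstop - fpass;                         /* transition width   */
    double beta = 0.1102 * (A - 8.7);                  /* assumed (5.3.29)   */
    double L  = (A - 7.59) * fs / (14.36 * df) + 1.0;  /* (5.3.30) as printed*/

    printf("delta=%.4f  A=%.1f dB  beta=%.2f  L=%.2f\n", d, A, beta, L);
    return 0;
}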
The procedures for designing FIR filters using windows are summarized as follows:
1. Determine the window type that will satisfy the stopband attenuation requirements.
2. Determine the window length L (or M) that satisfies the transition-width requirement.
3. Compute the window coefficients w(n), n = 0, 1, ..., L − 1.
4. Generate the ideal impulse response h(n) using (5.3.3) for the desired filter.
5. Truncate the ideal impulse response of infinite length using (5.3.5) to obtain h′(n), −M ≤ n ≤ M.
6. Make the filter causal by shifting the result M units to the right using (5.3.7) to obtain b′_l, l = 0, 1, ..., L − 1.
7. Multiply the window coefficients obtained in step 3 and the impulse response coefficients obtained in step 6 sample by sample. That is, b_l = w(l) b′_l, l = 0, 1, ..., L − 1. A C sketch of this procedure is given after this list.
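The following C sketch combines steps 4–7 for an ideal lowpass response and a Hamming window. It assumes an odd length L; the function name is illustrative and it is not the book's design code.

#include <math.h>

#define PI 3.141592653589793

/* Window-method lowpass FIR design for cutoff wc (radians), odd length L. */
void design_lowpass(double b[], int L, double wc)
{
    int M = (L - 1) / 2;
    int l;
    for (l = 0; l < L; l++)
    {
        int    n  = l - M;                                       /* steps 4-6 */
        double hl = (n == 0) ? wc / PI : sin(wc * n) / (PI * n); /* ideal h   */
        double wl = 0.54 - 0.46 * cos(2.0 * PI * l / (L - 1));   /* (5.3.24)  */
        b[l] = wl * hl;                                          /* step 7    */
    }
}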
Applying a window to an FIR filter's impulse response has the effect of smoothing the
resulting filter's magnitude response. A symmetric window will preserve a symmetric
FIR filter's linear-phase response.
The advantage of the Fourier series method with windowing is its simplicity. It does
not require sophisticated mathematics and can be carried out in a straightforward
manner. However, there is no simple rule for choosing M so that the resulting filter
will meet exactly the specified cut-off frequencies. This is due to the lack of an exact
relation between M and its effect on leakage.
The frequency-sampling method is especially useful when only a few frequency samples are non-zero, or when several bandpass functions are desired simultaneously. A unique attraction of the frequency-sampling method is that it also allows recursive implementation of FIR filters, leading to computationally efficient filters. The disadvantage of this method is that the actual magnitude response of the filter matches the desired response only at the frequencies that were sampled.
For a given frequency response H(ω), we take L samples at the frequencies kf_s/L, k = 0, 1, ..., L − 1, to obtain H(k), k = 0, 1, ..., L − 1. The filter coefficients b_l can be obtained as the inverse DFT of these frequency samples. That is,
b_l = \frac{1}{L}\sum_{k=0}^{L-1} H(k)\, e^{\,j(2\pi/L)lk}, \quad l = 0, 1, \ldots, L-1.    (5.3.32)
The resulting filter will have a frequency response that is exactly the same as the original
response at the sampling instants. However, the response may be significantly different
between the samples. To obtain a good approximation to the desired frequency
response, we have to use a sufficiently large number of frequency samples. That is, we
have to use a large L.
Let {B_k} be the DFT of {b_l}, so that
B_k = \sum_{l=0}^{L-1} b_l\, e^{-j(2\pi/L)lk}, \quad k = 0, 1, \ldots, L-1,    (5.3.33)
and
b_l = \frac{1}{L}\sum_{k=0}^{L-1} B_k\, e^{\,j(2\pi/L)lk}, \quad l = 0, 1, \ldots, L-1.    (5.3.34)
Using the geometric series \sum_{l=0}^{L-1} x^l = \frac{1 - x^L}{1 - x} (see Appendix A.2), the desired filter's transfer function can be obtained as
H(z) = \sum_{l=0}^{L-1} b_l z^{-l} = \sum_{l=0}^{L-1}\left[\frac{1}{L}\sum_{k=0}^{L-1} B_k\, e^{\,j2\pi lk/L}\right] z^{-l}
= \frac{1 - z^{-L}}{L}\sum_{k=0}^{L-1} \frac{B_k}{1 - e^{\,j(2\pi k/L)}\, z^{-1}}.    (5.3.35)
This equation puts the transfer function into a recursive form, and H(z) can be viewed as a cascade of two filters: a comb filter, (1 − z^{−L})/L, as discussed in Section 5.2.2, and a sum of L first-order all-pole filters.
The problem is now to relate {B_k} to the desired sample set {H(k)} used in (5.3.32). In general, the frequency samples H(k) are complex, so a direct implementation of (5.3.35) would require complex arithmetic. To avoid this complication, we use the symmetry inherent in the frequency response of any FIR filter with a real impulse response h(n). Suppose that the desired amplitude response |H(ω)| is sampled such
that B_k and H(k) are equal in amplitude. For a linear-phase filter (assuming positive symmetry), we have
H_k \equiv |H(k)| = |H(L-k)| = |B_k|, \quad k \le \frac{L}{2}.    (5.3.36)
The remaining problem is to adjust the relative phases of the B_k so that H_k will provide a smooth approximation to |H(ω)| between the samples. The contributions from two adjacent samples to |H(ω)| are in phase at the sample points but 180 degrees out of phase between them [10]. Thus the two adjacent terms should be subtracted to provide a smooth reconstruction between sample points. Therefore B_k should be equal to H_k with alternating sign. That is,
B_k = (-1)^k H_k, \quad k \le \frac{L}{2}.    (5.3.37)
This is valid for L being odd or even, although it is convenient to assume that B₀ is zero and that B_{L/2} = 0 if L is even.
With these assumptions, the transfer function given in (5.3.35) can be further expressed as
H(z) = \frac{1 - z^{-L}}{L}\sum_{k \le L/2} (-1)^k H_k\left[\frac{1}{1 - e^{\,j2\pi k/L}\, z^{-1}} + \frac{1}{1 - e^{\,j2\pi(L-k)/L}\, z^{-1}}\right]
= \frac{2}{L}\left(1 - z^{-L}\right)\sum_{k \le L/2} (-1)^k H_k\, \frac{1 - \cos(2\pi k/L)\, z^{-1}}{1 - 2\cos(2\pi k/L)\, z^{-1} + z^{-2}}.    (5.3.38)
This equation shows that H(z) has poles at e^{±j2πk/L} on the unit circle in the z-plane. The comb filter (1 − z^{−L}) provides L zeros at z_k = e^{j2πk/L}, k = 0, 1, ..., L − 1, equally spaced around the unit circle. Each non-zero sample H_k brings to the digital filter a conjugate pair of poles, which cancels the corresponding pair of zeros at e^{±j2πk/L} on the unit circle. Therefore the filter defined in (5.3.38) is recursive, but it does not have poles in an essential sense, and it still has a finite impulse response.
Although the pole–zero cancellation is exact in theory, it will not be in practice because of finite wordlength effects in the digital implementation. The cos(2πk/L) terms in (5.3.38) are represented accurately only if enough bits are used. An effective solution to this problem is to move the poles and zeros slightly inside the unit circle, to a radius r, where r < 1.
Therefore (5.3.38) is modified to
H(z) = \frac{2}{L}\left(1 - r^L z^{-L}\right)\sum_{k \le L/2} (-1)^k H_k\, \frac{1 - r\cos(2\pi k/L)\, z^{-1}}{1 - 2r\cos(2\pi k/L)\, z^{-1} + r^2 z^{-2}}.    (5.3.39)
The modified transfer function can be viewed as a cascade of a comb filter
C(z) = \frac{2}{L}\left(1 - r^L z^{-L}\right)    (5.3.40)
and a bank of resonators
R_k(z) = \frac{1 - r\cos(2\pi k/L)\, z^{-1}}{1 - 2r\cos(2\pi k/L)\, z^{-1} + r^2 z^{-2}}, \quad 0 \le k \le L/2,    (5.3.41)
as illustrated in Figure 5.20. The resonators effectively act as narrowband filters, each passing only those frequencies centered at and close to its resonant frequency 2πk/L and excluding others outside this band. A bank of these filters weighted by the frequency samples H_k can then be used to synthesize the desired frequency response.
The difference equation representing the comb filter C(z) can be written in terms of the variables in Figure 5.20 as
u(n) = \frac{2}{L}\left[x(n) - r^L x(n - L)\right].    (5.3.42)
The block diagram of this modified comb filter is illustrated in Figure 5.21. An effective
technique to implement this comb filter is to use the circular buffer, which is available in
most modern DSP processors such as the TMS320C55x. The comb filter output u(n) is
common to all the resonators R_k(z) connected in parallel.
The resonator output, with u(n) as the common input, can be computed as
f_k(n) = u(n) - r\cos\!\left(\frac{2\pi k}{L}\right) u(n-1) + 2r\cos\!\left(\frac{2\pi k}{L}\right) f_k(n-1) - r^2 f_k(n-2)
= u(n) + r\cos\!\left(\frac{2\pi k}{L}\right)\left[2 f_k(n-1) - u(n-1)\right] - r^2 f_k(n-2), \quad 0 \le k \le L/2,    (5.3.43)
Figure 5.22 Detailed flow diagram of the kth resonator R_k(z)
where f_k(n) is the output of the kth resonator. The detailed flow diagram of the resonator is illustrated in Figure 5.22. Note that only one coefficient, r cos(2πk/L), is needed for each resonator. This significantly reduces the memory requirement compared with other second-order IIR bandpass filters.
Finally, the output of each resonator f_k(n) is weighted by H_k and combined into the overall output
y(n) = \sum_{k \le L/2} (-1)^k H_k\, f_k(n).    (5.3.44)
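A C sketch of this comb-plus-resonator structure, following (5.3.42)–(5.3.44), is given below. It assumes H₀ = 0 as discussed above; the names and buffer layout are illustrative and are not taken from the book's software package.

#include <math.h>

#define PI 3.141592653589793

/* Frequency-sampling filter: comb filter plus resonator bank.
   Hk[k] are the magnitude samples (Hk[0] assumed zero), r < 1.
   xbuf[] (length L) holds the last L inputs, idx is its write index,
   f[k][0..1] hold f_k(n-1) and f_k(n-2), u1 holds u(n-1).            */
double freq_samp_filter(double xn, const double Hk[], int L, double r,
                        double xbuf[], int *idx, double f[][2], double *u1)
{
    int    i  = *idx;
    double rL = pow(r, (double)L);
    double u, y = 0.0;
    int    k;

    u = (2.0 / L) * (xn - rL * xbuf[i]);     /* comb filter (5.3.42)        */
    xbuf[i] = xn;                            /* xbuf[i] was x(n-L)          */
    *idx = (i + 1) % L;

    for (k = 1; k <= L / 2; k++)             /* resonator bank (5.3.43)     */
    {
        double c  = r * cos(2.0 * PI * k / L);
        double fk = u + c * (2.0 * f[k][0] - *u1) - r * r * f[k][1];
        f[k][1] = f[k][0];                   /* shift the resonator states  */
        f[k][0] = fk;
        y += ((k & 1) ? -1.0 : 1.0) * Hk[k] * fk;    /* (5.3.44)            */
    }
    *u1 = u;                                 /* save u(n-1) for next call   */
    return y;
}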
L = 0.02/T = 200.
where
\Delta f = \frac{\omega_s - \omega_p}{2\pi}.    (5.4.2)
A highly efficient procedure based on the Remez exchange algorithm has been developed to design optimum linear-phase FIR filters using the Parks–McClellan algorithm. The algorithm uses the Remez exchange method and Chebyshev approximation theory to design a filter with an optimum fit between the desired and actual frequency responses. This algorithm is implemented as the M-file function remez, which is available in the Signal Processing Toolbox of MATLAB. There are several versions of this function:
b = remez(N, f, m);
b = remez(N, f, m, w);
b = remez(N, f, m, 'ftype');
b = remez(N, f, m, w, 'ftype');
The function returns the row vector b containing the N + 1 coefficients of the FIR filter of length L = N + 1. The vector f specifies the bandedge frequencies in the range between 0 and 1, where 1 corresponds to the Nyquist frequency f_N = f_s/2. The frequencies must be in increasing order, with the first element being 0 and the last element being 1. The desired values of the FIR filter magnitude response at the specified bandedge frequencies in f are given in the vector m, with the elements given in equal-valued pairs. The vectors f and m must have the same length, and the length must be an even number.
b = remez(17, f, m);
[h, omega] = freqz(b, 1, 512);
plot(f, m, omega/pi, abs(h));
The graph is shown in Figure 5.23.
The desired magnitude response in the passband and the stopband can be weighted by an additional vector w. The length of w is half the length of f and m. As shown in Figure 5.7, there are four types of linear-phase FIR filters. Types III (L even) and IV (L odd) are used for specialized filter designs: the Hilbert transformer and the differentiator. To design these two types of FIR filters, the arguments 'hilbert' and 'differentiator' are used for 'ftype' in the last two versions of remez.
Similar to remez, MATLAB also provides the firls function to design linear-phase FIR filters that minimize the weighted, integrated squared error between the ideal filter and the actual filter's magnitude response over a set of desired frequency bands. The synopsis of firls is identical to that of remez.
Two additional functions available in the MATLAB Signal Processing Toolbox, fir1 and fir2, can be used to design FIR filters using the windowed Fourier series method. The function fir1 designs windowed linear-phase lowpass, highpass, bandpass, and bandstop FIR filters with the following forms:
b = fir1(N, Wn);
b = fir1(N, Wn, 'filtertype');
b = fir1(N, Wn, window);
b = fir1(N, Wn, 'filtertype', window);
The basic form, b = fir1(N, Wn), generates the length L = N + 1 vector b containing the coefficients of a lowpass filter with a normalized cut-off frequency Wn between 0 and 1.
Figure 5.23 Magnitude responses of the desired and actual FIR filters
Discrete-time FIR filters designed in the previous section can be implemented in hardware, firmware, or software. Digital filters realized with digital computers or DSP hardware use quantized coefficients to process quantized signals. In this section, we discuss the software implementation of digital FIR filters using MATLAB and C to illustrate the main issues, and we consider finite wordlength effects. The DSP chip implementation using the TMS320C55x will be presented in the next section.
MATLAB provides the built-in function filter for FIR and IIR filtering. The basic form of this function is
y = filter(b, a, x)
For FIR filtering, a = 1 and the filter coefficients b_l are contained in the vector b. The input vector is x, and the output vector generated by the filter is y.
Example 5.12: The following C function fir.c implements the linear convolution (FIR filtering, inner product, or dot product) operation given in (5.2.1). The arrays x and h are declared to the proper dimension in the main program firfltr.c given in Appendix C.
/**************************************************************
 * FIR - This function performs FIR filtering (linear convolution)
 *              ntap-1
 *       y(n) =  sum   h[i]*x(n-i)
 *               i=0
 **************************************************************/
float fir(float *x, float *h, int ntap)
{
  float yn = 0.0;              /* Output of FIR filter */
  int i;                       /* Loop index */
  for (i = 0; i < ntap; i++)
  {
    yn += h[i]*x[i];           /* Convolution of x(n) with h(n) */
  }
  return(yn);                  /* Return y(n) to main function */
}
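A minimal usage sketch for fir() is shown below. It is not the book's driver firfltr.c; NTAP and process_sample() are illustrative names, and the buffer convention assumes x[0] holds the newest sample.

#define NTAP 48                          /* illustrative filter length      */

extern float fir(float *x, float *h, int ntap);

static float x[NTAP];                    /* signal buffer, x[i] = x(n-i)    */
static float h[NTAP];                    /* filter coefficients             */

float process_sample(float xn)
{
    int i;
    for (i = NTAP - 1; i > 0; i--)       /* shift the buffer: x(n-i) <- x(n-i+1) */
        x[i] = x[i - 1];
    x[0] = xn;                           /* insert the new sample x(n)      */
    return fir(x, h, NTAP);              /* y(n) from Example 5.12          */
}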
The FIR filtering defined in (5.2.1) can be implemented using DSP chips or special-purpose ASIC devices. Modern programmable DSP chips, such as the TMS320C55x, have an architecture that is optimized for the repetitive nature of multiplications and accumulations. They are also optimized for performing the memory moves required in updating the contents of the signal buffer, or for realizing circular buffers. The implementation of FIR filters using the TMS320C55x will be discussed in Section 5.6.
Consider an FIR filter transfer function given in (5.2.2). The filter coefficients, b_l, are determined by a filter design package such as MATLAB for the given specifications. These coefficients are usually represented by double-precision floating-point numbers and have to be quantized using a finite number of bits for a given fixed-point processor, such as 16 bits for the TMS320C55x. The filter coefficients are quantized only once in the design process, and those values remain constant in the filter implementation. We must check the quantized design; if it no longer meets the given specifications, we can optimize, redesign, restructure, and/or use more bits to satisfy the specifications. It is especially important to consider quantization effects when designing filters for implementation with fixed-point arithmetic.
Let b′_l denote the quantized value corresponding to b_l. As discussed in Chapter 3, the nonlinear quantization can be modeled as a linear operation expressed as
b'_l = b_l + e(l),    (5.5.1)
where e(l) is the quantization error, which can be assumed to be a uniformly distributed random noise of zero mean and variance defined in (3.5.6).
Quantization of the filter coefficients results in a new filter transfer function
H'(z) = \sum_{l=0}^{L-1} b'_l\, z^{-l} = \sum_{l=0}^{L-1}\left[b_l + e(l)\right] z^{-l} = H(z) + E(z),    (5.5.2)
where
E(z) = \sum_{l=0}^{L-1} e(l)\, z^{-l}    (5.5.3)
is the FIR filter representing the error in the transfer function due to coefficient
quantization. The FIR filter with quantized coefficients can be modeled as a parallel
connection of two FIR filters as illustrated in Figure 5.24.
The frequency response of the actual FIR filter with quantized coefficients b′_l can be expressed as
H'(\omega) = H(\omega) + E(\omega),    (5.5.4)
where
E(\omega) = \sum_{l=0}^{L-1} e(l)\, e^{-j\omega l}    (5.5.5)
represents the error in the desired frequency response H(ω). The error is bounded by
|E(\omega)| = \left|\sum_{l=0}^{L-1} e(l)\, e^{-j\omega l}\right| \le \sum_{l=0}^{L-1} |e(l)|\,|e^{-j\omega l}| = \sum_{l=0}^{L-1} |e(l)|.    (5.5.6)
As shown in (3.5.3),
|e(l)| \le \frac{\Delta}{2} = 2^{-B}.    (5.5.7)
Thus (5.5.6) becomes
|E(\omega)| \le L \cdot 2^{-B}.    (5.5.8)
This bound is too conservative, because it can be reached only if all the errors e(l) have the same sign and the maximum value in the range. A more realistic bound can be derived by assuming that the e(l) are statistically independent random variables. The standard deviation of E(ω) can then be obtained as
\sigma_{E(\omega)} = 2^{-B+1}\sqrt{\frac{2L+1}{12}}.    (5.5.9)
This bound can be used to estimate the wordlength of the FIR coefficients required to
meet the given filter specifications.
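The following C sketch quantizes a set of designed coefficients to Q15 and evaluates the deterministic bound (5.5.6). It is an illustration only, with illustrative names; the bound on |e(l)| is taken as half of the Q15 quantization step.

#include <math.h>
#include <stdio.h>

/* Quantize double-precision coefficients b[] (assumed in [-1, 1)) to Q15. */
void quantize_q15(const double b[], short bq[], int L)
{
    double esum = 0.0;
    int    l;

    for (l = 0; l < L; l++)
    {
        double v = floor(b[l] * 32768.0 + 0.5);      /* round to Q15         */
        double e;
        if (v >  32767.0) v =  32767.0;              /* saturate to Q15 range */
        if (v < -32768.0) v = -32768.0;
        bq[l] = (short)v;
        e = b[l] - v / 32768.0;                      /* e(l) as in (5.5.1)   */
        esum += fabs(e);
    }
    /* (5.5.6): |E(w)| <= sum |e(l)|; worst case is L times half the Q15 step */
    printf("sum|e(l)| = %g, worst case = %g\n", esum, L * 0.5 / 32768.0);
}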
As discussed in Section 3.6.3, the most effective technique in preventing overflow is
scaling down the magnitude of signals. The scaling factor used to prevent overflow in
computing the sum of products defined in (5.2.1) is given in (3.6.4) or (3.6.5).
As discussed in Section 5.2.3, most FIR filters are linear-phase, and the coefficients are constrained to satisfy the symmetry condition (5.2.15) or the anti-symmetry condition (5.2.18). Quantizing both sides of (5.2.15) or (5.2.18) yields the same quantized value for each l, which implies that the filter still has linear phase after quantization; only the magnitude response of the filter is changed. This constraint greatly reduces the sensitivity of the direct-form FIR filter implementation given in (5.2.1). There is no need to use the cascade form shown in Figure 5.9 for FIR filters, unlike IIR filters, which do require the cascade form. This issue will be discussed in the next chapter.
FIR filters are widely used in a variety of areas such as audio, video, wireless communications, and medical devices. For many practical applications, such as wireless communications (CDMA/TDMA), streamed video (MPEG/JPEG), and voice over internet protocol (VoIP), the digital samples are usually grouped in frames with time durations from a few milliseconds to several hundred milliseconds. It is more efficient for the C55x to process samples in frames (blocks). The FIR filter program fir.c given in Section 5.5.1 is designed for processing signals sample by sample. It can easily be modified to handle a block of samples; we call a filter that processes signals block by block a block filter. Example 5.13 is an example of a block FIR filter written in C.
    for (i = L - 1; i > 0; i--)
    {
      x[i] = x[i-1];           /* Shift old data x(n-i) */
    }
  }
  return;
}
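The listing above shows only the signal-buffer update of Example 5.13. A complete block-FIR routine along the same lines might look like the following sketch; it is not the book's exact code, and the names are illustrative.

/* Block FIR filter: in[]/out[] hold one block of M samples,
   x[] holds x(n), x(n-1), ..., x(n-L+1) and is carried over between blocks. */
void fir_block(const float in[], float out[], int M,
               const float h[], float x[], int L)
{
    int i, j;

    for (j = 0; j < M; j++)
    {
        float yn = 0.0f;
        x[0] = in[j];                     /* insert the new sample x(n)   */
        for (i = 0; i < L; i++)           /* y(n) = sum of h(i)*x(n-i)    */
            yn += h[i] * x[i];
        out[j] = yn;
        for (i = L - 1; i > 0; i--)       /* refresh the signal buffer    */
            x[i] = x[i - 1];              /* as in Figure 5.11            */
    }
}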
Simulation and emulation methods are commonly used in DSP software development. They are particularly useful for the study and analysis of DSP algorithms. By using a software signal generator, we can produce exactly the same signals repeatedly during the debugging and analysis processes. Table 5.1 lists the sinusoidal signal generator signal_gen.c, which is used to generate the experimental data file input5.dat for the experiments in this section.
/*
signal_gen.c Generate sinewaves as testing data in Q15 format
The difference equation (5.2.1) and Example 5.13 show that FIR filtering involves two different processes: (1) computing a sum of products generated by multiplying the incoming signals with the filter coefficients, and (2) updating the signal buffer to include a new sample. For an FIR filter of L coefficients, L multiplications, (L − 1) additions, and additional data-memory move operations are required for the complete filtering operation. Refreshing the signal buffer in Example 5.13 uses the memory shift shown in Figure 5.11; moving (L − 1) samples in the signal buffer to the next memory locations requires additional instruction cycles. These extensive operations make FIR filtering a computation-intensive task for general-purpose microprocessors.
The TMS320C55x has three important features to support FIR filtering: multiply–accumulate instructions, circular addressing modes, and zero-overhead nested loops. Using multiply–accumulate instructions, the C55x can perform both a multiplication and an addition, with rounding options, in one cycle. That is, the C55x can complete the computation of one filter tap in each cycle. In Example 5.13, the updating of the signal buffer by shifting data in memory requires many data-move operations. In practice, we can use circular buffers as shown in Figure 5.12. The FIR filtering in Example 5.13 can then be placed tightly into loops. To reduce the overhead of loop control, the loop counters in the TMS320C55x are handled in hardware, which supports three levels of zero-overhead nested loops using the BRC0, BRC1, and CSR registers.
The block FIR filter in Example 5.13 can be implemented with circular buffers using the following TMS320C55x assembly code:
      mov  #M-1,BRC0
      mov  #L-3,CSR
  ||  rptblocal sample_loop-1    ; Start the outer loop
      mov  *AR0+,*AR3            ; Put the new sample into the signal buffer
      mpym *AR3+,*AR1+,AC0       ; Do the 1st operation
  ||  rpt  CSR                   ; Start the inner loop
      macm *AR3+,*AR1+,AC0
      macmr *AR3,*AR1+,AC0       ; Do the last operation
      mov  hi(AC0),*AR2+         ; Save result in Q15 format
sample_loop
Four auxiliary registers, AR0–AR3, are used as pointers in this example. AR0 points to the input buffer in[]. The signal buffer x[], containing the current input x(n) and the L − 1 old samples, is pointed at by AR3. The filter coefficients in the array h[] are pointed at by AR1. For each iteration, a new sample is placed into the signal buffer, and the inner loop repeats the multiply–accumulate instructions. Finally, the filter output y(n) is rounded and stored in the output buffer out[], which is pointed at by AR2.
Both AR1 and AR3 use the circular addressing mode. At the end of the computation, the coefficient pointer AR1 will have wrapped around, thus pointing at the first coefficient again. The signal buffer pointer AR3 will point at the oldest sample, x(n − L + 1). In the next iteration, AR1 will start from the first tap, while the oldest sample in the signal buffer will be replaced with the new input sample, as shown in Figure 5.12.
In the assembly code, we initialize the repeat counter CSR with the value L − 3 for L − 2 iterations of the inner loop. This is because we use a multiplication instruction before the loop and a multiply–accumulate-with-rounding instruction after the repeat loop. Moving instructions outside the repeat loop in this way is called loop unrolling. Clearly, with this loop unrolling the FIR filter must have at least three coefficients. The complete assembly program fir.asm is given in the experimental software package.
The C program exp5a.c listed in Table 5.2 will be used for Experiment 5A. It uses the data file input5.dat as input and calls fir() to perform lowpass filtering. Since we use circular buffers, we define a global variable index as the signal buffer index for tracking the starting position of the signal buffer for each sample block. The C55x compiler supports several pragma directives. We apply the two most frequently used directives, CODE_SECTION and DATA_SECTION, to allocate the C functions' program and data variables for the experiments. For a complete list of C pragma directives that the compiler supports, please refer to the TMS320C55x Optimizing C Compiler User's Guide [11].
/*
exp5a.c Block FIR filter experiment using input data file
*/
#define M 128 /* Input sample size */
#define L 48 /* FIR filter order */
#define SN L /* Signal buffer size */
extern unsigned int fir(int *, unsigned int, int *, unsigned int,
int *, int *, unsigned int);
/* Input data */
#include "input5.dat"
EXPERIMENTS USING THE TMS320C55X 229
void main(void)
{
unsigned int i,j;
where the var_name is the variable name contained in the C function that will be
allocated into the data section defined by the section_name. The linker command file
uses this name for data section allocation to system memory.
Go through the following steps for Experiment 5A:
1. Copy the assembly program fir.asm, the C function epx5a.c, the linker com-
mand file exp5.cmd, and the experimental data input5.dat from the software
package to the working directory.
2. Create the project exp5a and add the files fir.asm, epx5a.c, and exp5.cmd to
the project.
3. Build, debug, and run the project exp5a. The lowpass filter will attenuate two
higher frequency components at 1800 and 3300 Hz and pass the 800 Hz sinewave.
Figure 5.25 shows the time-domain and frequency-domain plots of the input and
output signals.
4. Use the CCS animation capability to view the FIR filtering process frame by frame.
Profile the FIR filter to record the memory usage and the average C55x cycles used
for processing one block of 128 samples.
As shown in Figure 5.7, a symmetric FIR filter has impulse response coefficients that are symmetric about the center index. Type I FIR filters have an even number of symmetric coefficients, while Type II filters have an odd number. For the even-length symmetric FIR filter shown in Figure 5.8, only the first half of the filter coefficients is needed to compute the filter output.
The TMS320C55x has two special instructions, firsadd and firssub, for implementing symmetric and anti-symmetric FIR filters. The former can be used to compute the symmetric FIR filter given in (5.2.23), while the latter can be used for the anti-symmetric FIR filter defined in (5.2.24). The syntax of the symmetric and anti-symmetric filter instructions is
firsadd Xmem,Ymem,Cmem,ACx,ACy   ; Symmetric FIR filter
firssub Xmem,Ymem,Cmem,ACx,ACy   ; Anti-symmetric FIR filter
Figure 5.25 Input and output signals of Experiment 5A: (a) input signals in the frequency (top) and time (bottom) domains, and (b) output signals in the frequency (top) and time (bottom) domains
where Xmem and Ymem are the signal buffers for {x(n), x(n − 1), ..., x(n − L/2 + 1)} and {x(n − L/2), ..., x(n − L + 1)}, and Cmem is the coefficient buffer.
For a symmetric FIR filter, the firsadd instruction is equivalent to performing the following parallel instructions in one cycle:
      macm *CDP,ACx,ACy      ; ACy += bl*[x(n-l) + x(n+l-L+1)]
  ||  add  *ARx,*ARy,ACx     ; ACx = x(n-l-1) + x(n+l-L+2)
While the macm instruction carries out the multiply–accumulate portion of the symmetric filter operation, the add instruction adds up the pair of samples needed for the next iteration. This parallel arrangement effectively speeds up the computation of symmetric FIR filters. The following assembly program shows an implementation of a symmetric FIR filter using the TMS320C55x:
Although the assembly program of the symmetric FIR filter is similar to that of the regular FIR filter, there are several differences when we implement it using the firsadd instruction: (1) We only need to store the first half of the symmetric FIR filter coefficients. (2) The inner repeat loop is set to L/2 − 2 iterations, since each multiply–accumulate operation accounts for a pair of samples. (3) In order to use firsadd instructions inside a repeat loop, we add the first pair of signal samples using a dual-memory add instruction, add *AR1,*AR3,AC1. We also place the last instruction, macmr *CDP,AC1,AC0, outside the repeat loop for the final calculation. (4) We use two data pointers, AR1 and AR3, to address the signal buffer. AR3 points at the newest sample in the buffer, and AR1 points at the oldest sample in the buffer. The temporary registers T1 and T0 are used as the offsets for updating the circular buffer pointers. The offsets are initialized to T0 = L/2 and T1 = L/2 − 2. After AR3 and AR1 are updated, they will point to the newest and the oldest samples again. Figure 5.26 illustrates this two-pointer circular buffer for symmetric FIR filtering. The firsadd instruction accesses three data buses simultaneously (Xmem and Ymem for signal samples and Cmem for the filter coefficient). The coefficient pointer CDP is set as the circular pointer for the coefficients. The input and output samples are pointed at by AR0 and AR2.
Two implementation issues should be considered: (1) The symmetric FIR filtering instruction firsadd adds two corresponding samples and then performs the multiplication. The addition may cause an undesired overflow. (2) The firsadd instruction performs three memory reads in the same cycle, which may cause data memory bus contention. The first problem can be resolved by scaling the new sample to Q14 format
Figure 5.26 Circular buffer for accessing signals for symmetric FIR filtering. The pointers to x(n) and x(n − L + 1) are updated in the counterclockwise direction: (a) circular buffer for a symmetric FIR filter at time n, and (b) circular buffer for a symmetric FIR filter at time n + 1
prior to saving it to the signal buffer. The filter result needs to be scaled back before it
can be stored into the output buffer. The second problem can be resolved by placing the
filter coefficient buffer and the signal buffer into different memory blocks.
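For reference, the computation that the firsadd loop performs can be modeled in C as follows. This is an illustrative sketch assuming an even L and is not the book's code; x[i] is assumed to hold x(n − i), and only the first L/2 coefficients are stored.

/* Symmetric FIR filter (even L):
   y(n) = sum_{l=0}^{L/2-1} b_l * [x(n-l) + x(n-L+1+l)]            */
float fir_symm(const float b[], const float x[], int L)
{
    float y = 0.0f;
    int   l;
    for (l = 0; l < L / 2; l++)
        y += b[l] * (x[l] + x[L - 1 - l]);   /* pair x(n-l) with x(n-L+1+l) */
    return y;
}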
1. Copy the assembly program firsymm.asm, the C function epx5b.c, and the linker
command file exp5.cmd from the software package into the working directory.
2. Create the project exp5b and add the files firsymm.asm, epx5b.c, and
exp5.cmd into the project.
4. Build and run the project exp5b. Compare the results with Figure 5.25.
5. Profile the symmetric filter performance and record the memory usage. How many
instruction cycles were reduced as a result of using the symmetric FIR filter
implementation? How many memory locations have been saved?
In this example, ARx and ARy are data pointers to x(n) and x(n + 1), respectively, and CDP is the coefficient pointer. The repeat loop produces two filter outputs, y(n) and y(n + 1). After execution, the addresses in the data pointers ARx and ARy are increased by one. The coefficient pointer CDP is also incremented by one, even though it is set for auto-increment mode in both instructions; when the CDP pointer is used in parallel instructions, it can be incremented only once. Figure 5.27 shows the C55x dual-MAC architecture for FIR filtering. The CDP uses the B-bus to fetch filter coefficients, while ARx and ARy use the C-bus and D-bus to get data from the signal buffer. The dual-MAC filtering results are temporarily stored in the accumulators ACx and ACy.
The following example shows the C55x assembly implementation using the dual-MAC units and circular buffers for a block FIR filter:
      mov  #M-1,BRC0                 ; Outer loop counter
      mov  #(L/2-3),CSR              ; Inner loop counter, L/2-2 iterations
  ||  rptblocal sample_loop-1
      mov  *AR0+,*AR1                ; Put new sample at signal buffer x[n]
      mov  *AR0+,*AR3                ; Put next new sample at location x[n+1]
      mpy  *AR1+,*CDP+,AC0           ; First operation
  ::  mpy  *AR3+,*CDP+,AC1
  ||  rpt  CSR
      mac  *AR1+,*CDP+,AC0           ; Rest of the MAC iterations
  ::  mac  *AR3+,*CDP+,AC1
      macr *AR1,*CDP+,AC0
  ::  macr *AR3,*CDP+,AC1            ; Last MAC operation
      mov  pair(hi(AC0)),dbl(*AR2+)  ; Store the two output samples
sample_loop
There are three implementation issues to be considered: (1) In order to use the dual-MAC units, we need to increase the length of the signal buffer by one to accommodate the extra memory location required for computing two output signals. With this additional space in the buffer, we can form two sample sequences in the signal buffer, one pointed at by AR1 and the other by AR3. (2) The dual-MAC implementation of the FIR filter also makes three memory reads simultaneously. Two memory reads are used to get data samples from the signal buffer into the MAC units, and the third is used to fetch the filter coefficient. To avoid memory bus contention, we should place the coefficients in a different memory block. (3) When we use the dual-MAC units, the two convolution sums are kept in two accumulators, so storing both filter results requires two memory-store instructions. It is more efficient to use the dual-memory-store instruction, mov pair(hi(AC0)),dbl(*AR2), to save both outputs y(n) and y(n + 1) to data memory in the same cycle. However, this requires the data memory to be
aligned on an even word boundary. This alignment can be done in the linker command file with the keyword align 4 (see the linker command file). We use the DATA_SECTION pragma directive to tell the linker where to place the output sequence.
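The dual-MAC arrangement can also be modeled in C: each pass over the coefficients produces two outputs while sharing each coefficient fetch. The sketch below is illustrative only and assumes the signal buffer carries the extra sample mentioned in point (1), with x[i] holding x(n + 1 − i).

/* Two outputs per pass, sharing the coefficient fetch h[i].
   The buffer must hold L+1 samples: x[0] = x(n+1), ..., x[L] = x(n-L+1). */
void fir_dual(const float h[], const float x[], int L,
              float *y0, float *y1)
{
    float a0 = 0.0f, a1 = 0.0f;      /* model the two accumulators       */
    int   i;
    for (i = 0; i < L; i++)
    {
        a1 += h[i] * x[i];           /* y(n+1) uses x(n+1-i)             */
        a0 += h[i] * x[i + 1];       /* y(n)   uses x(n-i)               */
    }
    *y0 = a0;
    *y1 = a1;
}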
Go through the following steps for Experiment 5C:
1. Copy the assembly program fir2macs.asm, the C function epx5c.c, and the linker command file exp5.cmd from the software package into the working directory.
2. Create the project exp5c and add the files fir2macs.asm, epx5c.c, and exp5.cmd into the project.
3. Build and run the project. Compare the results with the results from the two
previous experiments.
4. Profile the filter performance and record the memory usage. How many instruction
cycles were reduced by using the dual-MAC implementation? Why is the dual-MAC
implementation more efficient than the symmetric FIR implementation?
References
[1] N. Ahmed and T. Natarajan, Discrete-Time Signals and Systems, Englewood Cliffs, NJ: Prentice-
Hall, 1983.
[2] V. K. Ingle and J. G. Proakis, Digital Signal Processing Using MATLAB V.4, Boston: PWS
Publishing, 1997.
[3] Signal Processing Toolbox for Use with MATLAB, Math Works, 1994.
[4] A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing, Englewood Cliffs, NJ:
Prentice-Hall, 1989.
[5] S. J. Orfanidis, Introduction to Signal Processing, Englewood Cliffs, NJ: Prentice-Hall, 1996.
[6] J. G. Proakis and D. G. Manolakis, Digital Signal Processing – Principles, Algorithms, and Applications, 3rd Ed., Englewood Cliffs, NJ: Prentice-Hall, 1996.
[7] S. K. Mitra, Digital Signal Processing: A Computer-Based Approach, 2nd Ed., New York, NY:
McGraw Hill, 1998.
[8] D. Grover and J. R. Deller, Digital Signal Processing and the Microcontroller, Englewood Cliffs,
NJ: Prentice-Hall, 1999.
[9] F. Taylor and J. Mellott, Hands-On Digital Signal Processing, New York, NY: McGraw Hill,
1998.
[10] S. D. Stearns and D. R. Hush, Digital Signal Analysis, 2nd Ed., Englewood Cliffs, NJ: Prentice-
Hall, 1990.
[11] Texas Instruments, Inc., TMS320C55x Optimizing C Compiler User's Guide, Literature no.
SPRU281, 2000.
Exercises
Part A
1. Consider the moving-average filter given in Example 5.4. What is the 3-dB bandwidth of this
filter if the sampling rate is 8 kHz?
2. Consider the FIR filter with the impulse response h(n) = {1, 1, 1}. Calculate the magnitude and phase responses and show that the filter has linear phase.
h(n) = a^n u(n),
y(n) = \frac{1 - a^{n+1}}{1 - a}, \quad n \ge 0.
show that
(a) r(n) = u(n)*u(n), where * denotes linear convolution and r(n) = (n + 1)u(n) is called the unit-ramp sequence.
(b) t(n) = x(n)*x(n) = r(n) − 2r(n − L) + r(n − 2L) is the triangular pulse.
5. Use the graphical interpretation of linear convolution given in Figure 5.4 to compute the linear convolution of h(n) = {1, 2, 1} and x(n), n = 0, 1, 2, defined as follows:
(a) x(n) = {1, 1, 2},
(b) x(n) = {1, 2, 1}, and
(c) x(n) = {1, 3, 1}.
find the transfer function, zeros, and the magnitude response of this filter and compare the
results with Figure 5.6.
8. Assuming h(n) has the symmetry property h(−n) = h(n) for n = 0, 1, ..., M, show that H(ω) can be expressed as
H(\omega) = h(0) + 2\sum_{n=1}^{M} h(n)\cos(\omega n).
y(n) = \frac{1}{T}\left[x(n) - x(n-1)\right].
Find the transfer function H(z), the frequency response H(ω), and the phase response of the differentiator.
10. Redraw the signal-flow diagram shown in Figure 5.8 and modify equations (5.2.23) and (5.2.24) for the case where L is an odd number.
11. Consider the rectangular window w(n) of length L = 2M + 1 defined in (5.3.17). Show that the convolution of w(n) with itself, divided by L, yields the triangular window.
12. Assuming that H(ω) given in (5.3.2) is an even function in the interval |ω| < π, show that
h(n) = \frac{1}{\pi}\int_{0}^{\pi} H(\omega)\cos(\omega n)\, d\omega, \quad n \ge 0,
and h(−n) = h(n).
13. Design a lowpass FIR filter of length L = 5 with linear phase to approximate the ideal lowpass filter of cut-off frequency ω_c = 1. Use the Hamming window to eliminate the ripples in the magnitude response.
14. The ideal highpass filter of Figure 5.1(b) has frequency response
H(\omega) = \begin{cases} 0, & |\omega| < \omega_c \\ 1, & \omega_c \le |\omega| \le \pi. \end{cases}
Design an FIR filter to approximate this ideal highpass filter with a sampling rate of 8 kHz and an impulse response duration of 50 ms, using the Fourier series method.
Part B
16. Consider the FIR filters with the following impulse responses:
(a) h(n) = {4, 1, 1, 2, 5, 0, 5, 2, 1, 1, 4}
(b) h(n) = {4, 1, 1, 2, 5, 6, 5, 2, 1, 1, 4}
Use MATLAB to plot the magnitude responses, phase responses, and locations of the zeros of the FIR filters' transfer functions H(z).
17. Show the frequency response of the lowpass filter given in (5.2.10) for L = 8 and compare the result with Figure 5.6.
18. Plot the magnitude response of a linear-phase FIR highpass filter of cut-off frequency ω_c = 0.6π obtained by truncating the impulse response of the ideal highpass filter to length L = 2M + 1 for M = 32 and 64.
19. Repeat problem 18 using the Hamming and Blackman window functions. Show that the oscillatory behavior is reduced using the windowed Fourier series method.
20. Write a C (or MATLAB) program that implements a comb filter with L = 8. The program must have the input/output capability introduced in Appendix C. Test the filter using sinusoidal signals of frequencies ω₁ = π/4 and ω₂ = 3π/8. Explain the results based on the distribution of the zeros of the filter.
22. Rewrite the program firfltr.c given in Appendix C using a circular buffer. Implement the circular pointer update in a new C function to replace the function shift.c.
Part C
23. Based on the assembly routines given in Experiments 5A, 5B, and 5C, what is the minimum
number of the FIR filter coefficients if the FIR filter is
(a) symmetric and L is even,
(b) symmetric and L is odd,
(c) anti-symmetric and L is even,
(d) anti-symmetric and L is odd.
Do we need to modify these routines if the FIR filter has an odd number of taps?
24. Design a 24th-order bandpass FIR filter using MATLAB. The filter will attenuate the
800 Hz and 3.3 kHz frequency components of the signal generated by the signal generator
signal_gen(). Implement this filter using the C55x assembly routines fir.asm,
firsymm.asm, and fir2macs.asm. Plot the filter results in both the time domain and
the frequency domain.
25. When designing highpass or bandstop FIR filters using MATLAB, the number of filter coefficients must be an odd number. This ensures unity gain at the half-sampling frequency. Design a highpass FIR filter that will pass the 3.3 kHz frequency component of the input signal. Implement this filter using the dual-MAC block FIR filter. Plot the results in both the time domain and the frequency domain. (Hint: modify the assembly routine fir2macs.asm to handle an odd number of coefficients.)
26. Design an anti-symmetric bandpass FIR filter that allows only the frequency component at 1.8 kHz to pass. Use the firssub instruction to implement the FIR filter, and plot the filter results in both the time domain and the frequency domain.
27. Experiment 5B demonstrates a symmetric FIR filter implementation. This filter can also be implemented efficiently using the C55x dual-MAC architecture. Modify the dual-MAC FIR filter assembly routine fir2macs.asm to implement Experiment 5B based on Equation (5.2.23). Compare the profiling results with those of Experiment 5B, which uses the symmetric FIR filter instruction firsadd.
28. Use TMS320C55x EVM (or DSK) for collecting real-time signal from an analog signal
generator.
6
Design and Implementation
of IIR Filters
We have discussed the design and implementation of digital FIR filters in the previous chapter. In this chapter, our attention will be focused on the design, realization, and implementation of digital IIR filters. The design of an IIR filter amounts to determining a transfer function H(z) that satisfies the given specifications. We will discuss the basic characteristics of digital IIR filters and familiarize ourselves with the fundamental techniques used for the design and implementation of these filters. For a given number of coefficients, IIR filters provide the sharpest roll-off and the lowest stopband sidelobes.
Digital IIR filters can be obtained by beginning with the design of an analog filter and then using a mapping technique to transform it from the s-plane into the z-plane. The Laplace transform is introduced in Section 6.1, and analog filters are discussed in Section 6.2. The impulse-invariant and bilinear-transform methods for designing digital IIR filters are introduced in Section 6.3, and the realization of IIR filters using direct, cascade, and parallel forms is introduced in Section 6.4. Filter design using MATLAB is described in Section 6.5, and implementation considerations are given in Section 6.6. Software development and experiments using the TMS320C55x are given in Section 6.7.
transform. Given a positive-time function x(t), with x(t) = 0 for t < 0, a simple way to find the Fourier transform is to multiply x(t) by a convergence factor e^{−σt}, where σ is a positive number such that
\int_{0}^{\infty} x(t)\, e^{-\sigma t}\, dt < \infty.    (6.1.1)
Taking the Fourier transform defined in (4.1.10) of the composite function x(t)e^{−σt}, we have
X(s) = \int_{0}^{\infty} \left[x(t)\, e^{-\sigma t}\right] e^{-j\Omega t}\, dt = \int_{0}^{\infty} x(t)\, e^{-(\sigma + j\Omega)t}\, dt = \int_{0}^{\infty} x(t)\, e^{-st}\, dt,    (6.1.2)
where
s = \sigma + j\Omega    (6.1.3)
is a complex variable. This is called the one-sided Laplace transform of x(t) and is denoted by X(s) = LT[x(t)]. Table 6.1 lists the Laplace transforms of some simple time functions.
From Table 6.1, a ↔ a/s and e^{−ct} ↔ 1/(s + c), so that
X(s) = \frac{a}{s} + \frac{b}{s + c}.
The integral is evaluated along the straight line s = σ + jΩ in the complex plane from Ω = −∞ to Ω = ∞, which is parallel to the imaginary axis jΩ at a distance σ from it.
Table 6.1 Laplace transforms of some simple time functions

x(t), t ≥ 0             X(s)
δ(t)                    1
u(t)                    1/s
c                       c/s
ct                      c/s²
ct^{n−1}                c(n − 1)!/sⁿ
e^{−at}                 1/(s + a)
sin Ω₀t                 Ω₀/(s² + Ω₀²)
cos Ω₀t                 s/(s² + Ω₀²)
x(t) cos Ω₀t            (1/2)[X(s − jΩ₀) + X(s + jΩ₀)]
x(t) sin Ω₀t            (j/2)[X(s + jΩ₀) − X(s − jΩ₀)]
e^{−at} x(t)            X(s + a)
x(at)                   (1/a) X(s/a)
Equation (6.1.2) clearly shows that the Laplace transform is actually the Fourier transform of the function x(t)e^{−σt}, t ≥ 0. From (6.1.3), we can think of a complex s-plane with a real axis σ and an imaginary axis jΩ. For values of s along the jΩ axis, i.e., σ = 0, we have
X(s)\big|_{s = j\Omega} = \int_{0}^{\infty} x(t)\, e^{-j\Omega t}\, dt,    (6.1.5)
which is the Fourier transform of the causal signal x(t). Given a function X(s), we can find its frequency characteristics by setting s = jΩ.
There are convolution properties associated with the Laplace transform. If
y(t) = x(t) * h(t) = \int_{0}^{\infty} x(\tau)\, h(t - \tau)\, d\tau = \int_{0}^{\infty} h(\tau)\, x(t - \tau)\, d\tau,    (6.1.6)
then
Y(s) = H(s)\, X(s),    (6.1.7)
where Y(s), H(s), and X(s) are the Laplace transforms of y(t), h(t), and x(t), respectively. Thus convolution in the time domain is equivalent to multiplication in the Laplace (or frequency) domain.
In (6.1.7), H(s) is the transfer function of the system, defined as
H(s) = \frac{Y(s)}{X(s)} = \int_{0}^{\infty} h(t)\, e^{-st}\, dt,    (6.1.8)
where h(t) is the impulse response of the system. The general form of a transfer function is expressed as
H(s) = \frac{b_0 + b_1 s + \cdots + b_{L-1} s^{L-1}}{a_0 + a_1 s + \cdots + a_M s^{M}} = \frac{N(s)}{D(s)}.    (6.1.9)
The roots of N(s) are the zeros of the transfer function H(s), while the roots of D(s) are the poles.
Example 6.2: The input signal x(t) = e^{−2t}u(t) is applied to an LTI system, and the output of the system is given as
y(t) = \left(e^{-t} + e^{-2t} - e^{-3t}\right) u(t).
Find the system's transfer function H(s) and the impulse response h(t).
From Table 6.1, we have
X(s) = \frac{1}{s + 2} \quad \text{and} \quad Y(s) = \frac{1}{s + 1} + \frac{1}{s + 2} - \frac{1}{s + 3}.
Thus
H(s) = \frac{Y(s)}{X(s)} = \frac{s^2 + 6s + 7}{(s + 1)(s + 3)} = 1 + \frac{1}{s + 1} + \frac{1}{s + 3},
and from Table 6.1,
h(t) = \delta(t) + \left(e^{-t} + e^{-3t}\right) u(t).
The stability condition for a system can be represented in terms of its impulse response h(t) or its transfer function H(s). A system is stable if
\int_{0}^{\infty} |h(t)|\, dt < \infty.    (6.1.10)
This condition is equivalent to requiring that all the poles of H(s) lie in the left half of the s-plane, i.e., σ < 0.
For example, consider the system with impulse response h(t) = e^{−at}u(t). This function satisfies (6.1.10) for a > 0. From Table 6.1, the transfer function
H(s) = \frac{1}{s + a}, \quad a > 0,
has a pole at s = −a, which is located in the left-half s-plane. Thus the system is stable.
If lim_(t→∞) h(t) → ∞, the system is unstable. This condition is equivalent to the system
having one or more poles in the right-half s-plane, or multiple-order pole(s) on the
jΩ axis. The system is marginally stable if h(t) approaches a non-zero value or a
bounded oscillation as t approaches infinity. If the system is stable, then the natural
response goes to zero as t → ∞; in this case, the natural response is also called the
transient response. If the input signal is periodic, the corresponding forced
response is called the steady-state response. When the input signal is sinusoidal,
in the form sin Ωt, cos Ωt, or e^(jΩt), the steady-state output is called the
sinusoidal steady-state response.
An analog signal x(t) can be converted into a train of narrow pulses x(nT) as

x(nT) = x(t) δ_T(t),  (6.1.11)

where

δ_T(t) = Σ_(n=−∞)^(∞) δ(t − nT)  (6.1.12)

represents a unit impulse train and is called a sampling function. Clearly, δ_T(t) is not
a signal that we could generate physically, but it is a useful mathematical abstraction
when dealing with discrete-time signals. Assuming that x(t) = 0 for t < 0, we
have
x(nT) = x(t) Σ_(n=−∞)^(∞) δ(t − nT) = Σ_(n=0)^(∞) x(nT) δ(t − nT).  (6.1.13)
To obtain the frequency characteristics of the sampled signal, we take the Laplace
transform of x(nT) given in (6.1.13). Integrating term by term and using the sifting property
of the impulse function, ∫_(−∞)^(∞) x(t) δ(t − τ) dt = x(τ), we obtain

X(s) = ∫_(−∞)^(∞) [Σ_(n=0)^(∞) x(nT) δ(t − nT)] e^(−st) dt = Σ_(n=0)^(∞) x(nT) e^(−nsT).  (6.1.14)

Defining

z = e^(sT),  (6.1.15)

we have

X(z) = X(s)|_(z=e^(sT)) = Σ_(n=0)^(∞) x(nT) z^(−n),  (6.1.16)

where X(z) is the z-transform of the discrete-time signal x(nT). Thus the z-transform can
be viewed as the Laplace transform of the sampled function x(t) with the change of
variable z = e^(sT).
As discussed in Chapter 4, the Fourier transform of a sequence x(nT) can be obtained
from the z-transform by replacing z with e^(jω), that is, by evaluating the z-transform on
the unit circle |z| = 1. The whole procedure is summarized in Figure 6.1.
The relationship z = e^(sT) defined in (6.1.15) represents the mapping of a region in the s-plane
onto the z-plane, since both s and z are complex variables. Since s = σ + jΩ, we have

z = e^(σT) e^(jΩT).  (6.1.17)

Therefore

|z| = e^(σT)  (6.1.18a)

and

ω = ΩT.  (6.1.18b)

[Figure 6.1 Relationships among the Laplace transform, the z-transform, and the Fourier transform of a sampled signal]

[Figure 6.2 Mapping of the s-plane onto the z-plane by z = e^(sT): the jΩ-axis (σ = 0) maps onto the unit circle |z| = 1, the left-half plane (σ < 0) maps inside it, and the right-half plane (σ > 0) maps outside it]
6.2 Analog Filters

In this section, we briefly introduce some basic concepts of analog filters. Knowledge of
analog filter transfer functions is readily available since analog filters have been
investigated in great detail. In Section 6.3, we will introduce the powerful and widely used
bilinear-transform method for designing digital IIR filters from analog prototypes.
From basic circuit theory, capacitors and inductors have impedances (X) that
depend on frequency, expressed as

X_C = 1/(jΩC)  (6.2.1)

and

X_L = jΩL,  (6.2.2)

where C is the capacitance in farads (F) and L is the inductance in henrys (H).
When either component is combined with a resistor, we can build
frequency-dependent voltage dividers. In general, capacitors and resistors are used to
design analog filters, since inductors are bulky, more expensive, and do not perform as
well as capacitors.

Consider the simple RC circuit shown in Figure 6.3. Treating it as a voltage divider, the frequency response is

H(Ω) = V_out/V_in = R/(R + 1/(jΩC)) = jΩRC/(1 + jΩRC).  (6.2.3)
[Figure 6.3 A simple RC circuit with input V_in and output V_out taken across the resistor R]

[Figure 6.4 Magnitude response |H(Ω)| of the RC circuit versus frequency Ω]
The plot of the magnitude response |H(Ω)| versus the frequency Ω is shown in Figure
6.4. For a constant input voltage, the output is approximately equal to the input at
high frequencies, and the output approaches zero at low frequencies. Therefore
the circuit shown in Figure 6.3 is called a highpass filter, since it only allows high
frequencies to pass without attenuation.
The transfer function of the circuit shown in Figure 6.3 is given by

H(s) = Y(s)/X(s) = R/(R + 1/(Cs)) = RCs/(1 + RCs).  (6.2.4)
To design an analog filter, we can use computer programs to calculate the correct values
of the resistor and the capacitor for desired magnitude and phase responses. Unfortu-
nately, the characteristics of the components drift with temperature and time. It is
sometimes necessary to re-tune the circuit while it is being used.
The magnitude-squared response of an Lth-order Butterworth lowpass filter with cut-off frequency Ωp is

|H(Ω)|² = 1/[1 + (Ω/Ωp)^(2L)],  (6.2.5)

where L is the order of the filter. It can be shown that |H(0)| = 1 and |H(Ωp)| = 1/√2 or,
equivalently, 20 log₁₀ |H(Ωp)| ≈ −3 dB for all values of L. Thus Ωp is called the 3-dB
cut-off frequency.

[Figure: magnitude response of a lowpass filter, showing the passband ripple δp, the stopband attenuation δs, and the edge frequencies Ωp and Ωs]
The poles of the Butterworth filter are the roots of

1 + (−s²)^L = 0,  (6.2.8)

which are given by

s_k = e^(j(2k+L−1)π/2L),  k = 0, 1, ..., 2L − 1.  (6.2.9)

These poles are located uniformly on a unit circle in the s-plane at intervals of π/L
radians. The pole locations are symmetrical with respect to both the real and imaginary
axes. Since 2k − 1 cannot be an even number, no poles fall on the
jΩ axis, and there are exactly L poles in each of the left- and right-half planes.

To obtain a stable Lth-order IIR filter, we choose only the poles in the left-half s-plane.
That is, we choose

s_k = e^(j(2k+L−1)π/2L),  k = 1, 2, ..., L,  (6.2.10)

so that

H(s) = 1/[(s − s₁)(s − s₂)···(s − s_L)] = 1/(s^L + a_(L−1)s^(L−1) + ··· + a₁s + 1).  (6.2.11)

The coefficients a_k are real numbers because the chosen poles s_k occur in complex-conjugate pairs.
Table 6.2 lists the denominator of the Butterworth filter transfer
function H(s) in factored form for values of L ranging from L = 1 to L = 4.
Table 6.2 Butterworth transfer functions H(s) in factored form

L    H(s)
1    1/(s + 1)
2    1/(s² + √2 s + 1)
3    1/[(s + 1)(s² + s + 1)]
4    1/[(s² + 0.7653s + 1)(s² + 1.8477s + 1)]

Example 6.5: Obtain the transfer function of a lowpass Butterworth filter for
L = 3. From (6.2.9), the six poles are uniformly spaced on the unit circle at angles kπ/3,
k = 0, 1, ..., 5. These poles are shown in Figure 6.6. To obtain a stable IIR filter, we choose the
poles in the left-half plane to get

H(s) = 1/[(s − s₁)(s − s₂)(s − s₃)]
     = 1/[(s − e^(j2π/3))(s − e^(jπ))(s − e^(j4π/3))]
     = 1/[(s + 1)(s² + s + 1)].
Chebyshev filters permit a certain amount of ripples in the passband, but have a much
steeper roll-off near the cut-off frequency than what the Butterworth design can achieve.
The Chebyshev filter is called the equiripple filter because the ripples are always of equal
size throughout the passband. Even if we place very tight limits on the passband ripple,
the improvement in roll-off is considerable when compared with the Butterworth filter.
There are two types of Chebyshev filters. Type I Chebyshev filters are all-pole filters
that exhibit equiripple behavior in the passband and a monotonic characteristic in the
stopband (see Figure 6.7(a)). The family of type II Chebyshev filters contains both poles
and zeros, and exhibits a monotonic behavior in the passband and an equiripple behavior
in the stopband, as shown in Figure 6.7(b). In general, the Chebyshev filter meets the
specifications with fewer poles than the corresponding Butterworth filter.
Although the Chebyshev filter is an improvement over the Butterworth filter with
respect to the roll-off, it has a poorer phase response.
The sharpest transition from passband to stopband for any given dp , ds , and L can be
achieved using the elliptic design. In fact, the elliptic filter is the optimum design in this
sense. As shown in Figure 6.8, elliptic filters exhibit equiripple behavior in both the
[Figure 6.6 Poles of H(s)H(−s) for the third-order Butterworth filter of Example 6.5]
Figure 6.7 Magnitude responses of Chebyshev lowpass filters: (a) type I, and (b) type II
[Figure 6.8 Magnitude response of an elliptic lowpass filter, showing equiripple behavior in both the passband and the stopband]
passband and the stopband. In addition, the phase response of the elliptic filter is extremely
nonlinear in the passband (especially near the cut-off frequency), so the elliptic design
should be used only where phase is not an important design parameter.
Butterworth, Chebyshev, and elliptic filters approximate an ideal rectangular band-
width. The Butterworth filter has a monotonic magnitude response. By allowing ripples
in the passband for type I and in the stopband for type II, the Chebyshev filter can
achieve sharper cutoff with the same number of poles. An elliptic filter has even sharper
cutoffs than the Chebyshev filter for the same complexity, but it results in both pass-
band and stopband ripples. The design of these filters strives to achieve the ideal
magnitude response with trade-offs in phase response.
Bessel filters are a class of all-pole filters that approximate linear phase in the sense of
maximally flat group delay in the passband. However, we must sacrifice steepness in the
transition region. In addition, acceptable Bessel IIR designs are derived by transforma-
tion only for a relatively limited range of specifications such as sufficiently low cut-off
frequency Vp .
We have discussed the design of prototype analog lowpass filters with a cut-off fre-
quency Vp . Although the same procedure can be applied to designing highpass, band-
pass, or bandstop filters, it is much easier to obtain these filters from the desired lowpass
filter using frequency transformations. In addition, most classical filter design tables
only generate lowpass filters and must be converted using spectral transformation into
highpass, bandpass, or bandstop filters. Filter design packages such as MATLAB often
incorporate and perform the frequency transformations directly.
A Butterworth highpass filter's transfer function Hhp(s) can be obtained from the
corresponding lowpass filter's transfer function H(s) by using the relationship

Hhp(s) = H(s)|_(s→1/s) = H(1/s).  (6.2.12)

For example, consider L = 1. From Table 6.2, we have H(s) = 1/(s + 1). From (6.2.12),
we obtain

Hhp(s) = 1/(1/s + 1) = s/(s + 1).  (6.2.13)

Similarly, we can calculate Hhp(s) for higher-order filters. It can be shown that the
denominator polynomials of H(s) and Hhp(s) are the same, but the numerator becomes
s^L for the Lth-order highpass filter. Thus Hhp(s) has an additional Lth-order zero at the
origin, and has the identical poles s_k given in (6.2.10).

Transfer functions of bandpass filters can be obtained from the corresponding lowpass
filters by replacing s with (s² + Ωm²)/(BW·s). That is,

Hbp(s) = H(s)|_(s→(s²+Ωm²)/(BW·s)),  (6.2.14)

where Ωm is the center frequency of the bandpass filter and BW is its bandwidth. As
illustrated in Figure 5.3 and defined in (5.1.10) and (5.1.11), the center frequency is
defined as

Ωm = √(Ωa Ωb),  (6.2.15)

where Ωa and Ωb are the lower and upper cut-off frequencies. The filter bandwidth is
defined by

BW = Ωb − Ωa.  (6.2.16)

Note that for an Lth-order lowpass filter, we obtain a 2Lth-order bandpass filter
transfer function.

For example, consider L = 1. From Table 6.2 and (6.2.14), we have

Hbp(s) = 1/[(s² + Ωm²)/(BW·s) + 1] = BW·s/(s² + BW·s + Ωm²),  (6.2.17)

where Ωm is the center frequency defined in (6.2.15) and BW is the bandwidth defined in
(6.2.16).
6.3 Design of IIR Filters

As discussed in Chapters 3 and 4, an IIR filter can be specified by its impulse response
{h(n), n = 0, 1, ..., ∞}, its I/O difference equation, or its transfer function. The general
form of the IIR filter transfer function is defined in (4.3.10) as

H(z) = [Σ_(l=0)^(L−1) b_l z^(−l)] / [1 + Σ_(m=1)^(M) a_m z^(−m)].  (6.3.1)

The design problem is to find the coefficients b_l and a_m so that H(z) satisfies the given
specifications. This IIR filter can be realized by the I/O difference equation

y(n) = Σ_(l=0)^(L−1) b_l x(n − l) − Σ_(m=1)^(M) a_m y(n − m).  (6.3.2)

The impulse response h(n) of the IIR filter is the output that results when the input is
the unit impulse defined in (3.1.1). Given the impulse response, the filter
output y(n) can also be obtained by the linear convolution

y(n) = x(n)*h(n) = Σ_(k=0)^(∞) h(k) x(n − k).  (6.3.3)

The transfer function can also be written in the factored (zero-pole) form

H(z) = b₀ Π_(m=1)^(M) (z − z_m) / Π_(m=1)^(M) (z − p_m),  (6.3.4)

where z_m and p_m are the mth zero and pole, respectively. For a system to be stable, it is
necessary that all its poles lie strictly inside the unit circle of the z-plane.
The design technique for an impulse-invariant digital filter is illustrated in Figure 6.9.
Assuming the impulse function δ(t) is used as the signal source, the output of the analog
filter is the impulse response h(t). Sampling this continuous-time impulse response
yields the sample values h(nT). In the second signal path, the impulse function δ(t) is
sampled first to yield the discrete-time impulse sequence δ(n). Filtering this signal by
H(z) yields the impulse response h(n) of the digital filter. If the coefficients of H(z) are
adjusted so that the impulse response coefficients are identical to the previously specified
h(nT), that is,

h(n) = h(nT),  n = 0, 1, 2, ...,  (6.3.5)

the digital filter H(z) is the impulse-invariant equivalent of the analog filter H(s). An
analog filter H(s) and a digital filter H(z) are impulse invariant if the impulse response of
H(z) is the same as the sampled impulse response of H(s). Thus, in effect, we sample the
continuous-time impulse response to produce the discrete-time filter, as described by
(6.3.5).
[Figure 6.9 Impulse-invariant design: sampling the impulse response h(t) of the analog filter H(s) gives h(nT), which the digital filter H(z), driven by the sampled impulse δ(n), must reproduce as h(n)]
The impulse-invariant design is usually not performed directly in the form of (6.3.5).
In practice, the transfer function of an analog filter H(s) is first expanded into the partial-fraction form

H(s) = Σ_(i=1)^(P) c_i/(s − s_i),  (6.3.6)

where s = s_i is a pole of H(s), and c_i is the residue of the pole at s_i. Note that we
have assumed there are no multiple poles. Taking the inverse Laplace transform of
(6.3.6) yields

h(t) = Σ_(i=1)^(P) c_i e^(s_i t),  t ≥ 0.  (6.3.7)

Sampling h(t) at t = nT gives

h(n) = Σ_(i=1)^(P) c_i e^(s_i nT),  n ≥ 0,  (6.3.8)

and taking the z-transform of this impulse response yields

H(z) = Σ_(n=0)^(∞) h(n) z^(−n) = Σ_(i=1)^(P) c_i Σ_(n=0)^(∞) (e^(s_i T) z^(−1))^n = Σ_(i=1)^(P) c_i/(1 − e^(s_i T) z^(−1)).  (6.3.9)

The impulse response of H(z) is obtained by taking the inverse z-transform of (6.3.9).
Therefore the filter described by (6.3.9) has an impulse response equal to the
sampled impulse response of the analog filter H(s) defined in (6.3.6). Comparing
(6.3.6) with (6.3.9), the parameters of H(z) may be obtained directly from H(s) without
bothering to evaluate h(t) or h(n).
The magnitude response of the digital filter is scaled by f_s = 1/T due to the
sampling operation. To make the magnitude response of the digital filter approximate that
of the analog filter, H(z) must be multiplied by T. The transfer function of the
impulse-invariant digital filter given in (6.3.9) is therefore modified as

H(z) = T Σ_(i=1)^(P) c_i/(1 − e^(s_i T) z^(−1)).  (6.3.10)
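As a rough illustration of (6.3.10), the following C sketch maps a set of real analog poles s_i and their residues c_i into the gain and feedback coefficient of the corresponding first-order digital sections. The function and structure names are illustrative only (they are not from the book), and complex-conjugate pole pairs would first have to be combined into second-order sections with real coefficients.

#include <math.h>

/* One first-order section of (6.3.10):  T*c_i / (1 - e^{s_i T} z^-1)   */
typedef struct {
    double gain;   /* numerator T*c_i                 */
    double pole;   /* feedback coefficient e^{s_i T}  */
} FirstOrderSection;

/* Map P real analog poles s[] and residues c[] into digital sections;  */
/* T is the sampling period.                                            */
void impulse_invariant(const double *c, const double *s, int P,
                       double T, FirstOrderSection *sec)
{
    int i;
    for (i = 0; i < P; i++) {
        sec[i].gain = T * c[i];        /* scale the residue by T        */
        sec[i].pole = exp(s[i] * T);   /* analog pole s_i -> e^{s_i T}  */
    }
}

For the filter of Example 6.6 below, c = {1.5, -1.0} and s = {-1.0, -2.0} would reproduce the two first-order sections of its H(z).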
The frequency variable ω of the digital filter bears a linear relationship to that of the
analog filter within the operating range of the digital filter. This means that as ω
varies from 0 to π around the unit circle in the z-plane, Ω varies from 0 to π/T along the
jΩ-axis in the s-plane. Recall that ω = ΩT, as given in (3.1.7). Thus critical frequencies
such as cut-off and bandwidth frequencies specified for the digital filter can be used
directly in the design of the analog filter.
Example 6.6: Consider the analog filter with transfer function

H(s) = 0.5(s + 4)/[(s + 1)(s + 2)] = 1.5/(s + 1) − 1/(s + 2).

Its impulse response is

h(t) = 1.5e^(−t) − e^(−2t),  t ≥ 0,

so from (6.3.10) the impulse-invariant digital filter is

H(z) = 1.5T/(1 − e^(−T)z^(−1)) − T/(1 − e^(−2T)z^(−1)).
It is interesting to compare the frequency responses of the two filters given in Example
6.6. For the analog filter, the frequency response is

H(Ω) = 0.5(4 + jΩ)/[(1 + jΩ)(2 + jΩ)],

while for the digital filter it is

H(ω) = 1.5T/(1 − e^(−T)e^(−jωT)) − T/(1 − e^(−2T)e^(−jωT)).

At DC we have

H(0) = 1  (6.3.11)

for the analog filter, and

H(0) = 1.5T/(1 − e^(−T)) − T/(1 − e^(−2T))  (6.3.12)

for the digital filter. Thus the responses differ at DC because of aliasing. For a high
sampling rate, T is small and the approximations e^(−T) ≈ 1 − T and e^(−2T) ≈ 1 − 2T are
valid. Thus Equation (6.3.12) can be approximated as

H(0) ≈ 1.5T/[1 − (1 − T)] − T/[1 − (1 − 2T)] = 1.5 − 0.5 = 1.  (6.3.13)

Therefore, by using a high sampling rate, the aliasing effect becomes negligible and the
DC gain is one, as shown in (6.3.13).
The mapping z = e^(sT) is not a one-to-one transformation from the s-plane to the z-plane. Therefore
H(ω) = (1/T)H(Ω) is true only if H(Ω) = 0 for |Ω| ≥ π/T. As shown in (6.3.14), H(ω) is
an aliased version of H(Ω). Hence the stopband characteristics are maintained
adequately only if the aliased tails of H(Ω) are sufficiently small. The passband is also affected,
but this effect is usually less serious. Thus the resulting digital filter does not exactly
meet the original design specifications.

For a bandlimited filter, the magnitude response of the analog filter is negligibly small
at frequencies exceeding half the sampling frequency, which reduces the aliasing
effect. Thus we must have

|H(Ω)| ≈ 0  for |Ω| ≥ π/T.

This condition can hold for lowpass and bandpass filters, but not for highpass and
bandstop filters.
MATLAB supports the design of impulse invariant digital filters through the func-
tion impinvar in the Signal Processing Toolbox. The s-domain transfer function is first
defined along with the sampling frequency. The function impinvar determines the
numerator and denominator of the z-domain transfer function. The MATLAB com-
mand is expressed as
[bz, az]= impinvar(b, a, Fs)
where bz and az are the numerator and denominator coefficients of a digital filter, Fs is
the sampling rate, and b and a represent coefficients of the analog filter.
[Figure 6.10 block diagram: the digital filter specifications are mapped to analog filter specifications by the bilinear transform (ω → Ω), an analog filter H(s) is designed, and the bilinear transform (Ω → ω) then converts H(s) into the digital filter H(z)]

Figure 6.10 Digital IIR filter design using the bilinear transform

The bilinear transform is defined as

s = (2/T)(1 − z^(−1))/(1 + z^(−1)),  (6.3.16)

or equivalently,

z = [1 + (T/2)s]/[1 − (T/2)s].  (6.3.17)
This is called the bilinear transform because of the linear functions of z in both the
numerator and denominator of (6.3.16).
As discussed in Section 6.1.2, the jΩ-axis of the s-plane (σ = 0) maps onto the unit
circle in the z-plane. The left (σ < 0) and right (σ > 0) halves of the s-plane map into the
inside and outside of the unit circle, respectively. Because the jΩ-axis maps onto the unit
circle (|z| = 1), there is a direct relationship between the s-plane frequency Ω and the z-plane
frequency ω. Substituting s = jΩ and z = e^(jω) into (6.3.16), we have

jΩ = (2/T)(e^(jω) − 1)/(e^(jω) + 1),  (6.3.18)

which simplifies to

Ω = (2/T) tan(ω/2),  (6.3.19)

or equivalently,

ω = 2 tan^(−1)(ΩT/2).  (6.3.20)
Thus the entire jΩ-axis is compressed into the range (−π/T, π/T) in a one-to-one
manner: the 0 ≤ Ω ≤ ∞ portion of the jΩ-axis is mapped onto the 0 ≤ ω ≤ π portion of the
unit circle in the z-plane, while the −∞ ≤ Ω ≤ 0 portion is mapped onto the −π ≤ ω ≤ 0
portion of the unit circle. Each point in the s-plane is uniquely
mapped onto the z-plane. This fundamental relation enables us to locate a point Ω on
the jΩ-axis for a given point on the unit circle.

[Figure 6.11 The nonlinear relationship ω = 2 tan^(−1)(ΩT/2) between the analog frequency Ω and the digital frequency ω]
The relationship (6.3.20) between the frequency variables Ω and ω is illustrated in
Figure 6.11. The bilinear transform provides a one-to-one mapping of the points along
the jΩ-axis onto the unit circle, i.e., the entire jΩ-axis is mapped uniquely onto the unit
circle, or onto the Nyquist band |ω| ≤ π. However, the mapping is highly nonlinear. The
point Ω = 0 is mapped to ω = 0 (or z = 1), and the point Ω = ∞ is mapped to ω = π (or
z = −1). The entire band Ω ≥ 2/T is compressed onto π/2 ≤ ω ≤ π. This frequency
compression effect associated with the bilinear transform is known as frequency warping,
due to the nonlinearity of the arctangent function in (6.3.20). This nonlinear
frequency-warping phenomenon must be taken into consideration when designing
digital filters using the bilinear transform. This can be done by pre-warping the critical
frequencies and using frequency scaling.
The bilinear transform guarantees that

H(s)|_(s=jΩ) = H(z)|_(z=e^(jω)),  (6.3.21)

where H(z) is the transfer function of the digital filter, and H(s) is the transfer function
of an analog filter with the desired frequency characteristics.
The bilinear transform of an analog filter function H(s) is obtained by simply replacing s
with the function of z given in Equation (6.3.16). The filter specifications are given in terms of the critical
frequencies of the digital filter. For example, the critical frequency ω for a lowpass filter
is the bandwidth of the filter, and for a notch filter it is the notch frequency. If we used
the same critical frequencies for the analog design and then applied the bilinear transform,
the digital filter frequencies would be in error because of the frequency warping given
in (6.3.20). Therefore we have to pre-warp the critical frequencies of the analog filter.
There are three steps involved in the bilinear design procedure. These steps are
summarized as follows:

1. Pre-warp the critical frequency ωc of the digital filter using (6.3.19) to obtain the
   corresponding analog filter frequency Ωc (a small C sketch of this step follows the list).

2. Scale the frequency of the analog prototype filter by replacing s with s/Ωc, that is,

   Ĥ(s) = H(s)|_(s→s/Ωc) = H(s/Ωc),  (6.3.22)

   where Ĥ(s) is the scaled transfer function corresponding to H(s).

3. Replace s in Ĥ(s) by 2(z − 1)/[(z + 1)T] to obtain the desired digital filter H(z). That is,

   H(z) = Ĥ(s)|_(s=2(z−1)/[(z+1)T]).  (6.3.23)
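To make step 1 concrete, here is a minimal C sketch of the pre-warping computation (the function name and arguments are illustrative, not from the book):

#include <math.h>

/* Pre-warp a digital cut-off frequency (radians per sample) into the   */
/* corresponding analog frequency (radians per second), as in (6.3.19): */
/*     Omega_c = (2/T) * tan(omega_c / 2)                               */
double prewarp(double omega_c, double T)
{
    return (2.0 / T) * tan(omega_c / 2.0);
}

With omega_c = 2π(1000/8000) and T = 1/8000, as in Example 6.7 below, prewarp() returns approximately 6627 rad/s, i.e., 0.8284/T.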
Example 6.7: Consider the transfer function of the simple analog lowpass filter
given as

H(s) = 1/(1 + s).

Use this H(s) and the bilinear transform method to design the corresponding
digital lowpass filter whose bandwidth is 1000 Hz when the sampling frequency is
8000 Hz.

The critical frequency for the lowpass filter is the filter bandwidth,
ωc = 2π(1000/8000) = π/4 radians per sample, and T = 1/8000 second.

Step 1: Pre-warp the critical frequency to obtain

Ωc = (2/T) tan(ωc/2) = (2/T) tan(2000π/16000) = (2/T) tan(π/8) = 0.8284/T.

Step 2: Scale the analog prototype to get

Ĥ(s) = H(s)|_(s→s/(0.8284/T)) = 0.8284/(sT + 0.8284).

Step 3: The bilinear transform (6.3.23) yields the desired transfer function

H(z) = Ĥ(s)|_(s=2(z−1)/[(z+1)T]) = 0.2929(1 + z^(−1))/(1 − 0.4142z^(−1)).
MATLAB provides the function bilinear to design digital filters using the bilinear
transform. The transfer function of the analog prototype is first determined. The
numerator and denominator polynomials of the analog prototype are then mapped to
the polynomials of the digital filter using the bilinear transform. For example, the
following MATLAB script designs a lowpass filter using the bilinear transform:

Fs = 2000;                      % Sampling frequency
Wn = 2*pi*500;                  % Edge frequency
n  = 2;                         % Order of analog filter
[b, a] = butter(n, Wn, 's');    % Design analog filter
[bz, az] = bilinear(b, a, Fs);  % Determine digital filter
6.4 Realization of IIR Filters

As discussed earlier, a digital IIR filter can be described by the linear convolution
(6.3.3), the transfer function (6.3.1), or the I/O difference equation (6.3.2). These
equations are equivalent mathematically, but may be different in realization. In DSP
implementation, we have to consider the required operations, memory storage, and the
finite wordlength effects. A given transfer function H(z) can be realized in several forms
or configurations. In this section, we will discuss direct-form I, direct-form II, cascade,
and parallel realizations. Many additional structures such as wave digital filters, ladder
structures, and lattice structures can be found in the reference book [7].
Given an IIR filter described by (6.3.1), the direct-form I realization is defined by the
I/O Equation (6.3.2). It has L + M coefficients and needs L + M + 1 memory locations
to store {x(n − l), l = 0, 1, ..., L − 1} and {y(n − m), m = 0, 1, ..., M}. It also
requires L + M multiplications and L + M − 1 additions per output sample on
a DSP system. The detailed signal-flow diagram for the case L = M + 1 is illustrated in
Figure 4.6.
Consider the second-order IIR filter described by the transfer function

H(z) = (b₀ + b₁z^(−1) + b₂z^(−2))/(1 + a₁z^(−1) + a₂z^(−2)),  (6.4.1)

which can be realized in direct-form I by the I/O equation

y(n) = b₀x(n) + b₁x(n − 1) + b₂x(n − 2) − a₁y(n − 1) − a₂y(n − 2).  (6.4.2)

As shown in Figure 6.12, the IIR filter can also be interpreted as the cascade of two
transfer functions H₁(z) and H₂(z), that is, H(z) = H₁(z)H₂(z),
where H₁(z) = b₀ + b₁z^(−1) + b₂z^(−2) and H₂(z) = 1/(1 + a₁z^(−1) + a₂z^(−2)). Since multiplication
is commutative, the order of the two sections can be interchanged, i.e.,
H(z) = H₂(z)H₁(z), as shown in Figure 6.13.
[Figure 6.12 Direct-form I realization of the second-order IIR filter as the cascade H₁(z)H₂(z)]

[Figure 6.13 Realization with the order of the sections interchanged, H₂(z)H₁(z)]

[Figure 6.14 Direct-form II realization of the second-order IIR filter, in which both sections share the signal buffer w(n − 1), w(n − 2)]

Because the two delay chains in Figure 6.13 hold the same signal, they can be combined
into a single delay chain, resulting in the direct-form II realization shown in Figure 6.14.
This structure is called the canonical form since it realizes the given transfer function with the smallest possible
number of delays, adders, and multipliers.
It is worthwhile verifying that the direct-form II realization does indeed implement
the second-order IIR filter. From Figure 6.14, we have

w(n) = x(n) − a₁w(n − 1) − a₂w(n − 2)  (6.4.5)

and

y(n) = b₀w(n) + b₁w(n − 1) + b₂w(n − 2).  (6.4.6)

Taking the z-transform of both sides of these two equations and rearranging terms, we
obtain

Y(z) = W(z)[b₀ + b₁z^(−1) + b₂z^(−2)]  (6.4.7)

and

X(z) = W(z)[1 + a₁z^(−1) + a₂z^(−2)].  (6.4.8)

Dividing (6.4.7) by (6.4.8) gives

H(z) = Y(z)/X(z) = (b₀ + b₁z^(−1) + b₂z^(−2))/(1 + a₁z^(−1) + a₂z^(−2)),

which is identical to (6.4.1). Thus the direct-form II realization described by (6.4.5) and
(6.4.6) is equivalent to the direct-form I realization described by (6.4.2).
Figure 6.14 can be expanded as Figure 6.15 to realize the general IIR filter defined in
(6.3.1) using the direct-form II structure. The block diagram realization of this system
assumes M = L − 1. If M ≠ L − 1, one must draw the maximum number of common
delays. Although direct-form II still satisfies the difference Equation (6.3.2), it does not
implement this difference equation directly. Similar to (6.4.5) and (6.4.6), it is a direct
implementation of a pair of I/O equations:

w(n) = x(n) − Σ_(m=1)^(M) a_m w(n − m)  (6.4.9)
[Figure 6.15 Direct-form II realization of the general IIR filter defined in (6.3.1)]
and

y(n) = Σ_(l=0)^(L−1) b_l w(n − l).  (6.4.10)
The computed value of w(n) from the first equation is passed into the second equation to
compute the final output y(n).
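The pair of equations (6.4.9) and (6.4.10) translates almost directly into code. The following C sketch processes one sample through a single second-order direct-form II section, i.e., Equations (6.4.5) and (6.4.6); the function and variable names are illustrative only, and the book's own block-oriented implementation appears in Section 6.7.

/* One second-order direct-form II section.                             */
/* w[0] holds w(n-1) and w[1] holds w(n-2) between calls.               */
static float biquad_df2(float x, const float a[2], const float b[3], float w[2])
{
    float w0 = x - a[0]*w[0] - a[1]*w[1];         /* (6.4.5): w(n) */
    float y  = b[0]*w0 + b[1]*w[0] + b[2]*w[1];   /* (6.4.6): y(n) */
    w[1] = w[0];                                  /* w(n-2) <- w(n-1) */
    w[0] = w0;                                    /* w(n-1) <- w(n)   */
    return y;
}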
The cascade realization of an IIR filter assumes that the transfer function is the product
of first-order and/or second-order IIR sections. By factoring the numerator and the
denominator polynomials of the transfer function H(z) into products of lower-order
polynomials, an IIR filter can be realized as a cascade of low-order filter sections.
The transfer function H(z) given in (6.3.4) can be expressed as

H(z) = b₀ H₁(z)H₂(z)···H_K(z) = b₀ Π_(k=1)^(K) H_k(z),  (6.4.11)

where each H_k(z) is a first- or second-order IIR filter and K is the total number of
sections. That is,

H_k(z) = (z − z_i)/(z − p_j) = (1 + b_1k z^(−1))/(1 + a_1k z^(−1))  (6.4.12)

or
[Figure: cascade realization of an IIR filter, the gain b₀ followed by the sections H₁(z), H₂(z), ..., H_K(z)]
H_k(z) = (z − z_i)(z − z_j)/[(z − p_l)(z − p_m)] = (1 + b_1k z^(−1) + b_2k z^(−2))/(1 + a_1k z^(−1) + a_2k z^(−2))  (6.4.13)

for k = 1, 2, ..., K. Implementing each section in direct-form II, the I/O equations of the cascade realization are

w_k(n) = x_k(n) − a_1k w_k(n − 1) − a_2k w_k(n − 2),
y_k(n) = b_0k w_k(n) + b_1k w_k(n − 1) + b_2k w_k(n − 2),
x_(k+1)(n) = y_k(n),  (6.4.14)

with

x₁(n) = b₀ x(n)  (6.4.15a)

and

y(n) = y_K(n).  (6.4.15b)
Example 6.9: Consider the second-order IIR filter

H(z) = 0.5(z² − 0.36)/(z² + 0.1z − 0.72) = 0.5(1 + 0.6z^(−1))(1 − 0.6z^(−1))/[(1 + 0.9z^(−1))(1 − 0.8z^(−1))].

By different pairings of the poles and zeros, there are four different cascade realizations of
H(z). For example, we may choose

H₁(z) = (1 + 0.6z^(−1))/(1 + 0.9z^(−1))  and  H₂(z) = (1 − 0.6z^(−1))/(1 − 0.8z^(−1)),

so that H(z) = 0.5 H₁(z)H₂(z).
The parallel realization expands the transfer function as a sum of low-order sections:

H(z) = c + Σ_(k=1)^(K) H_k(z),  (6.4.16)

where c is a constant, K is a positive integer, and the H_k(z) are transfer functions of first- or
second-order IIR filters with real coefficients. That is,

H_k(z) = b_0k/(1 + a_1k z^(−1))  (6.4.17)

or

H_k(z) = (b_0k + b_1k z^(−1))/(1 + a_1k z^(−1) + a_2k z^(−2)).  (6.4.18)
[Figure: parallel realization of an IIR filter, in which the constant c₀ and the sections H₁(z), H₂(z), ..., H_K(z) operate on x(n) and their outputs are summed to form y(n)]
The variation of a parameter in the parallel form affects only the poles of the H_k(z)
associated with that parameter, whereas the variation of any parameter in the direct-form
realization will affect all the poles of H(z). Therefore the pole sensitivity of a parallel
realization is less than that of the direct form.
Example 6.10: Consider the transfer function H(z) given in Example 6.9. We can
express

H'(z) = H(z)/z = 0.5(1 + 0.6z^(−1))(1 − 0.6z^(−1))/[z(1 + 0.9z^(−1))(1 − 0.8z^(−1))] = A/z + B/(z + 0.9) + C/(z − 0.8),

where

A = zH'(z)|_(z=0) = 0.25,
B = (z + 0.9)H'(z)|_(z=−0.9) = 0.147, and
C = (z − 0.8)H'(z)|_(z=0.8) = 0.103.

We therefore obtain the parallel form

H(z) = 0.25 + 0.147/(1 + 0.9z^(−1)) + 0.103/(1 − 0.8z^(−1)).
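Like the cascade form, the parallel expansion maps directly into code. The sketch below evaluates the parallel form obtained in Example 6.10 one sample at a time; the function name and state variables are illustrative, not from the book.

/* Parallel realization of H(z) = 0.25 + 0.147/(1 + 0.9 z^-1)           */
/*                                     + 0.103/(1 - 0.8 z^-1).          */
/* y1 and y2 hold the previous outputs of the two first-order sections. */
static float parallel_example(float x, float *y1, float *y2)
{
    *y1 = 0.147f * x - 0.9f * (*y1);   /* section with pole at z = -0.9 */
    *y2 = 0.103f * x + 0.8f * (*y2);   /* section with pole at z =  0.8 */
    return 0.25f * x + *y1 + *y2;      /* constant path plus both sections */
}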
The cascade realization of an IIR transfer function H(z) requires its factorization in the
form of (6.3.4). This can be done in MATLAB using the function roots. For example,
the statement

r = roots(b);

returns, in the output vector r, the roots of the numerator polynomial whose coefficients
(in ascending powers of z^(−1)) are contained in the vector b. Similarly, we can use

d = roots(a);

to obtain the roots of the denominator vector a in the output vector d. From the computed
roots, the coefficients of each section can be determined by pole-zero pairing.
A much simpler approach is to use the function tf2zp in the Signal Processing
Toolbox, which finds the zeros, poles, and gains of systems given in transfer-function
(single-input, multiple-output) form. For example, the statement

[z, p, c] = tf2zp(b, a);

returns the zero locations in the columns of matrix z, the pole locations in the
column vector p, and the gains of each numerator transfer function in vector c.
Vector a specifies the coefficients of the denominator in descending powers of z^(−1),
and the matrix b contains the numerator coefficients with as many rows as there are
outputs.
For example, consider the transfer function

H(z) = (2z^(−1) + 3z^(−2))/(1 + 0.4z^(−1) + z^(−2)).
MATLAB also provides a useful function zp2sos in the Signal Processing Toolbox
to convert a zero-pole-gain representation of a given system into an equivalent
representation in second-order sections. The function

[sos, G] = zp2sos(z, p, c);

finds the overall gain G and a matrix sos containing the coefficients of each second-order
section of the equivalent transfer function H(z), determined from its zero-pole
form. The zeros and poles must be real or occur in complex-conjugate pairs. The matrix sos is
a K x 6 matrix

sos = [ b01  b11  b21  a01  a11  a21
        b02  b12  b22  a02  a12  a22
        ...
        b0K  b1K  b2K  a0K  a1K  a2K ],  (6.4.19)

whose rows contain the numerator and denominator coefficients b_ik and a_ik, i = 0, 1, 2,
of the kth second-order section H_k(z). The overall transfer function is expressed as

H(z) = Π_(k=1)^(K) H_k(z) = Π_(k=1)^(K) (b_0k + b_1k z^(−1) + b_2k z^(−2))/(a_0k + a_1k z^(−1) + a_2k z^(−2)).  (6.4.20)
6.5 Design of IIR Filters Using MATLAB

The Signal Processing Toolbox provides the order-estimation functions

[N, Wn] = buttord(Wp, Ws, Rp, Rs);
[N, Wn] = cheb1ord(Wp, Ws, Rp, Rs);
[N, Wn] = cheb2ord(Wp, Ws, Rp, Rs);
[N, Wn] = ellipord(Wp, Ws, Rp, Rs);

for Butterworth, Chebyshev type I, Chebyshev type II, and elliptic filters, respectively.
The parameters Wp and Ws are the normalized passband and stopband edge frequencies,
respectively. Both lie between 0 and 1, where 1 corresponds to the Nyquist frequency
f_N = f_s/2. The parameters Rp and Rs are the passband ripple and
the minimum stopband attenuation specified in dB, respectively. These four functions
return the filter order N and the frequency scaling factor Wn. These two parameters are needed
in the second step of IIR filter design using MATLAB.

For lowpass filters, the normalized passband is 0 < F < Wp, the
stopband is Ws < F < 1, and Wp < Ws. For highpass filters, the normalized
stopband is 0 < F < Ws, the passband is Wp < F < 1, and Wp > Ws. For
bandpass and bandstop filters, Wp and Ws are two-element vectors that specify the
transition band edges, with the lower-frequency edge as the first element of the vector,
and N is half the order of the filter to be designed.
In the second step of designing IIR filters based on the bilinear transformation, the
Signal Processing Toolbox provides the following functions:
[b, a] = butter(N, Wn);
[b, a] = cheby1(N, Rp, Wn);
Example 6.12: Design a lowpass Butterworth filter with less than 1.0 dB of ripple
from 0 to 800 Hz, and at least 20 dB of stopband attenuation from 1600 Hz to the
Nyquist frequency 4000 Hz.
The MATLAB script (exam6_12.m in the software package) for designing the
specified filter is listed as follows:
Wp = 800/4000; Ws = 1600/4000;
Rp = 1.0; Rs = 20.0;
[N, Wn] = buttord(Wp, Ws, Rp, Rs);
[b, a] = butter(N, Wn);
freqz(b, a, 512, 8000);

The Butterworth filter coefficients are returned in the vectors b and a by the MATLAB
function butter(N,Wn). The magnitude and phase responses of the designed
fourth-order IIR filter are shown in Figure 6.18. This filter will be used for the IIR
filter experiments in Section 6.7.
Example 6.13: Design a bandpass filter with passband of 100 Hz to 200 Hz and the
sampling rate is 1 kHz. The passband ripple is less than 3 dB and the stopband
attenuation is at least 30 dB by 50 Hz out on both sides of the passband.
The MATLAB script (exam6_13.m in the software package) for designing the
specified bandpass filter is listed as follows:
Wp = [100 200]/500; Ws = [50 250]/500;
Rp = 3; Rs = 30;
[N, Wn] = buttord(Wp, Ws, Rp, Rs);
[b, a] = butter(N, Wn);
freqz(b, a, 128, 1000);
The magnitude and phase responses of the designed bandpass filter are shown in
Figure 6.19.
[Figure 6.18 Magnitude and phase responses of the fourth-order lowpass Butterworth filter designed in Example 6.12]

[Figure 6.19 Magnitude and phase responses of the bandpass filter designed in Example 6.13]

6.6 Implementation Considerations
6.6.1 Stability
The IIR filter described by the transfer function given in (6.3.4) is stable if all the poles
lie inside the unit circle, that is,

|p_m| < 1  for all m.

If |p_m| > 1 for any m, the IIR filter defined in (6.3.4) is unstable, since its impulse
response grows without bound as n → ∞. In addition, an IIR filter is unstable if H(z) has
multiple-order pole(s) on the unit circle. For example, if H(z) = z/(z − 1)², there is a
second-order pole at z = 1. The impulse response of the system is h(n) = n u(n), which
is unstable as defined in (6.6.3).
For example, consider the first-order IIR filter

H(z) = 1/(1 − az^(−1)),

whose impulse response is

h(n) = aⁿ,  n ≥ 0.

If the pole is inside the unit circle, that is, |a| < 1, the impulse response decays to zero
as n → ∞ and the filter is stable.
Consider the second-order IIR filter defined by Equation (6.4.1). The denominator
can be factored as

1 + a₁z^(−1) + a₂z^(−2) = (1 − p₁z^(−1))(1 − p₂z^(−1)),  (6.6.5)

where

a₁ = −(p₁ + p₂)  (6.6.6)

and

a₂ = p₁p₂.  (6.6.7)

The poles must lie inside the unit circle for stability, that is, |p₁| < 1 and |p₂| < 1. From
(6.6.7), we obtain

|a₂| = |p₁||p₂| < 1.  (6.6.8)

The corresponding condition on a₁ can be derived from the Schur-Cohn stability test
and is given by

|a₁| < 1 + a₂.  (6.6.9)

Stability conditions (6.6.8) and (6.6.9) are illustrated in Figure 6.20, which shows the
resulting stability triangle in the (a₁, a₂) plane. That is, the second-order IIR filter is
stable if and only if the coefficients define a point (a₁, a₂) that lies inside the stability
triangle.
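The stability triangle also gives a quick check on quantized coefficients. The following C sketch (an illustrative helper, not from the book) tests conditions (6.6.8) and (6.6.9) for one second-order section:

#include <math.h>

/* Returns 1 if the denominator 1 + a1*z^-1 + a2*z^-2 has both poles    */
/* strictly inside the unit circle, i.e., |a2| < 1 and |a1| < 1 + a2.   */
int is_stable_biquad(double a1, double a2)
{
    return (fabs(a2) < 1.0) && (fabs(a1) < 1.0 + a2);
}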
As discussed in Chapter 3, there are four types of quantization effects in digital filters:
input quantization, coefficient quantization, roundoff errors, and overflow. In practice,
the digital filter coefficients obtained from a filter design package are quantized to a
finite number of bits so that the filter can be implemented using DSP hardware. The
filter coefficients, b_l and a_m, of the discrete-time filter defined by (6.3.1) and (6.3.2) are
determined by the filter design techniques introduced in Section 6.3, or by a filter design
package such as MATLAB.
Figure 6.20 Region of coefficient values for a stable second-order IIR filter, bounded by a₂ = 1 and a₁ = ±(1 + a₂)
Similar to the concept of input quantization discussed in Section 3.5, the nonlinear
operation of coefficient quantization can be modeled as a linear process that introduces
quantization noise, expressed as

b'_l = b_l + e(l)

and

a'_m = a_m + e(m),

where the coefficient quantization errors e(l) and e(m) can be assumed to be random
noise with zero mean and variance as defined in (3.5.6).
If the wordlength is not large enough, some undesirable effects occur. For example,
the frequency characteristics, such as the magnitude and phase responses of the quantized
filter H'(z), may differ from those of H(z). In addition, for high-order filters whose poles
are closely clustered in the z-plane, small changes in the denominator coefficients can
cause large shifts in the locations of the poles. If the poles of H(z) are close to the
unit circle, the pole(s) of H'(z) may move outside the unit circle after coefficient
quantization, resulting in an unstable implementation. These undesired effects are
more serious when higher-order filters are implemented using the direct-form I and II
realizations discussed in Section 6.4. Therefore the cascade and parallel realizations,
with each H_k(z) a first- or second-order section, are preferred in practical DSP
implementations.
For example, given the second-order IIR filter

H(z) = 1/(1 − 0.9z^(−1) + 0.2z^(−2)),

the poles are located at z = 0.4 and z = 0.5, so the filter can also be realized in the
cascade form

H(z) = 1/[(1 − 0.4z^(−1))(1 − 0.5z^(−1))].

After coefficient quantization, the direct form becomes

H'(z) = 1/(1 − 0.875z^(−1) + 0.125z^(−2)),

while the cascade form becomes

H''(z) = 1/[(1 − 0.375z^(−1))(1 − 0.5z^(−1))].

The poles of the quantized direct form H'(z) are at z = 0.18 and z = 0.695, and the
poles of the quantized cascade form H''(z) are at z = 0.375 and z = 0.5. Therefore the
poles of the cascade realization are much closer to those of the desired H(z).
In practice, one must always check the stability of the filter with the quantized
coefficients. The problem of coefficient quantization may be studied by examining
pole locations in the z-plane. For the second-order IIR filter given in (6.4.1), poles
near z = ±1 can only be placed with much less accuracy than elsewhere in the z-plane. Since
second-order IIR filters are the building blocks of the cascade and parallel forms, we
can conclude that narrowband lowpass (or highpass) filters are most sensitive to
coefficient quantization because their poles are close to z = 1 (or z = −1). In summary, the
cascade form is recommended for the implementation of high-order narrowband IIR
filters that have closely clustered poles.
As discussed in Chapter 3, the effect of the input quantization noise on the output can
be computed as

σ²_(y,e) = (σ²_e/2πj) ∮ z^(−1) H(z) H(z^(−1)) dz,  (6.6.13)

where σ²_e = 2^(−2B)/3 is defined by (3.5.6). The integration around the unit circle |z| = 1 in
the counterclockwise direction can be evaluated using the residue method introduced in
Chapter 4 for the inverse z-transform.
For example, consider the first-order IIR filter

H(z) = 1/(1 − az^(−1)),  |a| < 1,

with an 8-bit input signal x(n). The noise power due to input quantization is
σ²_e = 2^(−16)/3. Since the residue of z^(−1)H(z)H(z^(−1)) at the pole z = a is

Res_(z=a)[z^(−1)H(z)H(z^(−1))] = [(z − a)·1/((z − a)(1 − az))]_(z=a) = 1/(1 − a²),

the output noise power is

σ²_(y,e) = (2^(−16)/3)·1/(1 − a²).
As shown in (3.5.11), rounding a 2B-bit product to B bits introduces roundoff
noise, which has zero mean and power defined by (3.5.6). Roundoff errors can be
trapped in the feedback loops of IIR filters and amplified. In the cascade
realization, the output noise power due to the roundoff noise produced at the previ-
ous section may be evaluated using (6.6.13). Therefore the order in which individual
sections are cascaded also influences the output noise power due to roundoff.
Most modern DSP chips (such as the TMS320C55x) solve this problem by using
double-precision accumulator(s) with additional guard bits that can perform many
multiplication±accumulation operations without roundoff errors before the final result
in the accumulator is rounded.
As discussed in Section 3.6, when digital filters are implemented using finite word-
length, we try to optimize the ratio of signal power to the power of the quantization
noise. This involves a trade-off with the probability of arithmetic overflow. The most
effective technique in preventing overflow of intermediate results in filter computation is
by introducing appropriate scaling factors at various nodes within the filter stages. The
optimization is achieved by introducing scaling factors to keep the signal level as high as
possible without getting overflow. For IIR filters, since the previous output is fed back,
arithmetic overflow in computing an output value can be a serious problem. A detailed
analysis related to scaling is available in a reference text [12].
Example 6.17: Consider the first-order IIR filter with scaling factor α described by

H(z) = α/(1 − az^(−1)),

where stability requires |a| < 1. The actual implementation of this filter is
illustrated in Figure 6.21. The goal of including the scaling factor α is to ensure
that the values of y(n) will not exceed 1 in magnitude. Suppose that x(n) is a
sinusoidal signal of frequency ω₀; then the amplitude of the output is scaled by the factor
|H(ω₀)|. For such signals, the maximum gain of H(z) is

max_ω |H(ω)| = α/(1 − |a|).

Thus, if the signals being considered are sinusoidal, a suitable scaling factor is
given by

α < 1 − |a|.
[Figure 6.21 Implementation of the first-order IIR filter with scaling factor α, pole coefficient a, and quantizer Q]
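A minimal C sketch of the scaled first-order filter of Example 6.17 and Figure 6.21 follows, assuming the sinusoidal-input scaling rule α = 1 − |a| (the names are illustrative):

#include <math.h>

/* Scaled first-order IIR filter  y(n) = alpha*x(n) + a*y(n-1),         */
/* with alpha = 1 - |a| so that sinusoidal outputs stay below 1.        */
static double scaled_first_order(double x, double a, double *y1)
{
    double alpha = 1.0 - fabs(a);   /* scaling factor of Example 6.17 */
    *y1 = alpha * x + a * (*y1);    /* feedback through the pole at a */
    return *y1;
}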
In MATLAB, IIR filtering can be performed with the built-in function filter, for example
y = filter(b, a, x, zi). The numerator and denominator coefficients are contained in the vectors b and a,
respectively, and the first element of a, a(1), is assumed to equal 1.
The input vector is x and the output vector generated by the filter is y. By default,
the initial conditions (the data in the signal buffer) are set to zero; however, they can be
specified in the vector zi to reduce transients.
The direct-form realization of IIR filters can be implemented using following C
function (iir.c in the software package):
/****************************************************************
 * IIR.C - This function performs IIR filtering                 *
 *                                                              *
 *          na-1                 nb-1                           *
 *   y(n) = sum a(i)*x(n-i)  -  sum b(j)*y(n-j)                 *
 *          i=0                  j=1                            *
 ****************************************************************/
float iir(float *x, int na, float *a, float *y, int nb, float *b,
int maxa, int maxb)
{
float yn; /* Output of IIR filter */
float yn1, yn2; /* Temporary storage */
int i, j; /* Indexes */
A second-order resonator has a pair of complex-conjugate poles located at

p_(1,2) = r_p e^(±jω₀),  (6.6.14)

where 0 < r_p < 1 is the pole radius and ω₀ determines the resonant frequency. Its transfer function is

H(z) = A/[(1 − r_p e^(jω₀)z^(−1))(1 − r_p e^(−jω₀)z^(−1))]
     = A/(1 − 2r_p cos(ω₀)z^(−1) + r_p²z^(−2))
     = A/(1 + a₁z^(−1) + a₂z^(−2)),  (6.6.15)

where A is a fixed gain used to normalize the filter to unity gain at ω₀, that is, |H(ω₀)| = 1.
The direct-form realization is shown in Figure 6.22.

The magnitude response of this normalized filter satisfies

|H(ω₀)| = |H(z)|_(z=e^(jω₀)) = A/[|1 − r_p e^(jω₀)e^(−jω₀)||1 − r_p e^(−jω₀)e^(−jω₀)|] = 1.  (6.6.16)
[Figure 6.22 Direct-form realization of the second-order resonator, with input gain A and feedback coefficients 2r_p cos ω₀ and −r_p²]
The 3-dB bandwidth of the resonator is determined by the frequencies at which

|H(ω)|² = (1/2)|H(ω₀)|² = 1/2.  (6.6.19)

There are two such solutions, one on each side of ω₀, and the bandwidth is the difference between
these two frequencies. When the poles are close to the unit circle, the bandwidth is
approximately

BW ≈ 2(1 − r_p).  (6.6.20)

This design criterion determines the value of r_p for a given BW. The closer r_p is to one,
the sharper the peak, and the longer it takes for the filter to reach its steady-state
response. From (6.6.15), the I/O difference equation of the resonator is given by

y(n) = Ax(n) − a₁y(n − 1) − a₂y(n − 2),  (6.6.21)

where

a₁ = −2r_p cos ω₀  (6.6.22a)

and

a₂ = r_p².  (6.6.22b)
A recursive oscillator is a very useful tool for generating sinusoidal waveforms. The
method is to use a marginally stable two-pole resonator whose complex-conjugate
poles lie on the unit circle (r_p = 1). The recursive oscillator is the most efficient way of
generating a sinusoidal waveform, particularly if quadrature signals (sine and cosine)
are required.

Consider the two causal impulse responses

h_c(n) = cos(ω₀n) u(n)

and

h_s(n) = sin(ω₀n) u(n),

where u(n) is the unit step function. The corresponding system transfer functions are

H_c(z) = [1 − cos(ω₀)z^(−1)]/[1 − 2cos(ω₀)z^(−1) + z^(−2)]  (6.6.24a)

and

H_s(z) = sin(ω₀)z^(−1)/[1 − 2cos(ω₀)z^(−1) + z^(−2)].  (6.6.24b)
The quadrature outputs y_c(n) and y_s(n) can be generated with the direct-form II structure
shown in Figure 6.23, which shares a common signal buffer w(n). When the input is the
scaled impulse Aδ(n), the recursion is started with

w(0) = A  (6.6.27a)

and

w(−1) = 0.  (6.6.27b)
The waveform accuracy is limited primarily by the DSP processor wordlength. For
example, quantization of the coefficient cos(ω₀) causes the actual output frequency to
differ slightly from the ideal frequency ω₀.

For some applications, only a sinewave is required. From Equations (6.6.21), (6.6.22a),
and (6.6.22b), using the conditions x(n) = Aδ(n) and r_p = 1, we obtain the sinewave
generator
[Figure 6.23 Realization of the recursive quadrature oscillator using the shared signal buffer w(n), w(n−1), w(n−2), with feedback coefficient 2cos(ω₀) and output weight sin(ω₀)]
y_s(n) = Ax(n) − a₁y_s(n − 1) − a₂y_s(n − 2)
       = Ax(n) + 2cos(ω₀) y_s(n − 1) − y_s(n − 2)  (6.6.28)

and

y_s(0) = 0.  (6.6.29b)
The oscillating frequency of the generator defined by Equation (6.6.28) is determined by its
coefficient a₁ and the sampling frequency f_s, expressed as

f = cos^(−1)(|a₁|/2) · f_s/(2π)  Hz.  (6.6.30)
In the program, AR1 is the pointer for the signal buffer. The output sinewave samples
are stored in the output buffer pointed by AR0. Due to the limited wordlength, the
quantization error of fixed-point DSPs such as the TMS320C55x can be severe for
the recursive computation.
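For reference, a floating-point C sketch of the sinewave generator based on (6.6.28) is given below; it is only illustrative (the experiment itself uses the fixed-point assembly routine sine.asm), and the starting values are chosen here so that the output is A sin(ω₀n), since the book's initial condition (6.6.29a) is not reproduced above.

#include <math.h>

/* Recursive sine oscillator: y(n) = 2cos(w0)*y(n-1) - y(n-2).          */
/* Fills buf[] with len samples of A*sin(w0*n), starting at n = 1.      */
void sine_gen(float *buf, int len, float A, float w0)
{
    float y1 = A * sinf(w0);        /* y(1) = A sin(w0)                 */
    float y2 = 0.0f;                /* y(0) = 0, see (6.6.29b)          */
    float c  = 2.0f * cosf(w0);
    float y0;
    int n;

    for (n = 0; n < len; n++) {
        buf[n] = y1;                /* output the current sample        */
        y0 = c * y1 - y2;           /* recursion (6.6.28) with x(n) = 0 */
        y2 = y1;
        y1 = y0;
    }
}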
A simple parametric equalizer filter can be designed from the resonator given in (6.6.15)
by adding a pair of zeros near the poles, at the same angle as the poles. That is, placing
the complex-conjugate zeros at

z_i = r_z e^(±jω₀),  (6.6.31)

where 0 < r_z < 1. Thus the transfer function given in (6.6.15) becomes

H(z) = (1 + b₁z^(−1) + b₂z^(−2))/(1 + a₁z^(−1) + a₂z^(−2)).  (6.6.32)
When r_z < r_p, the pole dominates over the zero because it is closer to the unit circle
than the zero is, and thus generates a peak in the frequency response at ω = ω₀. When
r_z > r_p, the zero dominates over the pole, producing a dip in the frequency
response. When the pole and zero are very close to each other, the effects of the poles
and zeros are reduced, resulting in a flat response. Therefore Equation (6.6.32) provides
a boost if rz < rp , or a reduction if rz > rp . The amount of gain and attenuation is
controlled by the difference between rp and rz . The distance from rp to the unit circle will
determine the bandwidth of the equalizer.
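A short sketch of how the pole and zero radii of (6.6.31) and (6.6.15) translate into the coefficients of (6.6.32) is given below (a hypothetical helper; the overall gain normalization is omitted for simplicity):

#include <math.h>

/* Compute biquad coefficients of the simple parametric equalizer       */
/* (6.6.32): zeros at rz*e^{+/-jw0}, poles at rp*e^{+/-jw0}.             */
/* rz < rp boosts and rz > rp cuts the response at w0.                   */
void eq_coeffs(double w0, double rz, double rp, double b[3], double a[3])
{
    b[0] = 1.0;  b[1] = -2.0 * rz * cos(w0);  b[2] = rz * rz;
    a[0] = 1.0;  a[1] = -2.0 * rp * cos(w0);  a[2] = rp * rp;
}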
6.7 Software Developments and Experiments Using the TMS320C55x

Digital IIR filters are widely used in practical DSP applications. In the previous
sections, we discussed the characteristics, design, realization, and implementation of IIR
filters. The experiments given in this section demonstrate the DSP system design process
using an IIR filter as an example. We will also discuss some practical considerations for
real-time applications.
As shown in Figure 1.8, a DSP system design usually consists of several steps, such as
system requirements and specifications, algorithm development and simulation, soft-
ware development and debugging, as well as system integration and testing. In this
section, we will use an IIR filter as an example to show these steps with an emphasis on
software development. First, we define the filter specifications such as the filter type,
passband and stopband frequency ranges, passband ripple, and stopband attenuation.
We then use MATLAB to design the filter and simulate its performance. After the
simulation results meet the given specifications, we begin the software development
process. We start with writing a C program with the floating-point data format in order
to compare with MATLAB simulation results. We then measure the filter performance
and improve its efficiency by using fixed-point C implementation and C55x assembly
language. Finally, the design is integrated into the DSP system and is tested again.
Figure 6.24 shows a commonly used flow chart of DSP software development. In the
past, software development was heavily concentrated in stage 3, while stage 2 was
skipped. With the rapid improvement of DSP compiler technologies in recent years, C
compilers have been widely used throughout stages 1 and 2 of the design process. Aided
by compiler optimization features such as intrinsic as well as fast DSP processor speed,
real-time DSP applications are widely implemented using the mixed C and assembly
code. In the first experiment, we will use the floating-point C code to implement an
IIR filter in the first stage as shown in Figure 6.24. Developing code in stage 1 does not
require knowledge of the DSP processors and is suitable for algorithm development and
analysis. The second and third experiments emphasize the use of C compiler optimiza-
tion, data type management, and intrinsic for stage 2 of the design process. The third
stage requires the longest development time because assembly language programming
is much more difficult than C language programming. The last experiment uses
the assembly code in order to compare it with previous experiments. In general, the
assembly code is proven to be the most efficient in implementing DSP algorithms such
as filtering that require intensive multiply/accumulate operations, while C code can do
well in data manipulation such as data formatting and arrangement.
[Figure 6.24 Flow chart of DSP software development: the DSP algorithm is first implemented in floating-point C (stage 1), refined in fixed-point C (stage 2), and finally hand-optimized in assembly code (stage 3), with efficiency checks and refinements after each stage]
Digital filter coefficients can be determined by filter design software packages such as
MATLAB for given specifications. As mentioned in Section 6.4, high-order IIR filters
are often implemented in the form of cascade or parallel second-order sections for real-
time applications. For instance, the fourth-order Butterworth filter given by Example
6.12 can be realized in the cascade direct-form II structure. The following MATLAB
script (s671.m in the software package) shows a lowpass IIR Butterworth filter design
process.
%
% Filter specifications
%
Fs = 8000;      % Sampling frequency 8 kHz
fc = 800;       % Passband cutoff frequency 800 Hz
fs = 1600;      % Stopband frequency 1.6 kHz
Rp = 1;         % Passband ripple in dB
Rs = 20;        % Stopband attenuation in dB
Wp = 2*fc/Fs;   % Normalized passband edge frequency
Ws = 2*fs/Fs;   % Normalized stopband edge frequency
%
% Filter design
%
[N,Wn] = buttord(Wp,Ws,Rp,Rs);  % Filter order selection
[b,a] = butter(N,Wn);           % Butterworth filter design
[Z,P,K] = tf2zp(b,a);           % Transfer function to zero-pole
[sos,G] = zp2sos(Z,P,K);        % Zero-pole to second-order sections
This program generates a fourth-order IIR filter with the following coefficient vectors:

b = [0.0098, 0.0393, 0.0590, 0.0393, 0.0098]
a = [1.0000, -1.9908, 1.7650, -0.7403, 0.1235].

The fourth-order IIR filter is then converted into two second-order sections represented
by the coefficient matrix sos and the overall system gain G. By decomposing the
coefficient matrix sos defined in (6.4.19), we obtain the two matrices

b = [0.0992  0.1984  0.0992        a = [1.0  -0.8659  0.2139
     0.0992  0.1984  0.0992]            1.0  -1.1249  0.5770],   (6.7.1)

where we have distributed the overall gain factor equally between the two second-order
sections for simplicity. In the subsequent sections, we will use this Butterworth
filter for the TMS320C55x experiments.
For an IIR filter consisting of K second-order sections, the I/O equation of the cascade
direct-form II realization is given by Equation (6.4.14). The C implementation of
general cascaded second-order sections can be written as follows:

temp = input[n];
for (k = 0; k < IIR_SECTION; k++)
{
    w[k][0] = temp - a[k][1]*w[k][1] - a[k][2]*w[k][2];
    temp    = b[k][0]*w[k][0] + b[k][1]*w[k][1] + b[k][2]*w[k][2];
    w[k][2] = w[k][1];    /* w(n-2) <- w(n-1) */
    w[k][1] = w[k][0];    /* w(n-1) <- w(n)   */
}
output[n] = temp;

where a[][] and b[][] are the filter coefficient matrices defined in (6.7.1), and w[][] is the
signal buffer for w_k(n − m), m = 0, 1, 2. The row index k identifies the kth second-order
IIR filter section, and the column index points to the filter coefficient or signal
sample in the buffer.
As mentioned in Chapter 5, the zero-overhead repeat loop, multiply-accumulate
instructions, and circular-buffer addressing modes are three important features of
DSP processors. To better understand these features, we write the IIR filter function
in C using data pointers to simulate the circular buffers instead of two-dimensional
arrays. We also arrange the C statements to mimic the DSP multiply/accumulate
operations. The following C program is an IIR filter that consists of Ns second-order
sections in cascade form. The complete block IIR filter function (iir.c), written in
floating-point C, is provided in the experimental software package.
m = Ns*5;            /* Setup for circular buffer C[m] */
k = Ns*2;            /* Setup for circular buffer w[k] */
j = 0;
w_0 = x[n];          /* Get input signal */
for (i = 0; i < Ns; i++)
{
    w_0 -= *(w+l) * *(C+j); j++; l = (l+Ns)%k;
    w_0 -= *(w+l) * *(C+j); j++;
    temp = *(w+l);
    *(w+l) = w_0;
    w_0  = temp * *(C+j); j++;
    w_0 += *(w+l) * *(C+j); j++; l = (l+Ns)%k;
    w_0 += *(w+l) * *(C+j); j = (j+1)%m; l = (l+1)%k;
}
y[n] = w_0;          /* Save output */
The coefficient and signal buffers are configured as circular buffers, as shown in Figure
6.25. The signal buffer contains two elements, w_k(n − 1) and w_k(n − 2), for each second-order
section. The signal-buffer pointer is initialized to the first sample, w₁(n − 1),
in the buffer. The coefficient vector is arranged with five coefficients (a_1k, a_2k, b_2k, b_0k,
[Figure 6.25 Circular buffers used by the IIR filter: the coefficient buffer C[] holds five coefficients (a_1k, a_2k, b_2k, b_0k, b_1k) for each section, and the signal buffer w[] holds w₁(n−1), ..., w_K(n−1) followed by w₁(n−2), ..., w_K(n−2), so the offset between the two halves equals the number of sections]
and b_1k) per section, with the coefficient pointer initialized to the first coefficient,
a_11. The circular pointers are updated by j = (j+1)%m and l = (l+1)%k, where m
and k are the sizes of the coefficient and signal buffers, respectively.
The C function exp6a.c used for Experiment 6A is listed in Table 6.3. This program
calls the software signal generator signal_gen2()to create a block of signal samples
for testing. It then calls the IIR filter function iir to perform the lowpass filtering
process. The lowpass filter used for the experiment is the fourth-order Butterworth IIR
/*
   exp6a.c - Direct-form II IIR filter implementation
             in floating-point C, using the signal generator
*/
#define M  128   /* Number of samples per block     */
#define Ns 2     /* Number of second-order sections */

/* Lowpass IIR filter coefficients */
float C[Ns*5] = {   /* i is the section index                      */
    /* A[i][1], A[i][2], B[i][2], B[i][0], B[i][1] */
    -0.8659, 0.2139, 0.0992, 0.0992, 0.1984,
    -1.1249, 0.5770, 0.0992, 0.0992, 0.1984 };

/* IIR filter signal buffer:
   w[] = w[i](n-1), w[i+1](n-1), ..., w[i](n-2), w[i+1](n-2), ... */
float w[Ns*2];
int out[M];
int in[M];

/* IIR filter function */
extern void iir(int *, int, int *, float *, int, float *);
/* Software signal generator */
extern void signal_gen2(int *, int);

void main(void)
{
    int i;

    /* Initialize IIR filter signal buffer */
    for (i = 0; i < Ns*2; i++)
        w[i] = 0;

    /* IIR filtering */
    for (;;)
    {
        signal_gen2(in, M);           /* Generate a block of samples */
        iir(in, M, out, C, Ns, w);    /* Filter a block of samples   */
    }
}
filter designed in Section 6.7.1. We rearranged the filter coefficients for using circular
buffer. Two temporary variables, temp and w_0, are used for intermediate storage.
1. Create the project exp6a and add the linker command file exp6.cmd, the C
functions iir.c, exp6a.c, signal_gen2.c, and sine.asm to the project.
The lowpass filter iir()will attenuate high-frequency components from the input
signal generated by the signal generator signal_gen2(), which uses the recursive
sinewave generator sine()to generate three sinewaves at 800 Hz, 1.8 kHz, and
3.3 kHz.
2. Use rts55.lib for initializing the C function main() and build the project
exp6a.
3. Set a breakpoint at the statement for(;;)of the main()function, and use the CCS
graphic function to view the 16-bit integer output samples in the buffer out []. Set
data length to 128 for viewing one block of data at a time. Animate the filtering
process, and observe the filter output as a clean 800 Hz sinewave.
4. Profile the IIR filter performance by measuring the average DSP clock cycles.
Record the clock cycles and memory usage of the floating-point C implementation.
5. Overflow occurs when the results of arithmetic operations are larger than the fixed-
point DSP can represent. Before we move on to fixed-point C implementation, let us
examine the IIR filter for possible overflow. First, change the conditional compiling
bit CHECK_OVERFLOW defined in iir.c from 0 to 1 to enable the sections that
search for maximum and minimum intermediate values. Then, add w_max and
w_min to the CCS watch window. Finally, run the experiment in the animation
mode and examine the values of w_max and w_min. If |w_max| ≥ 1 or |w_min| ≥ 1,
an overflow will occur when this IIR filter is implemented on a 16-bit fixed-point
processor. If overflow is detected, modify the IIR filter routine by scaling down
the input signal until |w_max| and |w_min| are less than 1. Remember to
scale up the filter output if the input is scaled down.
The TMS320C55x C compiler provides intrinsics that are compiled directly into inlined
C55x assembly instructions. The intrinsic function names are similar to their mnemonic
assembly counterparts. For example, the signed multiply-accumulate intrinsic

z = _smac(z, x, y);   /* Perform signed z = z + x*y */

will perform the equivalent assembly instruction

macm Xmem, Ymem, ACx

Table 6.4 lists the intrinsics supported by the TMS320C55x.
Table 6.4 TMS320C55x C compiler intrinsics (a, b, c are 16-bit and d, e, f are 32-bit data)

C compiler intrinsic                Description
c = _sadd(int a, int b);            Adds 16-bit integers a and b, with SATA set, producing a saturated 16-bit result c.
f = _lsadd(long d, long e);         Adds 32-bit integers d and e, with SATD set, producing a saturated 32-bit result f.
c = _ssub(int a, int b);            Subtracts 16-bit integer b from a with SATA set, producing a saturated 16-bit result c.
f = _lssub(long d, long e);         Subtracts 32-bit integer e from d with SATD set, producing a saturated 32-bit result f.
c = _smpy(int a, int b);            Multiplies a and b, and shifts the result left by 1. Produces a saturated 16-bit result c (upper 16 bits; SATD and FRCT set).
f = _lsmpy(int a, int b);           Multiplies a and b, and shifts the result left by 1. Produces a saturated 32-bit result f (SATD and FRCT set).
f = _smac(long d, int a, int b);    Multiplies a and b, shifts the result left by 1, and adds it to d. Produces a saturated 32-bit result f (SATD, SMUL, and FRCT set).
f = _smas(long d, int a, int b);    Multiplies a and b, shifts the result left by 1, and subtracts it from d. Produces a 32-bit result f (SATD, SMUL, and FRCT set).
c = _abss(int a);                   Creates a saturated 16-bit absolute value, c = |a|; _abss(0x8000) => 0x7FFF (SATA set).
f = _labss(long d);                 Creates a saturated 32-bit absolute value, f = |d|; _labss(0x80000000) => 0x7FFFFFFF (SATD set).
c = _sneg(int a);                   Negates the 16-bit value with saturation, c = -a; _sneg(0xFFFF8000) => 0x00007FFF.
The floating-point IIR filter function given in the previous experiment can be con-
verted to the fixed-point C implementation using these intrinsics. To prevent inter-
mediate overflow, we scale the input samples to Q14 format in the fixed-point
implementation. Since the largest filter coefficient is between 1 and 2, we use Q14
representation for the fixed-point filter coefficients defined in (6.7.1). The implementa-
tion of the IIR filter in the fixed-point Q14 format is given as follows:
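The packaged fixed-point function iir_i1.c is not reproduced here; the following is only a rough sketch, under the stated Q14 assumptions, of how one second-order section might be written with the intrinsics of Table 6.4 (it compiles only with the TMS320C55x C compiler, and the names are illustrative):

/* One second-order direct-form II section in Q14 using C55x intrinsics. */
/* x, w[0..1], and the coefficients c[] = {a1, a2, b0, b1, b2} are Q14.   */
/* _smac/_smas accumulate (a*b) << 1, so the accumulator holds Q29 data.  */
static int biquad_q14(int x, const int *c, int *w)
{
    long acc;
    int  w0;

    acc = (long)x << 15;              /* promote Q14 input to Q29        */
    acc = _smas(acc, c[0], w[0]);     /* - a1 * w(n-1)                   */
    acc = _smas(acc, c[1], w[1]);     /* - a2 * w(n-2)                   */
    w0  = (int)(acc >> 15);           /* w(n) back in Q14                */

    acc = _lsmpy(c[2], w0);           /*   b0 * w(n)                     */
    acc = _smac(acc, c[3], w[0]);     /* + b1 * w(n-1)                   */
    acc = _smac(acc, c[4], w[1]);     /* + b2 * w(n-2)                   */

    w[1] = w[0];                      /* update the signal buffer        */
    w[0] = w0;
    return (int)(acc >> 15);          /* Q14 output                      */
}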
1. Create the project exp6b that includes the linker command file exp6.cmd, the C
functions exp6b.c, iir_i1.c, signal_gen2.c, and the assembly routine
sine.asm.
2. Use rts55.lib for initializing the C function main(), and build the project
exp6b.
3. Set a breakpoint at the statement for(;;) of the main() function, and use the CCS
graphic function to view the 16-bit integer output samples in the buffer out[]. Set
the data length to 128 for viewing one block of data at a time. Animate the filtering
process and observe the filter output as a clean 800 Hz sinewave.
4. Profile the IIR filter function iir_i1(), and compare the results with those
obtained in Experiment 6A.
5. In the file iir_i1.c, set the scaling factor SCALE to 0 so the input samples will not be
scaled. Rebuild the project, and run the IIR filter experiment in the animation mode.
We will see output distortion caused by the intermediate overflow.
From the previous experiments, the fixed-point IIR filter implemented with intrinsics
is far more efficient than the floating-point C implementation. We can further enhance
the performance of the C function by taking advantage of compiler optimization and by
restructuring the program so that the C compiler generates more efficient assembly code.
The TMS320C55x C compiler has many built-in C functions in its run-time support
library rts55.lib. Although these functions are helpful, most of them run at a slower
speed because of nested library function calls, so we should avoid using them in real-time
applications if possible. For example, the MOD (%) operation we use to simulate the
circular addressing mode can be replaced by a simple bitwise AND (&) operation if the
size of the buffer is a power of 2, such as 2, 4, 8, 16, and so on. The example given in
Table 6.5 shows that the compiler generates more efficient assembly code when the AND
operator is used instead of the MOD operator, because the AND operation does not
invoke the library function I$$MOD.
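As a brief sketch of the two forms of the circular-buffer index update (the buffer length and variable names are illustrative assumptions, not the book's code):

/* Sketch: updating a circular-buffer index without the MOD operator.
 * Assumes the buffer length is a power of 2 (here 2*Ns with Ns = 2).
 */
#define Ns   2
#define LEN  (2 * Ns)            /* buffer length, must be a power of 2 */

unsigned int next_index(unsigned int l)
{
    /* Portable form:  l = (l + Ns) % LEN;   calls I$$MOD on the C55x   */
    /* Faster form when LEN is a power of 2:                            */
    return (l + Ns) & (LEN - 1); /* same result, no library call        */
}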
Loop counters
The for-loop is the most commonly used loop-control operation in C programs for
DSP applications. The assembly code generated by the C55x C compiler varies depending
on how the for-loop is written. Because the compiler must verify both the positive and
negative conditions of a signed integer loop counter against the loop limit, it creates
more lines of assembly code to check the entrance and termination of the loop. By using
an unsigned integer as the counter, the C compiler only needs to generate code that
compares the positive loop condition. Another important loop-control technique is to use
a down counter instead of an up counter whenever possible, because most of the built-in
conditional instructions act upon zero conditions. The example given in Table 6.6 shows
the assembly code improvement when an unsigned integer is used as a down counter.
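The following fragment contrasts the two loop styles; the array name and length are illustrative assumptions, and only the loop control differs:

/* Sketch: signed up-counting loop vs. unsigned down-counting loop. */
#define N 128
int buf[N];

void clear_up(void)
{
    int i;                              /* signed up counter           */
    for (i = 0; i < N; i++)
        buf[i] = 0;
}

void clear_down(void)
{
    unsigned int i;                     /* unsigned down counter       */
    for (i = N; i > 0; i--)             /* compiler tests against zero */
        buf[i - 1] = 0;
}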
Using a local repeat-loop is another way to improve DSP run-time efficiency. The
local repeat-loop uses the C55x instruction buffer queue (see Figure 2.2) to store all the
instructions within a loop, so it can execute the loop repeatedly without additional
instruction fetches. To allow the compiler to generate local repeat-loops, we should
reduce the number of instructions within the loop because the size of the instruction
buffer queue is only 64 bytes.

Table 6.5  Example of avoiding a library function call when applying the modulus operation

    Using the MOD operator        Using the AND operator
    Ns = 2;                       Ns = 2;
    k  = 2*Ns;                    k  = 2*Ns - 1;
    l  = (l+Ns)%k;                l  = (l+Ns)&k;

Table 6.6  Example of using an unsigned integer as a down counter for loop control
Compiler optimization
The C55x C compiler has many options. The -on option (n = 0, 1, 2, or 3) controls the
optimization level of the assembly code the compiler generates. For example, the -o3
option performs loop optimization, loop unrolling, local copy/constant propagation,
expression simplification, allocation of variables to registers, and so on. The example
given in Table 6.7 shows the code generated with and without the -o3 optimization.
1. Create the project exp6c, and add the linker command file exp6.cmd, the C functions
exp6c.c, iir_i2.c, signal_gen2.c, and the assembly routine sine.asm
into the project. The C function iir_i2.c uses unsigned integers for loop counters,
and replaces the MOD operation with the AND operation for the signal buffer.
2. Use rts55.lib for initializing the C function main(), and build the project
exp6c.
3. Relocate the C program and data variables into SARAM and DARAM sections
defined by the linker command files. Use pragma to allocate the program and data
memory as follows:
- Place the main() and iir() functions into the program SARAM, and name the
  section iir_code.
- Allocate the input and output buffers in[] and out[] to data SARAM, and
  name the sections input and output.
- Put the IIR filter coefficient buffer C[] in a separate data SARAM section, and
  name the section iir_coef.
- Place the temporary buffer w[] and the temporary variables in a DARAM section,
  and name it iir_data.

Table 6.7  Example of compiler output without and with the -o3 optimization option. The C source compiled in both cases is

    Ns = 2;
    for (i = Ns*2; i > 0; i--)
        *ptr++ = 0;
5. Set a breakpoint at the statement for(;;) of the main() function, and use the CCS
graphic function to view the 16-bit integer output samples in the buffer out[]. Set
the data length to 128 for viewing one block of data at a time. Animate the filtering
process and observe the filter output as a clean 800 Hz sinewave.
6. Profile the IIR filter iir_i2(), and compare the results with those obtained in
Experiment 6B.
After examining the compiler-generated code for the IIR filter function used in Experiments
6B and 6C, we anticipate that the filter inner loop can be implemented in assembly
language in seven DSP clock cycles. Obviously, from the previous experiments, the IIR
filter implemented in C requires more cycles. To get the best performance, we can write
the IIR filtering routine in assembly language. The trade-off between C and assembly
programming is the time needed, as well as the difficulties encountered, for program
development, maintenance, and migration from one DSP system to another.
The IIR filter realized by cascading the second-order sections can be implemented in
assembly language as follows:
masm *AR3, *AR7, AC0        ; AC0 = AC0 - a1*w(n-1)
masm T3=*AR3, *AR7, AC0     ; AC0 = AC0 - a2*w(n-2)
mov  rnd(hi(AC0)), *AR3     ; Update w(n) buffer
mpym *AR7, T3, AC0          ; AC0 = bi2*w(n-2)
macm *(AR3+T1), *AR7, AC0   ; AC0 = AC0 + bi0*w(n)
macm *AR3, *AR7, AC0        ; AC0 = AC0 + bi1*w(n-1)
mov  rnd(hi(AC0)), *AR1     ; Store result
The assembly program uses three pointers. The IIR filter signal buffer for w_k(n) is
addressed by the auxiliary register AR3, while the filter coefficient buffer is pointed at
by AR7. The filter output is rounded and placed in the output buffer pointed at by AR1.
The code segment can be easily modified for filtering either a single sample of data or a
block of samples. The second-order IIR filter sections can be implemented using the inner
repeat loop, while the outer loop can be used for processing samples in blocks. The input
sample is scaled down to Q14 format, and the IIR filter coefficients are also represented in
Q14 format to prevent overflow. To compensate for the Q14 format of the coefficients and
signal samples, the final result y(n) is multiplied by 4 (implemented by shifting two bits to
the left) to scale it back to Q15 format before it is stored with rounding. The temporary
register T3 is used to hold the second element w_k(n-2) when updating the signal buffer.
1. Create the project exp6d, and include the linker command file exp6.cmd, the
C functions exp6d.c, signal_gen2.c, the assembly routine sine.asm, and
iirform2.asm into the project. The prototype of the IIR filter routine is written as
void iirform2(int *x, unsigned int M, int *y, int *h,
unsigned int N, int *w);
where
x is the pointer to the input data buffer in[],
h is the pointer to the filter coefficient buffer C[],
y is the pointer to the filter output buffer out[],
w is the pointer to the signal buffer w[],
M is the number of samples in the input buffer, and
N is the number of second-order IIR filter sections. A short usage sketch of this
routine is given after the experiment steps below.
2. Use rts55.lib for initializing the C function main(), and build the project
exp6d.
3. Set a breakpoint at the statement for(;;) of the main() function, and use the CCS
graphic function to view the 16-bit integer output samples in the buffer out[]. Set
the data length to 128 for viewing one block of data at a time. Animate the filtering
process, and observe the filter output as a clean 800 Hz sinewave.
4. Profile the IIR filter iirform2(), and compare the profile results with those
obtained in Experiments 6B and 6C.
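As promised above, a call to iirform2() might look like the following sketch; the buffer sizes and the assumption of five Q14 coefficients and a two-sample delay line per section are illustrative, not taken from the book's project files.

/* Sketch of calling iirform2(); sizes are illustrative assumptions. */
#define BLOCK    128                 /* samples per input block         */
#define SECTIONS 2                   /* number of second-order sections */

extern void iirform2(int *x, unsigned int M, int *y, int *h,
                     unsigned int N, int *w);

int in[BLOCK], out[BLOCK];
int C[5 * SECTIONS];                 /* filter coefficients in Q14      */
int w[2 * SECTIONS];                 /* signal (delay) buffer           */

void filter_block(void)
{
    iirform2(in, BLOCK, out, C, SECTIONS, w);
}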
Exercises
Part A
7. Given an analog IIR bandpass filter that has a resonance at 1 radian/second with the transfer
   function

   H(s) = (5s + 1) / (s^2 + 0.4s + 1),

   design a digital resonant filter that resonates at 100 Hz with a sampling rate of 1 kHz.
8. Design a second-order digital Butterworth filter using bilinear transform. The cut-off
frequency is 1 kHz at a sampling frequency of 10 kHz.
9. Repeat the previous problem for designing a highpass filter with the same specifications.
10. Design a second-order digital Butterworth bandpass filter with the lower cut-off frequency
200 Hz, upper cut-off frequency 400 Hz, and sampling frequency 2000 Hz.
11. Design a second-order digital Butterworth bandstop filter that has the lower cut-off fre-
quency 200 Hz, upper cut-off frequency 400 Hz, and sampling frequency 2000 Hz.
18. Consider the second-order IIR filter with the I/O equation

    y(n) = x(n) + a_1 y(n-1) + a_2 y(n-2),   n ≥ 0,

    where a_1 and a_2 are constants.
    (a) Find the transfer function H(z).
    (b) Discuss the stability conditions related to the cases:
        (1) a_1^2/4 + a_2 < 0,
        (2) a_1^2/4 + a_2 > 0,
        (3) a_1^2/4 + a_2 = 0.
19. An allpass filter has a magnitude response that is unity for all frequencies, that is,
    |H(ω)| = 1 for all ω. Such filters are useful for phase equalization of IIR designs. Show
    that the transfer function of an allpass filter is of the form

    H(z) = (z^{-L} + b_1 z^{-L+1} + ... + b_L) / (1 + b_1 z^{-1} + ... + b_L z^{-L}),

    where all the coefficients are real.
21. Design a second-order resonator with peak at 500 Hz, bandwidth 32 Hz, and operating at the
sampling rate 10 kHz.
Part B
find the factored form of the IIR transfer function in terms of second-order sections using
MATLAB.
24. Design and plot the magnitude response of an elliptic IIR lowpass filter with the following
specifications using MATLAB:
Passband edge at 800 Hz
Stopband edge at 1000 Hz
Passband ripple of 0.5 dB
25. Design an IIR Butterworth bandpass filter with the following specifications:
Passband edges at 450 Hz and 650 Hz
Stopband edges at 300 Hz and 750 Hz
Passband ripple of 1 dB
Minimum stopband attenuation of 40 dB
Sampling rate of 4 kHz.
26. Design a type I Chebyshev IIR highpass filter with passband edge at 700 Hz, stopband edge
at 500 Hz, passband ripple of 1 dB, and minimum stopband attenuation of 32 dB. The
sampling frequency is 2 kHz. Plot the magnitude response of the designed filter.
(a) plot the first 32 samples of the impulse response using MATLAB,
(b) filter the input signal that consists of two sinusoids of normalized frequencies 0.1 and 0.8
using MATLAB.
28. It is interesting to examine the frequency response of the second-order resonator filter given
    in (6.6.15) as the radius r_p and the pole angle ω_0 are varied. Use MATLAB to compute
    and plot
    (a) the magnitude response for ω_0 = π/2 and various values of r_p,
    (b) the magnitude response for r_p = 0.95 and various values of ω_0.
Part C
29. An IIR filter design and implementation using the direct-form II realization.
(a) Use MATLAB to design an elliptic bandpass IIR filter that meets the following specifications:
    - Sampling frequency is 8000 Hz
    - Lower stopband extends from 0 to 1200 Hz
    - Upper stopband extends from 2400 to 4000 Hz
    - Passband starts at 1400 Hz with a bandwidth of 800 Hz
    - Passband ripple should be no more than 0.3 dB
    - Stopband attenuation should be at least 30 dB.
(b) For the elliptic bandpass IIR filter obtained above,
    - plot the amplitude and phase responses
30. The overflow we saw in Experiments 6A and 6B is called the intermediate overflow. It
happens when the signal buffer of the direct-form II realization uses 16-bit wordlength.
Realizing the IIR filter using the direct-form I structure can eliminate the intermediate
overflow by keeping the intermediate results in the 40-bit accumulators. Write an assembly
routine to realize the fourth-order lowpass Butterworth IIR filter in the direct-form I
structure.
31. Implement the recursive quadrature oscillator shown in Figure 6.23 in TMS320C55x assem-
bly language.
32. Verify the IIR filter design in real time using a C55x EVM/DSK. Use a signal generator and
a spectrum analyzer to measure the amplitude response and plot it. Evaluate the IIR filter
according to the following steps:
- Set the TMS320C55x EVM/DSK to an 8 kHz sampling rate
- Connect the signal generator output to the audio input of the EVM/DSK
- Write an interrupt service routine (ISR) to handle the input samples
- Process the random samples at 128 samples per input block
- Verify the filter using a spectrum analyzer.
7
Fast Fourier Transform and Its
Applications
Frequency analysis of digital signals and systems was discussed in Chapter 4. To per-
form frequency analysis on a discrete-time signal, we converted the time-domain
sequence into the frequency-domain representation using the z-transform, the
discrete-time Fourier transform (DTFT), or the discrete Fourier transform (DFT). The
widespread application of the DFT to spectral analysis, fast convolution, and data
transmission is due to the development of the fast Fourier transform (FFT) algorithm
for its computation. The FFT algorithm, which allows a much more rapid computation of
the DFT, was developed in the mid-1960s by Cooley and Tukey.
It is critical to understand the advantages and the limitations of the DFT and how to
use it properly. We will discuss the important properties of the DFT in Section 7.1. The
development of FFT algorithms will be covered in Section 7.2. In Section 7.3, we will
introduce the applications of FFTs. Implementation considerations such as computa-
tional issues and finite-wordlength effects will be discussed in Section 7.4. Finally,
implementation of the FFT algorithm using the TMS320C55x for experimental purposes
will be given in Section 7.5.
7.1.1 Definitions
Given the DTFT X(ω), we take N samples over the full Nyquist interval, 0 ≤ ω < 2π, at
the discrete frequencies ω_k = 2πk/N, k = 0, 1, ..., N-1. This is equivalent to evaluating
X(ω) at N equally spaced frequencies ω_k, with a spacing of 2π/N radians (or f_s/N Hz)
between successive samples. That is,

X(ω_k) = \sum_{n=-∞}^{∞} x(n) e^{-j(2π/N)kn}
       = \sum_{n=0}^{N-1} [ \sum_{l=-∞}^{∞} x(n - lN) ] e^{-j(2π/N)kn}
       = \sum_{n=0}^{N-1} x_p(n) e^{-j(2π/N)kn},   k = 0, 1, ..., N-1,          (7.1.1a)

where

x_p(n) = \sum_{l=-∞}^{∞} x(n - lN),                                              (7.1.1b)
and the samples X(k) ≡ X(ω_k) are the DFT coefficients. The upper and lower indices in
the summation of (7.1.1a) reflect the fact that x(n) = 0 outside the range 0 ≤ n ≤ N-1.
Strictly speaking, the DFT is a mapping between an N-point sequence in the time domain
and an N-point sequence in the frequency domain that is applicable to the computation of
the DTFT of periodic and finite-length sequences.
Example 7.1: If the signals {x(n)} are real valued and N is an even number, we can
show that X(0) and X(N/2) are real values and can be computed as

X(0) = \sum_{n=0}^{N-1} x(n)

and

X(N/2) = \sum_{n=0}^{N-1} e^{-jπn} x(n) = \sum_{n=0}^{N-1} (-1)^n x(n).
The DFT of the N-point sequence x(n) is defined as

X(k) = \sum_{n=0}^{N-1} x(n) W_N^{kn},   k = 0, 1, ..., N-1,                     (7.1.3)

where

W_N^{kn} = e^{-j(2π/N)kn} = cos(2πkn/N) - j sin(2πkn/N),   0 ≤ k, n ≤ N-1        (7.1.4)

are the complex basis functions, or twiddle factors, of the DFT. Each X(k) can be viewed
as a linear combination of the sample set {x(n)} with the coefficient set {W_N^{kn}}. Thus
we have to store the twiddle factors in terms of their real and imaginary parts in the DSP
memory. Note that W_N is the Nth root of unity since W_N^N = e^{-j2π} = 1 = W_N^0. All
the successive powers W_N^k, k = 0, 1, ..., N-1, are also Nth roots of unity, but in the
clockwise direction on the unit circle. It can be shown that W_N^{N/2} = e^{-jπ} = -1, and
the symmetry property

W_N^{k+N/2} = -W_N^{k},   0 ≤ k ≤ N/2 - 1                                        (7.1.5a)

holds.
Figure 7.1 illustrates the cyclic property of the twiddle factors for an eight-point DFT.
Figure 7.1  Twiddle factors W_8^k, k = 0, 1, ..., 7, on the unit circle (note that W_8^4 = -W_8^0 = -1 and W_8^6 = -W_8^2)
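As a concrete reference for the computational cost discussed later in this chapter, a direct evaluation of (7.1.3) for a real-valued input can be sketched as follows; the function and array names are illustrative assumptions.

/* Sketch: direct O(N^2) evaluation of the DFT in (7.1.3) for real input. */
#include <math.h>

void dft_real(const double *x, double *Xre, double *Xim, int N)
{
    const double PI = 3.14159265358979323846;

    for (int k = 0; k < N; k++) {
        double re = 0.0, im = 0.0;
        for (int n = 0; n < N; n++) {
            double arg = 2.0 * PI * k * n / N;    /* angle of W_N^{kn}  */
            re += x[n] * cos(arg);                /* Re{x(n) W_N^{kn}}  */
            im -= x[n] * sin(arg);                /* Im{x(n) W_N^{kn}}  */
        }
        Xre[k] = re;
        Xim[k] = im;
    }
}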
The inverse discrete Fourier transform (IDFT) is used to transform the X(k) back into
the original sequence x(n). Given the frequency samples X(k), the IDFT is defined as
x(n) = (1/N) \sum_{k=0}^{N-1} X(k) e^{j(2π/N)kn} = (1/N) \sum_{k=0}^{N-1} X(k) W_N^{-kn},   n = 0, 1, ..., N-1.     (7.1.6)
This is identical to the DFT with the exception of the normalizing factor 1/N and
the sign of the exponent of the twiddle factors. The IDFT shows that there is no loss
of information by transforming the spectrum X(k) back into the original time sequence
x(n). The DFT given in (7.1.3) is called the analysis transform since it analyzes the signal
x(n) at N frequency components. The IDFT defined in (7.1.6) is called the synthesis
transform because it reconstructs the signal x(n) from the frequency components.
Example 7.2: Consider the finite-length sequence x(n) = a^n, n = 0, 1, ..., N-1. Its DFT is

X(k) = \sum_{n=0}^{N-1} a^n e^{-j(2πk/N)n} = \sum_{n=0}^{N-1} (a e^{-j2πk/N})^n
     = [1 - (a e^{-j2πk/N})^N] / [1 - a e^{-j2πk/N}]
     = (1 - a^N) / (1 - a e^{-j2πk/N}),   k = 0, 1, ..., N-1.
The DFT and IDFT defined in (7.1.3) and (7.1.6) can be expressed in matrix-vector form as

X = W x                                                                           (7.1.7a)

and

x = (1/N) W* X,                                                                   (7.1.7b)

where x = [x(0) x(1) ... x(N-1)]^T is the signal vector, the frequency-domain DFT
coefficients are contained in the complex vector X = [X(0) X(1) ... X(N-1)]^T, and the
N x N twiddle-factor matrix (or DFT matrix) W is given by

W = [W_N^{kn}], 0 ≤ k, n ≤ N-1

  = [ 1    1          ...   1
      1    W_N^1      ...   W_N^{N-1}
      .    .          .     .
      1    W_N^{N-1}  ...   W_N^{(N-1)^2} ],                                      (7.1.8)

and W* is the complex conjugate of the matrix W. Since W is a symmetric matrix, the
inverse matrix W^{-1} = (1/N) W* was used to derive (7.1.7b).
Example 7.3: Given x(n) = {1, 1, 0, 0}, the DFT of this four-point sequence can be
computed using the matrix formulation as

X = [ 1   1      1      1
      1   W_4^1  W_4^2  W_4^3
      1   W_4^2  W_4^4  W_4^6
      1   W_4^3  W_4^6  W_4^9 ] x

  = [ 1   1   1   1     [ 1       [ 2
      1  -j  -1   j       1         1 - j
      1  -1   1  -1       0    =    0
      1   j  -1  -j ]     0 ]       1 + j ],

and the IDFT recovers the original sequence:

x = (1/4) [ 1   1         1         1
            1   W_4^{-1}  W_4^{-2}  W_4^{-3}
            1   W_4^{-2}  W_4^{-4}  W_4^{-6}
            1   W_4^{-3}  W_4^{-6}  W_4^{-9} ] X

  = (1/4) [ 1   1   1   1     [ 2           [ 1
            1   j  -1  -j       1 - j         1
            1  -1   1  -1       0        =    0
            1  -j  -1   j ]     1 + j ]       0 ].
As shown in Figure 7.1, the twiddle factors are equally spaced around the unit circle
at frequency intervals of f_s/N (or 2π/N). Therefore the frequency samples X(k) represent
the discrete frequencies

f_k = k f_s / N,   k = 0, 1, ..., N-1.                                            (7.1.9)

The computational frequency resolution of the DFT is equal to the frequency increment
f_s/N, and is sometimes referred to as the bin spacing of the DFT outputs. The spacing
of the spectral lines depends on the number of data samples. This issue will be further
discussed in Section 7.3.2.
Since the DFT coefficient X(k) is a complex variable, it can be expressed in polar form
with magnitude

|X(k)| = sqrt{ (Re[X(k)])^2 + (Im[X(k)])^2 }                                      (7.1.11)

and phase

φ(k) = tan^{-1}( Im[X(k)] / Re[X(k)] ).                                           (7.1.12)
The DFT is important for the analysis of digital signals and the design of DSP systems.
Like the Fourier, Laplace, and z-transforms, the DFT has several important properties
that enhance its utility for analyzing finite-length signals. Many DFT properties are
similar to those of the Fourier transform and the z-transform. However, there are some
differences. For example, the shifts and convolutions pertaining to the DFT are circular.
Some important properties are summarized in this section. The circular convolution
property will be discussed in Section 7.1.3.
Linearity
If {x(n)} and {y(n)} are time sequences of the same length, then

DFT[a x(n) + b y(n)] = a X(k) + b Y(k),
where a and b are arbitrary constants. Linearity is a key property that allows us to
compute the DFTs of several different signals and determine the combined DFT via the
summation of the individual DFTs. For example, the frequency response of a given
system can be easily evaluated at each frequency component. The results can then be
combined to determine the overall frequency response.
Complex-conjugate property
X(M + k) = X*(M - k),   0 ≤ k ≤ M,                                                (7.1.15)

where M = N/2 if N is even, and M = (N-1)/2 if N is odd. This property shows that
only the first (M + 1) DFT coefficients are independent. Only the frequency components
from k = 0 to k = M are needed in order to completely define the output. The rest can
be obtained from the complex conjugates of the corresponding coefficients, as illustrated
in Figure 7.2.
The complex-conjugate (or symmetry) property shows that

Re[X(M + k)] = Re[X(M - k)]                                                       (7.1.16)

and

Im[X(M + k)] = -Im[X(M - k)].                                                     (7.1.17)

Thus the DFT of a real sequence produces symmetric real frequency components and
anti-symmetric imaginary frequency components about X(M). The real part of the DFT
output is an even function, and the imaginary part of the DFT output is an odd
function. From (7.1.16) and (7.1.17), we obtain

|X(M + k)| = |X(M - k)|

and

φ(M + k) = -φ(M - k).
Because of the symmetry of the magnitude spectrum and the anti-symmetry of the phase
spectrum, only the first (M + 1) outputs represent unique information from the input
signal. If the input to the DFT is a complex signal, however, all N complex outputs
could carry information.
Figure 7.2  Complex-conjugate symmetry of the DFT coefficients about X(M)
Periodicity
Because of the periodicity shown in Figure 7.1, the DFT and the IDFT produce
periodic results with period N. Therefore the frequency and time samples produced by
(7.1.3) and (7.1.6), respectively, are periodic with period N. That is,

X(k) = X(k + N)

and

x(n) = x(n + N).

The finite-length sequence x(n) can be considered as one period of a periodic function
with period N. Also, the DFT X(k) is periodic with period N.
As discussed in Section 4.4, the spectrum of a discrete-time signal is periodic. For a
real-valued signal, the frequency components from 0 to f_s/2 are mirrored in the range
from -f_s/2 to 0, and the entire range from -f_s/2 to f_s/2 is repeated infinitely in both
directions in the frequency domain. The DFT outputs represent a single period (from
0 to f_s) of the spectrum.
Circular shifts
Let {X(k)} be the DFT of a given N-periodic sequence {x(n)}, and let y(n) be a circularly
shifted sequence defined by

y(n) = x((n - m) mod N),                                                          (7.1.21)

where m is the number of samples by which x(n) is shifted to the right (or delayed), and
the modulo operation is defined as

(l) mod N = l + iN                                                                (7.1.22a)

for some integer i such that the result lies between 0 and N - 1.
For example, if m = 1, x(N-1) replaces x(0), x(0) replaces x(1), x(1) replaces x(2), and so on.
Thus a circular shift of an N-point sequence is equivalent to a linear shift of its periodic
extension.
For the y(n) given in (7.1.21), we have

Y(k) = W_N^{mk} X(k).

This equation states that the DFT coefficients of an N-periodic sequence that has been
circularly shifted by m samples are obtained by multiplying X(k) by the linear phase
term W_N^{mk}.
Consider a sequence x(n) having the z-transform X(z) with a region of convergence that
includes the unit circle. If X(z) is sampled at N equally spaced points on the unit circle at
z_k = e^{j2πk/N}, k = 0, 1, ..., N-1, we obtain

X(z)|_{z = e^{j2πk/N}} = \sum_{n=-∞}^{∞} x(n) z^{-n} |_{z = e^{j2πk/N}} = \sum_{n=-∞}^{∞} x(n) e^{-j(2πk/N)n}.      (7.1.24)

This is identical to evaluating the discrete-time Fourier transform X(ω) at the N equally
spaced frequencies ω_k = 2πk/N, k = 0, 1, ..., N-1. If the sequence x(n) has a finite
duration of length N, the DFT of the sequence yields its z-transform on the unit circle at
a set of points that are 2π/N radians apart, i.e.,

X(k) = X(z)|_{z = e^{j2πk/N}},   k = 0, 1, ..., N-1.                              (7.1.25)

Therefore the DFT is equal to the z-transform of a sequence x(n) of length N, evaluated
at N equally spaced points on the unit circle in the z-plane.
Example 7.4: Consider the constant sequence x(n) = c, n = 0, 1, ..., N-1. Its DFT is

X(k) = c \sum_{n=0}^{N-1} W_N^{kn} = c (1 - W_N^{kN}) / (1 - W_N^{k}).

Since W_N^{kN} = e^{-j2πk} = 1 for all k, and W_N^{k} ≠ 1 for k ≠ iN, we have X(k) = 0
for k = 1, 2, ..., N-1. For k = 0, \sum_{n=0}^{N-1} W_N^{kn} = N. Therefore we obtain

X(k) = cN δ(k),   k = 0, 1, ..., N-1.
The Fourier transform, the Laplace transform, and the z-transform of the linear
convolution of two time functions are simply the products of the transforms of the
individual functions. A similar result holds for the DFT, but instead of a linear
convolution of two sequences, we have a circular convolution. If x(n) and h(n) are
real-valued N-periodic sequences, and y(n) is the circular convolution of x(n) and h(n),
defined as

y(n) = x(n) ⊗ h(n),   n = 0, 1, ..., N-1,                                         (7.1.26)
Figure 7.3  Circular convolution of two sequences using the concentric circle approach
where ⊗ denotes circular convolution, then

Y(k) = X(k) H(k).                                                                 (7.1.27)

Thus circular convolution in the time domain is equivalent to multiplication in the DFT
domain. Note that to compute the product defined in (7.1.27), the DFTs must be of
equal length. This means that the shorter of the two original sequences must be padded
with zeros to the length of the other before its DFT is computed.
The circular convolution of two periodic signals with period N can be expressed as

y(n) = \sum_{m=0}^{N-1} x(m) h((n - m) mod N) = \sum_{m=0}^{N-1} h(m) x((n - m) mod N),      (7.1.28)
where y(n) is also periodic with period N. This cyclic property of circular convolution
can be illustrated in Figure 7.3 by using two concentric rotating circles. To perform
circular convolution, N samples of x(n) [or h(n)] are equally spaced around the outer
circle in the clockwise direction, and N samples of h(n) [or x(n)] are displayed on the
inner circle in the counterclockwise direction starting at the same point. Corresponding
samples on the two circles are multiplied, and the resulting products are summed to
produce an output. The successive value of the circular convolution is obtained by
rotating the inner circle one sample in the clockwise direction; the result is computed by
summing the corresponding products. The process is repeated to obtain the next result,
until the first sample of the inner circle lines up with the first sample of the outer circle
again.
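The same procedure can be written directly from (7.1.28); the following is a small sketch, with illustrative array and function names:

/* Sketch: circular convolution of two length-N sequences, per (7.1.28). */
void circular_convolve(const double *x, const double *h, double *y, int N)
{
    for (int n = 0; n < N; n++) {
        double sum = 0.0;
        for (int m = 0; m < N; m++) {
            int idx = (n - m) % N;            /* (n - m) mod N            */
            if (idx < 0)
                idx += N;                     /* keep the index in 0..N-1 */
            sum += x[m] * h[idx];
        }
        y[n] = sum;
    }
}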
Example 7.5: Consider the 8-point sequences x(n) = {1, 1, 1, 1, 1, 0, 0, 0} and
h(n) = {0, 0, 0, 1, 1, 1, 1, 1}. Using the concentric-circle approach, the first two output
samples are

n = 0:  y(0) = 1 + 1 + 1 + 1 = 4
n = 1:  y(1) = 1 + 1 + 1 = 3,

and continuing the rotation for n = 2, 3, ..., 7 gives

y(n) = x(n) ⊗ h(n) = {4, 3, 2, 2, 2, 3, 4, 5}.
This circular convolution is due to the periodicity of the DFT. In circular convolution,
the two sequences are always completely overlapping. As the end of one period is shifted
out, the beginning of the next is shifted in as shown in Figure 7.3. To eliminate the circular
effect and ensure that the DFT method results in a linear convolution, the signals must be
zero-padded so that the product terms from the end of the period being shifted out are
zero. Zero padding refers to the operation of extending a sequence of length N1 to a length
N2 (> N1 ) by appending (N2 N1 ) zero samples to the tail of the given sequence. Note
that the padding number of zeros at the end of signal has no effect on its DTFT.
Example 7.6: Consider the previous example. If these 8-point sequences h(n) and
x(n) are zero-padded to 16 points, the resulting circular convolution is
y(n) = x(n) ⊗ h(n) = {0, 0, 0, 1, 2, 3, 4, 5, 4, 3, 2, 1, 0, 0, 0, 0}.
This result is identical to the linear convolution of the two sequences. Thus the
linear convolution discussed in Chapter 5 can be realized by the circular convolu-
tion with proper zero padding.
In MATLAB, zero padding can be implemented using the function zeros. For
example, the 8-point sequence x(n) given in Example 7.5 can be zero-padded to 16
points with the following command:

x = [1, 1, 1, 1, 1, zeros(1, 11)];

where the MATLAB function zeros(1, N) generates a row vector of N zeros.
The DFT is a very effective method for determining the frequency spectrum of a time-
domain signal. The only drawback of this technique is the amount of computation
necessary to calculate the DFT coefficients X(k). To compute each X(k), we need
approximately N complex multiplications and N complex additions based on the DFT
defined in (7.1.3). Since we need to compute N samples of X(k) for k = 0, 1, ..., N-1,
a total of approximately N^2 complex multiplications and N^2 - N complex additions is
required. When a complex multiplication is carried out using digital hardware, it
requires four real multiplications and two real additions. Therefore the number of
arithmetic operations required to compute the DFT is proportional to 4N^2, which
becomes very large for large N. In addition, computing and storing the twiddle factors
W_N^{kn} becomes a formidable task for large values of N.
The same values of the twiddle factors W_N^{kn} defined in (7.1.4) are calculated many
times during the computation of the DFT, since W_N^{kn} is a periodic function with a
limited number of distinct values. Because W_N^N = 1,

W_N^{kn} = W_N^{(kn) mod N}   for kn > N.                                         (7.2.1)

For example, different powers of W_N^{kn} have the same value, as shown in Figure 7.1
for N = 8. In addition, some twiddle factors have real or imaginary parts equal to 1 or 0.
By exploiting these redundancies, a very efficient algorithm, called the FFT, can be
derived. For example, if N is a power of 2, the FFT makes it possible to calculate the
DFT with N log2 N operations instead of N^2 operations. For N = 1024, the FFT requires
about 10^4 operations instead of the 10^6 operations needed by the direct DFT.
The generic term FFT covers many different algorithms with different features,
advantages, and disadvantages. Each FFT has different strengths and makes different
tradeoffs between code complexity, memory usage, and computation requirements. The
FFT algorithm introduced by Cooley and Tukey requires approximately N log2 N
multiplications, where N is a power of 2. The FFT can also be applied to cases where
N is a power of an integer other than 2. In this chapter, we introduce FFT algorithms
for the case where N is a power of 2, the radix-2 FFT algorithm.
There are two classes of FFT algorithms: decimation-in-time and decimation-in-
frequency. In the decimation-in-time algorithm, the input time sequence is successively
divided up into smaller sequences, and the DFTs of these subsequences are combined in
a certain pattern to yield the required DFT of the entire sequence with fewer operations.
Since this algorithm was derived by separating the time-domain sequence into succes-
sively smaller sets, the resulting algorithm is referred to as a decimation-in-time algo-
rithm. In the decimation-in-frequency algorithm, the frequency samples of the DFT are
decomposed into smaller and smaller subsequences in a similar manner.
7.2.1 Decimation-in-Time
The DFT expressed in (7.1.3) can be divided into two DFTs of length N/2. That is,

X(k) = \sum_{n=0}^{N-1} x(n) W_N^{kn}
     = \sum_{m=0}^{N/2-1} x(2m) W_N^{2mk} + \sum_{m=0}^{N/2-1} x(2m+1) W_N^{(2m+1)k}.        (7.2.3)

Since

W_N^{2mk} = e^{-j(2π/N)2mk} = e^{-j[2π/(N/2)]mk} = W_{N/2}^{mk},                             (7.2.4)

we can define x_1(m) = x(2m) and x_2(m) = x(2m+1) and write

X(k) = \sum_{m=0}^{N/2-1} x_1(m) W_{N/2}^{mk} + W_N^{k} \sum_{m=0}^{N/2-1} x_2(m) W_{N/2}^{mk},   (7.2.5)

where each of the summation terms is reduced to an N/2-point DFT. Furthermore, from the
symmetry and periodicity properties given in (7.1.5), Equation (7.2.5) can be written as

X(k)       = X_1(k) + W_N^{k} X_2(k),
X(k + N/2) = X_1(k) - W_N^{k} X_2(k),   k = 0, 1, ..., N/2 - 1,                              (7.2.6)

where X_1(k) = DFT[x_1(m)] and X_2(k) = DFT[x_2(m)] are N/2-point DFTs.
The important point about this result is that the DFT of N samples becomes a linear
combination of two smaller DFTs, each of N/2 samples. This procedure is illustrated in
Figure 7.4 for the case N = 8. The computation of X_1(k) and X_2(k) requires 2(N/2)^2
multiplications, and the computation of W_N^{k} X_2(k) requires N/2 multiplications. This
gives a total of approximately (N^2 + N)/2 multiplications. Compared with the N^2
operations for direct evaluation of the DFT, there is a saving in computation when N is
large after only one stage of splitting the signal into even and odd sequences. If we
continue this process, we can break up the N-point DFT into successively smaller DFTs
until only 2-point DFTs remain, which takes log2 N stages. The final algorithm requires
computation proportional to N log2 N, a significant saving over the original N^2.
Equation (7.2.6) is commonly referred to as the butterfly computation because of its
crisscross appearance, which is generalized in Figure 7.5. The upper group generates the
upper half of the DFT coefficient vector X, and the lower group generates the lower half.
Each butterfly involves just a single complex multiplication by a twiddle factor W_N^{k},
one addition, and one subtraction. For this first decomposition, the twiddle factors are
indexed consecutively, and the butterfly values are separated by N/2 samples. The order of
the input samples has also been rearranged (split between even and odd indices), which
will be discussed in detail later.

Figure 7.4  Decomposition of an N-point DFT into two N/2-point DFTs using the decimation-in-time algorithm, N = 8

Figure 7.5  Butterfly computation for the decimation-in-time FFT algorithm
Since N is a power of 2, N=2 is even. Each of these N=2-point DFTs in (7.2.6) can be
computed via two smaller N=4-point DFTs, and so on. The second step process is
illustrated in Figure 7.6. Note that the order of the input samples has been rearranged
as x(0), x(4), x(2), and x(6) because x(0), x(2), x(4), and x(6) are considered to be the 0th,
1st, 2nd, and 3rd inputs in a 4-point DFT. Similarly, the order of x(1), x(5), x(3), and x(7)
is used in the second 4-point DFT.
By repeating the process associated with (7.2.6), we will finally end up with a set of
2-point DFTs, since N is a power of 2. For example, in Figure 7.6 the N/4-point DFT
becomes a 2-point DFT because N = 8. Since the twiddle factor for the first stage is
W_N^0 = 1, the 2-point DFT requires only one addition and one subtraction. The 2-point
DFT illustrated in Figure 7.7 is identical to the butterfly network.
Example 7.7: Consider the two-point DFT, which has two input time-domain
samples x(0) and x(1). The output frequency-domain samples are X(0) and X(1).
For this case, the DFT can be expressed as

X(k) = \sum_{n=0}^{1} x(n) W_2^{nk},   k = 0, 1.

That is,

X(0) = x(0) + W_2^{0} x(1) = x(0) + x(1)

and

X(1) = x(0) + W_2^{1} x(1) = x(0) - x(1).

The signal flow graph is shown in Figure 7.7. Note that these results agree with
the results obtained in Example 7.1.

Figure 7.6  Second step of the decimation-in-time decomposition: each N/2-point DFT is computed with two N/4-point DFTs, N = 8

Figure 7.7  Signal flow graph of the 2-point DFT (the FFT butterfly)
As shown in Figure 7.6, the output sequence is in natural order, while the input
sequence is in an unusual order. Actually, the order of the input sequence is arranged as
if each index were written in binary form and then the order of the binary digits reversed.
The bit-reversal process is illustrated in Table 7.1 for the case N = 8. Each of the time
sample indices in decimal is converted to its binary representation. The binary bit streams
are then reversed. Converting the reversed binary numbers to decimal values gives the
reordered time indices. If the input is in natural order the output will be in bit-reversed
order. We can either shuffle the input sequence with a bit-reversal algorithm to get the
output sequence in natural order, or let the input sequence be in natural order and shuffle
the bit-reversed results to obtain the output in natural order. Note that most modern
DSP chips such as the TMS320C55x provide the bit-reversal addressing mode to support
this bit-reversal process. Therefore the input sequence can be stored in memory with the
bit-reversed addresses computed by the hardware.
For the FFT algorithm shown in Figure 7.6, once all the values for a particular stage
are computed, the old values that were used to obtain them are never required again.
Thus the FFT needs to store only the N complex values. The memory locations used for
the FFT outputs are the same as the memory locations used for storing the input data.
This observation is used to produce in-place FFT algorithms that use the same memory
locations for input samples, all intermediate calculations, and final output numbers.
Table 7.1  Bit-reversal process for N = 8

    Decimal index    Binary    Reversed binary    Bit-reversed index
         0            000           000                  0
         1            001           100                  4
         2            010           010                  2
         3            011           110                  6
         4            100           001                  1
         5            101           101                  5
         6            110           011                  3
         7            111           111                  7
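The flow graphs described above can be turned into code fairly directly. The following sketch follows the in-place, bit-reverse-first structure of the decimation-in-time algorithm; the type and function names are illustrative assumptions, and it uses double-precision arithmetic rather than the fixed-point C55x implementation discussed in Section 7.5.

/* Sketch of an in-place radix-2 decimation-in-time FFT (N a power of 2).
 * The input is bit-reversed first, so the output is in natural order.
 */
#include <math.h>

typedef struct { double re, im; } cplx;

static void bit_reverse(cplx *x, int N)
{
    for (int i = 0, j = 0; i < N; i++) {
        if (i < j) { cplx t = x[i]; x[i] = x[j]; x[j] = t; }
        int m = N >> 1;
        while (m >= 1 && j >= m) { j -= m; m >>= 1; }
        j += m;
    }
}

void fft_radix2(cplx *x, int N)
{
    const double PI = 3.14159265358979323846;

    bit_reverse(x, N);
    for (int len = 2; len <= N; len <<= 1) {               /* stage span   */
        double ang = -2.0 * PI / len;                      /* angle of W   */
        for (int i = 0; i < N; i += len) {
            for (int k = 0; k < len / 2; k++) {
                cplx w = { cos(ang * k), sin(ang * k) };   /* W_len^k      */
                cplx a = x[i + k];
                cplx b = x[i + k + len / 2];
                cplx t = { w.re * b.re - w.im * b.im,      /* w * b        */
                           w.re * b.im + w.im * b.re };
                x[i + k].re           = a.re + t.re;       /* a + w*b      */
                x[i + k].im           = a.im + t.im;
                x[i + k + len / 2].re = a.re - t.re;       /* a - w*b      */
                x[i + k + len / 2].im = a.im - t.im;
            }
        }
    }
}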
7.2.2 Decimation-in-Frequency
In the decimation-in-frequency approach, the N-point DFT is first split into sums over
the first and second halves of the input sequence:

X(k) = \sum_{n=0}^{N/2-1} x(n) W_N^{nk} + \sum_{n=N/2}^{N-1} x(n) W_N^{nk}
     = \sum_{n=0}^{N/2-1} x(n) W_N^{nk} + W_N^{(N/2)k} \sum_{n=0}^{N/2-1} x(n + N/2) W_N^{nk}.     (7.2.7)

Since

W_N^{N/2} = e^{-j(2π/N)(N/2)} = e^{-jπ} = -1,                                                       (7.2.8)

we have

X(k) = \sum_{n=0}^{N/2-1} [x(n) + (-1)^k x(n + N/2)] W_N^{nk}.                                      (7.2.9)

The next step is to separate the frequency terms X(k) into even and odd samples of k.
Since W_N^{2kn} = W_{N/2}^{kn}, Equation (7.2.9) can be written as

X(2k) = \sum_{n=0}^{N/2-1} [x(n) + x(n + N/2)] W_{N/2}^{kn}                                         (7.2.10a)

and

X(2k+1) = \sum_{n=0}^{N/2-1} [x(n) - x(n + N/2)] W_N^{n} W_{N/2}^{kn}                               (7.2.10b)

for 0 ≤ k ≤ N/2 - 1. Let x_1(n) = x(n) + x(n + N/2) and x_2(n) = x(n) - x(n + N/2) for
0 ≤ n ≤ N/2 - 1; the first decomposition of an N-point DFT into two N/2-point
DFTs is illustrated in Figure 7.8.
Again, the process of decomposition is continued until the last stage is made up of
two-point DFTs. The decomposition proceeds from left to right for the decimation-
in-frequency development, and the symmetry relationships are reversed from the
decimation-in-time algorithm. Note that the bit reversal occurs at the output instead
of the input, and the order of the output samples X(k) is rearranged according to the
bit-reversed index given in Table 7.1. The butterfly representation for the
decimation-in-frequency FFT algorithm is illustrated in Figure 7.9.
Figure 7.8  Decomposition of an N-point DFT into two N/2-point DFTs using the decimation-in-frequency algorithm, N = 8
Figure 7.9  Butterfly computation for the decimation-in-frequency FFT algorithm
The FFT algorithms introduced in the previous sections can be easily modified to
compute the inverse FFT (IFFT) efficiently. This is apparent from the similarity of the
DFT and IDFT definitions given in (7.1.3) and (7.1.6), respectively. Complex conjugating
(7.1.6), we have

x*(n) = (1/N) \sum_{k=0}^{N-1} X*(k) W_N^{kn},   n = 0, 1, ..., N-1.              (7.2.11)

Therefore an FFT algorithm can be used to compute the inverse DFT by first conjugating
the DFT coefficients X(k) to obtain X*(k), computing the DFT of X*(k) using an
FFT algorithm, scaling the results by 1/N to obtain x*(n), and then complex conjugating
x*(n) to obtain the output sequence x(n). If the signal is real valued, the final conjugation
operation is not required.
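A direct transcription of this recipe, reusing the illustrative cplx type and fft_radix2() sketch shown earlier (these are assumptions of that sketch, not a library API), might look like:

/* Sketch: inverse FFT computed with a forward FFT, following (7.2.11). */
void ifft_radix2(cplx *X, int N)
{
    for (int k = 0; k < N; k++)          /* conjugate the DFT coefficients */
        X[k].im = -X[k].im;

    fft_radix2(X, N);                    /* forward FFT of X*(k)           */

    for (int n = 0; n < N; n++) {        /* scale by 1/N and conjugate     */
        X[n].re =  X[n].re / N;
        X[n].im = -X[n].im / N;
    }
}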
All the FFT algorithms introduced in this chapter are based on two-input, two-output
butterfly computations and are classified as radix-2 complex FFT algorithms. It is
possible to use other radix values to develop FFT algorithms. However, such algorithms
do not work well when the length is a number with few factors. In addition, they are
more complicated than the radix-2 FFT algorithms, and the routines are not as readily
available for DSP processors. Radix-2 and radix-4 FFT algorithms are the most common,
although other radix values can be employed. Different radix butterflies can be combined
to form mixed-radix FFT algorithms.
MATLAB provides the function

y = fft(x);

to compute the DFT of the time sequence x(n) contained in the vector x. If x is a matrix,
y is the DFT of each column of the matrix. If the length of x is a power of 2, the fft
function employs a high-speed radix-2 FFT algorithm. Otherwise a slower mixed-radix
algorithm is employed.
An alternative way of using the fft function is

y = fft(x, N);

which performs an N-point FFT. If the length of x is less than N, the vector x is
padded with trailing zeros to length N. If the length of x is greater than N, the fft
function truncates the sequence x and performs the FFT of only the first N samples
of data.
The execution time of the fft function depends on the input data type and the
sequence length. If the input data is real-valued, it computes a real power-of-two FFT
algorithm that is faster than a complex FFT of the same length. As mentioned earlier,
the execution is fastest if the sequence length is exactly a power of 2. For this reason, it
is usually better to use a power-of-two FFT. For example, if the length of x is 511,
the function y = fft(x, 512) will be computed faster than fft(x), which performs a
511-point DFT.
It is important to note that vectors in MATLAB are indexed from 1 to N instead
of from 0 to N-1 as in the DFT and IDFT definitions. Therefore the relationship
between the actual frequency in Hz and the frequency index k given in (7.1.9) is
modified as

f_k = (k - 1) f_s / N,   k = 1, 2, ..., N.                                        (7.2.12)

The characteristics and usage of ifft are the same as those of fft.
7.3 Applications
FFT has a wide variety of applications in DSP. Spectral analysis often requires the
numerical computation of the frequency spectrum for a given signal with large sample
sets. The DFT is also used in the coding of waveforms for efficient transmission or
storage. In these cases the FFT may provide the only possible means for spectral
computation within the limits of time and computing cost. In this section, we will also
show how the FFT can be used to implement linear convolution for FIR filtering in a
computationally efficient manner.
As shown in (7.1.9), the DFT coefficients X(k) correspond to the discrete frequencies

f_k = k f_s / N,   k = 0, 1, ..., N-1.                                            (7.3.1)
If there is a signal component that falls between two adjacent frequency components in
the spectrum, it cannot be properly represented. Its energy will be shared between
neighboring bins and the nearby spectral amplitude will be distorted.
Figure 7.10  Effect of computational frequency resolution on sinewave spectra: (a) sinewave at 30 Hz, and (b) sinewave at 30.5 Hz
The following MATLAB script computes and plots the two spectra shown in Figure 7.10:

n = [0:127];
x1 = sin(2*pi*30*n/128);    X1 = abs(fft(x1));
x2 = sin(2*pi*30.5*n/128);  X2 = abs(fft(x2));
subplot(2,1,1), plot(n,X1), axis([0 64 0 70]),
title('(a) Sinewave at 30 Hz'),
subplot(2,1,2), plot(n,X2), axis([0 64 0 70]),
title('(b) Sinewave at 30.5 Hz'),
xlabel('Frequency, Hz');
A solution to this problem is to make the frequencies fk kfs =N more closely spaced,
thus matching the signal frequencies. This may be achieved by using a larger DFT size
N to increase the computational frequency resolution of the spectrum. If the number
of data samples is not sufficiently large, the sequence may be expanded by adding
additional zeros to the true data, thus increasing the length N. The added zeros serve
to increase the computational frequency resolution of the estimated spectrum to the
true spectrum without adding additional information. This process is simply the
interpolation of the spectral curve between adjacent frequency components. A real
improvement in frequency resolution can only be achieved if a longer data record is
available.
A number of problems have to be avoided in performing non-parametric spectral
analysis such as aliasing, finite data length, spectral leakage, and spectral smearing.
The effects of spectral leakage and smearing may be minimized by windowing the
data using a suitable window function. These issues will be discussed in the following
section.
The data that represents the signal of length N is effectively obtained by multiplying all
the sampled values in the interval by one, while all values outside this interval are
multiplied by zero. This is equivalent to multiplying the signal by a rectangular window
of width N and height 1, expressed as

w(n) = 1 for 0 ≤ n ≤ N-1, and w(n) = 0 otherwise.                                 (7.3.2)

In this case, the sampled data x_N(n) is obtained by multiplying the signal x(n) with the
window function w(n). That is,

x_N(n) = w(n) x(n) = x(n) for 0 ≤ n ≤ N-1, and x_N(n) = 0 otherwise.              (7.3.3)
The multiplication of x(n) by w(n) ensures that x_N(n) vanishes outside the window. As
the length of the window increases, the windowed signal x_N(n) becomes a better
approximation of x(n), and thus X(k) becomes a better approximation of the DTFT X(ω).
X_N(k) = W(k) ⊗ X(k) = \sum_{l=-N}^{N} W(k - l) X(l),                             (7.3.4)

where W(k) is the DFT of the window function w(n), and X(k) is the true DFT of the
signal x(n). Equation (7.3.4) shows that the computed spectrum consists of the true
spectrum X(k) convolved with the window function's spectrum W(k). This means that
when we apply a window to a signal, the frequency components of the signal are
corrupted in the frequency domain by shifted and scaled versions of the window's
spectrum.
As discussed in Section 5.3.3, the magnitude response of the rectangular window
defined in (7.3.2) can be expressed as

W(ω) = sin(ωN/2) / sin(ω/2).                                                      (7.3.5)
For a sampled sinusoid of frequency ω_0, the spectrum of the infinite-length signal over
the Nyquist interval consists of two line components at the frequencies ±ω_0. However,
the spectrum of the windowed sinusoid defined in (7.3.3) is

X_N(ω) = (1/2)[W(ω - ω_0) + W(ω + ω_0)].                                          (7.3.8)
Thus the power of the infinite-length signal that was concentrated at a single frequency
has been spread into the entire frequency range by the windowing operation. This
undesired effect is called spectral smearing. Thus windowing not only distorts the
spectrum due to leakage effects, but also reduces the spectral resolution. For example, a
similar analysis can be made when the signal consists of two sinusoidal
components. That is,
X_N(ω) = (1/2)[W(ω - ω_1) + W(ω + ω_1) + W(ω - ω_2) + W(ω + ω_2)].                (7.3.10)
Again, the sharp spectral lines are replaced with their smeared versions. From (7.3.5),
the spectrum W(ω) has its first zero at the frequency ω = 2π/N. If the frequency
separation Δω = |ω_1 - ω_2| of the two sinusoids satisfies

Δω < 2π/N,                                                                        (7.3.11)

the mainlobes of the two window functions W(ω - ω_1) and W(ω - ω_2) overlap, and the
two spectral lines in X_N(ω) are not distinguishable. This undesired effect starts when Δω
is approximately equal to the mainlobe width 2π/N. Therefore the frequency resolution
of the windowed spectrum is limited by the window's mainlobe width.
To guarantee that two sinusoids appear as two distinct ones, their frequency separation
must satisfy the condition

Δω > 2π/N                                                                         (7.3.12a)

in radians per sample, or

Δf > f_s/N                                                                        (7.3.12b)

in Hz. Thus the minimum DFT length to achieve a desired frequency resolution is given as

N > f_s/Δf = 2π/Δω.                                                               (7.3.13)

In summary, the mainlobe width determines the frequency resolution of the windowed
spectrum. The sidelobes determine the amount of undesired frequency leakage. The
optimum window for spectral analysis must have a narrow mainlobe and small sidelobes.
Although adding to the record length by zero padding increases the FFT size and thereby
results in a smaller Δf, one must be cautious to have a sufficient record length to support
this resolution.
In Section 5.3.4, we used windows to smooth out the truncated impulse response of
an ideal filter when designing an FIR filter. In this section, we show that those window
functions can also be used to modify the spectrum estimated by the DFT. If a window
function w(n) is applied to the input signal, the DFT outputs are given by

X(k) = \sum_{n=0}^{N-1} w(n) x(n) W_N^{kn},   k = 0, 1, ..., N-1.                 (7.3.14)
The rectangular window has the narrowest mainlobe width, thus providing the best
spectral resolution. However, its high-level sidelobes produce undesired spectral leak-
age. The amount of leakage can be substantially reduced at the cost of decreased
spectral resolution by using appropriate non-rectangular window functions introduced
in Section 5.3.4.
As discussed before, frequency resolution is directly related to the window's mainlobe
width. A narrow mainlobe allows closely spaced frequency components to be identified,
while a wide mainlobe causes nearby frequency components to blend together. For a
given window length N, windows such as the rectangular, Hanning, and Hamming
windows have relatively narrow mainlobes compared with the Blackman or Kaiser
windows. Unfortunately, the first three windows have relatively high sidelobes and thus
more spectral leakage. There is a trade-off between frequency resolution and spectral
leakage in choosing a window for a given application.
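As a small illustration of applying a window to a block of data before the transform, as in (7.3.14), the following sketch multiplies the block by a Hamming window; the buffer and function names are illustrative assumptions, and the window formula is the standard one.

/* Sketch: applying a Hamming window to a block of N samples in place. */
#include <math.h>

void apply_hamming(double *x, int N)
{
    const double PI = 3.14159265358979323846;

    for (int n = 0; n < N; n++) {
        double w = 0.54 - 0.46 * cos(2.0 * PI * n / (N - 1));   /* w(n)      */
        x[n] *= w;                                              /* w(n)x(n)  */
    }
}

The windowed block is then passed to the FFT; the lower sidelobes of the Hamming window reduce leakage at the cost of a somewhat wider mainlobe than the rectangular window.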
Example 7.10: Consider the sinewave used in Example 7.9. Using the Kaiser
window defined in (5.3.26) with L = 128 and β = 8.96, the magnitude spectrum
is shown in Figure 7.11, produced by the MATLAB script Exam7_10.m included in the
software package.
Figure 7.11  Effect of the Kaiser window function for reducing spectral leakage: (a) rectangular window, and (b) Kaiser window
An effective method for decreasing the mainlobe width is by increasing the window
length. For a given window, increasing the length of the window reduces the width of
the mainlobe, which leads to better frequency resolution. However, if the signal changes
frequency content over time, the window cannot be too long in order to provide a
meaningful spectrum. In addition, a longer window implies using more data, so there is
a trade-off between frequency resolution and the cost of implementation. If the number
of available signal samples is less than the required length, we can use the zero padding
technique. Note that the zeros are appended after windowing is performed.
Finite-energy signals possess a Fourier transform and are characterized in the
frequency domain by their energy density spectrum. Consider a sequence x(n) of length
N whose DFT is X(k); Parseval's theorem can be expressed as

E = \sum_{n=0}^{N-1} |x(n)|^2 = (1/N) \sum_{k=0}^{N-1} |X(k)|^2.                  (7.3.15)

The term |X(k)|^2 is a measure of the power in the signal at the frequency f_k defined in
(7.3.1). The DFT magnitude spectrum |X(k)| is defined in (7.1.11). Squaring the magnitude
of the DFT coefficients produces a power spectrum, which is also called the periodogram.
As discussed in Section 3.3, stationary random processes do not have finite energy
and thus do not possess a Fourier transform. Such signals have a finite average power and
are characterized by the power density spectrum (PDS) defined as

P(k) = (1/N) |X(k)|^2 = (1/N) X(k) X*(k),                                         (7.3.16)

which is also commonly referred to as the power spectral density, or simply the power
spectrum.
The PDS is a very useful concept in the analysis of random signals since it provides a
meaningful measure for the distribution of the average power in such signals. There are
many different techniques developed for estimating the PDS. Since the periodogram is
not a consistent estimate of the true PDS, the periodogram averaging method may be
used to reduce the statistical variation of the computed spectra. Given a signal vector xn
that consists of N samples of the digital signal x(n), a crude estimate of the PDS using
MATLAB is

pxn = abs(fft(xn, 1024)).^2/N;
In practice, we only have a finite-length sequence whose PDS is desired. One way of
computing the PDS is to decompose x(n) into M segments, x_m(n), of N samples each.
These signal segments are spaced N/2 samples apart, i.e., there is 50 percent overlap
between successive segments, as illustrated in Figure 7.12.
Figure 7.12  Segmentation of a long data sequence into M segments of N samples with 50 percent overlap
The periodogram of each windowed segment is

P_m(k) = (1/(N P_w)) |X_m'(k)|^2,   0 ≤ k ≤ N-1,                                  (7.3.18)

where

P_w = (1/N) \sum_{n=0}^{N-1} w^2(n)                                               (7.3.19)

is a normalization factor for the average power in the window sequence w(n). Finally,
the desired PDS is the average of these periodograms. That is,

P(k) = (1/M) \sum_{m=0}^{M-1} P_m(k) = (1/(M N P_w)) \sum_{m=0}^{M-1} |X_m'(k)|^2,   0 ≤ k ≤ N-1.    (7.3.20)

Therefore the PDS estimate given in (7.3.20) is a weighted sum of the periodograms of
each of the individual overlapped segments. The 50 percent overlap between successive
segments helps to improve certain statistical properties of this estimate.
The Signal Processing Toolbox provides the function psd to average the periodograms
of windowed segments of a signal. This MATLAB function estimates the PDS of the
signal given in the vector x using the following statement:

P = psd(x, nfft, Fs, window, noverlap);

where nfft specifies the FFT length, Fs is the sampling frequency, window specifies
the selected window function, and noverlap is the number of samples by which the
segments overlap.
Figure 7.13  Fast convolution: the FFTs of x(n) and h(n) give X(k) and H(k); their product Y(k) is inverse transformed to obtain the output y(n)
Overlap-save technique
1. Perform the N-point FFT of the zero-padded impulse response sequence

   h'(n) = h(n) for n = 0, 1, ..., L-1, and h'(n) = 0 for n = L, L+1, ..., N-1,   (7.3.21)

   and store the result H'(k).
2. Select N signal samples x_m(n) (where m is the segment index) from the input
   sequence x(n) based on the overlap illustrated in Figure 7.14, and then use the N-point
   FFT to obtain X_m(k).

Figure 7.14  Overlapping input sections x_m(n) and the corresponding output sections y_m(n) of the overlap-save method; the first L samples of each output section are discarded
3. Multiply the stored H'(k) (obtained in step 1) by the X_m(k) of segment m (obtained
   in step 2) to get

   Y_m(k) = H'(k) X_m(k),   k = 0, 1, ..., N-1.                                   (7.3.22)

4. Perform the N-point IFFT of Y_m(k) to obtain the output segment y_m(n).

5. Discard the first L samples from each successive IFFT output, since they are
   circularly wrapped and superimposed as discussed in Section 7.1.3. The resulting
   segments of (N - L) samples are concatenated to produce y(n).
Overlap-add technique
In the overlap-add process, the input sequence x(n) is divided into non-overlapping
segments of length (N - L). Each segment is zero-padded to produce x_m(n) of length N.
Following steps 2, 3, and 4 of the overlap-save method, we obtain the N-point segments
y_m(n). Since convolution is a linear operation, the output sequence y(n) is simply the
summation of all segments, expressed as

y(n) = \sum_{m} y_m(n).                                                           (7.3.23)

Because each output segment y_m(n) overlaps the following segment y_{m+1}(n) by L
samples, (7.3.23) implies the actual addition of the last L samples in segment y_m(n)
with the first L samples in segment y_{m+1}(n).
This efficient FIR filtering using the overlap-add technique is implemented by the
MATLAB function

y = fftfilt(h, x);

or

y = fftfilt(h, x, N);

The fftfilt function filters the input signal in the vector x with the FIR filter
described by the coefficient vector h. The function y = fftfilt(h, x) chooses an
FFT length and a data block length that automatically guarantee efficient execution time.
However, we can specify the FFT length N by using y = fftfilt(h, x, N).
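A bare-bones version of the overlap-add bookkeeping, reusing the illustrative cplx type, fft_radix2(), and ifft_radix2() sketches from Section 7.2 (so this is a sketch under those assumptions, not the fftfilt implementation), might be structured as follows:

/* Sketch: FIR filtering by overlap-add, per (7.3.23).  N is the FFT size,
 * L the filter length; each block carries N-L new input samples.  For
 * simplicity x_len is assumed to be a multiple of N-L, N <= 1024, and
 * the caller provides y with x_len + L - 1 samples.
 */
#include <string.h>

void fir_overlap_add(const double *x, int x_len,
                     const double *h, int L,
                     double *y, int N)
{
    int  step = N - L;                       /* new samples per block     */
    cplx H[1024], X[1024];

    for (int n = 0; n < N; n++) {            /* zero-padded h(n), as in   */
        H[n].re = (n < L) ? h[n] : 0.0;      /* (7.3.21)                  */
        H[n].im = 0.0;
    }
    fft_radix2(H, N);

    memset(y, 0, sizeof(double) * (x_len + L - 1));

    for (int m = 0; m * step < x_len; m++) {
        const double *xm = x + m * step;

        for (int n = 0; n < N; n++) {        /* zero-padded input block   */
            X[n].re = (n < step) ? xm[n] : 0.0;
            X[n].im = 0.0;
        }
        fft_radix2(X, N);

        for (int k = 0; k < N; k++) {        /* Y_m(k) = H(k) X_m(k)      */
            double re = X[k].re * H[k].re - X[k].im * H[k].im;
            double im = X[k].re * H[k].im + X[k].im * H[k].re;
            X[k].re = re;
            X[k].im = im;
        }
        ifft_radix2(X, N);

        for (int n = 0; n < N; n++)          /* add overlapping tails     */
            y[m * step + n] += X[n].re;
    }
}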
7.3.5 Spectrogram
The PDS introduced in Section 7.3.3 is a powerful technique to show how the power of
the signal is distributed among the various frequency components. However, this
method will result in a distorted (blurred) spectrum when the signal is non-stationary.
For a time-varying signal, it is more useful to compute a local spectrum that measures
spectral contents over a short time interval.
In this section, we use the sliding window defined in (7.3.17) to break up a long
sequence into several short finite-length blocks of N samples, x_m'(n), and then perform
the FFT on each short segment to obtain the time-dependent frequency spectrum

X_m(k) = \sum_{n=0}^{N-1} x_m'(n) W_N^{kn},   k = 0, 1, ..., N-1.                 (7.3.24)
This process is repeated for the next block of N samples, as illustrated in Figure 7.12.
This technique is also called the short-term Fourier transform, since X_m(k) is just the
DFT spectrum of the short segment of x(n) that lies inside the sliding window w(n).
This form of time-dependent Fourier transform has several applications in speech,
sonar, and radar signal processing.
Equation (7.3.24) shows that X_m(k) is a two-dimensional sequence. The index k
represents frequency, as defined in (7.3.1), and the block index m represents time.
Since the result is a function of both time and frequency, a three-dimensional graphical
display is needed. This is done by plotting |X_m(k)| as a function of both k and m using
gray-scale (or color) images. The resulting three-dimensional graphic is called the
spectrogram. It uses the x-axis to represent time and the y-axis to represent frequency.
The gray level (or color) at the point (m, k) is proportional to |X_m(k)|. Large values are
black, and small ones are white.
The Signal Processing Toolbox provides a function, specgram, to compute the spectrogram. This MATLAB function has the form

B = specgram(x, nfft, Fs, window, noverlap)

where B is a matrix containing the complex spectrogram values X_m(k), and the other arguments are defined as in the function psd. It is common to pick the overlap to be around 50 percent, as shown in Figure 7.12. The specgram function with no output arguments displays the scaled logarithm of the spectrogram in the current graphics window. See the Signal Processing Toolbox for Use with MATLAB [7] for details.
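To make (7.3.24) concrete, the following self-contained C sketch computes the magnitude spectrum |X_m(k)| of one windowed segment with a direct DFT. This is only an illustration of the definition (the function name and the use of a slow direct DFT are our assumptions); in practice the FFT routines developed in this chapter, or MATLAB's specgram, would be used.

#include <math.h>

#define PI 3.1415926535897

/* Compute mag[k] = |X_m(k)|, k = 0,...,N-1, for one N-sample segment seg[]
   that has already been extracted and windowed by w(n).  A direct DFT is
   used here only for clarity; an FFT is used in practice.                 */
void segment_spectrum(const float *seg, float *mag, int N)
{
    int k, n;
    for (k = 0; k < N; k++)
    {
        float re = 0.0f, im = 0.0f;
        for (n = 0; n < N; n++)
        {
            re += seg[n] * (float)cos(2.0 * PI * k * n / N);
            im -= seg[n] * (float)sin(2.0 * PI * k * n / N);
        }
        mag[k] = (float)sqrt(re * re + im * im);
    }
}

Calling this routine for successive, roughly 50-percent overlapped segments and stacking the resulting columns produces the |X_m(k)| image described above.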
As illustrated in Figure 7.5, the radix-2 FFT algorithm takes two input samples at a time from memory, performs the butterfly computation, and returns the resulting numbers to the same memory locations. This process is repeated (N/2) log_2 N times in the computation of an N-point FFT. The FFT routines accept complex-valued inputs; therefore the number of memory locations required is 2N. Complex-valued signals are quite common in communications applications such as modems. However, most signals, such as speech, are real-valued. To use the available FFT routine, we have to set the imaginary part of each sample to 0 for real input data. Note that each complex multiplication is of the form

(a + jb)(c + jd) = (ac - bd) + j(ad + bc),

and therefore requires four real multiplications and two real additions.
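As a concrete illustration, the complex multiplication can be coded in C as follows; the complex structure with re/im members is an assumption chosen to match the fcomplex.h header used by the listings later in this section.

typedef struct { float re, im; } complex;   /* assumed layout of fcomplex.h */

/* (a + jb)(c + jd) = (ac - bd) + j(ad + bc):
   four real multiplications and two real additions/subtractions. */
complex cmul(complex x, complex w)
{
    complex y;
    y.re = x.re * w.re - x.im * w.im;
    y.im = x.re * w.im + x.im * w.re;
    return y;
}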
The number of multiplications and the storage requirements can be reduced if the signal has special properties. For example, if x(n) is real, only the N/2 + 1 samples from X(0) to X(N/2) need to be computed, as shown by the complex-conjugate property (7.1.15). In addition, if x(n) is an even function of n, only the real part of X(k) is non-zero. If x(n) is odd, only the imaginary part is non-zero.
The computation of the twiddle factors W_N^{kn} usually takes longer than the computation of the complex multiplications. In most FFT programs on general-purpose computers, the
sine and cosine calculations defined in (7.1.4) are embedded in the program for con-
venience. If N is fixed, it is preferable to tabulate the values of twiddle factors so that
they can be looked up during the computation of FFT algorithms. When the FFT is
performed repeatedly with N being constant, the computation of twiddle factors need
not be repeated. In addition, in an efficient implementation of FFT algorithm on a DSP
processor, the twiddle factors are computed once and then stored in a table during the
programming stage.
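As an illustration of such a lookup table, a minimal floating-point generator is sketched below. Note that the tables actually used by the routines in this chapter are created by the functions supplied with the experiment software (e.g., Table 7.2 and w_table.c), whose layouts differ; the version here simply stores W_N^k = e^{-j2\pi k/N} for the first N/2 values of k.

#include <math.h>
#include "fcomplex.h"   /* complex type with re/im members (assumed) */

#define pi 3.1415926535897

/* Precompute W[k] = exp(-j*2*pi*k/N) for k = 0, ..., N/2 - 1.
   Only half of the twiddle factors are needed by a radix-2 FFT. */
void make_twiddle(complex *W, unsigned int N)
{
    unsigned int k;
    for (k = 0; k < N / 2; k++)
    {
        W[k].re = (float)cos(2.0 * pi * (double)k / (double)N);
        W[k].im = (float)(-sin(2.0 * pi * (double)k / (double)N));
    }
}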
There are other implementation issues such as indexing, bit reversal, and parallelism
in computations. The complexity of FFT algorithms is usually measured by the required
number of arithmetic operations (multiplications and additions). However, in practical
implementations on DSP chips, the architecture, instruction set, data structures, and
memory organizations of the processors are critical factors. Modern DSP chips such as
the TMS320C55x usually provide single-cycle multiplication-and-accumulation oper-
ation, bit-reversal addressing, and a high degree of instruction parallelism to efficiently
implement FFT algorithms. These issues will be discussed further in Section 7.5.
Since FFT is often employed in DSP hardware for real-time applications, it is important
to analyze the finite-precision effects in FFT computations. We assume that the FFT
computations are being carried out using fixed-point arithmetic. With clever scaling and
checking for overflow, the most critical error in the computation is due to roundoff
errors. Without loss of generality, we analyze the decimation-in-time radix-2 FFT
algorithm introduced in Section 7.2.1.
From the flow-graph of the FFT algorithm shown in Figure 7.6, X(k) are computed
by a series of butterfly computations with a single complex multiplication per butterfly
network. Note that some of the butterfly computations require multiplications by 1
(such as 2-point FFT in the first stage) that do not require multiplication in practical
implementation, thus avoiding roundoff errors.
Figure 7.6 shows that the computation of an N-point FFT requires M = log_2 N stages. There are N/2 butterflies in the first stage, N/4 in the second stage, and so on. Thus the total number of butterflies required to produce an output sample is

\frac{N}{2} + \frac{N}{4} + \cdots + 2 + 1 = 2^{M-1} + 2^{M-2} + \cdots + 2 + 1 = 2^{M-1} \sum_{m=0}^{M-1} \left(\frac{1}{2}\right)^m = 2^{M-1}\left[2 - \left(\frac{1}{2}\right)^{M-1}\right] = 2^M - 1 = N - 1.    (7.4.1)
The quantization errors introduced at the mth stage appear at the output after propagating through the subsequent stages, while being multiplied by the twiddle factors at each of those stages. Since the magnitude of each twiddle factor is unity, the variances of the quantization errors do not change while propagating to the output. If we
assume that the quantization errors in each butterfly are uncorrelated with the errors in other butterflies, the total number of roundoff error sources contributing to each output is 4(N − 1). Therefore the variance of the output roundoff error is

\sigma_e^2 = 4(N - 1)\,\frac{2^{-2B}}{12} \approx \frac{N\,2^{-2B}}{3}.    (7.4.2)
In addition, the input signal must be scaled such that

|x(n)| < \frac{1}{N}    (7.4.3)

to prevent overflow at the output, because |e^{-j(2\pi/N)kn}| = 1. For example, in a 1024-point FFT, the input data must be shifted right by 10 bits. If the original data is 16-bit, the effective wordlength after scaling is reduced to only 6 bits. This worst-case scaling substantially reduces the resolution of the FFT results.
Instead of scaling the input samples by 1/N at the beginning, we can scale the signals
at each stage since the FFT algorithm consists of a sequence of stages. Figure 7.5 shows
that we can avoid overflow within the FFT by scaling the input at each stage by 1/2
(right shift one bit in a fixed-point hardware) because the outputs of each butterfly
involve the addition of two numbers. That is, we shift right the input by 1 bit, perform
the first stage of FFT, shift right that result by 1 bit, perform the second stage of FFT, and
so on. This unconditional scaling process does not affect the signal level at the output of
the FFT, but it significantly reduces the variance of the quantization errors at the output.
Thus it provides better accuracy than scaling the input by 1/N at the beginning.
An alternative conditional scaling method examines the results of each FFT stage to
determine whether all the results of that stage should be scaled. If all the results in a
particular stage have magnitude less than 1, no scaling is necessary at that stage.
Otherwise, all the inputs of that stage have to be scaled by 1/2. The conditional scaling
technique achieves much better accuracy since we may scale less often than the uncon-
ditional scaling method. However, this conditional scaling method increases software
complexity and may require longer execution time.
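One simple variant of conditional scaling is sketched below in C. It tests the data before a stage against a conservative Q15 threshold, rather than re-running a stage after an overflow has been detected; the icomplex layout (16-bit re/im members) and the threshold are assumptions for illustration only.

#include <stdlib.h>

typedef struct { short re, im; } icomplex;   /* assumed 16-bit complex type */

/* Scale the N-point buffer by 1/2 only if the next butterfly stage could
   overflow.  Butterfly additions can double a magnitude, so any sample with
   magnitude >= 0.5 (0x4000 in Q15) triggers the scaling. */
int cond_scale(icomplex *X, unsigned int N)
{
    unsigned int i;
    int scaled = 0;

    for (i = 0; i < N; i++)
    {
        if (abs(X[i].re) >= 0x4000 || abs(X[i].im) >= 0x4000)
        {
            scaled = 1;
            break;
        }
    }
    if (scaled)
    {
        for (i = 0; i < N; i++)
        {
            X[i].re >>= 1;   /* scale by 1/2 with an arithmetic right shift */
            X[i].im >>= 1;
        }
    }
    return scaled;           /* caller can count the shifts applied         */
}

Counting how many stages were actually scaled allows the final spectrum to be rescaled afterwards.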
/*
Example to test floating-point complex FFT
*/
#include <math.h>
#include "fcomplex.h" /* Floating-point complex.h header file */
#include "input7_f.dat" /* Floating-point testing data */
extern void fft(complex *, unsigned int, complex *, unsigned int);
extern void bit_rev(complex *X, int M);
#define N 128 /* FFT size */
#define EXP 7 /* EXP = log2(N) */
#define pi 3.1415926535897
The complex radix-2 FFT program listed in Table 7.3 computes the complex decima-
tion-in-time FFT algorithm as shown in Figure 7.5. To prevent the results from over-
flowing, the intermediate results are scaled down in each stage as described in Section
7.4.2. The radix-2 FFT function contains two complex arguments and two unsigned
integer arguments. They are the complex input sample, X [N], the power of the radix-2
FFT, EXP, the initial complex twiddle-factor table W [EXP], and the scaling flag SCALE.
The FFT is performed in place, that is, the complex input array is overwritten by
the output array. The initial twiddle factors are created by the C function listed in
Table 7.2.
As discussed in Section 7.2.1, the data used for FFT need to be placed in the bit-reversal
order. An N-point FFT bit-reversal example is given in Table 7.1. Table 7.4 illustrates the C function that performs the bit-reversal addressing task.
/*
fft_float.c Floating-point complex radix-2 decimation-in-time FFT
Perform in-place FFT, the output overwrite the input buffer
*/
#include "fcomplex.h" /* Floating-point complex.h header file */
void fft(complex *X, unsigned int EXP, complex *W, unsigned int SCALE)
{
complex temp; /* Temporary storage of complex variable */
complex U; /* Twiddle factor W^k */
unsigned int i,j;
unsigned int id; /* Index for lower point in butterfly */
unsigned int N = 1 << EXP; /* Number of points for FFT */
unsigned int L; /* FFT stage */
unsigned int LE; /* Number of points in sub FFT at stage
L and offset to next FFT in stage */
unsigned int LE1; /* Number of butterflies in one FFT at
stage L. Also is offset to lower
point in butterfly at stage L */
float scale;
scale = 0.5;
if (SCALE == 0)
scale = 1.0;
for(L = 1; L <= EXP; L++) /* FFT butterfly */
{
LE = 1 << L; /* LE = 2^L points of sub-DFT */
LE1 = LE >> 1; /* Number of butterflies in sub-DFT */
U.re = 1.0;
U.im = 0.;
/*
fbit_rev.c Arrange input samples in bit-reversal order
The index j is the bit-reversal of i
*/
#include "fcomplex.h" /* Floating-point complex.h header file */
void bit_rev(complex *X, unsigned int EXP)
{
unsigned int i, j, k;
unsigned int N = 1 << EXP; /* Number of points for FFT */
unsigned int N2 = N >> 1;
complex temp; /* Temp storage of the complex variable */
for(j = 0, i = 1; i < N - 1; i++)
{
k = N2;
while(k <= j)
{
j -= k;
k >>= 1;
}
j += k;
if(i < j)
{
temp = X[j];
X[j] = X[i];
X[i] = temp;
}
}
}
In the program, X[] is the complex sample buffer and U is the complex twiddle factor. The scaling is done by right-shifting 1 bit instead of multiplying by 0.5.
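For illustration, a single Q15 butterfly with this unconditional 1-bit right-shift scaling might look like the following C sketch; the icomplex.h layout (16-bit re/im members) and the rounding behaviour are assumptions, and the optimized routines in the experiment package are organized differently.

#include "icomplex.h"   /* complex type with 16-bit re/im members (assumed) */

/* One radix-2 DIT butterfly in Q15 with 1/2 scaling:
   a <- (a + W*b)/2,  b <- (a - W*b)/2                                      */
void butterfly_q15(icomplex *a, icomplex *b, icomplex W)
{
    long tr, ti;
    short ar, ai;

    tr = ((long)W.re * b->re - (long)W.im * b->im) >> 15;  /* Re{W*b} in Q15 */
    ti = ((long)W.re * b->im + (long)W.im * b->re) >> 15;  /* Im{W*b} in Q15 */

    ar = a->re;
    ai = a->im;

    a->re = (short)((ar + tr) >> 1);   /* scale each output by 1/2 */
    a->im = (short)((ai + ti) >> 1);
    b->re = (short)((ar - tr) >> 1);
    b->im = (short)((ai - ti) >> 1);
}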
2. Create the project exp7a using CCS. Add the command file exp7.cmd, the
functions epx7a.c, fft_a.c, and ibit_rev.c, and the header file
icomplex.h from the software package into the project.
3. Build the fixed-point FFT project and verify the results. Compare the results with the floating-point complex radix-2 FFT results obtained by running the floating-point version on the PC.
4. The FFT output samples are squared and placed in a data buffer named spectrum[]. Use CCS to plot the results by displaying the spectrum[] and re1[] buffers.
5. Profile the DSP run-time clock cycles for 128-point and 1024-point FFTs. Record
the memory usage of the fixed-point functions bit_rev() and fft().
Although using intrinsics can improve the DSP performance, the assembly language
implementation has been proven to have the fastest execution speed and memory
efficiency for most applications, especially for computationally intensive algorithms
such as FFT. The development time for assembly code, however, will be much
longer than that of C code. In addition, the maintenance and upgrade of assembly
code are usually more difficult. In this experiment, we will use C55x assembly routines
for computing the same radix-2 FFT algorithm as the fixed-point C function used
in Experiment 7A. The assembly FFT routine is listed in Table 7.5. This routine is
written based on the C function used for Experiment 7A, and it follows the C55x
C calling convention. For readability, the assembly code has been written to mimic
the C function of Experiment 7A closely. It optimizes the memory usage but not
the run-time efficiency. By unrolling the loops and taking advantage of the FFT butterfly characteristics, the FFT execution speed can be further improved at the expense of memory space; see the exercise problems at the end of this chapter.
In fft.asm, the local variables are defined as a structure using the stack-relative addressing mode when the assembly routine is called. The last memory location contains the return address of the caller function. Since the status registers ST1 and ST3 will be modified, we use two stack locations to store the contents of these status registers at entry. The status registers will be restored upon returning to the caller function. The complex temporary variable is stored in two consecutive memory locations by using a bracketed number to indicate how many memory locations of the integer data type are allocated.
The FFT implementation is carried out in three nested loops. The butterfly computa-
tion is implemented in the inner loop and the group loop is in the middle, while the
stages are managed by the outer loop. Among these three loops, the butterfly loop is
repeated most often. We use the local block repeat instruction, rptblocal, for the
butterfly loop and the middle loop to minimize the loop overhead. We also use parallel
instructions, modulo addressing, and dual memory access instructions to further
improve the efficiency of butterfly computation. By limiting the size of the loop, we
can place the middle loop inside the DSP instruction buffer queue (IBQ) as well. The
FFT computation is improved since the two inner loops are only fetched once from the
program memory each time we compute the groups and butterflies. The twiddle-factor
table is pre-calculated during the initialization phase. The calculation of the twiddle
factors can be implemented as follows:
for(i = 0, l = 1; l <= EXP; l++)
{
SL = 1 << l; /* SL = 2^l points of sub-FFT */
SL1 = SL >> 1; /* # of twiddle factors in sub-FFT */
for(j = 0; j < SL1; j++)
{
W.re = (int)((0x7fff*cos(j*pi/SL1)) + 0.5);
W.im = (int)((0x7fff*sin(j*pi/SL1)) + 0.5);
U[i++] = W;
}
}
1. Create the project exp7b, add files exp7.cmd, exp7b.c, w_table.c, fft.asm,
and bit_rev.asm from the software package into the project.
2. Build and verify the FFT function, and compare the results with the results obtained
from Experiment 7A. Make sure that the scale flag for the FFT routine is set to 1.
3. Profile the FFT run-time clock cycles and its memory usage again and compare
these results with those obtained in Experiment 7A.
As discussed in Section 7.2.3, the inverse DFT defined by (7.2.11) is similar to the DFT
defined in (7.1.6). Thus the FFT routine developed in Experiment 7B can be modified
for computing the inverse FFT. Two simple changes are needed in order to use the same
FFT routine for the IFFT calculation. First, the conjugating twiddle factors imply the
sign change of the imaginary portion of the complex samples. That is, X[i].im =
-X[i].im. Second, the normalization by 1/N is handled in the FFT routine by
setting the scale flag to 0. Table 7.7 shows the example of computing both the FFT
and IFFT.
Go over the following steps for Experiment 7C:
1. Create the project epx7c and include the files exp7.cmd, exp7c.c, w_table.c,
fft.asm, and bit_rev.asm from the software package into the project.
2. Build and view the IFFT results by plotting and comparing the input array re1[] and the IFFT output array re2[]. Make sure that the scale flag is set to 1 (one) for the FFT calculation and to 0 (zero) for the IFFT calculation.
As discussed in Section 7.3.4, fast convolution using the FFT/IFFT is the most efficient technique for FIR filtering of long time-domain sequences, such as in high-fidelity digital audio systems, and for FIR filtering in the frequency domain, such as in xDSL modems. The fast convolution algorithm is shown in Figure 7.13. There are two basic methods for FFT convolution, as mentioned in Section 7.3.4. This experiment uses the overlap-add technique, which involves the following steps:
- Pad M = N − L zeros to the FIR filter impulse response of length L, where N > L, and process the sequence using an N-point FFT. Store the results in the complex buffer H[N].
Table 7.7 Perform FFT and IFFT using the same routine
/* Start FFT */
bit_rev(X,EXP); /* Arrange X[] in bit-reversal order */
fft(X,EXP,U,1); /* Perform FFT */
/* Inverse FFT */
for(i = 0; i < N; i++) /* Change the sign of imaginary part */
{
X[i].im = -X[i].im;
}
bit_rev(X,EXP); /* Arrange sample in bit-reversal order */
fft(X,EXP,U,0); /* Perform IFFT */
- Segment the input sequence of length M with L − 1 zeros padded at the end.

- Process each segment of data samples with an N-point FFT to obtain the complex array X[N].

- Multiply X[N] by the stored frequency-domain coefficients H[N] and perform an N-point IFFT on the product to obtain the time-domain output segment.

- Add the first L samples that are overlapped with the previous segment to form the output. All resulting segments are combined to obtain y(n).
The C program implementation of fast convolution using FFT and IFFT is listed in
Table 7.8, where we use the same data file and FIR coefficients as the experiments given
in Chapter 5. In general, for low- to median-order FIR filters, the direct FIR routines
introduced in Chapter 5 are more efficient. Experiment 5A shows that an FIR filter can
be implemented as one clock cycle per filter tap, while Experiments 5B and 5C complete
two taps per cycle. However, the computational complexity of those routines increases linearly with the number of coefficients. When the application requires high-order
FIR filters, the computation requirements can be reduced by using fast convolution as
shown in this experiment.
/* Initialization */
for(i = 0; i < L - 1; i++) /* Initialize overlap buffer */
OVRLAP[i] = 0;
for(i = 0; i < L; i++) /* Copy filter coefficients to buffer */
{
X[i].re = LP_h[i];
X[i].im = 0;
}
for( ; i < N; i++) /* Pad zeros to the buffer */
{
X[i].re = 0;
X[i].im = 0;
}
w_table(U,EXP); /* Create twiddle-factor table */
bit_rev(X,EXP); /* Bit-reversal arrangement of coefficients */
fft(X,EXP,U,1); /* FFT of filter coefficients */
for(i = 0; i < N; i++) /* Save frequency-domain coefficients */
{
H[i].re = X[i].re << EXP;
H[i].im = X[i].im << EXP;
}
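The per-segment processing that follows this initialization (the remainder of Table 7.8 is not reproduced in this excerpt) can be sketched roughly as follows. The loop is written in floating-point style for clarity, and the names input, output, temp, M, and m are illustrative assumptions; only fft(), bit_rev(), X[], H[], U[], and OVRLAP[] follow the listings above, and the Q15 scaling details and the overlap bookkeeping performed by olap_add.asm are omitted.

/* Hypothetical sketch of processing segment m of M = N - L input samples */
for (i = 0; i < M; i++)          /* load the segment, zero imaginary part  */
{
    X[i].re = input[m*M + i];
    X[i].im = 0;
}
for ( ; i < N; i++)              /* zero-pad the segment to length N       */
{
    X[i].re = 0;
    X[i].im = 0;
}
bit_rev(X, EXP);
fft(X, EXP, U, 1);               /* FFT of the segment                     */
for (i = 0; i < N; i++)          /* frequency-domain filtering X[k]*H[k]   */
{
    temp.re = X[i].re*H[i].re - X[i].im*H[i].im;
    temp.im = X[i].re*H[i].im + X[i].im*H[i].re;
    X[i].re = temp.re;
    X[i].im = -temp.im;          /* conjugate so the next FFT acts as IFFT */
}
bit_rev(X, EXP);
fft(X, EXP, U, 0);               /* IFFT of the filtered segment           */
/* The real parts of the first samples are then added to the tail saved in
   OVRLAP[] from the previous segment, the new tail is saved for the next
   segment, and the finished output samples are written to output[].       */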
1. Create the project exp7d, add the files exp7.cmd, exp7d.c, w_table.c,
fft.asm, bit_rev.asm, freqflt.asm, and olap_add.asm from the software
package to the project.
2. Build and verify the fast convolution results, and compare the results with the
results obtained in Experiment 5A.
3. Profile the run-time clock cycles of the fast convolution using FFT/IFFT
for various FIR filter lengths by using different filter coefficient files
firlp8.dat, firlp16.dat, firlp32.dat, firlp64.dat, firlp128.dat,
firlp256.dat, and firlp512.dat. These files are included in the experiment
software package.
References
[1] D. J. DeFatta, J. G. Lucas, and W. S. Hodgkiss, Digital Signal Processing: A System Design
Approach, New York: Wiley, 1988.
[2] N. Ahmed and T. Natarajan, Discrete-Time Signals and Systems, Englewood Cliffs, NJ: Prentice-
Hall, 1983.
[3] V. K. Ingle and J. G. Proakis, Digital Signal Processing Using MATLAB V.4, Boston: PWS
Publishing, 1997.
[4] L. B. Jackson, Digital Filters and Signal Processing, 2nd Ed., Boston: Kluwer Academic, 1989.
[5] MATLAB User's Guide, Math Works, 1992.
[6] MATLAB Reference Guide, Math Works, 1992.
[7] Signal Processing Toolbox for Use with MATLAB, Math Works, 1994.
[8] A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing, Englewood Cliffs, NJ:
Prentice-Hall, 1989.
[9] S. J. Orfanidis, Introduction to Signal Processing, Englewood Cliffs, NJ: Prentice-Hall, 1996.
[10] J. G. Proakis and D. G. Manolakis, Digital Signal Processing - Principles, Algorithms, and Applications, 3rd Ed., Englewood Cliffs, NJ: Prentice-Hall, 1996.
[11] A. Bateman and W. Yates, Digital Signal Processing Design, New York: Computer Science Press, 1989.
[12] S. D. Stearns and D. R. Hush, Digital Signal Analysis, 2nd Ed., Englewood Cliffs, NJ: Prentice-
Hall, 1990.
Exercises
Part A
1. Compute the four-point DFT of the sequence {1, 1, 1, 1} using the matrix equations given in (7.1.7) and (7.1.8).

2. Repeat Problem 1 with the eight-point DFT of the sequence {1, 1, 1, 1, 0, 0, 0, 0}. Compare the results
with the results of Problem 1.
4. Prove the symmetry and periodicity properties of the twiddle factors defined as
(a) W_N^{k+N/2} = -W_N^k, and
(b) W_N^{k+N} = W_N^k.
5. Generalize the derivation of Example 7.7 to a four-point DFT and show a detailed signal-flow
graph of four-point DFT.
(a) Compute the circular convolution of the two sequences using DFT and IDFT.
(b) Show that the linear convolution of these two sequences is the triangular sequence given
by

x_3(n) = \begin{cases} n + 1, & 0 \le n < N \\ 2N - n - 1, & N \le n < 2N \\ 0, & \text{otherwise.} \end{cases}
(c) How can the circular convolution of the two sequences be made equal to the triangular sequence defined in (b)?
7. Construct the signal-flow diagram of the FFT for N = 16 using the decimation-in-time method with bit-reversal input.
10. Consider a digitized signal of one second with the sampling rate 20 kHz. The spectrum is
desired with a computational frequency resolution of 100 Hz or less. Is this possible? If
possible, what FFT size N should be used?
11. A 1 kHz sinusoid is sampled at 8 kHz. A 128-point FFT is performed to compute X(k). At what frequency indices k do we expect to observe peaks in |X(k)|?
12. A touch-tone phone with a dual-tone multi-frequency (DTMF) transmitter encodes each
keypress as a sum of two sinusoids, with one frequency taken from each of the following
groups:
13. Compute the linear convolution y(n) = x(n)*h(n) using a 512-point FFT, where x(n) is of length 4096 and h(n) is of length 256.
(a) How many FFTs and how many adds are required using the overlap-add method?
(b) How many FFTs are required using the overlap-save method?
(c) What is the length of output y(n)?
Part B
14. Write a C or MATLAB program to compute the fast convolution of a long sequence with a
short sequence employing the overlap-save method introduced in Section 7.3.4. Compare the
results with the MATLAB function fftfilt, which uses the overlap-add method.

15. Experiment with the capability of the psd function in MATLAB. Use a sinusoid embedded in white noise as the test signal.

16. Use the MATLAB function specgram to display the spectrogram of the speech file timit1.asc included in the software package.
Part C
17. The radix-2 FFT code used in the experiments is written in consideration of minimizing
the code size. An alternative FFT implementation can be more efficient in terms of the
execution speed with the expense of using more program memory locations. For example,
the twiddle factors used by the first stage and the first group of other stages are
constants, W_N^0 = 1. Therefore the multiplication operations in these stages can be simplified.
Modify the assembly FFT routine given in Table 7.5 to incorporate this observation. Profile
the run-time clock cycles and record the memory usage. Compare the results with those
obtained by Experiment 7C.
18. The radix-2 FFT is the most widely used algorithm for FFT computation. When the number of data samples is a power of 4 (i.e., N = 2^{2n} = 4^n), we can further improve the run-time efficiency by employing the radix-4 FFT algorithm. Modify the assembly FFT routine given in Table 7.5 for the radix-4 FFT algorithm. Profile the run-time clock cycles, and record the memory space usage for a 1024-point radix-4 FFT (2^{10} = 4^5 = 1024). Compare the radix-4
FFT results with the results of 1024-point radix-2 FFT computed by the assembly routine.
19. Take advantage of the twiddle factor W_N^0 = 1 to further improve the radix-4 FFT algorithm
run-time efficiency. Compare the results of 1024-point FFT implementation using different
approaches.
20. Most DSP applications have real input samples, but our complex FFT implementation zeros out the imaginary components of the complex buffer (see exp7c.c). This approach is simple and easy, but it is not efficient in terms of execution speed. For real input, we can split the even and odd samples into two sequences and compute both sequences in parallel, which reduces the execution time by approximately 50 percent. Given a real-valued input x(n) of 2N samples, we can define c(n) = a(n) + jb(n), where the two inputs a(n) = x(2n) and b(n) = x(2n + 1) are real sequences. We can represent these sequences as a(n) = [c(n) + c*(n)]/2 and b(n) = -j[c(n) - c*(n)]/2; then they can be written in terms of DFTs as A(k) = [C(k) + C*(N - k)]/2 and B(k) = -j[C(k) - C*(N - k)]/2. Finally, the real-input FFT can be obtained by X(k) = A(k) + W_{2N}^k B(k) and X(k + N) = A(k) - W_{2N}^k B(k), where k = 0, 1, ..., N - 1. Modify the complex radix-2 FFT assembly routine to efficiently compute 2N real input samples.
8
Adaptive Filtering
As discussed in previous chapters, filtering refers to the linear process designed to alter
the spectral content of an input signal in a specified manner. In Chapters 5 and 6, we
introduced techniques for designing and implementing FIR and IIR filters for given
specifications. Conventional FIR and IIR filters are time-invariant. They perform linear
operations on an input signal to generate an output signal based on the fixed coeffi-
cients. Adaptive filters are time varying; their filter characteristics, such as bandwidth and frequency response, change with time. Thus the filter coefficients cannot be determined
when the filter is implemented. The coefficients of the adaptive filter are adjusted
automatically by an adaptive algorithm based on incoming signals. This has the import-
ant effect of enabling adaptive filters to be applied in areas where the exact filtering
operation required is unknown or is non-stationary.
In Section 8.1, we will review the concepts of random processes that are useful in the
development and analysis of various adaptive algorithms. The most popular least-mean-
square (LMS) algorithm will be introduced in Section 8.2. Its important properties will be
analyzed in Section 8.3. Two widely used modified adaptive algorithms, the normalized
and leaky LMS algorithms, will be introduced in Section 8.4. In this chapter, we introduce
and analyze the LMS algorithm following the derivation and analysis given in [8]. In
Section 8.5, we will briefly introduce some important applications of adaptive filtering.
The implementation considerations will be discussed in Section 8.6, and the DSP imple-
mentations using the TMS320C55x will be presented in Section 8.7.
For many applications, one signal is often used to compare with another in order to
determine the similarity between the pair, and to determine additional information
based on the similarity. Autocorrelation is used to quantify the similarity between two
segments of the same signal. The autocorrelation function of the random process x(n) is
defined as

r_{xx}(n, k) = E[x(n)x(k)].    (8.1.1)

This function specifies the statistical relation of two samples at different time indices n
and k, and gives the degree of dependence between two random variables that are (n − k) units apart. For example, consider a digital white noise x(n) as uncorrelated random variables with zero mean and variance \sigma_x^2. The autocorrelation function is

r_{xx}(n, k) = E[x(n)x(k)] = \begin{cases} E[x(n)]E[x(k)] = 0, & n \ne k \\ E[x^2(n)] = \sigma_x^2, & n = k. \end{cases}    (8.1.2)
If we subtract the means in (8.1.1) before taking the expected value, we have the autocovariance function

\gamma_{xx}(n, k) = E\{[x(n) - m_x(n)][x(k) - m_x(k)]\} = r_{xx}(n, k) - m_x(n)m_x(k).    (8.1.3)
The objective in computing the correlation between two different random signals is to measure the degree to which the two signals are similar. The crosscorrelation and crosscovariance functions between two random processes x(n) and y(n) are defined as

r_{xy}(n, k) = E[x(n)y(k)]    (8.1.4)

and

\gamma_{xy}(n, k) = E\{[x(n) - m_x(n)][y(k) - m_y(k)]\} = r_{xy}(n, k) - m_x(n)m_y(k).    (8.1.5)
Correlation is a very useful DSP tool for detecting signals that are corrupted
by additive random noise, measuring the time delay between two signals, determining
the impulse response of a system (such as obtaining the room impulse response used in
Section 4.5.2), and many others. Signal correlation is often used in radar, sonar, digital
communications, and other engineering areas. For example, in CDMA digital commu-
nications, data symbols are represented with a set of unique key sequences. If one of
these sequences is transmitted, the receiver compares the received signal with every
possible sequence from the set to determine which sequence has been received. In radar
and sonar applications, the received signal reflected from the target is the delayed
version of the transmitted signal. By measuring the round-trip delay, one can determine
the location of the target.
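As a simple illustration of this idea, the following C sketch (hypothetical names, a direct time-domain computation rather than any routine from this book) locates the lag that maximizes the crosscorrelation between a transmitted sequence x(n) and a received sequence y(n); this lag corresponds to the round-trip delay.

/* Estimate the delay between x (transmitted) and y (received) by locating
   the peak of the crosscorrelation r_yx(k) = sum_n y(n + k) x(n). */
int estimate_delay(const float *x, const float *y, int N, int max_lag)
{
    int k, n, best_k = 0;
    float r, best_r = 0.0f;

    for (k = 0; k <= max_lag; k++)
    {
        r = 0.0f;
        for (n = 0; n < N - k; n++)
        {
            r += y[n + k] * x[n];
        }
        if (r > best_r)
        {
            best_r = r;
            best_k = k;
        }
    }
    return best_k;   /* lag (in samples) with the strongest correlation */
}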
Both correlation functions and covariance functions are extensively used in analyzing
random processes. In general, the statistical properties of a random signal such as the
mean, variance, and autocorrelation and autocovariance functions are time-varying
functions. A random process is said to be stationary if its statistics do not change
with time. The most useful and relaxed form of stationary is the wide-sense stationary
(WSS) process. A random process is called WSS if the following two conditions are
satisfied:
1. The mean value is independent of time. That is,

E[x(n)] = m_x,    (8.1.6)

where m_x is a constant.

2. The autocorrelation function depends only on the time difference. That is,

r_{xx}(k) = E[x(n + k)x(n)].    (8.1.7)

Equation (8.1.7) indicates that the autocorrelation function of a WSS process is independent of the time shift, and r_{xx}(k) denotes the autocorrelation function of a time lag of k samples.
The autocorrelation function r_{xx}(k) of a WSS process has the following important properties:

1. The autocorrelation function is an even function of the time lag k. That is,

r_{xx}(-k) = r_{xx}(k).

2. The autocorrelation function is bounded by the mean-squared value of the process, expressed as

|r_{xx}(k)| \le r_{xx}(0),

where r_{xx}(0) = E[x^2(n)] is equal to the mean-squared value, or the power, of the random process.

Thus the autocorrelation function of a signal has its maximum value at zero lag. If x(n) has a periodic component, then r_{xx}(k) will contain the same periodic component.
For example, consider the exponential sequence x(n) = a^n for n \ge 0, where |a| < 1. Its autocorrelation function for k \ge 0 is

r_{xx}(k) = \sum_{n=0}^{\infty} x(n + k)x(n) = \sum_{n=0}^{\infty} a^{n+k} a^n = a^k \sum_{n=0}^{\infty} a^{2n} = \frac{a^k}{1 - a^2}.

Similarly, the autocorrelation function of the sinusoid x(n) = \cos(\omega n) is periodic in the lag k with the same frequency \omega.
Note that r_{yx}(k) = r_{xy}(-k); that is, r_{yx}(k) is simply the folded version of r_{xy}(k). Hence, r_{yx}(k) provides exactly the same information as r_{xy}(k) with respect to the similarity of x(n) to y(n).
In practice, we only have one sample sequence fx
ng available for analysis. As
discussed earlier, a stationary random process x(n) is ergodic if all its statistics can be
determined from a single realization of the process, provided that the realization is long
enough. Therefore time averages are equal to ensemble averages when the record length
is infinite. Since we do not have data of infinite length, the averages we compute differ
from the true values. In dealing with a finite-duration sequence, the sample mean of x(n) is defined as

\hat{m}_x = \frac{1}{N} \sum_{n=0}^{N-1} x(n),    (8.1.13)
where N is the number of samples in the short-time analysis interval. The sample
variance is defined as
\hat{\sigma}_x^2 = \frac{1}{N} \sum_{n=0}^{N-1} \left[x(n) - \hat{m}_x\right]^2.    (8.1.14)
Similarly, the sample autocorrelation function is defined as

\hat{r}_{xx}(k) = \frac{1}{N - k} \sum_{n=0}^{N-k-1} x(n + k)x(n), \quad k = 0, 1, \ldots, N - 1,    (8.1.15)
where N is the length of the sequence x(n). Note that for a given sequence of length
N, Equation (8.1.15) generates values for up to N different lags. In practice, we can
only expect good results for lags of no more than 5-10 percent of the length of the
signals.
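A direct C implementation of the estimator in (8.1.15) might look as follows (a hypothetical helper, not part of the experiment software):

/* Sample autocorrelation estimate of (8.1.15): r[k], k = 0, ..., lags-1.
   x[] has N samples; lags should be much smaller than N in practice.    */
void sample_autocorr(const float *x, int N, float *r, int lags)
{
    int k, n;
    for (k = 0; k < lags; k++)
    {
        float acc = 0.0f;
        for (n = 0; n < N - k; n++)
        {
            acc += x[n + k] * x[n];
        }
        r[k] = acc / (float)(N - k);   /* divide by (N - k) as in (8.1.15) */
    }
}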
The autocorrelation and crosscorrelation functions introduced in this section can be
computed using the MATLAB function xcorr in the Signal Processing Toolbox. The
crosscorrelation function r_{xy}(k) of the two sequences x(n) and y(n) can be computed using the statement
c = xcorr(x, y);
where x and y are length-N vectors and the crosscorrelation vector c has length 2N − 1.
The autocorrelation function r_{xx}(k) of the sequence x(n) can be computed using the
statement
c = xcorr(x);
In addition, the crosscovariance function can be estimated using
v = xcov(x, y);
and the autocovariance function can be computed with
v = xcov(x);
See Signal Processing Toolbox User's Guide for details.
In the study of deterministic digital signals, we use the discrete-time Fourier transform
(DTFT) or the z-transform to find the frequency contents of the signals. In this section,
we will use the same transform for random signals. Consider an ergodic random process
x(n). This sequence cannot really be representative of the random process because the sequence x(n) is only one of infinitely many possible sequences. However, if we consider the
autocorrelation function r_{xx}(k), the result is always the same no matter which sample sequence is used to compute r_{xx}(k). Therefore we should apply the transform to r_{xx}(k) rather than to x(n).
The correlation functions represent the time-domain description of the statistics of a
random process. The frequency-domain statistics are represented by the power density
spectrum (PDS) or the autopower spectrum. The PDS is the DTFT (or the z-transform)
of the autocorrelation function rxx
k of a WSS signal x(n) defined as
X
1
j!k
Pxx
! rxx
ke ,
8:1:16
k 1
or
X
1
Pxx
z rxx
kz k :
8:1:17
k 1
A sufficient condition for the existence of the PDS is that r_{xx}(k) is summable. The PDS
defined in (7.3.16) is equal to the DFT of the autocorrelation function. The windowing
technique introduced in Section 7.3.3 can be used to improve the convergence properties
of (7.3.16) and (7.3.17) if the DFT is used in computing the PDS of random signals.
Equation (8.1.16) implies that the autocorrelation function is the inverse DTFT of the
PDS, which is expressed as
r_{xx}(k) = \frac{1}{2\pi} \int_{-\pi}^{\pi} P_{xx}(\omega) e^{j\omega k} \, d\omega.    (8.1.18)
Thus r_{xx}(0) represents the average power in the random signal x(n). The PDS is a periodic function of the frequency \omega, with period equal to 2\pi. We can show (in the exercise problems) that P_{xx}(\omega) of a WSS signal is a real-valued function of \omega. If x(n) is a real-valued signal, P_{xx}(\omega) is an even function of \omega. That is,

P_{xx}(-\omega) = P_{xx}(\omega),

or

P_{xx}(z^{-1}) = P_{xx}(z).

The DTFT of the crosscorrelation function, P_{xy}(\omega), of two WSS signals x(n) and y(n) is given by
P_{xy}(\omega) = \sum_{k=-\infty}^{\infty} r_{xy}(k) e^{-j\omega k},    (8.1.22)

or

P_{xy}(z) = \sum_{k=-\infty}^{\infty} r_{xy}(k) z^{-k}.    (8.1.23)
Example 8.3: The autocorrelation function of a WSS white random process can be defined as

r_{xx}(k) = \sigma_x^2 \delta(k) + m_x^2.

An important white random signal is called white noise, which has zero mean. Thus its autocorrelation function is expressed as

r_{xx}(k) = \sigma_x^2 \delta(k).    (8.1.26)
Consider a linear and time-invariant digital filter defined by the impulse response
h(n), or the transfer function H(z). The input of the filter is a WSS random signal x(n)
with the PDS P_{xx}(\omega). As illustrated in Figure 8.1, the PDS of the filter output y(n) can be expressed as

P_{yy}(\omega) = |H(\omega)|^2 P_{xx}(\omega)    (8.1.28)

or

P_{yy}(z) = |H(z)|^2 P_{xx}(z),    (8.1.29)
where H(\omega) is the frequency response of the filter. Therefore the value of the output PDS at frequency \omega depends on the squared magnitude response of the filter and the input PDS at the same frequency.
Other important relationships between x(n) and y(n) are

m_y = E\left[\sum_{l=-\infty}^{\infty} h(l) x(n - l)\right] = \sum_{l=-\infty}^{\infty} h(l) E[x(n - l)] = m_x \sum_{l=-\infty}^{\infty} h(l),    (8.1.30)
and

r_{yx}(k) = E[y(n + k)x(n)] = E\left[\sum_{l=-\infty}^{\infty} h(l) x(n + k - l) x(n)\right] = \sum_{l=-\infty}^{\infty} h(l) r_{xx}(k - l) = h(k) * r_{xx}(k).    (8.1.31)
Similarly, the relationships between the input and the output signals are

r_{xy}(k) = \sum_{l=-\infty}^{\infty} h(l) r_{xx}(k + l) = h(-k) * r_{xx}(k)    (8.1.33)
and
If the input signal x(n) is a zero-mean white noise with the autocorrelation function
defined in (8.1.26), Equation (8.1.31) becomes
r_{yx}(k) = \sum_{l=-\infty}^{\infty} h(l) \sigma_x^2 \delta(k - l) = \sigma_x^2 h(k).    (8.1.35)
This equation shows that by computing the crosscorrelation function r_{yx}(k), the impulse
response h(n) of a filter (or system) can be obtained. This fact can be used to estimate an
unknown system such as the room impulse response used in Chapter 4.
Example 8.4: Let the system shown in Figure 8.1 be a second-order FIR filter. The input x(n) is a zero-mean white noise given in Example 8.3, and the I/O equation is expressed as

y(n) = x(n) + 3x(n - 1) + 2x(n - 2).

Find the mean m_y and the autocorrelation function r_{yy}(k) of the output y(n).
(a) m_y = E[y(n)] = E[x(n)] + 3E[x(n - 1)] + 2E[x(n - 2)] = 0.

(b) r_{yy}(k) = E[y(n + k)y(n)]
= 14 r_{xx}(k) + 9 r_{xx}(k + 1) + 9 r_{xx}(k - 1) + 2 r_{xx}(k + 2) + 2 r_{xx}(k - 2)

= \begin{cases} 14\sigma_x^2, & k = 0 \\ 9\sigma_x^2, & k = \pm 1 \\ 2\sigma_x^2, & k = \pm 2 \\ 0, & \text{otherwise.} \end{cases}
An adaptive filter consists of two distinct parts - a digital filter to perform the desired
signal processing, and an adaptive algorithm to adjust the coefficients (or weights) of
that filter. A general form of adaptive filter is illustrated in Figure 8.2, where d(n) is a
desired signal (or primary input signal), y(n) is the output of a digital filter driven by a
reference input signal x(n), and an error signal e(n) is the difference between d(n) and
y(n). The function of the adaptive algorithm is to adjust the digital filter coefficients to
Figure 8.2 General form of the adaptive filter
minimize the mean-square value of e(n). Therefore the filter weights are updated so that
the error is progressively minimized on a sample-by-sample basis.
In general, there are two types of digital filters that can be used for adaptive filtering:
FIR and IIR filters. The choice of an FIR or an IIR filter is determined by practical
considerations. The FIR filter is always stable and can provide a linear phase response.
On the other hand, the IIR filter involves both zeros and poles. Unless they are properly
controlled, the poles in the filter may move outside the unit circle and make the filter
unstable. Because the filter is required to be adaptive, the stability problems are much more difficult to handle. Thus the FIR adaptive filter is widely used for real-time applications.
The discussions in the following sections will be restricted to the class of adaptive FIR
filters.
The most widely used adaptive FIR filter is depicted in Figure 8.3. Given a set of L coefficients, w_l(n), l = 0, 1, ..., L − 1, and a data sequence {x(n), x(n − 1), ..., x(n − L + 1)}, the filter output signal is computed as

y(n) = \sum_{l=0}^{L-1} w_l(n) x(n - l),    (8.2.1)

where the filter coefficients w_l(n) are time varying and updated by the adaptive algorithms that will be discussed next.
We define the input vector at time n as

x(n) = [x(n)  x(n - 1)  \ldots  x(n - L + 1)]^T,

and the weight vector as w(n) = [w_0(n)  w_1(n)  \ldots  w_{L-1}(n)]^T. Then the output signal y(n) in (8.2.1) can be expressed using the vector operation

y(n) = w^T(n) x(n).

The filter output y(n) is compared with the desired response d(n), which results in the error signal

e(n) = d(n) - y(n) = d(n) - w^T(n) x(n).    (8.2.5)
In the following sections, we assume that d(n) and x(n) are stationary, and our objective is
to determine the weight vector so that the performance (or cost) function is minimized.
The general block diagram of the adaptive filter shown in Figure 8.2 updates the
coefficients of the digital filter to optimize some predetermined performance criterion.
The most commonly used performance measurement is based on the mean-square error (MSE), defined as

\xi(n) = E[e^2(n)].    (8.2.6)
For an adaptive FIR filter, \xi(n) depends on the L filter weights w_0(n), w_1(n), ..., w_{L-1}(n). The MSE function can be determined by substituting (8.2.5) into (8.2.6), and is expressed as

\xi(n) = E[d^2(n)] - 2 p^T w(n) + w^T(n) R w(n),    (8.2.7)

where

p = E[d(n) x(n)]

is the crosscorrelation vector between d(n) and x(n). In (8.2.7), R is the input autocorrelation matrix defined as

R = E[x(n) x^T(n)] =
\begin{bmatrix}
r_{xx}(0)   & r_{xx}(1)   & \cdots & r_{xx}(L-1) \\
r_{xx}(1)   & r_{xx}(0)   & \cdots & r_{xx}(L-2) \\
\vdots      & \vdots      & \ddots & \vdots      \\
r_{xx}(L-1) & r_{xx}(L-2) & \cdots & r_{xx}(0)
\end{bmatrix},    (8.2.10)
where r_{xx}(k) = E[x(n)x(n - k)] is the autocorrelation function of the reference input x(n).

Figure 8.3 Structure of the adaptive FIR filter
The optimum filter w^o minimizes the MSE cost function \xi(n). Vector differentiation of (8.2.7) gives w^o as the solution of

R w^o = p.    (8.2.12)
This system equation defines the optimum filter coefficients in terms of two correlation
functions - the autocorrelation function of the filter input and the crosscorrelation
function between the filter input and the desired response. Equation (8.2.12) provides a
solution to the adaptive filtering problem in principle. However, in many applications,
the signal may be non-stationary. This linear algebraic solution, w^o = R^{-1} p, requires continuous estimation of R and p, which involves a considerable amount of computation. In addition, when the dimension of the autocorrelation matrix is large, the calculation of R^{-1} may
present a significant computational burden. Therefore a more useful algorithm is
obtained by developing a recursive method for computing wo , which will be discussed
in the next section.
To obtain the minimum MSE, we substitute the optimum weight vector w^o = R^{-1} p for w(n) in (8.2.7), resulting in

\xi_{min} = E[d^2(n)] - p^T w^o.    (8.2.13)
Since R is positive semidefinite, the quadratic form on the right-hand side of (8.2.7)
indicates that any departure of the weight vector w(n) from the optimum wo would
increase the error above its minimum value. In other words, the error surface is concave
and possesses a unique minimum. This feature is very useful when we utilize search
techniques in seeking the optimum weight vector. In such cases, our objective is to
develop an algorithm that can automatically search the error surface to find the
optimum weights that minimize \xi(n) using the input signal x(n) and the error signal e(n).
Example 8.6: Consider a second-order FIR filter with two coefficients w_0 and w_1, the desired signal d(n) = \sqrt{2}\sin(n\omega_0), n \ge 0, and the reference signal x(n) = d(n - 1). Find w^o and \xi_{min}.

Similar to Example 8.2, we can obtain r_{xx}(0) = E[x^2(n)] = E[d^2(n)] = 1, r_{xx}(1) = \cos(\omega_0), r_{xx}(2) = \cos(2\omega_0), r_{dx}(0) = r_{xx}(1), and r_{dx}(1) = r_{xx}(2). From (8.2.12), we have

w^o = R^{-1} p = \begin{bmatrix} 1 & \cos\omega_0 \\ \cos\omega_0 & 1 \end{bmatrix}^{-1} \begin{bmatrix} \cos\omega_0 \\ \cos 2\omega_0 \end{bmatrix} = \begin{bmatrix} 2\cos\omega_0 \\ -1 \end{bmatrix}.
Equation (8.2.7) is the general expression for the performance function of an adaptive FIR filter with given weights. That is, the MSE is a function of the filter coefficient vector w(n). It is important to note that the MSE is a quadratic function because the weights appear only to the first and second degrees in (8.2.7). For each coefficient vector w(n), there is a corresponding (scalar) value of MSE. Therefore the MSE values associated with w(n) form an (L + 1)-dimensional space, which is commonly called the MSE surface, or the performance surface.
For L = 2, this corresponds to an error surface in a three-dimensional space. The height of \xi(n) corresponds to the power of the error signal e(n) that results from filtering the signal x(n) with the coefficients w(n). If the filter coefficients change, the power in the error signal will also change. This is indicated by the changing height of the surface above the w_0-w_1 plane as the component values of w(n) are varied. Since the error surface is quadratic, a unique filter setting w(n) = w^o will produce the minimum MSE, \xi_{min}. In this two-weight case, the error surface is an elliptic paraboloid. If we cut the paraboloid with planes parallel to the w_0-w_1 plane, we obtain concentric ellipses of constant mean-square error. These ellipses are called the error contours of the error surface.
Example 8.7: Consider a second-order FIR filter with two coefficients w0 and w1 .
The reference signal x(n) is a zero-mean white noise with unit variance. The
desired signal is given as
The MATLAB script (exam8_7a.m in the software package) is used to plot the error
surface shown in Figure 8.4(a) and the script exam8_7b.m is used to plot the error
contours shown in Figure 8.4(b).
Figure 8.4 (a) The error surface (MSE plotted versus w0 and w1); (b) the error contours in the (w0, w1) plane
One of the most important properties of the MSE surface is that it has only one global
minimum point. At that minimum point, the tangents to the surface must be 0. Minim-
izing the MSE is the objective of many current adaptive methods such as the LMS
algorithm.
As shown in Figure 8.4, the MSE of (8.2.7) is a quadratic function of the weights that
can be pictured as a positive-concave hyperparabolic surface. Adjusting the weights to
minimize the error involves descending along this surface until reaching the `bottom of
the bowl.' Various gradient-based algorithms are available. These algorithms are based
on making local estimates of the gradient and moving downward toward the bottom of
the bowl. The selection of an algorithm is usually decided by the speed of convergence,
steady-state performance, and the computational complexity.
The steepest-descent method reaches the minimum by following the direction in
which the performance surface has the greatest rate of decrease. Specifically, it is an algorithm whose path follows the negative gradient of the performance surface. The
steepest-descent method is an iterative (recursive) technique that starts from some initial
(arbitrary) weight vector. It improves with the increased number of iterations. Geomet-
rically, it is easy to see that with successive corrections of the weight vector in the
direction of the steepest descent on the concave performance surface, we should arrive
at its minimum, \xi_{min}, at which point the weight vector components take on their optimum values. Let \xi(0) represent the value of the MSE at time n = 0 with an arbitrary
choice of the weight vector w(0). The steepest-descent technique enables us to descend
to the bottom of the bowl, wo , in a systematic way. The idea is to move on the error
surface in the direction of the tangent at that point. The weights of the filter are updated
at each iteration in the direction of the negative gradient of the error surface.
The mathematical development of the method of steepest descent is easily seen from
the viewpoint of a geometric approach using the MSE surface. Each selection of a filter
weight vector w(n) corresponds to only one point on the MSE surface, [w(n), \xi(n)]. Suppose that an initial filter setting w(0) on the MSE surface, [w(0), \xi(0)], is arbitrarily chosen. A specific orientation to the surface is then described using the directional derivatives of the surface at that point. These directional derivatives quantify the rate of change of the MSE surface with respect to the w(n) coordinate axes. The gradient of the error surface, \nabla\xi(n), is defined as the vector of these directional derivatives.
The concept of steepest descent can be implemented as the following algorithm:

w(n + 1) = w(n) - \frac{\mu}{2} \nabla\xi(n),    (8.2.14)

where \mu is a convergence factor (or step size) that controls stability and the rate of descent to the bottom of the bowl. The larger the value of \mu, the faster the speed of descent. The vector \nabla\xi(n) denotes the gradient of the error function with respect to w(n), and the negative sign increments the adaptive weight vector in the negative gradient direction. The successive corrections to the weight vector in the direction of
the steepest descent of the performance surface should eventually lead to the minimum
mean-square error \xi_{min}, at which point the weight vector reaches its optimum value w^o. When w(n) has converged to w^o, that is, when it reaches the minimum point of the performance surface, the gradient \nabla\xi(n) = 0. At this time, the adaptation in (8.2.14) is stopped and the weight vector stays at its optimum solution. The convergence can be viewed as a ball placed on the `bowl-shaped' MSE surface at the point [w(0), \xi(0)]. If the
ball was released, it would roll toward the minimum of the surface, and would initially
roll in a direction opposite to the direction of the gradient, which can be interpreted as
rolling towards the bottom of the bowl.
From (8.2.14), we see that the increment from w(n) to w(n + 1) is in the negative gradient direction, so the weight tracking will closely follow the steepest-descent path on the performance surface. However, in many practical applications the statistics of d(n) and x(n) are unknown. Therefore the method of steepest descent cannot be used directly, since it assumes exact knowledge of the gradient vector at each iteration.
Widrow [13] used the instantaneous squared error, e^2(n), to estimate the MSE. That is,

\hat{\xi}(n) = e^2(n).

Since e(n) = d(n) - w^T(n)x(n) and \nabla e(n) = -x(n), the gradient estimate becomes

\hat{\nabla}\xi(n) = -2x(n)e(n).    (8.2.17)
Substituting this gradient estimate into the steepest-descent algorithm of (8.2.14), we have

w(n + 1) = w(n) + \mu x(n) e(n).    (8.2.18)

This is the well-known LMS algorithm, or stochastic gradient algorithm. This algorithm
is simple and does not require squaring, averaging, or differentiating. The LMS algo-
rithm provides an alternative method for determining the optimum filter coefficients
without explicitly computing the matrix inversion suggested in (8.2.12).
Widrow's LMS algorithm is illustrated in Figure 8.5 and is summarized as follows:
1. Determine L, \mu, and w(0), where L is the order of the filter, \mu is the step size, and w(0) is the initial weight vector at time n = 0.

2. Compute the adaptive filter output

y(n) = \sum_{l=0}^{L-1} w_l(n) x(n - l).    (8.2.19)

3. Compute the error signal

e(n) = d(n) - y(n).
Figure 8.5 Block diagram of an adaptive filter with the LMS algorithm
4. Update the adaptive weight vector from w(n) to w(n + 1) by using the LMS algorithm

w(n + 1) = w(n) + \mu x(n) e(n).
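A straightforward C sketch of these four steps for one sampling period is given below; the function and variable names are illustrative assumptions, not the experiment code presented in Section 8.7.

/* One iteration of the LMS adaptive FIR filter (steps 2-4 above).
   w[] and x[] have L elements; x[0] holds the newest input sample x(n). */
float lms_update(float *w, const float *x, float d, float mu, int L)
{
    float y = 0.0f, e;
    int l;

    for (l = 0; l < L; l++)          /* step 2: filter output, (8.2.19)   */
    {
        y += w[l] * x[l];
    }
    e = d - y;                       /* step 3: error signal              */

    for (l = 0; l < L; l++)          /* step 4: LMS update, (8.2.18)      */
    {
        w[l] += mu * x[l] * e;
    }
    return e;                        /* return e(n) for monitoring        */
}

The caller is responsible for shifting the newest sample into x[] before each call and for choosing \mu small enough for stability, as discussed in Section 8.3.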
As shown in Figure 8.5, the LMS algorithm involves the presence of feedback. Thus the algorithm is subject to the possibility of becoming unstable. From (8.2.18), we observe that the parameter \mu controls the size of the incremental correction applied to the weight vector as we adapt from one iteration to the next. The mean weight convergence of the LMS algorithm from the initial condition w(0) to the optimum filter w^o requires that

0 < \mu < \frac{2}{\lambda_{max}},    (8.3.1)

where \lambda_{max} is the largest eigenvalue of the autocorrelation matrix R defined in (8.2.10). Applying the stability constraint on \mu given in (8.3.1) is difficult because of the computation of \lambda_{max} when L is large.
In practical applications, it is desirable to estimate \lambda_{max} using a simple method. From (8.2.10), we have
tr[R] = L r_{xx}(0) = \sum_{l=0}^{L-1} \lambda_l,    (8.3.2)

so that

\lambda_{max} \le \sum_{l=0}^{L-1} \lambda_l = L r_{xx}(0) = L P_x,    (8.3.3)

where

P_x = r_{xx}(0) = E[x^2(n)]    (8.3.4)

is the power of the reference signal x(n). Therefore the stability condition (8.3.1) can be replaced by the more conservative, but easily estimated, bound

0 < \mu < \frac{2}{L P_x}.    (8.3.5)

This simplified bound has two practical implications:
1. Since the upper bound on \mu is inversely proportional to L, a small \mu is used for large-order filters.

2. Since \mu is made inversely proportional to the input signal power, weaker signals use a larger \mu and stronger signals use a smaller \mu. One useful approach is to normalize \mu with respect to the input signal power P_x. The resulting algorithm is called the normalized LMS algorithm, which will be discussed in Section 8.4.
In the previous section, we saw that w(n) converges to w^o if the selection of \mu satisfies (8.3.1). Convergence of the weight vector w(n) from w(0) to w^o corresponds to the convergence of the MSE from \xi(0) to \xi_{min}. Therefore convergence of the MSE toward its minimum value is a commonly used performance measurement in adaptive systems because of its simplicity. During adaptation, the squared error e^2(n) is non-stationary as the weight vector w(n) adapts toward w^o. The corresponding MSE can thus be defined
only based on ensemble averages. A plot of the MSE versus time n is referred to as the
learning curve for a given adaptive algorithm. Since the MSE is the performance
criterion of LMS algorithms, the learning curve is a natural way to describe the transient
behavior.
Each adaptive mode has its own time constant, which is determined by the overall adaptation constant \mu and the eigenvalue \lambda_l associated with that mode. Overall convergence is clearly limited by the slowest mode. Thus the overall MSE time constant can
be approximated as
\tau_{mse} \approx \frac{1}{\mu \lambda_{min}},    (8.3.6)

where \lambda_{min} is the minimum eigenvalue of the R matrix. Because \tau_{mse} is inversely proportional to \mu, we have a large \tau_{mse} when \mu is small (i.e., the speed of convergence is slow). If we use a large value of \mu, the time constant is small, which implies faster convergence.
The maximum time constant \tau_{mse} \approx 1/\mu\lambda_{min} is a conservative estimate of filter performance, since only large eigenvalues will exert significant influence on the convergence time. Since some of the projections may be negligibly small, the adaptive filter error convergence may be controlled by fewer modes than the number of adaptive filter weights. Consequently, the MSE often converges more rapidly than the upper bound of (8.3.6) would suggest.
Because the upper bound of \tau_{mse} is inversely proportional to \lambda_{min}, a small \lambda_{min} can result in a large time constant (i.e., a slow convergence rate). Unfortunately, if \lambda_{max} is also very large, the selection of \mu will be limited by (8.3.1) such that only a small \mu can satisfy the stability constraint. Therefore if \lambda_{max} is very large and \lambda_{min} is very small, from (8.3.6), the time constant can be very large, resulting in very slow convergence. As previously noted, the fastest convergence of the dominant mode occurs for \mu = 1/\lambda_{max}. Substituting this step size into (8.3.6) results in

\tau_{mse} \approx \frac{\lambda_{max}}{\lambda_{min}}.    (8.3.7)
For stationary input and sufficiently small \mu, the speed of convergence of the algorithm is dependent on the eigenvalue spread (the ratio of the maximum to minimum eigenvalues) of the matrix R.
As mentioned in the previous section, the eigenvalues \lambda_{max} and \lambda_{min} are very difficult to compute. However, there is an efficient way to estimate the eigenvalue spread from the spectral dynamic range. That is,

\frac{\lambda_{max}}{\lambda_{min}} \approx \frac{\max_{\omega} |X(\omega)|^2}{\min_{\omega} |X(\omega)|^2},    (8.3.8)

where X(\omega) is the DTFT of x(n) and the maximum and minimum are calculated over the frequency range 0 \le \omega \le \pi. From (8.3.7) and (8.3.8), input signals with a flat (white) spectrum have the fastest convergence speed.
Because the LMS algorithm uses the noisy instantaneous estimate of the gradient, this gradient noise prevents w(n + 1) from staying at w^o in steady state. The result is that w(n) varies randomly about w^o. Because w^o corresponds to the minimum MSE, when w(n) moves away from w^o, it causes \xi(n) to be larger than its minimum value, \xi_{min}, thus producing excess noise at the filter output.
The excess MSE, which is caused by random noise in the weight vector after convergence, is defined as the average increase of the MSE. For the LMS algorithm, it can be approximated as

\xi_{excess} \approx \frac{\mu}{2} L P_x \xi_{min}.    (8.3.9)
This approximation shows that the excess MSE is directly proportional to \mu. The larger the value of \mu, the worse the steady-state performance after convergence. However, Equation (8.3.6) shows that a larger \mu results in faster convergence. There is a design
trade-off between the excess MSE and the speed of convergence.
The optimal step size \mu is difficult to determine. Improper selection of \mu might make the convergence speed unnecessarily slow or introduce excess MSE. If the signal is non-stationary and real-time tracking capability is crucial for a given application, then use a larger \mu. If the signal is stationary and convergence speed is not important, use a smaller \mu to achieve better performance in steady state. In some practical applications, we can use a larger \mu at the beginning of the operation for faster convergence, then use a smaller \mu to achieve better steady-state performance.
The excess MSE, \xi_{excess}, in (8.3.9) is also proportional to the filter order L, which means that a larger L results in larger algorithm noise. From (8.3.5), a larger L implies a smaller \mu, resulting in slower convergence. On the other hand, a large L also implies better filter characteristics such as a sharp cutoff. There exists an optimum order L for any given application. The selection of L and \mu also affects the finite-precision errors,
which will be discussed in Section 8.6.
In a stationary environment, the signal statistics are unknown but fixed. The LMS
algorithm gradually learns the required input statistics. After convergence to a steady
state, the filter weights jitter around the desired fixed values. The algorithm perform-
ance is determined by both the speed of convergence and the weight fluctuations in
steady state. In the non-stationary case, the algorithm must continuously track the time-
varying statistics of the input. Performance is more difficult to assess.
The LMS algorithm described in the previous section is the most widely used adaptive
algorithm for practical applications. In this section, we present two modified algorithms
that are the direct variants of the basic LMS algorithm.
The stability, convergence speed, and fluctuation of the LMS algorithm are governed by
the step size \mu and the reference signal power. As shown in (8.3.5), the maximum stable
step size \mu is inversely proportional to the filter order L and the power of the reference
signal x(n). One important technique to optimize the speed of convergence while
maintaining the desired steady-state performance, independent of the reference signal
power, is known as the normalized LMS algorithm (NLMS). The NLMS algorithm is expressed as

w(n + 1) = w(n) + \mu(n) x(n) e(n),    (8.4.1)

with the time-varying step size

\mu(n) = \frac{\alpha}{L \hat{P}_x(n)},    (8.4.2)

where \hat{P}_x(n) is an estimate of the power of x(n) at time n, and \alpha is a normalized step size that satisfies the criterion 0 < \alpha < 2.
2. Since it is not desirable that the power estimate \hat{P}_x(n) be zero or very small, a software constraint is required to ensure that \mu(n) is bounded even if \hat{P}_x(n) is very small when the signal is absent for a long time. This can be achieved by modifying (8.4.2) as

\mu(n) = \frac{\alpha}{L \hat{P}_x(n) + c},    (8.4.4)

where c is a small constant.
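A simple C sketch of this normalized step-size computation is shown below; the exponentially weighted power estimate, the smoothing factor beta, and the names are illustrative assumptions rather than the book's implementation.

/* Update the running power estimate of x(n) and return the normalized
   step size of (8.4.4).                                                */
float nlms_stepsize(float *Px_hat, float x_new, float alpha,
                    float beta, float c, int L)
{
    /* exponentially weighted estimate of P_x */
    *Px_hat = (1.0f - beta) * (*Px_hat) + beta * x_new * x_new;

    /* normalized step size, kept finite by the small constant c */
    return alpha / ((float)L * (*Px_hat) + c);
}

The returned \mu(n) is then used in place of the fixed \mu in the LMS update of (8.2.18).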
Insufficient spectral excitation of the LMS algorithm may result in divergence of the
adaptive weights. In that case, the solution is not unique and finite-precision effects can
cause the unconstrained weights to grow without bound, resulting in overflow during the
weight update process. This long-term instability is undesirable for real-time applications.
Divergence can often be avoided by means of a `leaking' mechanism used during the
weight update calculation. This is called the leaky LMS algorithm and is expressed as

w(n + 1) = \nu w(n) + \mu x(n) e(n),    (8.4.5)

where \nu is the leakage factor with 0 < \nu \le 1. It can be shown that leakage is the
deterministic equivalent of adding low-level white noise. Therefore this approach results
in some degradation in adaptive filter performance. The value of the leakage factor is
determined by the designer on an experimental basis as a compromise between robust-
ness and loss of performance of the adaptive filter. The leakage factor introduces a bias
on the long-term coefficient estimation. The excess error power due to the leakage is proportional to [(1 - \nu)/\mu]^2. Therefore (1 - \nu) should be kept smaller than \mu in order to maintain an acceptable level of performance. For fixed-point hardware realization, multiplication of each coefficient by \nu, as shown in (8.4.5), can lead to the introduction
of roundoff noise, which adds to the excess MSE. Therefore the leakage effects must be
incorporated into the design procedure for determining the required coefficient and
internal data wordlength. The leaky LMS algorithm not only prevents unconstrained
weight overflow, but also limits the output power in order to avoid nonlinear distortion.
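In code, the only change from the LMS update sketched earlier is the multiplication of each weight by the leakage factor; a minimal C sketch (names assumed) is:

/* Leaky LMS coefficient update of (8.4.5): w <- nu*w + mu*e*x */
void leaky_lms_update(float *w, const float *x, float e,
                      float mu, float nu, int L)
{
    int l;
    for (l = 0; l < L; l++)
    {
        w[l] = nu * w[l] + mu * e * x[l];
    }
}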
8.5 Applications
The desirable features of an adaptive filter are the ability to operate in an unknown
environment and to track time variations of the input signals, making it a powerful
algorithm for DSP applications. The essential difference between various applications
of adaptive filtering is where the signals x(n), d(n), y(n), and e(n) are connected. There
are four basic classes of adaptive filtering applications: identification, inverse modeling,
prediction, and interference canceling.
Figure 8.6 Block diagram of adaptive system identification using the LMS algorithm
Adaptive system identification is a technique that uses an adaptive filter for the model
W(z). This section presents the application of adaptive estimation techniques for direct
system modeling. This technique has been widely applied in echo cancellation, which
will be introduced in Sections 9.4 and 9.5. A further application for system modeling is
to estimate various transfer functions in active noise control systems [8].
Adaptive system identification is a very important procedure that is used frequently
in the fields of control systems, communications, and signal processing. The modeling
of a single-input/single-output dynamic system (or plant) is shown in Figure 8.6, where
x(n), which is usually white noise, is applied simultaneously to the adaptive filter and the
unknown system. The output of the unknown system then becomes the desired signal,
d(n), for the adaptive filter. If the input signal x(n) provides sufficient spectral excita-
tion, the adaptive filter output y(n) will approximate d(n) in an optimum sense after
convergence.
Identification could mean that a set of data is collected from the system, and that a
separate procedure is used to construct a model. Such a procedure is usually called off-
line (or batch) identification. In many practical applications, however, the model is
sometimes needed on-line during the operation of the system. That is, it is necessary to
identify the model at the same time that the data set is collected. The model is updated at
each time instant that a new data set becomes available. The updating is performed with
a recursive adaptive algorithm such as the LMS algorithm.
As shown in Figure 8.6, it is desired to learn the structure of the unknown system
from knowledge of its input x(n) and output d(n). If the unknown time-invariant system
P(z) can be modeled using an FIR filter of order L, the estimation error is given as
e(n) = d(n) − y(n) = Σ_{l=0}^{L−1} [p_l − w_l(n)] x(n − l).        (8.5.1)
The basic concept is that the adaptive filter adjusts itself, intending to cause its output to
match that of the unknown system. When the difference between the physical system
response d(n) and adaptive model response y(n) has been minimized, the adaptive model
approximates P(z). In actual applications, there will be additive noise present at the
adaptive filter input and so the filter structure will not exactly match that of the unknown
system. When the plant is time varying, the adaptive algorithm has the task of keeping the
modeling error small by continually tracking time variations of the plant dynamics.
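The modeling process described above can be sketched in C as follows (a simplified simulation with assumed names and parameters; the book's own implementation appears in the experiment section of this chapter):

/* sysid_sketch.c - adaptive system identification (illustrative sketch) */
#include <stdlib.h>

#define L 16                             /* plant and model order (assumed) */

void sysid(float p[], float w[], float mu, int iterations)
{
    float x[L] = {0};                    /* common signal buffer            */
    int n, l;

    for (n = 0; n < iterations; n++) {
        float d = 0.0f, y = 0.0f, e;

        for (l = L-1; l > 0; l--)        /* shift in a new white-noise sample */
            x[l] = x[l-1];
        x[0] = (float)rand()/RAND_MAX - 0.5f;

        for (l = 0; l < L; l++) {
            d += p[l]*x[l];              /* unknown system output d(n)      */
            y += w[l]*x[l];              /* adaptive filter output y(n)     */
        }
        e = d - y;                       /* estimation error, as in (8.5.1) */
        for (l = 0; l < L; l++)
            w[l] += mu*e*x[l];           /* LMS update                      */
    }
}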
Linear prediction is a classic signal processing technique that provides an estimate of the
value of an input process at a future time where no measured data is yet available. The
techniques have been successfully applied to a wide range of applications such as speech
coding and separating signals from noise. As illustrated in Figure 8.7, the time-domain
predictor consists of a linear prediction filter in which the coefficients w_l(n) are updated
with the LMS algorithm. The predictor output y(n) is expressed as
y(n) = Σ_{l=0}^{L−1} w_l(n) x(n − Δ − l),                          (8.5.3)
where the delay Δ is the number of samples involved in the prediction distance of the
filter. The coefficients are updated as

w_l(n + 1) = w_l(n) + μ e(n) x(n − Δ − l),                         (8.5.4)

where e(n) = x(n) − y(n) is the prediction error. Consider an input that consists of M
sinusoidal components in broadband noise, expressed as

x(n) = s(n) + v(n) = Σ_{m=0}^{M−1} A_m sin(ω_m n + φ_m) + v(n),    (8.5.5)
where v(n) is white noise with power σ_v². In this application, the structure
shown in Figure 8.7 is called the adaptive line enhancer, which provides an efficient
means for the adaptive tracking of the sinusoidal components of a received signal x(n)
and separates these narrowband signals s(n) from broadband noise v(n). This technique
has been shown effective in practical applications when there is insufficient a priori
knowledge of the signal and noise parameters.
As shown in Figure 8.7, we want the highly correlated components of x(n) to appear
in y(n). This is accomplished by adjusting the weights to minimize the expected mean-
square value of the error signal e(n). This causes the adaptive filter W(z) to form a
passband centered at the frequencies of the narrowband components, so that these
components are enhanced at the filter output y(n).
[Figure 8.7 Block diagram of the adaptive predictor: the delayed input x(n − Δ) drives the digital filter W(z); y(n) is the narrowband output and e(n) is the broadband output]
The widespread use of cellular phones has significantly increased the use of com-
munication systems in high-noise environments. Intense background noise, however,
often corrupts speech and degrades the performance of many communication
systems. Existing signal processing techniques such as speech coding, automatic speech
recognition, speaker identification, channel transmission, and echo cancellation are
developed under noise-free assumptions. These techniques could be employed in noisy
environments if a front-end noise suppression algorithm sufficiently reduces additive
noise. Noise reduction is becoming increasingly important with the development and
application of hands-free and voice-activated cellular phones.
Single-channel noise reduction methods involve Wiener filtering, Kalman filtering,
and spectral subtraction. In the dual-channel systems, a second sensor provides a
reference noise to better characterize changing noise statistics, which is necessary for
dealing with non-stationary noise. The most widely used dual-channel adaptive noise
canceler (ANC) employs an adaptive filter with the LMS algorithm to cancel the noise
component in the primary signal picked up by the primary sensor.
As illustrated in Figure 8.8, the basic concept of adaptive noise cancellation is to
process signals from two sensors and to reduce the level of the undesired noise with
adaptive filtering techniques. The primary sensor is placed close to the signal source in
order to pick up the desired signal. However, the primary sensor output also contains
noise from the noise source. The reference sensor is placed close to the noise source
to sense only the noise. This structure takes advantage of the correlation between the
noise signals picked up by the primary sensor and those picked up by the reference
sensor.
A block diagram of the adaptive noise cancellation system is illustrated in Figure 8.9,
where P(z) represents the transfer function between the noise source and the primary
sensor. The canceler has two inputs: the primary input d(n) and the reference input x(n).
The primary input d(n) consists of signal s(n) plus noise x′(n), i.e., d(n) = s(n) + x′(n),
which is highly correlated with x(n) since they are derived from the same noise source.
The reference input simply consists of noise x(n). The objective of the adaptive filter is to
use the reference input x(n) to estimate the noise x′(n).
[Figure 8.8 Basic concept of adaptive noise cancellation using a primary sensor placed near the signal source and a reference sensor placed near the noise source]

[Figure 8.9 Block diagram of the adaptive noise cancellation system, where P(z) is the transfer function between the noise source and the primary sensor]
The filter output y(n), which is an estimate of the noise x′(n), is then subtracted from the
primary channel signal d(n), producing e(n) as the desired signal plus reduced noise.
To minimize the residual error e(n), the adaptive filter W(z) will generate an output
y(n) that is an approximation of x0
n. Therefore the adaptive filter W(z) will converge
to the unknown plant P(z). This is the adaptive system identification scheme discussed
in Section 8.5.1. To apply the ANC effectively, the reference noise picked up by the
reference sensor must be highly correlated with the noise components in the primary
signal. This condition requires a close spacing between the primary and reference
sensors. Unfortunately, it is also critical to avoid the signal components from the signal
source being picked up by the reference sensor. This `crosstalk' effect will degrade the
performance of ANC because the presence of the signal components in the reference signal
will cause the ANC to cancel the desired signal along with the undesired noise. The
performance degradation of ANC with crosstalk includes less noise reduction, slower
convergence, and reverberant distortion in the desired signal.
Crosstalk problems may be eliminated by placing the primary sensor far away from
the reference sensor. Unfortunately, this arrangement requires a large-order filter in
order to obtain adequate noise reduction. For example, a separation of a few meters
between the two sensors requires a filter with 1500 taps to achieve 20 dB noise reduction.
The long filter increases excess mean-square error and decreases the tracking ability of
ANC because the step size must be reduced to ensure stability. Furthermore, it is not
always feasible to place the reference sensor far away from the signal source. The second
method for reducing crosstalk is to place an acoustic barrier (an oxygen mask in an
aircraft cockpit, for example) between the primary and reference sensors. However,
many applications do not allow an acoustic barrier between sensors, and a barrier may
reduce the correlation of the noise component in the primary and reference signals. The
third technique involves allowing the adaptive algorithm to update filter coefficients
during silent intervals in the speech. Unfortunately, this method depends on a reliable
speech detector that is very application dependent. This technique also fails to track the
environment changes during the speech periods.
[Figure 8.10 Single-frequency adaptive notch filter with two adaptive weights w0(n) and w1(n) operating on the quadrature reference signals x0(n) and x1(n)]
Digital Hilbert transform filters can be employed for this purpose. Instead of using
cosine generator and a phase shifter, the recursive quadrature oscillator given in Figure
6.23 can be used to generate both sine and cosine signals simultaneously. For a reference
sinusoidal signal, two filter coefficients are needed.
The LMS algorithm employed in Figure 8.10 is summarized as follows:

y(n) = w0(n) x0(n) + w1(n) x1(n),
e(n) = d(n) − y(n),
w0(n + 1) = w0(n) + μ e(n) x0(n),
w1(n + 1) = w1(n) + μ e(n) x1(n).
Note that the two-weight adaptive filter W(z) shown in Figure 8.10 can be replaced with
a general L-weight adaptive FIR filter for a multiple sinusoid reference input x(n). The
reference input supplies a correlated version of the sinusoidal interference that is used to
estimate the composite sinusoidal interfering signal contained in the primary input d(n).
The single-frequency adaptive notch filter has the property of a tunable notch filter.
The center frequency of the notch filter depends on the sinusoidal reference signal,
whose frequency is equal to the frequency of the primary sinusoidal noise. Therefore the
noise at that frequency is attenuated. This adaptive notch filter provides a simple
method for tracking and eliminating sinusoidal interference.
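A per-sample sketch of the two-weight filter in Figure 8.10 is given below, assuming the quadrature references x0(n) = A cos(ω0 n) and x1(n) = A sin(ω0 n) are generated in software (the names and the use of the standard math library are assumptions, not the book's implementation):

/* notch2_sketch.c - single-frequency adaptive notch filter (sketch) */
#include <math.h>

float notch2_sample(float w[2], float d, float A, float w0, long n, float mu)
{
    float x0 = A*(float)cos(w0*(double)n);  /* in-phase reference              */
    float x1 = A*(float)sin(w0*(double)n);  /* 90-degree shifted reference     */
    float y  = w[0]*x0 + w[1]*x1;           /* estimate of the interference    */
    float e  = d - y;                       /* notch filter output             */

    w[0] += mu*e*x0;                        /* two-weight LMS update           */
    w[1] += mu*e*x1;
    return e;
}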
Example 8.8: For a stationary input and sufficiently small μ, the convergence
speed of the LMS algorithm is dependent on the eigenvalue spread of the input
autocorrelation matrix. For L = 2 and the reference input given in (8.5.6),
the autocorrelation matrix can be expressed as
R = E{ [ x0(n)x0(n)   x0(n)x1(n) ;  x1(n)x0(n)   x1(n)x1(n) ] }

  = E{ [ A²cos²(ω0n)          A²cos(ω0n) sin(ω0n) ;  A²sin(ω0n) cos(ω0n)   A²sin²(ω0n) ] }

  = [ A²/2   0 ;  0   A²/2 ].
This equation shows that because of the 90° phase shift, x0(n) is orthogonal to
x1(n) and the off-diagonal terms in the R matrix are 0. The eigenvalues λ1 and λ2
of the R matrix are identical and equal to A²/2. Therefore the system has very fast
convergence since the eigenvalue spread equals 1. The time constant of the
adaptation is approximated as

τ_mse ≈ 1/(μλ) = 2/(μA²),

which is determined by the power of the reference sinewave and the step size μ.
W(z) = 1/C(z),                                                     (8.5.11)
[Figure 8.11 Block diagram of an adaptive channel equalizer: the transmitted signal passes through the channel C(z), the adaptive equalizer W(z) produces x̂(n), and the error e(n) between x̂(n) and the delayed transmitted signal d(n) = x(n − Δ) drives the LMS algorithm]
A fixed equalizer designed on the basis of the average channel characteristics may not adequately reduce
intersymbol interference. Thus we need an adaptive equalizer that provides precise
compensation over the time-varying channel. As illustrated in Figure 8.11, the adaptive
channel equalizer is an adaptive filter with coefficients that are adjusted using the LMS
algorithm.
As shown in Figure 8.11, an adaptive filter requires the desired signal d(n) for
computing the error signal e(n) for the LMS algorithm. In theory, the delayed version
of the transmitted signal, x(n − Δ), is the desired response for the adaptive equalizer
W(z). However, with the adaptive filter located in the receiver, the desired signal
generated by the transmitter is not available at the receiver. The desired signal may be
generated locally in the receiver using two methods. A decision-directed algorithm, in
which the equalized signal x̂(n) is sliced to form the desired signal, is the simplest and
can be used for channels that have only a moderate amount of distortion. However, if
the error rate of the data derived by slicing is too high, the convergence may be seriously
impaired. In this case, the equalizer is trained with a sequence that is agreed on
beforehand by the transmitter and the receiver.
During the training stage, the adaptive equalizer coefficients are adjusted by trans-
mitting a short training sequence. This known transmitted sequence is also generated in
the receiver and is used as the desired signal d(n) for the LMS algorithm. A widely used
training signal consists of pseudo-random noise (will be introduced in Section 9.2) with
a broad and flat power spectrum. After the short training period, the transmitter begins
to transmit the data sequence. In order to track the possible slow time variations in the
channel, the equalizer coefficients must continue to be adjusted while receiving data. In
this data mode, the output of the equalizer, x̂(n), is used by a decision device (slicer) to
produce binary data. Assuming the output of the decision device is correct, the binary
sequence can be used as the desired signal d(n) to generate the error signal for the LMS
algorithm.
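For a one-dimensional (binary ±1) baseband system, the decision-directed mode described above might be sketched as follows; the slicer output is used as d(n) once training is finished (all names are assumptions, not the book's code):

/* dd_equalizer_sketch.c - decision-directed LMS equalizer (sketch) */
#define L 32                               /* equalizer length (assumed)      */

float dd_equalize(float w[], float xbuf[], float x_in, float mu)
{
    float xhat = 0.0f, d, e;
    int l;

    for (l = L-1; l > 0; l--)              /* shift received samples          */
        xbuf[l] = xbuf[l-1];
    xbuf[0] = x_in;

    for (l = 0; l < L; l++)
        xhat += w[l]*xbuf[l];              /* equalized output x^(n)          */

    d = (xhat >= 0.0f) ? 1.0f : -1.0f;     /* slicer decision used as d(n)    */
    e = d - xhat;                          /* error for the LMS algorithm     */
    for (l = 0; l < L; l++)
        w[l] += mu*e*xbuf[l];
    return xhat;
}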
An equalizer for a one-dimensional baseband system has real input signals and filter
coefficients. However, for a two-dimensional quadrature amplitude modulation (QAM)
system, both signals and coefficients are complex. All operations must use complex
arithmetic, and the complex LMS algorithm is expressed as

wR(n + 1) = wR(n) + μ [eR(n) xR(n) + eI(n) xI(n)]                  (8.5.13)

and

wI(n + 1) = wI(n) + μ [eI(n) xR(n) − eR(n) xI(n)].                 (8.5.14)

In (8.5.13) and (8.5.14), the subscripts R and I represent the real and imaginary parts of
the complex quantities. The complex output x̂(n) is given by the complex FIR filter

x̂(n) = Σ_{l=0}^{L−1} w_l(n) x(n − l),

where both the coefficients w_l(n) and the data x(n) are complex-valued.
Since all multiplications are complex, the equalizer usually requires four times as many
multiplications.
The adaptive algorithms introduced in previous sections assume the use of in-
finite precision for the signal samples and filter coefficients. In practice, an adap-
tive algorithm is implemented on finite-precision hardware. It is important to
understand the finite-wordlength effects of adaptive algorithms in meeting design
specifications.
In the digital implementation of adaptive algorithms, both the signals and the internal
algorithmic quantities are carried to a certain limited precision. Therefore an adaptive
filter implementation with limited hardware precision requires special attention because
of the potential accumulation of quantization and arithmetic errors to unacceptable
levels as well as the possibility of overflow. This section analyzes finite-precision effects
in adaptive filters using fixed-point arithmetic and presents methods for confining these
effects to acceptable levels.
We assume that the input data samples are properly scaled so that their values lie
between 1 and 1. Each data sample and filter coefficient is represented by B bits (M
magnitude bits and one sign bit). For the addition of digital variables, the sum may
become larger than 1. This is known as overflow. As introduced in Section 3.6, the
techniques used to inhibit the probability of overflow are scaling, saturation arithmetic,
and guard bits. For adaptive filters, the feedback path makes scaling far more compli-
cated. The dynamic range of the filter output is determined by the time-varying filter
coefficients, which are unknown at the design stage.
For the adaptive FIR filter with the LMS algorithm, the scaling of the filter output
and coefficients can be achieved by scaling the `desired' signal, d(n). The scale factor α,
where 0 < α ≤ 1, is implemented by right-shifting the bits of the desired signal to
prevent overflow of the filter coefficients during the weight update. Reducing the
magnitude of d(n) reduces the gain demand on the filter, thereby reducing the magni-
tude of the weight values. Usually, the required value of α is not expected to be very
small. Since α only scales the desired signal, it does not affect the rate of convergence,
which depends on the reference signal x(n). An alternative method for preventing
overflow is to use the leaky LMS algorithm described in Section 8.4.2.
With rounding operations, the finite-precision LMS algorithm can be described as
follows:

y(n) = R[ Σ_{l=0}^{L−1} w_l(n) x(n − l) ],                         (8.6.1)

e(n) = d(n) − y(n),                                                (8.6.2)

w_l(n + 1) = w_l(n) + R[μ x(n − l) e(n)],                          (8.6.3)

where R[·] denotes the rounding operation.
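The rounding operator R[·] in (8.6.1)–(8.6.3) can be made concrete with a simple Q15 fixed-point sketch such as the one below (an illustration with assumed names; it omits the saturation logic and C55x intrinsics used in a real implementation):

/* q15_lms_sketch.c - one finite-precision LMS iteration in Q15 (sketch) */
typedef short q15;

static q15 rnd_q30_to_q15(long acc)        /* R[.]: round a Q30 value to Q15  */
{
    return (q15)((acc + (1L << 14)) >> 15);
}

q15 lms_q15(q15 w[], q15 x[], q15 d, q15 mu, int L)
{
    long acc = 0;
    q15 y, e;
    int l;

    for (l = 0; l < L; l++)
        acc += (long)w[l]*x[l];            /* accumulate products in double precision;
                                              proper scaling of d(n) is assumed       */
    y = rnd_q30_to_q15(acc);               /* y(n) = R[sum w_l(n)x(n-l)], (8.6.1)     */
    e = (q15)(d - y);                      /* e(n) = d(n) - y(n), (8.6.2)             */

    for (l = 0; l < L; l++) {
        long corr = (long)mu*e;            /* mu*e(n) in Q30                          */
        corr = (corr >> 15)*(long)x[l];    /* mu*e(n)*x(n-l), kept in Q30             */
        w[l] += rnd_q30_to_q15(corr);      /* w_l(n+1) = w_l(n)+R[mu x(n-l)e(n)], (8.6.3) */
    }
    return e;
}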
By using the assumptions that quantization and roundoff errors are zero-mean white
noise independent of the signals and each other, that the same wordlength is used for
both signal samples and coefficients, and that m is sufficiently small, the total output
MSE is expressed as
ξ_total ≈ ξ_min + (μ/2) L σ_x² ξ_min + L σ_e²/(2α²μ) + (‖w°‖²/α² + k) σ_e²,    (8.6.4)
where ξ_min is the minimum MSE defined in (8.2.13), σ_x² is the variance of the input
signal x(n), σ_e² is as defined in (3.5.6), k is the number of rounding operations in
(8.6.1) (k = 1 if a double-precision accumulator is used), and w° is the optimum weight
vector.
The second term in (8.6.4) represents excess MSE due to algorithmic weight fluctu-
ation and is proportional to the step size μ. For fixed-point arithmetic, the finite-
precision error given in (8.6.4) is dominated by the third term, which reflects the error
in the quantized weight vector, and is inversely proportional to the step size μ. The last
term in (8.6.4) arises because of two quantization errors – the error in the quantized
input vector and the error in the quantized filter output y(n).
Whereas the excess MSE given in the second term of (8.6.4) is proportional to μ, the
power of the roundoff noise in the third term is inversely proportional to μ. Although a
small value of μ reduces the excess MSE, it may result in a large quantization error.
There will be an optimum step size that achieves a compromise between these competing
goals. The total error for a fixed-point implementation of the LMS algorithm is min-
imized using the optimum step size μ° expressed as
μ° = (2^{−M}/2α) √( 1/(3 ξ_min σ_x²) ).                            (8.6.5)
In order to stabilize the digital implementation of the LMS algorithm, we may use the
leaky LMS algorithm to reduce numeric errors accumulated in the filter coefficients. As
discussed in Section 8.4.2, the leaky LMS algorithm prevents overflow in a finite-
precision implementation by providing a compromise between minimizing the MSE
and constraining the energy of the adaptive filter impulse response.
There is still another factor to consider in the selection of step size μ. As mentioned in
Section 8.2, the adaptive algorithm is aimed at minimizing the error signal, e(n). As the
weight vector converges, the error term decreases. At some point, the update term will
be rounded to 0. Since μ x(n − l) e(n) is a gradually decreasing random quantity, fewer
and fewer values will exceed the rounding threshold level, and eventually the weight will
stop changing almost completely. The step size value μ° given in (8.6.5) is shown to be
too small to allow the adaptive algorithm to converge completely. Thus the `optimal'
value in (8.6.5) may not be the best choice from this standpoint.
From (8.6.1)–(8.6.3), the digital filter coefficients, as well as all internal signals,
are quantized to within the least significant bit, LSB = 2^{−B}. From (8.6.3), the LMS
algorithm modifies the current parameter settings by adding a correction term,
R[μ x(n − l) e(n)]. Adaptation stops when the correction term is smaller in magnitude
than the LSB. At this point, the adaptation of the filter virtually stops. Roundoff
precludes the tap weights reaching the optimum (infinite-precision) value. This phenom-
enon is known as `stalling' or `lockup'.
The condition for the lth component of the weight vector w_l(n) not to be updated is
that the corresponding correction term for w_l(n) in the update equation be smaller
in magnitude than the least significant bit of the weight:

|μ x(n − l) e(n)| < 2^{−M}.                                        (8.6.6)
Suppose that this equation is first satisfied for l 0 at time n. As the particular
input sample x(n) propagates down the tapped-delay-line, the error will further decrease
in magnitude, and thus this sample will turn off all weight adaptation beyond this
point.
To get an approximate condition for the overall algorithm to stop adapting, we can
replace |x(n − l)| and |e(n)| with their standard deviation values, σ_x and σ_e, respectively.
The condition for the adaptation to stop becomes
μ σ_x σ_e < 2^{−M}.                                               (8.6.7)
To prevent stalling, a step size

μ > μ_min = 2^{−M}/(σ_x σ_e)                                       (8.6.8)
is selected. In this case, the excess MSE due to misadjustment is larger than the finite-
precision error.
In conclusion, the most important design issue is to find the best value of μ that
satisfies

μ_min < μ < 1/(L σ_x²).                                            (8.6.10)
To prevent algorithm stalling due to finite-precision effects, the design must allow the
residual error to reach small non-zero values. This can be achieved by using a suffi-
ciently large number of bits, and/or using a large step size μ, while still guaranteeing
convergence of the algorithm. However, this will increase excess MSE as shown in
(8.6.4).
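As a quick design aid, the bounds in (8.6.8) and (8.6.10) can be evaluated numerically; the fragment below only illustrates that arithmetic, and all of its inputs are assumed design parameters:

/* stepsize_bounds.c - evaluate mu_min and the stability bound (sketch) */
#include <math.h>
#include <stdio.h>

void step_size_bounds(int M, double sigma_x, double sigma_e, int L)
{
    double mu_min = pow(2.0, -M)/(sigma_x*sigma_e);   /* (8.6.8): avoid stalling   */
    double mu_max = 1.0/(L*sigma_x*sigma_x);          /* (8.6.10): stability bound */

    printf("choose mu in (%g, %g)\n", mu_min, mu_max);
}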
8.7 Experiments Using the TMS320C55x
The block diagram of adaptive system identification is shown in Figure 8.6. The input
sample x(n) is fed to both the unknown system and the adaptive filter. The output of the
unknown system is used by the adaptive filter as the desired signal d(n). The adaptive
algorithm minimizes the differences between the outputs of the unknown system and
adaptive filter. The filter coefficients are continuously adjusted until the error signal has
been minimized. When the adaptive filter has converged, the coefficients of the filter
describe the characteristics of the unknown system.
As shown in Figure 8.6, the system identification consists of three basic elements ± a
signal generator, an adaptive filter, and an unknown system that needs to be modeled.
The input signal x(n) should have a broad spectrum to excite all the poles and zeros of
the unknown system. Both the white noise and the chirp signal are widely used for
system identification. The signal generation algorithms will be introduced in Sections
9.1 and 9.2.
For the adaptive system identification experiment, we use the LMS algorithm
in conjunction with an FIR filter as shown in Figure 8.6. In practical applications, the
unknown system is a physical plant with both the input and output connected to
the adaptive filter. However, for experimental purposes and to better understand the
properties of adaptive algorithms, we simulate the unknown system in the same pro-
gram. The adaptive system identification operations can be expressed as:
1. Place the current input sample x(n) generated by the signal generator into x[0] of
the signal buffer.
2. Compute the adaptive filter output

y(n) = Σ_{l=0}^{L−1} w_l(n) x(n − l).                              (8.7.1)
The unknown system for this example is an FIR filter with the filter coefficients given by
coef[]. The input is zero-mean random noise. The unknown system output d is
used as the desired signal for the adaptive filter, and the adaptive filter coefficients
w[i], i = 0, 1, ..., N0−1, will closely match the unknown system response after the
adaptive filter reaches its steady-state response.
Experiment 8A consists of the following modules – an adaptive FIR filter using the
LMS algorithm implemented using the C55x assembly language, a random noise gen-
erator, an initialization function, and a C program for testing the adaptive system
identification experiment. These programs are listed in Table 8.1 to Table 8.4.
The assembly routine listed in Table 8.1 implements the adaptive FIR filter using the
LMS algorithm. The input signal is pointed to by the auxiliary register AR0, and the
desired signal is pointed to by AR1. The auxiliary registers AR3 and AR4 are used as
circular pointers for the signal buffer and coefficient buffer, respectively. The outer
Table 8.1 Implementation of adaptive filter using the C55x assembly code
block-repeat loop controls the process of signal samples in blocks, while the two inner
repeat loops perform the adaptive filtering sample-by-sample. The repeat instruction
rpt CSR
macm *AR3, *AR4, AC0 ; y += w[i]*x[i]
performs the FIR filtering, and the inner block-repeat loop, lms_loop, updates the
adaptive filter coefficients.
The zero-mean random noise generator given in Table 8.2 is used to generate testing
data for both the unknown system and the adaptive filter. The function rand() generates
a pseudo-random integer between 0 and RAND_MAX (32 767 in many C implementations).
We subtract 0x4000 (16 384) from it to obtain a zero-mean pseudo-random number
between −16 384 and 16 383.
The signal buffers and the adaptive filter coefficient buffer are initialized to 0 by the
function init.c listed in Table 8.3. For assembly language implementation, we apply
the block processing structure as we did for the FIR filter experiments in Chapter 5. To
use the circular buffer scheme, we pass the signal buffer index as an argument to the
adaptive filter subroutine. After a block of samples are processed, the subroutine
returns the index for the adaptive filter to use in the next iteration.
The adaptive system identification is tested by the C function exp8a.c given by
Table 8.4. The signal and coefficient buffers are initialized to 0 first. The random signal
generator is then used to generate Ns samples of white noise. The FIR filter used to
/*
    random.c - Zero-mean random noise generator
*/
#include <stdlib.h>

void random(int *x, unsigned int N)
{
    unsigned int t;

    for(t = N; t > 0; t--)
        *x++ = rand() - 0x4000;      /* Zero-mean */
}
/*
    init.c - Initialize an array to zero
*/
void init(int *ptr, unsigned int N)
{
    unsigned int i;

    for(i = N; i > 0; i--)
        *ptr++ = 0;
}
/*
exp8a.c C program for Experiment 8A
Adaptive system identification using the LMS algorithm
*/
#include "LP_coef.dat"
#define N0 48 /* Adaptive filter order */
#define N1 48 /* Unknown system order */
#define Ns 128 /* Number of input signal */
extern unsigned int fir_filt(int *, unsigned int, int *,
unsigned int, int *, int *, unsigned int);
extern unsigned int adaptive(int *, int *, int *, int *,
unsigned int,unsigned int, unsigned int);
extern void init(int *, unsigned int);
extern void random(int *, unsigned int);
simulate the unknown system is implemented in Experiment 5A. The adaptive filter uses
the unknown FIR filter output d(n) as the desired signal to produce the error signal that
is used for the LMS algorithm. After several iterations, the adaptive filter converges and
its coefficient vector w[] contains N0 coefficients that can be used to describe the
unknown system in the form of an FIR filter. The results of the system identification
are plotted in Figure 8.12. The impulse responses (left) and the frequency responses
(right) of the unknown system (top) and the adaptive model (bottom) are almost
identical.
1. This experiment uses the following files: exp8.cmd, exp8a.c, init.c, random.c,
adaptive.asm, fir_flt.asm, LP_coef.dat, and randdata.dat,
where the assembly routine fir_flt.asm and its coefficients LP_coef.dat are
identical to those used for the experiments in Chapter 5.
2. Create the project exp8a, add the files exp8.cmd, exp8a.c, init.c, random.c,
adaptive.asm, and fir_flt.asm into the project. Build, debug, and run the
experiment using the CCS.
3. Configure the CCS, and set the animation option for viewing the coefficient buffer
w[] of the adaptive filter, and LP_coef[] of the unknown system in both the
time domain and frequency domain.
4. Verify the adaptation process by viewing how the adaptive filter coefficients are
adjusted. Record the steady-state values of w[], and plot the magnitude responses of
the adaptive filter and the unknown system. Save the adaptive filter coefficients,
and compare them with the unknown system coefficients given in the file
LP_coef.dat.
5. Adjust the step size μ, and repeat the adaptive system identification process. Observe
the change of the system performance.
6. Increase the number of the adaptive filter coefficients to N0 = 64, and observe the
system performance.
7. Reduce the number of the adaptive filter coefficients to N0 = 32, and observe the
system performance.
As shown in Figure 8.7, an adaptive predictor receives the primary signal that consists
of the broadband components v(n) and the narrowband components s(n). An adaptive
system can separate the narrowband signal from the broadband signal. The output of
the adaptive filter is the narrowband signal y(n) ≈ s(n). For applications such as spread
spectrum communications, the narrowband interference can be tracked and removed by
the adaptive filter. The error signal, e(n) ≈ v(n), contains the desired broadband signal.
We use a fixed delay Δ between the primary input signal and the reference input as
shown in Figure 8.7. If we choose a long enough delay, we can de-correlate the broadband
components at the reference input from those at the primary signal. The adaptive filter
output y(n) will be the narrowband signal because its periodic nature still keeps them
correlated. If the narrowband components are desired, the filter output y(n) is used as the
system output. On the other hand, if the broadband signal is corrupted by a narrowband
noise, the adaptive filter will reduce the narrowband interference by subtracting the
estimated narrowband components from the primary signal. Thus the error output e(n)
is used as the system output that consists of broadband signal.
In the experiment, we use the white noise as the broadband signal. Since the white
noise is uncorrelated, the delay Δ = 1 is chosen. The adaptive predictor operation is
implemented as follows:
y(n) = Σ_{l=0}^{L−1} w_l(n) x(n − l − 1).                          (8.7.5)
4. Update the signal buffer of the adaptive filter and place the new sample into the buffer.
The adaptive predictor written in floating-point C is given in Table 8.5. The fixed-
point implementation using the intrinsics can be implemented and compared against the
floating-point implementation. Finally, the assembly routine can be written to maximize
the run-time efficiency and minimize the program memory space usage. The adaptive
predictor using the leaky LMS algorithm written in the C55x assembly language is listed
in Table 8.6.
In practice, it is preferred to initialize the adaptive filter coefficients to a known state.
The initialization can be done in two ways. If we know statistical characteristics of the
system, we can preset several adaptive filter coefficients to some predetermined value.
Using the preset values, the adaptation process usually converges to the steady state at a
faster rate. However, if we do not have any prior knowledge of the system, a common
practice is to start the adaptive process by initializing the coefficients to 0. The function
init.c listed in Table 8.3 is used to set both the coefficient and signal buffers to 0 at
the beginning of the adaptive process.
/*
    alp.c - Adaptive linear predictor
*/
#define twomu (96.0/32768.0)            /* Step size mu */

void alp(float *in, float *y, float *e, float *x, float *w,
         unsigned int Ns, unsigned int N)
{
    unsigned int n;
    int i;
    float temp;
    float uen;

    for(n = 0; n < Ns; n++)
    {
        temp = 0.0;
        for(i = N-1; i >= 0; i--)       /* FIR filtering */
            temp += (w[i]*x[i]);
        y[n] = temp;
        e[n] = in[n] - y[n];            /* Calculate error */
        uen = twomu*e[n];               /* uen = mu*e(n) */
        for(i = N-1; i >= 0; i--)       /* Update coefficients */
            w[i] += uen*x[i];
        for(i = N-1; i > 0; i--)        /* Update signal buffer */
            x[i] = x[i-1];
        x[0] = in[n];
    }
}
The experiment results are shown in Figure 8.13. The input signal x(n) shown in the
top window contains both the broadband random noise and the narrowband sinusoid
signal. The adaptive filter output y(n) consisting of the narrowband sinusoid signal is
shown in the middle window. The adaptive linear predictor output e(n) shown in the
bottom window contains the broadband noise.
The signal generator signal.c listed in Table 8.7 is used for the experiment to
produce a sinusoidal signal embedded in random noise.
The C program exp8b.c is listed in Table 8.8. The block size is chosen as 256. The
adaptive FIR filter order is 48. The initialization is performed once at the beginning of
the experiment. The adaptation step size is set to μ = 96/32 768. The system uses the
leaky LMS algorithm with the leaky factor set to 32 704/32 768 ≈ 0.998.
3. Configure the CCS, and set the animation option for viewing the output of the
adaptive filter y[], the output of the system e[], the input signal in[], and the
adaptive filter coefficients w[] on a block-by-block basis.
Table 8.7 List of C program for generating sinewave embedded in random noise
/*
    signal.c - Sinewave plus zero-mean random noise
*/
#include <math.h>
#include <stdlib.h>
#include <intrindefs.h>

#define PI 3.1415926
#define K  (Ns - 6)
#define a1 0x4000
#define a2 0x4000

static unsigned int i = 0;

void cos_rand(int *x, unsigned int Ns)
{
    unsigned int t;
    float two_pi_K_Ns;
    int temp;
    long ltemp;

    two_pi_K_Ns = 2.0*PI*K/Ns;
    for(t = Ns; t > 0; t--)
    {
        temp = (int)(0x7fff*cos(two_pi_K_Ns*i));
        ltemp = _lsmpy(a1, temp);
        temp = rand() - 0x4000;
        *x++ = _smac(ltemp, a2, temp) >> 16;
        i++;
        i %= (Ns - 1);
    }
}
/*
exp8b.c Experiment 8B, Adaptive linear predictor
*/
#define N 48 /* Adaptive FIR filter order */
#define Ns 256 /* Number of input signal per block */
#pragma DATA_SECTION(e, "lms_err");
#pragma DATA_SECTION(y, "lms_out");
#pragma DATA_SECTION(x, "lms_in");
#pragma DATA_SECTION(in, "lms_data");
#pragma DATA_SECTION(w, "lms_coef");
#pragma DATA_SECTION(index, "lms_data");
#pragma CODE_SECTION(main, "lms_code");
int e[Ns],      /* Error signal buffer */
    y[Ns],      /* Output signal buffer */
    in[Ns],     /* Input signal buffer */
    w[N],       /* Filter coefficient buffer */
    x[N],       /* Filter signal buffer */
    index;
extern void init(int *, unsigned int);
extern unsigned int alp(int *, int *, int *, int *, int *,
                        unsigned int, unsigned int, unsigned int);
extern void cos_rand(int *, unsigned int);

void main(void)
{
    init(x, N);                  /* Initialize x[] to zero */
    init(w, N);                  /* Initialize w[] to zero */
    index = 0;
    for(;;)
    {
        cos_rand(in, Ns);        /* Generate testing signal */
        index = alp(in, y, e, x, w, Ns, N, index);  /* Adaptive predictor */
    }
}
4. Verify the adaptive linear predictor and compare the results with Figure 8.13.
5. Verify the adaptation process by viewing how the adaptive coefficients w[] are
adjusted. Record the steady-state values of w[], and plot the magnitude response
of the adaptive filter.
6. Change the order of the adaptive filter and observe the system performance.
8. Change the leaky factor value and observe the system performance.
9. Can we obtain a similar result without using the leaky LMS (by setting leaky factor
to 0x7fff)? Find the steady-state adaptive filter coefficients w[] by running the
adaptive predictor for a period of time, and compare the magnitude response with
the one obtained in step 5.
References
[1] S. T. Alexander, Adaptive Signal Processing, New York: Springer-Verlag, 1986.
[2] M. Bellanger, Adaptive Digital Filters and Signal Analysis, New York: Marcel Dekker, 1987.
[3] P. M. Clarkson, Optimal and Adaptive Signal Processing, Boca Raton, FL: CRC Press, 1993.
[4] C. F. N. Cowan and P. M. Grant, Adaptive Filters, Englewood Cliffs, NJ: Prentice-Hall, 1985.
[5] J. R. Glover, Jr., `Adaptive noise canceling applied to sinusoidal interferences,' IEEE Trans.
Acoust., Speech, Signal Processing, vol. ASSP-25, Dec. 1977, pp. 484–491.
[6] S. Haykin, Adaptive Filter Theory, 2nd Ed., Englewood Cliffs, NJ: Prentice-Hall, 1991.
[7] S. M. Kuo and C. Chen, `Implementation of adaptive filters with the TMS320C25 or
the TMS320C30,' in Digital Signal Processing Applications with the TMS320 Family, vol. 3,
P. Papamichalis, Ed., Englewood Cliffs, NJ: Prentice-Hall, 1990, pp. 191–271, Chap. 7.
[8] S. M. Kuo and D. R. Morgan, Active Noise Control Systems – Algorithms and DSP Implementations,
New York: Wiley, 1996.
[9] L. Ljung, System Identification: Theory for the User, Englewood Cliffs, NJ: Prentice-Hall, 1987.
[10] J. Makhoul, `Linear prediction: A tutorial review,' Proc. IEEE, vol. 63, Apr. 1975, pp. 561–580.
[11] J. R. Treichler, C. R. Johnson, Jr., and M. G. Larimore, Theory and Design of Adaptive Filters,
New York: Wiley, 1987.
[12] B. Widrow, J. R. Glover, J. M. McCool, J. Kaunitz, C. S. Williams, R. H. Hern, J. R. Zeidler,
E. Dong, and R. C. Goodlin, `Adaptive noise canceling: principles and applications,' Proc. IEEE,
vol. 63, Dec. 1975, pp. 1692–1716.
[13] B. Widrow and S. D. Stearns, Adaptive Signal Processing, Englewood Cliffs, NJ: Prentice-Hall, 1985.
[14] M. L. Honig and D. G. Messerschmitt, Adaptive Filters: Structures, Algorithms, and Applications,
Boston, MA: Kluwer Academic Publishers, 1986.
Exercises
Part A
2. Find the crosscorrelation functions r_xy(k) and r_yx(k), where x(n) and y(n) are defined in
Problem 1.
3. Let x(n) and y(n) be two independent zero-mean WSS random signals. The random signal
w(n) is obtained by using

w(n) = a x(n) + b y(n),

where a and b are constants. Express r_ww(k), r_wx(k), and r_wy(k) in terms of r_xx(k) and r_yy(k).
4. An estimator x̂ of random process x is unbiased if

E[x̂] = x.

Show that the sample mean estimator given in (8.1.13) is unbiased, but the sample variance
estimator given in (8.1.14) is biased. That is, show that

E[ŝ_x²] = σ_x² (1 − 1/N) ≠ σ_x².
5. Show that the PDS P_xx(ω) of a WSS signal x(n) is a real-valued function of ω.
7. Find the power density spectrum P_xx(z) of a random signal with the following autocorrela-
tion function:

r_xx(k) = 0.8^{|k|},   −∞ < k < ∞.
Part B
10. Write a MATLAB script to generate the length-1024 signal defined as

x(n) = 0.8 sin(ω0 n) + v(n),

where ω0 = 0.1π and v(n) is a zero-mean random noise with variance σ_v² = 1 (see Section 3.3 for
details). Compute and plot r_xx(k), k = 0, 1, . . . , 127 using MATLAB.
11. Consider the Example 8(b). The digital filter is a second-order FIR filter using the LMS
algorithm. The AR parameters are a1 = −0.195, a2 = 0.95, and σ_v² = 0.0965. Simulate the
operation of the adaptive filter using either a MATLAB or C program. After the convergence
of the filter:
(a) Plot the learning curve E[e²(n)], which can be approximated by the smoothed e²(n) using
a first-order IIR filter.
(b) Repeat (a) using different values of step size μ. Discuss the convergence speed and the
excess MSE related to μ.
(c) Repeat problem (a) using the parameters a1 = −1.9114, a2 = 0.95, and σ_v² = 0.038.
Explain why the convergence is much slower than in problem (a) by analyzing the
eigenvalue spread given in (8.3.7).
(d) Plot the coefficient tracks w0(n) and w1(n), and show that the coefficients converge to the
optimum values derived in Example 8(b).
12. Implement the adaptive system identification technique illustrated in Figure 8.5 using
MATLAB or C program. The input signal is a zero-mean, unit-variance white noise. The
unknown system is defined by the room impulse response used in Chapter 4.
13. Implement the adaptive line enhancer illustrated in Figure 8.7 using a MATLAB or C
program. The desired signal is given by

x(n) = √2 sin(ωn) + v(n),

where the frequency ω = 0.2π and v(n) is zero-mean white noise with unit variance. The
decorrelation delay is Δ = 1. Plot both e(n) and y(n).
14. Implement the adaptive noise cancellation illustrated in Figure 8.8 using a MATLAB or C
program. The primary signal is given by

d(n) = sin(ωn) + 0.8 v(n) + 1.2 v(n − 1) + 0.25 v(n − 2),

where v(n) is defined in Problem 13. The reference signal is v(n). Plot e(n).
15. Implement the single-frequency adaptive notch filter illustrated in Figure 8.10 using
MATLAB or C program. The desired signal d(n) is given in Problem 14, and x(n) is given by
x(n) = √2 sin(ωn).

Plot e(n) and the magnitude response of the second-order FIR filter after convergence.
Part C
16. Replace the unknown system in the Experiment 8A with the IIR filter iirform2.asm from
Chapter 6. Adjust the adaptive filter order to find the FIR filter coefficients that best
identify the unknown IIR filter. Verify the system identification by comparing the
adaptive FIR filter magnitude response with the IIR filter response.
17. Given the corrupted primary input d(n) = 0.25 cos(2πnf1/fs) + 0.25 sin(2πnf2/fs) and the
reference signal x(n) = 0.125 cos(2πnf2/fs), where fs is the sampling frequency, and f1 and f2 are
the frequencies of the desired signal and the interference, respectively. Implement an adaptive
noise canceler that removes the interference signal.
18. Implement the adaptive linear predictor using the normalized LMS algorithm in real-time
using an EVM or DSK. Use signal generators to generate a sinusoid and white noise.
Connect both signals to the EVM input with a coupler. Run an adaptive linear predictor
and display both the input and the adaptive filter output on an oscilloscope.
9
Practical DSP Applications in
Communications
There are many DSP applications that are used in our daily lives, some of which have
been introduced in previous chapters. DSP algorithms, such as random number gen-
eration, tone generation and detection, echo cancellation, channel equalization, noise
reduction, speech and image coding, and many others can be found in a variety of
communication systems. In this chapter, we will introduce some selected DSP applica-
tions in communications that play an important role in the realization of these systems.
When designing algorithms for a sinewave (sine or cosine function) generation, several
characteristics should be considered. These issues include total harmonic distortion,
frequency and phase control, memory usage, execution time, and accuracy. The total
harmonic distortion (THD) determines the purity of a sinewave and is defined as

THD = spurious harmonic power / total signal power,                (9.1.1)

where the spurious harmonic power relates to the unwanted harmonic components of
the waveform. For example, a sinewave generator with a THD of 0.1 percent has a
distortion power level approximately 30 dB below the fundamental component. This is
the most important characteristic from the standpoint of performance. The other
characteristics are closely related to details of the implementation.
Polynomials can be used to express or approximate some trigonometric functions.
However, the sine or cosine function cannot be expressed as a finite number of additions
and multiplications. We must depend on approximation. Because polynomial approxi-
mations can be computed with multiplications and additions, they are ready to be
implemented on DSP devices. For example, the sine function can be approximated by
(3.8.1). The implementation of a sinewave generation using polynomial approximation
is given in Section 3.8.5. As discussed in Chapter 6, another approach of generating
sinusoidal signals is to design a filter H(z) whose impulse response h(n) is the desired
sinusoidal waveform. With an impulse function δ(n) used as input, the IIR filter will
generate the desired impulse response (sinewave) at the output. In this section, we will
discuss the lookup-table method for generating sinewaves.
The lookup-table method or wavetable generator is probably the most flexible and
conceptually simple method for generating sinusoidal waveforms. The technique simply
involves the readout of a series of stored data values representing discrete samples of the
waveform to be generated. The data values can be obtained either by sampling the
appropriate analog waveform, or more commonly, by computing the desired values
using MATLAB or C programs. Enough samples are generated and stored to accurately
represent one complete period of the waveform. The periodic signal is then generated by
repeatedly cycling through the data memory locations using a circular pointer. This
technique is also used for generating computer music.
A sinewave table contains equally spaced sample values over one period of the
waveform. An N-point sinewave table can be computed by evaluating the function
2pn
x
n sin , n 0, 1, . . . , N 1:
9:1:2
N
These sample values must be represented in binary form. The accuracy of the sine
function is determined by the wordlength used to represent data and the table length.
The desired sinewave is generated by reading the stored values in the table at a constant
(sampling) rate with step D, wrapping around at the end of the table whenever the pointer
exceeds N − 1. The frequency of the generated sinewave depends on the sampling
period T, table length N, and the sinewave table address increment D:

f = D/(NT) Hz.                                                     (9.1.3)
For the designed sinewave table of length N, a sinewave of frequency f with sampling
rate fs can be generated by using the pointer address increment
D = Nf/fs,   D ≤ N/2.                                              (9.1.4)

The table index for the lth sample of the generated sinewave is computed as

k = (m + lD) mod N,                                                (9.1.5)

where m determines the initial phase of the sinewave. It is important to note that the step D
given in (9.1.4) may be a non-integer; thus (m + lD) in (9.1.5) is a real number, that is, a
number consisting of an integer and a fractional part. When fractional values of D are
used, samples of points between table entries must be estimated using the table values.
The easy solution is to round this non-integer index to the nearest integer. However, the
better but more complex solution is to interpolate the two adjacent samples.
The lookup-table method is subject to the constraints imposed by aliasing, requiring
at least two samples per period in the generated waveform. Two sources of error in the
lookup-table algorithm cause harmonic distortion:

1. Amplitude-quantization errors are introduced because the stored table values are
represented with a finite wordlength.

2. Time-quantization errors are introduced when points between table entries are
sampled, which increase with the address increment D.
The longer the table is, the less significant the second error will be. To reduce the
memory requirement for generating a high accuracy sinewave, we can take advantage of
waveform symmetry, which in effect results in a duplication of stored values. For
example, the values are repeated (regardless of sign change) four times every period.
Thus only a quarter of the memory is needed to represent the waveform. However, the
cost is a greater complexity of algorithm to keep track of which quadrant of the
waveform is to be generated and with the correct sign. The best compromise will be
determined by the available memory and computation power for a given application on
the target DSP hardware.
To decrease the harmonic distortion for a given table size N, an interpolation scheme
can be used to more accurately compute the values between table entries. Linear
interpolation is the simplest method for implementation. For linear interpolation, the
sine value for a point between successive table entries is assumed to lie on the straight
line between the two values. Suppose the integer part of the pointer is i (0 ≤ i < N) and
the fractional part of the pointer is f (0 < f < 1); then the sine value is computed as

x = s(i) + f [s(i + 1) − s(i)],

where [s(i + 1) − s(i)] is the slope of the line segment between successive table entries i
and i + 1.
_cos_sin
    mov T0, AC0               ; T0 = a
    sfts AC0, #11             ; Size of lookup table
    mov #tab_0_PI, T0         ; Table base address
 || mov hi(AC0), AR2
    mov AR2, AR3
    abs AR2                   ; cos(-a) = cos(a)
    add #0x200, AR3           ; 90 degree offset for sine
    and #0x7ff, AR3           ; Modulo 0x800 for 11-bit
    sub #0x400, AR3           ; Offset 180 degree for sine
    abs AR3                   ; sin(-a) = -sin(a)
 || mov *AR2(T0), *AR0        ; *AR0 = cos(a)
    mov *AR3(T0), *AR1        ; *AR1 = sin(a)
    ret
    .end
In this example, we use a half table (0 to π). Obviously, a sine (or cosine) function
generator using the complete table (0 to 2π) can be easily implemented using only a few
lines of assembly code, while a function generator using a quarter table (0 to π/2) will
be more challenging to implement efficiently. The assembly program cos_sin.asm
used in this example is available in the software package.
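A C sketch of the wavetable generator of (9.1.2)–(9.1.5) with linear interpolation is given below (the table size, names, and the floating-point pointer are assumptions; the book's cos_sin.asm uses a half table without interpolation):

/* wavetable_sketch.c - table-lookup sinewave generator with interpolation */
#include <math.h>

#define PI 3.1415926f
#define N  256                           /* sinewave table length (assumed) */

static float table[N];
static float pointer = 0.0f;             /* m + l*D, kept modulo N          */

void table_init(void)
{
    int n;
    for (n = 0; n < N; n++)
        table[n] = (float)sin(2.0*PI*n/N);   /* x(n) = sin(2*pi*n/N), (9.1.2) */
}

float sine_next(float f, float fs)
{
    float D = N*f/fs;                    /* address increment, (9.1.4)       */
    int   i = (int)pointer;              /* integer part of the pointer      */
    float frac = pointer - (float)i;     /* fractional part                  */
    float s0 = table[i];
    float s1 = table[(i + 1) % N];
    float y  = s0 + frac*(s1 - s0);      /* linear interpolation             */

    pointer += D;                        /* advance and wrap around, (9.1.5) */
    while (pointer >= N) pointer -= N;
    return y;
}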
This expression shows that the instantaneous frequency goes from f(0) = fL at time
n = 0 to f(N − 1) = fU at time n = N − 1.
Because of the complexity of the linear chirp signal generator, it is more convenient
for real-time applications to generate such a sequence by a general-purpose computer
and store it in a lookup table. Then the lookup-table method introduced in Section 9.1.1
can be used to generate the desired signal.
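A table of linear chirp samples can be precomputed as sketched below; the quadratic phase expression is the standard linear-FM form and is an assumption here, since the book's expression is not reproduced above:

/* chirp_sketch.c - precompute one period of a linear chirp (sketch) */
#include <math.h>

#define PI 3.1415926f

void make_chirp(float buf[], int N, float fL, float fU, float fs)
{
    int n;
    for (n = 0; n < N; n++) {
        /* instantaneous frequency f(n) = fL + (fU - fL)*n/(N - 1) */
        float t     = (float)n/fs;
        float sweep = (fU - fL)*(float)n/(2.0f*(float)(N - 1));
        buf[n] = (float)cos(2.0f*PI*(fL + sweep)*t);
    }
}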
An interesting application of chirp signal generator is generating sirens. The elec-
tronic sirens are often created by a small generator system inside the vehicle compartment.
This generator drives either a 60 or 100 Watt loudspeaker system present in the light bar
mounted on the vehicle roof or alternatively inside the vehicle radiator grill. The actual
siren characteristics (bandwidth and duration) vary slightly between manufacturers. The
wail type of siren sweeps between 800 Hz and 1700 Hz with a sweep period of approxi-
mately 4.92 seconds. The yelp siren has similar characteristics to the wail but with a
period of 0.32 seconds.
where T is the sampling period and the two frequencies fL and fH uniquely define the
key that was pressed. Figure 9.1 shows the matrix of sinewave frequencies used to
encode the 16 DTMF symbols. The values of the eight frequencies have been chosen
carefully so that they do not interfere with speech.
The low-frequency group (697, 770, 852, and 941 Hz) selects the four row frequencies
of the 4 × 4 keypad, and the high-frequency group (1209, 1336, 1477, and 1633 Hz)
selects the column frequencies, as shown below:

             1209 Hz   1336 Hz   1477 Hz   1633 Hz
   697 Hz       1         2         3         A
   770 Hz       4         5         6         B
   852 Hz       7         8         9         C
   941 Hz       *         0         #         D

A pair of sinusoidal signals with fL from the low-
frequency group and fH from the high-frequency group will represent a particular
key. For example, the digit `3' is represented by two sinewaves at frequencies 697 Hz
and 1477 Hz. The row frequencies are in the low-frequency range below 1 kHz, and the
column frequencies are in the high-frequency range between 1 kHz and 2 kHz. The digits are
displayed as they appear on a telephone's 4 × 4 matrix keypad, where the fourth column
is omitted on standard telephone sets.
The generation of dual tones can be implemented by using two sinewave generators
connected in parallel. Each sinewave generator can be realized using the polynomial
approximation technique introduced in Section 3.8.5, the recursive oscillator introduced
in Section 6.6.4, or the lookup-table method discussed in Section 9.1.1. Usually, DTMF
signals are interfaces to the analog world via a CODEC (coder/decoder) chip with an
8 kHz sampling rate.
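A simple illustration of the parallel dual-tone generation is sketched below; the direct calls to sin() and the scaling are assumptions made for clarity, whereas a real-time implementation would use one of the generator methods listed above:

/* dtmf_sketch.c - generate one DTMF digit as the sum of two sinewaves */
#include <math.h>

#define PI 3.1415926
#define FS 8000.0                         /* CODEC sampling rate in Hz */

void dtmf_tone(short buf[], int Ns, double fL, double fH)
{
    int n;
    for (n = 0; n < Ns; n++) {
        double s = 0.5*sin(2.0*PI*fL*n/FS) + 0.5*sin(2.0*PI*fH*n/FS);
        buf[n] = (short)(16383.0*s);      /* scale to 16-bit range with headroom */
    }
}

For example, the digit `3' would use fL = 697 Hz and fH = 1477 Hz.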
The DTMF signal must meet timing requirements for duration and spacing of digit
tones. Digits are required to be transmitted at a rate of less than 10 per second. A
minimum spacing of 50 ms between tones is required, and the tones must be present for
a minimum of 40 ms. A tone-detection scheme used to implement a DTMF receiver
must have sufficient time resolution to verify correct digit timing. The issues of tone
detection will be discussed later in Section 9.3.
Random numbers are useful in simulating noise and are used in many practical applica-
tions. Because we are using digital hardware to generate numbers, we cannot produce
perfect random numbers. However, it is possible to generate a sequence of numbers that
are unrelated to each other. Such numbers are called pseudo-random numbers (PN
sequence).
Two basic techniques can be used for pseudo-random number generation. The
lookup-table method uses a set of stored random samples, and the other is based on
random number generation algorithms. Both techniques obtain a pseudo-random
sequence that repeats itself after a finite period, and therefore is not truly random at
all time. The number of stored samples determines the length of a sequence generated by
the lookup-table method. The random number generation algorithm by computation is
determined by the register size. In this section, two random number generation algo-
rithms will be introduced.
The linear congruential method is probably the most widely used random number
generator. It requires a single multiplication, addition, and modulo division. Thus
it is simple to implement on DSP chips. The linear congruential algorithm can be
expressed as

x(n) = [a·x(n − 1) + b] mod M,                                     (9.2.1)

where the modulo operation (mod) returns the remainder after division by M. The
constants a, b, and M are chosen to produce both a long period and good statistical
characteristics of the sequences. These constants can be chosen as

a = 4K + 1,                                                        (9.2.2)

M = 2^L                                                            (9.2.3)
is a power of 2, and b can be any odd number. Equations (9.2.2) and (9.2.3) guarantee
that the period of the sequence in (9.2.1) is of full-length M.
A good choice of parameters is M = 2^20 = 1 048 576, a = 4(511) + 1 = 2045, and
x(0) = 12 357. Since a random number routine usually produces samples between 0 and
1, we can normalize the nth random sample as
1, we can normalize the nth random sample as
x
n 1
r
n
9:2:4
M1
so that the random samples are greater than 0 and less than 1. Note that the random
numbers r(n) can be generated by performing Equations (9.2.1) and (9.2.4) in real time.
A C function (uran.c in the software package) that implements this random number
generator is listed in Table 9.1.
Example 9.2: Most of the fixed-point DSP processors are 16-bit. The following
TMS320C55x assembly code implements an M = 2^16 = 65 536 random number
generator.
/*************************************************************
* URAN This function generates pseudo-random numbers *
*************************************************************/
static long n = (long)12357;               // Seed x(0) = 12357
float uran()
{
    float ran;                             // Random noise r(n)
    n = (long)2045*n + 1L;                 // x(n) = 2045*x(n-1) + 1
    n = n - (n/1048576L)*1048576L;         // x(n) = x(n) - INT[x(n)/
                                           // 1048576]*1048576
    ran = (float)(n + 1L)/(float)1048577;  // r(n) = FLOAT[x(n)+1]/
                                           // 1048577
    return(ran);                           // Return r(n) to the main
}                                          // function
The assembly program rand16_gen.asm used for this example is available in the
software package.
A shift register with feedback from specific bits can also generate a repetitive pseudo-
random sequence. A schematic of the 16-bit generator is shown in Figure 9.2, where the
functional circle labeled `XOR' performs the exclusive-OR function of its two binary
inputs. The sequence itself is determined by the position of the feedback bits on the shift
register. In Figure 9.2, x1 is the output of b0 XOR with b2 , x2 is the output of b11 XOR
with b15 , and x is the output of x1 XOR with x2 .
An output from the sequence generator is the entire 16-bit word. After the random
number is generated, every bit in the register is shifted left 1 bit (b15 is lost), and then x is
shifted into b0 to generate the next random number.

[Figure 9.2 A 16-bit pseudo-random number generator using a shift register; x1 = b0 XOR b2, x2 = b11 XOR b15, and the feedback bit is x = x1 XOR x2]

A shift register length of 16 bits can readily be accommodated by a single word on many
16-bit DSP devices. Thus
memory usage is minimum. It is important to recognize, however, that sequential words
formed by this process will be correlated. The maximum sequence length before repeti-
tion is

L = 2^M − 1,                                                       (9.2.5)

where M is the length of the shift register.
Example 9.3: The PN sequence generator given in Table 9.2 uses many Boolean
operations. The C program requires at least 11 operations to complete the com-
putation. The following TMS320C55x assembly program computes the same PN
sequence in 11 cycles:
; pn_gen.asm 16-bit zero-mean PN sequence generator
;
; Prototype: int pn_gen(int *)
;
; Entry: arg0 = AR0, pointer to the shift register
; Return: T0 = random number
BIT15 .equ 0x8000 ; b15
BIT11 .equ 0x0800 ; b11
/*************************************************************
* PN Sequence generator *
*************************************************************/
static int shift_reg;

int pn_sequence(int *sreg)
{
    int b2, b11, b15;
    int x1, x2;                 /* x2 also used for x */

    b15 = *sreg >> 15;
    b11 = *sreg >> 11;
    x2 = b15^b11;               /* First XOR bit15 and bit11 */
    b2 = *sreg >> 2;
    x1 = *sreg^b2;              /* Second XOR bit2 and bit0 */
    x2 = x1^x2;                 /* Final XOR of x1 and x2 */
    x2 &= 1;
    *sreg = *sreg << 1;
    *sreg = *sreg | x2;         /* Update the shift register */
    x2 = *sreg - 0x4000;        /* Zero-mean random number */
    return x2;
}
where v(n) is an internally generated zero-mean pseudo-random noise and x(n) is the
input applied to the center clipper with the clipping threshold b.
[Figure 9.3 Comfort noise generation: an internally generated noise v(n) is scaled to match the estimated power of the background noise]
The power of the comfort noise should match the background noise when neither
talker is active. Therefore the algorithm shown in Figure 9.3 is the process of estimating
the power of the background noise in x(n) and generating the comfort noise of the same
power to replace signals suppressed by the center clipper.
1. Generate the random noise x(n). In the acoustic echo canceler (to be discussed in
Section 9.5), x(n) is converted to an analog signal, amplified, and then used to drive
a loudspeaker.
2. Obtain the desired signal d(n). In the acoustic echo canceler, d(n) is the digital signal
picked up by a microphone.
3. (a) Compute the adaptive filter output

    y(n) = Σ_{l=0}^{L−1} ĥ_l(n) x(n − l),   (9.2.7)

    where ĥ_l(n) is the lth coefficient of the adaptive filter Ĥ(z) at time n.

    (b) Compute the error signal e(n) = d(n) − y(n) and update the adaptive filter
    coefficients using the LMS algorithm.

Figure 9.4 Off-line modeling of the echo path: the random noise generator drives the unknown
system H(z) to produce d(n), the adaptive filter Ĥ(z) produces y(n), and the error
e(n) = d(n) − y(n) is used by the LMS algorithm to update Ĥ(z)

4. Go to step 1 for the next iteration until the adaptive filter Ĥ(z) converges to the
optimum solution. That is, the power of e(n) is minimized.

After convergence, the estimated coefficients are averaged over M iterations to obtain

ĥ_l = (1/M) Σ_{n=N}^{N+M−1} ĥ_l(n),   l = 0, 1, ..., L − 1.   (9.2.10)
9.3 DTMF Tone Detection

This section introduces detection methods for DTMF tones used in communication
networks. The correct detection of a digit requires both a valid tone pair and the correct
timing intervals. DTMF signaling is used both to set up a call and to control features
such as call forwarding and teleconferencing calling. In some applications, it is neces-
sary to detect DTMF signaling in the presence of speech, so it is important that the
speech waveform is not interpreted as valid signaling tones.
9.3.1 Specifications
The implementation of a DTMF receiver involves the detection of the signaling tones,
validation of a correct tone pair, and the timing to determine that a digit is present for
the correct amount of time and with the correct spacing between tones. In addition, it is
necessary to perform additional tests to improve the performance of the decoder in the
presence of speech. A DSP implementation is useful in applications in which the
digitized signal is available and several channels need to be processed such as in a
private branch exchange.
DTMF receivers are required to detect frequencies with a tolerance of ±1.5 percent as
valid tones. Tones that are offset by ±3.5 percent or more must not be detected. This
requirement is necessary to prevent the detector from falsely detecting speech and other
signals as valid DTMF digits. The receiver is required to work with a worst-case signal-
to-noise ratio of 15 dB and with a dynamic range of 26 dB.
Another requirement of the receiver is the ability to detect DTMF signals when two
tones are received at different levels. The high-frequency tone may be received at a lower
level than the low-frequency tone due to the magnitude response of the communication
channel. This level difference is called twist, and the situation described above is called a
forward (or standard) twist. Reverse twist occurs when the low-frequency tone is
received at a lower level than the high-frequency tone. The receiver must operate with
a maximum of 8 dB normal twist and 4 dB reverse twist. A final requirement for the
receiver is that it operates in the presence of speech without incorrectly identifying the
speech signal as valid DTMF tones. This is referred to as talk-off performance.
The principle of DTMF detection is to examine the energy of the received signal at the
DTMF frequencies (defined in Figure 9.1) to determine whether a valid DTMF tone
pair has been received. The detection algorithm can be a DFT implementation using an
FFT algorithm or a filter-bank implementation. An FFT can be used to calculate the
energies of N evenly spaced frequencies. To achieve the frequency resolution required to
detect the eight DTMF frequencies within ±1.5 percent frequency deviation, a 256-
point FFT is needed for an 8 kHz sample rate. For the relatively small number of tones
to be detected, the filter-bank implementation is more efficient.
Since only eight frequencies are of interest, it is more efficient to use the DFT directly
to compute
X(k) = Σ_{n=0}^{N−1} x(n) W_N^{kn}   (9.3.1)
for eight different values of k that correspond to the DTMF frequencies defined in
Figure 9.1. The DFT coefficients can be more efficiently calculated by using the Goertzel
algorithm, which can be interpreted as a matched filter for each frequency k as illustrated
in Figure 9.5. In this figure, x(n) is the input signal of the system, H_k(z) is the transfer
function of the filter at the kth frequency bin, and X(k) is the corresponding filter output.
Figure 9.5 The DFT computed as a bank of matched filters H_0(z), ..., H_{N−1}(z) producing
X(0), ..., X(N − 1)
Using the fact that W_N^{−kN} = 1, Equation (9.3.1) can be written as

X(k) = W_N^{−kN} Σ_{n=0}^{N−1} x(n) W_N^{kn} = Σ_{n=0}^{N−1} x(n) W_N^{−k(N−n)}.   (9.3.3)

Define the sequence

y_k(n) = Σ_{m=0}^{N−1} x(m) W_N^{−k(n−m)},   (9.3.4)

due to the finite-length input x(n). From (9.3.3) and (9.3.4), and the fact that x(n) = 0 for
n < 0 and n ≥ N, the desired DFT coefficient X(k) is the output y_k(n) evaluated at time
n = N − 1. Thus Equation (9.3.4) can be expressed in the z-domain as

Y_k(z) = X(z) · 1/(1 − W_N^{−k} z^{−1}),   (9.3.8)

so that the transfer function of the kth filter is

H_k(z) = Y_k(z)/X(z) = 1/(1 − W_N^{−k} z^{−1}),   k = 0, 1, ..., N − 1.   (9.3.9)
This filter has a pole on the unit circle at the frequency ω_k = 2πk/N. Thus the entire
DFT can be computed by filtering the block of input data using a parallel bank of N
filters defined by (9.3.9), where each filter has a pole at the corresponding frequency of
the DFT. Since the Goertzel algorithm computes N DFT coefficients, the parameter N
must be chosen to make sure that X(k) is close to the DTMF frequencies fk . This can be
accomplished by choosing N such that
f_k/f_s ≈ k/N,   (9.3.10)
where f_s is the sampling rate. Multiplying the numerator and denominator of (9.3.9) by
(1 − W_N^{k} z^{−1}) gives

H_k(z) = (1 − W_N^{k} z^{−1}) / [(1 − W_N^{−k} z^{−1})(1 − W_N^{k} z^{−1})]
       = (1 − e^{−j2πk/N} z^{−1}) / (1 − 2cos(2πk/N) z^{−1} + z^{−2}).   (9.3.11)
The signal-flow graph of the transfer function defined by (9.3.11) is shown in Figure
9.7 using the direct-form II realization. The recursive part of the filter is on the left-hand
side of the delay elements, and the non-recursive part is on the right-hand side.

Figure 9.6 The recursive Goertzel filter H_k(z) with coefficient W_N^{−k}

Figure 9.7 Direct-form II realization of the Goertzel filter H_k(z), with recursive coefficient
2cos(2πf_k/f_s), feedback coefficient −1, and non-recursive coefficient −e^{−j2πf_k/f_s}

Since the output y_k(n) is required only at time n = N − 1, we just need to compute the
non-recursive part of the filter at the (N − 1)th iteration. The recursive part of the
algorithm can be expressed as

w_k(n) = x(n) + 2cos(2πf_k/f_s) w_k(n − 1) − w_k(n − 2),   (9.3.12)

and the non-recursive part computed once per frame is

X(k) = y_k(N − 1) = w_k(N − 1) − e^{−j2πf_k/f_s} w_k(N − 2).   (9.3.13)
A further simplification of the algorithm is made by realizing that only the magnitude
squared of X(k) is needed for tone detection. From (9.3.13), the squared magnitude of
X(k) is computed as

|X(k)|² = w_k²(N − 1) + w_k²(N − 2) − 2cos(2πf_k/f_s) w_k(N − 1) w_k(N − 2).   (9.3.14)
Therefore the complex arithmetic given in (9.3.13) is eliminated and (9.3.14) requires
only one coefficient, 2cos(2πf_k/f_s), for each |X(k)|² to be evaluated. Since there are eight
possible tones to be detected, we need eight filters described by (9.3.12) and (9.3.14).
Each filter is tuned to one of the eight frequencies defined in Figure 9.1. Note that
Equation (9.3.12) is computed for n = 0, 1, ..., N − 1, but Equation (9.3.14) is com-
puted only once at time n = N − 1.
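To illustrate how (9.3.12) and (9.3.14) work together, the following C sketch computes the squared magnitude at one DTMF frequency for a frame of N samples. It is an illustration, not the book's C55x implementation; the data types, function name, and constants are assumptions.

#include <math.h>

#define PI 3.14159265358979

/* Squared DFT magnitude |X(k)|^2 at frequency freq (Hz) using the
   Goertzel algorithm of (9.3.12) and (9.3.14).
   x  - frame of N input samples
   fs - sampling rate in Hz (8000 for DTMF)                         */
double goertzel_mag2(const short *x, int N, double freq, double fs)
{
    double coef = 2.0 * cos(2.0 * PI * freq / fs);   /* 2cos(2*pi*fk/fs)      */
    double w0 = 0.0, w1 = 0.0, w2 = 0.0;             /* w(n), w(n-1), w(n-2)  */
    int n;

    for (n = 0; n < N; n++)                          /* recursive part (9.3.12) */
    {
        w0 = (double)x[n] + coef * w1 - w2;
        w2 = w1;
        w1 = w0;
    }
    /* non-recursive part, computed once per frame (9.3.14) */
    return w1 * w1 + w2 * w2 - coef * w1 * w2;
}

In a DTMF detector this function would be called once per frame for each of the eight frequencies defined in Figure 9.1, and the results compared against the detection thresholds; a frame length such as N = 205 at 8 kHz is a common choice, although the book does not fix N here.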
The flow chart of the DTMF tone detection algorithm is illustrated in Figure 9.8. At
the beginning of each frame of length N, the state variables x(n), w_k(n), w_k(n − 1),
w_k(n − 2), and y_k(n) for each of the eight Goertzel filters and the energy are set to 0. For
each sample, the recursive part of each filter defined in (9.3.12) is executed. At the end of
each frame, i.e., n = N − 1, the squared magnitude |X(k)|² for each DTMF frequency
is computed based on (9.3.14). The following six tests are performed to determine if
a valid DTMF digit has been detected.
Figure 9.8 Flow chart of the DTMF tone detection algorithm: the recursive part of the eight
Goertzel filters is updated for every 8 kHz input sample; at n = N − 1 the non-recursive part is
computed and the magnitude, twist, frequency-offset, total-energy, and second-harmonic tests are
applied; a digit is output only if D(m) = D(m − 1) and D(m) ≠ D(m − 2)

Magnitude test

The squared magnitude |X(k)|² defined in (9.3.14) for each DTMF frequency is computed. The
largest magnitude in each group is obtained.
Twist test
Because of the frequency response of a telephone system, the tones may be attenuated
according to the system's gains at the tonal frequencies. Consequently, we do not expect
the high- and low-frequency tones to have exactly the same amplitude at the receiver,
even though they were transmitted at the same strength. Twist is the difference, in
decibels, between the low-frequency tone level and the high-frequency tone level. For-
ward twist exists when the high-frequency tone level is less than the low-frequency tone
level. Generally, the DTMF digits are generated with some forward twist to compensate
for greater losses at higher frequency within a long telephone cable. Different adminis-
trations recommend different amounts of allowable twist for a DTMF receiver. For
example, Australia allows 10 dB, Japan allows only 5 dB, and AT&T recommends not
more than 4 dB of forward twist or 8 dB of reverse twist.
Frequency offset test

This test is performed to prevent some broadband noises from being detected as
effective tones. If the effective DTMF tones are present, the power levels at those two
frequencies should be much higher than the power levels at the other frequencies. To
perform this test, the largest magnitude in each group is compared to the magnitudes of
other frequencies in that group. The difference must be greater than a predetermined
threshold in each group.
Total energy test

Similar to the frequency-offset test, the goal of this test is to reject some broadband noises
(such as speech) and further improve the robustness of the receiver. To perform this test,
three different constants, c1, c2, and c3, are used. The energy of the detected tone in the
low-frequency group is weighted by c1, the energy of the detected tone in the high-
frequency group is weighted by c2, and the sum of the two energies is weighted by c3.
Each of these terms must be greater than the summation of the energy of eight filter
outputs. For this test, the total energy is computed as
E = Σ_{k=1}^{8} |X(k)|².   (9.3.15)
Second harmonic test

The objective of this test is to reject speech that has harmonics close to f_k so that they
might be detected as DTMF tones. Since DTMF tones are pure sinusoids, they contain
very little second harmonic energy. Speech, on the other hand, contains a significant
amount of second harmonic energy. To test the level of second harmonic, the decoder
must evaluate the second harmonic frequencies of all eight DTMF tones. These second
harmonic frequencies (1394 Hz, 1540 Hz, 1704 Hz, 1882 Hz, 2418 Hz, 2672 Hz, 2954 Hz,
and 3266 Hz) also can be detected using the Goertzel algorithm.
Digit decoder
Finally, if all five tests are passed, the tone pair is decoded as an integer between 1 and
16. Thus the digit decoder is implemented as
where D(m) is the digit detected for frame m, m = 0, 1, 2, ... is the frame index, C is the
index of the column frequency that has been detected, and R is the index of the row
frequency that has been detected. For example, if the two frequencies 750 Hz and
1219 Hz are detected, the valid digit is computed as
This value is placed in a memory location designated D(m). If any of the tests fail,
then `−1', representing `no detection', is placed in D(m). For a new valid digit to be
declared, D(m) must be the same for two successive frames, i.e., D(m) = D(m − 1) ≠ D(m − 2).
If the digit is valid for more than two successive frames, the receiver is detecting the
continuation of a previously validated digit, and a third digit D(m) is not output.
There are two reasons for checking three successive digits at each pass. First, the
check eliminates the need to generate hits every time a tone is present. As long as the
tone is present, it can be ignored until it changes. Second, comparing the digits D(m − 2),
D(m − 1), and D(m) improves noise and speech immunity.
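A minimal sketch of this digit-consistency check is given below. The function name and the use of −1 as the `no detection' marker follow the description above; everything else is illustrative.

#define NO_DIGIT (-1)   /* placed in D(m) when any test fails */

/* Returns the newly validated digit, or NO_DIGIT if nothing new
   should be reported for the current frame.
   d0 = D(m), d1 = D(m-1), d2 = D(m-2)                            */
int validate_digit(int d0, int d1, int d2)
{
    if (d0 == NO_DIGIT)          /* no valid tone pair this frame      */
        return NO_DIGIT;
    if (d0 != d1)                /* not yet stable for two frames      */
        return NO_DIGIT;
    if (d0 == d2)                /* continuation of a reported digit   */
        return NO_DIGIT;
    return d0;                   /* new digit: report it once          */
}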
9.4 Adaptive Echo Cancellation

One of the main problems associated with telephone communications is the generation
of echoes due to impedance mismatches at various points in telecommunication net-
works. Such echoes are called line (or network) echoes. If the time delay between the
speech and the echo is short, the echo is not noticeable. Distinct echoes are noticeable
only if the delay exceeds tens of milliseconds; they are annoying and can disrupt a
conversation under certain conditions. The deleterious effects of echoes depend upon
their loudness, spectral distortion, and delay. In general, the longer the echo is delayed,
the more echo attenuation is required. Echo is probably the most devastating degrad-
ation for long-distance telecommunications, especially if the two parties are separated
by a great distance with a long transmission delay.
Figure 9.10 Echo generation in a telephone network: two-wire telephone facilities are connected
to the four-wire facility through hybrids (H), and impedance mismatches at the hybrids return
echoes toward the talkers
To explain the principle of echo cancellation, the function of the hybrid in Figure 9.10
can be illustrated in Figure 9.11, where the far-end signal x(n) passing through the echo
path P(z) results in the undesired echo r(n). The primary signal d(n) is a combination of
echo r(n), near-end signal u(n), and noise v(n) which consists of the quantization noise
from the A/D converter and other noises from the circuit. The adaptive filter W(z)
adaptively learns the response of the echo path P(z) by using the far-end speech x(n) as
an excitation signal. The echo replica y(n) is generated by W(z), and is subtracted from
the primary signal d(n) to yield the error signal e(n). Ideally, y(n) = r(n) and the residual
error e(n) is substantially echo free.
A typical impulse response of an echo path is shown in Figure 9.12. The time span over
which the impulse response of the echo path is significant (non-zero) is typically about
4 ms. This portion is called the dispersive delay since it is associated with the frequency-
dependent delay and loss through the echo path. Because of the existence of the four-
wire circuit between the location of the echo canceler and the hybrid, the impulse
response of echo path is delayed. Therefore the initial samples of p(n) are all zeros,
representing a flat delay between the canceler and the hybrid. The flat delay depends on
the transmission delay and the delay through the sharp filters associated with frequency-
division multiplex equipment. The sum of the flat delay and the dispersive delay is called
the tail delay.

Figure 9.11 Equivalent diagram of the echo canceler that shows details of the hybrid function

Figure 9.12 Typical impulse response of an echo path: a flat delay followed by the dispersive delay
Assume that the echo path P(z) is linear, time invariant, and with infinite impulse
response p(n), n = 0, 1, ..., ∞. As shown in Figure 9.11, the primary signal d(n) can be
expressed as

d(n) = r(n) + u(n) + v(n) = Σ_{l=0}^{∞} p(l) x(n − l) + u(n) + v(n),   (9.4.1)
where the additive noise v(n) is assumed to be uncorrelated with the near-end speech u(n)
and the echo r(n). The most widely used FIR filter generates the echo mimic
y(n) = Σ_{l=0}^{L−1} w_l(n) x(n − l).   (9.4.2)
As shown in (9.4.3), the adaptive filter W(z) has to adjust its weights to mimic the
response of echo path in order to cancel out the echo signal. The simple normalized
LMS algorithm introduced in Section 8.4 is used for most voice echo cancellation
applications. Assuming that disturbances u(n) and v(n) are uncorrelated with x(n), we
can show that W(z) will converge to P(z). Unfortunately, this requires L to be quite large
in many applications. Echo cancellation is achieved if W(z) ≈ P(z), as shown in (9.4.3).
Thus the residual error after the echo canceler has converged can be expressed as
e(n) = Σ_{l=L}^{∞} p(l) x(n − l) + u(n) + v(n).   (9.4.4)
By making the length L of W(z) sufficiently long, this residual echo can be minimized.
However, the excess MSE produced by the adaptive algorithm is also proportional to L.
Therefore there is an optimum order L that will minimize the MSE if an FIR filter is
used.
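The following C sketch puts the FIR echo replica of (9.4.2) and a normalized LMS update together for one sample. It is a simplified illustration rather than the book's implementation; the filter length, the power-estimate smoothing constants, and the small constant guarding against division by zero are assumptions.

#define L 128          /* adaptive filter length (illustrative) */

typedef struct {
    double w[L];       /* adaptive coefficients w_l(n)          */
    double x[L];       /* delay line x(n), x(n-1), ...          */
    double px;         /* running power estimate of x(n)        */
} echo_canceler;

/* Process one sample: x is the far-end sample, d the primary signal.
   Returns the error e(n) = d(n) - y(n).                               */
double ec_process(echo_canceler *ec, double x, double d, double mu)
{
    double y = 0.0, e, step;
    int l;

    /* update the delay line (newest sample in x[0]) */
    for (l = L - 1; l > 0; l--)
        ec->x[l] = ec->x[l - 1];
    ec->x[0] = x;

    /* echo replica y(n) = sum of w_l(n) x(n-l), Equation (9.4.2) */
    for (l = 0; l < L; l++)
        y += ec->w[l] * ec->x[l];

    e = d - y;

    /* normalized LMS: step size divided by an estimate of L*E[x^2] */
    ec->px = 0.99 * ec->px + 0.01 * x * x;     /* smoothed power estimate  */
    step = mu * e / (L * ec->px + 1e-6);       /* small constant avoids /0 */
    for (l = 0; l < L; l++)
        ec->w[l] += step * ec->x[l];

    return e;
}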
The number of coefficients for the transversal filter is directly related to the tail delay
(total delay) of the channel between the echo canceler and the hybrid. As mentioned
earlier, the length of the impulse response of the hybrid (dispersive delay) is relatively
short. However, the transmission delay (flat delay) from the echo canceler to the hybrid
depends on the physical location of the echo canceler. As shown in Figure 9.13, the split
echo canceler configuration is especially important for channels with particularly long
delays, such as satellite channels. In the split configuration, the number of transversal
filter coefficients need only compensate for the delay between the hybrid and the
canceler and not the much longer delay through the satellite. Hence, the number of
coefficients is minimized.
The design of an adaptive echo canceler involves many considerations, such as the
speed of adaptation, the effect of near-end and far-end signals, the impact of signal
levels and spectra, and the impact of nonlinearity. The echo canceler must accurately
model the echo path and rapidly adapt to its variation. This involves the selection of an
adaptive filter structure and an adaptation algorithm. Because the potential applica-
tions of echo cancellation are numerous, there have been considerable activities in the
design of echo cancellation devices. The best selection depends on performance require-
ments for a particular application.
The effectiveness of an echo canceler is measured by the echo return loss enhancement
(ERLE) defined as
ERLE = 10 log_10 { E[d²(n)] / E[e²(n)] }.   (9.4.5)
For a given application, the ERLE depends on the step size μ, the filter length L, the
signal-to-noise ratio (SNR), and the nature of signal in terms of power and spectral
content. A larger value of step size provides a faster initial convergence, but the final
ERLE is smaller due to the excess MSE. Provided the length is large enough to correct
for the length of echo tail, increasing L further is detrimental since doubling L will
reduce the ERLE by 3 dB.
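For reference, a block-based estimate of (9.4.5) can be sketched as follows; averaging over a block of N samples instead of taking expectations, and the small constants protecting the logarithm, are assumptions.

#include <math.h>

/* ERLE in dB over a block of N samples of the primary signal d(n)
   and the residual error e(n), following Equation (9.4.5).        */
double erle_db(const double *d, const double *e, int N)
{
    double pd = 0.0, pe = 0.0;
    int n;

    for (n = 0; n < N; n++)
    {
        pd += d[n] * d[n];
        pe += e[n] * e[n];
    }
    return 10.0 * log10((pd + 1e-12) / (pe + 1e-12));  /* guard against 0 */
}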
Most echo cancelers aim at canceling echo components up to 30 dB. Further reduc-
tion of the residual echo can be achieved by using a residual echo suppressor that will be
discussed later.

Figure 9.13 Split echo canceler configuration, with an echo canceler placed at each end of the
long-delay channel next to its hybrid

Detailed requirements of an echo canceler are described in ITU recommendations
G.165 and G.168, including the maximum residual echo level, the echo suppression
effect on the hybrid, the convergence time (which must be less than 500 ms), the initial
set-up time, and the degradation in a double-talk situation.
The first special-purpose chip for echo cancellation implements a single 128-tap
adaptive echo canceler [17]. Most echo cancelers were implemented using customized
devices in order to handle the large amount of computation required in real-
time applications. Disadvantages of VLSI implementation include the high devel-
opment cost and a lack of flexibility to meet application-specific requirements and
improvements. There has been considerable activity in the design of devices using
commercially available DSP chips.
There are some practical issues to be considered in designing an adaptive echo canceler: (1)
Adaptation should be stopped if the far-end signal x(n) is absent. (2) We must also
worry about the quality of adaptation over the large dynamic range of far-end signal
power. (3) The adaptive process benefits when the far-end signal contains a well-
distributed frequency component to persistently excite the adaptive system and the
interfering signals u(n) and v(n) are small. When the reference x(n) is a narrowband
signal, the adaptive filter response cannot be controlled at frequencies other than that
frequency band. If the reference signal later changes to a broadband signal, then the
canceler may actually become an echo generator. Therefore a tone detector may be used
to inhibit adaptation in this case.
As discussed in Section 9.4.2, the initial part of the impulse response of the echo path
is all zeros, representing a flat transmission delay between the canceler and the hybrid.
To take advantage of the flat delay, the structure illustrated in Figure 9.14 was de-
veloped, where Δ is a measure of the flat delay and the order of the shorter echo canceler
W(z) is L − Δ. By estimating the number of zero coefficients and using buffer indexing,
one does not need to perform the actual adaptive filtering operation on the zero
coefficients, but simply indexes into the buffer appropriately. This technique can
effectively reduce the real-time computational requirement. However, there are two
difficulties: multiple echoes may be present, and there is no reliable way to estimate the
flat delay.
Figure 9.14 Adaptive echo canceler with a flat delay of Δ samples (z^{−Δ}) preceding the shorter
adaptive filter W(z)

Figure 9.15 Adaptive echo canceler with speech detectors and residual echo suppressor
The conventional double-talk detector (DTD) is based on the echo return loss (ERL) or
hybrid loss, which can be expressed as

ρ = 20 log_10 { E[|x(n)|] / E[|d(n)|] }.   (9.4.6)
If the echo path is time-invariant, the ERL may be measured during the training period
for some applications. In several adaptive echo cancelers, the value of ERL is assumed
to be 6 dB. Based on this assumption, the near-end speech is present if
|d(n)| > (1/2)|x(n)|.   (9.4.7)
However, we cannot just compare the instantaneous absolute values |d(n)| and |x(n)|
because of noise. Therefore the modified near-end speech detection algorithm declares
the presence of near-end speech if

|d(n)| > (1/2) max{|x(n)|, ..., |x(n − L + 1)|}.   (9.4.8)
This algorithm compares an instantaneous absolute value |d(n)| with the maximum
absolute value of x(n) over a time window spanning the echo path delay range. The
absolute value of x(n) over a time window spanning the echo path delay range. The
advantage of using an instantaneous power of d(n) is fast response to the near-end
speech. However, it will increase the probability of false alarm if noise exists in the
network.
A more robust version of the algorithm uses the short-term power estimates P_x(n) and
P_d(n) to replace the instantaneous values |x(n)| and |d(n)|. The short-term power
estimates are implemented as first-order IIR filters of the form

P_x(n) = α P_x(n − 1) + (1 − α)|x(n)|   (9.4.9)

and

P_d(n) = α P_d(n − 1) + (1 − α)|d(n)|,   (9.4.10)

where 0 < α < 1. The use of a larger α results in a detector that is more robust to noise.
However, it also results in a slower response to the presence of near-end speech. With
these modified short-term power estimates, the near-end speech is detected if

P_d(n) > (1/2) max{P_x(n), P_x(n − 1), ..., P_x(n − L + 1)}.   (9.4.11)
It is important to note that a considerable portion of the initial break-in near-end speech
u(n) may not be detected by this detector. Thus adaptation would proceed for a
considerable amount of time in the presence of double-talking. Furthermore, the
requirement of a buffer to store L power estimates increases the complexity of the
algorithm.
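A sketch of the detector defined by (9.4.9)-(9.4.11) is given below; the smoothing constant, the buffer length, and the circular-buffer bookkeeping are illustrative choices, not the book's implementation.

#include <math.h>

#define L     128        /* echo path delay range in samples (illustrative) */
#define ALPHA 0.98       /* smoothing constant, 0 < ALPHA < 1               */

static double px_buf[L]; /* last L short-term power estimates of x(n)       */
static double px = 0.0, pd = 0.0;
static int    idx = 0;

/* Returns 1 if near-end speech is declared for the current samples x, d. */
int near_end_speech(double x, double d)
{
    double pmax;
    int l;

    /* first-order IIR power estimates, Equations (9.4.9) and (9.4.10) */
    px = ALPHA * px + (1.0 - ALPHA) * fabs(x);
    pd = ALPHA * pd + (1.0 - ALPHA) * fabs(d);

    /* keep the last L estimates of Px(n) in a circular buffer */
    px_buf[idx] = px;
    idx = (idx + 1) % L;

    /* near-end speech test, Equation (9.4.11) */
    pmax = px_buf[0];
    for (l = 1; l < L; l++)
        if (px_buf[l] > pmax)
            pmax = px_buf[l];

    return (pd > 0.5 * pmax);
}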
The assumption that the ERL is a constant of value 6 dB is usually incorrect in most
applications. Even if the ERL is 6 dB, P_d(n) can still be greater than the threshold
without near-end speech because of the dispersive characteristics of the echo path and/
or far-end speech. If the ERL is higher than 6 dB, it will take longer to detect the
presence of near-end speech. On the other hand, if the ERL is below 6 dB, most far-end
speech will be falsely detected as near-end speech. For practical applications, it is better
to dynamically estimate the time-varying threshold ρ by observing the signal levels of
x(n) and d(n) when the near-end speech u(n) is absent.
Nonlinearities in the echo path of the telephone circuit and uncorrelated near-end
speech limit the amount of achievable cancellation in a typical adaptive echo canceler
to about 30 to 35 dB. The residual echo suppressor shown in Figure 9.15 is used to remove
the last vestiges of the remaining echo. This device also effectively removes echo during
the initial convergence of the echo canceler if an off-line training stage is prohibited.
The most widely used residual echo suppressor is a center clipper with an input–
output characteristic illustrated in Figure 9.16. The center clipper is used to remove the
low-level echo signal caused by circuit noises, finite-precision errors, etc., which cannot
be canceled by the echo canceler. This nonlinear operation is expressed as

y(n) = { 0,      |x(n)| ≤ β
       { x(n),   |x(n)| > β,       (9.4.12)

where β is the clipping level. This center clipper completely eliminates signals below the
clipping level, but leaves instantaneous signal values greater than the clipping level
unaffected. Thus large signals go through unchanged, but small signals are eliminated.
Since small signals are consistent with echo, the device achieves the function of residual
echo suppression. The clipping threshold β determines how `choppy' the speech will
sound with respect to the echo level. A large value of β suppresses all the residual echoes
but also deteriorates the quality of the near-end speech. Usually the threshold is set so as
to equal or exceed the return echo peak amplitude.
Figure 9.16 Input–output characteristic of the center clipper with clipping threshold β
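Equation (9.4.12) maps directly to a small function; a sketch with the clipping threshold passed as a parameter (the function name is illustrative):

#include <math.h>

/* Center clipper of Equation (9.4.12): signals at or below the clipping
   threshold beta are removed, larger signals pass through unchanged.    */
double center_clip(double x, double beta)
{
    return (fabs(x) <= beta) ? 0.0 : x;
}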
9.5 Acoustic Echo Cancellation

9.5.1 Introduction
Figure 9.17 Acoustic echo in a hands-free system: the far-end signal drives the power amplifier
and loudspeaker, and both the direct coupling and the room reflections are picked up by the
microphone pre-amplifier and returned as acoustic echo

Unfortunately, not only the direct coupling but also the sound that bounces back and
forth between the walls and the furniture of the conference room will be picked up by the
microphones and transmitted back to the far-end. These acoustic echoes can be very
annoying because they cause the far-end talker to hear a delayed version of his or her
own speech.
The most effective technique to eliminate the acoustic echo is to use the adaptive echo
cancellation approach discussed in the previous section. The basic concept of acoustic echo
cancellation is similar to that of line echo cancellation. However, the requirements of an
acoustic echo canceler differ from those of line echo cancelers because their functions
and the nature of their echo paths are different. Instead of the
mismatch of the hybrid, a loudspeaker-room-microphone system needs to be modeled
in these applications. The acoustic echo canceler controls the long echo using a high-
order adaptive FIR filter. This full-band acoustic echo canceler will be discussed in this
section. A more effective technique to cancel the acoustic echo is called the subband
acoustic echo canceler, in which the input signal is split into several adjacent subbands
and uses an independent low-order filter in each subband.
Compared with line echo cancellation, there are three major factors making the
acoustic echo cancellation far more difficult. These factors are summarized as follows:

1. The acoustic echo path is much longer than that of a line echo canceler, so a very
high-order adaptive FIR filter is needed and the computational load is much higher.

2. The acoustic echo path is generally non-stationary and it may change rapidly due to
the motion of people in the room, the position changes of the microphone and some
other factors like temperature change, doors and/or windows opened or closed, etc.
The canceler should track these changes quickly enough to cancel the echoes, thus
requiring a faster convergence algorithm.

3. The acoustic loss from the loudspeaker to the microphone is small, or may even be a
gain, so detecting weak near-end speech in the presence of strong acoustic echo is
difficult and a more sophisticated double-talk detector is needed.
Therefore acoustic echo cancelers require more computation power, faster convergence
adaptive algorithms, and more sophisticated double-talk detectors.
Driven by the far-end signal x(n), the adaptive filter W(z) models the acoustic echo path
P(z) and yields an echo replica y(n), which is used to cancel the acoustic echo components
in the microphone signal d(n).
An acoustic echo canceler removes the acoustic echoes by using the adaptive filter
W(z) to generate a replica of the echo expressed as
y(n) = Σ_{l=0}^{L−1} w_l(n) x(n − l).   (9.5.1)
This replica is then subtracted from the microphone signal d(n). The coefficients of W(z)
are updated by the normalized LMS algorithm expressed as

w_l(n + 1) = w_l(n) + μ(n) e(n) x(n − l),   l = 0, 1, ..., L − 1,   (9.5.2)

where μ(n) is the step size normalized by the power estimate of x(n) and
e(n) = d(n) − y(n). This adaptation must be stopped if the near-end talker is speaking.
Acoustic echo cancelers usually operate in two modes. In the off-line training mode
discussed in Section 9.2.4, the impulse response of the acoustic echo path is estimated
with the use of white noise or chirp signals as the training signal x(n). During the
training mode, the correct length of the echo path response may be determined. In the
subsequent on-line operating mode, an adaptive algorithm is used to track slight
variations in the impulse response of echo path using the far-end speech x(n).
For an adaptive FIR filter with the LMS algorithm, a large L requires a small step
size μ, thus resulting in slow convergence. Therefore the filter is unable to track the
relatively fast transient behavior of the acoustic echo path P(z). Perhaps the number of
taps could be reduced significantly by modeling the acoustic echo path as an IIR filter.
However, there are difficulties such as the stability associated with the adaptive IIR
structures.
As discussed in Chapter 8, if fixed-point arithmetic is used for implementation and μ
is sufficiently small, the excess MSE is increased when a large L is used, and the
numerical error (due to coefficient quantization and roundoff) is increased when a
large L and a small μ are used. Furthermore, roundoff causes early termination of the
adaptation when a small μ is used. In order to alleviate these problems, a higher
dynamic range is required, which can be achieved by using floating-point arithmetic.
However, this solution includes the added cost of more expensive hardware.
As mentioned earlier, the adaptation of coefficients must be stopped when the near-
end talker is speaking. Most double-talk detectors for adaptive line echo cancelers
discussed in the previous section are based on echo return loss (or acoustic loss) from
the loudspeaker to the microphone. In the acoustic echo canceler case, this loss is very
small or may even be a gain because of the amplifiers used in the system. Therefore the
higher level of acoustic echo than the near-end speech makes detection of weak near-end
speech very difficult.
9.6 Speech Enhancement Techniques

In many speech communication settings, the presence of background noise degrades the
quality or intelligibility of speech. This section discusses the design of single-channel
speech enhancement (or noise reduction) algorithms, which use only one microphone to
reduce background noise in the corrupted speech without an additional reference noise.
The widespread use of cellular/wireless phones has significantly increased the use of
communication systems in high noise environments. Intense background noise, how-
ever, often degrades the quality or intelligibility of speech, degrading the performance of
many existing signal processing techniques such as speech coding, speech recognition,
speaker identification, channel transmission, and echo cancellation. Since most voice
coders and voice recognition units assume high signal-to-noise ratio (SNR), low SNR
will deteriorate the performance dramatically. With the development of hands-free and
voice-activated cellular phones, the noise reduction becomes increasingly important to
improve voice quality in noisy environments.
The purpose of many speech enhancement algorithms is to reduce background noise,
improve speech quality, or suppress undesired interference. There are three general
classes of speech enhancement techniques: subtraction of interference, suppression of
harmonic frequencies, and re-synthesis using vocoders. Each technique has its own set
of assumptions, advantages, and limitations. The first class of techniques suppresses
noise by subtracting a noise spectrum, which will be discussed in Section 9.6.2. The
second type of speech enhancement is based on the periodicity of noise. These methods
employ fundamental frequency tracking using adaptive comb filtering of the harmonic
noise. The third class of techniques is based on speech modeling using iterative methods.
These systems focus on estimating model parameters that characterize the speech signal,
followed by re-synthesis of the noise-free signal. These techniques require prior
knowledge of noise and speech statistics and generally result in iterative enhancement
schemes.
Noise subtraction algorithms can also be partitioned depending on whether a single-
channel or dual-channel (or multiple-channel) approach is used. A dual-channel adaptive
noise cancellation was discussed in Section 8.5. In this type of system, the primary channel
contains speech with additive noise and the reference channel contains a reference noise
that is correlated to the noise in the primary channel. In situations such as telephone or
radio communications, only a single-channel system is available. A typical single-channel
speech enhancement system is shown in Figure 9.19, where noisy speech x(n) is the input
signal of the system, which contains the speech signal s(n) from the speech source and the
noise v(n) from the noise source. The output signal is the enhanced speech ŝ(n). Char-
acteristics of noise can only be estimated during silence periods between utterances, under
the assumption that the background noise is stationary.
This section concentrates on the single-channel speech enhancement system. Since
only a single recording is available and the performance of the noise suppression system
is based upon the accuracy of the background noise estimate, speech enhancement
techniques must estimate noise characteristics during the non-speech periods when
only background noise is present. Therefore an effective and robust voice activity
detector (VAD) plays an important role in the single-channel noise suppression system.
Noise subtraction algorithms can be implemented in time-domain or frequency-
domain. Based on the periodicity of voiced speech, the time-domain adaptive noise
canceling technique can be utilized by generating a reference signal that is formed by
delaying the primary signal by one period. Thus a complicated pitch estimation algo-
rithm is required. Also, this technique can only be applied for voiced speech, but fails to
process unvoiced speech. The frequency-domain implementation is based on short-time
spectral amplitude estimation, called spectral subtraction. The basic idea is to
obtain the short-term magnitude and phase spectra of the noisy speech during speech
frames using the FFT, subtract an estimated noise magnitude spectrum, and
inverse transform the subtracted spectral amplitude using the phase of the original
noisy speech. The enhancement procedure is performed frame-by-frame, thus the data
buffer requirements, block data handling, and time delay imposed by the FFT complicate
this technique for some real-time applications. Also, musical tone artifacts are often
heard at frame boundaries in such reconstructed speech.
Figure 9.19 A single-channel speech enhancement system: the noisy speech x(n) = s(n) + v(n)
from the speech and noise sources is processed to produce the enhanced speech ŝ(n)

Figure 9.20 Block diagram of the single-channel noise suppression system: the input x(n) is
segmented and buffered, a VAD identifies the non-speech frames used for noise estimation, and
the enhanced speech ŝ(n) is produced
balanced by the presence of the same amount of noise during non-speech segments.
Setting the output to 0 has the effect of amplifying the noise during the speech segments.
Therefore it is best to attenuate the noise by a fixed factor during the non-speech
periods. A balance must be maintained between the magnitude and characteristics of
the noise perceived during the speech segment and the noise that is perceived during the
noise segment. A reasonable amount of attenuation was found to be about 30 dB. As a
result, some undesirable audio effects such as clicking, fluttering, or even slurring of the
speech signal are avoided.
As mentioned earlier, the input signal from the A/D converter is segmented and wind-
owed. To do this, the input sequence is separated into half (50 percent) overlapped
data buffers. The data in each buffer is then multiplied by the coefficients of the Hanning
(or Hamming) window. After the noise subtraction, the time-domain enhanced speech
waveform is reconstructed by the inverse FFT. These output segments are overlapped
and added to produce the output signal. The processed data is stored in an output
buffer.
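A sketch of the 50 percent overlapped, Hanning-windowed, overlap-add framing described above is given below. The frame length, the buffer layout, and the placeholder process_frame() (standing for the FFT, spectral subtraction, and inverse FFT) are assumptions, not the book's implementation.

#include <math.h>
#include <string.h>

#define PI    3.14159265358979
#define FRAME 256                /* analysis frame length (an assumption)  */
#define HOP   (FRAME/2)          /* 50 percent overlap                     */

static double win[FRAME];        /* Hanning window coefficients            */
static double inbuf[FRAME];      /* current (overlapped) input frame       */
static double outbuf[FRAME];     /* overlap-add accumulator                */

/* process_frame() stands for the FFT, spectral modification, and inverse
   FFT; it is assumed to modify its argument in place.                    */
void process_frame(double *frame);

void init_window(void)
{
    int n;
    for (n = 0; n < FRAME; n++)
        win[n] = 0.5 - 0.5*cos(2.0*PI*n/(FRAME - 1));
}

/* Accepts HOP new input samples and produces HOP enhanced output samples. */
void enhance_hop(const double *in, double *out)
{
    double frame[FRAME];
    int n;

    /* shift in HOP new samples so frames overlap by 50 percent */
    memmove(inbuf, inbuf + HOP, HOP*sizeof(double));
    memcpy(inbuf + HOP, in, HOP*sizeof(double));

    /* multiply by the Hanning window and process in the frequency domain */
    for (n = 0; n < FRAME; n++)
        frame[n] = inbuf[n]*win[n];
    process_frame(frame);

    /* overlap-add the processed frame into the output accumulator */
    for (n = 0; n < FRAME; n++)
        outbuf[n] += frame[n];

    /* the first HOP samples are now complete: emit them and shift */
    memcpy(out, outbuf, HOP*sizeof(double));
    memmove(outbuf, outbuf + HOP, HOP*sizeof(double));
    memset(outbuf + HOP, 0, HOP*sizeof(double));
}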
Several assumptions were made in developing the algorithm. We assume that the
background noise remains stationary, such that its expected magnitude spectrum measured
prior to the speech segments remains unchanged during the speech segments. If the environment
changes, there is enough time to estimate a new magnitude spectrum of the background noise
before a speech frame commences. For slowly varying noise, the algorithm requires a VAD
to determine that speech has ceased and a new noise spectrum could be estimated. The
algorithm also assumes that significant noise reduction is possible by removing the
effect of noise from the magnitude spectrum only.
Assuming that a speech signal s(n) has been degraded by the uncorrelated additive
signal v(n), the corrupted noisy signal can be expressed as

x(n) = s(n) + v(n).   (9.6.1)
Assuming that v(n) is zero-mean and uncorrelated with s(n), the estimate of |S(k)| can
be expressed as

|Ŝ(k)| = |X(k)| − E[|V(k)|],   (9.6.3)

where E[|V(k)|] is the expected noise spectrum taken during the non-speech periods.
Given the estimate |Ŝ(k)|, the spectral estimate can be expressed as

Ŝ(k) = |Ŝ(k)| e^{jθ_x(k)},   (9.6.4)

where

e^{jθ_x(k)} = X(k)/|X(k)|   (9.6.5)

and θ_x(k) is the phase of the measured noisy signal. It is sufficient to use the noisy speech
phase for practical purposes. Therefore we reconstruct the processed signal using the
estimate of the short-term speech magnitude spectrum |Ŝ(k)| and the phase of the
degraded speech, θ_x(k).
Substituting Equations (9.6.3) and (9.6.5) into Equation (9.6.4), the estimator can be
expressed as

Ŝ(k) = [|X(k)| − E[|V(k)|]] X(k)/|X(k)| = H(k) X(k),   (9.6.6)

where

H(k) = 1 − E[|V(k)|]/|X(k)|.   (9.6.7)

Note that the spectral subtraction algorithm given in Equations (9.6.6) and (9.6.7)
avoids computation of the phase θ_x(k), which is too complicated to implement in
fixed-point hardware.
A number of modifications have been developed to reduce the auditory effects of spectral error.
These methods are spectral magnitude averaging, half-wave rectification, and residual
noise reduction. A detailed diagram of the spectral subtraction algorithm is illustrated in
Figure 9.21.
Spectral magnitude averaging

Since the spectral error is proportional to the difference between the noise spectrum and
its mean, local averaging of the magnitude spectra

|X̄(k)| = (1/M) Σ_{i=1}^{M} |X_i(k)|   (9.6.8)

can be used to reduce the spectral error, where X_i(k) is the ith time-windowed transform of
x(n). One problem with this modification is that the speech signal can be considered
short-term stationary only for a maximum of about 30 ms, so the averaging has the risk of some
temporal smearing of short transitory sounds. From the simulation results, a reasonable
compromise between variance reduction and time resolution appears to be averaging
2–3 frames.
Half-wave rectification
For each frequency bin where the signal magnitude spectrum |X(k)| is less than the
averaged noise magnitude spectrum E[|V(k)|], the output is set to 0 because the magni-
tude spectrum cannot be negative. This modification can be implemented by half-wave
rectifying the spectral subtraction filter H(k). Thus Equation (9.6.6) becomes

Ŝ(k) = {[H(k) + |H(k)|]/2} X(k).   (9.6.9)
The advantage of half-wave rectification is that any low-variance coherent tonal noise is
essentially eliminated. The disadvantage of half-wave rectification occurs in the situ-
ation where the sum of noise and speech at a frequency bin k is less than E[|V(k)|]. In this
case the speech information at that frequency is incorrectly removed, implying a
possible decrease in intelligibility.
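A sketch of the gain computation of (9.6.7) combined with the half-wave rectification of (9.6.9), which amounts to clamping the gain at zero. The array names, the separate real/imaginary buffers, and the small constant guarding against division by zero are assumptions.

/* Apply spectral subtraction to one frame of FFT data.
   xr[k], xi[k]   - real and imaginary parts of X(k)
   mag[k]         - |X(k)| computed beforehand
   noise_mag[k]   - E|V(k)|, averaged over non-speech frames
   The gain H(k) = 1 - E|V(k)|/|X(k)| from (9.6.7) is clamped at zero,
   which is equivalent to the half-wave rectification of (9.6.9).      */
void spectral_subtract(double *xr, double *xi, const double *mag,
                       const double *noise_mag, int nbins)
{
    int k;
    for (k = 0; k < nbins; k++)
    {
        double h = 1.0 - noise_mag[k] / (mag[k] + 1e-12);
        if (h < 0.0)
            h = 0.0;            /* half-wave rectification             */
        xr[k] *= h;             /* S^(k) = H(k) X(k), Equation (9.6.6) */
        xi[k] *= h;
    }
}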
As mentioned earlier, a small amount of noise improves the output speech quality.
This idea can be implemented by using a software constraint,
where the minimum spectral floor is 34 dB with respect to the estimated noise spectrum.
For uncorrelated noise, the residual noise spectrum occurs randomly as narrowband
magnitude spikes. This residual noise spectrum will have a magnitude between 0 and a
maximum value measured during non-speech periods. When these narrowband com-
ponents are transformed back to the time domain, the residual noise will sound like the
sum of tones with random fundamental frequency which is turned on and off at a rate of
about 20 ms. During speech frames the residual noise will also be perceived at those
frequencies which are not masked by the speech.
Since the residual noise will randomly fluctuate in amplitude at each frame, it can be
suppressed by replacing its current value with its minimum value chosen from the
adjacent frames. The minimum value is used only when |Ŝ(k)| is less than the maximum
residual noise calculated during non-speech periods. The motivation behind this re-
placement scheme is threefold: (1) If |Ŝ(k)| lies below the maximum residual noise
and it varies radically from frame to frame, there is a high probability that the spectrum
at that frequency is due to noise. Therefore it can be suppressed by taking the minimum
value. (2) If |Ŝ(k)| lies below the maximum but has a nearly constant value, there is
a high probability that the spectrum at that frequency is due to low-energy speech.
Therefore taking the minimum will retain the information. (3) If |Ŝ(k)| is greater than the
maximum, the bias removal is sufficient. Thus the estimated spectrum |Ŝ(k)| is used to
reconstruct the output speech. However, with this approach high-energy frequency
bins are not averaged together. The disadvantages of the scheme are that more storage
is required to save the maximum noise residuals and the magnitude values for three
adjacent frames, and more computations are required to find the maximum and
minimum values of the spectra for the three adjacent frames.
9.7 Projects Using the TMS320C55x

Some DSP applications that can be used as course projects for this book are listed in
this section. Brief descriptions are provided, so that we can evaluate and define the
scope of each project. The numbers in the parentheses indicate the level of difficulty of
the projects, where the larger the number, the greater the difficulty.
Speech Codecs
9. A-law and μ-law companding (1)
10. International Telecommunications Union (ITU) G.726 ADPCM (3)
Telecommunications
12. DTMF tone generation (1)
Error Coding
17. Cyclic redundancy code (1)
Image Processing
Signal generation and simulation are used widely for DSP algorithm development.
They are an integral part of many applications implemented using DSP processors.
The most widely used signal generators are the sinusoid and random number gener-
ators. For designing modern communication applications, engineers often use channel
simulations to study and implement DSP applications. The widely used channel models
are telephone and wireless channels.
As discussed in previous sections, echoes exist in both the full-duplex dial-up tele-
phone networks and hands-free telephones. For a full-duplex telephone communica-
tion, there exist both near-end and far-end data echoes. The adaptive data echo canceler
is required for high-speed modems. The acoustic echo canceler is needed for speaker-
phone applications used in conference rooms.
Speech codecs are the voice coder–decoder pairs that are used for transmitting
speech (audio) signals more efficiently than passing the raw audio data samples. At
an 8 kHz sampling rate, 16-bit audio data requires a rate of 128 000 bits per
second (128 kbps) to transmit through a given channel. By using speech codecs with
speech compression techniques, many voice codecs can pass speech at a rate of 8 kbps or
lower. Using lower bit rate vocoders, a channel with fixed capacity will be able to serve
more users.
Telecommunication has changed our daily life dramatically. DSP applications can be
found in various communication devices such as cellular phones and DSL modems.
Modulation techniques are widely used in communications. Quadrature amplitude
modulation (QAM) is one example used by modems to transmit information over the
telephone lines.
Channel coding or error coding is becoming more and more important in telecom-
munication applications. Convolutional encoding and Viterbi decoding, also known as
forward error correction (FEC) coding techniques, are used in modems to help achieve
error-free transmission. The cyclic redundancy check (CRC) is widely used to verify the
correctness of the received data in the receiver.
Image processing is another important DSP application as a result of the increasing
need for video compression for transmission and storage. Many standards exist: the
JPEG (Joint Photographic Experts Group) standard is used for still images, while the MPEG
(Moving Picture Experts Group) standards are designed to support more advanced video commu-
nications. The image compression is centered on the block-based DCT and IDCT
technologies.
Wireless communication is one of the most important technologies that has been
developed and greatly improved in the past several years. In digital cellular phone systems,
both the infrastructure, such as the cellular base stations, and the handsets use
DSP processors. Some of the systems use general-purpose DSP processors, while others
use DSP cores combined with ASIC technologies. A simplified wireless communica-
tion system is illustrated in Figure 9.22. The system can be divided into three sections:
transmitter, receiver, and the communication channel. The system can also be distin-
guished as speech coding and decoding, channel coding and decoding, and finally,
modulation and demodulation.
Speech (source) coding is an important DSP application in wireless communica-
tions. Vocoders are used to compress speech signals for bandwidth-limited commu-
nication channels. The most popular vocoders for wireless communications compress
speech samples from 64 kbps to the range of 6–13 kbps.
The FEC coding scheme is widely used in the communication systems as an important
channel coding method to reduce the bit errors on noisy channels. The FEC used in the
system shown in Figure 9.22 consists of the convolutional encoding and Viterbi decod-
ing algorithms. Modern DSP processors such as the TMS320C55x have special instruc-
tions to aid efficient implementation of the computationally intensive Viterbi decoders.
Figure 9.22 A simplified wireless communication system: the transmitter consists of the speech
encoder, CRC, convolutional encoder, interleaver, and modulation (transmit) filter; the channel
introduces Rayleigh fading and Gaussian noise; the receiver consists of the demodulation (receive)
filter, synchronization, equalizer, deinterleaver, Viterbi decoder, CRC check, and speech decoder
b_0 = 1 ⊕ x ⊕ x³ ⊕ x⁵   (9.7.1)

and

b_1 = 1 ⊕ x² ⊕ x³ ⊕ x⁴ ⊕ x⁵,   (9.7.2)

where ⊕ denotes the modulo-2 adder, an XOR operation. For the rate 1/2 convolu-
tional encoder, each input information bit produces two encoded bits, where bit 0 is
generated by (9.7.1) and bit 1 by the generator (9.7.2). This redundancy enables the
Viterbi decoder to choose the correct bits under noisy conditions. Convolutional
encoding is often represented using states. The convolutional encoder given in
Figure 9.23 has a 32-state trellis, with each state connecting to two states. The basic
block of this 32-state trellis diagram is illustrated in Figure 9.24.
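A bit-level C sketch of the encoder defined by (9.7.1) and (9.7.2) is given below. It is an illustration rather than the book's assembly implementation; the register variable, the bit ordering (bit i of sr holds the input delayed by i samples), and the tap masks derived from the two generators are assumptions consistent with the equations.

/* Rate 1/2 convolutional encoder with generators (9.7.1) and (9.7.2).
   The 6-bit state sr holds the current input bit in bit 0 and the input
   delayed by i samples in bit i.                                        */
static unsigned sr = 0;

static int parity(unsigned v)            /* XOR of all bits of v         */
{
    int p = 0;
    while (v) { p ^= (v & 1); v >>= 1; }
    return p;
}

/* Encode one information bit; the two coded bits are returned through
   *b0 and *b1.                                                          */
void conv_encode(int in_bit, int *b0, int *b1)
{
    sr = ((sr << 1) | (in_bit & 1)) & 0x3F;   /* shift in the new bit    */
    *b0 = parity(sr & 0x2B);   /* taps 1 + x + x^3 + x^5,        (9.7.1) */
    *b1 = parity(sr & 0x3D);   /* taps 1 + x^2 + x^3 + x^4 + x^5, (9.7.2) */
}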
For this encoding scheme, each encoding state at time n is linked to two states at time
n + 1 as shown in Figure 9.24. The Viterbi algorithm (see references for details) is used
for decoding the trellis coded information bits by expanding the trellis over the received
symbols. The Viterbi algorithm reduces the computational load by taking advantage of
the special structure of the trellis codes. It calculates the `distance' between the received
signal path and all the accumulated trellis paths entering each state. After comparison,
only the most likely path based on the current and past path history (called surviving
path – the path with the shortest distance) is kept, and all other unlikely paths are
discarded at each state. Such early rejection of the unlikely paths greatly reduces the
computation needed for the decoding process. From Figure 9.24, each link from an old
state at time n to a new state at time n + 1 is associated with a transition path. For
example, the path m_x is the transition from state i to state j, and m_y is the transition
path from state i + 16 to state j. The accumulated path history is calculated as

state(j) = min[state(i) + m_x, state(i + 16) + m_y],   (9.7.3)

where the new state history, state(j), is chosen as the smaller of the two accumulated path
histories state(i) and state(i + 16) plus the transition paths m_x and m_y, respectively.
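The add-compare-select operation of (9.7.3) for one new state can be sketched as follows; the array names, the metric type, and the survivor-path bookkeeping are illustrative.

#define NSTATES 32

/* Add-compare-select for new state j, following Equation (9.7.3):
   old[] holds the accumulated path metrics at time n, mx and my are the
   branch metrics from states i and i+16, and survivor[j] records which
   predecessor was chosen so the path can be traced back later.          */
void acs(const long *old, long *new_metric, int *survivor,
         int j, int i, long mx, long my)
{
    long a = old[i]      + mx;    /* path through state i      */
    long b = old[i + 16] + my;    /* path through state i + 16 */

    if (a <= b) {
        new_metric[j] = a;
        survivor[j]   = i;
    } else {
        new_metric[j] = b;
        survivor[j]   = i + 16;
    }
}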
Figure 9.23 The rate 1/2 convolutional encoder: the input bit passes through five delay elements
(z^{−1}), and two XOR networks produce coded bit 0 and coded bit 1
Figure 9.24 The trellis diagram of the rate 1/2 constraint length 5 convolutional encoder
In most communication systems, the cyclic redundancy check (CRC) is used to detect
transmission errors. The implementation of CRC is usually done using a shift register.
For example, a 7-bit CRC can be represented using the following polynomial generator:

b_CRC = 1 ⊕ x ⊕ x² ⊕ x⁴ ⊕ x⁵ ⊕ x⁷.   (9.7.4)

The above CRC generator can produce a unique CRC code for a block of up to 127
(2⁷ − 1) bits. To generate the CRC code for longer data streams, a longer CRC generator,
such as the CRC-16 and CRC-32 polynomials specified by the ITU, can be used.
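A bit-serial sketch of the CRC defined by (9.7.4) is shown below. The MSB-first processing order, the all-zero initial remainder, and the way the 7 CRC bits are appended are assumptions, since these details vary between systems; the tap mask 0x37 encodes the terms of the generator below x^7.

/* Compute the 7-bit CRC of (9.7.4) over a block of message bits.
   The generator 1 + x + x^2 + x^4 + x^5 + x^7 reduces, modulo 2, the
   message polynomial multiplied by x^7.                                */
unsigned crc7(const unsigned char *bits, int nbits)
{
    unsigned crc = 0;                 /* 7-bit remainder register        */
    int n;

    for (n = 0; n < nbits; n++)
    {
        unsigned fb = ((crc >> 6) ^ (bits[n] & 1)) & 1;  /* feedback bit */
        crc = (crc << 1) & 0x7F;      /* shift the remainder             */
        if (fb)
            crc ^= 0x37;              /* x^5 + x^4 + x^2 + x + 1         */
    }
    return crc;                        /* append these 7 bits to the data */
}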
At the front end of the system, transmit and receive filters are used to remove the out-
of-band signals. The transmit filter and receive filter shown in Figure 9.22 are chosen to
be square-root raised-cosine pulse-shape frequency response FIR filters. The detailed
DSP implementation of FIR filters has been described in Chapter 5.
DSP processors are also often used to provide modulation and demodulation for
digital communication systems. Modulation can be implemented in several ways.
The most commonly used method is to arrange the transmitting symbols into the I
(in-phase) and Q (quadrature) symbols. Figure 9.25 is a simplified modulation and
demodulation scheme for communication systems, where !c is the carrier frequency.
Other functions for digital communication systems may include timing synchron-
ization and recovery, automatic gain control (AGC), and matched filtering.
Figure 9.25 The simplified block diagram of modulator and demodulator for digital commu-
nication systems: (a) a passband transmitter modulator, and (b) a passband receiver demodulator
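Following part (a) of Figure 9.25, a minimal sketch of the passband combination of the I and Q branches is given below. The sign convention on the quadrature branch, the assumption that I[] and Q[] have already passed through the transmit (pulse-shaping) filters, and the function name are illustrative.

#include <math.h>

#define PI 3.14159265358979

/* Passband modulation of Figure 9.25(a): combine the in-phase and
   quadrature components with the carrier cos(wc*n) and sin(wc*n).
   wc is the carrier frequency in radians per sample.                */
void modulate(const double *I, const double *Q, double *x,
              int nsamples, double wc)
{
    int n;
    for (n = 0; n < nsamples; n++)
        x[n] = I[n]*cos(wc*n) - Q[n]*sin(wc*n);
}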
Read in row-by-row; write column-by-column; read out row-by-row

(a)  b0  b1  b2  b3  b4        (b)  b0  b5  b10 b15 b20
     b5  b6  b7  b8  b9             b1  b6  b11 b16 b21
     b10 b11 b12 b13 b14            b2  b7  b12 b17 b22
     b15 b16 b17 b18 b19            b3  b8  b13 b18 b23
     b20 b21 b22 b23 b24            b4  b9  b14 b19 b24
Figure 9.26 A simple example of interleave: (a) before interleave, and (b) after interleave
/* Sum-of-sinusoids (Jakes) fading simulator, see reference [34]
   PI = 3.14
   C  = 300 000 000 m/s, speed of light
   V  = Mobile speed in mph
   Fc = Carrier frequency in Hz
   N  = Number of simulated multi-path signals
   N0 = N/4 - 1/2, the number of oscillators
*/
wm = 2*PI*V*Fc/C;                        /* maximum Doppler frequency     */
xc(t) = sqrt(2)*cos(PI/4)*cos(wm*t);
xs(t) = sqrt(2)*sin(PI/4)*cos(wm*t);
for(n = 1; n < N0; n++)
{
    wn = wm*cos(2*PI*n/N);               /* Doppler shift of the nth path */
    xc(t) += 2*cos(PI*n/N0)*cos(wn*t);
    xs(t) += 2*sin(PI*n/N0)*cos(wn*t);
}
References
[1] S. M. Kuo and D. R. Morgan, Active Noise Control Systems – Algorithms and DSP Implementa-
tions, New York: Wiley, 1996.
[2] D. E. Knuth, The Art of Computer Programming, vol. 2: Seminumerical Algorithms, 2nd Ed.,
Reading, MA: Addison-Wesley, 1981.
[3] N. Ahmed and T. Natarajan, Discrete-Time Signals and Systems, Englewood Cliffs, NJ: Prentice-
Hall, 1983.
[4] A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing, Englewood Cliffs, NJ:
Prentice-Hall, 1989.
[5] S. J. Orfanidis, Introduction to Signal Processing, Englewood Cliffs, NJ: Prentice-Hall, 1996.
[6] J. G. Proakis and D. G. Manolakis, Digital Signal Processing – Principles, Algorithms, and
Applications, 3rd Ed., Englewood Cliffs, NJ: Prentice-Hall, 1996.
[7] A. Bateman and W. Yates, Digital Signal Processing Design, New York: Computer Science Press,
1989.
[8] G. L. Smith, `Dual-tone multifrequency receiver using the WE DSP16 digital signal processor,'
Application note, AT&T.
[9] Analog Devices, Digital Signal Processing Applications Using the ADSP-2100 Family, Englewood
Cliffs, NJ: Prentice-Hall, 1990.
[10] P. Mock, `Add DTMF generation and decoding to DSP-uP designs,' in Digital Signal Processing
Applications with the TMS320 Family, Texas Instruments, 1986, Chap. 19.
[11] D. O'Shaughnessy, `Enhancing speech degraded by additive noise or interfering speakers,' IEEE
Communications Magazine, Feb. 1989, pp. 46–52.
[12] B. Widrow et al., `Adaptive noise canceling: principles and applications,' Proc. of the IEEE, vol. 63,
Dec. 1975, pp. 1692–1716.
[13] M. R. Sambur, `Adaptive noise canceling for speech signals,' IEEE Trans. on ASSP, vol. 26, Oct.
1978, pp. 419–423.
[14] S. F. Boll, `Suppression of acoustic noise in speech using spectral subtraction,' IEEE Trans. ASSP,
vol. 27, Apr. 1979, pp. 113–120.
[15] J. S. Lim and A. V. Oppenheim, `Enhancement and bandwidth compression of noisy speech,' Proc.
of the IEEE, vol. 67, no. 12, Dec. 1979, pp. 1586–1604.
[16] J. R. Deller, Jr., J. G. Proakis, and J. H. L. Hansen, Discrete-Time Processing of Speech Signals,
New York: MacMillan, 1993.
[17] D. L. Duttweiler, `A twelve-channel digital echo canceler,' IEEE Trans. on Comm., vol. COM-26,
May 1978, pp. 647–653.
[18] D. L. Duttweiler and Y. S. Chen, `A single-chip VLSI echo canceler,' Bell System Technical J.,
vol. 59, Feb. 1980, pp. 149–160.
[19] C. W. K. Gritton and D. W. Lin, `Echo cancellation algorithms,' IEEE ASSP Magazine, Apr.
1984, pp. 30–38.
[20] `Echo cancelers,' CCITT Recommendation G.165, 1984.
[21] M. M. Sondhi and D. A. Berkley, `Silencing echoes on the telephone network,' Proc. of IEEE,
vol. 68, Aug. 1980, pp. 948–963.
[22] M. M. Sondhi and W. Kellermann, `Adaptive echo cancellation for speech signals,' in Advances in
Speech Signal Processing, S. Furui and M. Sondhi, Eds., New York: Marcel Dekker, 1992, Chap. 11.
[23] Texas Instruments, Inc., Acoustic Echo Cancellation Software for Hands-Free Wireless Systems,
Literature no. SPRA162, 1997.
[24] Texas Instruments, Inc., Echo Cancellation S/W for TMS320C54x, Literature no. BPRA054, 1997.
[25] Texas Instruments, Inc., Implementing a Line-Echo Canceler Using Block Update & NLMS Algo-
rithms-'C54x, Literature no. SPRA188, 1997.
[26] Texas Instruments, Inc., A-Law and mu-Law Companding Implementations Using the
TMS320C54x, Literature no. SPRA163A, 1997.
[27] Texas Instruments, Inc., The Implementation of G.726 ADPCM on TMS320C54x DSP, Literature
no. BPRA053, 1997.
[28] Texas Instruments, Inc., Cyclic Redundancy Check Computation: An Implementation Using the
TMS320C54x, Literature no. SPRA530, 1999.
[29] Texas Instruments, Inc., DTMF Tone Generation and Detection on the TMS320C54x, Literature
no. SPRA096A, 1999.
[30] Texas Instruments, Inc., IS-54 Simulation, Literature no. SPRA135, 1994.
[31] Texas Instruments, Inc., Implement High Speed Modem w/Multilevel Multidimensional Modulation-
TMS320C542, Literature no. SPRA321, 1997.
[32] Texas Instruments, Inc., Viterbi Decoding Techniques in the TMS320C54x Family Application
Report, Literature no. SPRA071, 1996.
[33] A. J. Viterbi, `Error bounds for convolutional codes and an asymptotically optimum decoding
algorithm,' IEEE Trans. Information Theory, vol. IT-13, Apr. 1967, pp. 260–269.
[34] W. C. Jakes, Jr., Microwave Mobile Communications, New York, NY: John Wiley & Sons, 1974.
Appendix A
Some Useful Formulas
This appendix briefly summarizes some basic formulas of algebra that will be used
extensively in this book.
Trigonometric identities are often required in the manipulation of Fourier series, trans-
forms, and harmonic analysis. Some of the most common identities are listed as follows:
Σ_{n=0}^{N−1} x^n = (1 − x^N)/(1 − x),   x ≠ 1.   (A.9)

Σ_{n=0}^{N−1} e^{jωn} = Σ_{n=0}^{N−1} (e^{jω})^n = (1 − e^{jωN})/(1 − e^{jω}).   (A.10)

Σ_{n=0}^{∞} x^n = 1/(1 − x),   |x| < 1.   (A.11)
Since the complex number z represents the point (x, y) in the two-dimensional plane, it
can be drawn as a vector illustrated in Figure A.1. The horizontal coordinate x is called
the real part, and the vertical coordinate y is the imaginary part.
As shown in Figure A.1, the vector z can also be defined by its length (radius) r and its
direction (angle) θ. The x and y coordinates of the vector are given by x = r cos θ and
y = r sin θ, where

r = |z| = √(x² + y²)   (A.15)
Figure A.1 The complex number z represented as a vector in the complex plane, with real part x,
imaginary part y, radius r, and angle θ
z_1/z_2 = [(x_1x_2 + y_1y_2) + j(x_2y_1 − x_1y_2)] / (x_2² + y_2²)   (A.19a)

        = (r_1/r_2) e^{j(θ_1 − θ_2)}.   (A.19b)

Note that addition and subtraction are straightforward in rectangular form, but are
difficult in polar form. Division is simple in polar form, but is complicated in rectangu-
lar form.
The complex arithmetic of the complex number z can be listed as

z = x + jy = r e^{jθ},   (A.20)

1/z = (1/r) e^{−jθ},   (A.22)

z^N = r^N e^{jNθ}.   (A.23)

The solutions of

z^N = 1   (A.24)

are

z = e^{j2πk/N},   k = 0, 1, ..., N − 1.   (A.25)

As illustrated in Figure A.2, these N solutions are equally spaced around the unit circle
|z| = 1. The angular spacing between them is θ = 2π/N.
Figure A.2 The N solutions of z^N = 1 equally spaced around the unit circle, with angular
spacing 2π/N
Thus we have

∫_{−∞}^{∞} δ(t) dt = 1   (A.27)

and

∫_{−∞}^{∞} δ(t − t_0) x(t) dt = x(t_0).   (A.28)
Vectors and matrices are often used in signal analysis to represent the state of a system
at a particular time, a set of signal values, and a set of linear equations. The vector
concepts can be applied to effectively describe a DSP algorithm. For example, define an
L×1 coefficient vector as a column vector

$$\mathbf{b} = [b_0 \;\; b_1 \;\; \cdots \;\; b_{L-1}]^T, \qquad (A.29)$$

where T denotes the transpose operator and a bold lower-case character is used to denote a vector. We further define an input signal vector at time n as

$$\mathbf{x}(n) = [x(n) \;\; x(n-1) \;\; \cdots \;\; x(n-L+1)]^T. \qquad (A.30)$$
The output signal of the FIR filter defined in (3.1.16) can be expressed in vector form as

$$y(n) = \sum_{l=0}^{L-1} b_l\, x(n-l) = \mathbf{b}^T \mathbf{x}(n) = \mathbf{x}^T(n)\, \mathbf{b}. \qquad (A.31)$$
Therefore, the linear convolution of an FIR filter can be described as the inner (or dot)
product of the coefficient and signal vectors, and the result is a scalar y(n).
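For example, with L = 3, b = [1 2 3]^T, and x(n) = [4 5 6]^T, the inner product (A.31) gives y(n) = (1)(4) + (2)(5) + (3)(6) = 32.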
If we further define the coefficient vector

$$\mathbf{a} = [a_1 \;\; a_2 \;\; \cdots \;\; a_M]^T, \qquad (A.32)$$
then the input/output equation of the IIR filter given in (3.2.18) can be expressed as
Power and energy calculations are important in circuit analysis. Power is defined as the
time rate of expending or absorbing energy, and can be expressed in the form of a
derivative as
$$P = \frac{dE}{dt}, \qquad (A.35)$$
where P is the power in watts, E is the energy in joules, and t is the time in seconds. The
power associated with the voltage and current can be expressed as
$$P = vi = i^2 R = \frac{v^2}{R}, \qquad (A.36)$$
where v is the voltage in volts, i is the current in amperes, and R is the resistance in
ohms.
The unit bel, named in honor of Alexander Graham Bell, is defined as the common logarithm of the ratio of two powers, Px and Py. In engineering applications, the most popular description of signal strength is the decibel (dB), defined as

$$N = 10 \log_{10} \frac{P_x}{P_y} \ \mathrm{dB}. \qquad (A.37)$$

Therefore the decibel unit is used to describe the ratio of two powers and requires a reference value, Py, for comparison.
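For example, if Px = 2 W and the reference power is Py = 1 W, then N = 10 log10(2) ≈ 3 dB; that is, doubling the power corresponds to an increase of about 3 dB.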
It is important to note that both the current i(t) and the voltage v(t) can be considered as an analog signal x(t); thus the power of the signal is proportional to the square of the signal amplitude. For example, if the signal x(t) is amplified by a factor g, that is, y(t) = gx(t), the signal gain can be expressed in dB as

$$\mathrm{gain} = 10 \log_{10} \frac{P_x}{P_y} = 20 \log_{10} g, \qquad (A.38)$$
since the power is a function of the square of the voltage (or current) as shown in
(A.36). As the second example, consider that the sound-pressure level, Lp , in decibels
Reference
[1] Jan J. Tuma, Engineering Mathematics Handbook, New York, NY: McGraw-Hill, 1979.
Appendix B
Introduction of MATLAB for
DSP Applications
This section briefly introduces the MATLAB environment for numerical computation,
data analysis, and graphics.
The fundamental data type of MATLAB is the array. Vectors, scalars, and matrices are handled as special cases of the basic array. A finite-duration sequence can be represented in MATLAB as a row vector. To declare a variable, simply assign it a value at the MATLAB prompt. For example, the sequence x(n) = {2, 4, 6, 3, 1} for n = 0, 1, 2, 3, 4 can be represented in MATLAB by two row vectors n and xn as follows:
n = [0, 1, 2, 3, 4];
xn = [2, 4, 6, 3, 1];
Note that the MATLAB command prompt `>>' in the command window is ignored throughout this book.
The above commands are examples of the MATLAB assignment statement, which consists of a variable name followed by an equal sign and the data values to be assigned to the variable. The data values are enclosed in brackets and separated by commas and/or blanks. A scalar does not need brackets. For example,
Alpha = 0.9999;
MATLAB statements are case sensitive, for example, Alpha is different from alpha.
There is no need to declare variables as integer, real (float or double in C), or
complex because MATLAB automatically sets the variables to be real with double
precision. The output of every command is displayed on the screen; however, a semicolon `;' at the end of a command suppresses the screen output, except for graphics and on-line help commands.
The xn vector itself is sufficient to represent the sequence x(n), since the time
index n is trivial when the sequence begins at n = 0. It is important to note that MATLAB assumes all vectors are indexed starting with 1, and thus xn(1) = 2, xn(2) = 4, ..., and xn(5) = 1. We can check individual values of the vector xn. For
example, typing
xn(3)
will display the value of xn(3).
MATLAB saves previously typed commands in a buffer. These commands can be
recalled with the up-arrow key `↑' and down-arrow key `↓'. This helps in editing
previous commands with different arguments. Terminating a MATLAB session will
delete all the variables in the workspace. These variables can be saved for later use by
using the MATLAB command
save
This command saves all variables in the file matlab.mat. These variables can be
restored to the workspace using the load command. The command
save file_name xn yn
will save only the selected variables xn and yn in the file named file_name.
MATLAB provides an on-line help system accessible by using the help command.
For example, to get information about the function save, we can enter the following
statement at the command prompt:
help save
The help command will return the text information on how to use save in the
command window. The help command with no arguments will display a list of
directories that contain the MATLAB-related files. A more general search for informa-
tion is provided by lookfor.
B.1.2 Graphics
The command plot(n,xn) produces a graph of xn versus n, a connected plot with straight lines between the data points [n, x(n)]. The outputs of all graphics commands given in the command window are flushed to a separate graphics window.
If xn is a vector, plot(xn) produces a linear graph of the elements of xn versus the
index of the elements of xn. For a causal sequence, we can use x-vector representation
alone as
plot(xn);
In this case, the plot is generated with the values of the indices of the vector xn used as
the n values.
The command plot(x,y)generates a line plot that connects the points represented
by the vectors x and y with line segments. We can pass a character string as an argument
to the plot function in order to specify various line styles, plot symbols, and colors.
Table B.1 summarizes the options for lines and marks, and the color options are listed in
Table B.2.
For example, the following command:

plot(x,y,'r--');

will create a line plot with a red dashed line.
Table B.1 Line and mark options for the plot command

  Line style   Symbol     Mark     Symbol
  solid        -          plus     +
  dashed       --         star     *
  dotted       :          circle   o
  dash-dot     -.         x-mark   x

Table B.2 Color options for the plot command

  Symbol   Color
  y        yellow
  m        magenta
  c        cyan
  r        red
  g        green
  b        blue
  w        white
  k        black
Plots may be annotated with title, xlabel, and ylabel commands. For
example,
plot(n, xn);
title('Time-domain signal x(n)');
xlabel('Time index');
ylabel('Amplitude');
where title labels the plot with the title `Time-domain signal x(n)', xlabel labels the x-axis with `Time index', and ylabel labels the y-axis with `Amplitude'. Note that these commands can be written on the same line.
By default, MATLAB automatically scales the axes to fit the data values. However,
we can override this scaling with the axis command. For example, the plot statement
followed by
axis( [xmin xmax ymin ymax]);
sets the scaling limits for the x- and y-axes on the current plot. The axis command
must follow the plot command to have the desired effect. This command is especially
useful when we want to compare curves from different plots using the identical scale.
The axis command may be used to zoom-in (or zoom-out) on a particular section of
the plot. There are some predefined string-arguments for the axis command. For
example,
axis('equal');
sets equal scale on both axes, and
axis('square');
sets the default rectangular graphic frame to a square.
The command plot(x,y) assumes that the x and y axes are divided into equally spaced intervals; these plots are called linear plots. MATLAB can also generate plots with a logarithmic scale (base 10) using the commands semilogx, semilogy, and loglog, which apply the logarithmic scale to the x-axis, the y-axis, and both axes, respectively.
Generally, we use the linear plot to display a time-domain signal, but we prefer to use
the logarithmic scale for y to show the magnitude response in the unit of decibels, which
will be discussed in Chapter 4.
There are many other specialized graphics functions for 2D plotting. For example, the
command
stem(n,xn);
produces the `lollipop' presentation of the same data. In addition, bar creates a bar
graph, contour makes contour plots, hist makes histograms, etc.
To compare different vectors by plotting one on top of another, we can use the
command
hold on
to generate overlay plots. This command freezes the current plot in the graphics
window. All subsequent plots generated by the plot command are simply added to
the existing plot. To return to normal plotting, use
hold off
to clear the hold command. When the entire set of data is available, the plot
command with multiple arguments can be used to generate an overlay plot. For ex-
ample, if we have two sets of data (x1, y1) and (x2, y2), the command
plot(x1, y1, x2, y2, ':');
plots (x1, y1) with a solid line and (x2, y2) with a dotted line on the same graph.
Multiple plots per window can be done with the MATLAB subplot function. The
subplot command allows us to split the graph window into sub-windows. The
possible splits can be either two sub-windows or four sub-windows. Two windows
can be arranged as either top-and-bottom or left-and-right. The arguments to the
subplot(m,n,p) command are three integers m, n, and p. The integers m and n specify that the graph window is to be split into an m-by-n grid of smaller windows, and the integer p specifies the pth window for the current plot. The windows are numbered from
left to right, top to bottom. For example,
subplot(2,1,1), plot(n), subplot(2,1,2), plot(xn);
will split the graph window into a top plot for vector n and a bottom plot for
vector xn.
MATLAB statements are usually written in the form variable = expression, or simply expression. Since MATLAB supports long variable names (up to 19 characters, starting with a letter and followed by letters, digits, or underscores), we should take advantage of this feature to give variables descriptive names.
The default operations in MATLAB are matrix (including vector and scalar) operations. The arithmetic operations between two scalars (1×1 matrices) a and b are: a+b (addition), a-b (subtraction), a*b (multiplication), a/b (division), and a^b (a raised to the power b). An array operation is performed element-by-element. Suppose A and B are row vectors with the same number of elements. To generate a new row vector C with values that are the operations of corresponding values in A and B element-by-element, we use A+B, A-B, A.*B, A./B, and A.^B. For example, given

x = [1, 2, 3]; y = [4, 5, 6];

then

z = x.*y

results in

z = 4 10 18
A period preceding an operator indicates an array or element-by-element operation.
For addition and subtraction, array operation and scalar operation are the same. Array
(element-by-element) operations apply not only to operations between two vectors of
the same size, but also to operations between a scalar and a vector. For example, every element in a vector A can be multiplied by a scalar b in MATLAB as B = b*A or B = b.*A. In general, when `point' is used with another arithmetic operator, it modifies
that operator's usual matrix definition to a pointwise one.
Six relational operators: < (less than), <= (less than or equal), > (greater than), >=
(greater than or equal), == (equal), and ~= (not equal), are available for comparing two
matrices of equal dimensions. MATLAB compares the pairs of corresponding elements.
The result is a matrix of ones and zeros, with one representing `true' and zero represent-
ing `false.' In addition, the operators & (AND), | (OR), ~ (NOT), and xor (exclusive
OR) are the logical operators. These operators are particularly useful in if statements.
For example,
if a > b
do something
end
The colon operator `:' is useful for creating index arrays and creating vectors of evenly spaced values. The index range can be generated using a start (initial value), a skip (increment), and an end (final value). Therefore, a regularly spaced vector of numbers is obtained by means of

n = [start:skip:end]
Note that no brackets are required if a vector is generated this way. However, brackets
are required to force the concatenation of the two vectors. Without the skip para-
meter, the default increment is 1. For example,
n = 0:2:100;

generates the vector n = [0 2 4 . . . 100], and

m = [1:10 20:2:40];

produces the vector m = [1 2 . . . 10 20 22 . . . 40].
In DSP applications, the vector form of the impulse response h(n) = (0.8)^n for n = 0, 1, . . ., 127 can be generated by the commands

n = [0:127]; hn = (0.8).^n;
B.1.4 Files
MATLAB provides three types of files for storing information: M-files, Mat-files, and
Mex-files. M-files are text files, with a `.m' extension. There are two types of M-files:
script files and function files. A script file is a user-created file with a sequence of
MATLAB commands. The file must be saved with a .m extension. A script file can be
executed by typing its name (without extension) at the command prompt in the com-
mand window. It is equivalent to typing all the commands stored in the script file at the
MATLAB prompt. A script file may contain any number of commands, including those
built-in and user-written functions. Script files are useful when we have to repeat a set of
commands and functions several times. It is important to note that we should not give a script file the same name as a variable in the workspace or a variable it creates. In addition, avoid names that clash with built-in functions. We can
use any text editor to write, edit, and save M-files. However, MATLAB provides its
own text editor. On a PC, select New→M-file from the File menu. A new edit window
will appear for creating a script file.
A function file is also an M-file, just like a script file, except it has a function
definition line on the top that defines the input and output explicitly. We will discuss
function files later. Mat-files are binary data files with a `.mat' extension. Mat-files are
created by MATLAB when we save data with the save command. The data is written
in a special format that MATLAB can read. Mat-files can be loaded into MATLAB
with the load command. Mex-files are MATLAB-callable C programs with a .mex extension. We do not use or discuss this type of file in this book.
For example, to generate x(n) for A = 1.5, f = 100 Hz, T = 0.001 second (1 ms), φ = 0.25π, and N = 100, we can easily use the following MATLAB script (figb1.m in the software package):

n = [0:99];
xn = 1.5*sin(2*pi*100*n*0.001+0.25*pi);

where the function pi returns the value of π. To view the generated sinewave, we can
use
plot(n,xn); title('Sinewave');
xlabel('Time index'); ylabel('Amplitude');
The waveform of the generated sinewave is shown in Figure B.1.
In Figure B.1, a trivial integer index n is used for the x-axis instead of an actual time
index in seconds. To better represent the time-domain signal, we can use the colon
operator to generate values between the first and third numbers, using the second
number as the increment. For example, if we wish to view x(n) generated in the previous
example with the actual time index t 0, 0.001, . . ., 0.099, we can use the following
script (figb2.m in the software package):
n = [0:99];
xn = 1.5*sin(2*pi*100*n*0.001+0.25*pi);
t = [0:0.001:0.099];
plot(t,xn); title('Sinewave');
xlabel('Time in second'); ylabel('Amplitude');
The result is shown in Figure B.2.
In addition to these sin, cos, rand, and randn functions discussed in Chapter 3,
MATLAB provides many other functions, such as abs(x), log(x), etc. The arguments
or parameters of the function are contained in parentheses following the name of the function.

[Figure B.1 Waveform of the generated sinewave: amplitude (−1.5 to 1.5) versus time index (0 to 100)]

[Figure B.2 Waveform of the generated sinewave: amplitude (−1.5 to 1.5) versus time in seconds (0 to 0.1)]

If a function contains more than one argument, it is very important to list the
arguments in the correct order. Some functions also require that the arguments be in
specific units. For example, the trigonometric functions assume that the arguments are
in radians. It is possible to nest a sequence of function calls. For example, the following
equation:
$$y = \sum_{n=1}^{L} \log |x(n)| \qquad (B.2)$$
can be implemented as
y = sum(log(abs(xn)));
where xn is the vector containing the elements x(n).
The built-in functions are optimized for vector operations. Writing efficient
MATLAB code (scripts or user-written functions) requires a programming style that
generates small functions that are vectorized. The primary way to avoid loops is to use
MATLAB functions as often as possible. The details of user-written functions will be
presented in Section B.4.
Two sequences x1(n) and x2(n) can be added sample-by-sample to form a new sequence

$$y(n) = x_1(n) + x_2(n). \qquad (B.3)$$

Adding their corresponding samples sums these two sequences. The summation of two sequences can be implemented in MATLAB by the arithmetic operator `+' if the sequences are of equal length. For example, we can add random noise to a sinewave as follows:

n = [0:127];
x1n = 1.5*sin(2*pi*100*n*0.001+0.25*pi);
x2n = 1.2*randn(1,128);
yn = x1n+x2n;
A given sequence x(n) multiplied by a constant a can be implemented in MATLAB by the scaling operation. For example, y(n) = ax(n), where each sample in x(n) is multiplied by the scalar a = 1.5, can be implemented as

yn = 1.5*xn;
Consider the discrete-time linear time-invariant system. Let x(n) be the input
sequence and y(n) be the output sequence of the system. If h(n) is the impulse response
of the system, the output signal of the system can be expressed as
$$y(n) = \sum_{k=-\infty}^{\infty} x(k)\, h(n-k) = \sum_{k=-\infty}^{\infty} h(k)\, x(n-k). \qquad (B.4)$$
If both x(n) and h(n) are finite causal sequences, MATLAB has a function called conv(a,b) that computes the convolution between vectors a and b. For example,
xn = [1, 3, 5, 7, 9]; hn = [2, 4, 6, 8, 10];
yn = conv(xn,hn)
yn = 2 10 28 60 110 148 160 142 90
As discussed in Chapter 4, the transfer function of this IIR filter can be expressed as
$$H(z) = \frac{\displaystyle\sum_{l=0}^{L-1} b_l\, z^{-l}}{\displaystyle\sum_{m=0}^{M} a_m\, z^{-m}}, \qquad (B.6)$$
the MATLAB script (figb3.m in the software package) to compute the filter output
yn with input sequence xn is given as follows:
n = [0:139];
x1n = 1.5*sin(2*pi*100*n*0.001+0.25*pi);
x2n = 1.2*randn(1,140);
xn = x1n+x2n;
b = [0.0305, 0, -0.0305];
a = [1, -1.5695, 0.9391];
yn = filter(b,a,xn);
plot(n,xn,':', n,yn,'-');
Figure B.3 shows the MATLAB plots of input and output signals. Note that the vector
a is defined based on coefficients am used in Equation (B.6), which have different signs
than the coefficients used in Equation (B.5). An FIR filter can be implemented by
setting a = [1] or using
filter(b,1,x);
where the vector b consists of FIR filter coefficients bl .
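Assuming the usual normalization a0 = 1 (as in the a vector above), the call filter(b,a,xn) evaluates (B.6) through the time-domain difference equation

$$y(n) = \sum_{l=0}^{L-1} b_l\, x(n-l) - \sum_{m=1}^{M} a_m\, y(n-m),$$

which is the standard recursive form implemented by MATLAB's filter function.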
MATLAB can interface with three different types of data files: Mat-files, ASCII files,
and binary files. The Mat-file and binary file contain data stored in a memory-efficient
binary format, whereas an ASCII file contains information stored in ASCII characters.
Mat-files are preferable for data that is going to be generated and used by MATLAB
programs.
Data saved in both the Mat-file and ASCII file can be loaded (retrieved) from the disk
file into an array in workspace using the load command. For example,
load xn;
will load the Mat-file xn.mat, and
load xn.dat;
will read the data from the ASCII file with the name xn.dat in the disk into an array
with the name xn.
To load the file xn.bin stored in the binary format into the array xn, we have to use
the following C-like commands:
fid = fopen('xn.bin','r');
xn = fread(fid,'float32');
where the fopen command opens the file xn.bin, `r' indicates to open the file for
reading, and fid is a file identifier associated with an open file. The second command
Figure B.3 Filter input (dotted line) and output (solid line) waveforms
fread reads binary data from the specified file and writes it into the array xn, and
`float32' indicates that each data value in the file is a 32-bit floating-point value. Actually,
MATLAB supports all the C and FORTRAN data types such as int, long, etc. Other
related MATLAB commands are fclose, fscanf, sscanf, fseek, and ftell.
Mat-files are generated by MATLAB programs using the save command, which
contains a file name and the array to be stored in the file. For example,
save out_file yn;
will save samples in the array yn into a disk file out_file.mat, where the .mat
extension is automatically added to the filename. To save variables in the ASCII format,
we can use a save command with an additional keyword as follows:
save out_file yn -ascii;
To save data in the binary format, we can use the fwrite command, which writes the
elements of an array to the specified file.
As discussed in the previous section, MATLAB has an extensive set of functions. Some of the functions are intrinsic (built-in) to MATLAB itself; others are distributed as function M-files. Details about individual functions are available in the on-line help facility and the MATLAB Reference Guide. Other functions are available in libraries of external M-files called MATLAB toolboxes, such as the Signal Processing Toolbox. Finally, functions
can be developed by individual users for more specialized applications. This is an
important feature of MATLAB.
Each user-written function should have a single purpose. This will lead to short,
simple modules that can be linked together by functional composition to produce more
complex operations. Avoid the temptation to build super functions with many options
and outputs. The function M-file has very specific rules that are summarized as follows.
An example of the user-written function dft.m given in Chapter 4 is listed as follows:
function [Xk] = dft(xn, N)
% Discrete Fourier transform function
% [Xk] = dft(xn, N)
% where xn is the time-domain signal x(n)
%       N is the length of the sequence
%       Xk is the frequency-domain X(k)
n = [0:1:N-1];
k = [0:1:N-1];
WN = exp(-j*2*pi/N);   % Twiddle factor
nk = n'*k;             % N by N matrix
WNnk = WN.^nk;         % Twiddle factor matrix
Xk = xn*WNnk;          % DFT
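These vectorized statements evaluate the DFT

$$X(k) = \sum_{n=0}^{N-1} x(n)\, W_N^{nk}, \qquad W_N = e^{-j2\pi/N}, \qquad k = 0, 1, \ldots, N-1,$$

for all k at once: nk = n'*k builds the N×N matrix of exponents, WNnk holds the corresponding twiddle factors, and multiplying the row vector xn by this matrix produces the entire frequency-domain vector Xk without an explicit loop.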
A function file begins with a function definition line, which has a well-defined list of
inputs and outputs as follows:
function [output variables] = function_name(input variables);
The function must begin with a line containing the word function, which is followed
by the output arguments, an equal sign, and the name of the function. The input
arguments to the function follow the function name and are enclosed in parentheses.
This first line distinguishes the function file from a script file. The name of the function
should match the name of the M-file. If there is a conflict, it is the name of the M-file
located on the disk that is known to the MATLAB command environment.
A line beginning with the % (the percent symbol) sign is a comment line and a `%' sign
may be put anywhere. Anything after a % in a line is ignored by MATLAB during
execution. The first few lines immediately following the function definition line should
be comments because they will be displayed if help is requested for the function name.
For example, typing
help dft
will display the five comment lines from the dft.m M-file on the screen as help information if
this M-file is in the current directory. These comment lines are usually used to provide a
brief explanation, define the function calling sequence, and then a definition of the input
and output arguments.
The only information returned from the function is contained in the output argu-
ments, which are matrices (for example, Xk in dft.m). Multiple output arguments
are also possible if square brackets surround the list of output arguments. MATLAB
returns whatever value is contained in the output matrix when the function completes.
The same matrix names can be used in both a function and the program that
references it since the function makes local copies of its arguments. However, these
local variables disappear after the function completes, thus any values computed in the
function, other than the output arguments, are not accessible from the program.
It is possible to declare a set of variables to be accessible to all or some functions
without passing the variables in the input list. For example,
global input_x;
The global command declares the variable input_x to be global. This command goes
before any executable command in the functions and scripts that need to access the value
of the global variables. Be careful with the names of the global variables to avoid conflict
with other local variables.
References
[1] D. M. Etter, Introduction to MATLAB for Engineers and Scientists, Englewood Cliffs, NJ: Prentice-
Hall, 1996.
[2] V. K. Ingle and J. G. Proakis, Digital Signal Processing Using MATLAB V.4, Boston: PWS
Publishing, 1997.
[3] E. W. Kamen and B. S. Heck, Fundamentals of Signals and Systems Using MATLAB, Englewood
Cliffs, NJ: Prentice-Hall, 1997.
[4] J. H. McClellan, et al., Computer-Based Exercises for Signal Processing Using MATLAB 5, Engle-
wood Cliffs, NJ: Prentice-Hall, 1998.
[5] R. Pratap, Getting Started with MATLAB 5, New York: Oxford University Press, 1999.
[6] MATLAB User's Guide, Math Works, 1992.
[7] MATLAB Reference Guide, Math Works, 1992.
[8] Signal Processing Toolbox for Use with MATLAB, Math Works, 1994.
Appendix C
Introduction of
C Programming for
DSP Applications
C has become the language of choice for many DSP software developments, not only because of its powerful commands and data structures but also because of its portability across DSP platforms and devices. In this appendix, we will cover some of
the important features of C for DSP applications.
The processes of compilation, linking/loading, and execution of C programs differ
slightly among operating environments. To illustrate the process, we use a general UNIX system C compiler, shown in Figure C.1, as an example. The C compiler translates high-level C programs into machine language that can be executed by computers or DSP proces-
sors such as the TMS320C55x. The fact that C compilers are available for a wide range
of computer platforms and DSP processors makes C programming the most portable
software for DSP applications. Many C programming environments include debugger
programs, which are useful for identifying errors in source programs. Debugger pro-
grams allow us to see values stored in variables at different points in a program and to
step through the program line by line.
The purpose of DSP programming is to manipulate digital signals for a specific signal
processing application. To achieve this goal, DSP programs must be able to organize
the variables (different data types), describe the actions (operators), control the oper-
ations (program flow), and move data back and forth between the outside world and the
program (input/output). This appendix provides a brief overview of the elements
required for efficient programming of DSP algorithms in C language and introduces
fundamental C programming concepts using C examples, but does not attempt to
cover all the elements in detail. The C programming language used throughout this book conforms to the ANSI C standard (American National Standards Institute C standard).
[Figure C.1 Compilation, linking, and execution of a C program: the C source program passes through the preprocessor and the compiler to produce assembly code, the assembler produces object code, the linker/loader combines the object code with libraries, and the resulting program is executed with input data to produce output]
In this section, we will present a simple C program and use it as an example to introduce
C language program components. As discussed in Section 3.1, an N-point unit-impulse
sequence can be written as
$$\delta(n) = \begin{cases} 1, & n = 0 \\ 0, & n = 1, 2, \ldots, N-1. \end{cases} \qquad (C.1)$$
The following C program (impulse.c in the software package) can be used to generate
this unit-sample sequence
/****************************************************************
* IMPULSE.C Unit impulse sequence generator *
****************************************************************/
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#define K 1024
void main()
{
float y [K];
int k;
int N = 256;

/* Generate unit impulse sequence */
for(k = 1; k < N; k++)
{
  y[k] = 0.0;          /* Clear array */
}
y[0] = 1.0;            /* y(0) = 1 */
}
A program written in C must have a few basic components. We now briefly discuss these
components used in this example C program.
C program comments may contain any message beginning with the characters
sequence /* and ending with the characters sequence */. The comments will be ignored
by the compiler. Program comments may be interspersed within the source code as well
as at the beginning of a function. In the above example, the extra asterisks around the program comments in lines one through three are there only to enhance the appearance of the comments; they are not necessary. Most C compilers nowadays also accept the C++ comment sequence, //. In our example, we mixed both comment sequences for demonstration purposes. Although program comments are
optional, good programming style requires that they be used throughout a program to
document the operations and to improve its readability. Detailed comments are very
important in maintaining complicated DSP software for new readers, even for the
original programmers after time has passed.
The preprocessor is the first pass of the C compiler. It reads in C source files as input
and produces output files for the C compiler. The tasks of the preprocessor are to remove comments, expand macro definitions, interpret include files, and check conditional
compilation. Preprocessor directives give instructions to the compiler that are per-
formed before the program is compiled. Each preprocessor directive begins with a
pound sign `#' followed by the preprocessor keyword. For example, the program
impulse.c contains the following directives:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#define K 1024
The #include <file> directive copies the file from the standard C compiler's include directory, while #include "file" directly copies the file from the current working directory for compilation unless a path has been specified. Thus the first three directives
specify that the statements in the files of stdio.h, stdlib.h, and math.h should be
inserted in place of the directives before the program is compiled. Note that these files
are provided with the C compiler. The #define directive directs the preprocessor to replace
subsequent occurrences of K with the constant value 1024.
A C program may consist of one or more functions, but one and only one function is called main(), with which the operating system will start executing the program. After starting the
application, the main function is executed first, and it is usually responsible for the main
control of the application program. This function can be written to return a value, or it
can be written as a void function that does not return a value. The body of the function
is enclosed in braces as follows:
void main()
{
variable declarations; /* Statements define variables */
executable statements; /* Statements execute program */
}
As shown in the example, all C statements end with a semicolon. The function contains
two types of statements: statements that define memory locations that will be used in the
program and statements that specify actions to be taken.
Before a variable can be used in a C program, it must be declared to inform the compiler of the name and type of the variable. A variable name is defined by declaring a sequence of
characters (the variable identifier or name) as a particular predefined type of data. C
constants are specific values that are included in the C statements, while variables are
memory locations that are assigned a name or identifier. The variable declaration uses
the following syntax:
data_type name;
For example, in the simple example we have
int k;
The term int indicates that the variable named k will store an integer data value. C also allows multiple variables to be defined within one statement by separating them with commas. For example,
int i,j,k;
An identifier may be any sequence of characters (usually with some length restric-
tions) that starts with a letter or an underscore, and cannot be any of the C compiler
reserved keywords. Note that C is case sensitive, making the variable k different from the variable K. The C language supports several data types that represent integer numbers, floating-point numbers, and text data. Arrays of each variable type and pointers of each
type may also be declared and used. Once variables are defined to be a given size and
type, some sort of manipulation can be performed using the variables.
Memory locations must be defined before other statements use them. Initial values
can also be specified at the same time when memory locations are defined. For example,
int N = 256;
defines the variable N as an integer, and assigns it with the value 256.
An assignment statement is used to assign a value to an identifier. The most basic
assignment operator in C is the single equal sign, =, where the value to the right of the
equal sign is assigned to the variable on the left. The general form of the assignment
statement is
identifier = expression;
where the expression can be a constant, another variable, or the result of an operation.
C also allows multiple expressions to be placed within one statement by separating them
with the commas. Each expression is evaluated left to right, and the entire expression
assumes the value of the last expression which is evaluated. Multiple assignments are
also allowed in C, for example,
int i, j, k;
i = j = k = 0;
In this case, the statement is evaluated from right to left, so that 0 is assigned to k, j,
and i.
Numeric data types are used to specify the types of numbers that will be contained in
variables. There are several types of data used depending on the format in which the
numbers are stored and the accuracy of the data. In C, numeric numbers are either
integers (short, int, long) or floating-point (float, double, long double) values.
The specific ranges of values are system dependent, which means that the ranges may vary
from one computer to another. Table C.1 contains information on the precision and
range of integers represented by a 32-bit machine and a 16-bit machine. Thus the size of a
variable declared as just int depends on the compiler implementation and could make the program behave differently on different machines. To make a program truly portable, the program should contain only short and long declarations. In practice, explicitly defined data types are often used, such as:
#define Word16 short
#define Word32 long
main()
{
Word16 k; /* Declare as 16-bit variable */
Word32 x; /* Declare as 32-bit variable */
statements;
}
Instead of using the short and long data types, the example code uses Word16 for the 16-
bit integer data type and Word32 for the 32-bit data type. In addition, the three integer
types (int, short, and long) can be declared as unsigned by preceding the declaration
with unsigned. For example,
unsigned int counter;
where counter has a value range from 0 to 65 535.
Statements and expressions using the operators should normally use variables and
constants of the same type. If data types are mixed, C uses two basic rules to auto-
matically make type conversions:
1. If an operation involves two types, the value with a lower rank is converted to the
type of higher rank. This process is called promotion, and the ranking from highest
to lowest type is double, float, long, int, short, and char.
2. In an assignment statement, the result is converted to the type of the variable that is
being assigned. This may result in promotion or demotion when the value is
truncated to a lower ranking type.
Sometimes the conversion must be stated explicitly in order to demand that a con-
version be done in a certain way. A cast operator places the name of the desired type in
parentheses before the variable or expression, thus allowing us to specify a type change
in the value. For example, the cast (int) used in the following expressions treats the floating-point number z as an integer:

int x, y;
float z = 2.8;
x = (int)z;         /* Truncate z to an integer x */
y = (int)(z+0.5);   /* Round z to an integer y    */

The cast truncates 2.8 to 2 and stores it in x, while adding 0.5 before the cast rounds the floating-point variable z to the integer 3 and stores it in y.
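A closely related use of casting in fixed-point DSP work is converting a floating-point sample to a 16-bit Q15 integer. The following routine is only an illustrative sketch (the function name float_to_q15 and the saturation limits are not from the book's listings); it reuses the Word16 type defined earlier:

#define Word16 short

/* Convert a floating-point value in the range [-1.0, 1.0) to Q15  */
/* format by scaling, rounding, saturating, and casting to 16 bits */
Word16 float_to_q15(float x)
{
  float scaled = x * 32768.0f;        /* Scale to the Q15 range     */

  if (scaled >= 0.0f)                 /* Round to the nearest       */
    scaled += 0.5f;                   /* integer value              */
  else
    scaled -= 0.5f;

  if (scaled > 32767.0f)              /* Saturate to avoid          */
    scaled = 32767.0f;                /* overflow near +1.0         */
  if (scaled < -32768.0f)
    scaled = -32768.0f;

  return (Word16)scaled;              /* Truncating cast to 16 bits */
}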
C.1.3 Arrays
An array groups distinct variables of the same type under a single name. A one-dimen-
sional array can be visualized as a list of values arranged in either a row or a column. We
assign an identifier to an array, and then distinguish between elements or values in the
array by using subscripts. The subscripts always start with 0 and are incremented by 1. In
C, all data types can be declared as an array by placing the number of elements to be
assigned to an array in brackets after its name. One-dimensional array is declared as
data_type array_name [N ];
where the array_name is the name of an array of N elements of data type specified. For
example,
float y [5];
where the integer expression 5 in brackets specifies that there are five float (floating-point) elements in the array y []. The first value in the y array is referenced by y [0], the second value is referenced by y [1], and the last element of an array with K elements is indexed by y [K-1] (here, y [4]).
Multidimensional arrays can be defined simply by appending more brackets contain-
ing the array size in each dimension. For example,
int matrix_a [4][2];
defines a 4×2 matrix called matrix_a. The matrix array would be referenced as matrix_a [i][j], where i and j are row and column indices, respectively.
An array can be initialized when it is defined, or the values can be assigned to it using
program statements. To initialize the array at the same time when it is defined, the
values are specified in a sequence that is separated by commas and enclosed with braces.
For example,
float y [5] = {1.0, 0.0, 0.0, 0.0, 0.0};
initializes a 5-point unit impulse response sequence in the floating-point array y [].
Arrays can also be assigned values by means of program statements. For example, the
following example generates an N-point unit impulse sequence.
for(k = 1; k < N; k++)
{
  y [k] = 0.0;   /* Clear array */
}
y [0] = 1.0;     /* y(0) = 1 */
A more detailed discussion on arrays and loops will be given later.
Once variables are defined to be a given size and type, a certain manipulation (operator)
can be performed using the variables. We have discussed assignment operators in C.1.1.
This section will introduce arithmetic and bitwise operators. Logical operators will be
introduced later.
The modulus operator is useful in implementing a circular pointer for signal proces-
sing. For example,
k = (k+1)%128;
Some compiler implementations may generate code that is more efficient if the combined operator is used. The combined operators include +=, -=, *=, /=, %=, and the combined forms of the bitwise and logical operators.
C supplies the binary bitwise operators: & (bitwise AND), | (bitwise OR), ^ (bitwise exclusive OR), << (arithmetic shift left), and >> (arithmetic shift right), which are
performed on integer operands. The unary bitwise operator, which inverts all the bits
in the operand, is implemented with the ~ symbol. These bitwise operators make C
programming an efficient programming language for DSP applications.
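As a small illustrative sketch (not code from the book's programs), the bitwise AND operator provides a cheap alternative to the modulus operator for circular buffers whose length is a power of two, and the shift operators implement scaling by powers of two:

#define N 128                    /* Buffer length, must be a power of two */

int index = 0;
int x = 1000;

index = (index + 1) & (N - 1);   /* Same as (index + 1) % N when N is a   */
                                 /* power of two                          */
x = x >> 1;                      /* Arithmetic shift right: divide by 2   */
x = x << 2;                      /* Shift left: multiply by 4             */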
The function main can have two parameters, argc and argv [], to catch arguments
passed to main from the command line when the program begins executing. These
arguments could be file names on which the program is to act or options that
influence the logic of the program. The parameter argv []is an array of pointers to
strings, and argc is an int whose value is equal to the number of strings to which argv [] points. The command-line arguments are passed to the main() function as
follows:
void main(int argc, char *argv [])
Suppose that we compile the firfltr.c such that the executable program is generated
and saved as firfltr.exe. We can run the program on a PC under MS-DOS Prompt
by typing
firfltr infile coefile outfile <enter>
The operating system passes the strings on the command line to main. More precisely,
the operating system stores the strings on the command line in memory and sets
argv [0 ]to the address of the first string (firfltr), the name of the file that holds
the program to be executed on the command line. argv [1 ]points to the address of the
second string (infile) on the command line, argv [2 ]to coefile, and argv [3 ]to
outfile. The argument argc is set to the number of strings on the command line. In
this example, argc = 4.
The use of command-line arguments makes the executable program flexible, because
we can run the program with different arguments (data files, parameter values, etc.)
specified at the execution time without modifying the program and re-compiling it
again. For example, the file firfltr.exe can be used to perform the FIR filtering function for different FIR filters with their coefficients defined by coefile. This
program can also be used to filter different input signals contained in the infile.
The flexibility is especially convenient when the parameter values used in the program
need to be tuned based on given data.
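As an illustrative sketch (not the firfltr.c listing itself), a main function can validate the number of command-line arguments before using them:

#include <stdio.h>
#include <stdlib.h>

void main(int argc, char *argv[])
{
  if (argc != 4)                          /* Program name plus three file names */
  {
    printf("Usage: firfltr infile coefile outfile\n");
    exit(1);
  }
  printf("Input file:       %s\n", argv[1]);
  printf("Coefficient file: %s\n", argv[2]);
  printf("Output file:      %s\n", argv[3]);
}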
C.3.2 Pointers
A pointer is a variable that holds the address of data, rather than the data itself. The use
of pointers is usually closely related to manipulating the elements in an array. Two
special pointer operators are required to effectively manipulate pointers. The indirection
operator * is used whenever the data stored at the address pointed to by a pointer
(indirect addressing) is required. The address operator & is used to set the pointer to the
desired address. For example,
int i = 5;
int *ptr;
ptr = &i;
*ptr = 8;
The first statement declares i as an integer of value 5, the second statement declares
that ptr is a pointer to an integer variable, and the third statement sets the pointer ptr
to the address of the integer variable i. Finally, the last statement changes the data at
the address pointed by ptr to 8. This results in changing the value of variable i from
5 to 8.
An array introduced in Section C.1.3 is essentially a section of memory that is
allocated by the compiler and assigned the name given in the declaration statement.
In fact, the name given is just a fixed pointer to the beginning of the array. In C, the
array name can be used as a pointer or it can be used to reference elements of the array.
For example, in the function shift in firfltr.c, the statement
float *x;
defines x as a pointer to floating-point variables. Thus *x and x [0] are exactly equivalent, although the meaning of x [0] is often clearer.
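The equivalence between array indexing and pointer arithmetic can be seen in the following small routine, which is only an illustrative sketch (the function name sum_array is not from firfltr.c):

/* Sum the elements of a float array using pointer arithmetic */
float sum_array(float *x, int n)
{
  int i;
  float sum = 0.0;
  float *p = x;            /* p points to x[0]          */

  for (i = 0; i < n; i++)
  {
    sum += *p++;           /* Equivalent to sum += x[i] */
  }
  return sum;
}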
C.3.3 C Functions
As discussed earlier, all C programs consist of one or more functions, including the
main(). In C, functions (subroutines) are available from libraries such as the standard C
library and programmer-defined routines that are written specifically to accompany the
main function, such as:
void shift(float *, int, float);
float fir(float *, float *, int);
These functions are sets of statements that typically perform an operation, such as
shift to update data buffers for FIR filters, or as fir to compute an output of the
FIR filter.
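As an illustrative sketch (not the actual code used in firfltr.c), a function matching the fir prototype above could compute one output sample as the inner product of the coefficient and signal buffers:

/* Compute one FIR filter output as the sum of b[l]*x[l], where */
/* x[] holds the current and past input samples                 */
float fir(float *x, float *b, int order)
{
  int l;
  float y = 0.0;

  for (l = 0; l < order; l++)
  {
    y += b[l] * x[l];      /* Accumulate coefficient-sample products */
  }
  return y;
}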
To maintain simplicity and readability for more complicated applications, we develop
programs that use a main()function plus additional functions, instead of using one
long main function. In C, any function can call any other function, or be called by any
other function. Breaking a long program into a set of simple functions has the following
advantages:
1. A function can be written and tested separately from other parts of the program.
Thus module development can be done in parallel for large projects. Several
engineers can work on the same project if it is separated into modules because the
individual modules can be developed and tested independently of each other.
3. Once a function has been carefully tested, it can be used in other programs without
being retested. This reusability is a very important issue in the development of large
software systems because it greatly reduces development time.
4. The use of functions frequently reduces the overall length of a program, because
many solutions include steps that are repeated several places in the program.
To use standard input/output functions provided by the C compiler, we must have the
statement #include <stdio.h> that includes the standard I/O header file for func-
tion declaration. Some functions that perform standard input/output identify the file to
read or write by using a file pointer to store the address of information required to
access the file. To define file pointers, we use
FILE *fpin; /* File pointer to x(n) */
FILE *fpimp; /* File pointer to b(n) */
FILE *fpout; /* File pointer to y(n) */
We can open a file with the function fopen and close it with the function fclose. For
example,
fpin = fopen(argv [1],"rb");
fpout = fopen(argv [3],"wb");
The function fopen requires two arguments: the name of the file and the mode. The name of the file is given in the character string argv [], or simply use `file_name'
where file_name is a file name that contains the desired data. For example, to open
the ASCII data file infile.dat for reading, we can use
fpin fopen("infile.dat","r");
For data files in ASCII format, the mode `r' opens the file for reading and the mode
`w' opens a new file for writing. Appending a character b to the mode string, such as `rb'
and `wb', is used to open a binary formatted data file. If the file is successfully opened,
fopen returns a pointer to FILE that references the opened file. Otherwise, it returns
NULL. The function fclose expects a pointer to FILE, such as the following statement:
fclose(fpin);
will close the file pointed to by the pointer fpin, while the statement fcloseall() used in our previous example firfltr.c will close all the open files. When a program terminates, all open files are automatically closed.
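Because fopen returns NULL when the file cannot be opened, it is good practice to check the returned pointer before using it. The following fragment is only an illustrative sketch (it assumes <stdio.h> and <stdlib.h> have been included):

FILE *fpin;

fpin = fopen("infile.dat", "r");    /* Open an ASCII file for reading */
if (fpin == NULL)
{
  printf("Cannot open infile.dat\n");
  exit(1);                          /* Abort if the open failed       */
}
/* ... read and process the data ... */
fclose(fpin);                       /* Close the file when done       */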
Functions such as scanf and printf perform formatted input/output. Formatting
input or output is to control where data is read or written, to convert input to the
desired type (int, float, etc.), and to write output in the desired manner. The functions scanf, fscanf, and sscanf provide formatted input (scanning or reading). For
example, the statement
fscanf(fpimp,"%f",&bn);
reads from an arbitrary file pointed to by the file pointer fpimp to a variable of address
&bn, and %f indicates the number is floating-point data. In addition, the formatted I/O
functions also recognize %d for decimal integers, %x for hexadecimals, %c for characters,
and %s for character strings.
The function fwrite writes binary data. That is, fwrite writes blocks of data
without formatting to a file that has been opened in binary mode. The data written may
be integers or floating-point numbers in binary form (they may represent digital sounds
or pictures). Conversely, the function fread is used for reading unformatted binary
data from a file. The function fread requires four arguments. For example, the
statement
fread(&xn, sizeof(float), 1, fpin);
reads 1 item, each item of size float data type, into the array xn from the file
pointed to by the file pointer fpin. The fread function returns the number of items
successfully read. The operator sizeof(object)has a value that is the amount of
storage required by an object. The values of sizeof for different data types may vary
from system to system. For example, in a workstation, the value of sizeof(int)is 4
(bytes), whereas on fixed-point DSP systems, the value of sizeof(int)is typically 2
(bytes).
The function fwrite expects four arguments. For example, the statement
fwrite(&yn, sizeof(float), 1, fpout);
writes the binary form float array yn of size sizeof(float)to the file pointed to by
the pointer fpout. The difference between using fwrite to write a floating-point
number to a file and using fprintf with the %f format descriptor is that fwrite
writes the value in binary format using 4 bytes, whereas fprintf writes the value as
ASCII text, which usually needs more than 4 bytes.
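Putting fread and fwrite together, a sample-by-sample processing loop might look like the following sketch (fpin and fpout are assumed to be binary files already opened for reading and writing, and the scaling by 0.5 stands in for any real processing):

float xn, yn;

/* Read one float at a time until the end of the input file */
while (fread(&xn, sizeof(float), 1, fpin) == 1)
{
  yn = 0.5f * xn;                          /* Example processing: scale by 0.5 */
  fwrite(&yn, sizeof(float), 1, fpout);    /* Write the processed sample       */
}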
The C language provides two basic methods for executing a statement or series of statements conditionally: the if statement and the switch-case statement. The if
statement allows us to test a condition and then execute statements based on whether
the given condition is true or false. The if statement has the following general
format:
if (condition)
statement1;
If the condition is true, the statement1 is executed; otherwise, this statement will be
skipped. When more than one statement needs to be executed if the condition is true, a
compound statement that consists of a left brace {, some number of statements, and a
right brace }is used as follows:
if (condition)
{
statements;
}
If the condition is true, those statements enclosed in braces are executed; if the condition
is false, we skip these statements. Figure C.2 shows the flowchart of the program control
with the simple if statement.
An if/else statement allows us to execute one set of statements if the condition is
true and a different set if the condition is false. The simplest form of an if/else
statement is
if (condition)
{
statements A;
}
else
{
statements B;
}
A flowchart for this if/else statement is illustrated in Figure C.3. By using compound statements, the if/else control structure can be nested.

[Figure C.2 Flowchart of the simple if statement: the statements are executed only if the condition is true]

[Figure C.3 Flowchart of the if/else statement: statements A are executed if the condition is true, and statements B are executed otherwise]

When a program must choose between several alternatives, the if/else statement becomes inconvenient and somewhat inefficient. When more than four alternatives from a single expression are chosen, the switch-case statement is very useful. The basic syntax of the switch-case statement is
switch(integer expression)
{
case constant_1:
statements;
break;
case constant_2:
statements;
break;
...
default:
statements;
}
Program control jumps to the statement if the case label with the constant (an integer
or single character in quotes) matches the result of the integer expression in the
switch statement. If no constant matches the expression value, the control goes to
the statement following the default label. When a match occurs, the statements
following the corresponding case label will be executed. The program execution will
continue until the end of the switch statement is reached, or a break statement that
redirects control to the end of the switch-case statement is reached. A break
statement should be included in each case not required after the last case or default
statement.
The logical operators are ! (logical NOT), && (logical AND), and || (logical OR).
C.4.3 Loops
C contains three different loop structures that allow a statement or group of statements
to be repeated for a fixed (or variable) number of times; they are the for loop, the while loop, and the do/while loop.
Many DSP operations require loops that are based on the value of the variable (loop
counter) that is incremented (or decremented) by the same amount each time it passes
through the loop. When the counter reaches a specified value, we want the program
execution to exit the loop. This type of loop can be implemented with for loop, which
combines an initialization statement, an end condition statement, and an action state-
ment into one control structure. The most frequent use of for loop is indexing an array
through its elements. For example, the simple C program listed in Section C.1 uses the
following for loop:
for(k = 1; k < N; k++)
{
  y [k] = 0.0;   /* Clear array */
}
This for statement sets k to 1 (one) first and then checks whether k is less than the number N. If the test condition is true, it executes the statement y [k] = 0.0;, increments k, and then repeats the loop until k is equal to N. Note that the integer k is incremented at the end of the loop as long as the test condition k < N is true. When the loop is completed, the elements of the array y are set to zero from y [1] up to y [N-1].
The while loop repeats the statements until a test expression becomes false or zero.
The general format of a while loop is
while(condition)
{
statements;
}
The condition is evaluated before the statements within the loop are executed. If the
condition is false, the loop statements are skipped, and execution continues with the
statement following the while loop. If the condition is true, then the loop statements
are executed, and the condition is evaluated again. This repetition continues until the
condition is false. Note that the decision to go through the loop is made before the loop
is ever started. Thus it is possible that the loop is never executed. For example, in the
FIR filter program firfltr.c, the while loop
while((fscanf(fpimp,"%f",&bn)) != EOF)
{
  bn_coef [len_imp] = (float)bn;  /* Read coefficients */
  xn_buf [len_imp] = 0.;          /* Clear x(n) vector */
  len_imp++;                      /* Order of filter   */
}

will be executed until the file pointer reaches the end-of-file (EOF).
The do/while loop is used when a group of statements needs to be repeated and the
exit condition should be tested at the end of the loop. The general format of do/while
loop is
do {
statements;
} while(condition);
The decision to go through the loop again is made after the loop is executed so that the
loop is executed at least once. The format of do-while is similar to the while loop,
except that the do key word starts the statement and while ends the statement.
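As an illustrative sketch, a do/while loop is convenient when at least one pass through the loop is always required, for example when at least one block of samples must be processed:

int block = 0;
int num_blocks = 4;

do
{
  /* Process one block of samples here */
  block++;                 /* Move on to the next block */
} while (block < num_blocks);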
data type uses 32-bit binary values. The signed data types use 2's-complement notation.
Finally, all floating-point data types are the same and represented by the IEEE single-
precision format. It is the programmer/engineer's responsibility to correctly define and
use the data types while writing the program. When porting applications from one platform to another, it is equally important that the correct data type conversion is applied, such as converting all data defined as char from an 8-bit to a 16-bit integer.
References
[1] P. M. Embree, C Algorithms for Real-Time DSP, Englewood Cliffs, NJ: Prentice-Hall, 1995.
[2] P. M. Embree and B. Kimble, C Language Algorithms for Digital Signal Processing, Englewood
Cliffs, NJ: Prentice-Hall, 1991.
[3] D. M. Etter, Introduction to C for Engineers and Scientists, Englewood Cliffs, NJ: Prentice-Hall,
1997.
[4] S. P. Harbison and G. L. Steele, Jr., C: A Reference Manual, Englewood Cliffs, NJ: Prentice-Hall,
1987.
[5] R. Johnsonbaugh and M. Kalin, C for Scientists and Engineers, Englewood Cliffs, NJ: Prentice-
Hall, 1997.
[6] B. W. Kernighan and D. M. Ritchie, The C Programming Language, Englewood Cliffs, NJ:
Prentice-Hall, 1978.
Appendix D
About the Software
Index