ECE-VII-DSP ALGORITHMS & ARCHITECTURE PartB
PART B UNIT-5
A commonly used notation for DSP implementations is Q15. In the Q15 representation, the least
significant 15 bits represent the fractional part of a number. In a processor where 16 bits are used to
represent numbers, the Q15 notation uses the MSB to represent the sign of the number and the rest of
the bits represent the value of the number.
In general, the value of a 16-bit Q15 number N with bits b15 b14 ... b0 is given by
N = -b15 + b14*2^-1 + b13*2^-2 + ... + b0*2^-15, where b15 is the sign bit.
Multiplication of numbers represented using the Q-notation is important for DSP implementations.
Figure 5.1(a) shows typical cases encountered in such implementations.
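When two Q15 numbers are multiplied as 16-bit integers, the 32-bit product is in Q30 format and carries an extra sign bit; discarding 15 fractional bits returns the result to Q15. A minimal C sketch of this idea is given below (q15_mul is an illustrative name, not a routine from the text):

#include <stdint.h>

/* Multiply two Q15 numbers and return a Q15 result (truncated).
   The 16x16 product is in Q30 format; shifting right by 15 bits
   converts it back to Q15. The only case that overflows Q15 is
   (-1) x (-1), which is saturated to the largest positive value. */
int16_t q15_mul(int16_t a, int16_t b)
{
    int32_t p = (int32_t)a * (int32_t)b;   /* Q30 product              */
    if (p == 0x40000000L)                  /* (-1)*(-1) = +1: saturate */
        return 0x7FFF;
    return (int16_t)(p >> 15);             /* Q30 -> Q15               */
}

For example, q15_mul(0x4000, 0x4000) returns 0x2000, i.e. 0.5 x 0.5 = 0.25 in Q15.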
The implementation requires the signal samples to be delayed by one sample interval for each new output.
The next output, y(n+1), is given as y(n+1) = h(N-1)x(n-(N-2)) + h(N-2)x(n-(N-3)) + ... + h(1)x(n) + h(0)x(n+1).
Figure 5.3 shows the memory organization for the implementation of the filter. The filter coefficients and the
signal samples are stored in two circular buffers, each of a size equal to the filter length. AR2 is used to point
to the samples and AR3 to the coefficients. In order to start with the last product, the pointer register
AR2 must be initialized to access the signal sample x(n-(N-1)), and the pointer register AR3 to access
the filter coefficient h(N-1). As each product is computed and added to the previous result, the pointers
advance circularly. At the end of the computation, the signal-sample pointer is at the oldest sample,
which is replaced with the newest sample to proceed with the next output computation.
; Enter with A = the current sample x(n) (an integer), AR2 pointing to the location for the current
; sample x(n), and AR3 pointing to the Q15 coefficient h(N-1). Exit with A = y(n) as a Q15 number.
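For reference, the same computation can be sketched in C. This is a minimal, hedged illustration of an N-tap FIR filter using a circular sample buffer and Q15 arithmetic; the names fir_q15, state and pos are illustrative, and the 64-bit accumulator stands in for the processor's guard-bit accumulator:

#include <stdint.h>

/* N-tap FIR filter with Q15 coefficients and a circular state buffer.
   state[] holds the last N samples; *pos indexes the oldest sample,
   which is overwritten by the newest one (as described above).       */
int16_t fir_q15(int16_t x_new, const int16_t h[], int16_t state[],
                int *pos, int N)
{
    int64_t acc = 0;                     /* wide accumulator (guard bits)  */
    state[*pos] = x_new;                 /* replace oldest with newest     */
    int idx = *pos;
    for (int k = 0; k < N; k++) {        /* acc = sum of h(k) * x(n-k)     */
        acc += (int32_t)h[k] * (int32_t)state[idx];
        idx = (idx == 0) ? N - 1 : idx - 1;   /* step back circularly      */
    }
    *pos = (*pos + 1) % N;               /* advance to the next oldest slot */
    return (int16_t)(acc >> 15);         /* Q30 accumulator -> Q15 output  */
}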
The kind of interpolation carried out in the examples is called linear interpolation because the
convolving sequence h(n) is derived from linear interpolation of samples. Further, in this case, the
h(n) selected is just a second-order filter and therefore uses just two adjacent samples to interpolate a
sample. A higher-order filter can be used to base the interpolation on more input samples and thereby
approach ideal interpolation. Figure 5.6 shows how an interpolating filter using a 15-tap FIR filter and an
interpolation factor of 5 can be implemented. In this example, each incoming sample is followed by
four zeros to increase the number of samples by a factor of 5.
The interpolated samples are computed using a program similar to the one used for an FIR filter
implementation. One drawback of the direct implementation strategy depicted in Figure 5.6 is that
there are many multiplies in which one of the multiplying elements is zero. Such multiplies need not
be included in the computation if it is rearranged to take advantage of this fact. One such
scheme, based on generating what are called poly-phase sub-filters, is available for reducing the
computation. For a case where the number of filter coefficients N is a multiple of the interpolating
factor L, the scheme implements the interpolation filter using the poly-phase decomposition equation.
Figure 5.7 shows a scheme that uses poly-phase sub-filters to implement the interpolating filter
using the 15-tap FIR filter and an interpolation factor of 5. In this implementation, the 15 filter taps are
arranged as shown and divided into five 3-tap sub-filters. The input samples x(n), x(n-1) and x(n-2) are
used five times to generate the five output samples. This implementation requires 15 multiplies as
opposed to 75 in the direct implementation of Figure 5.6.
Figure 5.6: Interpolating filter using a 15-tap FIR filter and an interpolation factor of 5
Figure 5.7: A scheme that uses poly-phase sub-filters to implement the interpolating filter
using the 15-tap FIR filter and an interpolation factor of 5
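As a concrete illustration of the poly-phase idea, the sketch below computes the L output samples produced for each input sample using the decomposition y(mL + i) = sum over k of h(kL + i) x(m - k), i = 0, 1, ..., L-1. This is a hedged C example; interp_polyphase, x_hist and the zero-based indexing are illustrative choices, not taken from the figures:

#include <stdint.h>

/* Poly-phase interpolation by factor L with an N-tap FIR filter (N = L*K).
   x_hist[0..K-1] holds x(m), x(m-1), ..., x(m-K+1), newest first.
   y_out[0..L-1] receives the L interpolated outputs for this input period. */
void interp_polyphase(const int16_t h[], int N, int L,
                      const int16_t x_hist[], int16_t y_out[])
{
    int K = N / L;                          /* taps per sub-filter           */
    for (int i = 0; i < L; i++) {           /* one poly-phase sub-filter     */
        int64_t acc = 0;
        for (int k = 0; k < K; k++)         /* y(mL+i) = sum h(kL+i) x(m-k)  */
            acc += (int32_t)h[k * L + i] * (int32_t)x_hist[k];
        y_out[i] = (int16_t)(acc >> 15);    /* Q30 accumulator -> Q15        */
    }
}

For the example of Figure 5.7 (N = 15, L = 5, K = 3), this performs 15 multiplies per input sample, matching the count quoted above.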
To circumvent the problem of violating the sampling theorem, the signal to be decimated is first
filtered using a low pass filter. The cutoff frequency of the filter is chosen so that it is less than half the
final sampling frequency. The filtered signal can be
decimated by dropping samples. In fact, the samples that are to be dropped need not be computed at
all. Thus, the implementation of a decimator is just a FIR filter implementation in which some of the
outputs are not calculated.
Figure 5.8 shows a block diagram of a decimation filter. Digital decimation can be
implemented as depicted in Figure 5.9 for an example of a decimation filter with decimation factor of
3. It uses a low pass FIR filter with 5 taps. The computation is similar to that of a FIR filter. However,
after computing each output sample, the signal array is delayed by three sample intervals by bringing
the next three samples into the circular buffer to replace the three oldest samples.
This routine sets AR2 as the pointer for the sample circular buffer, and AR3 as the pointer for the
coefficient circular buffer.
BK = number of filter taps; AR0 = 1 = circular buffer pointer increment.
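A hedged C sketch of the same idea follows: the low-pass FIR output is evaluated only once every D input samples, so the dropped outputs are never computed. decimate_fir and its arguments are illustrative names, not the routine referred to above:

#include <stdint.h>

/* Decimation by a factor D using an N-tap low-pass FIR filter.
   x[] is the input block of length n_in (assumed >= N); y[] receives
   the decimated outputs. Returns the number of outputs written.      */
int decimate_fir(const int16_t x[], int n_in,
                 const int16_t h[], int N, int D, int16_t y[])
{
    int n_out = 0;
    for (int n = N - 1; n < n_in; n += D) {   /* only every D-th output    */
        int64_t acc = 0;
        for (int k = 0; k < N; k++)           /* y = sum of h(k) * x(n-k)  */
            acc += (int32_t)h[k] * (int32_t)x[n - k];
        y[n_out++] = (int16_t)(acc >> 15);    /* Q30 accumulator -> Q15    */
    }
    return n_out;
}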
Problems:
1. What values are represented by the 16-bit fixed point number N=4000h in
Q15 & Q7 notations?
Solution:
Q15 notation: 0.100 0000 0000 0000 (N=0.5)
Q7 notation: 0100 0000 0.000 0000 (N=+128)
Recommended Questions:
1. Describe the importance of Q-notation in DSP algorithm implementation with examples. What
are the values represented by 16- bit fixed point number N=4000h in Q15, Q10, Q7 notations?
Explain how the FIR filter algorithms can be implemented using TMS320c54xx processor.
2. Explain with the help of a block diagram and mathematical equations the implementation of a
second order IIR filter. No program code is required.
3. Write the assembly language program for TMS320C54XX processor to implement an FIR
filter.
4. What is the drawback of using linear interpolation for implementing an FIR filter in the
TMS320C54XX processor? Show the memory organization for the filter implementation.
5. Briefly explain IIR filters
6. Determine the value of each of the following 16-bit numbers represented using the given Q-
notations: (i) 4400h as a Q10 number (ii) 4400h as a Q7 number (iii) 0.3125 as a Q15 number
(iv) -0.3125 as a Q15 number.
8. Write an assembly language program for TMS320C54XX processors to multiply two Q15
numbers to produce Q15 number result.
9. What is an interpolation filter? Explain the implementation of digital interpolation using FIR
filter and poly phase sub filter.
10. Determine the value of each of the following 16-bit numbers represented using the given Q-
notations: (i) 4400h as a Q10 number (ii) 4400h as a Q7 number (iii) 0.3125 as a Q15 number
(iv) -0.3125 as a Q15 number. (MAY-JUNE 10, 6m)
12. Write an assembly language program for TMS320C54XX processors to multiply two Q15
numbers to produce Q15 number result. (Dec 12 , 6 marks)(July 11, 6m) (June/July2012,
4m)
13. What is an interpolation filter? Explain the implementation of digital interpolation using FIR
filter and poly phase sub filter. (Dec 12 8 marks)
14. Describe the importance of Q-notation in DSP algorithm implementation with examples. What
are the values represented by 16- bit fixed point number N=4000h in Q15, Q10, Q7 notations?
(MAY-JUNE 10, 6m)
15. Explain how the FIR filter algorithms can be implemented using TMS320c54xx processor.
Unit 6
By referring to eq (6.1) and eq (6.2), the difference between the DFT and the IDFT is seen to be
the sign of the argument of the exponent and the multiplication factor 1/N. The computational
complexity of computing the DFT and the IDFT is thus the same (except for the additional multiplication
factor in the IDFT). The computational complexity of computing each X(k) and all the X(k) is shown in table 6.1.
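For reference, eq (6.1) and eq (6.2) can be taken to be the standard DFT / IDFT pair (the exact notation of the text is not reproduced here):

X(k) = sum over n = 0 to N-1 of x(n) e^(-j 2 pi n k / N),        k = 0, 1, ..., N-1      (6.1)
x(n) = (1/N) sum over k = 0 to N-1 of X(k) e^(+j 2 pi n k / N),  n = 0, 1, ..., N-1      (6.2)

Computed directly, each X(k) needs N complex multiplications and N-1 complex additions, so all N values need of the order of N^2 operations; this is the kind of count table 6.1 summarizes.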
In a typical signal processing system, shown in fig. 6.1, the signal is processed using the DSP in the DFT
domain; after processing, the IDFT is taken to get the signal back in its original domain. Though a certain
amount of time is required for the forward and inverse transforms, the signal processing is carried out in the
DFT domain because of the advantages of transformed-domain manipulation. The transformed-domain
manipulations are sometimes simpler, and they are also more useful and powerful than time-domain
manipulation. For example, convolution in the time domain requires one of the signals to be folded, shifted
and multiplied by the other signal, cumulatively. Instead, when the signals to be convolved are transformed
to the DFT domain, the two DFTs are simply multiplied and the inverse transform is taken. Thus, working in
the DFT domain simplifies the process of convolution.
6.2 An FFT Algorithm for DFT Computation: As the DFT / IDFT are part of a signal processing system,
there is a need for their fast computation. Algorithms available for the fast computation of the DFT / IDFT
are referred to as Fast Fourier Transform (FFT) algorithms. There are two FFT algorithms: the Decimation-In-Time
FFT (DITFFT) and the Decimation-In-Frequency FFT (DIFFFT). The computational complexity of both
algorithms is of the order of N log2(N). From the hardware / software implementation viewpoint, the
algorithms have a similar structure throughout the computation. In-place computation is possible, reducing
the requirement for large numbers of memory locations. The features of the FFT are tabulated in table 6.2.
Consider an example of the computation of a 2-point DFT. The signal flow graph of the 2-point DITFFT
computation is shown in fig. 6.2. The input/output relations are as in eq (6.3), which are arrived at from
eq (6.1).
Similarly, the general butterfly structure for the DITFFT algorithm is shown in fig. 6.3, and the signal flow
graph for the N=8 point DITFFT is shown in fig. 6.4. The relation between the input and output of any
butterfly structure is given in eq (6.4) and eq (6.5).
Separating the real and imaginary parts, the four equations to be realized in the implementation of the
DITFFT butterfly structure are as in eq (6.6).
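A hedged C sketch of one DIT butterfly is given below; it implements A' = A + W*B and B' = A - W*B with W = Wr + jWi expanded into the four real equations of eq (6.6). The function name dit_butterfly and the use of float are illustrative; the processor implementation described in section 6.3 works with scaled Q15 data instead:

/* One DIT-FFT butterfly: A' = A + W*B, B' = A - W*B, W = Wr + j*Wi. */
void dit_butterfly(float *Ar, float *Ai, float *Br, float *Bi,
                   float Wr, float Wi)
{
    float Tr = Wr * (*Br) - Wi * (*Bi);   /* real part of W*B */
    float Ti = Wr * (*Bi) + Wi * (*Br);   /* imag part of W*B */
    float ar = *Ar, ai = *Ai;
    *Ar = ar + Tr;   /* Ar' = Ar + (Wr*Br - Wi*Bi) */
    *Ai = ai + Ti;   /* Ai' = Ai + (Wr*Bi + Wi*Br) */
    *Br = ar - Tr;   /* Br' = Ar - (Wr*Br - Wi*Bi) */
    *Bi = ai - Ti;   /* Bi' = Ai - (Wr*Bi + Wi*Br) */
}

In the fixed-point implementation of section 6.3, each input is additionally scaled (by 1/4, i.e. two right shifts) before these equations are evaluated, to prevent overflow.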
Observe that with N = 2^M, the number of stages in the signal flow graph is M, the number of complex
multiplications is (N/2)log2(N) and the number of complex additions is N log2(N). The number of
butterfly structures per stage is N/2. They are identical and hence in-place computation is possible; reusability
of hardware designed for implementing the butterfly structure is also possible. However, in case the FFT
is to be computed for an input sequence of length other than 2^M, the sequence is extended to N = 2^M by
appending additional zeros. The process does not alter the information content of the signal; it improves the
frequency resolution. To make the point clear, consider a sequence whose spectrum is shown in fig. 6.5.
The spectrum is sampled to get the DFT with only N = 10; the result is shown in fig. 6.6.
The variations in the spectrum are not traced or captured by the DFT with N = 10; for example, the dip in
the spectrum near sample no. 2 and the one between sample nos. 7 and 8 are not represented in the DFT.
The DFT plot for N = 16 is shown in fig. 6.7. As depicted there, the approximation to the spectrum
with N = 16 is better than with N = 10. Thus, increasing N to a suitable value as required by the algorithm
improves the frequency resolution.
Problem P6.1: What minimum size FFT must be used to compute a DFT of 40 points? What
must be done to samples before the chosen FFT is applied? What is the frequency resolution
achieved?
Solution:
The minimum size FFT for a 40-point sequence is a 64-point FFT. The sequence is extended to 64 points by
appending 24 additional zeros. The process improves the frequency resolution from fs/40 to fs/64, where fs is
the sampling frequency.
6.3 Overflow and Scaling: In any processing system, the number of bits per data word is fixed and is
limited by the DSP processor used. The limited number of bits can lead to overflow, which results in
erroneous answers. In Q15 notation, the range of numbers that can be represented is -1 to 1. If the value
of a number exceeds these limits, there will be underflow / overflow. Data is scaled down to avoid overflow.
However, it is an additional multiplication operation. Scaling operation is simplified by
selecting scaling factor of 2^-n. And scaling can be achieved by right shifting data by n bits. Scaling
factor is defined as the reciprocal of maximum possible number in the operation. Multiply all the
numbers at the beginning of the operation by scaling factor so that the maximum number to be
processed is not more than 1. In the case of the DITFFT computation, consider, for example, the real part of
the butterfly output, Ar' = Ar + Br cos(θ) + Bi sin(θ).
To find the maximum possible value of the LHS term, differentiate with respect to θ and equate to zero;
with each input bounded by 1, the maximum works out to 1 + sqrt(2) = 2.414.
Thus the scaling factor is 1/2.414 = 0.414. A scaling factor of 0.25 is used instead so that it can be
implemented by shifting the data by two positions to the right. The symbolic representation
of Butterfly Structure is shown in fig. 6.8. The complete signal flow graph with scaling factor is shown
in fig. 6.9.
6.4 Bit-Reversed Index Generation: As noted in table 6.2, the DITFFT algorithm requires the input in bit-
reversed order. The input sequence can be arranged in bit-reversed order by the reverse-carry add operation:
add half of the DFT size (N/2) to the present bit-reversed index to get the next bit-reversed index, employing
reverse carry propagation while adding the bits from left to right. The original index and the bit-reversed index
for N = 8 are listed in table 6.3.
Consider an example of computing the bit-reversed index. Let the present bit-reversed index be
110. Adding N/2 = 100 with reverse carry propagation gives the next bit-reversed index as 001.
There are addressing modes in the DSP supporting bit-reversed indexing, which carry out this computation of
the reversed index.
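A hedged C sketch of the reverse-carry addition described above follows; next_bitrev_index is an illustrative name, and on the TMS320C54xx itself this is done in hardware by the bit-reversed addressing mode with AR0 = N/2:

/* Next bit-reversed index: add N/2 to idx with reverse carry propagation.
   idx is the present bit-reversed index, N the (power-of-two) DFT size.  */
unsigned next_bitrev_index(unsigned idx, unsigned N)
{
    unsigned bit = N >> 1;        /* start at the MSB, whose weight is N/2 */
    while (idx & bit) {           /* carry propagates from left to right   */
        idx &= ~bit;              /* clear the bit that generated a carry  */
        bit >>= 1;
    }
    return idx | bit;             /* set the first clear bit encountered   */
}

Starting from 000 and applying this repeatedly for N = 8 generates 000, 100, 010, 110, 001, 101, 011, 111, the usual bit-reversed order; in particular, the index after 110 is 001, as in the example above.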
6.5 Implementation of FFT on TMS320C54xx: The main program flow for the implementation of
DITFFT is shown in fig. 6.10. The subroutines used are _clear to clear all the memory locations
reserved for the results. _bitrev stores the data sequence x (n) in bit reverse order. _butterfly computes
the four equations of computing real and imaginary parts of butterfly structure. _spectrum computes
the spectrum of x (n). The Butterfly subroutine is invoked 12 times and the other subroutines are
invoked only once.
Clear subroutine is shown in fig. 6.11. Sixteen locations meant for final results are cleared. AR2 is
used as pointer to the locations. Bit reverse subroutine is shown in fig. 6.12. Here, AR1 is used as
pointer to x(n). AR2 is used as pointer to X(k) locations. AR0 is loaded with 8 and used in bit reverse
addressing. Instead of N/2 =4, it is loaded with N=8 because each X(k) requires two locations, one for
real part and the other for imaginary part. Thus, x(n) is stored in alternate locations, which are meant
for real part of X(k). AR3 is used to keep track of number of transfers.
The butterfly subroutine is invoked 12 times. Part of the subroutine is shown in fig. 6.13. The real and
imaginary parts of the A and B input data of the butterfly structure are divided by 4, which is the scaling
factor; the division is carried out by shifting the data to the right by two places. The real part of the A data,
divided by 2, is stored in a temp location and used further in the computation of eq (3) and eq (4) of the
butterfly. AR5 points to the real part of the A input data, AR2 points to the real part of the B input data
and AR3 points to the real part of the twiddle factor when the butterfly subroutine is invoked. After all four
equations are computed, the pointers are in the same positions as they were when the subroutine was invoked;
thus, the results are stored such that in-place computation is achieved. Figs. 6.14 through 6.17 show the
butterfly subroutine for the computation of the four equations.
Figure 6.18 depicts the part of the main program that invokes butterfly subroutine by supplying
appropriate inputs, A and B to the subroutine. The associated butterfly structure is also shown for
quick reference. Figures 6.19 and 6.20 depict the main program for the computation of 2nd and 3rd
stage of butterfly.
After the computation of X(k), the spectrum is computed using eq (6.8). The pointer AR1
is made to point to X(k), AR2 is made to point to the locations meant for the spectrum, and AR3 is loaded
with a count that keeps track of the number of computations to be performed. The initialization of
the pointer registers before invoking the spectrum subroutine is shown in fig. 6.21, and the
subroutine itself is shown in fig. 6.22. In the subroutine, the squares of the real and imaginary parts are
computed and added. The result is converted to Q15 notation and stored.
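A hedged C sketch of the spectrum computation follows. It mirrors the description above (sum of the squares of the real and imaginary parts); spectrum_q15 is an illustrative name, and the exact form of eq (6.8) is not reproduced here:

#include <stdint.h>

/* Power spectrum |X(k)|^2 from interleaved Q15 FFT results.
   X[] holds Re{X(0)}, Im{X(0)}, Re{X(1)}, Im{X(1)}, ...
   Assumes the scaled results keep |X(k)| <= 1 so the Q15 output
   does not overflow.                                             */
void spectrum_q15(const int16_t X[], int16_t spec[], int N)
{
    for (int k = 0; k < N; k++) {
        int32_t re = X[2 * k];
        int32_t im = X[2 * k + 1];
        int64_t p  = (int64_t)re * re + (int64_t)im * im;  /* Q30 sum     */
        spec[k] = (int16_t)(p >> 15);                      /* back to Q15 */
    }
}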
Problems:
1. Derive equations to implement a Butterfly encountered in a DIFFFT implementation.
Solution:
Butterfly structure for DIFFFT:
The input / output relations are A' = A + B and B' = (A - B)W, where W = Wr + jWi is the twiddle factor.
Separating the real and imaginary parts gives the four equations to be implemented:
Ar' = Ar + Br
Ai' = Ai + Bi
Br' = (Ar - Br)Wr - (Ai - Bi)Wi
Bi' = (Ar - Br)Wi + (Ai - Bi)Wr
2. How many add/subtract and multiply operations are needed to implement a general butterfly of
DITFFT?
Solution:
Referring to the four equations required in implementing the DITFFT butterfly structure: add/subtract
operations = 6 and multiply operations = 4.
3. Derive the optimum scaling factor for the DIFFFT Butterfly structure.
Solution: The four equations of the butterfly structure are those derived in problem 1 above.
Differentiating the 4th relation and setting the derivative to zero (any equation may be considered),
the optimum scaling factor works out to 0.707. To achieve multiplication by a right shift, it is chosen as 0.5.
Recommended Questions:
18. Write a subroutine program to find the spectrum of the transformed data using TMS320C54XX
DSP. (DEC 2012, 6m)
19. With the help of the implementation structure, explain the FFT algorithm for DIT-FFT
computation on TMS320C54XX processors. Use ¼ as a scale factor for all butterflies
20. Determine the following for a 128-point FFT computation: (i) number of stages (ii) number of
butterflies in each stage (iii) number of butterflies needed for the entire computation (iv)
number of butterflies that need no twiddle factors (v) number of butterflies that require real
twiddle factors (vi) number of butterflies that require complex twiddle factors. (MAY-JUNE
11)
Unit 7
7.2 Memory Space Organization: The memory space of the TMS320C54xx consists of 192K words of 16 bits
each, divided into program memory, data memory and I/O space of 64K words each. The actual amount and
type of memory depend on the particular DSP device of the family. If the memory available on a DSP is not
sufficient for an application, external memory can be interfaced as depicted in fig. 7.2. On-chip memory is
faster than external memory and has no interfacing requirements; because it is on-chip, power consumption is
lower and size is smaller, and the DSP exhibits better performance because of better data flow within the
pipeline. Such memory is used to hold program code and instructions, constant data such as filter coefficients
and filter order, and trigonometric tables or kernels of transforms employed in an algorithm. Besides constants,
on-chip memory is also used to hold variable data and intermediate results so that the processor need not refer
to external memory for the purpose.
External memory is off-chip and slower, and external interfacing is required to establish communication
between the memory and the DSP, but it can provide a large memory space. Its purpose is to store variable
data and to serve as scratch-pad memory. Program memory can be ROM, dual-access RAM (DARAM),
single-access RAM (SARAM), or a combination of these. The program memory can be extended externally
to 8192K words, that is, 128 pages of 64K words each. The arrangement of memory and DSP in the case of
single-access RAM (SARAM) and dual-access RAM (DARAM) is shown in fig. 7.3. One set of address and
data buses is available in the case of SARAM, and two sets of address and data buses are available in the case
of DARAM; the DSP can thus access two memory locations simultaneously.
There are three bits available in the memory-mapped register PMST for the purpose of on-chip
memory mapping. The first is the MP/MC (microprocessor/microcomputer) bit. If this bit is 0, the on-chip
ROM is enabled and addressable; if it is 1, the on-chip ROM is not available. The bit can be manipulated by
software or set to the value on the MP/MC pin at system reset. The second bit is OVLY, which implies RAM
overlay. It enables on-chip DARAM data-memory blocks to be mapped into program space. If this bit is 0,
on-chip RAM is addressable in data space but not in program space; if it is 1, on-chip RAM is mapped into
both program and data space. The third bit is DROM. It enables on-chip DARAM 4-7 to be mapped into data
space. If this bit is 0, on-chip DARAM 4-7 is not mapped into data space; if it is 1, on-chip DARAM 4-7 is
mapped into data space. On-chip data memory is partitioned into several regions as shown in table 7.1. Data
memory can be on-chip or off-chip.
The on-chip memory of the TMS320C54xx can serve as both program and data memory. It enhances the
speed of program execution through parallelism: multiple-access capability is provided for concurrent
memory operations, allowing up to three reads and one write in a single access cycle. External memory is
interfaced to the DSP with a 16- to 23-bit address bus and a 16-bit data bus.
Interfacing Signals are generated by the DSP to refer to external memory. The signals required by the
memory are typically chip Select, Output Enable and Write Enable. For example, TMS320C5416 has
16K ROM, 64K DARAM and 64K SARAM.
Extended external Program Memory is interfaced with 23 address lines i.e., 8192K locations. The
external memory thus interfaced is divided into 128 pages, with 64K words per page.
7.3: External Bus Interfacing Signals: The DSP has 16 external bus interfacing signals. A signal may be a
single bit (single line) or multiple bits (multiple lines, i.e. a bus); it may be synchronous or asynchronous with
the clock; it may be active low or active high; it may be an input or an output; and the line or lines carrying it
may be unidirectional or bidirectional. The characteristics of a signal depend on the purpose it serves. The
signals available in the TMS320C54xx are listed in table 7.2 (a) and table 7.2 (b).
Among the external bus interfacing signals, the address bus and data bus are multi-line buses. The address bus
is unidirectional and carries the address of the location referred to. The data bus is bidirectional and carries data
to or from the DSP; when the data lines are not in use, they are tri-stated. Data Space Select, Program Space
Select and I/O Space Select are meant for data-space, program-space or I/O-space selection. These
interfacing signals are all active low and remain active during the entire data-memory, program-memory or
I/O-space reference. The Read/Write signal determines whether the DSP is reading from or writing to the
external device.
Read/Write Signal is low when DSP is writing and high when DSP is reading. Strobe Interfacing
Signals, Memory Strobe and I/O Strobe both are active low. They remain low
during the entire read and write operations of memory and I/O, respectively. External bus interfacing signals
1-8 are all unidirectional except the data bus, which is bidirectional. The address lines are outgoing signals,
and all the other control signals in this group are also outgoing signals.
Data Ready signal is used when a slow device is to be interfaced. Hold Request and Hold
Acknowledge are used in conjunction with DMA controller. There are two Interrupt related signals:
Interrupt Request and Interrupt Acknowledge. Both are active low. An interrupt request is typically used for
data exchange, for example with an ADC or another processor. The TMS320C5416 has 14 hardware interrupts
serving user interrupts, the McBSPs, DMA and the timer. The External Flag (XF) is an active-high,
asynchronous, outgoing control signal; it initiates an action or informs the peripheral device about the
completion of a transaction. The Branch Control Input (BIO) is an active-low, asynchronous, incoming control
signal; a low on this signal makes the DSP respond or attend to the peripheral device, informing the DSP about
the completion of a transaction.
7.4 The Memory Interface: A memory is organized as a number of locations of a certain number of bits each.
The number of locations decides the address-bus width and the memory capacity, and the number of bits per
location decides the data-bus width and hence the word length. Each location has a unique address. The
demand of an application may be such that the memory capacity required is more than that available in a
single memory IC (insufficient words), or the word length required may be more than that available in a
memory IC (insufficient word length). In both cases, more memory ICs are required.
Typical signals in a memory device are address bus to carry address of referred memory location. Data
bus carries data to or from referred memory location. Chip Select Signal selects one or more memory
ICs among many memory ICs in the system. Write Enable enables writing of data available on data
bus to a memory location. Output Enable signal enables the availability of data from a memory
location onto the data bus. The address bus is unidirectional, carries address into the memory IC. Data
bus is bidirectional. Chip Select, Write Enable and Output Enable control signals are active high or
low and they carry signals into the memory ICs. The task of the memory interface is to use DSP
signals and generate the appropriate signals for setting up communication with the memory. The
logical spacing of interface is shown in fig. 7.4.
The timing sequence of memory access is shown in fig. 7.5. There are two read operations, both
referring to program memory. Read Signal is high and Program Memory Select is low. There is one
Write operation referring to external data memory. Data Memory Select is low and Write Signal low.
Read and write operations here are to memory devices and hence the memory strobe is low. Internal
program-memory reads take one clock cycle, while external data-memory accesses require two clock cycles.
7.5 Parallel I/O Interface: I/O devices are interfaced to DSP using unconditional I/O mode,
programmed I/O mode or interrupt I/O mode. Unconditional I/O does not require any handshaking
signals: the DSP assumes the readiness of the I/O and transfers the data at its own speed. Programmed
I/O requires handshaking signals: the DSP waits for the I/O readiness signal, which is one of the
handshaking signals, and after the completion of the transaction the DSP conveys the same to the I/O
through another handshaking signal.
Interrupt I/O also requires handshaking signals. DSP is interrupted by the I/O indicating the readiness
of the I/O. DSP acknowledges the interrupt, attends to the interrupt. Thus, DSP need not wait for the
I/O to respond. It can engage itself in execution as long as there is no interrupt.
7.6: Programmed I /O interface: The timing diagram in the case of programmed I/O is shown in fig.
7.6. I/O strobe and I/O space select are issued by the DSP. Two clock cycles each are required for I/O
read and I/O write operations.
An example of interfacing ADC to DSP in programmed I/O mode is shown in fig. 7.7. ADC has a start
of conversion (SOC) signal which initiates the conversion. In programmed I/O mode, external flag
signal is issued by DSP to start the conversion. ADC issues end of conversion (EOC) after completion
of conversion. The branch control input of the DSP is driven by the ADC when the ADC completes the
conversion. The DSP issues the address of the ADC, the I/O strobe and the read/write signal (high) in order
to read the data; an address decoder translates this information into an active-low read signal to the ADC.
The ADC supplies the data on the data bus and the DSP reads it. After reading, the DSP issues a start of
conversion once again after the elapse of the sample interval. Note that there are no address lines going to
the ADC; the decoded address selects the ADC. During conversion, the DSP waits, checking the branch
control input signal status for zero. The flow chart of the activities in
programmed I/O is shown in fig. 7.8.
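A hedged C-style sketch of the flow in fig. 7.8 is given below. The helpers xf_pulse, bio_is_low, io_read and wait_sample_interval are purely illustrative stand-ins for the processor's XF control, BIO pin test, PORTR instruction and a timer wait; they are not library functions, and ADC_PORT is a hypothetical decoded address:

#include <stdint.h>

extern void     xf_pulse(void);              /* toggle XF -> SOC pulse       */
extern int      bio_is_low(void);            /* test the BIO pin (EOC)       */
extern uint16_t io_read(uint16_t port);      /* stand-in for PORTR           */
extern void     wait_sample_interval(void);  /* wait out the sample period   */

#define ADC_PORT 0x0000u                     /* hypothetical decoded address */

/* Acquire n samples from the ADC in programmed I/O mode. */
void programmed_io_loop(int16_t *buf, int n)
{
    for (int i = 0; i < n; i++) {
        xf_pulse();                          /* start a conversion (SOC)        */
        while (!bio_is_low())                /* wait until EOC drives BIO low   */
            ;
        buf[i] = (int16_t)io_read(ADC_PORT); /* read the converted sample       */
        wait_sample_interval();              /* wait for the next sample instant */
    }
}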
7.7 Interrupt I/O: This mode of interfacing I/O devices also requires handshaking signals. DSP is
interrupted by the I/O whenever it is ready. DSP Acknowledges the interrupt, after testing certain
conditions, attends to the interrupt. DSP need not wait for the I/O to respond. It can engage itself in
execution. There are a variety of interrupts. One of the classifications is maskable and nonmaskable. If
maskable, DSP can ignore when that interrupt is masked. Another classification is vectored and non-
vectored. If vectored, Interrupt Service subroutine (ISR) is in specific location. In Software Interrupt,
instruction is written in the program.
In a hardware interrupt, a hardware pin on the DSP IC receives an interrupt from the external
device. A hardware interrupt is also referred to as an external interrupt, and a software interrupt as an internal
interrupt. An internal interrupt may also be caused by the execution of certain instructions. In the
TMS320C54xx there are a total of 30 interrupts: reset, non-maskable, timer interrupt and HPI (one each),
14 software interrupts, 4 external user interrupts, 6 McBSP-related interrupts and 2 DMA-related interrupts.
The Host Port Interface (HPI) is an 8-bit parallel port through which a host processor can be interfaced;
information exchange is through the on-chip memory of the DSP, which is also accessible to the host
processor.
Registers used in managing interrupts are the Interrupt Flag Register (IFR) and the Interrupt Mask
Register (IMR). The IFR maintains pending external and internal interrupts; a one in any bit position implies
a pending interrupt, and the corresponding bit is set once an interrupt is received. The IMR is used to mask or
unmask an interrupt; a one implies that the corresponding interrupt is unmasked. Both these registers
are memory-mapped registers. One flag, the global interrupt enable bit (INTM) in the ST1 register, is used to
enable or disable all interrupts globally. If INTM is zero, all unmasked interrupts are enabled; if it is one, all
maskable interrupts are disabled.
When an interrupt is received by the DSP, it checks if the interrupt is maskable. If the interrupt
is non-maskable, DSP issues the interrupt acknowledgement and thus serves the interrupt. If the
interrupt is hardware interrupt, global enable bit is set so that no other interrupts are entertained by the
DSP. If the interrupt is maskable, status of the INTM is checked. If INTM is 1, DSP does not respond
to the interrupt and it continues with program execution. If the INTM is 0, bit in IMR register
corresponding to the interrupt is checked. If that bit is 0, implying that the interrupt is masked, DSP
does not respond to the interrupt and continues with its program execution. If the interrupt is
unmasked, then DSP issues interrupt acknowledgement. Before branching to the interrupt service
routine, DSP saves the PC onto the stack. The same will be reloaded after attending the interrupt so as
to return to the program that has been interrupted. The response of DSP to an Interrupt is shown in
flow chart in fig. 7.9.
7.8: Direct Memory Access (DMA) operation: In any application, there is data transfer
between the DSP and memory and also between the DSP and I/O devices, as shown in fig. 7.10. However,
there may be a need to transfer a large amount of data between two memory regions or between memory and
I/O. The DSP can be involved in such a transfer, as shown in fig. 7.11, but since the amount of data is large,
it would engage the DSP in the data-transfer task for a long time and the DSP would not be utilized for the
purpose it is meant for, i.e., data manipulation. The intervention of the DSP has to be avoided for two
reasons: to utilize the DSP for useful signal-processing tasks and to increase the speed of transfer by direct
data transfer between memories or between memory and I/O. Such direct data transfer is referred to as direct
memory access (DMA). The arrangement expected is shown in fig. 7.12: a DMA controller manages the data
transfer instead of the DSP.
In DMA, data transfer can be between memory and peripherals which are either internal
or external devices. DMA controller manages DMA operation. Thus DSP is relieved of the task of
data transfer. Because of direct transfer, speed of transfer is high. In TMS320C54xx, there are up to 6
independent programmable DMA channels. Each channel is between certain source & destination.
One channel at a time can be used for
data transfer and not all six simultaneously. These channels can be prioritized. The speed of transfer
measured in terms of number of clock cycles for one DMA transfer depends on several factors such as
source and destination location, external interface conditions, number of active DMA channels, wait
states and bank switching time. The time for a data transfer between two internal memory locations is 4
cycles per word.
The requirements for maintaining a channel are a source and a destination address, kept separately for each
channel. Data transfer is in the form of blocks, with each block having frames of 16- or 32-bit words. The
block size, frame size and data are programmable. Along with these, the mode of transfer and the assignment
of priorities to the different channels are also to be maintained for the purpose of data transfer.
There are five channel context registers for each DMA channel: the Source Address Register (DMSRC),
Destination Address Register (DMDST), Element Count Register (DMCTR), Sync Select and Frame Count
Register (DMSFC), and Transfer Mode Control Register (DMMCR). There are also four reload registers.
The context registers DMSRC and DMDST hold the source and destination addresses. DMCTR holds the
number of data elements in a frame. DMSFC conveys the sync event used to trigger the DMA transfer, the
word size for the transfer, and the frame count.
DMMCR Controls transfer mode by specifying source and destination spaces as program memory,
data memory or I/O space. Source address reload & Destination address reload are useful in
reloading source address and destination address. Similarly, count reload and frame count reload are
used in reloading count and frame count. Additional registers for DMA that are common to all
channels are Source Program page address, DMSRCP, Destination Program page address, DMDSTP,
Element index address register, Frame index address register.
The number of memory-mapped registers needed for DMA would be 6 x (5 + 4) channel registers plus some
registers common to all channels, amounting to a total of 62 registers. However, only 3 (+1 priority-related)
are actually provided: the DMA Priority and Enable Control Register (DMPREC), the DMA sub-bank
Address Register (DMSA), the DMA sub-bank Data Register with auto-increment (DMSDI) and the DMA
sub-bank Data Register (DMSDN). To access each of the DMA registers, a register sub-addressing technique
is employed. The schematic of the arrangement is shown in fig. 7.13. The set of DMA registers of all channels
(62) is made available in a set of memory locations called the sub-bank, which avoids the need for 62
memory-mapped registers. The contents of either DMSDI or DMSDN give the value to be written to a DMA
register, and the contents of DMSA give the unique sub-address of the DMA register to be accessed. A mux
routes either DMSDI or DMSDN to the sub-bank; the memory location to be written or read within the
sub-bank is selected by the contents of DMSA.
DMSDI is used when an automatic increment of the sub address is required after each access. Thus it
can be used to configure the entire set of registers. DMSDN is used when single DMA register access
is required. The following examples bring out clearly the method of accessing the DMA registers and
transfer of data in DMA mode.
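A hedged C sketch of the sub-bank addressing technique follows. The register addresses used here (DMSA = 0x55, DMSDI = 0x56, DMSDN = 0x57) are assumptions for illustration and should be verified against the device data sheet; dma_write_reg and dma_write_block are illustrative names:

#include <stdint.h>

/* Assumed memory-mapped register addresses (verify with the data sheet). */
#define DMSA   ((volatile uint16_t *)0x0055u)  /* DMA sub-bank address register */
#define DMSDI  ((volatile uint16_t *)0x0056u)  /* sub-bank data, auto-increment */
#define DMSDN  ((volatile uint16_t *)0x0057u)  /* sub-bank data, no increment   */

/* Write a single DMA register through the sub-bank (uses DMSDN). */
static void dma_write_reg(uint16_t subaddr, uint16_t value)
{
    *DMSA  = subaddr;   /* select the DMA register by its sub-address    */
    *DMSDN = value;     /* write its contents through the data register  */
}

/* Configure a block of consecutive DMA registers (uses DMSDI, which
   auto-increments the sub-address after each access).               */
static void dma_write_block(uint16_t first_subaddr,
                            const uint16_t *values, int count)
{
    *DMSA = first_subaddr;                /* starting sub-address            */
    for (int i = 0; i < count; i++)
        *DMSDI = values[i];               /* each write advances the address */
}

For example, the five context registers of a channel can be loaded with a single call to dma_write_block, which mirrors the auto-increment use of DMSDI described above.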
Recommended Questions:
1. Explain an interface between an A/D converter and the TMS320C54XX processor in the
programmed I/O mode.
2. Describe DMA with respect to TMS320C54XX processors.
3. Draw the timing diagram for the memory interface for a read-read-write sequence of operations.
Explain the purpose of each signal involved.
4. Explain the memory interface block diagram for the TMS 320 C54xx processor.
5. Draw the I/O interface timing diagram for a read-write-read sequence of operations.
6. What are interrupts? How interrupts are handled by C54xx DSP Processors.
7. Explain the memory interface block diagram for the TMS 320 C54xx processor.
8. Draw the I/O interface timing diagram for a read-write-read sequence of operations.
9. What are interrupts? How interrupts are handled by C54xx DSP Processors.
10. Design a data memory system with address range 000800h – 000fffh for a c5416 processor
using 2kx8 SRAM memory chips.
11. Design a data memory system with address range 000800h – 000fffh for a c5416 processor
using 2kx8 SRAM memory chips. (MAY-JUNE 10, 6m)
12. Explain an interface between an A/D converter and the TMS320C54XX processor in the
programmed I/O mode. . (JUNE 12, 10m)
13. Describe DMA with respect to TMS320C54XX processors. (June/July 11, 10m)
14. Draw the timing diagram for the memory interface for a read-read-write sequence of operations.
Explain the purpose of each signal involved.(June/July 11, 10m)
15. Explain the memory interface block diagram for the TMS 320 C54xx processor.(Dec 2010)
16. Draw the I/O interface timing diagram for a read-write-read sequence of operations (Dec 2010)
17. What are interrupts? How interrupts are handled by C54xx DSP Processors. (Dec 2010,12)
18. What are interrupts? What are the classes of interrupts available in the TMS320C54xx
processor. (JUNE/July 11, 8m)
Unit 8
Interfacing and Applications of DSP Processor
8.1 Introduction: In the case of parallel peripheral interface, the data word will be transferred with all
the bits together. In addition to parallel peripheral interface, there is a
need for interfacing serial peripherals. DSP has provision of interfacing serial devices too.
8.2 Synchronous Serial Interface: There are certain I/O devices which handle transfer
of one bit at a time. Such devices are referred to as serial I/O devices or peripherals. Communication
with serial peripherals can be synchronous, with the processor clock as reference, or asynchronous.
A synchronous serial interface (SSI) provides faster serial communication than the asynchronous mode.
However, in comparison with a parallel peripheral interface, the SSI is slow; the time taken depends on the
number of bits in the data word.
8.3 CODEC Interface Circuit: A CODEC (coder-decoder) is an example of a synchronous serial I/O device.
It has analog input and output, an ADC and a DAC. The signals in the SSI generated by the DSP are DX: Data
Transmit to CODEC, DR: Data Receive from CODEC, CLKX: Transmit data with this clock
reference, CLKR: Receive data with this clock reference, FSX: Frame sync signal for transmit, FSR:
Frame sync signal for receive, First bit, during transmission or reception, is in sync with these signals,
RRDY: indicator for receiving all bits of data and XRDY: indicator for transmitting all bits of data.
Similarly, on the CODEC side, signals are FS*: Frame sync signal, DIN: Data Receive from DSP,
DOUT: Data Transmit to DSP and SCLK: Tx / Rx data with this clock reference. The block diagram
depicting the interface between TMS320C54xx and CODEC is shown in fig. 8.1. As only one signal
each is available on CODEC for clock and frame synchronization, the related DSP side signals are
connected together to clock and frame sync signals on CODEC. Fig. 8.2 and fig. 8.3 show the timings
for receive and transmit in SSI, respectively.
As shown, the receive or transmit activity is initiated at the rising edge of the clock, CLKR / CLKX.
Reception / transmission starts after FSR / FSX remains high for one clock cycle. RRDY / XRDY is initially
high and makes a LOW-to-HIGH transition after the completion of the data transfer. Each bit transferred
requires one clock cycle; thus, the time required to transmit or receive a data word depends on the number of
bits in the word. An example with an 8-bit data word is shown in fig. 8.2 and fig. 8.3.
Fig. 8.4 shows the block diagram of the PCM3002 CODEC. The analog front end samples the signal at a 64x
over-sampling rate, which eliminates the need for a sample-and-hold circuit and simplifies the anti-aliasing
filter. The ADC is based on a delta-sigma modulator that converts the analog signal to digital form; a
decimation filter then reduces the sampling rate so that the subsequent processing does not need high-speed
devices. The DAC is also a delta-sigma modulator that converts the digital signal back to analog; interpolation
increases the sampling rate back to the original value, and an LPF smoothens the reconstructed analog signal
by removing high-frequency components. The serial interface handles the serial data transfer. It accepts the
built-in ADC output and
converts to serial data and transmits the same on DOUT. It also accepts serial data on DIN & gives the
same to DAC. The serial interface works in synchronization with BCLKIN & LRCIN. The Mode
Control initializes the serial data transfer. It sets all the desired modes, the number of bits and the
mode Control Signals, MD, MC and ML. MD carries Mode Word. MC is the mode Clock Signal. MD
to be loaded is sent with reference to this clock. ML is the mode Load Signal. It defines start and end
of latching bits into CODEC device.
Figure 8.5 shows interfacing of PCM3002 to DSP in DSK. DSP is connected to PCM3002 through
McBSP2. The same port can be connected to HPI. Mux selects one among these two based on CPLD
signal. CPLD in Interface also provides system clock for DSP and for CODEC, Mode control signals
for CODEC. CPLD generates BCLKIN and LRCIN signals required for serial interface.
PCM3002 CODEC handles data size of 16 / 20 bits. It has 64x over-sampling, delta-sigma ADC &
DAC. It has two channels, called left and right. The CODEC is programmable for digital de-emphasis,
digital attenuation, soft mute, digital loop back, power-down mode. System clock, SYSCLK of
CODEC can be 256fs, 384fs or 512fs. Internal clock is always 256fs for converters, digital filters.
DIN, DOUT are the single line data lines to carry the data into the CODEC and from CODEC.
Another signal, BCLKIN, is the data bit clock, whose default value is CODEC SYSCLK / 4. LRCIN is the
frame sync signal for the left and right channels; the frequency of this signal is the same as the sampling
frequency. The divide factor can be 2, 4, 6 or 8. Thus, the sampling rate is a minimum of 6 kHz and a
maximum of 48 kHz.
Problem P8.1: A PCM3002 is programmed for the 12 KHz sampling rate. Determine the divisor N
that should be written to the CPLD of the DSK and the various clock frequencies for the set up.
Problem P8.3: Frame Sync is generated by dividing the 8.192MHz clock by 256 for the
serial communication. Determine the sampling rate and the time a 16 bit sample takes when
transmitted on the data line.
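A hedged worked solution for P8.3, assuming the 8.192 MHz clock is also the serial bit clock on the data line: the sampling rate is 8.192 MHz / 256 = 32 kHz (a sample period of 31.25 us), and a 16-bit sample occupies 16 bit-clock periods, i.e. 16 / 8.192 MHz = 1.95 us on the data line.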
The CODEC PCM3002 supports four data formats as listed in table 8.1. The four data formats depend
on the number of bits in the data word, if the data is right justified or left justified with respect to
LRCIN and if it is I2S (Integrated Inter-chip Sound) format.
Figure 8.6 and fig. 8.7 depict the data transactions for the PCM3002 CODEC. As shown in fig. 8.6, DIN
(/DOUT) carries the data and BCLKIN is the reference for the transfer. When LRCIN is high, the left channel
inputs (/outputs) the data, and when LRCIN is low, the right channel inputs (/outputs) the data. The data bits at
the end (/beginning) of the LRCIN period are thus right (/left) justified.
Another data format handled by the PCM3002 is I2S (Integrated Inter-chip Sound), used for transferring
PCM data between the CD transport and the DAC in a CD player. LRCIN is low for the left channel and high
for the right channel in this mode of transfer. During the first BCLKIN period there is no transmission by the
ADC; from the second BCLKIN period onwards the data is transmitted, MSB first and LSB last. The
left-channel data is handled first, followed by the right-channel data.
An example of processing ECG signal is considered. The scheme involves modulation of ECG signal
by employing Pulse Position Modulation (PPM). At the receiving end, it is
demodulated. This is followed by determination of the heart rate (HR). A PPM signal encodes either a single
signal or multiple signals; the principle of the modulation is that the position of a pulse conveys the sample
value.
The PPM signal with two ECG signals encoded is shown in fig. 8.9. The transmission requires a sync
signal which has 2 pulses of equal interval to mark beginning of a cycle.
The sync pulses are followed by certain time gap based on the amplitude of the sample of 1st signal to
be transmitted. At the end of this time interval there is another pulse. This is again followed by time
gap based on the amplitude of the sample of the 2nd signal to be transmitted. After encoding all the
samples, there is a compensation time gap followed by sync pulses to mark the beginning of next set
of samples. Third signal may be encoded in either of the intervals of 1st or 2nd signal. With two
signals encoded and the pulse width as tp, the total time duration is 5tp.
Since the time gap between the pulses represents the sample value, at the receiving end the time gap has to be
measured and the value so obtained translated into a sample value. The scheme for decoding is shown in
fig. 8.10. The DSP's internal timer is employed: the pulses in the PPM signal generate interrupt signals for the
DSP, and the interrupts start / stop the timer. The count in the timer is equivalent to the sample value that has
been encoded. Thus, an ADC is avoided while decoding the PPM signal.
A DSP based PPM signal decoding is shown in fig. 8.11. PPM signal interface generates the interrupt
for DSP. DSP entertains the interrupt and starts a timer. When it receives another interrupt, it stops the
timer and the count is treated as the digital equivalent of the sample value. The process repeats. Dual
DAC converts the two encoded signals into analog signals, and the heart rate is determined by referring to the
ECG obtained by decoding.
Heart rate (HR) is a measure of the time interval between QRS complexes in the ECG signal. The QRS
complex is an important segment of the ECG representing the heart beat; the periodicity of its appearance
indicates the heart rate. The algorithm is based on the first- and second-order absolute derivatives of the ECG
signal. Since the absolute value of the derivative is taken, the filtering is nonlinear.
The mean of half of the peak amplitudes is determined, which serves as the threshold for detection of the QRS
complex. The QRS interval is then the time interval between two such peaks, determined using the internal
timer of the DSP. The heart rate, in heartbeats per minute, is computed using the relation
HR = sampling rate x 60 / QRS interval. The signals at the various stages are shown in fig. 8.12.
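A hedged C sketch of the final heart-rate calculation follows; qrs_interval_samples is assumed to be the number of samples (at the given sampling rate) between two detected QRS peaks, as described above:

/* Heart rate in beats per minute from the interval between two QRS peaks,
   using HR = sampling rate x 60 / QRS interval (interval in samples).     */
double heart_rate_bpm(double sampling_rate_hz, unsigned qrs_interval_samples)
{
    if (qrs_interval_samples == 0)
        return 0.0;                 /* no valid interval detected */
    return sampling_rate_hz * 60.0 / (double)qrs_interval_samples;
}

For example, with a 200 Hz sampling rate and a QRS interval of 160 samples, HR = 200 x 60 / 160 = 75 beats per minute.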
8.5 A Speech Processing System: Speech processing is carried out for analysis, for transmission or reception
as in radio / TV / telephony, for denoising, for compression and so on. Applications of speech processing
include identification and verification of the speaker, speech synthesis, voice-to-text conversion and vice
versa, and so on. A speech processing system has a vocoder, a voice coding / decoding circuit.
Schematic of speech production is shown in fig. 8.13. The vocal tract has vocal cord at one end and
mouth at the other end. The shape of the vocal tract depends on position of lips, jaws, tongue and the
velum. It decides the sound that is produced. There is another tract, nasal tract. Movement of velum
connects or disconnects nasal tract. The overall voice that sounds depends on both, the vocal tract and
nasal tract.
Two types of speech are voiced sound and unvoiced sound. Vocal tract is excited with quasi periodic
pulses of air pressure caused by vibration of vocal cords resulting in voiced sound. Unvoiced sound is
produced by forcing air through the constriction, formed somewhere in the vocal tract and creating
turbulence that produces source of noise to excite the vocal tract.
By the understanding of speech production mechanism, a speech production model representing the
same is shown in fig. 8.14. Pulse train generator generates periodic pulse train. Thus it represents the
voiced speech signal. Noise generator represents unvoiced speech. Vocal tract system is supplied
either with periodic pulse train or noise. The final output is the synthesized speech signal.
A sequence of peaks occurs periodically in voiced speech; the rate of this repetition is the fundamental
frequency of the speech. The fundamental frequency differs from person to person and hence the sound of the
speech differs from person to person. Speech is a non-stationary signal; however, it can be considered
relatively stationary over intervals of 20 ms. The fundamental frequency of speech can be determined as
follows. The speech signal s(t) is filtered to retain frequencies up to 900 Hz and sampled using an ADC to get
s(n). The sampled signal is processed by dividing it into sets of samples of 30 ms duration with 20 ms overlap
of the windows, as shown in fig. 8.16.
A threshold is set for three level clipping by computing minimum of average of absolute values of 1st
100 samples and last 100 samples. The scheme is shown in fig. 8.17.
The transfer characteristic of the three-level clipping circuit is shown in fig. 8.18. If the sample value is
greater than +CL, the output y(n) of the clipper is set to 1. If the sample value is more negative than -CL, the
output is set to -1. If the sample value lies between -CL and +CL, the output is set to 0.
The autocorrelation of y(n) is computed; since y(n) takes only the values 0, 1 or -1, each product term is 0, 1
or -1, as defined by eq (1). The largest peak in the autocorrelation is found and the peak value is compared
with a fixed threshold. If the peak value is below the threshold, the segment of s(n) is classified as unvoiced;
if it is above the threshold, the segment is classified as voiced. The functioning of the autocorrelation is shown
in fig. 8.19.
As shown in fig. 8.19, A is a sample sequence y(n). B is a window of samples of length N and it is
compared with the N samples of y(n). There is maximum match. As the window is moved further, say
to a position C the match reduces. When window is moved further say to a position D, again there is
maximum match. Thus, sequence y(n) is periodic. The period of repetition can be measured by
locating the peaks and finding the time gap between them.
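A hedged C sketch of the centre-clipping and autocorrelation steps follows; three_level_clip and autocorr_peak_lag are illustrative names, and the threshold CL is assumed to have already been computed from the first and last 100 samples as described above:

/* Three-level clipping: +1 above +CL, -1 below -CL, 0 otherwise. */
void three_level_clip(const float s[], int y[], int N, float CL)
{
    for (int n = 0; n < N; n++)
        y[n] = (s[n] > CL) ? 1 : (s[n] < -CL) ? -1 : 0;
}

/* Autocorrelation of the clipped sequence; returns the lag (>= min_lag) of
   the largest positive peak, which corresponds to the pitch period.       */
int autocorr_peak_lag(const int y[], int N, int min_lag, int max_lag)
{
    int  best_lag = 0;
    long best_val = 0;
    for (int lag = min_lag; lag <= max_lag && lag < N; lag++) {
        long r = 0;
        for (int n = 0; n + lag < N; n++)
            r += (long)y[n] * y[n + lag];      /* each term is -1, 0 or +1 */
        if (r > best_val) { best_val = r; best_lag = lag; }
    }
    return best_lag;    /* 0 when no positive peak is found */
}

The peak value found here is then compared with the fixed threshold to classify the segment as voiced or unvoiced, and for a voiced segment the pitch is the sampling rate divided by the returned lag.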
8.6 An Image Processing System: In comparison with the ECG or speech signals considered so far, an image
has entirely different requirements. It is a two-dimensional signal and can be a color or gray image. A color
image requires three matrices to be maintained, one for each of the primary colors red, green and blue; a gray
image requires only one matrix, holding the gray level of each pixel (picture cell). An image is a signal with a
large amount of data. Of the many processing operations (enhancement, restoration, etc.), image compression
is an important one precisely because of this large amount of data.
To reduce the storage requirement and also to reduce the time and band width required to transmit the
image, it has to be compressed. Data compression of the order of factor 50 is sometimes preferred.
JPEG, a standard for image compression employs lossy compression technique. It is based on discrete
cosine transform (DCT). Transform domain compression separates the image signal into low
frequency components and high frequency components. Low frequency components are retained
because they represent major variations. High frequency components are ignored because they
represent minute variations and our eye is not sensitive to minute variations.
The image is divided into blocks of 8 x 8 pixels and the DCT is applied to each block. Low-frequency
coefficients are of higher value and hence are retained; the amount of high-frequency content retained is
decided by the desired quality of the reconstructed image. The forward DCT is given by eq (2).
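For reference, eq (2) can be taken as the standard 8 x 8 forward DCT-II used in JPEG; a hedged C sketch is given below (dct8x8_forward is an illustrative name, and quantization is not included here):

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Forward 8x8 DCT-II:
   F(u,v) = (1/4) C(u) C(v) sum_{x=0..7} sum_{y=0..7}
            f(x,y) cos((2x+1) u pi / 16) cos((2y+1) v pi / 16),
   with C(0) = 1/sqrt(2) and C(k) = 1 for k > 0.                    */
void dct8x8_forward(const double f[8][8], double F[8][8])
{
    for (int u = 0; u < 8; u++) {
        for (int v = 0; v < 8; v++) {
            double cu = (u == 0) ? 1.0 / sqrt(2.0) : 1.0;
            double cv = (v == 0) ? 1.0 / sqrt(2.0) : 1.0;
            double sum = 0.0;
            for (int x = 0; x < 8; x++)
                for (int y = 0; y < 8; y++)
                    sum += f[x][y]
                         * cos((2 * x + 1) * u * M_PI / 16.0)
                         * cos((2 * y + 1) * v * M_PI / 16.0);
            F[u][v] = 0.25 * cu * cv * sum;   /* low frequencies at small u,v */
        }
    }
}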
Since the coefficient values may vary over a large range, they are quantized. As already noted, low-frequency
coefficients are significant and high-frequency coefficients are insignificant, so they are allotted varying
numbers of bits: significant coefficients are quantized precisely, with more bits, and insignificant coefficients
are quantized coarsely, with fewer bits. To achieve this, a quantization table as shown in fig. 8.20 is employed.
The contents of the quantization table indicate the step size for quantization; a smaller entry implies a smaller
step size, leading to more bits for that coefficient, and vice versa.
The quantized coefficients are coded using Huffman coding, a variable-length coding in which shorter codes
are allotted to frequently occurring sequences of 1's and 0's. Decoding requires the Huffman table and the
dequantization table; the inverse DCT is then taken employing eq (3), and the data blocks so obtained are
combined to form the complete image. The schematic of encoding and decoding is shown in fig. 8.21.
Recommended Questions:
1. With the help of a block diagram, explain the image compression and reconstruction using
JPEG encoder and decoder.
2. Write a pseudo algorithm to determine the heart rate (HR) using the digital signal processor.
3. Explain briefly the building blocks of a PCM3002 CODEC device. What do you understand by
a DSP based biotelemetry receiver?
4. With the help of block diagram explain JPEG algorithm.
5. Explain with the neat diagram the operation of pitch detector.
6. Explain with a neat diagram, the synchronous serial interface between the C54xx and a
CODEC device. Explain the operation of pulse position modulation (PPM) to encode two
biomedical signals.
7. Explain with a neat block diagram the operation of the pitch detector.