
Channel Coding I

Exercises

– WS 2009/2010 –

Carsten Bockelmann
NW1, Room N 2360, Tel.: 0421/218-62386
E-mail: bockelmann@ant.uni-bremen.de

Universität Bremen, FB1


Institut für Telekommunikation und Hochfrequenztechnik
Arbeitsbereich Nachrichtentechnik
Prof. Dr.-Ing. K. D. Kammeyer
Postfach 33 04 40
D–28334 Bremen

WWW-Server: http://www.ant.uni-bremen.de

Version from October 26, 2009



Contents

1 Introduction
Exercise 1.1: Design of a discrete channel
Exercise 1.2: Statistics of the discrete channel
Exercise 1.3: Binary symmetric channel (BSC)
Exercise 1.4: Serial concatenation of two BSCs
Exercise 1.5: Transmission of coded data over BSCs

2 Information theory
Exercise 2.1: Entropy
Exercise 2.2: Channel capacity of a discrete memoryless channel
Exercise 2.3: Channel capacity of a BSC
Exercise 2.4: Channel capacity of the AWGN channel

3 Linear block codes
3.2 Finite field algebra
Exercise 3.1: 2-out-of-5-code
Exercise 3.2: Field
3.3 Distance properties of block codes
Exercise 3.3: Error correction
Exercise 3.4: Sphere-packing bound
3.5 Matrix description of block codes
Exercise 3.5: Generator and parity check matrices
Exercise 3.6: Extension, shortening, and puncturing of codes
Exercise 3.7: Coset decomposition and syndrome decoding
Exercise 3.8: Coding program
3.6 Cyclic codes
Exercise 3.9: Polynomial multiplication
Exercise 3.10: Polynomial division
Exercise 3.11: Generator polynomial
Exercise 3.12: Syndrome
Exercise 3.13: Primitive polynomials
Exercise 3.14: CRC codes
3.7 Reed-Solomon and BCH codes
Exercise 3.15: Reed-Solomon codes

Exercise 3.16: BCH codes
Exercise 3.17: Bit and word error curves of BCH codes

4 Convolutional codes
4.1 Fundamental principles
Exercise 4.1: Convolutional code
4.2 Characterization of convolutional codes
Exercise 4.2: Catastrophic codes
4.3 Distance properties of convolutional codes
Exercise 4.3: Distance properties of convolutional codes*
Exercise 4.4: Comparison of NSC and RSC codes*
4.4 Puncturing of rate-1/n convolutional codes
Exercise 4.5: Puncturing of convolutional codes*
4.5 Optimal decoding with the Viterbi algorithm
Exercise 4.6: Viterbi decoding
Exercise 4.7: Viterbi decoding with puncturing
Exercise 4.8: Coding and decoding of an RSC code
Exercise 4.9: Investigation of Viterbi decoding
Exercise 4.10: Simulation of a convolutional encoder and decoder

1 Introduction
Solution 1.1: Design of a discrete channel
Solution 1.3: Binary symmetric channel (BSC)
Solution 1.4: Serial concatenation of two BSCs
Solution 1.5: Transmission of coded data over BSCs

2 Information theory
Solution 2.1: Entropy
Solution 2.2: Channel capacity of a discrete memoryless channel
Solution 2.3: Channel capacity of a binary channel
Solution 2.4: Channel capacity of the AWGN channel

3 Linear block codes
3.2 Finite field algebra
Solution 3.1: 2-out-of-5-code
Solution 3.2: Field

Solution 3.3: Error correction
3.3 Distance properties of block codes
Solution 3.4: Sphere-packing bound
3.5 Matrix description of block codes
Solution 3.5: Generator and parity check matrices
Solution 3.6: Extension, shortening, and puncturing of codes
Solution 3.7: Coset decomposition and syndrome decoding
Solution 3.8: Coding program
3.6 Cyclic codes
Solution 3.9: Polynomial multiplication
Solution 3.10: Polynomial division
Solution 3.11: Generator polynomial
Solution 3.12: Syndrome
Solution 3.13: Primitive polynomials
Solution 3.14: CRC codes
3.7 Reed-Solomon and BCH codes
Solution 3.15: Reed-Solomon codes
Solution 3.16: BCH codes
Solution 3.17: Bit and word error curves of BCH codes

4 Convolutional codes
4.1 Fundamental principles
Solution 4.1: Convolutional code
4.2 Characterization of convolutional codes
Solution 4.2: Catastrophic codes
4.3 Distance properties of convolutional codes
Solution 4.3: Distance properties of convolutional codes
Solution 4.4: Comparison of NSC and RSC codes
4.4 Puncturing of rate-1/n convolutional codes
Solution 4.5: Puncturing of convolutional codes
4.5 Optimal decoding with the Viterbi algorithm
Solution 4.6: Viterbi decoding
Solution 4.7: Viterbi decoding with puncturing
Solution 4.8: Coding and decoding of an RSC code
Solution 4.9: Investigation of Viterbi decoding

Solution 4.10: Simulation of a convolutional encoder and decoder

1 Introduction
Matlab solution 1.1: Design of a discrete channel
Matlab solution 1.2: Statistics of the discrete channel
Matlab solution 1.5: Transmission of coded data over BSCs

2 Information theory
Matlab solution 2.1: Entropy
Matlab solution 2.3: Channel capacity of a BSC

3 Linear block codes
3.2 Finite field algebra
Matlab solution 3.1: 2-out-of-5-code
3.5 Matrix description of block codes
Matlab solution 3.5: Generator and parity check matrices
Matlab solution 3.6: Extension, shortening and puncturing of codes
Matlab solution 3.7: Coset decomposition and syndrome decoding
Matlab solution 3.8: Coding program
3.6 Cyclic codes
Matlab solution 3.9: Polynomial multiplication
Matlab solution 3.10: Polynomial division
Matlab solution 3.11: Generator polynomial
Matlab solution 3.12: Syndrome
Matlab solution 3.14: CRC codes
3.7 Reed-Solomon and BCH codes
Matlab solution 3.15: Reed-Solomon codes
Matlab solution 3.16: BCH codes
Matlab solution 3.17: Bit and word error curves of BCH codes

4 Convolutional codes
4.3 Distance properties of convolutional codes
Matlab solution 4.3: Distance properties of convolutional codes*
Matlab solution 4.4: Comparison of NSC and RSC codes*
4.4 Puncturing of rate-1/n convolutional codes
Matlab solution 4.5: Puncturing of convolutional codes*

4.5 Optimal decoding with the Viterbi algorithm
Matlab solution 4.6: Viterbi decoding
Matlab solution 4.7: Viterbi decoding with puncturing
Matlab solution 4.8: Coding and decoding of an RSC code
Matlab solution 4.9: Investigation of Viterbi decoding
Matlab solution 4.10: Simulation of a convolutional encoder and decoder

General Information

• The dates for the exercises are arranged during the lectures; the sessions take place in room N 2250 or N 2410.
The exercises cover the content of past lectures and in general consist of a theoretical part and a
programming part. Students are advised to review the corresponding chapters and to prepare the
exercises for presentation at the board.

• All references to passages in the text (chapter and equation numbers) refer to the script V. Kühn,
“Kanalcodierung I+II” (in German). Equation references of the form (1.1) also refer to the
script, whereas references of the form (1) refer to the solutions of the exercises.

• Although MATLAB is used in the exercises, no tutorial introduction can be given due to the limited
time. A tutorial and further information can be found in

– MATLAB Primer, 3rd edition, Kermit Sigmon


– Practical Introduction to Matlab, Mark S. Gockenbach
– NT Tips und Tricks für MATLAB, Arbeitsbereich Nachrichtentechnik, Universität Bremen
– Einführung in MATLAB von Peter Arbenz, ETH Zürich

available on http://www.ant.uni-bremen.de/teaching/kc/exercises/.

• Exercises not treated in class are marked with *. Additional exercises, complete solutions
and all necessary MATLAB files can be found in K.-D. Kammeyer and V. Kühn: “Matlab in der
Nachrichtentechnik”, J. Schlembach Fachverlag, 2001, ISBN: 3-935340-05-2.

• PS files of the exercises, solutions and MATLAB code are available at

http://www.ant.uni-bremen.de/whomes/wuebben/kc.

Within the university network the additional page

http://www.ant.uni-bremen.de/whomes/wuebben/kc

is available. Besides the exercises and solutions you will find additional information on this page, e.g. the
MATLAB primer, the original paper of C. E. Shannon, “A Mathematical Theory of Communication”,
some scripts and a preliminary version of the book “Error-Control Coding” by B. Friedrichs.

1 Introduction

Exercise 1.1 Design of a discrete channel

a) Given is a transmit alphabet consisting of the symbols −3, −1, +1, +3. The symbols shall be
transmitted over a real-valued AWGN channel with noise variance σN^2 = 1. At the
output of the channel a hard decision takes place, i.e. the noisy values are mapped back onto the
4 symbols (Aout = Ain), where each decision threshold lies in the middle between two adjacent
symbols. Calculate the transition probabilities P(Yµ|Xν) of the channel and arrange them in a
matrix (see the hint at the end of this exercise).

b) Check the correctness of the matrix by verifying that the transition probabilities sum up to one.

c) Determine the joint probabilities P(Xν, Yµ) for equally probable transmit symbols.

d) Calculate the occurrence probabilities of the channel output symbols Yµ.

e) Calculate the error probability Pe(Xν) for the transmit symbols Xν and the mean overall error
probability Pe.
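
Hint: the transition matrix of part a) and the row-sum check of part b) can be verified numerically,
for example along the following lines (a sketch; the thresholds follow from the midpoints between
adjacent symbols):

% Transition matrix of the hard-quantized AWGN channel (sigma_N^2 = 1)
A   = [-3 -1 1 3];                      % transmit symbols
thr = [-inf -2 0 2 inf];                % decision thresholds
Q   = @(x) 0.5*erfc(x/sqrt(2));         % Gaussian tail function Q(x)
P   = zeros(4,4);                       % P(nu,mu) = P(Y_mu | X_nu)
for nu = 1:4
    for mu = 1:4
        P(nu,mu) = Q(thr(mu) - A(nu)) - Q(thr(mu+1) - A(nu));
    end
end
disp(P); disp(sum(P,2))                 % each row has to sum up to one (part b)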

Exercise 1.2 Statistics of the discrete channel

a) Given are the channels in figure 1. Complete the missing probabilities.

[Fig. 1: Discrete channel models — transition diagrams of Channel 1 and Channel 2 (inputs X0, X1;
outputs Y0, Y1, Y2) with the given probabilities 0.7, 0.2, 0.3, 0.8 (Channel 1) and 0.8, 0.2, 0.08
(Channel 2); the missing probabilities are to be completed.]

Exercise 1.3 Binary symmetric channel (BSC)

a) The word 0110100 is transmitted over a BSC with error probability Pe = 0.01. Determine the
probability of (wrongly) receiving the word 0010101.

b) Determine the probability of m incorrectly received bits when n bits are transmitted.

c) For a BSC with error probability Pe = 0.01, determine the probability of more than 2 errors in
words of length 31.
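
Hint: parts b) and c) can be checked with a few lines of MATLAB, since the number of bit errors on a
BSC is binomially distributed (sketch):

Pe = 0.01; n = 31;
Pm = @(m) nchoosek(n,m) * Pe^m * (1-Pe)^(n-m);   % P(exactly m errors in n bits)
P_more_than_2 = 1 - (Pm(0) + Pm(1) + Pm(2))      % part c)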

Exercise 1.4 Serial concatenation of two BSCs



Two BSCs with the error probabilities Pe,1 and Pe,2 shall be connected in series. Determine the error
probability of the resulting overall channel.
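
Hint: an overall error occurs exactly if one of the two channels flips the bit and the other one does not.
This can be verified by a short Monte Carlo simulation (sketch; the values of Pe,1 and Pe,2 are only examples):

Pe1 = 0.1; Pe2 = 0.05; N = 1e6;
u  = randi([0 1], 1, N);                     % transmitted bits
y1 = mod(u  + (rand(1,N) < Pe1), 2);         % first BSC
y2 = mod(y1 + (rand(1,N) < Pe2), 2);         % second BSC
Pe_sim  = mean(y2 ~= u)
Pe_theo = Pe1*(1-Pe2) + (1-Pe1)*Pe2          % error in exactly one of the two channels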

Exercise 1.5 Transmission of coded data over BSCs

Use MATLAB to simulate the transmission of coded data over a binary symmetric channel (BSC) with
error probability Pe = 0.1. Apply a repetition code of rate Rc = 1/5 for error protection.
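
Hint: a minimal simulation sketch with majority decision at the receiver could look as follows
(parameter values and variable names are only illustrative):

Pe = 0.1; n_rep = 5; Nbits = 1e5;            % repetition factor 5, i.e. Rc = 1/5
u = randi([0 1], 1, Nbits);                  % information bits
c = repelem(u, n_rep);                       % repetition encoding
y = mod(c + (rand(size(c)) < Pe), 2);        % BSC with error probability Pe
y = reshape(y, n_rep, Nbits);                % one column per information bit
u_hat = sum(y, 1) > n_rep/2;                 % majority decision
ber = mean(u_hat ~= u)                       % compare with the raw BSC error rate Pe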

2 Information theory

Exercise 2.1 Entropy

a) The average information content H(Xν) of the signal Xν (also called partial entropy) shall be
maximized. Determine the value P(Xν) for which the partial entropy reaches its maximum
and specify H(Xν)max. Check the result with MATLAB by computing the partial entropy for
P(Xν) = 0 : 0.01 : 1 and plotting H(Xν) over P(Xν) (see the hint at the end of this exercise).

b) The random vector (X1 X2 X3) takes exclusively the values (000), (001), (011), (101) and
(111), each with probability 1/5.
Determine the entropies:

1. H(X1)
2. H(X2)
3. H(X3)
4. H(X1, X2)
5. H(X1, X2, X3)
6. H(X2|X1)
7. H(X2|X1 = 0)
8. H(X2|X1 = 1)
9. H(X3|X1, X2)
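
Hint for part a): the partial entropy can be evaluated numerically with a few lines of MATLAB (sketch);
analytically, the maximum is expected at P(Xν) = 1/e:

p = 0:0.01:1;
H = -p .* log2(p);                 % partial entropy H(X_nu) = -P(X_nu)*log2(P(X_nu))
H(p == 0) = 0;                     % convention: 0*log2(0) = 0
plot(p, H); grid on;
xlabel('P(X_\nu)'); ylabel('H(X_\nu) in bit');
[Hmax, idx] = max(H)               % maximum near P(X_nu) = 1/e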

Exercise 2.2 Channel capacity of a discrete memoryless channel

Determine the channel capacity of the following discrete memoryless channel under the condition that
P(Xν) = 1/3 holds.

[Figure: transition diagram of a discrete memoryless channel with inputs X0, X1, X2 and outputs
Y0, Y1, Y2; the transition probabilities take the values 1/2, 1/3 and 1/6.]

Exercise 2.3 Channel capacity of a BSC

a) Derive the capacity of a binary symmetric channel (BSC) as a function of the error probability for
equally probable input symbols P(X0) = P(X1) = 0.5.

b) Write a MATLAB program that calculates the capacity of an asymmetric binary channel for
the input probabilities P(x) = 0 : 0.01 : 1. It shall be possible to set the error probabilities Pe,x0
and Pe,x1 at program start. (Note: P(y) must be calculated!)

[Figure: binary channel with the transitions x0 → y0 (probability 1 − Pe,x0), x0 → y1 (Pe,x0),
x1 → y0 (Pe,x1) and x1 → y1 (1 − Pe,x1).]

c) Determine the channel capacity for several error probabilities Pe at a fixed input probability P(x)
with MATLAB. Plot C(Pe).
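
Hint: for the special case of a symmetric channel with equally probable inputs (part a)), the capacity is
C(Pe) = 1 − Hb(Pe) with the binary entropy function Hb; this curve can be plotted directly as a reference
for part c) (sketch):

Pe = linspace(0, 0.5, 501);
Hb = @(p) -p.*log2(p) - (1-p).*log2(1-p);    % binary entropy function
C  = 1 - Hb(Pe);                             % BSC capacity for equiprobable inputs
C(Pe == 0) = 1;                              % handle 0*log2(0) = 0
plot(Pe, C); grid on; xlabel('P_e'); ylabel('C in bit/channel use');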

Exercise 2.4 Channel capacity of the AWGN channel

Derive the channel capacity of a band-limited Gaussian channel (σn^2 = N0/2) with normally
distributed input (σx^2 = Es) according to eq. (2.28).
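
Hint: the expected result is the well-known formula C = 1/2 · log2(1 + σx^2/σn^2) = 1/2 · log2(1 + 2Es/N0)
bit per (real-valued) channel use; it can be visualized quickly (sketch):

EsN0_dB = -10:0.5:20;
EsN0 = 10.^(EsN0_dB/10);
C = 0.5 * log2(1 + 2*EsN0);        % capacity of the real-valued AWGN channel
plot(EsN0_dB, C); grid on;
xlabel('E_s/N_0 in dB'); ylabel('C in bit/channel use');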

3 Linear block codes

3.2 Finite field algebra

Exercise 3.1 2-out-of-5-code

a) Given is a simple 2-out-of-5-code of length n = 5 that consists of all possible words of
weight wH(c) = 2. Specify the code C. Is it a linear code? (See the hint at the end of this exercise.)

b) Determine the distance properties of the code. What has to be taken into account?

c) Calculate the probability Pue of an undetected error for this code on a binary symmetric channel.
The error probability of the BSC shall lie in the range 10^-3 ≤ Pe ≤ 0.5. Plot Pue as a function of
Pe and also depict the error rate of the BSC in the chart.

d) Check the result of c) by simulating a data transmission scheme in MATLAB and measuring the
undetected errors for certain error probabilities Pe of the BSC.
Hint: Choose N code words at random and apply BPSK modulation. Subsequently, the
words are superposed with additive white Gaussian noise, whose power corresponds
to the desired error probability Pe = 0.5 · erfc(sqrt(Es/N0)) of the BSC. After a hard decision on the
single bits, the MLD can be realized at the receiver by correlating the received word with all
possible code words and determining the maximum. Finally, the errors have to be counted.

e) Calculate the error probability Pw for soft-decision ML decoding on the AWGN channel.

f) Check the results of e) with the help of simulations in the range −2 dB ≤ Es/N0 ≤ 6 dB.
Compare the obtained error rates with the uncoded case.
Hint: You can use the same transmission scheme as in item d), but this time the hard decision
is made only after the ML decoding.
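
Hint for parts a) and b): the code and its distance properties can be generated and examined quickly in
MATLAB (sketch):

pos = nchoosek(1:5, 2);                      % all positions of the two ones
C = zeros(size(pos,1), 5);
for i = 1:size(pos,1)
    C(i, pos(i,:)) = 1;                      % the 10 code words of weight 2
end
d = zeros(size(C,1));                        % pairwise Hamming distances
for i = 1:size(C,1)
    for j = 1:size(C,1)
        d(i,j) = sum(mod(C(i,:) + C(j,:), 2));
    end
end
dmin = min(d(d > 0))                         % minimum Hamming distance
% Note: a linear code would have to contain the all-zero word.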

Exercise 3.2 Field

Given are the sets Mq := {0, 1, . . . , q − 1} and the two operations

• addition modulo q

• multiplication modulo q

Calculate the operation tables for q = 2, 3, 4, 5, 6 and specify for which values of q the result is a field.
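
Hint: the operation tables can be generated compactly in MATLAB (sketch):

for q = [2 3 4 5 6]
    a = 0:q-1;
    add_table  = mod(a' + a, q);             % addition modulo q
    mult_table = mod(a' * a, q);             % multiplication modulo q
    fprintf('q = %d\n', q); disp(add_table); disp(mult_table);
end
% A field requires a multiplicative inverse for every nonzero element,
% which is the case exactly for prime q.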

3.3 Distance properties of block codes

Exercise 3.3 Error correction

Given is an (n, k)q block code with the minimum distance dmin = 8.

a) Determine the maximum number of correctable errors and the number of detectable errors for pure
error detection.

b) The code shall be used to correct 2 errors and simultaneously detect 5 further errors.
Demonstrate by an illustration in the code word space (cf. fig. 3.1) that this is possible.

c) Demonstrate by an illustration in the code word space how many possible variations of a
code word have to be taken into account for the transmission over a disturbed channel.

Exercise 3.4 Sphere-packing bound

a) A linear (n, 2)2 block code has the minimum Hamming distance dmin = 5. Determine the minimum
block length.

b) A binary code C has the parameters n = 15, k = 7, dmin = 5. Is the sphere-packing bound
fulfilled? What does the right-hand side minus the left-hand side of the sphere-packing bound (cf. eq. (3.7))
indicate?

c) Can a binary code with the parameters n = 15, k = 7, dmin = 7 exist?

d) Check whether a binary code with the parameters n = 23, k = 12, dmin = 7 can exist (see the hint
at the end of this exercise).

e) A source coder generates 16 symbols that shall be encoded binary with an error correction capability of 4 in
the channel coder. How large can the code rate be at most?
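
Hint: the sphere-packing bound sum_{i=0..t} (n choose i) <= 2^(n-k) can be checked numerically,
e.g. for parts b) and d) (sketch):

check = @(n, k, t) sum(arrayfun(@(i) nchoosek(n,i), 0:t)) <= 2^(n-k);
fulfilled_b = check(15, 7, 2)                % part b): dmin = 5  ->  t = 2
fulfilled_d = check(23, 12, 3)               % part d): dmin = 7  ->  t = 3 (bound met with equality)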

3.5 Matrix description of block codes

Exercise 3.5 Generator and parity check matrices

a) State the generator matrix as well as the parity check matrix of an (n, 1, n) repetition code with n = 4.
Which parameters and properties does the dual code have?

b) The columns of the parity check matrix of a Hamming code of degree r represent all binary numbers
from 1 to 2^r − 1. State a parity check matrix for r = 3 and calculate the corresponding generator
matrix. Determine the parameters of the code and state the code rate (see the hint at the end of this
exercise).

c) Analyze, by means of this example, the connection between the minimum distance of a code and the
number of linearly independent columns of the parity check matrix H.
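
Hint for part b): the parity check matrix can be set up directly from the binary representations of the
numbers 1, ..., 2^r − 1 and brought into systematic form by reordering its columns (sketch; the result
coincides with the matrices used in exercise 3.8):

r = 3;
H = (dec2bin(1:2^r-1, r) - '0')';            % columns = binary numbers 1 ... 7
cols_I = [4 2 1];                            % columns equal to the unit vectors e1, e2, e3
cols_P = setdiff(1:2^r-1, cols_I, 'stable');
Hs = H(:, [cols_P cols_I]);                  % systematic form [P^T  I_3]
P  = Hs(:, 1:4)';
G  = [eye(4) P];                             % systematic generator matrix [I_4  P]
all(all(mod(G*Hs', 2) == 0))                 % G*H^T = 0 confirms the construction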

Exercise 3.6 Extension, shortening, and puncturing of codes

a) Given is the systematic (7, 4, 3) Hamming code from exercise 3.5. Extend the code by an
additional parity bit such that it attains a minimum distance of dmin = 4. State the generator matrix
GE and the parity check matrix HE of the extended (8, 4, 4) code.
Hint: Conduct the construction with the help of the systematic parity check matrix HS and note
the result from 3.5c.

b) Shortening a code means decreasing the cardinality of the set of code words, i.e. information
bits are removed. Shorten the above Hamming code to half of the code words and state
the generator matrix GK and the parity check matrix HK for the systematic case. What minimum
distance does the shortened code have?

c) Puncturing a code means deleting code bits, which serves to increase the code rate. Puncture
the above Hamming code to the rate Rc = 2/3. What minimum distance does the new code
have?

Exercise 3.7 Coset decomposition and syndrome decoding

a) State the number of syndromes of the (7, 4, 3) Hamming code and compare it with the number of
correctable error patterns.

b) Set up a table for the systematic (7, 4, 3) Hamming code that contains all syndromes and the
corresponding coset leaders.

c) The word y = (1 1 0 1 0 0 1) is observed at the receiver. Which information word u was sent with the
greatest probability?

d) The search for the position of the syndrome s in H can be avoided by reordering the columns of H.
The decimal representation of s can then be used directly for addressing the coset leader.
Give the corresponding parity check matrix H̃.

Exercise 3.8 Coding program

a) Write a MATLAB program that encodes and decodes a certain number of input data bits.
In addition, it shall be possible to insert errors before the decoding (see the hint at the end of this
exercise). The (7, 4, 3) Hamming code with the generator matrix

G = [ 1 0 0 0 0 1 1
      0 1 0 0 1 0 1
      0 0 1 0 1 1 0
      0 0 0 1 1 1 1 ]
and the parity check matrix

H = [ 0 1 1 1 1 0 0
      1 0 1 1 0 1 0
      1 1 0 1 0 0 1 ]

shall be used.

b) Extend the MATLAB program to record bit error curves in the range 0 to 20 dB.
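
Hint: the core of such a program (encoding, insertion of a single error and syndrome decoding of one
word) could look as follows (a sketch; the loop over many words and the error-rate measurement of
part b) still have to be added):

G = [1 0 0 0 0 1 1; 0 1 0 0 1 0 1; 0 0 1 0 1 1 0; 0 0 0 1 1 1 1];
H = [0 1 1 1 1 0 0; 1 0 1 1 0 1 0; 1 1 0 1 0 0 1];
k = size(G,1); n = size(G,2);
u = randi([0 1], 1, k);                      % information word
c = mod(u*G, 2);                             % encoding
e = zeros(1,n); e(randi(n)) = 1;             % insert a single bit error
y = mod(c + e, 2);
s = mod(y*H', 2);                            % syndrome (zero iff y is a code word)
[~, pos] = ismember(s, H', 'rows');          % for one error, s equals the H-column at the error position
c_hat = y;
if any(s), c_hat(pos) = mod(c_hat(pos) + 1, 2); end
u_hat = c_hat(1:k);                          % systematic code: information bits first
isequal(u_hat, u)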

3.6 Cyclic Codes

Exercise 3.9 Polynomial multiplication

Given are the two polynomials f(D) = D^3 + D + 1 and g(D) = D + 1.



a) Calculate f(D) · g(D). Check the result with MATLAB (see the hint at the end of this exercise).

b) Give the block diagram of a non-recursive system for the sequential multiplication of the two
polynomials.

c) Illustrate the stepwise calculation of the product f(D) · g(D) as a function of the symbol
clock by means of a table.
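
Hint: with the coefficient vectors ordered by descending powers of D, the multiplication over GF(2)
reduces to a convolution followed by a modulo-2 reduction (sketch):

f = [1 0 1 1];                 % f(D) = D^3 + D + 1
g = [1 1];                     % g(D) = D + 1
fg = mod(conv(f, g), 2)        % coefficients of f(D)*g(D) in descending powers of D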

Exercise 3.10 Polynomial division

Given are the two polynomials f(D) = D^3 + D + 1 and g(D) = D^2 + D + 1.

a) Calculate the division of f(D) by g(D). Check the result with MATLAB (see the hint at the end of
this exercise).

b) Give the block diagram of a recursive system for the sequential division of the polynomial f(D) by
the polynomial g(D).

c) Illustrate the stepwise calculation of the quotient f(D) : g(D) as a function of the symbol
clock by means of a table.
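
Hint: one way to check the division is a short long-division loop over GF(2) (sketch; coefficient vectors
in descending powers of D):

f = [1 0 1 1];                 % f(D) = D^3 + D + 1
g = [1 1 1];                   % g(D) = D^2 + D + 1
r = f;
q = zeros(1, length(f) - length(g) + 1);
for i = 1:length(q)
    if r(i) == 1
        q(i) = 1;
        r(i:i+length(g)-1) = mod(r(i:i+length(g)-1) + g, 2);   % subtraction = addition in GF(2)
    end
end
quotient  = q                  % coefficients of the quotient polynomial
remainder = r(length(q)+1:end) % coefficients of the remainder polynomial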

Exercise 3.11 Generator polynomial

Given is a cyclic (15, 7) block code with the generator polynomial g(D) = D^8 + D^7 + D^6 + D^4 + 1.

a) Show that g(D) can be a generator polynomial of the code.

b) Determine the code polynomial (code word) in systematic form for the message u(D) = D^4 +
D + 1.

c) Is the polynomial y(D) = D^14 + D^5 + D + 1 a code word?

Exercise 3.12 Syndrome

The syndromes s1 to s8 of a 1-error-correcting cyclic code are given.


s1 = 1 0 1 0 0
s2 = 0 1 0 1 0
s3 = 0 0 1 0 1
s4 = 1 0 0 0 0
s5 = 0 1 0 0 0
s6 = 0 0 1 0 0
s7 = 0 0 0 1 0
s8 = 0 0 0 0 1

a) Determine the parity check matrix H and the generator matrix G.

b) State the number of parity bits n − k and the number of information bits k.

c) What is the generator polynomial g(D)?

d) How many different code words can be built with the generator polynomial g(D)?

e) The word y1 = (0 1 1 0 1 0 1 1) is received. Has this word been corrupted on the channel?
Can the correct code word be determined? If yes, state the correct code word.

f) Now the word y2 = (1 0 1 1 0 1 1 1) has been received. Has this word been corrupted on
the channel? Can the correct code word be determined? If yes, state the correct code word.

Exercise 3.13 Primitive polynomials

Given are the two irreducible polynomials g1(D) = D^4 + D + 1 and g2(D) = D^4 + D^3 + D^2 + D + 1.


Which of these two polynomials is primitive?

Exercise 3.14 CRC codes

a) Generate the generator polynomial for a CRC code of length n = 15.

b) Determine the generator matrix G and the parity check matrix H.

c) Now the efficiency of the CRC code shall be examined by considering the detectability of all
burst errors of length 4 ≤ le ≤ n. Only error patterns whose le errors directly succeed
one another shall be taken into consideration. Give the relative frequency of the detected errors.

3.7 Reed-Solomon and BCH codes

Exercise 3.15 Reed-Solomon codes

a) An RS code over GF(2^3) that can correct t = 1 error shall be designed. Determine the parameters
k, n and Rc and state the maximum length of a correctable error in bits.

b) Give the generator polynomial g(D) of the Reed-Solomon code. Use the command rspoly
or eq. (3.77), respectively.

c) The message u = (110 010 111 000 001) shall be encoded. During the subsequent transmission 3 bits
shall be corrupted. How does the position of the errors affect the decoding result?

d) Now a code correcting t = 2 errors over GF(8) shall be designed. Again determine the
parameters k and Rc and give g(D).

e) The word y = (000 100 001 011 110 010 111) is received. Determine the syndrome, the error locator
polynomial, the positions of the erroneous digits and the amplitudes of the errors. Use the commands
rssyndrom, rselp, rschien and rserramp. What was the transmitted message (correction
with rscorrect)?

Exercise 3.16 BCH codes

a) We now want to design a BCH code over GF(2^4) that corrects t = 3 errors. Form the cyclotomic
cosets Ki with the command gfcosets. Which cosets Ki have to be combined in order to fulfil
the requirement t = 3? Which parameters does the resulting BCH code have?

b) With the command bchpoly all valid parameter combinations can be displayed; when the
parameters n and k are specified, bchgenpoly supplies a generator polynomial g1(D) and the
polynomials mi(D) of those cyclotomic cosets Ki that are involved in g(D). Choose a suitable code and
determine g1(D) as well as the polynomials mi(D).

c) Calculate by hand an alternative generator polynomial g2(D) for the same parameters by means of the
results from item a).

d) Encode the information word u = (1 1 0 1 1) with the command bchenc for the two generator
polynomials g1(D) and g2(D). Determine the zeros of the two code words c1(D) and c2(D) and
compare them.

e) Transform the code words into the spectral domain with the function fft. What is striking?

Exercise 3.17 Bit and word error curves of BCH codes

a) Simulate the transmission over an AWGN channel for the standard BCH code from exercise 3.16
for signal-to-noise ratios from 0 to 10 dB in 2-dB steps. Conduct the BMD decoding with
bchdec as well as an ML decoding by comparison with all possible code words. Measure the bit
and word error rates and compare the results of the different decoding methods in one diagram.

b) Compare the results with the uncoded case!



4 Convolutional codes

4.1 Fundamental principles

Exercise 4.1 Convolutional code

Given is a convolutional code with the code rate R = 1/3, memory m = 2 and the generator polynomials
g1(D) = 1 + D + D^2, g2(D) = 1 + D^2 and g3(D) = 1 + D + D^2.

a) Determine the output sequence for the input sequence u = (0 1 1 0 1 0).

b) Sketch the corresponding trellis diagram for the input sequence given in a).

c) Sketch the state diagram of the encoder.

d) Determine the corresponding free distance df.
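
Hint for part a): for a non-recursive encoder, the output sequences can be checked by convolving the
input sequence with each generator over GF(2) (sketch; the coefficient vectors are ordered as 1, D, D^2,
and conv implicitly appends the zero tail, i.e. the encoder is driven back to the all-zero state):

u  = [0 1 1 0 1 0];
g1 = [1 1 1];                          % 1 + D + D^2
g2 = [1 0 1];                          % 1 + D^2
g3 = [1 1 1];                          % 1 + D + D^2
c1 = mod(conv(u, g1), 2);
c2 = mod(conv(u, g2), 2);
c3 = mod(conv(u, g3), 2);
c  = reshape([c1; c2; c3], 1, [])      % multiplex the three output streams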

4.2 Characterization of convolutional codes

Exercise 4.2 Catastrophic codes

Given is the convolutional code with the generator polynomials g(D) = (1 + D^2, 1 + D). Show that this
is a catastrophic code and explain the consequences.
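
Hint: the catastrophic behaviour can be made visible numerically. Both generator polynomials contain
the common factor (1 + D), so an input sequence of arbitrarily large weight produces an output sequence
of small weight; a few channel errors can therefore cause an unbounded number of information bit errors
(sketch):

g1 = [1 0 1];                          % 1 + D^2 = (1 + D)^2
g2 = [1 1];                            % 1 + D
u  = ones(1, 50);                      % input of weight 50
c1 = mod(conv(u, g1), 2);
c2 = mod(conv(u, g2), 2);
w_in  = sum(u)                         % input weight: 50
w_out = sum(c1) + sum(c2)              % output weight stays small (here: 6)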

4.3 Distance properties of convolutional codes

Exercise 4.3 Distance properties of convolutional codes*

a) Given is the non-recursive convolutional code with the generators g0(D) = 1 + D + D^3 and g1(D) =
1 + D + D^2 + D^3. Determine the IOWEF by means of the MATLAB routine iowef_conv and
present aw,d (in the script also called Tw,d) for even and odd input weights in separate diagrams
(maximum distance dmax = 20). How large is the free distance of the code and what is striking with
regard to the weight distribution?

b) Calculate the coefficients ad and cd.

c) Estimate the word and bit error rates with the help of the union bound for the AWGN channel
in the range 0 dB ≤ Eb/N0 ≤ 6 dB. Draw the determined error rates for several maximally
considered distances d.

d) The convolutional code shall now be terminated and considered as a block code of length n = 50.
Determine the IOWEF with the function iowef_block_conv and calculate the bit error rate
with the help of the routine pb_unionbound_awgn (now the block length n = 50 and the
information length k = 22 have to be specified). How can the difference to the results of item c) be
explained?

Exercise 4.4 Comparison of NSC and RSC codes*



a) The generator polynomial g0(D) of the code from exercise 4.3 shall now be used for the feedback
of a recursive systematic convolutional code. Determine the IOWEF of the code and again draw
it separately for even and odd input weights.

b) Now use the polynomial g1(D) for the feedback instead. Again determine the IOWEF and
determine the coefficients ad and cd for both RSC codes. Compare them with those of the non-
recursive code.

c) Estimate the bit error rates with the help of the union bound for both RSC codes and draw them
together with the error rate of the non-recursive convolutional code in one diagram. Which differences
are striking?

4.4 Puncturing of rate-1/n convolutional codes

Exercise 4.5 Puncturing of convolutional codes*

a) The non-recursive convolutional code from exercise 4.3 shall now be punctured to the code rate
Rc = 2/3. Give several puncturing patterns and check with the routine tkatastroph whether the
puncturing results in a catastrophic code.

b) For the suitable puncturing schemes from item a), determine the coefficients ad and cd and compare
them with those of the unpunctured code. Note that the routine iowef_block_conv does not
consider different starting instants of time, so these have to be entered manually (P1a =
[3 1]^T and P1b = [1 3]^T yield different results).

c) Calculate an estimate of the bit error rate for the different puncturing patterns from item b) with
the routine pb_unionbound_awgn and draw it together with that of the unpunctured code in one
diagram. Pass an averaged IOWEF of the spectra determined under b) to the routine.

d) Now puncture the code to the rate Rc = 3/4 and take the results from item a) into account. Check
whether the code is catastrophic.

e) To reach code rates higher than 2/3 with this code, the RSC code from exercise 4.4b) shall
now be punctured, where the systematic information bits are always transmitted. Determine the
puncturing schemes and the resulting bit error rates for the AWGN channel for the code rates
Rc = 3/4, Rc = 4/5 and Rc = 5/6. Sketch them together with those of the unpunctured code and
of the code punctured to the rate Rc = 2/3.

4.5 Optimal decoding with the Viterbi algorithm

Exercise 4.6 Viterbi decoding

Given is a convolutional code with g0(D) = 1 + D + D^2 and g1(D) = 1 + D^2, where a terminated code
shall be used.

a) Generate the corresponding trellis and encode the information sequence u(ℓ) = (1 1 0 1).
b) Conduct the Viterbi decoding for the transmitted code sequence x = (110101001011)
as well as for the two disturbed received sequences y1 = (111101011011) and y2 = (111110011011)
and describe the differences.
c) Check the results with the help of a MATLAB program.
Define the convolutional code with G=[7;5], r_flag=0 and term=1, generate the trellis diagram
with trellis = make_trellis(G,r_flag) and sketch it with show_trellis(trellis).
Encode the information sequence u with c = conv_encoder(u,G,r_flag,term) and
decode this sequence with
viterbi_omnip(c,trellis,r_flag,term,length(c)/n,1).
Now decode the sequences y1 and y2.

Exercise 4.7 Viterbi decoding with puncturing

Given is a convolutional code with g0(D) = 1 + D + D^2 and g1(D) = 1 + D^2, from which a punctured
code shall be generated by puncturing with the scheme

P1 = [ 1 1 1 0
       1 0 0 1 ]

a) Determine the code rate of the punctured code.


b) Conduct the Viterbi decoding for the undisturbed received sequence
y = (1 1 0 0 0 1 0 1) (pay attention to the puncturing!).

Exercise 4.8 Coding and decoding of an RSC code

a) We now consider the recursive systematic convolutional code with the generator polynomials g̃0(D) =
1 and g̃1(D) = (1 + D + D^3)/(1 + D + D^2 + D^3). Generate the trellis diagram of the code with
the help of the MATLAB command make_trellis([11;15],2).
b) Conduct the encoding for the input sequence u(ℓ) = (1 1 0 1 1). The encoder shall be driven to the
zero state by adding tail bits (command conv_encoder(u,[11;15],2,1)). What are the
output and the state sequences?
c) Now the decoding of convolutional codes with the help of the Viterbi algorithm shall be considered.
First, the sequence u(ℓ) = (1 1 0 1 1 0 0 1 1 1 1 0) has to be encoded (with termination),
modulated with BPSK and subsequently superposed by Gaussian distributed noise. Choose a
signal-to-noise ratio of Eb/N0 = 4 dB. With the help of the program viterbi the decoding
can now be carried out. By setting the parameter demo=1, the course of the decoding is represented
graphically by means of the trellis diagram. What is striking concerning the error distribution if
the trellis is considered as not terminated?

d) Now the influence of the error structure at the decoder input shall be examined. For this purpose,
specifically add four errors to the coded and BPSK-modulated sequence, once arranged as a bundle
and once distributed over the block. How does the decoding behave in the two cases?

Exercise 4.9 Investigation of Viterbi decoding

a) Simulate a data transmission system with the convolutional code from exercise 4.8, BPSK modulation,
an AWGN channel and a Viterbi decoder. The block length of the terminated convolutional code
shall be L = 100 bits. Decode the convolutional code with the decision depths K1 = 10, K2 = 15,
K3 = 20, K4 = 30 and K5 = 50. The signal-to-noise ratio shall be Eb/N0 = 4 dB. What
influence does the decision depth have on the bit error rate?

b) Now puncture the convolutional code to the rate Rc = 3/4 and repeat item a). What about the
decision depths now?

c) Conduct simulations for an unknown final state. Evaluate the distribution of the decoding errors at
the output. What is striking?

Exercise 4.10 Simulation of a convolutional encoder and decoder

Write a MATLAB program that encodes an information sequence u of length k = 48 with the terminated
convolutional code g = (5, 7), BPSK-modulates it, transmits it over an AWGN channel and subsequently
decodes it with hard decisions. Use this program to record the bit error curve for Eb/N0 = 0 : 1 : 6 dB
corresponding to fig. 4.14 by transmitting N = 1000 frames per SNR, accumulating the bit errors per SNR
and subsequently determining the bit error rate per SNR.
Use the following MATLAB functions:
trellis = make_trellis(G,r_flag)
c = conv_encoder(u,G,r_flag,term)
u_hat = viterbi(y,trellis,r_flag,term,10)
