Lec 18


ELG3175 Introduction to Communication Systems

Introduction to Error Control Coding
Types of Error Control Codes

• Block Codes
– Linear
• Hamming, LDPC
– Non-Linear
– Cyclic
• BCH, RS
• Convolutional Codes
• Turbo Codes
Parity Bits

• Suppose we wish to transmit m = [1001001].


• Let us assume that the second bit is received in error, r
= [1101001].
• The receiver has no way of knowing that the second bit
has been incorrectly detected, therefore we must accept
the consequences of the detection error.
• Suppose, before transmission, we add an even parity bit
to the message to be transmitted, mc= [10010011].
• Now, let us assume that the second bit is in error, r =
[11010011]. There are now 5 1’s, which is not
permitted. Therefore the error is detected and the
receiver can request a retransmission.
• The detection of the error was made possible by the
addition of the parity bit.
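As an illustration, here is a minimal Python sketch of the even-parity scheme described above (the function names are ours, chosen for clarity):

    # Even-parity encoding and checking; bits are Python lists of 0/1.
    def add_even_parity(m):
        # Append a parity bit so the total number of 1's is even.
        return m + [sum(m) % 2]

    def parity_check(r):
        # An even number of 1's means no (detectable) error.
        return sum(r) % 2 == 0

    m = [1, 0, 0, 1, 0, 0, 1]
    mc = add_even_parity(m)   # [1, 0, 0, 1, 0, 0, 1, 1]
    r = mc.copy()
    r[1] ^= 1                 # flip the second bit to simulate an error
    print(parity_check(mc), parity_check(r))  # True False

Note that a single parity bit detects any odd number of errors but cannot locate them, hence the retransmission request.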
Block Codes

• The data is grouped into segments of k bits.


• Each block of k bits is encoded to produce a block of n
bits, where n > k. The encoder adds redundancy to the
data to be transmitted.
• The code rate is r = k/n.

m → encoder → c  (e.g., m = 1011 → c = 1011100)
Binary addition and multiplication

• 0+0 = 0, 0+1 = 1, 1+0 = 1 and 1+1 = 0 (there is no carry).
• 0·x = 0 and 1·x = x, where x = 0 or 1.
• Examples: 1010 + 1100 = 0110, 0·(10010) = (00000).
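In Python, this GF(2) arithmetic is simply position-by-position modulo-2 addition (XOR, with no carry); a quick sketch:

    # Modulo-2 (GF(2)) addition of two bit vectors.
    def gf2_add(a, b):
        return [(x + y) % 2 for x, y in zip(a, b)]

    print(gf2_add([1, 0, 1, 0], [1, 1, 0, 0]))  # [0, 1, 1, 0]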


Linear Block Codes

• Let C be a code made up of the vectors {c1, c2, … cK}.


• C is a linear code if for any ci and cj in C, ci+cj is also in
C.
• Example C = {c1 = 0000, c2 = 0110, c3 = 1001, c4 =
1111}.
• c1+cx = cx for x = 1, 2, 3 or 4.
• cx+cx = c1.
• c2+c3 = c4, c3+c4 = c2, c2+c4 = c3.
• C is a linear code.
• C2 = {c1 = 0001, c2 = 0111, c3 = 1000, c4 = 1110}.
• cx+cx = 0000 which is not in C2.
• C2 is not linear.
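The closure test above is easy to automate; a small sketch that checks both example codes:

    # A code is linear iff it is closed under modulo-2 addition.
    def gf2_add(a, b):
        return tuple((x + y) % 2 for x, y in zip(a, b))

    def is_linear(code):
        words = {tuple(c) for c in code}
        return all(gf2_add(ci, cj) in words for ci in words for cj in words)

    C  = [(0,0,0,0), (0,1,1,0), (1,0,0,1), (1,1,1,1)]
    C2 = [(0,0,0,1), (0,1,1,1), (1,0,0,0), (1,1,1,0)]
    print(is_linear(C), is_linear(C2))  # True False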
Hamming Weight

• For a codeword cx of a code C, its Hamming weight is the number of symbols in cx that are not 0.
• C = {0000 0110 1001 1111}
• H.W{0000} = 0
• H.W{0110} = 2
• H.W{1001} = 2
• H.W{1111} = 4
Hamming Distance

• The Hamming distance between codewords ci and cj of C is the number of positions in which they differ.
         0000  0110  1001  1111
  0000     0     2     2     4
  0110     2     0     4     2
  1001     2     4     0     2
  1111     4     2     2     0

• ci+cj = 0 in the positions in which they are the same and ci+cj = 1 in the positions in which they differ. Therefore HD{ci, cj} = HW{ci+cj}.
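Both quantities are one-liners in Python, and the identity HD{ci,cj} = HW{ci+cj} can be checked directly:

    # Hamming weight, Hamming distance, and the identity relating them.
    def hw(c):
        return sum(c)                                # number of nonzero symbols

    def hd(ci, cj):
        return sum(x != y for x, y in zip(ci, cj))   # positions that differ

    ci, cj = (0,1,1,0), (1,0,0,1)
    s = tuple((x + y) % 2 for x, y in zip(ci, cj))   # ci + cj over GF(2)
    print(hd(ci, cj), hw(s))  # 4 4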
Minimum Distance

• A code’s minimum distance is the minimum Hamming distance between two different codewords in the code.
• In our example, dmin = 2.
• We saw previously HD{ci,cj} = HW{ci+cj} = HW{cx}
where, in the case of linear block codes, cx is another
codeword in C excluding the all-zero codeword.
– Therefore for linear block codes, dmin = minimum
Hamming weight of all codewords in C excluding the
all-zero codeword.
• In our example, if we exclude codeword 0000, the
remaining codewords are 0110, 1001 and 1111. The
minimum Hamming weight is 2. Therefore dmin = 2.
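A sketch that computes dmin both ways for the example code, confirming that they agree:

    # dmin via all pairwise distances vs. via minimum nonzero weight.
    from itertools import combinations

    C = [(0,0,0,0), (0,1,1,0), (1,0,0,1), (1,1,1,1)]

    d_pairs  = min(sum(x != y for x, y in zip(ci, cj))
                   for ci, cj in combinations(C, 2))
    d_weight = min(sum(c) for c in C if any(c))      # linear-code shortcut
    print(d_pairs, d_weight)  # 2 2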
Basis of a linear block code

• C is a linear block code.


• Let us choose k linearly independent codewords, c1, c2, …, ck.
None of these k codewords can be expressed as a linear
combination of the others.
• All codewords in C can then be expressed as a linear
combination of these k codewords.
– The k codewords selected form the basis of code C.
• cx = a1c1+a2c2+a3c3+…+akck where ai = 0 or 1 (binary block codes).
• In our example, we can select 0110 and 1111, or 0110 and
1001 or 1001 and 1111.
• Example, let us select c1 = 0110 and c2 = 1111 as the basis
of the code.
– 0000 = 0c1+0c2, 0110 = 1c1+0c2, 1001 = 1c1+1c2 and 1111 = 0c1+1c2.
Generator Matrix
• The rows of G are the k basis codewords:

        [ c1 ]
    G = [ c2 ]
        [ …  ]
        [ ck ]

• Example:

    G = [ 0 1 1 0 ]
        [ 1 1 1 1 ]

• Codewords are generated as cx = mxG:

    [0 0]G = [0 0 0 0]    [0 1]G = [1 1 1 1]
    [1 0]G = [0 1 1 0]    [1 1]G = [1 0 0 1]

• The dimensions of G are k×n.
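Encoding is a vector-matrix product over GF(2); a sketch that reproduces the mapping above:

    # c = mG with all arithmetic modulo 2, using the example G.
    from itertools import product

    G = [[0, 1, 1, 0],
         [1, 1, 1, 1]]

    def encode(m, G):
        return [sum(m[i] * G[i][j] for i in range(len(G))) % 2
                for j in range(len(G[0]))]

    for m in product([0, 1], repeat=2):
        print(list(m), '->', encode(m, G))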


Equivalent codes

• The codes generated by G1 and G2 are equivalent if they generate the same codewords but with a different mapping to message words.
• Example

    G1 = [ 0 1 1 0 ]    G2 = [ 1 0 0 1 ]
         [ 1 1 1 1 ]         [ 1 1 1 1 ]

     m     mG1     mG2
     00    0000    0000
     01    1111    1111
     10    0110    1001
     11    1001    0110
Systematic codes

• A code is systematic if the message bits can be found at the beginning of the codeword.
• c = [m|p].
• Gsyst = [Ik|P].
• Any generator matrix can be transformed into Gsyst
using linear transformation.
    Gsyst = [ 1 0 0 1 ]
            [ 0 1 1 0 ]

     m     c
     00    0000
     01    0110
     10    1001
     11    1111
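With Gsyst = [Ik|P], the encoder can simply copy the message and append the parity bits mP; a sketch using the P taken from the Gsyst above:

    # Systematic encoding: c = [m | mP], with P from Gsyst = [I|P].
    P = [[0, 1],
         [1, 0]]

    def encode_systematic(m, P):
        parity = [sum(m[i] * P[i][j] for i in range(len(m))) % 2
                  for j in range(len(P[0]))]
        return list(m) + parity

    print(encode_systematic([1, 1], P))  # [1, 1, 1, 1]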
Parity Check Matrix

• A parity check matrix H is a matrix with the property cHT = 0 for every codeword c.
• cHT = 0 can be written as mGHT = 0
• Therefore GHT = 0.
• We can find H from Gsyst.
• H= [PT|In-k].

    H = [ 0 1 1 0 ]
        [ 1 0 0 1 ]

• H has dimensions (n-k)×n.
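The defining property GHT = 0 is easy to verify numerically for the Gsyst and H above:

    # Every row of G must be orthogonal (mod 2) to every row of H.
    G = [[1, 0, 0, 1],
         [0, 1, 1, 0]]
    H = [[0, 1, 1, 0],
         [1, 0, 0, 1]]

    GHt = [[sum(g[k] * h[k] for k in range(4)) % 2 for h in H] for g in G]
    print(GHt)  # [[0, 0], [0, 0]]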


Example Hamming (7,4) code

    G = [ 1 1 0 0 1 0 1 ]
        [ 0 1 1 0 1 0 0 ]
        [ 0 0 1 1 0 1 0 ]
        [ 0 0 0 1 1 0 1 ]

Find all of the codewords, find dmin, find H.
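A sketch of a brute-force answer to the first two parts: enumerate all 16 messages, encode each, and take the minimum nonzero weight (the code is linear, so this gives dmin):

    # Enumerate the codewords of the (7,4) code and compute dmin.
    from itertools import product

    G = [[1, 1, 0, 0, 1, 0, 1],
         [0, 1, 1, 0, 1, 0, 0],
         [0, 0, 1, 1, 0, 1, 0],
         [0, 0, 0, 1, 1, 0, 1]]

    codewords = [[sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]
                 for m in product([0, 1], repeat=4)]

    dmin = min(sum(c) for c in codewords if any(c))
    print(len(codewords), 'codewords, dmin =', dmin)  # 16 codewords, dmin = 3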


Decoding

• The received word, r = c+e, where e = error pattern.


• For example if c = (1 1 0 0 1 1 0 1) and r = (1 0 0 0 1
1 0 1), then e = (0 1 0 0 0 0 0 0).
• Assuming that errors occur independently with
probability p < 0.5
– Therefore, code bits are correctly detected with
probability (1-p)
• Lower weight error patterns are more probable than
higher weight ones.
Example
• C = {(00000) (01011) (10110) (11101)}
• r = (11111)
• If c = (00000), then e = (11111), which occurs with probability p^5.
• If c = (01011), then e = (10100), which occurs with probability p^2(1-p)^3.
• If c = (10110), then e = (01001), which occurs with probability p^2(1-p)^3.
• If c = (11101), then e = (00010), which occurs with probability p(1-p)^4 > p^2(1-p)^3 > p^5.
• Therefore receiver selects c = (11101) as most likely
transmitted codeword and outputs message that
corresponds to this codeword.
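A sketch of this maximum-likelihood rule: for each codeword, form e = r + c and score it with P(e) = p^w (1-p)^(n-w); since p < 0.5, the lowest-weight error pattern wins (p = 0.1 is an illustrative value):

    # ML decoding of r = (11111) for the example code.
    C = [(0,0,0,0,0), (0,1,0,1,1), (1,0,1,1,0), (1,1,1,0,1)]
    r = (1, 1, 1, 1, 1)
    p = 0.1

    def likelihood(e):
        w = sum(e)
        return p**w * (1 - p)**(len(e) - w)

    best = max(C, key=lambda c: likelihood([x ^ y for x, y in zip(r, c)]))
    print(best)  # (1, 1, 1, 0, 1)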
Standard Array Decoding

• A lookup table that maps received words to the most likely transmitted codewords.
• Each received word points to a memory address which holds the value of the most likely transmitted word.

    00000  01011  10110  11101
    00001  01010  10111  11100
    00010  01001  10100  11111
    00100  01111  10010  11001
    01000  00011  11110  10101
    10000  11011  00110  01101
    10001  11010  00111  01100
    11000  10011  01110  00101
How to Build Standard Array

• Write out all possible received words.


• Remove all codewords and place them at the top of the columns, with the all-zero codeword at the left (the leftmost column corresponds to the error pattern).
• Take lowest weight vector from remaining words and
place in left column. Add this vector to all codewords
and place result below that codeword.
– Remove all of these results from list of all possible
received words.
• Repeat until the list of possible received words is exhausted.
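A sketch of this construction for the example code; it reproduces the array shown earlier (ties among equal-weight leaders are broken lexicographically here):

    # Build the standard array row by row from lowest-weight coset leaders.
    from itertools import product

    C = [(0,0,0,0,0), (0,1,0,1,1), (1,0,1,1,0), (1,1,1,0,1)]

    def add(a, b):
        return tuple(x ^ y for x, y in zip(a, b))

    remaining = set(product([0, 1], repeat=5)) - set(C)
    array = [list(C)]                                  # top row: the codewords
    while remaining:
        e = min(remaining, key=lambda v: (sum(v), v))  # lowest-weight leader
        row = [add(e, c) for c in C]
        array.append(row)
        remaining -= set(row)

    for row in array:
        print(["".join(map(str, v)) for v in row])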
Syndrome decoding

• S = rHT.
• r=c+e, therefore S = (c+e)HT = cHT + eHT = eHT.
• All vectors in the same row of the standard array
produce the same syndrome.
• The syndrome points to a memory address which contains the most likely error pattern; the decoder then computes c = r+e.
Example

• For our code:

    G = [ 1 0 1 1 0 ]
        [ 0 1 0 1 1 ]

    H = [ 1 0 1 0 0 ]
        [ 1 1 0 1 0 ]
        [ 0 1 0 0 1 ]
Example continued

• Suppose r = (01001), then


                      [ 1 1 0 ]
                      [ 0 1 1 ]
    S = (0 1 0 0 1)   [ 1 0 0 ]   = (0 1 0)
                      [ 0 1 0 ]
                      [ 0 0 1 ]

• This indicates that the 4th bit is in error: e = (00010) and c = (01011).
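The same computation in a short sketch, using the H from the previous slide:

    # Syndrome decoding: S = rH^T; S matches the 4th column of H,
    # so the most likely error pattern is e = (00010).
    H = [[1, 0, 1, 0, 0],
         [1, 1, 0, 1, 0],
         [0, 1, 0, 0, 1]]
    r = [0, 1, 0, 0, 1]

    S = [sum(r[j] * H[i][j] for j in range(5)) % 2 for i in range(3)]
    print(S)  # [0, 1, 0]

    e = [0, 0, 0, 1, 0]
    c = [x ^ y for x, y in zip(r, e)]
    print(c)  # [0, 1, 0, 1, 1]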
Error correcting and Error
Detecting Capabilities of a code
• t = number of errors that the decoder can always correct.
• j = number of errors that the decoder can always detect.
• t = (dmin-1)/2 (dmin is odd) or (dmin-2)/2 (dmin is even).
• j = dmin - 1.
• We can have codes that both correct and detect errors; in that case t + j = dmin - 1, where j > t.
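A small worked check: the (5,2) example code has dmin = 3, so t = (3-1)/2 = 1 and j = 3-1 = 2.

    # t and j from dmin; floor division covers both the odd and even cases.
    def t_and_j(dmin):
        return (dmin - 1) // 2, dmin - 1

    print(t_and_j(3))  # (1, 2)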
Performance: Decoder Failure

• Probability of decoder failure = probability that the decoder selects the incorrect codeword = probability that the error pattern is not one of the error patterns that it can correct.
– In our example, the decoder can correct all 5 error patterns of weight 1 and 2 error patterns of weight 2. The probability that the error pattern IS one of these is (1-p)^5 + 5p(1-p)^4 + 2p^2(1-p)^3. Therefore
P(E) = 1 - (1-p)^5 - 5p(1-p)^4 - 2p^2(1-p)^3.
– In many cases, the code has too many codewords to construct a standard array.
– But we usually know dmin, therefore we know t.
Performance: Decoder Failure

    P(E) ≤ 1 - Σ_{i=0}^{t} C(n,i) p^i (1-p)^(n-i)
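Evaluating this expression is straightforward; a sketch with illustrative numbers n = 5, t = 1 and p = 0.01:

    # Decoder-failure bound using only n, t and p.
    from math import comb

    def p_failure(n, t, p):
        return 1 - sum(comb(n, i) * p**i * (1 - p)**(n - i)
                       for i in range(t + 1))

    print(p_failure(5, 1, 0.01))  # ~9.8e-4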
Performance: Bit Error Rate

• (1/k)P(E) < Pb < P(E)


Performance: Probability
Undetected Error
• P(U) = probability that an error is undetected =
probability that syndrome = 0 even if error pattern is
not 0 = probability that error pattern is same as a
codeword.
• In our example, P(U) = 2p^3(1-p)^2 + p^4(1-p).
• If we don’t know the codewords because code is too
large, then P(U) < probability error pattern has weight
greater than j = 1 – probability that error pattern has
weight j or less
    P(U) < 1 - Σ_{i=0}^{j} C(n,i) p^i (1-p)^(n-i)
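A sketch comparing the exact P(U) for our example (nonzero codeword weights 3, 3 and 4) against this bound, with illustrative values p = 0.01 and j = 2:

    # Exact undetected-error probability vs. the weight-based bound.
    from math import comb

    def p_undetected_bound(n, j, p):
        return 1 - sum(comb(n, i) * p**i * (1 - p)**(n - i)
                       for i in range(j + 1))

    p = 0.01
    exact = 2 * p**3 * (1 - p)**2 + p**4 * (1 - p)
    print(exact, p_undetected_bound(5, 2, p))  # the exact value is smaller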
