CC 03 Linear Block Code
Introduction to Linear Block Codes
Introduction to Linear Block Codes
5
Introduction to Linear Block Codes
6
Introduction to Linear Block Codes
    v = u0 g0 + u1 g1 + ··· + u(k−1) g(k−1)    (3.1)
◊ Let us arrange these k linearly independent code words as
the rows of a k × n matrix as follows:

        [ g0   ]   [ g00      g01      g02      ···  g0,n−1   ]
        [ g1   ]   [ g10      g11      g12      ···  g1,n−1   ]
    G = [  ⋮   ] = [  ⋮        ⋮        ⋮             ⋮       ]    (3.2)
        [ gk−1 ]   [ gk−1,0   gk−1,1   gk−1,2   ···  gk−1,n−1 ]

where gi = (gi0, gi1, …, gi,n−1) for 0 ≤ i < k
◊ Example 3.1
◊ The (7, 4) linear code given in Table 3.1 has the following
matrix as a generator matrix:

        [ g0 ]   [ 1 1 0 1 0 0 0 ]
    G = [ g1 ] = [ 0 1 1 0 1 0 0 ]
        [ g2 ]   [ 1 1 1 0 0 1 0 ]
        [ g3 ]   [ 1 0 1 0 0 0 1 ]
where pij = 0 or 1
◊ Example 3.2
◊ Consider the matrix G given in Example 3.1
◊ Let u = (u0, u1, u2, u3) be the message to be encoded
◊ Let v = (v0, v1, v2, v3, v4, v5, v6) be the corresponding code word
◊ Solution:

                                    [ 1 1 0 1 0 0 0 ]
    v = u · G = (u0, u1, u2, u3) ·  [ 0 1 1 0 1 0 0 ]
                                    [ 1 1 1 0 0 1 0 ]
                                    [ 1 0 1 0 0 0 1 ]
◊ By matrix multiplication, we obtain the following digits of the
code word v:
    v0 = u0 + u2 + u3
    v1 = u0 + u1 + u2
    v2 = u1 + u2 + u3
    v3 = u0
    v4 = u1
    v5 = u2
    v6 = u3
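The encoding rule v = u · G can be checked with a short Python sketch (a minimal illustration; G below is the generator matrix of Example 3.1):

```python
# Encode a 4-bit message u into a 7-bit code word v = u * G over GF(2).
G = [
    [1, 1, 0, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0, 1],
]

def encode(u, G):
    """Modulo-2 matrix product u * G."""
    n = len(G[0])
    return [sum(u[i] * G[i][j] for i in range(len(u))) % 2 for j in range(n)]

v = encode([1, 0, 1, 1], G)
print(v)  # → [1, 0, 0, 1, 0, 1, 1]; note v3..v6 reproduce the message
```

Since G has identity columns in positions 3 to 6, the message digits appear unchanged in the code word, matching v3 = u0, …, v6 = u3 above.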
◊ For any k × n matrix G with k linearly independent rows, there exists
an (n − k) × n matrix H with n − k linearly independent rows such that
any vector in the row space of G is orthogonal to the rows of H, and
any vector that is orthogonal to the rows of H is in the row space of
G.
◊ An n-tuple v is a code word in the code generated by G if and only if
v · HT = 0
◊ This matrix H is called a parity-check matrix of the code
◊ The 2^(n−k) linear combinations of the rows of matrix H form an
(n, n − k) linear code Cd
◊ This code is the null space of the (n, k) linear code C generated by
matrix G
◊ Cd is called the dual code of C
    gi · hj = pij + pij = 0
◊ Example 3.3
◊ Consider the generator matrix of the (7, 4) linear code given in
Example 3.1
◊ The corresponding parity-check matrix is

    H = [ 1 0 0 1 0 1 1 ]
        [ 0 1 0 1 1 1 0 ]
        [ 0 0 1 0 1 1 1 ]
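As a quick numeric sanity check (a sketch, not part of the text): every row of the G of Example 3.1 should be orthogonal to every row of this H, i.e. G · HT = 0:

```python
# Verify g_i · h_j = 0 over GF(2) for G (Example 3.1) and H (Example 3.3).
G = [
    [1, 1, 0, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0, 1],
]
H = [
    [1, 0, 0, 1, 0, 1, 1],
    [0, 1, 0, 1, 1, 1, 0],
    [0, 0, 1, 0, 1, 1, 1],
]

def dot2(a, b):
    """Inner product modulo 2."""
    return sum(x & y for x, y in zip(a, b)) % 2

print(all(dot2(g, h) == 0 for g in G for h in H))  # → True
```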
◊ Summary
◊ If G is of the form given by (3.4), then H may take the form
given by (3.7), and vice versa
◊ Based on equations (3.6a) and (3.6b), the encoding
circuit for an (n, k) linear systematic code can be
implemented easily
◊ The encoding circuit is shown in Fig. 3.2, where
◊ denotes a shift-register stage (flip-flop)
◊ pij denotes a connection if pij = 1 and no connection if pij = 0
◊ denotes a modulo-2 adder
◊ As soon as the entire message has entered the message register, the n − k
parity-check digits are formed at the outputs of the n − k modulo-2 adders
Syndrome and Error Detection
◊ e = r + v = (e0, e1, …, en-1) is an n-tuple
◊ ei = 1 for ri ≠ vi
◊ ei = 0 for ri = vi
◊ The n-tuple e is called the error vector (or error pattern)
◊ Based on (3.7) and (3.10), the syndrome digits are as
follows:

    s = r · HT,  where  HT = [ I(n−k) ]
                             [   P    ]
◊ Example 3.4
◊ The parity-check matrix is given in Example 3.3
◊ Let r = (r0, r1, r2, r3, r4, r5, r6) be the received vector
◊ The syndrome is given by

                                                       [ 1 0 0 ]
                                                       [ 0 1 0 ]
                                                       [ 0 0 1 ]
    s = (s0, s1, s2) = (r0, r1, r2, r3, r4, r5, r6) ·  [ 1 1 0 ]
                                                       [ 0 1 1 ]
                                                       [ 1 1 1 ]
                                                       [ 1 0 1 ]

    (Note: three bits can represent only eight cases.)
◊ The syndrome digits are
s0 = r0 + r3 + r5 + r6
s1 = r1 + r3 + r4 + r5
s2 = r2 + r4 + r5 + r6
◊ The syndrome circuit for this code is shown below
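The syndrome equations above can be checked with a short sketch that computes s = r · HT directly (the received vector used here is the one that appears later in Example 3.5):

```python
# Syndrome s = r * H^T for the (7,4) code of Example 3.3.
H = [
    [1, 0, 0, 1, 0, 1, 1],
    [0, 1, 0, 1, 1, 1, 0],
    [0, 0, 1, 0, 1, 1, 1],
]

def syndrome(r, H):
    """Each syndrome digit is the mod-2 inner product of r with a row of H."""
    return [sum(ri & hi for ri, hi in zip(r, row)) % 2 for row in H]

print(syndrome([1, 0, 0, 1, 0, 0, 1], H))  # → [1, 1, 1]
print(syndrome([1, 0, 0, 1, 0, 1, 1], H))  # → [0, 0, 0] (a code word)
```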
◊ The syndrome digits are linear combinations of the error
digits
◊ The syndrome digits can be used for error correction
◊ The n − k linear equations of (3.13) do not have a
unique solution but have 2^k solutions
◊ There are 2^k error patterns that result in the same syndrome,
and the true error pattern e is one of them
◊ The decoder has to determine the true error vector from a
set of 2^k candidates
◊ To minimize the probability of a decoding error, the most
probable error pattern that satisfies the equations of (3.13)
is chosen as the true error vector
◊ Example 3.5
◊ We consider the (7, 4) code whose parity-check matrix is given in
Example 3.3
◊ Let v = (1 0 0 1 0 1 1) be the transmitted code word
◊ Let r = (1 0 0 1 0 0 1) be the received vector
◊ The receiver computes the syndrome
    s = r · HT = (1 1 1)
◊ The receiver attempts to determine the true error vector
e = (e0, e1, e2, e3, e4, e5, e6), which yields the syndrome above:
    1 = e0 + e3 + e5 + e6
    1 = e1 + e3 + e4 + e5
    1 = e2 + e4 + e5 + e6
◊ There are 2^4 = 16 error patterns that satisfy the equations above
◊ The error vector e = (0 0 0 0 0 1 0) has the smallest number of
nonzero components
◊ If the channel is a BSC, e = (0 0 0 0 0 1 0) is the most probable
error vector that satisfies the equations above
◊ Taking e = (0 0 0 0 0 1 0) as the true error vector, the receiver
decodes the received vector r = (1 0 0 1 0 0 1) into the following
code word
    v* = r + e = (1 0 0 1 0 0 1) + (0 0 0 0 0 1 0) = (1 0 0 1 0 1 1)
where v* is the actual transmitted code word
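The decoder's search for the most probable (minimum-weight) error pattern with the observed syndrome can be sketched by brute force over all 2^7 patterns (fine at this size; practical decoders use a lookup table, as in the syndrome-decoding section):

```python
from itertools import product

H = [
    [1, 0, 0, 1, 0, 1, 1],
    [0, 1, 0, 1, 1, 1, 0],
    [0, 0, 1, 0, 1, 1, 1],
]

def syndrome(r, H):
    return [sum(ri & hi for ri, hi in zip(r, row)) % 2 for row in H]

r = (1, 0, 0, 1, 0, 0, 1)
s = syndrome(r, H)
# Among all error patterns with this syndrome, pick the lowest-weight one.
e = min((e for e in product([0, 1], repeat=7) if syndrome(e, H) == s),
        key=sum)
v = tuple(a ^ b for a, b in zip(r, e))
print(e, v)  # → (0, 0, 0, 0, 0, 1, 0) (1, 0, 0, 1, 0, 1, 1)
```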
The Minimum Distance of a Block Code
Wireless Information Transmission System Lab.
Institute of Communications Engineering
National Sun Yat-sen University
◊ The Hamming distance is a metric function that satisfies
the triangle inequality
    d(v, w) + d(w, x) ≥ d(v, x) (3.14)
◊ The proof of this inequality is left as a problem
◊ It follows from the definition of Hamming distance and the
definition of modulo-2 addition that the Hamming distance
between two n-tuples, v and w, is equal to the Hamming
weight of the sum of v and w, that is
    d(v, w) = w(v + w) (3.15)
◊ For example, the Hamming distance between v = (1 0 0 1 0 1 1)
and w = (1 1 1 0 0 1 0) is 4, and the weight of v + w = (0 1 1 1 0 0 1)
is also 4
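Relation (3.15) is easy to confirm numerically; a small sketch using the example vectors above:

```python
def hamming_distance(v, w):
    """Number of positions in which v and w differ."""
    return sum(a != b for a, b in zip(v, w))

def weight(x):
    """Number of nonzero components."""
    return sum(x)

v = [1, 0, 0, 1, 0, 1, 1]
w = [1, 1, 1, 0, 0, 1, 0]
vw = [a ^ b for a, b in zip(v, w)]  # modulo-2 sum v + w
print(hamming_distance(v, w), weight(vw))  # → 4 4
```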
◊ Given a block code C, the minimum distance of C, denoted dmin, is
defined as
    dmin = min{d(v, w) : v, w ∈ C, v ≠ w} (3.16)
         = min{w(v + w) : v, w ∈ C, v ≠ w}
         = min{w(x) : x ∈ C, x ≠ 0} (3.17)
         ≡ wmin
◊ The parameter wmin ≡ min{w(x) : x ∈ C, x ≠ 0} is called the minimum
weight of the linear code C
◊ Proof
◊ Suppose that hi1, hi2, …, hil are l columns of H such that
    hi1 + hi2 + ··· + hil = 0 (3.18)
◊ Let x = (x0, x1, …, xn−1) be the n-tuple whose nonzero components
are xi1, xi2, …, xil
    x · HT = x0h0 + x1h1 + ··· + xn−1hn−1
           = xi1hi1 + xi2hi2 + ··· + xilhil
           = hi1 + hi2 + ··· + hil
◊ It follows from (3.18) that x · HT = 0, so x is a code vector of
weight l in C
Error-Detecting and Error-Correcting Capabilities of a Block Code
◊ An (n, k) linear code is capable of detecting 2^n − 2^k error
patterns of length n
◊ Among the 2^n − 1 possible nonzero error patterns, there are
2^k − 1 error patterns that are identical to the 2^k − 1 nonzero
code words
◊ If any of these 2^k − 1 error patterns occurs, it alters the
transmitted code word v into another code word w; thus w
will be received and its syndrome is zero
◊ There are 2^k − 1 undetectable error patterns
◊ If an error pattern is not identical to a nonzero code word,
the received vector r will not be a code word and the
syndrome will not be zero
◊ Since v and w are code vectors in C, we have d(v, w) ≥ dmin ≥ 2t + 1.
◊ Suppose an error pattern of t′ errors occurs, so that d(v, r) = t′.
By the triangle inequality,
    d(w, r) ≥ d(v, w) − d(v, r)
           ≥ dmin − t′
           ≥ 2t + 1 − t′
           ≥ t + 1 (if t ≥ t′)
           > t, hence d(w, r) > t′
◊ The inequality above says that if an error pattern of t or fewer
errors occurs, the received vector r is closer (in Hamming
distance) to the transmitted code vector v than to any other code
vector w in C.
◊ For a BSC, this means that the conditional probability P(r|v) is
greater than the conditional probability P(r|w) for w ≠ v. Q.E.D.
◊ Fact 2: The code is not capable of correcting all the error patterns of
l errors with l > t, for there is at least one case where an error pattern
of l errors results in a received vector which is closer to an incorrect
code vector than to the actual transmitted code vector.
◊ Proof:
◊ Let v and w be two code vectors in C such that d(v, w) = dmin.
◊ Let e1 and e2 be two error patterns that satisfy the following
conditions:
◊ e1 + e2 = v + w
◊ e1 and e2 do not have nonzero components in common places
◊ These conditions imply w(e1) + w(e2) = w(e1 + e2) = w(v + w) = dmin
◊ Suppose that v is transmitted and is corrupted by the error pattern
e1; then the received vector is
    r = v + e1
◊ The Hamming distance between v and r is
    d(v, r) = w(v + r) = w(e1) (3.24)
◊ The Hamming distance between w and r is
    d(w, r) = w(w + r) = w(w + v + e1) = w(e2) (3.25)
◊ Now suppose that the error pattern e1 contains more than t errors
[i.e., w(e1) ≥ t + 1].
◊ Since 2t + 1 ≤ dmin ≤ 2t + 2, it follows from (3.23) that
    w(e2) = dmin − w(e1) ≤ (2t + 2) − (t + 1) = t + 1
◊ Combining (3.24) and (3.25) and using the fact that w(e1) ≥ t + 1
and w(e2) ≤ t + 1, we have
    d(v, r) ≥ d(w, r)
◊ This inequality says that there exists an error pattern of l (l > t)
errors which results in a received vector that is closer to an
incorrect code vector than to the transmitted code vector.
◊ Based on the maximum likelihood decoding scheme, an incorrect
decoding would be committed. Q.E.D.
◊ A block code with minimum distance dmin guarantees
correcting all the error patterns of t = ⌊(dmin − 1)/2⌋ or fewer
errors, where ⌊(dmin − 1)/2⌋ denotes the largest integer no
greater than (dmin − 1)/2
◊ The parameter t = ⌊(dmin − 1)/2⌋ is called the random-error-
correcting capability of the code
◊ The code is referred to as a t-error-correcting code
◊ A block code with random-error-correcting capability t is
usually capable of correcting many error patterns of t + 1
or more errors
◊ A t-error-correcting (n, k) linear code is capable of
correcting a total of 2^(n−k) error patterns (shown in the next section).
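By (3.17), dmin of a linear code equals the minimum nonzero codeword weight, so t can be computed by enumerating codewords. A sketch for the (7, 4) code of Example 3.1:

```python
from itertools import product

G = [
    [1, 1, 0, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0, 1],
]

def encode(u, G):
    return [sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

# d_min = minimum weight over all nonzero messages, per (3.17).
d_min = min(sum(encode(list(u), G)) for u in product([0, 1], repeat=4) if any(u))
t = (d_min - 1) // 2  # floor((d_min - 1) / 2)
print(d_min, t)  # → 3 1
```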
◊ If a t-error-correcting block code is used strictly for error correction
on a BSC with transition probability p, the probability that the
decoder commits an erroneous decoding is upper bounded by:

    P(E) ≤ Σ_{i=t+1}^{n} C(n, i) p^i (1 − p)^(n−i)

where C(n, i) is the binomial coefficient.
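The bound is just the probability of more than t channel errors, so it can be evaluated directly; a sketch for a single-error-correcting code of length 7 at an illustrative p = 0.01:

```python
from math import comb

def decoding_error_bound(n, t, p):
    """Upper bound on P(E): probability of more than t errors in n bits."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

print(decoding_error_bound(7, 1, 0.01))  # ≈ 2.0e-3
```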
Standard Array and Syndrome Decoding
◊ First, the 2^k code vectors of C are placed in a row, with the all-
zero code vector v1 = (0, 0, …, 0) as the first (leftmost) element
◊ From the remaining 2^n − 2^k n-tuples, an n-tuple e2 is chosen and is
placed under the zero vector v1
◊ Now, we form a second row by adding e2 to each code vector vi
in the first row and placing the sum e2 + vi under vi
◊ An unused n-tuple e3 is chosen from the remaining n-tuples and
is placed under e2.
◊ Then a third row is formed by adding e3 to each code vector vi in
the first row and placing e3 + vi under vi.
◊ We continue this process until all the n-tuples are used.
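The construction can be sketched directly; choosing each new leader as a minimum-weight unused n-tuple yields the array used later for decoding (here for the (7, 4) code of Example 3.1):

```python
from itertools import product

G = [
    [1, 1, 0, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0, 1],
]
codewords = [tuple(sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7))
             for u in product([0, 1], repeat=4)]

rows, used = [], set()
# Scan n-tuples in order of increasing weight; each unused one starts a coset.
for e in sorted(product([0, 1], repeat=7), key=sum):
    if e in used:
        continue
    coset = [tuple(a ^ b for a, b in zip(e, c)) for c in codewords]
    rows.append(coset)
    used.update(coset)

print(len(rows), len(used))  # → 8 128  (2^(n-k) cosets partition all 2^n tuples)
```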
◊ Proof
◊ It follows from the construction rule of the standard array that
every n-tuple appears at least once
◊ Suppose that an n-tuple appears in both the lth row and the mth
row, with l < m
◊ Then this n-tuple must be equal to el + vi for some i and equal to
em + vj for some j
◊ As a result, el + vi = em + vj
◊ From this equality we obtain em = el + (vi + vj)
◊ Since vi and vj are code vectors in C, vi + vj is also a code vector
in C, say vs
◊ This implies that the n-tuple em is in the lth row of the array,
which contradicts the construction rule of the array that em, the
first element of the mth row, should be unused in any previous
row
◊ No n-tuple can appear in more than one row of the array
◊ Example 3.7
◊ The standard array for this code is shown in Fig. 3.7
◊ The weight distribution of the coset leaders is α0 = 1, α1 = 6,
α2 = 1, and α3 = α4 = α5 = α6 = 0
◊ Thus,
    P(E) = 1 − (1 − p)^6 − 6p(1 − p)^5 − p^2 (1 − p)^4
◊ For p = 10^−2, we have P(E) ≈ 1.37 × 10^−3
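The arithmetic can be reproduced in one line (a sketch; α values are those quoted above, and n = 6 for this code):

```python
# P(E) = 1 - sum_i alpha_i * p^i * (1-p)^(n-i), with coset-leader
# weight distribution alpha = (1, 6, 1, 0, 0, 0, 0) and n = 6.
alpha = [1, 6, 1, 0, 0, 0, 0]
p, n = 1e-2, 6
P_E = 1 - sum(a * p**i * (1 - p)**(n - i) for i, a in enumerate(alpha))
print(P_E)  # ≈ 1.4e-3, in line with the quoted 1.37 × 10^-3
```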
◊ Proof
◊ Let ej and el be the coset leaders of the jth and lth cosets,
respectively, where j < l
◊ Suppose that the syndromes of these two cosets are equal
◊ Then,
    ej HT = el HT
    (ej + el) HT = 0
◊ This implies that ej + el is a code vector in C, say vi
◊ Thus, ej + el = vi and el = ej + vi
◊ This implies that el is in the jth coset, which contradicts the
construction rule of a standard array that a coset leader should be
previously unused
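Because coset leaders and syndromes are in one-to-one correspondence, practical decoding uses a syndrome-to-leader lookup table. A sketch for the (7, 4) code:

```python
from itertools import product

H = [
    [1, 0, 0, 1, 0, 1, 1],
    [0, 1, 0, 1, 1, 1, 0],
    [0, 0, 1, 0, 1, 1, 1],
]

def syndrome(r):
    return tuple(sum(ri & hi for ri, hi in zip(r, row)) % 2 for row in H)

# Build the table: scanning by increasing weight makes the first pattern
# stored for each syndrome a minimum-weight one, i.e. a coset leader.
table = {}
for e in sorted(product([0, 1], repeat=7), key=sum):
    table.setdefault(syndrome(e), e)

r = (1, 0, 0, 1, 0, 0, 1)                    # received vector of Example 3.5
v = tuple(a ^ b for a, b in zip(r, table[syndrome(r)]))
print(len(table), v)  # → 8 (1, 0, 0, 1, 0, 1, 1)
```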
Probability of an Undetected Error for Linear Codes Over a BSC
◊ The polynomials A(z) and B(z) are called the weight
enumerators for the (n, k) linear code C and its dual Cd
◊ Using the MacWilliams identity, we can compute the
probability of an undetected error for an (n, k) linear code
from the weight distribution of its dual.
◊ From equation (3.19):

    Pu(E) = Σ_{i=1}^{n} Ai p^i (1 − p)^(n−i)
          = (1 − p)^n Σ_{i=1}^{n} Ai (p / (1 − p))^i (3.33)
◊ Substituting z = p/(1 − p) in A(z) of (3.31) and using the
fact that A0 = 1, we obtain

    A(p / (1 − p)) − 1 = Σ_{i=1}^{n} Ai (p / (1 − p))^i (3.34)

◊ Combining (3.33) and (3.34), we have the following
expression for the probability of an undetected error

    Pu(E) = (1 − p)^n [ A(p / (1 − p)) − 1 ] (3.35)
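Equation (3.35) can be sketched numerically; here A(z) is obtained by enumerating the 16 code words of the (7, 4) code of Example 3.1 (its weight distribution comes out as A(z) = 1 + 7z^3 + 7z^4 + z^7):

```python
from itertools import product
from collections import Counter

G = [
    [1, 1, 0, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0, 1],
]
# A_i = number of code words of weight i.
weights = Counter(sum(sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7))
                  for u in product([0, 1], repeat=4))

def Pu(p, n=7):
    """(3.35): Pu(E) = (1-p)^n [A(p/(1-p)) - 1]."""
    z = p / (1 - p)
    A = sum(a * z**w for w, a in weights.items())
    return (1 - p)**n * (A - 1)

print(weights[3], weights[4], weights[7])  # → 7 7 1
print(Pu(0.01))  # ≈ 7e-6
```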
◊ From (3.35) and the MacWilliams identity of (3.32), we
finally obtain the following expression for Pu(E):

    Pu(E) = 2^−(n−k) B(1 − 2p) − (1 − p)^n (3.36)

where

    B(1 − 2p) = Σ_{i=0}^{n} Bi (1 − 2p)^i

◊ Hence, there are two ways of computing the probability of
an undetected error for a linear code; often one is easier
than the other.
◊ If n − k is smaller than k, it is much easier to compute Pu(E)
from (3.36); otherwise, it is easier to use (3.35).
◊ Example 3.10: Consider the (7, 4) linear code given in Table 3.1
◊ The dual of this code is generated by its parity-check matrix

    H = [ 1 0 0 1 0 1 1 ]
        [ 0 1 0 1 1 1 0 ]
        [ 0 0 1 0 1 1 1 ]

◊ Taking the linear combinations of the rows of H, we obtain the
following eight vectors in the dual code:
    (0 0 0 0 0 0 0), (1 1 0 0 1 0 1),
    (1 0 0 1 0 1 1), (1 0 1 1 1 0 0),
    (0 1 0 1 1 1 0), (0 1 1 1 0 0 1),
    (0 0 1 0 1 1 1), (1 1 1 0 0 1 0)
◊ Thus, the weight enumerator for the dual code is
    B(z) = 1 + 7z^4
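This can be confirmed by tallying the weights of the eight dual code words (a small sketch that regenerates them from the rows of H):

```python
from itertools import product
from collections import Counter

H = [
    [1, 0, 0, 1, 0, 1, 1],
    [0, 1, 0, 1, 1, 1, 0],
    [0, 0, 1, 0, 1, 1, 1],
]
# All 2^3 linear combinations of the rows of H form the dual code.
dual = {tuple(sum(c * row[j] for c, row in zip(coeffs, H)) % 2 for j in range(7))
        for coeffs in product([0, 1], repeat=3)}
dist = Counter(sum(v) for v in dual)
print(sorted(dist.items()))  # → [(0, 1), (4, 7)], i.e. B(z) = 1 + 7z^4
```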
◊ For large n, k, and n − k, the computation becomes
practically impossible
◊ Except for some short linear codes and a few small classes
of linear codes, the weight distributions of many known
linear codes are still unknown
◊ It is quite easy to derive an upper bound on the average probability
of an undetected error for the ensemble of all (n, k) linear systematic
codes:

    Pu(E) ≤ 2^−(n−k) Σ_{i=1}^{n} C(n, i) p^i (1 − p)^(n−i)
          = 2^−(n−k) [1 − (1 − p)^n] (3.42)
Hamming Codes
◊ These codes and their variations have been widely used for
error control in digital communication and data storage
systems
◊ For any positive integer m ≥ 3, there exists a Hamming
code with the following parameters:
◊ Code length: n = 2^m − 1
◊ Number of information symbols: k = 2^m − m − 1
◊ Number of parity-check symbols: n − k = m
◊ Error-correcting capability: t = 1 (dmin = 3)
◊ The parity-check matrix H of this code consists of all the nonzero
m-tuples as its columns (2^m − 1 columns).
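This description translates directly into code; a sketch that builds H for any m by listing all nonzero m-tuples as columns (the column ordering here is arbitrary):

```python
def hamming_H(m):
    """m x (2^m - 1) parity-check matrix: columns are all nonzero m-tuples."""
    n = 2**m - 1
    cols = [[(j >> i) & 1 for i in range(m)] for j in range(1, n + 1)]
    return [[col[i] for col in cols] for i in range(m)]  # transpose to rows

H = hamming_H(3)
for row in H:
    print(row)
print(len(H), "x", len(H[0]))  # → 3 x 7
```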
Hamming Codes
◊ For example, if we delete from the submatrix Q all the columns of
g , we obtain an m x 2m-1 matrix.
even weight,
H’=[Im Q’]
◊ Q’ consists of 2m-1-m columns of odd weight.
◊ When a single error occurs during the transmission of a code
vector, the resultant syndrome is nonzero and it contains an odd
number of 1's (e · H′T corresponds to a column in H′)
◊ When double errors occur, the syndrome is nonzero, but it
contains an even number of 1's
◊ Decoding can be accomplished in the following manner:
◊ If the syndrome s is zero, we assume that no error occurred
◊ If s is nonzero and it contains an odd number of 1's, we assume that
a single error occurred. The error pattern of a single error that
corresponds to s is added to the received vector for error
correction
◊ If s is nonzero and it contains an even number of 1's, an
uncorrectable error pattern has been detected
◊ The dual code of a (2^m − 1, 2^m − m − 1) Hamming code is a
(2^m − 1, m) linear code
◊ If a Hamming code is used for error detection over a BSC,
its probability of an undetected error, Pu(E), can be
computed either from (3.35) and (3.43) or from (3.36) and
(3.44)
◊ Computing Pu(E) from (3.36) and (3.44) is easier
◊ Combining (3.36) and (3.44), we obtain

    Pu(E) = 2^−m {1 + (2^m − 1)(1 − 2p)^(2^(m−1))} − (1 − p)^(2^m − 1)
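The closed form is easy to evaluate; a sketch (for m = 3 this is the (7, 4) code, and the value agrees with what Example 3.10's enumerators give):

```python
def hamming_pu(m, p):
    """Pu(E) = 2^-m {1 + (2^m - 1)(1 - 2p)^(2^(m-1))} - (1 - p)^(2^m - 1)."""
    n = 2**m - 1
    return 2.0**-m * (1 + (2**m - 1) * (1 - 2 * p)**(2**(m - 1))) - (1 - p)**n

print(hamming_pu(3, 0.01))  # ≈ 7e-6
```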