Channel Coding
Presented by: Salah Zeyad Radwan
Presented to: Dr. Mohab Mangoud
Outline
1. Introduction
2. Linear Block Codes
   2.1 Binary Linear Block Codes
   2.2 Generator Matrix
   2.3 Syndrome Testing & LUT
3. Convolutional Codes
   3.1 Mathematical Representation
   3.2 Tree Diagram
   3.3 Trellis Diagram
   3.4 Viterbi Algorithm
4. Interleaver
5. Concatenated Codes
6. Conclusion
7. References
1. Introduction
[Block diagram of the transmission chain: the information to be transmitted passes through source encoding, channel encoding and modulation at the transmitter, and through demodulation and decoding at the receiver.]
1. Introduction
Channel encoding: the addition of redundant symbols used to correct data errors.
Modulation: conversion of symbols to a waveform for transmission.
Demodulation: conversion of the waveform back to symbols.
Decoding: using the redundant symbols to correct errors.
1. Introduction
What is Coding?
Coding is the conversion of information into another form for some purpose.
Source coding: the purpose is to lower the redundancy in the information (e.g. ZIP, JPEG, MPEG-2).
Channel coding: the purpose is to defeat channel noise.
1. Introduction
Channel coding dates back to 1948, when Claude Shannon, working on information theory, showed that a channel of bandwidth W and signal-to-noise ratio S/N has capacity
C = W log2(1 + S/N)
and that reliable communication is possible at any rate below C.
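For example, a channel with bandwidth W = 1 MHz and signal-to-noise ratio S/N = 15 (about 11.8 dB) has capacity C = 10^6 * log2(1 + 15) = 4 Mbit/s; channel coding is what allows practical systems to approach this limit.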
1. Introduction
Channel coding can be partitioned into two study areas:
1. Waveform coding: transforming waveforms into better waveforms, so that the detection process is less subject to errors.
2. Structured sequences: transforming data sequences into better sequences with structured redundancy, so that the redundant bits can be used to detect and correct errors.
In this study I am going to describe codes designed for AWGN channels, which requires a background in block and convolutional codes.
2. Linear Block Codes

[Block encoder diagram: a k-symbol data block (k bits) enters the block encoder, which produces an n-symbol codeword c (n bits).]
This system may be written in compact form using matrix notation:
m = [m0, m1, m2, ..., m(k-1)]        message vector, 1-by-k
b = [b0, b1, b2, ..., b(n-k-1)]      parity vector, 1-by-(n-k)
c = [c0, c1, c2, ..., c(n-1)]        code vector, 1-by-n
The parity bits are generated from the message by b = mP, where P is the k-by-(n-k) coefficient matrix, so the codeword can be written as
c = [b  m] = m [P  Ik]   (3)
where Ik is the k-by-k identity matrix. The generator matrix is defined as
G = [P  Ik]   (4)
Substituting (4) into (3) we get
c = mG   (5)
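As a concrete illustration, the sketch below encodes a message with c = mG over GF(2); the particular (7,4) coefficient matrix P used here is an assumption chosen for the example, not one taken from the slides.

import numpy as np

# Assumed (7,4) example: k = 4 message bits, n - k = 3 parity bits.
# This particular coefficient matrix P is an illustration only.
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])              # k-by-(n-k)
I_k = np.eye(4, dtype=int)             # k-by-k identity matrix
G = np.hstack([P, I_k])                # generator matrix G = [P  Ik]

m = np.array([1, 0, 1, 1])             # 1-by-k message vector
c = m @ G % 2                          # codeword c = mG over GF(2)
print(c)                               # parity bits followed by the message bits

Because G = [P  Ik], the last k bits of every codeword are the message itself, so the code is systematic.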
3. Convolutional Codes
Convolutional codes are fundamentally different from block codes: it is not possible to separate the code into independent blocks. Instead, each code bit depends on a certain number of previous data bits.
3. Convolutional Codes
[Rate-1/2 convolutional encoder: each data bit enters a two-stage shift register (D1, D2); two modulo-2 adders form the outputs x1 and x2, which the MUX interleaves into the code sequence.]

Example encodings, starting from the all-zero state:

Input:  1 1 1 0 0 0    Output:  11 01 10 01 11 00
Input:  1 0 1 0 0 0    Output:  11 10 00 10 11 00
3. Convolutional Codes
The previous convolutional encoder can be represented mathematically by its generator polynomials:
g(1)(D) = 1 + D^2
g(2)(D) = 1 + D + D^2
For the message sequence (1001), the message polynomial is m(D) = 1 + D^3. Hence the output polynomials of paths 1 and 2 are
c(1)(D) = g(1)(D) m(D) = 1 + D^2 + D^3 + D^5
c(2)(D) = g(2)(D) m(D) = 1 + D + D^2 + D^3 + D^4 + D^5
By multiplexing the two output sequences we get the code sequence
c = (11, 01, 11, 11, 01, 11)
which is nonsystematic (the message bits do not appear explicitly in the codeword).
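The same encoding can be reproduced with a short sketch (an assumed helper written for this example, not code from the slides); the generator polynomials are given as tap vectors with the lowest power first.

def conv_encode(msg, g1=(1, 0, 1), g2=(1, 1, 1)):
    """Rate-1/2 encoder with g1 = 1 + D^2 and g2 = 1 + D + D^2
    (taps listed lowest power first)."""
    bits = list(msg) + [0, 0]          # two tail bits flush the register
    reg = [0, 0]                       # [D1, D2] = the two previous inputs
    out = []
    for u in bits:
        window = [u] + reg             # current input plus register contents
        c1 = sum(b * t for b, t in zip(window, g1)) % 2
        c2 = sum(b * t for b, t in zip(window, g2)) % 2
        out.append(f"{c1}{c2}")
        reg = [u, reg[0]]              # shift: D2 <- D1, D1 <- u
    return out

print(conv_encode([1, 0, 0, 1]))       # ['11', '01', '11', '11', '01', '11']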
3. Convolutional Codes
Systematic Convolutional Encoder

[Diagram: each data bit is passed directly to the output as the systematic bit, while the shift register (D1, D2) and a modulo-2 adder generate a parity bit; the MUX combines the data and parity streams into the code sequence.]
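A systematic version can be sketched in the same style: the data bit itself is emitted as the first output and a single parity bit is formed from the shift register. The parity polynomial 1 + D + D^2 used below is an assumption for illustration, since the figure does not fix the exact taps.

def systematic_conv_encode(msg, parity_taps=(1, 1, 1)):
    """Systematic rate-1/2 encoder: each output pair is (data bit, parity bit).
    parity_taps = (1, 1, 1) corresponds to the assumed polynomial 1 + D + D^2."""
    bits = list(msg) + [0, 0]          # tail bits flush the register
    reg = [0, 0]                       # [D1, D2]
    out = []
    for u in bits:
        window = [u] + reg
        p = sum(b * t for b, t in zip(window, parity_taps)) % 2
        out.append(f"{u}{p}")          # systematic bit first, then the parity bit
        reg = [u, reg[0]]
    return out

print(systematic_conv_encode([1, 0, 0, 1]))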
3. Convolutional Codes
The structural properties of a convolutional encoder can be portrayed in graphical form by using any one of the following:
1. Code tree
2. Trellis
3. State diagram
[The same rate-1/2 encoder as before (data input; shift register D1, D2; outputs x1, x2; MUX) is used in the diagrams that follow.]
3. Convolutional Codes
Tree Diagram

[Code tree for the encoder: at each node the encoder branches according to the input bit (0 or 1), and each branch is labelled with the two output bits it produces. Tracing the branches selected by the example input 11001 gives the corresponding output sequence.]
3. Convolutional Codes
Trellis for the convolutional encoder: we may collapse the code tree in the previous slides into a more compact form called a trellis.
[Trellis diagram for the encoder; each branch is labelled with the two output bits produced by the corresponding transition.]

State   Binary description
a       00
b       10
c       01
d       11
3. Convolutional Codes
Trellis for the convolutional encoder (states a = 00, b = 10, c = 01, d = 11).

[Trellis diagram with the branch labels for every state transition.]

For incoming data 1001 the generated code sequence becomes 11 01 11 11.
3. Convolutional Codes
Viterbi Algorithm
3. Convolutional Codes
The Viterbi algorithm finds the trellis path closest, in Hamming distance, to the received sequence: at each stage it computes the accumulated metric of the paths entering every state and keeps only the survivor with the smallest metric.

Example: decoding the all-zero code sequence when the received sequence is 01 00 10 00 00 (two channel errors).

[Trellis: branch metrics and accumulated path metrics after the first decoding steps.]
[Successive trellis stages for the received sequence 01 00 10 00 00: at each step the accumulated Hamming metrics of the two paths entering each state are compared and only the survivor with the smaller metric is kept. After the final stage the surviving path is the all-zero path, so the two channel errors are corrected and the decoded sequence is all zeros.]
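The decoding steps above can be reproduced with a minimal hard-decision Viterbi sketch (an assumed implementation written for this example): it keeps the survivor path entering each state and finally returns the path with the smallest accumulated metric.

def viterbi_decode(received_pairs, g1=(1, 0, 1), g2=(1, 1, 1)):
    """Hard-decision Viterbi decoder for the rate-1/2 encoder used above.
    States are (D1, D2); the branch metric is the Hamming distance
    between the branch output and the received pair."""
    states = [(0, 0), (1, 0), (0, 1), (1, 1)]               # a, b, c, d
    metric = {s: (0 if s == (0, 0) else float("inf")) for s in states}
    paths = {s: [] for s in states}
    for r in received_pairs:
        new_metric = {s: float("inf") for s in states}
        new_paths = {}
        for (d1, d2) in states:
            if metric[(d1, d2)] == float("inf"):
                continue                                    # state not yet reached
            for u in (0, 1):                                # try both input bits
                window = (u, d1, d2)
                c1 = sum(b * t for b, t in zip(window, g1)) % 2
                c2 = sum(b * t for b, t in zip(window, g2)) % 2
                branch = (c1 != int(r[0])) + (c2 != int(r[1]))
                nxt = (u, d1)                               # shift register update
                cand = metric[(d1, d2)] + branch
                if cand < new_metric[nxt]:                  # keep the better survivor
                    new_metric[nxt] = cand
                    new_paths[nxt] = paths[(d1, d2)] + [u]
        metric, paths = new_metric, new_paths
    best = min(states, key=lambda s: metric[s])             # smallest final metric
    return paths[best]

print(viterbi_decode(["01", "00", "10", "00", "00"]))       # -> [0, 0, 0, 0, 0]

For the received sequence 01 00 10 00 00 the surviving path has metric 2 and corresponds to the all-zero data sequence, so both channel errors are corrected.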
3. Convolutional Codes
State diagram of the convolutional code

[Diagram: the left nodes represent the four possible current states of the encoder (a, b, c, d) and the right nodes represent the next states; each branch is labelled with the output bits. We can coalesce the left and right nodes to get the state diagram.]
A portion of the central part of the trellis for the encoder
3. Convolutional Codes
[State diagram: each of the four states has two outgoing transitions, one for input 0 and one for input 1, labelled with the corresponding pair of output bits.]

State   Binary description
a       00
b       10
c       01
d       11
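The state diagram can also be tabulated directly. The short sketch below (an assumed helper, using the same generator polynomials as the encoder above) lists, for each current state and input bit, the next state and the output pair; the printed rows match the branch labels in the trellis.

def transition_table(g1=(1, 0, 1), g2=(1, 1, 1)):
    """Tabulate the state diagram of the rate-1/2 encoder: for every
    current state and input bit, print the next state and the output pair."""
    names = {(0, 0): "a", (1, 0): "b", (0, 1): "c", (1, 1): "d"}
    print("state  input  next state  output")
    for (d1, d2), name in names.items():
        for u in (0, 1):
            window = (u, d1, d2)
            c1 = sum(b * t for b, t in zip(window, g1)) % 2
            c2 = sum(b * t for b, t in zip(window, g2)) % 2
            print(f"  {name}      {u}        {names[(u, d1)]}         {c1}{c2}")

transition_table()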
4. Interleaver
The simplest type of interleaver is sometimes known as a rectangular or block interleaver. An interleaver is commonly denoted by the Greek letter Π and its corresponding de-interleaver by Π^-1. The original order can then be restored by the corresponding de-interleaver.
4. Interleaver

Example: a block interleaver with a 5-by-4 array. The data sequence 1, 2, 3, ..., 20 is written into the array down the columns:

1    6    11   16
2    7    12   17
3    8    13   18
4    9    14   19
5    10   15   20

The interleaver Π reads the array out across the rows, producing the interleaved data

1 6 11 16 2 7 12 17 3 ...

The de-interleaver Π^-1 writes the received sequence back into the same array across the rows and reads it out down the columns, restoring the original order 1 2 3 4 5 6 7 8 9 ...
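The 5-by-4 example above can be expressed in a few lines; the function names below are assumptions chosen for illustration.

def interleave(data, rows=5, cols=4):
    """Block interleaver: write the data down the columns of a rows-by-cols
    array, then read the array out across the rows."""
    table = [[data[c * rows + r] for c in range(cols)] for r in range(rows)]
    return [x for row in table for x in row]

def deinterleave(data, rows=5, cols=4):
    """De-interleaver: the inverse operation, restoring the original order."""
    return interleave(data, rows=cols, cols=rows)

seq = list(range(1, 21))
mixed = interleave(seq)
print(mixed[:9])                    # [1, 6, 11, 16, 2, 7, 12, 17, 3]
print(deinterleave(mixed) == seq)   # True

A burst of channel errors in the interleaved stream is spread out after de-interleaving, so the decoder that follows sees the errors as isolated rather than clustered.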
5. Concatenated Codes
We have seen that the power of FEC codes increases with length k and approaches the Shannon bound only at very large k, but also that decoding complexity increases very rapidly with k. This suggests that it would be desirable to build a long, complex code out of much shorter component codes, which can be decoded much more easily.
5. Concatenated Codes
The principle is to feed the output of one encoder (called the outer encoder) to the input of another encoder, and so on, as required. The final encoder before the channel is known as the inner encoder. The resulting composite code is clearly much more complex than any of the individual codes.
5. Concatenated Codes
[Block diagram: Outer encoder → Interleaver → Inner encoder → Channel → Inner decoder → De-interleaver → Outer decoder]
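The composite structure can be sketched as a simple composition of functions. The component codes below are deliberately trivial stand-ins (a repetition outer code, a tiny block interleaver and a single-parity inner code), chosen only to show how the outer encoder, interleaver and inner encoder are chained.

def outer_encode(bits):
    """Stand-in outer code: repeat every bit twice."""
    return [b for b in bits for _ in range(2)]

def interleave(bits, rows=2):
    """Stand-in block interleaver: write row by row, read column by column."""
    cols = len(bits) // rows
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def inner_encode(bits):
    """Stand-in inner code: append a single overall parity bit."""
    return bits + [sum(bits) % 2]

def concatenated_encode(msg):
    """Outer encoder -> interleaver -> inner encoder, as in the block diagram."""
    return inner_encode(interleave(outer_encode(msg)))

print(concatenated_encode([1, 0, 1, 1]))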
6. Conclusion
We have reviewed the basic principles of channel coding, including linear block codes, convolutional codes and concatenated coding, and shown how they achieve such remarkable performance.
7. References
A. Goldsmith, Wireless Communications.
S. Haykin, Digital Communications.
B. Sklar, Digital Communications.