Coding For Wireless Channel
Presented by: Salah Zeyad Radwan Presented to: Dr. Mohab Mangoud
Outline
1. Introduction
2. Linear Block Codes
   2.1 Binary Linear Block Codes
   2.2 Generator Matrix
   2.3 Syndrome Testing & LUT
3. Convolutional Codes
   3.1 Mathematical Representation
   3.2 Tree Diagram
   3.3 Trellis Diagram
   3.4 Viterbi Algorithm
4. Interleaver
5. Concatenated Codes
6. Conclusion
7. References
1. Introduction
[Figure: Block diagram of a general communication system. Transmitter: information to be transmitted → source encoding → channel encoding → modulation → channel. Receiver: demodulation → channel decoding → source decoding → information received.]
Channel encoding: adding redundant symbols to the data so that errors can be corrected.
Modulation: conversion of symbols to a waveform for transmission.
Demodulation: conversion of the received waveform back to symbols.
Decoding: using the redundant symbols to detect and correct errors.
What is Coding?
Coding is the conversion of information into another form for some purpose.
Source coding: the purpose is lowering the redundancy in the information (e.g. ZIP, JPEG, MPEG-2).
Channel coding: the purpose is to defeat channel noise by adding controlled redundancy.
Channel coding dates from 1948, when Claude Shannon, working on the information transmission capacity of a communication channel, showed that the capacity depends on the bandwidth and the signal-to-noise ratio (SNR):

C = W log2(1 + S/N)

C   : capacity of the channel (bits/s)
W   : bandwidth of the channel (Hz)
S/N : signal-to-noise ratio
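The capacity formula can be evaluated directly. A minimal sketch (the 3 kHz bandwidth and 30 dB SNR figures are illustrative assumptions, not from the slides):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley capacity C = W log2(1 + S/N) in bits/s for an AWGN channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example: a 3 kHz channel at 30 dB SNR (S/N = 1000, a hypothetical operating point).
snr = 10 ** (30 / 10)          # convert dB to a linear power ratio
c = shannon_capacity(3000, snr)
print(round(c))                # ~29902 bits/s
```

No code, however clever, can signal reliably above this rate; channel coding is the attempt to approach it.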
Channel coding can be partitioned into two study areas:
1. Waveform coding: transforming waveforms into better waveforms, making the detection process less subject to errors.
2. Structured sequences: transforming data sequences into better sequences with structured redundancy, so that the redundant bits can be used to detect and correct errors.
In this study I am going to describe codes designed for AWGN channels, which requires a background in block and convolutional codes.
2. Linear Block Code
Linear block codes are an extension of single-bit parity-check codes for error detection, and are characterized by the (n, k) notation. A single-parity-check code uses one parity bit per block of data to indicate whether the number of 1s in the block is odd or even. A linear block code uses a larger number of parity bits to detect or correct more than one error.

Codeword (n bits) = [ b1, b2, ..., b(n-k) | m1, m2, ..., mk ]  — parity bits (n-k) followed by message bits (k)
The information bit stream is chopped into blocks of k bits. The channel encoder maps each k-bit data block to a larger n-bit codeword. The coded bits are modulated and sent over the channel, and the reverse procedure is done at the receiver.
Encoding process: uncoded data stream → k-symbol block → block encoder → n-symbol block → coded data stream.
Generator matrix
The mapping from message m to codeword c may be written in compact form using matrix notation:
m = [m0, m1, m2, ..., m(k-1)]     message vector, 1-by-k
b = [b0, b1, b2, ..., b(n-k-1)]   parity vector, 1-by-(n-k)
c = [c0, c1, c2, ..., c(n-1)]     code vector, 1-by-n
Note: all three vectors are row vectors.
The parity bits are generated from the message by

b = mP    (1)

where P is the k-by-(n-k) coefficient matrix

P = | p11  p12  ...  p1,n-k |
    | p21  p22  ...  p2,n-k |
    | ...                   |
    | pk1  pk2  ...  pk,n-k |
The codeword is the parity bits followed by the message bits:

c = [b m]    (2)

Substituting (1) into (2) we get

c = m [P Ik]    (3)

where Ik is the k-by-k identity matrix. The generator matrix is defined as

G = [P Ik]    (4)

From (4) in (3) we get

c = mG    (5)
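Equation (5) can be sketched for a hypothetical (7,4) single-error-correcting code (the particular P matrix below is an illustrative choice, not taken from the slides):

```python
# Hypothetical (7,4) code: P is the k-by-(n-k) coefficient matrix, here 4x3.
P = [[1, 1, 0],
     [0, 1, 1],
     [1, 1, 1],
     [1, 0, 1]]
k, r = 4, 3  # k message bits, r = n-k parity bits, so n = 7

def encode(m):
    """Codeword c = mG = [mP | m] (mod 2): parity bits followed by message bits."""
    b = [sum(m[i] * P[i][j] for i in range(k)) % 2 for j in range(r)]
    return b + list(m)

print(encode([1, 0, 1, 1]))  # -> [1, 0, 0, 1, 0, 1, 1]
```

Because G contains the identity block Ik, the message bits appear unchanged at the end of the codeword; the code is systematic.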
Syndrome decoding
The received vector can be represented as R = c + e, where e is the error pattern. The decoder recovers the code vector from the received vector using what we call the syndrome, which depends only upon the error pattern:

s = R HT

where H is the parity-check matrix of the code and HT is its transpose.
Then we tabulate the 2^(n-k) correctable error patterns e against their syndromes e HT to build a look-up table (LUT). The LUT helps us locate the bit position corresponding to the bit error in the codeword; we correct it by flipping that bit (if the received bit is 1 we change it to 0, and if it is 0 we change it to 1) to recover the correct codeword.
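The syndrome LUT can be sketched for a hypothetical (7,4) single-error-correcting code (the P matrix is an illustrative choice, not from the slides); with G = [P | Ik], the parity-check matrix is H = [I(n-k) | PT], so HT stacks the identity rows on top of P:

```python
# Hypothetical (7,4) code: with G = [P | Ik], H = [I(n-k) | P^T] and s = R H^T.
P = [[1, 1, 0],
     [0, 1, 1],
     [1, 1, 1],
     [1, 0, 1]]
k, r = 4, 3
n = k + r
HT = [[1, 0, 0], [0, 1, 0], [0, 0, 1]] + P  # H transpose: one row per code bit

def syndrome(R):
    return tuple(sum(R[i] * HT[i][j] for i in range(n)) % 2 for j in range(r))

# Look-up table: syndrome -> position of the single bit error.
LUT = {tuple(HT[i]): i for i in range(n)}

def decode(R):
    """Flip the bit the LUT points to; an all-zero syndrome means no detected error."""
    s = syndrome(R)
    if any(s):
        i = LUT[s]
        R = R[:i] + [1 - R[i]] + R[i + 1:]
    return R

c = [1, 0, 0, 1, 0, 1, 1]   # a valid codeword (parity bits first)
R = c[:]
R[4] ^= 1                   # introduce a single bit error
print(decode(R))            # -> [1, 0, 0, 1, 0, 1, 1], the error is corrected
```

Every valid codeword has syndrome zero, so the syndrome of R = c + e indeed depends only on the error pattern e, which is what makes the table work.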
3. Convolutional Codes
Convolutional codes are fundamentally different from block codes: it is not possible to separate the code into independent blocks. Instead, each code bit depends on a certain number of previous data bits. We can encode using a structure consisting of a shift register, a set of exclusive-OR (XOR) gates, and a multiplexer.
[Figure: rate-1/2 convolutional encoder. Each data bit enters a two-stage shift register (D1, D2); XOR gates form the two outputs x1 and x2, which a MUX interleaves into the code sequence. A table on the slide lists sample input bits with the corresponding output bit pairs.]
The previous convolutional encoder can be represented mathematically by its generator polynomials:

g(1)(D) = 1 + D^2
g(2)(D) = 1 + D + D^2

The message sequence (1001) can be represented as m(D) = 1 + D^3. Hence the output polynomials of paths 1 and 2 are

c(1)(D) = g(1)(D) m(D) = 1 + D^2 + D^3 + D^5
c(2)(D) = g(2)(D) m(D) = 1 + D + D^2 + D^3 + D^4 + D^5

By multiplexing the two output sequences we get the code sequence

c = (11, 01, 11, 11, 01, 11)

which is nonsystematic: the message bits do not appear unchanged in the code sequence.
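The shift-register view of this encoder can be sketched directly; D1 and D2 hold the two previous input bits, and flushing with two zero tail bits reproduces the six output pairs above:

```python
def conv_encode(bits, flush=True):
    """Rate-1/2 encoder with g1(D) = 1 + D^2 and g2(D) = 1 + D + D^2.
    Returns a list of (x1, x2) output pairs; flush=True appends two zero tail bits."""
    d1 = d2 = 0
    out = []
    for m in bits + ([0, 0] if flush else []):
        x1 = m ^ d2        # g1: current bit XOR bit from two steps back
        x2 = m ^ d1 ^ d2   # g2: current XOR previous XOR two steps back
        out.append((x1, x2))
        d1, d2 = m, d1     # shift the register
    return out

print(conv_encode([1, 0, 0, 1]))
# -> [(1, 1), (0, 1), (1, 1), (1, 1), (0, 1), (1, 1)], i.e. 11 01 11 11 01 11
```

Each output pair depends on the current bit and the register contents, which is exactly the memory that distinguishes convolutional codes from block codes.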
Systematic convolutional encoder
[Figure: the same shift-register structure, but output 1 is the data bit itself, g(1)(D) = 1, while output 2 forms the parity, g(2)(D) = 1 + D + D^2; the MUX combines data and parity into the code sequence.]
Structural properties of a convolutional encoder are portrayed in graphical form by using any one of:
1. Code tree
2. Trellis
3. State diagram
[Figure: the rate-1/2 encoder redrawn, with shift register D1, D2, outputs x1, x2, and MUX.]
Tree Diagram
[Figure: code tree for the encoder. Each branching level corresponds to one input bit (one branch direction for input 0, the other for input 1) and each branch is labeled with the two output bits; the highlighted path traces the input sequence 11001.]
Trellis for the convolutional encoder
We may collapse the code tree in the previous slides into a new form called a trellis.
[Figure: trellis diagram with four states a, b, c, d, whose binary descriptions are 00, 10, 01, 11; branches are labeled with the output bit pairs.]
[Figure: the same trellis with the path for input 1001 highlighted.]
For incoming data 1001 the generated code sequence becomes 11, 01, 11, 11.
Viterbi Algorithm
As an example, we decode the all-zero sequence when it is received as 0100100000 (two channel errors).
[Figures: step-by-step trellis diagrams. At each step the Viterbi algorithm extends every surviving path by one branch, adds the branch's Hamming distance from the received pair to the path metric, and at each state keeps only the survivor with the smallest metric. After the final step the all-zero path has the smallest metric, so the two errors are corrected.]
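The trellis steps shown in the figures can be sketched as a minimal hard-decision Viterbi decoder for the same encoder (g1 = 1 + D^2, g2 = 1 + D + D^2), using Hamming distance as the branch metric:

```python
def viterbi(received):
    """Hard-decision Viterbi decoder for the rate-1/2 encoder g1(D)=1+D^2, g2(D)=1+D+D^2.
    received: list of (x1, x2) bit pairs; returns the maximum-likelihood input bits."""
    INF = float("inf")
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]          # (d1, d2) register contents
    metric = {s: (0 if s == (0, 0) else INF) for s in states}
    path = {s: [] for s in states}

    def branch_output(d1, d2, m):                      # encoder output on a branch
        return (m ^ d2, m ^ d1 ^ d2)

    for r in received:
        new_metric = {s: INF for s in states}
        new_path = {s: [] for s in states}
        for (d1, d2) in states:
            cost = metric[(d1, d2)]
            if cost == INF:
                continue
            for m in (0, 1):
                nxt = (m, d1)                          # register shifts in the input
                out = branch_output(d1, d2, m)
                dist = (out[0] != r[0]) + (out[1] != r[1])  # Hamming branch metric
                if cost + dist < new_metric[nxt]:      # keep only the survivor
                    new_metric[nxt] = cost + dist
                    new_path[nxt] = path[(d1, d2)] + [m]
        metric, path = new_metric, new_path

    best = min(states, key=lambda s: metric[s])        # smallest final path metric
    return path[best]

pairs = [(0, 1), (0, 0), (1, 0), (0, 0), (0, 0)]       # received 01 00 10 00 00
print(viterbi(pairs))  # -> [0, 0, 0, 0, 0]: both channel errors are corrected
```

The all-zero survivor ends with metric 2 (the two flipped bits), while every competing path accumulates a larger metric, which is why the decoder recovers the transmitted all-zero sequence.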
State diagram of the convolutional code
In a portion of the central part of the trellis, the left nodes represent the four possible current states of the encoder and the right nodes represent the next states. We can coalesce the left and right nodes to get the state diagram.
[Figure: state diagram with states a, b, c, d (binary descriptions 00, 10, 01, 11) and transitions labeled with the output bit pairs.]
4. Interleaver
The simplest type of interleaver (sometimes known as a rectangular or block interleaver) writes the data into a matrix row by row and reads it out column by column. An interleaver is commonly denoted by the Greek letter π and its corresponding de-interleaver by π⁻¹; the original order can then be restored by the de-interleaver.
[Figure: a rectangular interleaver, shown step by step. The input sequence 1, 2, 3, ..., 20 is written row by row into a 4-by-5 matrix:

1   2   3   4   5
6   7   8   9   10
11  12  13  14  15
16  17  18  19  20

The interleaver (π) reads the matrix out column by column, producing the interleaved data 1, 6, 11, 16, 2, 7, 12, 17, 3, ... The de-interleaver (π⁻¹) writes the received data back in by columns and reads it out by rows, restoring the original order 1, 2, 3, ..., 20.]
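The row-in, column-out permutation above can be sketched as:

```python
def interleave(data, rows, cols):
    """Rectangular block interleaver: write row by row, read column by column."""
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(data, rows, cols):
    """Inverse permutation: write column by column, read back row by row."""
    out = [None] * (rows * cols)
    i = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = data[i]
            i += 1
    return out

seq = list(range(1, 21))                  # the sequence 1..20 from the slides
mixed = interleave(seq, rows=4, cols=5)
print(mixed[:9])                          # -> [1, 6, 11, 16, 2, 7, 12, 17, 3]
assert deinterleave(mixed, 4, 5) == seq   # the de-interleaver restores the order
```

A burst of adjacent channel errors in the interleaved stream is spread across different rows after de-interleaving, turning one long burst into several isolated errors that a short code can correct.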
5. Concatenated Codes
We have seen that the power of FEC codes increases with length k and approaches the Shannon bound only at very large k, but also that decoding complexity increases very rapidly with k. This suggests that it would be desirable to build a long, complex code out of much shorter component codes, which can be decoded much more easily.
The principle is to feed the output of one encoder (called the outer encoder) to the input of another encoder, and so on, as required. The final encoder before the channel is known as the inner encoder. The resulting composite code is clearly much more complex than any of the individual codes.
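The serial-concatenation principle can be sketched with two toy component codes (a (3,1) repetition outer code and a single-parity inner code — both hypothetical choices for illustration; the interleaver between the two stages is omitted for brevity):

```python
def repeat3(bits):
    """Toy (3,1) repetition code, playing the role of the outer encoder."""
    return [b for b in bits for _ in range(3)]

def add_parity(bits):
    """Toy inner encoder: append an even-parity bit to each 3-bit group."""
    out = []
    for i in range(0, len(bits), 3):
        blk = bits[i:i + 3]
        out += blk + [sum(blk) % 2]
    return out

def concatenate(bits):
    """Serial concatenation: data -> outer encoder -> inner encoder -> channel."""
    return add_parity(repeat3(bits))

print(concatenate([1, 0]))  # -> [1, 1, 1, 1, 0, 0, 0, 0]
```

The composite code is longer and stronger than either component, yet each stage can still be decoded on its own, which is exactly the complexity argument made above.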
[Figure: general block diagram for serial concatenated codes. Transmit side: outer encoder → interleaver → inner encoder → channel. Receive side: inner decoder → de-interleaver → outer decoder.]
6. Conclusion
We have reviewed the basic principles of channel coding, including linear block codes, convolutional codes and concatenated coding, and have shown how it is that they achieve such remarkable performance.
7. References
A. Goldsmith, Wireless Communications.
S. Haykin, Digital Communications.
B. Sklar, Digital Communications.