
UNIT 6

Linear Block Codes

 Matrix description of linear block codes,

 Error detection and error correction capabilities of linear block codes
 Cyclic codes: algebraic structure, encoding, syndrome calculation, decoding

Block Codes:


We will consider only binary data

Data is grouped into blocks of length k bits (dataword)

Each dataword is coded into blocks of length n bits (codeword), where in general n>k

This is known as an (n,k) block code

A vector notation is used for the datawords and codewords,
– Dataword d = (d1 d2….dk)
– Codeword c = (c1 c2……..cn)
• The redundancy introduced by the code is quantified by the code rate,
– Code rate = k/n
– i.e., the higher the redundancy, the lower the code rate
Hamming Distance:


Error control capability is determined by the Hamming distance

The Hamming distance between two codewords is equal to the number of differences
between them, e.g.,

10011011
11010010

have a Hamming distance = 3

• Alternatively, can compute by adding the codewords (mod 2),

10011011 + 11010010 = 01001001 (now count up the ones)

• The maximum number of detectable errors is

dmin - 1

• The maximum number of correctable errors is given by,

t = ⌊(dmin - 1)/2⌋

where dmin is the minimum Hamming distance between 2 codewords and ⌊x⌋ means
the largest integer not exceeding x
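These distance and weight calculations can be sketched in Python (a minimal sketch; the helper names are illustrative, not from the notes):

```python
# Hamming distance between two equal-length binary words, computed two ways:
# by counting differing positions directly, and via the mod-2 sum (XOR).
def hamming_distance(a, b):
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def mod2_sum(a, b):
    return [x ^ y for x, y in zip(a, b)]

a = [1, 0, 0, 1, 1, 0, 1, 1]   # 10011011
b = [1, 1, 0, 1, 0, 0, 1, 0]   # 11010010
print(hamming_distance(a, b))   # 3
print(sum(mod2_sum(a, b)))      # weight of the mod-2 sum is also 3

# Maximum number of correctable errors for a given minimum distance:
d_min = 3
t = (d_min - 1) // 2            # floor((dmin - 1)/2)
print(t)                        # 1
```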

Linear Block Codes:

• As seen from the second Parity Code example, it is possible to use a table to hold all
the codewords for a code and to look-up the appropriate codeword based on the
supplied dataword
• Alternatively, it is possible to create codewords by addition of other codewords. This
has the advantage that there is no longer any need to hold every possible
codeword in the table.
• If there are k data bits, all that is required is to hold k linearly independent codewords,
i.e., a set of k codewords none of which can be produced by linear combinations of 2
or more codewords in the set.
• The easiest way to find k linearly independent codewords is to choose those which
have ‘1’ in just one of the first k positions and ‘0’ in the other k-1 of the first k
positions.

• For example for a (7,4) code, only four codewords are required, e.g.,

1 0 0 0 1 1 0
0 1 0 0 1 0 1
0 0 1 0 0 1 1
0 0 0 1 1 1 1

• So, to obtain the codeword for dataword 1011, the first, third and fourth codewords in
the list are added together, giving 1011010
• This process will now be described in more detail
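The row-addition process above can be sketched in Python, using the four (7,4) codewords listed above as the rows of G (a sketch, not from the notes):

```python
# Encoding by mod-2 addition of generator rows: for each '1' in the dataword,
# XOR in the corresponding row of G.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(d, G):
    c = [0] * len(G[0])
    for di, row in zip(d, G):
        if di:
            c = [x ^ y for x, y in zip(c, row)]
    return c

print(encode([1, 0, 1, 1], G))   # [1, 0, 1, 1, 0, 1, 0], i.e. 1011010
```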

• An (n,k) block code has code vectors d = (d1 d2 … dk) and c = (c1 c2 … cn)

• The block coding process can be written as c = dG

where G is the Generator Matrix

    [ a11 a12 ... a1n ]   [ a1 ]
G = [ a21 a22 ... a2n ] = [ a2 ]
    [  .   .  ...  .  ]   [ .  ]
    [ ak1 ak2 ... akn ]   [ ak ]
• Thus,

c = Σi=1..k di ai
• The ai must be linearly independent, i.e.,

Since codewords are given by summations of the ai vectors, then to avoid 2 datawords
having the same codeword the ai vectors must be linearly independent.

• Sum (mod 2) of any 2 codewords is also a codeword, i.e.,


Since for datawords d1 and d2 we have;

d3 = d1 + d2

So,

c3 = Σi=1..k d3i ai = Σi=1..k (d1i + d2i) ai = Σi=1..k d1i ai + Σi=1..k d2i ai

c3 = c1 + c2
Error Correcting Power of LBC:

• The Hamming distance of a linear block code (LBC) is simply the minimum
Hamming weight (number of 1’s or equivalently the distance from the all 0
codeword) of the non-zero codewords
• Note d(c1,c2) = w(c1+ c2) as shown previously
• For an LBC, c1+ c2=c3
• So min (d(c1,c2)) = min (w(c1+ c2)) = min (w(c3))
• Therefore to find the min Hamming distance we just need to search among the 2^k
codewords to find the min Hamming weight – far simpler than doing a pairwise
check for all possible codewords.
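The minimum-weight search can be sketched as follows, assuming the (7,4) generator rows shown earlier (a sketch, not from the notes):

```python
from itertools import product

G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(d, G):
    c = [0] * len(G[0])
    for di, row in zip(d, G):
        if di:
            c = [x ^ y for x, y in zip(c, row)]
    return c

# For a linear code, dmin = minimum Hamming weight over the 2^k - 1
# non-zero codewords (the all-zero dataword is skipped).
k = len(G)
d_min = min(sum(encode(d, G)) for d in product([0, 1], repeat=k) if any(d))
print(d_min)   # 3
```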

Linear Block Codes – example 1:

• For example a (4,2) code, suppose;

1
G  0 1 1
0 1 0 1
a1 = [1011]

a2 = [0101]

• For d = [1 1], then;

1 0 1 1
 0 1 0 1
c
_ _ _ _
 1 1 1 0

Linear Block Codes – example 2:

• A (6,5) code with

    [ 1 0 0 0 0 1 ]
    [ 0 1 0 0 0 1 ]
G = [ 0 0 1 0 0 1 ]
    [ 0 0 0 1 0 1 ]
    [ 0 0 0 0 1 1 ]

is an even single parity check code

Systematic Codes:

• For a systematic block code the dataword appears unaltered in the codeword – usually
at the start
• The generator matrix has the structure,

1 0
.. 0 p11 p12 .. p1R 
 p 
G  0 1 .. 0 p21 p22 .. 2 R   I | P
.. .. .. .. .. .. .. .. 
 0 .. 1 p p .. 
0 k1 k2
p kR 
R=n-k

• is often referred to as parity bits


I is k*k identity matrix. Ensures data word appears as beginning of codeword P is k*R matrix.
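Building a systematic G = [I | P] from a chosen parity sub-matrix P can be sketched as follows; the matching parity check matrix H = [-PT | I] is introduced later in these notes, and over GF(2) -PT equals PT. P below is the (7,4) parity block used later (a sketch, not from the notes):

```python
# Construct G = [I | P] and H = [P^T | I] from a k x R parity matrix P.
P = [
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
    [1, 1, 1],
]
k, R = len(P), len(P[0])

I_k = [[1 if i == j else 0 for j in range(k)] for i in range(k)]
I_R = [[1 if i == j else 0 for j in range(R)] for i in range(R)]

G = [I_k[i] + P[i] for i in range(k)]                  # dataword bits first
Pt = [[P[i][j] for i in range(k)] for j in range(R)]   # transpose of P
H = [Pt[j] + I_R[j] for j in range(R)]

print(G[0])   # [1, 0, 0, 0, 0, 1, 1]
print(H[0])   # [0, 1, 1, 1, 1, 0, 0]
```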

Decoding Linear Codes:

• One possibility is a ROM look-up table


• In this case received codeword is used as an address
• Example – Even single parity check code;

Address   Data
000000    0
000001    1
000010    1
000011    0
………       .

• Data output is the error flag, i.e., 0 – codeword ok, 1 – error detected


• If no error, data word is first k bits of codeword
• For an error correcting code the ROM can also store data words
• Another possibility is algebraic decoding, i.e., the error flag is computed from the
received codeword (as in the case of simple parity codes)
• How can this method be extended to more complex error detection and correction
codes?
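For the even single parity check code, the algebraic error flag is just the mod-2 sum of the received bits, e.g. (a minimal sketch, not from the notes):

```python
# Algebraic error flag for the even single parity check code: no ROM look-up
# needed, the flag is the mod-2 sum (XOR) of all received bits.
def parity_flag(word):
    flag = 0
    for bit in word:
        flag ^= bit
    return flag   # 0 -> codeword ok, 1 -> error detected

print(parity_flag([0, 0, 0, 0, 0, 0]))   # 0 (valid)
print(parity_flag([0, 0, 0, 0, 0, 1]))   # 1 (error)
print(parity_flag([0, 0, 0, 0, 1, 1]))   # 0 (valid)
```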

Parity Check Matrix:

• A linear block code is a linear subspace S sub of the space S of all length n vectors
• Consider the subset S null of all length n vectors in space S that are orthogonal to all
length n vectors in S sub
• It can be shown that the dimensionality of S null is n - k, where n is the dimensionality
of S and k is the dimensionality of S sub

• It can also be shown that S null is a valid subspace of S and consequently S sub is also
the null space of S null
• S null can be represented by its basis vectors, collected as the rows of a matrix H,
which is the generator matrix for S null - of dimension n - k = R
• This matrix is called the parity check matrix of the code defined by G, where G is
obviously the generator matrix for S sub - of dimension k
• Note that the number of vectors in the basis defines the dimension of the subspace
• So the dimension of H is n-k (= R) and all vectors in the null space are orthogonal to
all the vectors of the code
• Since the rows of H, namely the vectors bi are members of the null space they are
orthogonal to any code vector
• So a vector y is a codeword only if yHT=0
• Note that a linear block code can be specified by either G or H

Parity Check Matrix:

    [ b11 b12 ... b1n ]   [ b1 ]
H = [ b21 b22 ... b2n ] = [ b2 ]
    [  .   .  ...  .  ]   [ .  ]
    [ bR1 bR2 ... bRn ]   [ bR ]

R = n - k

• So H is used to check if a codeword is valid,


• The rows of H, namely, bi, are chosen to be orthogonal to rows of G, namely ai
• Consequently the dot product of any valid codeword with any bi is zero

This is so since,

c = Σi=1..k di ai

and so,

bj · c = bj · Σi=1..k di ai = Σi=1..k di (ai · bj) = 0
• This means that a codeword is valid (but not necessarily correct) only if cHT = 0. To
ensure this it is required that the rows of H are independent and are orthogonal to the
rows of G
• That is the bi span the remaining R (= n - k) dimensions of the codespace

• For example consider a (3,2) code. In this case G has 2 rows, a1 and a2
• Consequently all valid codewords sit in the subspace (in this case a plane) spanned by
a1 and a2
• In this example the H matrix has only one row, namely b1. This vector is orthogonal
to the plane containing the rows of the G matrix, i.e., a1 and a2
• Any received codeword which is not in the plane containing a1 and a2 (i.e., an invalid
codeword) will thus have a component in the direction of b1, yielding a non-zero dot
product between itself and b1.

Error Syndrome:

• For error correcting codes we need a method to compute the required correction
• To do this we use the Error Syndrome, s, of a received codeword cr,

s = crHT

• If cr is corrupted by the addition of an error vector, e, then


cr = c + e

and

s = (c + e) HT = cHT + eHT

s = 0 + eHT

Syndrome depends only on the error

• That is, we can add the same error pattern to different code words and get the same
syndrome.
– There are 2^(n-k) syndromes but 2^n error patterns
– For example for a (3,2) code there are 2 syndromes and 8 error patterns
– Clearly no error correction possible in this case
– Another example. A (7,4) code has 8 syndromes and 128 error patterns.
– With 8 syndromes we can provide a different value to indicate single errors in
any of the 7 bit positions as well as the zero value to indicate no errors
• Now need to determine which error pattern caused the syndrome
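Syndrome calculation, and the fact that it depends only on the error pattern, can be sketched as follows, using the (7,4) parity check matrix given in these notes (a sketch, not from the notes themselves):

```python
# Syndrome computation s = cr * H^T (mod 2). A valid codeword gives the
# all-zero syndrome; an error pattern e gives e * H^T regardless of the
# codeword it corrupts.
H = [
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
]

def syndrome(cr, H):
    return [sum(h * c for h, c in zip(row, cr)) % 2 for row in H]

c1 = [1, 1, 0, 1, 0, 0, 1]           # a valid codeword
e  = [0, 0, 1, 0, 0, 0, 0]           # single error in bit position 3
r1 = [a ^ b for a, b in zip(c1, e)]  # received word

print(syndrome(c1, H))   # [0, 0, 0] -> valid
print(syndrome(r1, H))   # [1, 1, 0] -> column 3 of H, i.e. e * H^T
```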

• For systematic linear block codes, H is constructed as follows,

G = [ I | P ] and so H = [ -PT | I ]

where I is the k*k identity for G and the R*R identity for H

• Example, (7,4) code, dmin = 3

                [ 1 0 0 0 0 1 1 ]
G = [ I | P ] = [ 0 1 0 0 1 0 1 ]
                [ 0 0 1 0 1 1 0 ]
                [ 0 0 0 1 1 1 1 ]

                  [ 0 1 1 1 1 0 0 ]
H = [ -PT | I ] = [ 1 0 1 1 0 1 0 ]
                  [ 1 1 0 1 0 0 1 ]

Error Syndrome – Example:

• For a correct received codeword cr = [1 1 0 1 0 0 1]

In this case,

                           [ 0 1 1 ]
                           [ 1 0 1 ]
                           [ 1 1 0 ]
s = crHT = [1 1 0 1 0 0 1] [ 1 1 1 ] = [0 0 0]
                           [ 1 0 0 ]
                           [ 0 1 0 ]
                           [ 0 0 1 ]

Standard Array:

• The Standard Array is constructed as follows,

c1 (all zero)   c2      ……   cM      s0
e1              c2+e1   ……   cM+e1   s1
e2              c2+e2   ……   cM+e2   s2
e3              c2+e3   ……   cM+e3   s3
…               ……      ……   ……      …
eN              c2+eN   ……   cM+eN   sN

• The array has 2^k columns (i.e., equal to the number of valid codewords) and 2^R rows
(i.e., the number of syndromes)

Hamming Codes:

• We will consider a special class of SEC codes (i.e., Hamming distance = 3) where,
– Number of parity bits R = n – k and n = 2^R – 1
– Syndrome has R bits
– 0 value implies zero errors
– 2^R – 1 other syndrome values, i.e., one for each bit that might need to be
corrected
– This is achieved if each column of H is a different binary word – remember s
= eHT
• Systematic form of (7,4) Hamming code is,

                [ 1 0 0 0 0 1 1 ]
G = [ I | P ] = [ 0 1 0 0 1 0 1 ]
                [ 0 0 1 0 1 1 0 ]
                [ 0 0 0 1 1 1 1 ]

                  [ 0 1 1 1 1 0 0 ]
H = [ -PT | I ] = [ 1 0 1 1 0 1 0 ]
                  [ 1 1 0 1 0 0 1 ]

• The original form is non-systematic,

    [ 1 1 1 0 0 0 0 ]        [ 0 0 0 1 1 1 1 ]
G = [ 1 0 0 1 1 0 0 ]    H = [ 0 1 1 0 0 1 1 ]
    [ 0 1 0 1 0 1 0 ]        [ 1 0 1 0 1 0 1 ]
    [ 1 1 0 1 0 0 1 ]

• Compared with the systematic code, the column orders of both G and H are swapped
so that the columns of H are a binary count
• The column order is now 7, 6, 1, 5, 2, 3, 4, i.e., col. 1 in the non-systematic H is
col. 7 in the systematic H.
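With the non-systematic H, whose columns count 1 to 7 in binary, the syndrome of a single error read as a binary number gives the error position directly, e.g. (a sketch, not from the notes):

```python
# Non-systematic Hamming (7,4) parity check matrix: column j, read top-down
# as a binary number, equals j. So a single-error syndrome names the bad bit.
H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

def syndrome(r, H):
    return [sum(h * b for h, b in zip(row, r)) % 2 for row in H]

r = [0, 0, 0, 0, 0, 1, 0]         # all-zero codeword with an error in bit 6
s = syndrome(r, H)
position = s[0] * 4 + s[1] * 2 + s[2]   # interpret syndrome as binary number
print(position)                    # 6
```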
