(2of3) - ECE3151 (B) (31.3.24) - 1


Chapter 10

Error Detection
and
Correction

10.1
Note

Data can be corrupted during transmission.

Some applications require that errors be detected and corrected.
 Some applications can tolerate a small level of error. For
example, random errors in audio or video transmissions may
be tolerable, but when we transfer text, we expect a very high
level of accuracy.

10.2
10-1 INTRODUCTION

Let us first discuss some issues related, directly or indirectly, to error detection and correction.

Topics discussed in this section:


Types of Errors
Redundancy
Detection Versus Correction
Forward Error Correction Versus Retransmission
Coding
Modular Arithmetic

10.3
Note

In a single-bit error, only 1 bit in the data unit has changed.
 Single-bit errors are the least likely type of error in serial data
transmission. To understand why, imagine data sent at 1 Mbps.
This means that each bit lasts only 1/1,000,000 s, or 1µs. For a
single-bit error to occur, the noise must have a duration of only
1µs, which is very rare; noise normally lasts much longer than
this.

10.4
Figure 10.1 Single-bit error

American Standard Code for Information Interchange (ASCII)

ASCII is a 7-bit character code in which each 7-bit pattern represents a unique character.
2^7 = 128, so ASCII codes run from 0 to 127.
For example, code 2 is STX (start of text) and code 10 is LF (line feed).

10.5
Note

A burst error means that 2 or more bits in the data unit have changed.
 A burst error does not necessarily mean that the errors occur
in consecutive bits. The length of the burst is measured from
the first corrupted bit to the last corrupted bit. Some bits in
between may not have been corrupted.

10.6
Figure 10.2 Burst error of length 8

 The number of bits affected depends on the data rate and duration of noise. For example, if we are sending data at 1 kbps, a noise of 0.01 s can affect 10 bits (1000 × 0.01); if we are sending data at 1 Mbps, the same noise can affect 10,000 bits (1,000,000 × 0.01).

10.7
Detection Versus Correction
 The correction of errors is more difficult than detection.
 In error detection, we are looking only to see if any error has occurred. The answer is a simple yes or no. We are not even interested in the number of errors. A single-bit error is the same for us as a burst error.
 In error correction, we need to know the exact number of bits that are corrupted and, more importantly, their location in the message. The number of errors and the size of the message are important factors.
 If we need to correct one single error in an 8-bit data unit, we need to consider 8 possible error locations: C(8, 1) = 8.
 If we need to correct two errors in a data unit of the same size, we need to consider 28 possibilities: C(8, 2) = 28.
 You can imagine the receiver's difficulty in finding 10 errors in a data unit of 1000 bits: C(1000, 10) = 263,409,560,461,970,212,832,400 (≈ 2.63 × 10^23) possibilities.

10.8
Redundancy
To detect or correct errors, we need to
send extra (redundant) bits with data.

 These redundant bits are added by the sender and removed by the receiver. Their presence allows the receiver to detect or correct corrupted bits.

 Redundancy is achieved through various coding schemes. The sender adds redundant bits through a process that creates a relationship between the redundant bits and the actual data bits. The receiver checks the relationships between the two sets of bits to detect or correct the errors. Figure 10.3 shows the general idea of coding.

10.9
Figure 10.3 The structure of encoder and decoder

10.10
Modular Arithmetic
In modulo-N arithmetic, we use only the
integers in the range 0 to N −1, inclusive.
Inclusive - Including the last number, Exclusive - Excluding the last number
e.g., [1,10]=1 2 3…10 is inclusive and [1,10)=1 2 3…9 is exclusive
 For example, if the modulus is 12, we use only the integers 0 to 11, inclusive. An example of modulo arithmetic is our clock system. It is based on modulo-12 arithmetic, substituting the number 12 for 0.
 In a modulo-N system, if a number is greater than or equal to N, it is divided by N and the remainder is the result. If it is negative, as many Ns as needed are added to make it positive.

 Consider our clock system again. If we start a job at 11 A.M. and the job takes 5 h, we can say that the job is to be finished at 16:00 if we use military time, or we can say that it will be finished at 4 P.M. (the remainder of 16/12 is 4).
10.11
Modulo-2 Arithmetic: N = 2
If we get a negative result, we add enough multiples of N to make it positive.

Addition: 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, 1 + 1 = 0
Subtraction: 0 − 0 = 0, 0 − 1 = 1, 1 − 0 = 1, 1 − 1 = 0
(e.g., 0 − 1 = −1, and −1 + 2 = 1)
Figure 10.4 XORing of two single bits or two words

10.12
10-2 BLOCK CODING

In block coding, we divide our message into blocks, each of k bits, called datawords. We add r redundant bits to each block to make the length n = k + r. The resulting n-bit blocks are called codewords.

Topics discussed in this section:


Error Detection
Error Correction
Hamming Distance
Minimum Hamming Distance

10.13
Figure 10.5 Datawords and codewords in block coding

Since n > k, the number of possible codewords is larger than the number of possible datawords. The block coding process is one-to-one; the same dataword is always encoded as the same codeword. This means that we have 2^n − 2^k codewords that are not used. We call these codewords invalid or illegal.

10.14
Error Detection
How can errors be detected by using block coding?

 If the following two conditions are met, the receiver can detect a
change in the original codeword.
1. The receiver has (or can find) a list of valid codewords.
2. The original codeword has changed to an invalid one.

The sender creates codewords out of datawords by using a generator that applies the rules and procedures of encoding.
 Each codeword sent to the receiver may change during
transmission. If the received codeword is the same as one of the
valid codewords, the word is accepted; the corresponding dataword
is extracted for use.
 If the received codeword is not valid, it is discarded. However, if
the codeword is corrupted during transmission but the received word
still matches a valid codeword, the error remains undetected.
 This type of coding can detect only single errors. Two or more
errors may remain undetected.
10.15
Figure 10.6 Process of error detection in block coding

10.16
Example 10.2
Table 10.1 A code for error detection (Example 10.2)

Let us assume that k = 2 and n = 3. Table 10.1 shows the list of


datawords and codewords. Later, we will see how to derive a
codeword from a dataword.

Assume the sender encodes the dataword 01 as 011 and sends it to the receiver. Consider the following cases:

1. The receiver receives 011 (sent 011 → received 011). It is a valid codeword. The receiver extracts the dataword 01 from it.

10.17
Example 10.2 (continued)

2. The codeword is corrupted during transmission, and 111 is received (sent 011 → received 111). This is not a valid codeword and is discarded.

3. The codeword is corrupted during transmission, and 000 is received (sent 011 → received 000). This is a valid codeword. The receiver incorrectly extracts the dataword 00. Two corrupted bits have made the error undetectable.

10.18
Note

An error-detecting code can detect only the types of errors for which it is designed; other types of errors may remain undetected.

10.19
Error Correction
As we said before, error correction is much more difficult than
error detection.

 In error detection, the receiver needs to know only that the received codeword is invalid; in error correction the receiver needs to find (or guess) the original codeword sent.

 We can say that we need more redundant bits for error correction than for error detection.

 Figure 10.7 shows the role of block coding in error correction. The idea is the same as error detection but the checker functions are much more complex.

10.20
Figure 10.7 Structure of encoder and decoder in error correction

10.21
Example 10.3
Let us add more redundant bits to Example 10.2 to see if the
receiver can correct an error without knowing what was actually
sent. We add 3 redundant bits to the 2-bit dataword to make 5-bit
codewords. Table 10.2 shows the datawords and codewords.
Assume the dataword is 01. The sender creates the codeword
01011. The codeword is corrupted during transmission, and 01001
is received. First, the receiver finds that the received codeword is
not in the table. This means an error has occurred. The receiver,
assuming that there is only 1 bit corrupted, uses the following
strategy to guess the correct dataword.
Table 10.2 A code for error correction (Example 10.3)

10.22
Example 10.3 (continued)
1. Comparing the received codeword with the first codeword in
the table (01001 versus 00000), the receiver decides that the
first codeword is not the one that was sent because there are
two different bits.
2. By the same reasoning, the original codeword cannot be the
third or fourth one in the table.
3. The original codeword must be the second one in the table
because this is the only one that differs from the received
codeword by 1 bit. The receiver replaces 01001 with 01011 and
consults the table to find the dataword 01.

10.23
Hamming Distance

The Hamming distance between two words is the number of differences between corresponding bits.

One of the central concepts in coding for error control is the idea of the Hamming distance. We show the Hamming distance between two words x and y as d(x, y). The Hamming distance can easily be found if we apply the XOR operation on the two words and count the number of 1s in the result.
 Note that the Hamming distance between two different words is a value greater than zero.
10.24
Example 10.4

Let us find the Hamming distance between two pairs of words.

1. The Hamming distance d(000, 011) is 2 because 000 ⊕ 011 = 011 (two 1s).

2. The Hamming distance d(10101, 11110) is 3 because 10101 ⊕ 11110 = 01011 (three 1s).
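The XOR-and-count method described above is easy to sketch in code. This is a small illustration, not from the slides; the function name is ours:

```python
def hamming_distance(x: str, y: str) -> int:
    """XOR corresponding bits and count the 1s in the result."""
    assert len(x) == len(y), "Hamming distance needs equal-length words"
    return sum(a != b for a, b in zip(x, y))

print(hamming_distance("000", "011"))      # 2
print(hamming_distance("10101", "11110"))  # 3
```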

10.25
Minimum Hamming Distance
The minimum Hamming distance is the
smallest Hamming distance between
all possible pairs in a set of words.
Although the concept of the Hamming distance is the
central point in dealing with error detection and correction
codes, the measurement that is used for designing a code
is the minimum Hamming distance.
We use dmin to define the minimum Hamming distance in
a coding scheme.
 To find this value, we find the Hamming distances between all pairs of words and select the smallest one.

10.26
Example 10.5

Find the minimum Hamming distance of the coding scheme in Table 10.1.

Solution
We first find all Hamming distances (codewords 000, 011, 101, 110):
d(000, 011) = 2, d(000, 101) = 2, d(000, 110) = 2,
d(011, 101) = 2, d(011, 110) = 2, d(101, 110) = 2.

The dmin in this case is 2.

10.27
Example 10.6

Find the minimum Hamming distance of the coding scheme in Table 10.2.

Solution
We first find all the Hamming distances (codewords 00000, 01011, 10101, 11110):
d(00000, 01011) = 3, d(00000, 10101) = 3, d(00000, 11110) = 4,
d(01011, 10101) = 4, d(01011, 11110) = 3, d(10101, 11110) = 3.

The dmin in this case is 3.
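The exhaustive pairwise search used in Examples 10.5 and 10.6 can be sketched as follows. This is an illustration (function names are ours); the codeword lists are those of Tables 10.1 and 10.2:

```python
from itertools import combinations

def hamming_distance(x, y):
    """Number of bit positions in which x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def d_min(code):
    """Smallest Hamming distance over all pairs of codewords."""
    return min(hamming_distance(x, y) for x, y in combinations(code, 2))

print(d_min(["000", "011", "101", "110"]))          # 2 (Table 10.1)
print(d_min(["00000", "01011", "10101", "11110"]))  # 3 (Table 10.2)
```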

10.28
Any coding scheme needs to have at least three
parameters: the codeword size n, the dataword size k, and
the minimum Hamming distance dmin.
 A coding scheme C is written as C(n, k), with a separate expression for dmin. For example, the code in Table 10.2 is C(5, 2) with dmin = 3, and the code in Table 10.1 is C(3, 2) with dmin = 2.

When a codeword is corrupted during transmission, the Hamming distance between the sent and received codewords is the number of bits affected by the error. For example, if the codeword 00000 is sent and 01101 is received, 3 bits are in error and the Hamming distance between the two is d(00000, 01101) = 3.

10.29
Note

To guarantee the detection of up to s errors in all cases, the minimum Hamming distance in a block code must be dmin = s + 1.

10.30
Example 10.7

The minimum Hamming distance for our first code scheme (Table 10.1) is 2. This code guarantees detection of only a single error. For example, if the third codeword (101) is sent and one error occurs, the received codeword does not match any valid codeword. If two errors occur, however, the received codeword may match a valid codeword and the errors are not detected.

dmin = s + 1 = 2, so s = 1.
One error: 101 → 111 (invalid, detected).
Two errors: 101 → 000 (valid, undetected).
Only a single error is guaranteed to be detected.
10.31
Example 10.8

Our second block code scheme (Table 10.2) has dmin = 3. This code
can detect up to two errors. Again, we see that when any of the
valid codewords is sent, two errors create a codeword which is not
in the table of valid codewords. The receiver cannot be fooled.
However, some combinations of three errors change a valid
codeword to another valid codeword. The receiver accepts the
received codeword and the errors are undetected.

dmin = s + 1 = 3, so s = 2.
One error: 01011 → 01010 (invalid, detected).
Two errors: 01011 → 01000 (invalid, detected).
Three errors: 01011 → 00000 (valid, undetected).
Only single and double errors are guaranteed to be detected.
10.32
Minimum Distance for Error Correction

Note

To guarantee correction of up to t errors


in all cases, the minimum Hamming
distance in a block code
must be dmin = 2t + 1.

10.33
Example 10.9

A code scheme has a Hamming distance dmin = 4. What is the error detection and correction capability of this scheme?

dmin = s + 1 = 4, so s = 3;  dmin = 2t + 1 = 4, so t = 1.5, i.e., t = 1 (t must be an integer).
Solution
This code guarantees the detection of up to three errors
(s = 3), but it can correct up to one error. In other words,
if this code is used for error correction, part of its capability is
wasted. Error correction codes need to have an odd minimum
distance (3, 5, 7, . . . ).

10.34
10-3 LINEAR BLOCK CODES

Almost all block codes used today belong to a subset called linear block codes. A linear block code is a code in which the exclusive OR (addition modulo-2) of two valid codewords creates another valid codeword.

Topics discussed in this section:


Minimum Distance for Linear Block Codes
Some Linear Block Codes

10.35
Note

In a linear block code, the exclusive OR (XOR) of any two valid codewords creates another valid codeword.

10.36
Example 10.10
Let us see if the two codes we defined in Table 10.1 and Table 10.2 belong to the
class of linear block codes.

1. The scheme in Table 10.1 is a linear block code because the result of XORing any codeword with any other codeword is a valid codeword. For example, XORing the second and third codewords creates the fourth one: 011 ⊕ 101 = 110.

2. The scheme in Table 10.2 is also a linear block code. We can create all four codewords by XORing two other codewords. For example, 01011 ⊕ 10101 = 11110.
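The linearity property can be verified mechanically by XORing every pair of codewords and checking that the result stays inside the code. This is an illustrative sketch; the function names are ours:

```python
from itertools import combinations

def xor_words(x, y):
    """Bitwise XOR of two equal-length binary strings."""
    return "".join(str(int(a) ^ int(b)) for a, b in zip(x, y))

def is_linear(code):
    """A block code is linear if XORing any two valid codewords
    yields another valid codeword."""
    return all(xor_words(x, y) in code for x, y in combinations(code, 2))

print(is_linear({"000", "011", "101", "110"}))          # True (Table 10.1)
print(is_linear({"00000", "01011", "10101", "11110"}))  # True (Table 10.2)
```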
10.37
Error-detecting code: Simple Parity-Check Code
Perhaps the most familiar error-detecting code is the simple parity-check code.
 In this code, a k-bit dataword is changed to an n-bit codeword where n = k + 1.
The extra bit, called the parity bit, is selected to make the total number of 1s in
the codeword even.
 Although some implementations specify an odd number of 1s, we discuss the
even case.

Note

A simple parity-check code is a single-bit error-detecting code in which n = k + 1 with dmin = 2.

10.38
Table 10.3 Simple parity-check code C(5, 4) with dmin=2

The fifth bit in each codeword is chosen so that the total number of 1s is even.

The code in Table 10.1 is also a simple parity-check code: C(3, 2) with dmin = 2.

10.39
Figure 10.10 Encoder and decoder for simple parity-check code

r0 = a3 ⊕ a2 ⊕ a1 ⊕ a0 (generator)        s0 = b3 ⊕ b2 ⊕ b1 ⊕ b0 ⊕ q0 (checker)

The syndrome s0 is 0 when the number of 1s in the received codeword is even; otherwise, it is 1.
10.40

The syndrome is passed to the decision logic analyzer.

If the syndrome is 0, there is no error in the received codeword; the data portion of the received codeword is accepted as the dataword.

 If the syndrome is 1, the data portion of the received codeword is discarded. The dataword is not created.
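The generator and checker described above amount to a few lines of code. This is a sketch under the even-parity convention used in the slides; the function names are ours:

```python
def parity_encode(dataword: str) -> str:
    """Append the parity bit r0 so the total number of 1s is even."""
    r0 = dataword.count("1") % 2
    return dataword + str(r0)

def parity_decode(codeword: str):
    """Syndrome 0: accept the data portion. Syndrome 1: discard."""
    syndrome = codeword.count("1") % 2
    return codeword[:-1] if syndrome == 0 else None

print(parity_encode("1011"))   # '10111'
print(parity_decode("10111"))  # '1011'  (no error)
print(parity_decode("10011"))  # None    (single-bit error detected)
print(parity_decode("00110"))  # '0011'  (two errors slip through)
```

The last call mirrors case 4 of Example 10.12: an even number of errors leaves the parity even, so the corrupted word is wrongly accepted.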

10.41
Example 10.12
Dataword a3a2a1a0 = 1011; parity bit r0 = 1 ⊕ 0 ⊕ 1 ⊕ 1 = 1; codeword a3a2a1a0r0 = 10111.

Let us look at some transmission scenarios. Assume the sender sends the dataword 1011. The codeword created from this dataword is 10111, which is sent to the receiver. We examine five cases:

1. No error occurs; the received codeword is 10111. The syndrome is 0 (s0 = 1 ⊕ 0 ⊕ 1 ⊕ 1 ⊕ 1 = 0). The dataword 1011 is created.
2. One single-bit error changes a1. The received codeword is 10011. The syndrome is 1 (s0 = 1 ⊕ 0 ⊕ 0 ⊕ 1 ⊕ 1 = 1). No dataword is created.
3. One single-bit error changes r0. The received codeword is 10110. The syndrome is 1 (s0 = 1 ⊕ 0 ⊕ 1 ⊕ 1 ⊕ 0 = 1). No dataword is created.
10.42
Example 10.12 (continued)
4. An error changes r0 and a second error changes a3 .
The received codeword is 00110. The syndrome is 0.
The dataword 0011 is created at the receiver. Note that
here the dataword is wrongly created due to the
syndrome value.
5. Three bits—a3, a2, and a1—are changed by errors.
The received codeword is 01011. The syndrome is 1.
The dataword is not created. This shows that the simple
parity check, guaranteed to detect one single error, can
also find any odd number of errors.
10.43
Note

A simple parity-check code can detect an odd number of errors.

10.44
Figure 10.11 Two-dimensional parity-check code
 A better approach is the two-dimensional parity check. In this method,
the dataword is organized in a table (rows and columns).

 In Figure 10.11, the data to be sent, five 7-bit bytes, are put in separate rows. For each row and each column, 1 parity-check bit is calculated.

 The whole table is sent to the receiver, which finds the syndrome
for each row and each column.

10.45
Figure 10.11 Two-dimensional parity-check code

 The two-dimensional parity check can detect up to three errors that occur anywhere in the table (arrows point to the locations of the created nonzero syndromes).

10.46
Figure 10.11 Two-dimensional parity-check code

 Errors affecting 4 bits may not be detected.

10.47
Richard Hamming, the inventor of Hamming
codes, worked at Bell Labs in the late 1940s
on the Bell Model V computer, an
electromechanical relay-based machine.
During weekdays, when errors in the relays
were detected, the machine would stop and
flash lights so that the operators could
correct the problem. During after-hours
periods and on weekends, when there were
no operators, the machine simply moved on
to the next job. In a taped interview,
Hamming said, "And so I said, 'Damn it, if the
machine can detect an error, why can't it
locate the position of the error and correct
it?'". Over the next few years, he worked on
the problem of error-correction, developing
an increasingly powerful array of algorithms.
In 1950, he published what is now known as
Hamming code.

Richard Wesley Hamming (February 11, 1915 – January 7, 1998) was an American mathematician whose work had many implications for computer engineering and telecommunications.

R. W. Hamming

[Ref.] https://en.wikipedia.org/wiki/Hamming_code
10.48
Error-correcting code: Hamming codes

Note
All Hamming codes discussed in this book have dmin = 3, which
means that they can detect up to two errors or correct one single
error.
 We need to choose an integer m ≥ 3.
 The values of n and k are then calculated from m as n = 2^m − 1 and k = n − m.
 The number of check bits or parity bits is r = m.
 For example, if m = 3, then n = 7 and k = 4. This is a Hamming code C(7, 4) with dmin = 3.
 Table 10.4 shows the datawords and codewords for this code.
10.49
Table 10.4 Hamming code C(7, 4)

 A copy of a 4-bit dataword is fed into the generator that creates three parity checks r0, r1, and r2, as shown below:
 r0 = a2 ⊕ a1 ⊕ a0    r1 = a3 ⊕ a2 ⊕ a1    r2 = a1 ⊕ a0 ⊕ a3
 The total number of 1s in each 4-bit combination (3 dataword bits and 1 parity bit) must be even.
 Any three equations that involve 3 of the 4 bits in the dataword and create independent equations (a combination of two cannot create the third) are valid.
10.50
a3 a2 a1 a0 r2 r1 r0 The total number of 1s in each 4-bit
combination must be even.
0 1 0 1 1 1 0
Independent equations are
r0 = a 2 ⊕ a 1 ⊕ a 0 = 1 ⊕ 0 ⊕ 1 = 0
valid, i.e., a combination of two
r1 = a 3 ⊕ a 2 ⊕ a 1 = 0 ⊕ 1 ⊕ 0 = 1 cannot create the third
r2 = a 1 ⊕ a 0 ⊕ a 3 = 0 ⊕ 1 ⊕ 0 = 1

r0 ⊕r1 = a2 ⊕ a1 ⊕ a0 ⊕ a3 ⊕ a2 ⊕ a1=a0 ⊕ a3
r2= a1 ⊕ a0 ⊕ a3 So, r0 ⊕ r1≠ r2
10.51
Figure 10.12 The structure of the encoder and decoder for a Hamming code

s2 = b1 ⊕ b0 ⊕ b3 ⊕ q2
s1 = b3 ⊕ b2 ⊕ b1 ⊕ q1
s0 = b2 ⊕ b1 ⊕ b0 ⊕ q0

r2 = a1 ⊕ a0 ⊕ a3
r1 = a3 ⊕ a2 ⊕ a1
r0 = a2 ⊕ a1 ⊕ a0

 The equations used by the checker are the same as those used by the generator, with the parity-check bits added to the right-hand side of each equation.
10.52
Table 10.5 Logical decision made by the correction logic analyzer

s2 = b1 ⊕ b0 ⊕ b3 ⊕ q2 (1)
s1 = b3 ⊕ b2 ⊕ b1 ⊕ q1 (2)
s0 = b2 ⊕ b1 ⊕ b0 ⊕ q0 (3)

A syndrome bit of 0 means no error. Now look closely at each of the equations (1)-(3) and carefully watch the following scenario:
q0 bit is in error => s0 is affected
q1 bit is in error => s1 is affected
q2 bit is in error => s2 is affected
b3 bit is in error => both s2 and s1 are affected
b2 bit is in error => both s1 and s0 are affected
b1 bit is in error => s2, s1, and s0 are all affected
b0 bit is in error => both s2 and s0 are affected

Error bit | s2 s1 s0
q0 | 0 0 1
q1 | 0 1 0
q2 | 1 0 0
b3 | 1 1 0
b2 | 0 1 1
b1 | 1 1 1
b0 | 1 0 1
10.53
Table 10.5 Logical decision made by the correction logic analyzer

 The 3-bit syndrome creates eight different bit patterns (000 to 111) that can represent eight different conditions.
 These conditions define a lack of error or an error in 1 of the 7 bits of the received codeword, as shown in Table 10.5.
 The correction logic is not concerned with the four cases shaded in Table 10.5 because there is either no error or an error in a parity bit. In the other four cases, 1 of the bits must be flipped (changed from 0 to 1 or 1 to 0) to find the correct dataword.
The syndrome values in Table 10.5 are based on the syndrome bit calculations.
 For example, if q0 is in error, s0 is the only bit affected; the syndrome, therefore, is 001.
 If b2 is in error, s0 and s1 are the bits affected; the syndrome, therefore, is 011.
 Similarly, if b1 is in error, all 3 syndrome bits are affected and the syndrome is 111.
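The generator equations and the Table 10.5 syndrome-to-bit mapping combine into a short C(7, 4) encoder and corrector. This is a sketch following the slides' bit order a3 a2 a1 a0 r2 r1 r0 (b3 b2 b1 b0 q2 q1 q0 at the receiver); the function names are ours:

```python
def hamming_encode(d: str) -> str:
    """Dataword a3 a2 a1 a0 -> codeword a3 a2 a1 a0 r2 r1 r0."""
    a3, a2, a1, a0 = (int(b) for b in d)
    r0 = a2 ^ a1 ^ a0
    r1 = a3 ^ a2 ^ a1
    r2 = a1 ^ a0 ^ a3
    return d + f"{r2}{r1}{r0}"

def hamming_correct(c: str) -> str:
    """Compute the syndrome, flip the bit named by Table 10.5,
    and return the (corrected) dataword b3 b2 b1 b0."""
    b3, b2, b1, b0, q2, q1, q0 = (int(x) for x in c)
    s2 = b1 ^ b0 ^ b3 ^ q2
    s1 = b3 ^ b2 ^ b1 ^ q1
    s0 = b2 ^ b1 ^ b0 ^ q0
    # Syndrome -> index of the erroneous bit in b3 b2 b1 b0 q2 q1 q0.
    flip = {"001": 6, "010": 5, "100": 4, "110": 0,
            "011": 1, "111": 2, "101": 3}.get(f"{s2}{s1}{s0}")
    bits = list(c)
    if flip is not None:
        bits[flip] = "1" if bits[flip] == "0" else "0"
    return "".join(bits[:4])

print(hamming_encode("0100"))      # '0100011' (Example 10.13, case 1)
print(hamming_correct("0011001"))  # '0111'    (case 2: b2 flipped back)
```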
10.54
Example 10.13
Codeword layout: b3 b2 b1 b0 q2 q1 q0. For the dataword 0100, the codeword is 0 1 0 0 0 1 1.
Let us trace the path of three datawords from the sender to the
destination:
1.The dataword 0100 becomes the codeword 0100011. The
codeword 0100011 is received. The syndrome is 000, the final
dataword is 0100.
(s2 = b1 ⊕ b0 ⊕ b3 ⊕ q2 = 0 ⊕ 0 ⊕ 0 ⊕ 0 = 0; s1 = b3 ⊕ b2 ⊕ b1 ⊕ q1 = 0 ⊕ 1 ⊕ 0 ⊕ 1 = 0; s0 = b2 ⊕ b1 ⊕ b0 ⊕ q0 = 1 ⊕ 0 ⊕ 0 ⊕ 1 = 0)

2. The dataword 0111 becomes the codeword 0111001. The codeword 0011001 is received. The syndrome is 011. After flipping b2 (changing the 0 to 1), the final dataword is 0111.
3. The dataword 1101 becomes the codeword 1101000. The codeword 0001000 is received. The syndrome is 101. After flipping b0, we get 0000, the wrong dataword. This shows that our code cannot correct two errors. Recall, dmin = 3 = 2t + 1, so t = 1.
10.55
Example 10.14
Note: the datawords in Table 10.4 are 4 bits long.
We need a dataword of at least 7 bits. Calculate the values of k and n that satisfy this requirement.

Solution (n = 2^m − 1, where m = r is the number of parity-check bits)
We need to make k = n − m greater than or equal to 7, or 2^m − 1 − m ≥ 7.
1. If we set m = 3, the result is 2^3 − 1 − 3 = 4, which is not acceptable. Here, n = 2^3 − 1 = 7 and k = 7 − 3 = 4.
2. If we set m = 4, then 2^4 − 1 − 4 = 11, which satisfies the condition and hence is acceptable. Here, n = 2^4 − 1 = 15 and k = 15 − 4 = 11: C(15, 11).
There are methods to make the dataword a specific size, but the
discussion and implementation are beyond the scope of this book.
10.56
Figure 10.13 Burst error correction using Hamming code

10.57
Burst error correction using Hamming code
 The key is to split a burst error between several codewords,
one error for each codeword.
 In data communications, we normally send a packet or a frame
of data.
 To make the Hamming code respond to a burst error of size N,
we need to make N codewords out of our frame.
 Then, instead of sending one codeword at a time, we arrange
the codewords in a table and send the bits in the table a column
at a time.
 In Figure 10.13, the bits are sent column by column (from the
left). In each column, the bits are sent from the bottom to the top.
 In this way, a frame is made out of the four codewords and
sent to the receiver.
 In Figure 10.13, when a burst error of size 4 corrupts the
frame, only 1 bit from each codeword is corrupted. The corrupted
bit in each codeword can then easily be corrected at the receiver.
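The column-by-column transmission described above can be sketched as an interleaver. This is an illustration (names are ours); per Figure 10.13, each column is sent bottom to top:

```python
def interleave(codewords):
    """Arrange codewords in rows, then send column by column,
    bottom to top within each column (as in Figure 10.13)."""
    rows = [list(cw) for cw in codewords]
    frame = []
    for col in range(len(rows[0])):
        for row in reversed(rows):
            frame.append(row[col])
    return "".join(frame)

def deinterleave(frame, n_codewords, width):
    """Undo interleave(): rebuild the original codewords."""
    rows = [[""] * width for _ in range(n_codewords)]
    bits = iter(frame)
    for col in range(width):
        for row in reversed(rows):
            row[col] = next(bits)
    return ["".join(r) for r in rows]

cws = ["0100011", "0111001", "1101000", "1011100"]
frame = interleave(cws)
assert deinterleave(frame, 4, 7) == cws
# A burst of 4 consecutive bit errors in `frame` hits each codeword
# at most once, so a single-error-correcting code recovers all of them.
```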

10.58
10-4 CYCLIC CODES

Cyclic codes are special linear block codes with one extra property. In a cyclic code, if a codeword is cyclically shifted (rotated), the result is another codeword.

If 00010111 is a valid codeword, applying a right circular shift gives the string 10001011. If the code is cyclic, then 10001011 is again a valid codeword.
[Ref.] https://en.wikipedia.org/wiki/Cyclic_code

10.59
Table 10.6 A CRC code with C(7, 4)

Cyclic Redundancy Check (CRC)

The cyclic redundancy check (CRC) is a category of cyclic codes that is used in networks such as LANs and WANs. This is an error-detecting code. Table 10.6 shows an example of a CRC code. We can see both the linear and cyclic properties of this code.
10.60
William Wesley Peterson (April
22, 1924 – May 6, 2009) was an
American mathematician and
computer scientist. He was best
known for designing the cyclic
redundancy check (CRC), for
which research he was awarded
the Japan Prize in 1999.
https://en.wikipedia.org/wiki/W._Wesley_Peterson

Japan prize: ≈ 4,00,00,000.00 BDT


Nobel prize: ≈ 10,00,00,000.00 BDT

10.61
Linear property checked: 1011000 ⊕ 1100010 = 0111010 (datawords 1011 ⊕ 1100 = 0111, all in Table 10.6).

Circular property checked: a right circular shift of the codeword 1 0 1 1 0 0 0 gives 0 1 0 1 1 0 0, the codeword for dataword 0101.
10.62
CRC encoder and decoder
 In the encoder, the dataword has k bits (4 here);
the codeword has n bits (7 here).
 The size of the dataword is augmented by adding
n - k (3 here) 0s to the right-hand side of the word.
 The n-bit result is fed into the generator. The generator uses a divisor of size n − k + 1 (4 here), predefined and agreed upon.
 The generator divides the augmented dataword
by the divisor (modulo-2 division).
The quotient of the division is discarded; the
remainder (r2r1r0) is appended to the dataword to
create the codeword.
10.63
CRC encoder and decoder
 The decoder receives the possibly corrupted
codeword.
 A copy of all n bits is fed to the checker which is
a replica of the generator.
 The remainder produced by the checker is a
syndrome of n - k (3 here) bits, which is fed to the
decision logic analyzer. The analyzer has a simple
function.
 If the syndrome bits are all 0s, the 4 leftmost
bits of the codeword are accepted as the dataword
(interpreted as no error); otherwise, the 4 bits
are discarded (error).
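The modulo-2 division performed by the generator and checker can be sketched as follows. This is an illustration (strings of '0'/'1' stand in for bit streams, and the function names are ours); the divisor 1011 and dataword 1001 match the example in Figures 10.15 and 10.22:

```python
def mod2_remainder(dividend: str, divisor: str) -> str:
    """Modulo-2 (XOR) long division; returns the (n-k)-bit remainder."""
    bits = list(dividend)
    n = len(divisor)
    for i in range(len(bits) - n + 1):
        if bits[i] == "1":                  # divide only when the
            for j in range(n):              # leading bit is 1
                bits[i + j] = str(int(bits[i + j]) ^ int(divisor[j]))
    return "".join(bits[-(n - 1):])

def crc_encode(dataword: str, divisor: str) -> str:
    """Augment with n-k zeros, divide, append the remainder."""
    padded = dataword + "0" * (len(divisor) - 1)
    return dataword + mod2_remainder(padded, divisor)

def crc_check(codeword: str, divisor: str) -> bool:
    """A syndrome of all 0s means the codeword is accepted."""
    return set(mod2_remainder(codeword, divisor)) == {"0"}

print(crc_encode("1001", "1011"))    # '1001110'
print(crc_check("1001110", "1011"))  # True
print(crc_check("1001111", "1011"))  # False (corrupted)
```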

10.64
Figure 10.14 CRC encoder and decoder

k = 4, n = 7
n − k = 3
divisor size: n − k + 1 = 4

10.65
Figure 10.15 Division in CRC encoder

 The encoder takes the dataword and augments it with n − k 0s. It then divides the augmented dataword by the divisor, as shown in Figure 10.15.

10.66
Figure 10.16 Division in the CRC decoder for two cases

10.67
Figure 10.21 A polynomial to represent a binary word

10.68
Figure 10.22 CRC division using polynomials

Dataword 1001 = x^3 + 0x^2 + 0x + 1 = x^3 + 1
Divisor 1011 = x^3 + 0x^2 + 1x + 1 = x^3 + x + 1
Augmented dataword 1001000 = x^6 + 0x^5 + 0x^4 + 1x^3 + 0x^2 + 0x + 0 = x^6 + x^3
Remainder: 110; codeword: 1001110
10.69
The divisor in a cyclic code is normally called the
generator polynomial or simply the generator.
Name | Uses | Generator Polynomial
CRC-6-GSM | mobile networks | x^6 + x^5 + x^3 + x^2 + x + 1
CRC-8-Bluetooth | wireless connectivity | x^8 + x^7 + x^5 + x^2 + x + 1
CRC-30 | CDMA | x^30 + x^29 + x^21 + x^20 + x^15 + x^13 + x^12 + x^11 + x^8 + x^7 + x^6 + x^2 + x + 1
CRC-32 | LANs | x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1
Ref. https://en.wikipedia.org/wiki/Cyclic_redundancy_check

10.70
CRC can detect burst errors (r = the number of check bits, i.e., the degree of the generator):
❏ All burst errors with L ≤ r will be detected.
❏ All burst errors with L = r + 1 will be detected with probability 1 − (1/2)^(r−1).
❏ All burst errors with L > r + 1 will be detected with probability 1 − (1/2)^r.

Example: Find the suitability of the following generator in relation to burst errors of different lengths: x^6 + 1.

Solution:
This generator can detect all burst errors with a length less than or equal to 6 bits, i.e., for L ≤ 6 the detection probability is 100%.

For L = 7: detection probability = 1 − (0.5)^(6−1) = 1 − (0.5)^5 = 1 − 0.03125 = 0.96875 ≈ 97%

For L = 8: detection probability = 1 − (0.5)^6 = 1 − 0.015625 = 0.984375 ≈ 98%

10.71
Home Work
Problem 1: A bit stream 1101011011 is transmitted using the standard CRC method. The generator polynomial is x^4 + x + 1. What is the actual bit string transmitted?
Ans: 11010110111110.

Problem 2: A bit stream 10011101 is transmitted using the standard CRC method. The generator polynomial is x^3 + 1. (i) What is the actual bit string transmitted? (ii) Suppose the third bit from the left is inverted during transmission. How will the receiver detect this error?
Ans: (i) 10011101100.

10.72
