Proceedings of the 2nd International Conference on Computer Science and Electronics Engineering (ICCSEE 2013)

A Study on Turbo Code Performance Based on AWGN Channel

Peng Zhu, Jun Zhu, Xiang Liu


Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education
Anhui University, No. 3 Feixi Rd, Hefei 230039, China
pengzhu19881021@[Link]

Abstract—Turbo codes have a wide range of applications in 3G mobile communications, deep-space communications, satellite communications and other power-constrained fields. In this paper, the Turbo code decoding principle and several major decoding algorithms are introduced. The performance of Turbo codes over the AWGN channel is simulated under different parameters, and the effects of the interleaver length, the number of iterations and the decoding algorithm on Turbo code performance are discussed. The simulation results show that, at the same signal-to-noise ratio, the performance of Turbo codes improves as the number of iterations increases, as the information sequence becomes longer, and as a more powerful decoding algorithm is used.

Keywords—Turbo code; frame length; component code; decoding algorithm; bit error rate

I. INTRODUCTION

In 1993, the French professor Berrou et al. proposed a new coding scheme, the Turbo code, at the ICC conference [1-2]. Because its performance is close to the theoretical Shannon limit, the Turbo code has become one of the main coding schemes of third-generation high-speed mobile communications. In addition, Turbo codes are widely used in data communications, digital TV and other fields [3-5]. The good performance of Turbo codes is mainly due to the following aspects: 1) soft-input soft-output (SISO) iterative decoding; 2) the interleaver, which directly affects the distance spectrum and performance of the Turbo code [6]; 3) the use of recursive systematic convolutional codes as component encoders [7]. Thanks to their excellent performance and moderate complexity, Turbo codes have developed by leaps and bounds since they were proposed.

Currently, Turbo codes have two main types of decoding algorithm: the MAP (maximum a posteriori) algorithm [1] and the SOVA (soft-output Viterbi algorithm) [8]. The SOVA algorithm is simpler than the Log-MAP algorithm, but its inherent shortcomings make its BER performance relatively poor. The MAP algorithm is the near-optimal algorithm for Turbo decoding but is more complex to implement. Both families have evolved many variants in research and development; for example, the MAP class includes MAP, Log-MAP and Max-Log-MAP.

This paper gives the system structure of the Turbo code, a detailed analysis of the encoding and decoding principles, and several main decoding algorithms. With the help of MATLAB simulation, several important parameters of Turbo codes, namely the number of iterations, the code rate, the interleaver length and the decoding algorithm, are studied. The results provide a reference for the application of Turbo codes.

II. TURBO CODING PRINCIPLES

A Turbo encoder is usually formed by connecting two feedback RSC (recursive systematic convolutional) encoders in parallel through an interleaver. After encoding, the parity bits pass through a puncturing unit, so that code words of different code rates can be generated. The principle is shown in Fig. 1.

The information sequence $d = \{d_1, \dots, d_N\}$ passes through an $N$-bit interleaver, which leaves the length and content of $d$ unchanged but rearranges the bit positions to form a new sequence $d' = \{d'_1, d'_2, \dots, d'_N\}$. The sequences $d$ and $d'$ are fed respectively to the two component encoders (RSC1 and RSC2). Under normal circumstances the two component encoders have the same structure and generate the parity sequences $X^{1p}$ and $X^{2p}$. In order to raise the code rate, $X^{1p}$ and $X^{2p}$ go through the puncturing unit: some parity bits are removed periodically from these two check sequences, producing the parity sequence $X^{p}$. After $X^{p}$ is multiplexed with the uncoded systematic sequence $X^{s}$, the Turbo code sequence is generated.

Figure 1. Block diagram of the Turbo encoder (information sequence $d$, interleaved sequence $d'$, systematic output $X^{s}$, component parity outputs $X^{1p}$ and $X^{2p}$, punctured parity output $X^{p}$).
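As an illustration of the structure in Fig. 1, the following minimal sketch (Python with NumPy) encodes a block with two identical RSC component encoders and a pseudo-random interleaver. It assumes the (7,5) octal generator used later in the simulations of Section IV; trellis termination and the puncturing unit are omitted, so this is a sketch of the data flow rather than a complete encoder.

```python
# Minimal sketch of the rate-1/3 Turbo encoder of Fig. 1, assuming the (7,5) octal
# RSC component code (feedback 1 + D + D^2, feedforward 1 + D^2).
import numpy as np

def rsc_75_parity(bits):
    """Parity stream of a (1, 5/7) recursive systematic convolutional encoder."""
    s1 = s2 = 0                       # shift-register contents a_{k-1}, a_{k-2}
    parity = []
    for u in bits:
        a = u ^ s1 ^ s2               # feedback taps 1 + D + D^2 (octal 7)
        parity.append(a ^ s2)         # feedforward taps 1 + D^2  (octal 5)
        s1, s2 = a, s1
    return np.array(parity, dtype=int)

def turbo_encode(u, perm):
    """Parallel concatenation: systematic X^s, parity X^1p, interleaved parity X^2p."""
    xs  = np.asarray(u, dtype=int)    # systematic sequence d -> X^s
    x1p = rsc_75_parity(xs)           # RSC1 works on the natural order d
    x2p = rsc_75_parity(xs[perm])     # RSC2 works on the interleaved order d'
    return xs, x1p, x2p               # multiplexing / puncturing would follow here

rng = np.random.default_rng(0)
N = 16
u = rng.integers(0, 2, N)
perm = rng.permutation(N)             # pseudo-random interleaver of length N
print(turbo_encode(u, perm))
```

Keeping both parity streams as produced gives the rate-1/3 code; the puncturing stage described above would thin them out to reach, for example, rate 1/2.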
III. DECODING PRINCIPLE AND DECODING ALGORITHM OF TURBO CODE

A. Decoding Structure of Turbo Code

The typical structure of the Turbo decoder is shown in Fig. 2. The demodulator output at the receiver, after appropriate quantization, is fed into the decoder.


The decoder contains two soft-input soft-output (SISO) component decoders, DEC1 and DEC2, which decode, respectively, the systematic information $X^{s}$ together with the check information $Y^{1p}$ and $Y^{2p}$ produced by the encoder. The extrinsic information generated by DEC1 from $X^{s}$ is interleaved and fed to DEC2 as a priori information; the extrinsic information produced by DEC2 is de-interleaved and fed back to DEC1 as a priori information, forming an iterative computation. After a number of iterations, the extrinsic information generated by the two SISO decoders and the log-likelihood ratio (LLR) become consistent. The LLR, after de-interleaving, is sent to the hard-decision device: an LLR smaller than zero is decided as 0, otherwise the bit is decided as 1, and the decoding process is complete. It can be seen from this process that each iteration requires two SISO computations; each SISO decoder is responsible for only half an iteration, so completing the whole decoding requires a long delay.

Figure 2. Classical structure of the Turbo decoder (channel inputs $X^{s}$, $Y^{p1}$ and $Y^{p2}$; two SISO component decoders exchanging extrinsic information through the interleaver and de-interleaver).
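The data flow just described can be summarized by the structural sketch below. The SISO blocks are represented by a hypothetical placeholder function whose extrinsic output is trivially zero, so the snippet only illustrates the interleaving, de-interleaving and extrinsic-information exchange of Fig. 2, not an actual decoding algorithm.

```python
import numpy as np

def siso_placeholder(sys_llr, par_llr, apriori_llr):
    """Hypothetical stand-in for a SISO component decoder (DEC1/DEC2).

    A real decoder would run the MAP/Log-MAP recursion of Sections III.B-C over
    its trellis; this stub only reproduces the interface: a posterior LLR and the
    extrinsic information passed to the other decoder.
    """
    posterior = sys_llr + apriori_llr               # placeholder for L(u_k | Y)
    extrinsic = posterior - sys_llr - apriori_llr   # zero here; non-zero for a real SISO
    return posterior, extrinsic

N = 8
rng = np.random.default_rng(1)
perm = rng.permutation(N)                 # pseudo-random interleaver pattern

sys_llr  = rng.normal(size=N)             # channel LLRs of the systematic bits X^s
par1_llr = rng.normal(size=N)             # channel LLRs of parity stream Y^1p (used by DEC1)
par2_llr = rng.normal(size=N)             # channel LLRs of parity stream Y^2p (used by DEC2)

apriori = np.zeros(N)                     # DEC1 starts without a priori information
for _ in range(5):                        # a fixed number of iterations, as in Section IV
    _, ext1 = siso_placeholder(sys_llr, par1_llr, apriori)               # DEC1, natural order
    llr2, ext2 = siso_placeholder(sys_llr[perm], par2_llr, ext1[perm])   # DEC2, interleaved order
    apriori = np.empty(N)
    apriori[perm] = ext2                  # de-interleave DEC2's extrinsic output for DEC1

final_llr = np.empty(N)
final_llr[perm] = llr2                    # de-interleave the final LLR
decisions = (final_llr >= 0).astype(int)  # LLR < 0 -> 0, otherwise 1 (hard decision)
print(decisions)
```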
B. MAP Algorithm

Based on the received sequence $Y$, the MAP algorithm determines for each information bit $u_k$ the probability that it is "+1" (1) or "-1" (0), which is equivalent to computing the log-likelihood ratio (LLR) $L(u_k \mid Y)$ of $u_k$ given the sequence $Y$:

$L(u_k \mid Y) = \ln \dfrac{P(u_k = +1 \mid Y)}{P(u_k = -1 \mid Y)}$    (1)

According to the MAP algorithm, (1) can be written as

$L(u_k \mid Y) = \ln \dfrac{\sum_{(s',s):\,u_k=+1} \alpha_{k-1}(s')\,\gamma_k(s',s)\,\beta_k(s)}{\sum_{(s',s):\,u_k=-1} \alpha_{k-1}(s')\,\gamma_k(s',s)\,\beta_k(s)}$    (2)

The quantities $\alpha_k(s)$, $\beta_k(s)$ and $\gamma_k(s',s)$ in (2) are complex to compute directly; $\alpha_k(s)$ and $\beta_k(s)$ can be obtained recursively by a forward and a backward recursion, and $\gamma_k(s',s)$ is the branch metric:

$\alpha_k(s) = \sum_{s'} \alpha_{k-1}(s')\,\gamma_k(s',s)$, $\quad \beta_{k-1}(s') = \sum_{s} \beta_k(s)\,\gamma_k(s',s)$, $\quad \gamma_k(s',s) = C\,e^{u_k L(u_k)/2}\exp\!\Big(\tfrac{1}{2} L_c \sum_{l=1}^{n} x_{k,l}\,y_{k,l}\Big)$    (3)

The recursions must be initialized: the initial values $\alpha_0(S_0)$ and $\beta_N(S_N)$ are given by

$\alpha_0(S_0) = \begin{cases} 1, & S_0 = 0 \\ 0, & S_0 \neq 0 \end{cases}$    (4)

$\beta_N(S_N) = \begin{cases} 1, & S_N = 0 \\ 0, & S_N \neq 0 \end{cases}$    (5)

The MAP algorithm therefore requires a large number of additions, multiplications, exponentiations and logarithms, which leads to high computational cost and delay, so practical applications are based on simplified versions of the algorithm.
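A compact sketch of the forward-backward recursion of (2)-(5) is given below (Python with NumPy). The branch metrics $\gamma_k(s',s)$ of (3) are assumed to be supplied by the caller as an array, together with a mask marking which branches carry $u_k = +1$; the per-step normalization of $\alpha$ and $\beta$ is added only to avoid numerical underflow and does not change the LLR. The toy example at the end uses arbitrary positive branch metrics purely to exercise the recursion; it does not correspond to a particular code.

```python
import numpy as np

def bcjr_llr(gamma, u_is_plus1):
    """Probability-domain forward-backward recursion of eqs. (2)-(5).

    gamma[k, s_prev, s]      -- branch metric gamma_k(s', s); 0 where no branch exists
    u_is_plus1[k, s_prev, s] -- True where the branch (s', s) at step k carries u_k = +1
    Returns the LLR L(u_k | Y) for k = 0 .. N-1.
    """
    N, S, _ = gamma.shape

    # Forward recursion, eq. (3): alpha_k(s) = sum_{s'} alpha_{k-1}(s') * gamma_k(s', s)
    alpha = np.zeros((N + 1, S))
    alpha[0, 0] = 1.0                       # eq. (4): the trellis starts in the all-zero state
    for k in range(N):
        alpha[k + 1] = alpha[k] @ gamma[k]
        alpha[k + 1] /= alpha[k + 1].sum()  # normalisation against numerical underflow

    # Backward recursion, eq. (3): beta_{k-1}(s') = sum_s beta_k(s) * gamma_k(s', s)
    beta = np.zeros((N + 1, S))
    beta[N, 0] = 1.0                        # eq. (5): the trellis is terminated in state 0
    for k in range(N, 0, -1):
        beta[k - 1] = gamma[k - 1] @ beta[k]
        beta[k - 1] /= beta[k - 1].sum()

    # LLR, eq. (2): sums of alpha * gamma * beta over u_k = +1 and u_k = -1 branches
    llr = np.zeros(N)
    for k in range(N):
        joint = alpha[k][:, None] * gamma[k] * beta[k + 1][None, :]
        llr[k] = np.log(joint[u_is_plus1[k]].sum() / joint[~u_is_plus1[k]].sum())
    return llr

# Toy 2-state example with arbitrary positive branch metrics (illustration only).
rng = np.random.default_rng(0)
N, S = 4, 2
gamma = rng.uniform(0.1, 1.0, size=(N, S, S))
u_mask = np.zeros((N, S, S), dtype=bool)
u_mask[:, 0, :] = True                      # arbitrary labelling: branches leaving state 0 carry u_k = +1
print(bcjr_llr(gamma, u_mask))
```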

C. Log-MAP Algorithm

The Log-MAP algorithm modifies the MAP algorithm by computing directly in the logarithmic domain of $\alpha_k(s)$, $\beta_k(s)$, $\gamma_k(s',s)$ and $L(u_k \mid Y)$, which eliminates many exponential and logarithmic operations and greatly reduces the amount of computation. $A_k(s)$, $B_k(s)$ and $\Gamma_k(s',s)$ correspond respectively to $\alpha_k(s)$, $\beta_k(s)$ and $\gamma_k(s',s)$ of the MAP algorithm:

$A_k(s) = \ln(\alpha_k(s))$, $\quad B_k(s) = \ln(\beta_k(s))$, $\quad \Gamma_k(s',s) = \ln(\gamma_k(s',s))$    (6)

And then

$A_k(s) = \ln(\alpha_k(s)) = \ln \sum_{s'} \exp\big(A_{k-1}(s') + \Gamma_k(s',s)\big) = \max^*_{s'}\big(A_{k-1}(s') + \Gamma_k(s',s)\big)$

$B_{k-1}(s') = \ln(\beta_{k-1}(s')) = \ln \sum_{s} \exp\big(B_k(s) + \Gamma_k(s',s)\big) = \max^*_{s}\big(B_k(s) + \Gamma_k(s',s)\big)$    (7)

$\Gamma_k(s',s) = \ln(\gamma_k(s',s)) = C + \tfrac{1}{2} u_k L(u_k) + \tfrac{L_c}{2} \sum_{l=1}^{n} y_{k,l}\,x_{k,l}$

In (7), $C$ is a constant and $\Gamma_k(s',s)$ is the branch metric on the trellis. $L(u_k \mid Y)$ is then computed as a difference, as in (8):

$L(u_k \mid Y) = \max^*_{(s',s):\,u_k=+1}\big(A_{k-1}(s') + \Gamma_k(s',s) + B_k(s)\big) - \max^*_{(s',s):\,u_k=-1}\big(A_{k-1}(s') + \Gamma_k(s',s) + B_k(s)\big)$    (8)

where the $\max^*$ operation is defined as

$\max^*(x,y) = \ln(e^x + e^y) = \max(x,y) + \ln\big(1 + e^{-|x-y|}\big) = \max(x,y) + f_c(|x-y|)$    (9)

and $f_c$ is called the correction function. The Log-MAP algorithm is equivalent to the MAP algorithm; only the multiplications of the MAP algorithm are mapped to additions, which reduces the complexity of the algorithm and eases hardware implementation.
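The $\max^*$ operation of (9) is the only non-trivial primitive that Log-MAP needs; a small sketch of it, and of folding it over several branch terms as in (7), is shown below. The example values are arbitrary and only check that the folded $\max^*$ agrees with the exact log-sum-exp.

```python
import numpy as np
from functools import reduce

def max_star(x, y):
    """max*(x, y) = ln(e^x + e^y) = max(x, y) + ln(1 + e^{-|x-y|}), eq. (9)."""
    return np.maximum(x, y) + np.log1p(np.exp(-np.abs(x - y)))

def max_star_over(values):
    """Fold max* over several terms, e.g. over all s' in eq. (7)."""
    return reduce(max_star, values)

terms = [0.3, -1.1, 2.0, 0.5]    # e.g. A_{k-1}(s') + Gamma_k(s', s) for each predecessor s'
print(max_star_over(terms), np.log(np.sum(np.exp(terms))))   # the two values agree
```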
D. Max-Log-MAP Algorithm

The Max-Log-MAP algorithm also works in the logarithmic domain, but the correction term in (9) is ignored and the approximation $\ln(e^x + e^y) \approx \max(x, y)$ is used, so that the summations turn entirely into maximum-selection operations. This further simplifies the algorithm, at the expense of some performance degradation.
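The price of the Max-Log-MAP simplification is exactly the dropped correction term, which is bounded by $\ln 2$; the following few lines make the difference concrete for one arbitrary pair of values.

```python
import numpy as np

x, y = 1.2, 0.7
exact  = np.log(np.exp(x) + np.exp(y))   # Log-MAP keeps the exact value of eq. (9)
approx = max(x, y)                       # Max-Log-MAP drops the correction term f_c
print(exact, approx, exact - approx)     # the gap ln(1 + e^{-|x-y|}) is at most ln 2 ~ 0.693
```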
E. SOVA Algorithm

The SOVA algorithm is based on the Viterbi algorithm, has a soft-decision output, and exploits extrinsic information. The Viterbi algorithm is a maximum-likelihood decoding algorithm and is relatively simple compared to the MAP algorithm; its amount of computation does not increase when the length of the sequence increases, and it is the best-performing sequence-decoding algorithm for convolutional codes. If the MAP algorithm aims to minimize the error probability of each code element, the SOVA algorithm aims to minimize the error probability of each sequence. Compared to the traditional Viterbi algorithm, the SOVA algorithm adds a reliability calculation and an update process, which increases its complexity; but because the underlying algorithm is relatively simple, its amount of computation is still smaller than that of the MAP algorithm, so it is widely used in the field of communications.

In terms of performance the ordering is MAP > Log-MAP > Max-Log-MAP > SOVA, and the same ordering holds for the complexity of the algorithms. A balance therefore has to be found between decoding performance and implementation complexity.

IV. TURBO CODE PERFORMANCE SIMULATION AND ANALYSIS

A. Influence of the Information Sequence Length on Turbo Code Performance

The simulation parameters for Fig. 3 are set as follows: the generator matrix is (7,5), the code rate is 1/3, the interleaver is a pseudo-random interleaver, the information sequence length is 64, 512 or 1024, the number of iterations is five, and the decoding algorithm is Log-MAP.

From the simulation results in Fig. 3 we can see that, for the same SNR, the Turbo code error rate decreases as the frame length increases. When the signal-to-noise ratio is low, increasing the information sequence length already yields a visible BER improvement. When the signal-to-noise ratio is greater than 0.8 dB, the larger the frame size, the more advantageous the decoder performance and the better the error-correction capability of the Turbo code. Therefore, a longer frame length is desirable for Turbo code performance; however, although increasing the frame length does not increase the decoding complexity per bit, the frame length determines the transmission delay of the system and the storage space needed for decoding, so the choice of frame length must be a trade-off.

Figure 3. Performance simulation of Turbo codes with different information sequence lengths (BER versus SNR in dB for frame sizes 64, 512 and 1024).
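The curves in Figs. 3-7 come from Monte-Carlo sweeps of this kind. As a hedged illustration of the methodology only, the runnable harness below measures the raw (pre-decoding) channel BER of BPSK over AWGN at the Eb/N0 values of the plots, using the rate-1/3 energy scaling; a Turbo encoder and decoder would be inserted between the bit source and the hard decision to reproduce the paper's figures. It is not the authors' MATLAB simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 1 / 3                                 # code rate assumed when converting Eb/N0 to noise power
frame_size = 1024
snr_db = np.arange(0.0, 2.1, 0.5)            # Eb/N0 grid similar to the figures

for ebn0_db in snr_db:
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = np.sqrt(1 / (2 * rate * ebn0))   # per-dimension noise std-dev: Es = rate * Eb, unit-energy BPSK
    errors, bits = 0, 0
    while errors < 100:                      # count enough errors for a stable estimate
        u = rng.integers(0, 2, frame_size)
        x = 1 - 2 * u                        # BPSK mapping: 0 -> +1, 1 -> -1
        y = x + sigma * rng.normal(size=frame_size)
        u_hat = (y < 0).astype(int)          # hard decision (a Turbo decoder would go here)
        errors += np.count_nonzero(u_hat != u)
        bits += frame_size
    print(f"Eb/N0 = {ebn0_db:.1f} dB  raw channel BER = {errors / bits:.3e}")
```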


B. Influence of the Code Rate on Turbo Code Performance

The simulation parameters for Fig. 4 are set as follows: the generator matrix is (7,5), the number of iterations is five, the interleaver is a pseudo-random interleaver, the decoding algorithm is Log-MAP, and the information sequence length is 1024.

Figure 4. Influence of different code rates on Turbo code performance (BER versus SNR in dB for code rates 1/2 and 1/3).

The simulation results show that, for the same SNR, the rate-1/3 code has better bit-error-rate performance than the rate-1/2 code. This is because the rate-1/2 Turbo code is obtained by puncturing, and the puncturing process may remove useful information, which reduces decoding accuracy and degrades the error-correction performance of the code word; on the other hand, its complexity and delay are lower than those of the unpunctured code. In an actual communication channel, therefore, the error rate and the coding efficiency have to be weighed against each other.
C. Influence of the Component Codes on Turbo Code Performance

The simulation parameters for Fig. 5 are set as follows: the number of iterations is five, the code rate is 1/3, the interleaver is a pseudo-random interleaver, the decoding algorithm is Log-MAP, and the information sequence length is 1024.

Figure 5. Performance simulation of Turbo codes with different component codes (BER versus SNR in dB for generator polynomials g(7,5) and g1(15,13)).

The conclusion that can be drawn from the simulation results is: for a given information sequence length and coding rate but different constraint lengths, the performance difference between the Turbo codes grows as the signal-to-noise ratio increases. When the signal-to-noise ratio is small, the performance of Turbo codes with different constraint lengths is more or less the same; as the signal-to-noise ratio increases, the Turbo code with the larger constraint length performs better than the one with the small constraint length, and its advantage becomes more and more obvious.
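To make the constraint-length comparison of Fig. 5 concrete, the sketch below generalizes the RSC parity generator to arbitrary octal polynomials, so that the (7,5) code (constraint length 3) and the (15,13) code (constraint length 4) differ only in their arguments. The bit-ordering convention (most significant octal bit taken as the coefficient of $D^0$, first polynomial taken as the feedback) is an assumption, since the paper does not state it.

```python
def octal_taps(g_oct, K):
    """Tap coefficients [c0, c1, ..., c_{K-1}] of D^0..D^{K-1} (MSB of g_oct taken as c0)."""
    return [int(b) for b in bin(g_oct)[2:].zfill(K)]

def rsc_parity(bits, g_fb=0o7, g_ff=0o5):
    """Parity stream of a recursive systematic encoder with octal feedback/feedforward polynomials."""
    K = max(g_fb.bit_length(), g_ff.bit_length())      # constraint length
    fb, ff = octal_taps(g_fb, K), octal_taps(g_ff, K)
    state = [0] * (K - 1)                               # registers holding a_{k-1}, ..., a_{k-K+1}
    out = []
    for u in bits:
        a = u
        for i in range(1, K):
            a ^= fb[i] & state[i - 1]                   # feedback taps on D^1 .. D^{K-1}
        p = ff[0] & a
        for i in range(1, K):
            p ^= ff[i] & state[i - 1]                   # feedforward taps
        out.append(p)
        state = [a] + state[:-1]
    return out

u = [1, 0, 1, 1, 0, 0, 1, 0]
print(rsc_parity(u, 0o7, 0o5))     # component code g(7,5),   constraint length 3
print(rsc_parity(u, 0o15, 0o13))   # component code g1(15,13), constraint length 4
```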
D. Influence of the Number of Iterations on Turbo Code Performance

The simulation parameters of the curves shown in Fig. 6 are: the generator matrix is (7,5), the code rate is 1/3, the interleaver is a pseudo-random interleaver, the decoding algorithm is Log-MAP, and the information sequence length is 1024.

Figure 6. Influence of different numbers of iterations on Turbo code performance (BER versus Eb/N0 in dB for 1, 3, 5 and 7 iterations).

From Fig. 6 it can be seen that the Turbo code error rate is reduced as the number of iterations increases. As the signal-to-noise ratio grows, increasing the number of iterations drastically reduces the bit error rate; however, once a certain number of iterations is reached, further increases no longer improve the BER significantly and only add unnecessary computational burden. The saturation point must therefore be considered when choosing the number of iterations in an actual system design.
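One practical way to respect this saturation point, not described in the paper but commonly used, is to stop iterating once the hard decisions of consecutive iterations no longer change. The check itself is a one-liner; the toy values below only illustrate how it would trigger.

```python
import numpy as np

def saturated(llr_prev, llr_curr):
    """Early-stopping check: the hard decisions of two consecutive iterations agree."""
    return np.array_equal(llr_prev >= 0, llr_curr >= 0)

# Toy illustration: the decisions change between the first two LLR vectors, then stabilise.
history = [np.array([-0.2, 0.4, 0.1]),
           np.array([-0.5, 0.9, -0.3]),
           np.array([-0.7, 1.2, -0.4])]
for prev, curr in zip(history, history[1:]):
    print(saturated(prev, curr))    # False, then True -> further iterations can be skipped
```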
E. Influence of the Decoding Algorithm on Turbo Code Performance


The simulation parameters of the curves shown in Fig. 7 are: the generator matrix is (7,5), the code rate is 1/2, the interleaver is a pseudo-random interleaver, the number of iterations is five, and the information sequence length is 512.

As can be seen from Fig. 7, the Log-MAP algorithm provides about 1 dB of decoding gain over the SOVA algorithm and therefore better decoding performance. The performance of the Log-MAP algorithm is equivalent to that of the MAP algorithm, but multiplications are turned into additions, which greatly reduces the complexity of the algorithm and makes Log-MAP more practical. The SOVA algorithm is the simplest and at the same time has the worst performance. In contrast, the computational complexity of the Log-MAP algorithm is about twice that of the SOVA algorithm, so its decoding speed is obviously a limitation, but it remains perfectly acceptable in practical applications. The Log-MAP algorithm is the best algorithm for Turbo decoding.

Figure 7. Influence of different decoding algorithms on Turbo code performance (BER versus SNR in dB for Log-MAP and SOVA).

V. CONCLUSIONS
The performance of Turbo codes has been simulated under different conditions over the AWGN channel and analysed through software simulation. The influence of the frame size, the number of iterations, the component code, the code rate and the decoding algorithm on Turbo code performance has been obtained, which provides a reference for the choice of Turbo code parameters in hardware implementations.
REFERENCES
[1] Berrou C, Glavieux A. Near Shannon limit error-correcting coding and decoding: Turbo codes. Proc. ICC'93, Geneva, 1993: 1064-1070.
[2] Shannon C E. A Mathematical Theory of Communication. Bell System Technical Journal, 1948.
[3] Han J H, Erdogan A T, Arslan T. Implementation of an efficient two-step SOVA Turbo decoder for wireless communication systems. Proc. IEEE Global Telecommunications Conference (GLOBECOM'05), 2005: 2429-2433.
[4] Nikopour H, Khandani A K, Jamali S H. Turbo-coded OFDM transmission over a nonlinear channel. IEEE Transactions on Vehicular Technology, 2005, 54(4): 1361-1367.
[5] Dehghan H. Digital television receivers with blind iterative linear cancellers. Proc. International Conference on Consumer Electronics (ICCE'06), 2006: 17-18.
[6] Yeh C J, Ueng Y L, Lin M C, et al. Interblock memory for Turbo coding. IEEE Transactions on Communications, 2010, 58(2): 390-393.
[7] Li Chong, Deng Wei-bo, Duan Ling-jie. Performance analysis of Turbo codes in communication channels with impulsive noise. IEEE Transactions on Information Theory, 2009, 20(1): 27-31.
[8] Hagenauer J, Robertson P, Papke L. Iterative ('TURBO') decoding of systematic convolutional codes with MAP and SOVA algorithms. Proc. ITG Conference on Source and Channel Coding (SCC'94), Munich, Germany, 1994: 21-29.
