2009 IEEE International Conference on Communications, 2009
In this paper, the issue of improving the performance of iterative decoders based on sub-optimal calculation of the messages exchanged during iterations (L-values) is addressed. It is well known in the literature that a simple, yet very effective, way to improve the performance of suboptimal iterative decoders is to apply a scaling factor to the L-values. Starting from a theoretical model based on the so-called consistency condition of a random variable, we propose a methodology for correcting the L-values that relies only on the distribution of the soft information exchanged in the iterative process. This methodology gives a clear explanation of why the well-known linear scaling factor performs so well, and it avoids the exhaustive search otherwise required. Numerical simulations show that for turbo codes the scaling factors found closely follow the optimum values, which translates into close-to-optimal BER performance. Moreover, for LDPC codes, the proposed methodology yields better BER performance than the known method in the literature.
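The correction idea above can be sketched numerically. Under a common Gaussian approximation, a "consistent" LLR distribution (conditioned on the transmitted bit) satisfies variance = 2 × mean, so a linear factor s = 2·mean/variance restores consistency for overconfident L-values. This is an illustrative estimator under that assumption, not the paper's exact method:

```python
import numpy as np

def consistency_scaling_factor(llrs, bits):
    """Estimate a linear correction factor s for sub-optimal LLRs.

    Under a Gaussian approximation, a consistent LLR distribution
    satisfies  variance = 2 * mean.  For observed LLRs with mean m and
    variance v, scaling by s = 2*m/v restores that relation, since
    var(s*L) = s^2 * v and mean(s*L) = s * m.
    """
    # Fold the sign so all LLRs are conditioned on the same bit value.
    signed = llrs * (1 - 2 * bits)          # bits in {0,1}; bit 0 maps to +1
    m, v = signed.mean(), signed.var()
    return 2.0 * m / v

# Toy example: overconfident LLRs, as produced e.g. by min-sum decoding.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 100_000)
true_llr = rng.normal(4.0, np.sqrt(8.0), bits.size) * (1 - 2 * bits)
overconfident = 1.3 * true_llr              # decoder inflates the L-values
s = consistency_scaling_factor(overconfident, bits)
print(s)                                    # close to 1/1.3 ≈ 0.77
```

The estimator uses only the empirical distribution of the soft values, which is the spirit of the methodology described in the abstract.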
IEEE Communications Letters, 2009
In this letter, a new metric for fast and efficient performance evaluation of iterative decoding algorithms is proposed. It is based on estimating the distance between the probability density functions (pdfs) of the symbol log-likelihood ratio (LLR) under optimal and suboptimal iterative decoding algorithms; the notion of entropy is applied to evaluate this distance. The metric is tested on data sets from different suboptimal algorithms for the duo-binary turbo codes used in WiMAX (802.16e) and for the (251,502) GF(2^6) LDPC codes. Experimental results confirm that the values of the proposed metric correlate well with the BER performance of the suboptimal implementation of the iterative decoding algorithm.
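A distance between empirical LLR pdfs of this kind can be sketched with histograms and a Kullback-Leibler-style divergence. The letter's exact entropy-based definition may differ; this is only an illustration of the idea:

```python
import numpy as np

def llr_pdf_distance(llr_opt, llr_sub, bins=64):
    """KL-style distance between empirical LLR pdfs.

    Histogram both LLR sets on a common support and compute
    D(p_opt || p_sub).  A small value suggests the sub-optimal
    decoder's soft output is statistically close to the optimal one.
    """
    lo = min(llr_opt.min(), llr_sub.min())
    hi = max(llr_opt.max(), llr_sub.max())
    p, _ = np.histogram(llr_opt, bins=bins, range=(lo, hi))
    q, _ = np.histogram(llr_sub, bins=bins, range=(lo, hi))
    p = (p + 1e-9) / (p + 1e-9).sum()       # smooth to avoid log(0)
    q = (q + 1e-9) / (q + 1e-9).sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(1)
opt = rng.normal(4.0, np.sqrt(8.0), 50_000)    # stand-in for optimal LLRs
near = rng.normal(3.8, np.sqrt(8.0), 50_000)   # mildly sub-optimal
far = rng.normal(2.0, np.sqrt(8.0), 50_000)    # strongly sub-optimal
print(llr_pdf_distance(opt, near) < llr_pdf_distance(opt, far))  # True
```

As the abstract suggests, a larger pdf distance should correlate with a larger BER degradation of the suboptimal decoder.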
IEEE Transactions on Information Theory, 2005
Asymptotic iterative decoding performance is analyzed for several classes of iteratively decodable codes as the block length of the codes and the number of iterations go to infinity. Three classes of codes are considered: Gallager's regular low-density parity-check (LDPC) codes, Tanner's generalized LDPC (GLDPC) codes, and the turbo codes due to Berrou et al. It is proved that there exist codes in these classes, and iterative decoding algorithms for these codes, for which not only the bit error probability P_b but also the block (frame) error probability P_B goes to zero as the block length and the number of iterations go to infinity.
2002 IEEE International Conference on Communications. Conference Proceedings. ICC 2002 (Cat. No.02CH37333), 2002
In this paper, the use of a reliability-based decoding algorithm for some concatenated codes with an interleaver, known as turbo-like codes, is examined to address and overcome the suboptimality of iterative decoding. Simulation results show that the suboptimality of iterative decoding for moderate-length codes can be at least partially compensated by this combined approach. Some insights into the potential additional coding gains achievable by the combined approach are given, based on the characteristics of the constituent decoders, which highlights the nature of the suboptimality of iterative decoding.
2009 IEEE International Conference on Acoustics, Speech and Signal Processing, 2009
Iterative decoding was not originally introduced as the solution to an optimization problem, which makes the analysis of its convergence very difficult. In this paper, we investigate the link between iterative decoding and classical optimization techniques. We first show that iterative decoding can be rephrased as two embedded minimization processes involving the Fermi-Dirac distance. Based on this new formulation, a hybrid proximal point algorithm is derived, with the additional advantage of decreasing a desired criterion. In a second part, a hybrid minimum-entropy algorithm is proposed with improved performance compared to classical iterative decoding. Although this paper focuses on iterative decoding for BICM, the results apply to the larger class of turbo-like decoders.
Journal of Advanced College of Engineering and Management, 2018
This paper presents a thesis consisting of a study of turbo codes as an error-control code and the software implementation of two different decoders, namely the Maximum a Posteriori (MAP) and Soft-Output Viterbi Algorithm (SOVA) decoders. Turbo codes were introduced in 1993 by Berrou et al. [2] and are perhaps the most exciting and potentially important development in coding theory in recent years. They achieve near-Shannon-limit error-correction performance with relatively simple component codes and large interleavers. They can be constructed by concatenating at least two component codes in parallel, separated by an interleaver, and the constituent convolutional codes can achieve very good results. For a concatenated scheme such as a turbo code to work properly, the decoding algorithm must effect an exchange of soft information between component decoders: the concept behind turbo decoding is to pass soft information from the output of one decoder to the input of the succeeding one, and to iterate this process several times to produce better decisions. Turbo codes are still in the process of standardization, but future applications will include mobile communication systems, deep-space communications, telemetry, and multimedia. Finally, we compare the two algorithms in terms of complexity and performance.
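The exchange of soft information described above can be sketched as a loop in which only the extrinsic part of each decoder's output is passed on. The component decoders below are hypothetical stand-ins for MAP or SOVA, not real implementations:

```python
import numpy as np

def turbo_decode(channel_llr, decode1, decode2, perm, n_iter=8):
    """Skeleton of iterative (turbo) decoding with extrinsic exchange.

    `decode1` / `decode2` are stand-ins for the component SISO decoders;
    each maps (systematic LLRs, a-priori LLRs) to a-posteriori LLRs.
    Only the *extrinsic* part -- a-posteriori minus the inputs -- is
    passed on, so a decoder never feeds back its own prior belief.
    """
    apriori = np.zeros_like(channel_llr)
    inv = np.argsort(perm)                      # inverse interleaver
    for _ in range(n_iter):
        post1 = decode1(channel_llr, apriori)
        ext1 = post1 - channel_llr - apriori
        post2 = decode2(channel_llr[perm], ext1[perm])
        ext2 = post2 - channel_llr[perm] - ext1[perm]
        apriori = ext2[inv]                     # de-interleave for decoder 1
    return post1                                # hard decision via sign

# Tiny demo with dummy decoders that add a fixed extrinsic confidence.
dummy = lambda sys, pri: sys + pri + 0.5 * np.sign(sys)
perm = np.random.default_rng(2).permutation(8)
llr = np.array([1.2, -0.7, 2.0, -1.5, 0.3, -0.2, 1.1, -2.2])
out = turbo_decode(llr, dummy, dummy, perm, n_iter=4)
print((np.sign(out) == np.sign(llr)).all())     # True
```

Subtracting the inputs before passing messages on is what keeps the iterations from simply amplifying earlier decisions.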
In this paper, the sub-optimality of iterative decoding of BCH product codes, also called Block Turbo Codes (BTC), is investigated. Lower bounds on Maximum Likelihood (ML) decoding performance for Packet Error Rate (PER) and Bit Error Rate (BER) are given in order to evaluate the optimality of the iterative decoding algorithm. On an AWGN (Additive White Gaussian Noise) channel, simulations show that turbo decoding of product codes is sub-optimal for long codes, even when the elementary codes are decoded optimally. We propose a scheme, applied after turbo decoding, to combat this sub-optimality. The scheme is described, and the performance gain is evaluated on Gaussian and Rayleigh channels.
IEEE Transactions on Information Theory, 2000
Replica shuffled versions of iterative decoders for low-density parity-check (LDPC) codes and turbo codes are presented. The proposed schemes can converge faster than standard and plain shuffled approaches. Two methods, density evolution and extrinsic information transfer (EXIT) charts, are used to analyze the performance of the proposed algorithms. Both theoretical analysis and simulations show that the new schedules offer good tradeoffs with respect to performance, complexity, latency, and connectivity.
IEEE Communications Magazine, 2003
Implementation constraints imposed on iterative decoders applying message-passing algorithms are investigated. Serial implementations similar to traditional microprocessor datapaths are compared against architectures with multiple processing elements that exploit the inherent parallelism of the decoding algorithm. Turbo codes and low-density parity-check codes, in particular, are evaluated in terms of their suitability for VLSI implementation in addition to their bit-error-rate performance as a function of signal-to-noise ratio. It is necessary to consider efficient realizations of iterative decoders when area, power, and throughput of the decoding implementation are constrained by practical design issues of communications receivers; a decoder is ultimately evaluated by its cost (silicon area), power, speed, latency, flexibility, and scalability.
Carpathian Journal of Electronic and Computer Engineering, 2017
Several modern error-correcting codes can perform close to the Shannon limit thanks to the turbo principle applied in their iterative decoding algorithms. In this paper the principle is discussed in relation to LDPC codes and turbo codes. Methods for improvement of both these codes are described, namely removal of short cycles for LDPC codes and trellis termination for turbo codes. Performance of reference LDPC and turbo codes with and without these improvements is simulated and compared.
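The short cycles mentioned above can be detected before removal: two rows of a parity-check matrix H that share two or more column positions form a length-4 cycle in the Tanner graph. A minimal sketch of that check (illustrative helper, not from the paper):

```python
import numpy as np

def has_length4_cycle(H):
    """Check a binary parity-check matrix for length-4 cycles.

    The entry (i, j) of H @ H.T counts the variable nodes shared by
    check nodes i and j; any off-diagonal entry >= 2 means those two
    checks close a 4-cycle, the shortest (and most harmful) cycle for
    belief-propagation decoding.
    """
    overlap = H @ H.T
    np.fill_diagonal(overlap, 0)                # ignore self-overlap
    return bool((overlap >= 2).any())

# Rows 0 and 1 of H_bad share columns 0 and 2 -> a 4-cycle.
H_bad = np.array([[1, 0, 1, 0],
                  [1, 1, 1, 0],
                  [0, 1, 0, 1]])
H_ok = np.array([[1, 1, 0, 0],
                 [0, 1, 1, 0],
                 [0, 0, 1, 1]])
print(has_length4_cycle(H_bad), has_length4_cycle(H_ok))  # True False
```

Cycle removal then amounts to moving one of the shared 1s to another column and re-running the check.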
This paper presents a new family of convolutional codes, nicknamed turbo-codes, built from a particular concatenation of two recursive systematic codes, linked together by nonuniform interleaving. Decoding calls on iterative processing in which each component decoder takes advantage of the work of the other at the previous step, with the aid of the original concept of extrinsic information. For sufficiently large interleaving sizes, the correcting performance of turbo-codes, investigated by simulation, appears to be close to the theoretical limit predicted by Shannon.