2009 IEEE International Conference on Acoustics, Speech and Signal Processing, 2009
Iterative decoding was not originally introduced as the solution to an optimization problem, which makes the analysis of its convergence very difficult. In this paper, we investigate the link between iterative decoding and classical optimization techniques. We first show that iterative decoding can be rephrased as two embedded minimization processes involving the Fermi-Dirac distance. Based on this new formulation, a hybrid proximal point algorithm is first derived, with the additional advantage of decreasing a desired criterion. In the second part, a hybrid minimum-entropy algorithm is proposed, with improved performance compared to classical iterative decoding. Although this paper focuses on iterative decoding for BICM, the results can be applied to the large class of turbo-like decoders.
2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2011
Iterative decoding is considered in this paper from an optimization point of view. Starting from optimal maximum-likelihood decoding, a (tractable) approximate criterion is derived. The global maximum of the approximate criterion is analyzed: the maximum-likelihood solution can be retrieved from the approximate criterion in some particular cases. The classical equations of turbo decoders can be obtained as an instance of a hybrid Jacobi/Gauss-Seidel implementation of the iterative maximization of the tractable criterion. The extrinsics are a natural consequence of this implementation. In the simulation part, we show a practical application of these results.
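The Jacobi/Gauss-Seidel distinction mentioned above is purely a matter of update schedule. A minimal sketch of the two schedules, applied to coordinate-wise minimization of a toy quadratic criterion rather than to the paper's decoding criterion (the matrix, vectors, and function names are assumptions for illustration):

```python
import numpy as np

def jacobi_step(A, b, x):
    # Jacobi schedule: every coordinate is updated from the previous
    # iterate x only, so all updates could run in parallel.
    d = np.diag(A)
    return (b - A @ x + d * x) / d

def gauss_seidel_step(A, b, x):
    # Gauss-Seidel schedule: each coordinate update immediately uses
    # the freshest values of the coordinates updated before it.
    x = x.copy()
    for i in range(len(b)):
        x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

# Coordinate-wise minimization of 0.5*x'Ax - b'x (A symmetric positive
# definite) drives both schedules to the solution of Ax = b.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.zeros(2)
for _ in range(50):
    x = gauss_seidel_step(A, b, x)
print(x, np.linalg.solve(A, b))   # the two agree
```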
2009 IEEE International Conference on Communications, 2009
In this paper the issue of improving the performance of iterative decoders based on sub-optimal calculation of the messages exchanged during iterations (L-values) is addressed. It is well known in the literature that a simple, yet very effective, way to improve the performance of suboptimal iterative decoders is to apply a scaling factor to the L-values. In this paper, starting with a theoretical model based on the so-called consistency condition of a random variable, we propose a methodology for correcting the L-values that relies only on the distribution of the soft information exchanged in the iterative process. This methodology gives a clear explanation of why the well-known linear scaling factor provides very good performance. Additionally, the proposed methodology allows us to avoid the exhaustive search otherwise required. Numerical simulations show that for turbo codes the scaling factors found closely follow the optimum values, which translates into close-to-optimal BER performance. Moreover, for LDPC codes, the proposed methodology produces better BER performance than the known method in the literature.
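One way to see why a single linear scaling factor works: under the common Gaussian model of L-values, the consistency condition pins the variance to twice the mean, and a mismatch can be undone by one multiplicative constant. A minimal sketch under that Gaussian assumption (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def consistency_scaling(llrs, tx_bits):
    # Condition the L-values on the transmitted bits (known in a
    # simulation) so that a consistent LLR has pdf mean mu and
    # variance 2*mu under the usual Gaussian model.
    l = llrs * (1.0 - 2.0 * tx_bits)      # tx_bits in {0, 1}
    mu, var = l.mean(), l.var()
    # Scaling L by s gives mean s*mu and variance s^2*var; enforcing
    # the consistency condition s^2*var = 2*s*mu yields:
    return 2.0 * mu / var
```

Applying the returned factor to the suboptimal L-values before the next iteration restores consistency in this model, which is the effect an exhaustive search over scaling factors otherwise approximates.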
IEEE Communications Letters, 2009
In this letter, a new metric for fast and efficient performance evaluation of iterative decoding algorithms is proposed. It is based on estimating the distance between the probability density functions (pdfs) of the symbol log-likelihood ratios (LLRs) of the optimal and suboptimal iterative decoding algorithms. We apply the notion of entropy to evaluate this distance. The metric is tested on data sets from different suboptimal algorithms for the duo-binary turbo codes used in WiMAX (802.16e) applications and for (251,502) Galois-field GF(2^6) LDPC codes. Experimental results confirm that the values of the proposed metric correlate well with the BER performance of the suboptimal implementations of the iterative decoding algorithm.
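A plausible instantiation of such a metric, sketched under the assumption that the pdfs are estimated by histograms and compared with a Kullback-Leibler (entropy-based) divergence; the letter does not fix these function names:

```python
import numpy as np

def llr_pdf_distance(llr_opt, llr_sub, bins=100):
    # Histogram estimates of the two LLR pdfs on a common support
    lo = min(llr_opt.min(), llr_sub.min())
    hi = max(llr_opt.max(), llr_sub.max())
    p, edges = np.histogram(llr_opt, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(llr_sub, bins=bins, range=(lo, hi), density=True)
    w = np.diff(edges)                    # bin widths
    eps = 1e-12                           # guards empty bins in the log
    # Kullback-Leibler divergence D(p || q) between the two estimates
    return float(np.sum(w * p * np.log((p + eps) / (q + eps))))
```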
IEEE Journal on Selected Areas in Communications, 2001
Iterative decoding is used to achieve backward-compatible performance improvement in several existing systems. Concatenated coding and iterative decoding are first set up using composite mappings, so that various applications in digital communication and recording can be described in a concise and uniform manner. An ambiguity zone detection (AZD) based iterative decoder, operating on generalized erasures, is described as an alternative for concatenated systems where turbo decoding cannot be performed. The described iterative decoding techniques are then applied to selected wireless communication and digital recording systems. Simulation results and the utilization of decoding gains are briefly discussed.
Journal of Advanced College of Engineering and Management, 2018
This paper presents a thesis consisting of a study of turbo codes as an error-control code and the software implementation of two different decoders, namely the Maximum a Posteriori (MAP) and Soft-Output Viterbi Algorithm (SOVA) decoders. Turbo codes were introduced in 1993 by Berrou et al. [2] and are perhaps the most exciting and potentially important development in coding theory in recent years. They achieve near-Shannon-limit error-correction performance with relatively simple component codes and large interleavers. They can be constructed by concatenating at least two component codes in parallel, separated by an interleaver, and convolutional component codes can achieve very good results. In order for a concatenated scheme such as a turbo code to work properly, the decoding algorithm must effect an exchange of soft information between component decoders. The concept behind turbo decoding is to pass soft information from the output of one decoder to the input of the succeeding one, and to iterate this process several times to produce better decisions. Turbo codes are still in the process of standardization, but future applications will include mobile communication systems, deep-space communications, telemetry, and multimedia. Finally, we compare the complexity and performance of these two algorithms.
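As a brief illustration of this soft-information exchange, the a-posteriori LLR produced by the first soft-input soft-output component decoder is commonly decomposed as (standard turbo-decoding notation, not taken from this paper):

$$
L^{(1)}(u_k) = L_c\,y_k^{s} + L_a^{(1)}(u_k) + L_e^{(1)}(u_k),
$$

where $L_c y_k^{s}$ is the channel LLR of the systematic bit and $L_a^{(1)}$ is the a-priori input. Only the extrinsic part $L_e^{(1)}$, after interleaving, is passed on as the a-priori input $L_a^{(2)}$ of the second decoder; subtracting the input terms prevents information from being fed back to its own source.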
IEEE Transactions on Information Theory, 1995
IEEE Communications Magazine, 2003
Implementation constraints imposed on iterative decoders applying message-passing algorithms are investigated. Serial implementations similar to traditional microprocessor datapaths are compared against architectures with multiple processing elements that exploit the inherent parallelism in the decoding algorithm. Turbo codes and low-density parity-check codes, in particular, are evaluated in terms of their suitability for VLSI implementation in addition to their bit-error rate performance as a function of signal-to-noise ratio. It is necessary to consider efficient realizations of iterative decoders when area, power, and throughput of the decoding implementation are constrained by practical design issues of communications receivers. A decoder implementation is ultimately evaluated by its cost (silicon area), power, speed, latency, flexibility, and scalability.
2006 IEEE Workshop on Signal Processing Systems Design and Implementation, 2006
Two novel stopping criteria for iterative decoding of turbo codes are introduced in this paper. The proposed criteria are shown to substantially reduce the required computational complexity, while the achieved bit-error rate is not significantly affected, compared to results based on previously published stopping criteria. Computational complexity reduction is achieved by reducing the number of iterations. Further complexity reduction is achieved by reducing the volume of the data on which the criteria operate. The proposed criteria are shown to reduce the required computational complexity by a factor of ten for cases of practical interest. Two solutions are presented that allow the exploitation of a performance-complexity trade-off.
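The two criteria proposed in this paper are not reproduced in the abstract, but the general shape of an iteration-stopping test can be seen in the classical hard-decision-aided rule, sketched below (a minimal illustration assuming LLR arrays from two consecutive iterations; these are not the paper's criteria):

```python
import numpy as np

def hda_stop(llr_prev, llr_curr):
    # Hard-decision-aided rule: stop iterating once the hard decisions
    # of two consecutive iterations agree on every bit.
    return np.array_equal(llr_prev > 0, llr_curr > 0)
```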
Abstract—This paper presents a new family of convolutional codes, nicknamed turbo-codes, built from a particular concatenation of two recursive systematic codes, linked together by nonuniform interleaving. Decoding calls on iterative processing in which each component decoder takes advantage of the work of the other at the previous step, with the aid of the original concept of extrinsic information. For sufficiently large interleaving sizes, the correcting performance of turbo-codes, investigated by simulation, appears to be close to the theoretical limit predicted by Shannon.
IEEE 6th Workshop on Signal Processing Advances in Wireless Communications, 2005
Reduced-latency versions of iterative decoders for turbo codes are presented and analyzed. The proposed schemes converge faster than standard and shuffled decoders. EXIT charts are used to analyze the performance of the proposed algorithms. Both theoretical analysis and simulation results show that the new schedules offer good performance/complexity trade-offs.
IEEE Transactions on Communications, 2000
Recently, noncoherent sequence detection schemes for coded linear and continuous phase modulations have been proposed, which deliver hard decisions by means of a Viterbi algorithm. The current trend in digital transmission systems toward iterative decoding algorithms motivates an extension of these schemes. In this paper, we propose two noncoherent soft-output decoding algorithms. The first solution has a structure similar to that of the well-known algorithm by Bahl et al. (BCJR), whereas the second is based on noncoherent sequence detection and a reduced-state soft-output Viterbi algorithm.
JPL TDA Progress …, 1996
IEEE Transactions on Communications, 2000
Near-capacity performance of turbo codes is generally achieved with a large number of decoding iterations. Various iteration stopping rules introduced in the literature often induce performance loss. This paper proposes a novel partial decoding iteration scheme using a bit-level convergence test. We first establish decoding optimality of windowed partial iteration for non-converged bits given that convergence has been achieved on window boundaries. We next present two criteria for testing bit convergence based on cross-entropy, and propose a windowed partial iteration algorithm. The overall complexity and memory requirements of the new algorithm are evaluated and compared with known algorithms. Simulations reveal that the proposed scheme suffers essentially no performance loss compared to full iterations, while reducing the decoding complexity. We also briefly discuss possible extensions of the proposed scheme to general iterative receivers.
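The classical cross-entropy test that such bit-level criteria build on can be sketched as follows (a Hagenauer-style approximation of the cross-entropy between consecutive iterations; the argument names are assumptions):

```python
import numpy as np

def cross_entropy_stat(le_curr, le_prev, l_app):
    # Approximate cross-entropy between iterations i-1 and i, computed
    # from the change in extrinsic values and the a-posteriori LLRs.
    return float(np.sum((le_curr - le_prev) ** 2 / np.exp(np.abs(l_app))))
```

A common frame-level stopping rule halts once this statistic falls several orders of magnitude below its value at the first iteration, e.g. T(i) < 10^-3 * T(1); the windowed scheme above instead applies a cross-entropy test at the bit level.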
2015
Unary Error Correction (UEC) codes constitute a recently proposed Joint Source and Channel Code (JSCC) family, conceived for alphabets having an infinite cardinality, while outperforming previously used Separate Source and Channel Codes (SSCCs). UEC-based schemes rely on an iterative decoding process, which involves three decoding blocks when concatenated with a turbo code. Owing to this, following the activation of one of the three blocks, the next block to be activated must be chosen from the other two. Furthermore, the UEC decoder offers a number of decoding options, allowing its complexity and error-correction capability to be dynamically adjusted. It has been shown that iterative decoding convergence can be expedited by activating the specific decoding option that offers the highest ratio of Mutual Information (MI) improvement to computational complexity. This paper introduces an iterative demodulator, which is shown to improve the associated error-correction performance while reducing the overall iterative decoding complexity. The challenge is that the iterative demodulator has to forward its soft information to the other two iterative decoding blocks, and hence the corresponding MI improvements cannot be compared on a like-for-like basis. Additionally, we propose a method of eliminating the logarithmic calculations from the adaptive iterative decoding algorithm, further reducing its implementation complexity without impacting its error-correction performance.
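The activation rule described above reduces to a ratio test. A minimal sketch (illustrative names; each candidate is a (delta_MI, complexity) pair for a legal next activation):

```python
def select_next_block(candidates):
    # Activate the block/option with the best mutual-information gain
    # per unit of computational complexity.
    return max(range(len(candidates)),
               key=lambda i: candidates[i][0] / candidates[i][1])
```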
In this paper, the sub-optimality of iterative decoding of BCH product codes, also called Block Turbo Codes (BTC), is investigated. Lower bounds on Maximum Likelihood (ML) decoding performance for Packet Error Rate (PER) and Bit Error Rate (BER) are given in order to evaluate the optimality of the iterative decoding algorithm. On an AWGN (Additive White Gaussian Noise) channel, simulations show that turbo decoding of product codes is sub-optimal for long codes, even when the elementary codes are decoded optimally. We propose a scheme, applied after turbo decoding, to combat this sub-optimality. The scheme is described, and the performance gain is then evaluated on Gaussian and Rayleigh channels.
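The standard simulation technique for such ML lower bounds can be sketched as follows (variable names are illustrative, and this is the generic method stated for an AWGN channel, not necessarily the paper's exact procedure):

```python
import numpy as np

def ml_error_event(r, x_tx, x_dec):
    # If the decoder outputs a wrong codeword that is at least as close
    # to the received word r as the transmitted one x_tx, a true ML
    # decoder would also have erred on this frame; counting such events
    # over many frames lower-bounds the ML packet error rate.
    wrong = not np.array_equal(x_dec, x_tx)
    closer = np.sum((r - x_dec) ** 2) <= np.sum((r - x_tx) ** 2)
    return wrong and closer
```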
2013
In this paper we present a study of the impact of connection schemes on the performance of iterative decoding of Generalized Parallel Concatenated Block (GPCB) codes constructed from one-step majority-logic decodable (OSMLD) codes, and we propose a new connection scheme for decoding them. All iterative decoding connection schemes use a soft-input soft-output threshold decoding algorithm as the component decoder. Numerical results for GPCB codes transmitted over the Additive White Gaussian Noise (AWGN) channel are provided. They show that the proposed scheme outperforms Hagenauer's scheme and Lucas's scheme [1], and is slightly better than Pyndiah's scheme.
IEEE Transactions on Information Theory, 2005
Asymptotic iterative decoding performance is analyzed for several classes of iteratively decodable codes when the block length of the codes and the number of iterations go to infinity. Three classes of codes are considered: Gallager's regular low-density parity-check (LDPC) codes, Tanner's generalized LDPC (GLDPC) codes, and the turbo codes due to Berrou et al. It is proved that there exist codes in these classes, and iterative decoding algorithms for these codes, for which not only the bit error probability P_b but also the block (frame) error probability P_B goes to zero as the block length and the number of iterations go to infinity.
The IMA Volumes in Mathematics and its Applications, 2001
Several popular, suboptimal algorithms for bit decoding of binary block codes, such as turbo decoding, threshold decoding, and message passing for LDPC codes, were developed almost as a common-sense approach to decoding of some specially designed codes. After their introduction, these algorithms were studied with mathematical tools pertinent more to computer science than to conventional algebraic coding theory. We give an algebraic description of the optimal and suboptimal bit decoders and of optimal and suboptimal message passing. We explain exactly how the suboptimal algorithms approximate the optimal ones, and show how good these approximations are in some special cases.
Electronics Letters, 2005
We describe a turbo-like iterative decoding scheme for analog product codes and prove that the iterative decoding method is an iterative projection in Euclidean space that converges to the least-squares solution. Using this geometric point of view, any block analog code can be decoded by a similar iterative method. The described procedure may serve as a step towards a more intuitive understanding of turbo decoding.
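The geometric statement can be reproduced almost literally in code: a valid analog product codeword has every row in the row-code subspace and every column in the column-code subspace, and alternating orthogonal projections onto these two subspaces converge to the least-squares codeword. A minimal sketch (generator-matrix names assumed):

```python
import numpy as np

def row_space_projector(G):
    # Orthogonal projector onto the row space of generator matrix G
    return np.linalg.pinv(G) @ G

def decode_analog_product(R, G_row, G_col, iters=100):
    P_row = row_space_projector(G_row)
    P_col = row_space_projector(G_col)   # symmetric, so it also projects columns
    X = R.copy()
    for _ in range(iters):
        X = X @ P_row        # project every row onto the row code
        X = P_col @ X        # project every column onto the column code
    return X
```

Since both constraint sets are linear subspaces of the matrix space, this is von Neumann alternating projection, which converges to the projection of R onto their intersection, i.e. the least-squares codeword.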