2011, 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Iterative decoding is considered in this paper from an optimization point of view. Starting from optimal maximum-likelihood decoding, a tractable approximate criterion is derived. The global maximum of the approximate criterion is analyzed: in some particular cases, the maximum-likelihood solution can be retrieved from the approximate criterion. The classical equations of turbo decoders can be obtained as an instance of a hybrid Jacobi/Gauss-Seidel implementation of the iterative maximization of the tractable criterion, and the extrinsics are a natural consequence of this implementation. In the simulation part, we show a practical application of these results.
Journal of Advanced College of Engineering and Management, 2018
This paper presents a thesis consisting of a study of turbo codes as an error-control code and the software implementation of two different decoders, namely the maximum a posteriori (MAP) and soft-output Viterbi algorithm (SOVA) decoders. Turbo codes were introduced in 1993 by Berrou et al. [2] and are perhaps the most exciting and potentially important development in coding theory in recent years. They achieve near-Shannon-limit error-correction performance with relatively simple component codes and large interleavers. They are constructed by concatenating at least two component codes in parallel, separated by an interleaver. The component convolutional codes can achieve very good results. For a concatenated scheme such as a turbo code to work properly, the decoding algorithm must effect an exchange of soft information between the component decoders. The concept behind turbo decoding is to pass soft information from the output of one decoder to the input of the succeeding one, and to iterate this process several times to produce better decisions. Turbo codes are still in the process of standardization, but future applications will include mobile communication systems, deep-space communications, telemetry and multimedia. Finally, we compare the two algorithms in terms of complexity and performance.
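To make the extrinsic-information exchange described above concrete, the following minimal sketch shows the structure of the iterative loop between two constituent decoders. The function siso_decode is a hypothetical placeholder for a real MAP (BCJR) or SOVA constituent decoder; only the interface (a-priori in, extrinsic out) and the iteration schedule reflect the turbo principle, not the internal computation.

```python
import numpy as np

def siso_decode(sys_llr, par_llr, apriori_llr):
    """Placeholder soft-in/soft-out constituent decoder.

    A real implementation would run MAP (BCJR) or SOVA over the constituent
    trellis; here we only model the interface: a-posteriori LLRs are formed
    and the extrinsic part is what gets passed to the other decoder."""
    a_post = sys_llr + apriori_llr + 0.5 * par_llr   # toy combination, not BCJR
    return a_post - sys_llr - apriori_llr            # extrinsic information only

def turbo_decode(sys_llr, par1_llr, par2_llr, perm, n_iter=8):
    """Schematic turbo-decoding loop: decoders exchange extrinsic LLRs only."""
    inv_perm = np.argsort(perm)                      # inverse interleaver
    ext_2to1 = np.zeros_like(sys_llr)
    for _ in range(n_iter):
        ext_1to2 = siso_decode(sys_llr, par1_llr, ext_2to1)
        ext_i = siso_decode(sys_llr[perm], par2_llr, ext_1to2[perm])
        ext_2to1 = ext_i[inv_perm]
    return sys_llr + ext_1to2 + ext_2to1             # final a-posteriori LLRs
```

A hard decision is then taken from the sign of the returned LLRs; iterating typically sharpens these decisions from one pass to the next.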
JPL TDA Progress …, 1996
2009 IEEE International Conference on Acoustics, Speech and Signal Processing, 2009
Iterative decoding was not originally introduced as the solution to an optimization problem, which makes the analysis of its convergence very difficult. In this paper, we investigate the link between iterative decoding and classical optimization techniques. We first show that iterative decoding can be rephrased as two embedded minimization processes involving the Fermi-Dirac distance. Based on this new formulation, a hybrid proximal point algorithm is first derived, with the additional advantage of decreasing a desired criterion. In a second part, a hybrid minimum-entropy algorithm is proposed with improved performance compared to classical iterative decoding. Although this paper focuses on iterative decoding for BICM, the results apply to the larger class of turbo-like decoders.
2010
Turbo coding is among the most widely used error-correcting schemes in wireless systems, offering high coding gain. In this paper, a comparative study of the symbol-by-symbol maximum a posteriori (MAP) algorithm and its logarithmic versions, namely the Log-MAP and Max-Log-MAP decoding algorithms used in SISO turbo decoders, is carried out. The performance of the turbo decoding algorithms is evaluated in terms of bit error rate (BER) by varying parameters such as frame size, number of iterations and choice of interleaver. Keywords: iterative decoding; MAP decoding; turbo codes.
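The difference between the Log-MAP and Max-Log-MAP algorithms compared above boils down to one elementary operation, the Jacobian logarithm (max*). A minimal sketch of the two variants:

```python
import numpy as np

def max_star(a, b):
    """Jacobian logarithm ln(e^a + e^b), the core operation of Log-MAP."""
    return np.maximum(a, b) + np.log1p(np.exp(-np.abs(a - b)))

def max_star_approx(a, b):
    """Max-Log-MAP drops the correction term, losing at most ln(2) per operation."""
    return np.maximum(a, b)
```

Log-MAP applies the exact recursion at the cost of evaluating the correction term (usually via a small lookup table), while Max-Log-MAP trades a fraction of a dB of coding gain for a decoder built from additions and comparisons only.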
International Journal of Computer Applications, 2012
Turbo codes are a family of forward error-correcting codes whose performance approaches the Shannon limit. Turbo decoding is based on the maximum a posteriori (MAP) algorithm. In this paper, the problem of turbo decoding over an ISI channel is studied. A super-trellis structure is presented and a modified turbo decoding scheme is suggested. Two methods are proposed for turbo decoding over the ISI channel: in the first method, all possible combinations of the output of encoder 2 are considered, while in the second, the output of each encoder is passed through the channel filter independently. Method 2 performs better than method 1 but requires more bandwidth. The improvement in performance is demonstrated through simulations.
Implementing an iterative decoder for turbo codes is a demanding task, and several algorithms have been proposed to facilitate such implementations. This paper examines the implementation of an iterative decoder for turbo codes using the Max-Log-MAP algorithm and the fully parallel turbo decoding (FPTD) algorithm. Whereas the Max-Log-MAP algorithm processes the turbo-encoded bits in a serial forward-backward manner, the FPTD algorithm operates in a fully parallel fashion, processing all bits in both components of the turbo code at the same time. The FPTD algorithm is compatible with all turbo codes, including those of the LTE and WiMAX standards. The BER performance of the two algorithms is compared.
In this paper, the sub-optimality of iterative decoding of BCH product codes, also called block turbo codes (BTC), is investigated. Lower bounds on maximum-likelihood (ML) decoding performance, in terms of packet error rate (PER) and bit error rate (BER), are given in order to evaluate the optimality of the iterative decoding algorithm. On an AWGN (additive white Gaussian noise) channel, simulations show that turbo decoding of product codes is sub-optimal for long codes, even when the elementary codes are decoded optimally. We propose a scheme, applied after turbo decoding, to combat this sub-optimality. The scheme is described, and the performance gain is then evaluated on Gaussian and Rayleigh channels.
Electronics Letters, 2005
We describe a turbo-like iterative decoding scheme for analog product codes and prove that the iterative decoding method is an iterative projection in Euclidean space that converges to the least-squares solution. Using this geometric point of view, any block analog code can be decoded by a similar iterative method. The described procedure may serve as a step towards a more intuitive understanding of turbo decoding.
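The geometric picture above corresponds to alternating orthogonal projections onto the row-code and column-code subspaces, which converge to the projection onto their intersection, i.e. the least-squares codeword. A minimal sketch under that interpretation (the generator matrices G_row and G_col and the iteration count are illustrative assumptions, not the paper's notation):

```python
import numpy as np

def projector(G):
    """Orthogonal projector onto the column space of generator matrix G."""
    return G @ np.linalg.pinv(G)

def decode_analog_product(R, G_row, G_col, n_iter=50):
    """Alternately project each row of the received matrix onto the row code
    and each column onto the column code; the iterates converge to the
    least-squares codeword of the analog product code."""
    P_row, P_col = projector(G_row), projector(G_col)
    X = R.copy()
    for _ in range(n_iter):
        X = X @ P_row      # project rows (P_row is symmetric)
        X = P_col @ X      # project columns
    return X
```

This is von Neumann's alternating-projection scheme applied to two linear subspaces, which is why convergence to the least-squares solution is guaranteed.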
2002 IEEE International Conference on Communications. Conference Proceedings. ICC 2002 (Cat. No.02CH37333), 2002
In this paper, the use of a reliability-based decoding algorithm for some concatenated codes with an interleaver, known as turbo-like codes, is examined to address and overcome the suboptimality of iterative decoding. Simulation results show that the suboptimality of iterative decoding for moderate-length codes can be at least partially compensated by this combined approach. Some insights into the potential additional coding gains achievable by the combined approach are developed based on the characteristics of the constituent decoders, which highlights the nature of the suboptimality of iterative decoding.
IEEE Transactions on Communications, 2000
Recently, noncoherent sequence detection schemes for coded linear and continuous phase modulations have been proposed, which deliver hard decisions by means of a Viterbi algorithm. The current trend in digital transmission systems toward iterative decoding algorithms motivates an extension of these schemes. In this paper, we propose two noncoherent soft-output decoding algorithms. The first solution has a structure similar to that of the well-known algorithm by Bahl et al. (BCJR), whereas the second is based on noncoherent sequence detection and a reduced-state soft-output Viterbi algorithm.
1999
In this paper, we address the use of the extrinsic information generated by each component decoder in an iterative decoding process. The algorithm proposed by Bahl et al. (BCJR) and the soft-output Viterbi algorithm (SOVA) are considered as component decoders. Numerical results for a classical rate-1/2 turbo code transmitted over a memoryless additive white Gaussian noise (AWGN) channel are provided.
2009 IEEE International Conference on Communications, 2009
In this paper, the issue of improving the performance of iterative decoders based on sub-optimal calculation of the messages exchanged during iterations (L-values) is addressed. It is well known in the literature that a simple, yet very effective, way to improve the performance of suboptimal iterative decoders is to apply a scaling factor to the L-values. In this paper, starting with a theoretical model based on the so-called consistency condition of a random variable, we propose a methodology for correcting the L-values that relies only on the distribution of the soft information exchanged in the iterative process. This methodology gives a clear explanation of why the well-known linear scaling factor provides very good performance. Additionally, the proposed methodology allows us to avoid the exhaustive search otherwise required. Numerical simulations show that for turbo codes the scaling factors found closely follow the optimum values, which translates into close-to-optimal BER performance. Moreover, for LDPC codes, the proposed methodology produces better BER performance than the known method in the literature.
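As an illustration of the idea of correcting L-values via the consistency condition, the sketch below estimates a scaling factor under a Gaussian assumption, for which consistency reduces to "conditional mean equals half the conditional variance". This is a simplified stand-in for the paper's methodology, and it assumes the transmitted bits are known, as in an offline simulation.

```python
import numpy as np

def consistency_scaling(l_ext, tx_bits):
    """Estimate a corrective scaling factor for extrinsic L-values (sketch).

    Under a Gaussian assumption, a consistent LLR distribution satisfies
    mean = variance / 2 (conditioned on the transmitted bit), so scaling
    by s = 2 * mean / variance restores consistency."""
    signed = l_ext * (2 * tx_bits - 1)        # fold onto the "+1 transmitted" side
    return 2.0 * signed.mean() / signed.var()

# Inside the iterative decoder one would then pass s * l_ext (instead of l_ext)
# to the next constituent decoder, in place of a hand-tuned scaling factor.
```

Because the estimate depends only on the empirical distribution of the exchanged soft information, it avoids the exhaustive search over candidate scaling factors mentioned in the abstract.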
IEEE Transactions on Communications, 1999
This paper presents two simple and effective criteria for stopping the iteration process in turbo decoding with negligible degradation of the error performance. Both criteria are derived from the cross-entropy (CE) concept. They are as efficient as the CE criterion, but require far fewer and simpler computations.
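To illustrate the flavour of such simplified stopping rules, the sketch below implements a sign-change-ratio style test: iterations stop once the extrinsic LLRs have essentially stopped changing sign between passes. The specific threshold and the exact form are illustrative assumptions; the two criteria defined in the paper may differ in detail.

```python
import numpy as np

def stop_sign_change_ratio(ext_prev, ext_curr, threshold=0.005):
    """Illustrative stopping test for iterative turbo decoding.

    Stop once the fraction of information bits whose extrinsic LLR changes
    sign between two consecutive iterations falls below a small threshold
    (0.005 here is an assumed tuning constant, not a value from the paper)."""
    changes = np.count_nonzero(np.sign(ext_prev) != np.sign(ext_curr))
    return changes <= threshold * ext_curr.size
```

Such tests need only sign comparisons and a counter per half-iteration, which is why they are much cheaper than evaluating the cross-entropy itself.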
2006 IEEE Workshop on Signal Processing Systems Design and Implementation, 2006
Two novel stopping criteria for iterative decoding of turbo codes are introduced in this paper. The proposed criteria are shown to substantially reduce the required computational complexity, while the achieved bit-error rate is not significantly affected, compared to results based on previously published stopping criteria. Computational complexity reduction is achieved by reducing the number of iterations. Further complexity reduction is achieved by reducing the volume of the data on which the criteria operate. The proposed criteria are shown to reduce the required computational complexity by a factor of ten for cases of practical interest. Two solutions are presented that allow the exploitation of a performance-complexity trade-off.
IEEE Transactions on Instrumentation and Measurement, 2000
This paper proposes an improved Max-Log maximum a posteriori (MAP) algorithm for turbo decoding and turbo equalization. The proposed algorithm uses the Maclaurin series to expand the logarithmic term in the Jacobian logarithm of the Log-MAP algorithm. In terms of complexity, the proposed algorithm can easily be implemented by means of adders and comparators, as is the case for the Max-Log-MAP algorithm. In addition, simulation results show that the proposed algorithm performs very close to the Log-MAP algorithm for both turbo decoding over additive white Gaussian noise channels and turbo equalization over frequency-selective channels. Further, it is shown that even in a high-loss intersymbol-interference channel the proposed algorithm keeps its performance close to that of the Log-MAP algorithm, while there is a wide gap between the performance of the Log-MAP and Max-Log-MAP turbo equalizers. Index Terms: maximum a posteriori (MAP) and Log-MAP algorithms, Max-Log-MAP algorithm, turbo decoding, turbo equalization. I. INTRODUCTION. Applying the turbo principle to the decoding of parallel concatenated codes was first introduced in [2]. Later, a flurry of research in related topics produced different algorithms that follow turbo-decoding approaches to provide a similar gain in performance. Turbo equalization employs the turbo principle for joint equalization and decoding over frequency-selective channels, which produce intersymbol interference (ISI). In [3], the turbo scheme using the soft-output Viterbi algorithm was applied to the detection and decoding of a recursive systematic convolutional (RSC) code over a delay-dispersive channel. Turbo equalization exploiting the maximum a posteriori (MAP)/Bahl-Cocke-Jelinek-Raviv algorithm was first introduced in [4]. It is known that for equalization and decoding, the optimal soft-in/soft-out (SISO) algorithm is a symbol-by-symbol algorithm in the sense of minimum bit error rate [1]. In practice, this algorithm is implemented in the logarithmic domain in
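The following sketch shows one way a first-order Maclaurin expansion can replace the exact Log-MAP correction term while keeping an adder/comparator-only implementation. The specific piecewise-linear form (ln 2 - x/2, clipped at zero) is an illustrative first-order expansion; the expansion order and breakpoints used in the paper may differ.

```python
import numpy as np

def correction_logmap(x):
    """Exact Log-MAP correction term ln(1 + exp(-|x|))."""
    return np.log1p(np.exp(-np.abs(x)))

def correction_maclaurin(x):
    """First-order Maclaurin expansion of the correction term around x = 0:
    ln(1 + exp(-x)) ~= ln(2) - x/2, clipped at zero so it vanishes for
    large |x| (illustrative form, not necessarily the paper's exact one)."""
    return np.maximum(np.log(2.0) - 0.5 * np.abs(x), 0.0)

def max_star_improved(a, b):
    """max* with the Maclaurin-based correction: a comparison, a constant
    addition and a shift, so no lookup table or exponential is needed."""
    return np.maximum(a, b) + correction_maclaurin(a - b)
```

Replacing max* in the forward-backward recursions with max_star_improved yields a decoder whose per-step cost matches Max-Log-MAP while its BER approaches that of Log-MAP.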
IEEE Transactions on Signal Processing, 2012
Iterative processing is widely adopted in modern wireless receivers for advanced channel codes such as turbo and LDPC codes. Extending this principle with an additional iterative feedback loop to the demapping function has been shown to provide a substantial error-performance gain. However, the adoption of iterative demodulation with turbo decoding is constrained by the additional implementation complexity, which heavily impacts latency and power consumption. In this paper, we analyze the convergence speed of these two combined iterative processes in order to determine the exact number of iterations required at each level. Extrinsic information transfer (EXIT) charts are used for a thorough analysis at different modulation orders and code rates. An original iteration scheduling is proposed that saves two demapping iterations with a reasonable performance loss of less than 0.15 dB. Analyzing and normalizing the computational and memory-access complexity, which directly impact latency and power consumption, demonstrates the considerable gains of the proposed scheduling and the promise of the proposed analysis.
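The quantity tracked on both axes of an EXIT chart is the mutual information between the transmitted bits and the exchanged L-values. A common time-average estimator, valid for (approximately) consistent LLRs and applicable to the kind of convergence analysis described above, is sketched below; it is a standard EXIT-chart measurement, not something specific to this paper's scheduling.

```python
import numpy as np

def mutual_information(l_values, tx_bits):
    """Estimate I(b; L) from simulated LLRs and the known transmitted bits:
    I = 1 - E[ log2(1 + exp(-b * L)) ], with b in {-1, +1}."""
    signed = l_values * (2 * tx_bits - 1)
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-signed)))
```

Measuring this quantity at the demapper output and the decoder output after each (half-)iteration produces the trajectories from which an iteration schedule can be read off.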
Lecture Notes in Electrical Engineering, 2009
IEEE Journal on Selected Areas in Communications, 2001
Iterative decoding is used to achieve backward-compatible performance improvements in several existing systems. Concatenated coding and iterative decoding are first set up using composite mappings, so that various applications in digital communication and recording can be described in a concise and uniform manner. An ambiguity zone detection (AZD) based iterative decoder, operating on generalized erasures, is described as an alternative for concatenated systems where turbo decoding cannot be performed. The described iterative decoding techniques are then applied to selected wireless communication and digital recording systems. Simulation results and the utilization of decoding gains are briefly discussed.
2006
High-throughput decoding of turbo codes can be achieved through parallel decoding. However, for finite block sizes, the initialisation duration of each half-iteration reduces the activity of the processing units, especially for higher degrees of parallelism. To solve this issue, a new decoding scheduling is proposed, with partial overlapping of the processing of two successive half-iterations. Potential memory conflicts introduced by this new scheduling are solved by a constrained interleaver design. An example application of the proposed technique shows that the complexity of the decoder is reduced by 25% compared to a conventional approach.