2013, Laouini Nassib, Ben Hadj Slama Larbi & Bouallegue Ammar
Low Density Parity Check (LDPC) codes approach Shannon-limit performance for the binary field and long code lengths when decoded with the belief-propagation (BP), or Sum-Product, algorithm. However, the performance of binary LDPC codes degrades when the codeword length is small. Layered decoding is known to provide efficient, high-throughput implementations of LDPC decoders. The Variable-Node Layered Belief Propagation (VL-BP) algorithm is a modification of the BP algorithm in which the variable nodes are divided into subgroups called layers and each iteration is broken into multiple sub-iterations. The Min-Sum VL-BP (MS VL-BP) algorithm is an approximation of the VL-BP algorithm in which the check-node update is replaced by a selection of the minimum input value. An optimized MS VL-BP algorithm for LDPC codes is proposed in this paper. In this algorithm, unlike other decoding methods, the first layer is assigned a set of variable nodes with low values of intrinsic information. An optimization factor is introduced in the check-node update rule for each sub-iteration of the proposed algorithm. Simulation results show that the proposed Optimized MS VL-BP decoding algorithm performs very close to Sum-Product decoding while preserving the main features of Min-Sum VL-BP decoding, namely low complexity and independence from noise-variance estimation errors.
Advances in Electrical and Computer Engineering, 2013
The paper proposes a low-complexity belief propagation (BP) based decoding algorithm for LDPC codes. In spite of the iterative nature of the decoding process, the proposed algorithm provides both reduced complexity and improved BER performance compared with the classic min-sum (MS) algorithm generally used for hardware implementations. Linear approximations of the check-node update function are used in order to reduce the complexity of the BP algorithm. Considering this decoding approach, an FPGA-based hardware architecture is proposed for implementing the decoding algorithm, aiming to increase the decoder throughput. FPGA technology was chosen for the LDPC decoder implementation due to its parallel computation and reconfiguration capabilities. The obtained results show improvements in decoding throughput and BER performance compared with state-of-the-art approaches.
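As a hedged illustration of what such a linear approximation of the check-node update can look like (a sketch only — the paper's specific approximation, coefficients, and function names are not given here), the correction term log(1 + e^-x) appearing in the pairwise (box-plus) check-node rule can be replaced by a clipped linear function:

```python
import math

def correction_exact(x):
    # Correction term log(1 + e^-x) used in the pairwise box-plus
    # check-node update (assumes x >= 0, i.e. a magnitude).
    return math.log1p(math.exp(-x))

def correction_linear(x, a=0.6931, b=0.25):
    # Clipped linear approximation max(a - b*x, 0); a is chosen close to
    # log(2) so the approximation matches at x = 0. The slope b here is
    # an illustrative choice, not a value from the paper.
    return max(a - b * x, 0.0)
```

The linear form needs only one multiply, one subtract, and a clamp, which is why such approximations map well onto FPGA logic.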
IEEE Transactions on Communications, 2005
Various log-likelihood-ratio-based belief-propagation (LLR-BP) decoding algorithms and their reduced-complexity derivatives for LDPC codes are presented. Numerically accurate representations of the check-node update computation used in LLR-BP decoding are described. Furthermore, approximate representations of the decoding computations are shown to achieve a reduction in complexity by simplifying the check-node update, the symbol-node update, or both. In particular, two main approaches for simplified check-node updates are presented that are based on the so-called min-sum approximation coupled with either a normalization term or an additive offset term. Density evolution (DE) is used to analyze the performance of these decoding algorithms, to determine the optimum values of the key parameters, and to evaluate finite quantization effects. Simulation results show that these reduced-complexity decoding algorithms for LDPC codes achieve a performance very close to that of the BP algorithm. The unified treatment of decoding techniques for LDPC codes presented here provides flexibility in selecting the appropriate scheme from a performance, latency, computational-complexity, and memory-requirement perspective.
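The two simplified check-node updates named above can be sketched as follows. This is a minimal illustration, not code from the paper; the function name and the default factor values are assumptions:

```python
def min_sum_update(llrs, alpha=1.0, beta=0.0):
    """Check-to-variable messages for one check node under min-sum.

    llrs  : incoming variable-to-check LLRs on the check node's edges
    alpha : normalization factor (alpha < 1 gives normalized min-sum)
    beta  : additive offset (beta > 0 gives offset min-sum)
    """
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]        # exclude the target edge
        sign = 1.0
        for v in others:
            sign *= 1.0 if v >= 0 else -1.0     # product of signs
        mag = min(abs(v) for v in others)       # min-sum magnitude
        mag = max(alpha * mag - beta, 0.0)      # normalization / offset
        out.append(sign * mag)
    return out
```

With alpha = 1 and beta = 0 this reduces to plain min-sum; the two correction styles shrink the (always overestimated) min-sum magnitude toward the exact box-plus value.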
Journal of Communications, 2010
In this paper, we first propose a Parallel-Layered Belief-Propagation (PLBP) algorithm, which makes a breakthrough in applying the layered decoding algorithm to "non-layered" quasi-cyclic (QC) LDPC codes, whose column weights within layers are higher than one. The proposed PLBP algorithm not only achieves better error performance but also requires almost 50% fewer iterations than the original flooding algorithm. We then propose a low-power partially parallel decoder architecture based on the PLBP algorithm, which requires less area and is more energy-efficient than other existing decoders. As a case study, a multi-rate 9216-bit LDPC decoder is implemented in SMIC 0.13 µm 1P6M CMOS technology. The decoder dissipates an average power of 87 mW with 10 iterations at a clock frequency of 83.3 MHz. The chip core size is 7.59 mm², and the die area occupies 10.82 mm².
IET Communications, 2012
In this paper, we propose an improved version of the min-sum algorithm for low density parity check (LDPC) code decoding, which we call "adaptive normalized BP-based" algorithm. Our decoder provides a compromise solution between the belief propagation and the min-sum algorithms by adding an exponent offset to each variable node's intrinsic information in the check node update equation. The extrinsic information from the min-sum decoder is then adjusted by applying a negative power of two scale factor, which can be easily implemented by right shifting the min-sum extrinsic information. The difference between our approach and other adaptive normalized min-sum decoders is that we select the normalization scale factor using a clear analytical approach based on underlying principles. Simulation results show that the proposed decoder outperforms the min-sum decoder and performs very close to the BP decoder, but with lower complexity.
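The right-shift realization of the negative-power-of-two scale factor mentioned above can be sketched as follows; the function name and the default shift amount are illustrative, not from the paper:

```python
def scale_extrinsic(msg_fixed, k=1):
    """Scale a signed fixed-point min-sum message by 2**-k via a right shift.

    Python's >> on ints is an arithmetic shift, so the sign bit is
    preserved, mirroring a hardware shifter acting on two's-complement
    values. Note the result rounds toward negative infinity.
    """
    return msg_fixed >> k
```

This is why a power-of-two normalization factor is attractive in hardware: the "multiplication" costs no logic beyond rewiring.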
2007
Low-Density Parity-Check codes deliver very good performance when decoded with belief propagation, also known as the sum-product algorithm. Most of the complexity in the belief propagation algorithm rests in the check-node processor. This paper presents an overview of the approaches used for reducing the number of active nodes that participate in the decoding iteration loop. A novel approach for deciding the initial threshold for optimal elimination of nodes with a high confidence level is also proposed. Merits of the proposed algorithm, termed the variable offset constant slope algorithm, are discussed.
IIUM Engineering Journal
For the binary field and long code lengths, Low Density Parity Check (LDPC) codes approach Shannon-limit performance. LDPC codes provide remarkable error-correction performance and therefore enlarge the design space for communication systems. In this paper, we compare different digital modulation techniques and find that BPSK performs better than the other modulation techniques in terms of BER. The paper also gives the error performance of an LDPC decoder over the AWGN channel using the Min-Sum algorithm. A VLSI architecture is proposed which uses the value-reuse property of the min-sum algorithm and gives high throughput. The proposed work has been implemented and tested on a Xilinx Virtex 5 FPGA. The MATLAB result of the LDPC decoder gives a bit error rate (BER) in the range of 10^-1 to 10^-3.5 at SNR = 1 to 2 for 20 iterations, i.e., good BER performance. The latency of the parallel design of the LDPC decoder has also been reduced. It has achieved a maximum frequency of 141.22 MHz.
The Journal of Korean Institute of Communications and Information Sciences, 2005
In this paper, we propose a new sequential message-passing decoding algorithm for low-density parity-check (LDPC) codes that partitions the check nodes. This new decoding algorithm shows better bit error rate (BER) performance than the conventional message-passing decoding algorithm, especially for a small number of iterations. Analytical results show that as the number of partitioned subsets of check nodes increases, the BER performance improves. We also derive the recursive equations for the mean values of messages at variable nodes by using density evolution with a Gaussian approximation. Simulation results confirm the analytical results.
GLOBECOM '05. IEEE Global Telecommunications Conference, 2005., 2005
A two-dimensional post-normalization scheme is proposed to improve the performance of conventional min-sum (MS) and normalized MS decoding of irregular low density parity check codes. An iterative procedure based on a parallel differential algorithm is presented to obtain the optimal two-dimensional normalization factors. Both density evolution analysis and simulation of specific codes show that the proposed method provides performance comparable to belief propagation decoding while requiring less complexity. Interestingly, the new method exhibits a lower error floor than belief propagation decoding in the high-SNR region. With respect to standard MS and one-dimensional normalized MS decoding, the two-dimensional normalized MS offers considerably better performance.
IEEE Transactions on Communications, 2000
In this paper we investigate the performance of the belief propagation (BP) algorithm for decoding low density parity check (LDPC) codes over the additive white Gaussian noise (AWGN) channel when there is an incorrect estimate of the channel signal-to-noise ratio (SNR), referred to as SNR mismatch, at the decoder. We propose a computationally efficient method based on the Gaussian approximation of density evolution to compute the threshold values for regular and irregular codes in the presence of SNR mismatch and show that these values are consistent with simulation results for codes with finite block lengths. At the extremes of over- and underestimation of SNR, the performance of BP tends to that of the min-sum (MS) algorithm and to the channel bit error rate, respectively. Our results for regular codes indicate that the sensitivity to mismatch increases with increasing variable-node degree and with decreasing check-node degree. The effect of the variable-node degree, however, appears to be more profound, such that at a given rate, the codes with the smallest variable and check degrees are more robust against SNR mismatch. For irregular codes, by comparing the thresholds of a few ensembles, we demonstrate that an ensemble which performs better in the absence of mismatch can perform worse in its presence.
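The reason min-sum appears at the SNR-overestimation extreme can be sketched numerically: for BPSK over AWGN the channel LLR is 2r/σ², so a wrong σ scales every LLR by one common positive factor, and the min-sum check update (signs plus a minimum) is invariant under that scaling while BP's tanh rule is not. The values below are illustrative:

```python
import math

def channel_llr(r, sigma):
    # BPSK-over-AWGN channel LLR; a mis-estimated sigma rescales all
    # LLRs by the same positive factor.
    return 2.0 * r / sigma**2

def min_sum_msg(in_llrs):
    # Min-sum check-node output: product of signs times minimum magnitude,
    # both invariant under a common positive scaling of the inputs.
    sign = math.prod(1 if v >= 0 else -1 for v in in_llrs)
    return sign * min(abs(v) for v in in_llrs)

received = [0.9, -1.1, 1.3]
matched = min_sum_msg([channel_llr(r, 1.0) for r in received])     # true sigma
mismatched = min_sum_msg([channel_llr(r, 2.0) for r in received])  # sigma off 2x
# mismatched is just matched / 4: sign and hard decision are unchanged,
# so min-sum hard decisions do not depend on the sigma estimate at all.
```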
Electronics Letters, 2008
A novel dual-min-sum decoding algorithm for low-density parity-check codes is proposed. The proposed algorithm simplifies the check-node updates and thus significantly reduces the computational complexity of the belief propagation (BP) algorithm. Simulation results show that the proposed algorithm can achieve an error performance very close to that of the BP algorithm.
EURASIP Journal on Wireless Communications and Networking, 2014
A word error rate (WER) reducing approach for a hybrid iterative error and erasure decoding algorithm for low-density parity-check codes is described. A lower WER is achieved when the maximum number of iterations of the min-sum belief propagation decoder stage is set to certain specific values which are code dependent. By proper choice of decoder parameters, this approach reduces WER by about 2 orders of magnitude for an equivalent decoding complexity. Computer simulation results are given for the efficient use of this hybrid decoding technique in the presence of additive white Gaussian noise.
2007 IEEE International Conference on Signal Processing and Communications, 2007
The paper presents a novel approach to reduce the bit error rate (BER) in iterative belief propagation (BP) decoding of low density parity check (LDPC) codes. The behavior of the BP algorithm is first investigated as a function of the number of decoder iterations, and it is shown that typical uncorrected error patterns can be classified into three categories: oscillating, nearly constant, or random-like, with a predominance of oscillating patterns at high signal-to-noise ratio (SNR) values. A decoder modification is then introduced based on tracking the number of failed parity-check equations in the intermediate decoding iterations, rather than relying on the final decoder output (after reaching the maximum number of iterations). Simulation results with a rate-1/2 (1024,512) progressive edge-growth (PEG) LDPC code show that the proposed modification can decrease the BER by as much as 10 to 40%, particularly at high SNR values.
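The selection rule described above (keep the intermediate hard decision with the fewest failed parity checks instead of the final one) can be sketched as follows; the helper name and the matrix used are illustrative, not from the paper:

```python
import numpy as np

def best_intermediate(H, hard_decisions):
    """Pick the candidate bit vector with the fewest failed parity checks.

    H              : parity-check matrix with 0/1 entries
    hard_decisions : list of hard-decision bit vectors, one per iteration
    """
    # Syndrome weight = number of unsatisfied check equations.
    failed = [int(np.sum((H @ d) % 2)) for d in hard_decisions]
    return hard_decisions[int(np.argmin(failed))]
```

For an oscillating error pattern, the final iteration is often not the best one, which is exactly what this rule exploits.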
IEEE Access, 2019
Informed dynamic scheduling (IDS) strategies for decoding low-density parity-check codes achieve superior error-correction performance and convergence speed. However, two problems remain in current IDS algorithms. The first is that current IDS algorithms preferentially update the selected unreliable messages but do not guarantee that the updating is performed with reliable information. In this paper, a two-step message-selection strategy is introduced. On the basis of two reliability metrics and two types of variable-node residuals, a residual BP decoding algorithm, TRM-TVRBP for short, is proposed. With this algorithm, the reliability of the updating messages can be improved. The second is the greediness problem prevalent in IDS-like algorithms, which arises mainly from the fact that the major computing resources are allocated to, or concentrated on, certain nodes and edges. To overcome this problem, the reliability-metric-based RBP algorithm (RM-RBP) is proposed, which forces every variable node to contribute its intrinsic information to the iterative decoding. At the same time, the algorithm forces the related variable nodes to be updated and gives every edge an equal opportunity of being updated. Simulation results show that both TRM-TVRBP and RM-RBP have appealing convergence rates and error-correcting performance compared to previous IDS decoders over the additive white Gaussian noise (AWGN) channel. INDEX TERMS: Low-density parity-check (LDPC) codes, dynamic selection strategies, dynamic updating strategies, residuals of variable nodes.
IEEE Transactions on Communications, 2000
We propose an augmented belief propagation (BP) decoder for low-density parity check (LDPC) codes which can be utilized on memoryless or intersymbol interference channels. The proposed method is a heuristic algorithm that eliminates a large number of pseudocodewords that can cause nonconvergence in the BP decoder. The augmented decoder is a multistage iterative decoder, where, at each stage, the original channel messages on select symbol nodes are replaced by saturated messages. The key element of the proposed method is the symbol selection process, which is based on the appropriately defined subgraphs of the code graph and/or the reliability of the information received from the channel. We demonstrate by examples that this decoder can be implemented to achieve substantial gains (compared to the standard locally-operating BP decoder) for short LDPC codes decoded on both memoryless and intersymbol interference Gaussian channels. Using the Margulis code example, we also show that the augmented decoder reduces the error floors. Finally, we discuss types of BP decoding errors and relate them to the augmented BP decoder.
2012 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), 2012
Low-Density Parity-Check (LDPC) codes have received great attention recently because of their excellent error-correcting capability. A new method, based on the Min-Sum-Plus-Correction-Factor algorithm and using stopping nodes to reduce the computational complexity of LDPC decoding, is presented in this paper. The improved method shows how to make early decisions to reduce the computational complexity of the LDPC decoding algorithm in the next iteration under QPSK systems. Simulation results show that the computational complexity of LDPC decoding using the stopping-node criterion can be reduced by up to a factor of six while maintaining the quality level.
2015
Due to the increasing popularity of LDPC codes and the demand for future applications, LDPC coding techniques are systematically summarized and analyzed for the first time in this paper. The paper gives a comprehensive review of the LDPC encoder, decoder, and their architectures for simulation and implementation, and is specially intended to give an insight into the algorithmic overview of the LDPC encoder, decoder, and architecture for research and practical purposes. The original belief propagation algorithm (BPA), the logarithmic model of the BPA, and other simplified forms of the logarithmic sum-product algorithm (SPA) are elaborated and analyzed for medium- and short-length codes under the AWGN channel.
IEEE Transactions on Communications, 2002
In this paper, we propose a belief-propagation (BP)-based decoding algorithm which utilizes normalization to improve the accuracy of the soft values delivered by a previously proposed simplified BP-based algorithm. The normalization factors can be obtained not only by simulation but also, importantly, theoretically. This new BP-based algorithm is much simpler to implement than BP decoding, as it requires only additions of the normalized received values, and it is universal, i.e., the decoding is independent of the channel characteristics. Simulation results are given which show that this new decoding approach can achieve an error performance very close to that of BP on the additive white Gaussian noise channel, especially for low-density parity-check (LDPC) codes whose check sums have large weights. The principle of normalization can also be used to improve the performance of the Max-Log-MAP algorithm in turbo decoding, and some coding gain can be achieved if the code length is long enough.
IEEE Transactions on Communications, 2000
In this paper, we analyze the sequential message-passing decoding algorithm of low-density parity-check (LDPC) codes by partitioning check nodes. This decoding algorithm shows better bit error rate (BER) performance than the conventional message-passing decoding algorithm, especially for a small number of iterations. Analytical results indicate that as the number of partitioned subsets of check nodes increases, the BER performance improves. We also derive the recursive equations for the mean values of messages at check and variable nodes by using density evolution with a Gaussian approximation. From these equations, the mean values are obtained at each iteration of the sequential decoding algorithm and the corresponding BER values are calculated. They show that the sequential decoding algorithm converges faster than the conventional one. Finally, the analytical results are confirmed by simulation.
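A minimal sketch of the sequential schedule analyzed above: check nodes are partitioned into ordered subsets, and the a-posteriori LLRs are refreshed immediately after each subset rather than once per full iteration. The tiny parity-check matrix and the min-sum check update used here are illustrative, not the paper's code:

```python
import numpy as np

# Illustrative 3x6 parity-check matrix (not from the paper).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])

def sequential_min_sum(llr_ch, layers, n_iter=5):
    """Min-sum decoding with check nodes partitioned into ordered subsets.

    llr_ch : channel LLRs; layers : list of check-node index subsets.
    After each subset the a-posteriori LLRs are updated immediately,
    which is what makes the sequential schedule converge faster
    than the conventional (flooding) one.
    """
    llr = llr_ch.astype(float).copy()        # a-posteriori LLRs
    msg = np.zeros(H.shape, dtype=float)     # check-to-variable messages
    for _ in range(n_iter):
        for layer in layers:                 # one sub-iteration per subset
            for c in layer:
                cols = np.flatnonzero(H[c])
                vin = llr[cols] - msg[c, cols]          # variable-to-check
                for j, v in enumerate(cols):
                    others = np.delete(vin, j)
                    new = np.prod(np.sign(others)) * np.min(np.abs(others))
                    llr[v] += new - msg[c, v]           # immediate refresh
                    msg[c, v] = new
    return (llr < 0).astype(int)             # hard decision

# All-positive noisy LLRs for the all-zero codeword; two check-node subsets.
decoded = sequential_min_sum(np.array([2.0, 1.5, 3.0, 1.0, 2.5, 0.5]),
                             layers=[[0, 1], [2]])
```

Setting `layers=[[0, 1, 2]]` recovers a single-subset (flooding-like) pass, so the same routine illustrates both extremes of the partitioning analyzed in the paper.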