2005, National Conference of Radio …
We investigate a modification to the sum-product algorithm used for decoding low-density parity-check (LDPC) codes. The sum-product algorithm is algorithmically simple and highly parallelizable, but suffers from high memory usage, making LDPC codes unsuitable for use in battery-powered devices such as cell phones and PDAs.
… , 2001. GLOBECOM'01. …, 2001
Efficient implementations of the sum-product algorithm (SPA) for decoding low-density parity-check (LDPC) codes using log-likelihood ratios (LLR) as messages between symbol and parity-check nodes are presented. Various reduced-complexity derivatives of the LLR-SPA are proposed. Both serial and parallel implementations are investigated, leading to trellis and tree topologies, respectively. Furthermore, by exploiting the inherent robustness of LLRs, it is shown, via simulations, that coarse quantization tables are sufficient to implement complex core operations with negligible or no loss in performance. The unified treatment of decoding techniques for LDPC codes presented here provides flexibility in selecting the appropriate design point in high-speed applications from a performance, latency, and computational complexity perspective.
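For concreteness, the pairwise check-node ("box-plus") operation that serial LLR-SPA implementations chain along a trellis can be sketched as follows. This is a minimal illustration of the standard Jacobian-logarithm form under assumed variable names, not the authors' implementation; the two log1p correction terms are exactly the kind of core operation the abstract suggests replacing with coarse quantization tables.

```python
import math

def box_plus(a: float, b: float) -> float:
    """Exact pairwise check-node operation on two LLR messages.

    Min-sum term plus two correction terms; per the abstract, the
    corrections can be read from a coarse lookup table with little
    or no loss in performance.
    """
    return (math.copysign(1.0, a) * math.copysign(1.0, b) * min(abs(a), abs(b))
            + math.log1p(math.exp(-abs(a + b)))
            - math.log1p(math.exp(-abs(a - b))))

def check_node_update(llrs):
    """Outgoing message on edge i combines all incoming LLRs except
    the i-th. This naive O(d^2) loop is for clarity; a serial trellis
    (forward-backward) schedule computes the same messages in O(d)."""
    out = []
    for i in range(len(llrs)):
        acc = None
        for j, m in enumerate(llrs):
            if j != i:
                acc = m if acc is None else box_plus(acc, m)
        out.append(acc)
    return out
```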
… Conference, 2005. 23rd, 2005
Low-density parity-check codes have recently received extensive attention as a forward error correction scheme in a wide range of applications. The decoding algorithm is inherently parallelizable, allowing communication at high speeds. One of the main disadvantages, however, is the large memory requirement for interim storage of decoding data. In this paper, we propose an architecture for an early decision decoding algorithm. The algorithm significantly reduces the number of memory accesses. Simulation results show that the increased energy dissipation of the components is small compared to the reduced dissipation of the memories.
IEEE Transactions on Communications, 2013
Non-binary low-density parity-check codes are robust to various channel impairments. However, based on the existing decoding algorithms, the decoder implementations are expensive because of their excessive computational complexity and memory usage. Based on combinatorial optimization, we present an approximation method for the check node processing. The simulation results demonstrate that our scheme incurs only a small performance loss over the additive white Gaussian noise channel and the independent Rayleigh fading channel. Furthermore, the proposed reduced-complexity realization provides significant savings in hardware, so it yields a good performance-complexity tradeoff and can be efficiently implemented.
Index Terms: Low-density parity-check (LDPC) codes, non-binary codes, iterative decoding, extended min-sum algorithm.
Circuit Theory and …, 2005
Low-density parity-check codes have recently received extensive attention as a forward error correction scheme in a wide range of applications. The decoding algorithm is inherently parallelizable, allowing communication at high speeds. One of the main disadvantages, however, is the large memory requirement for interim storage of decoding data. In this paper, we investigate a modification to the decoding algorithm, using early decisions for bits with high reliabilities. This reduces the number of messages passed by the algorithm, which can be expected to reduce the switching activity of a hardware implementation. While direct application of the modification results in severe performance penalties, we show how to adapt the algorithm to reduce the impact, resulting in a negligible decrease in error correction performance.
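A minimal sketch of the early-decision idea described above, assuming a simple magnitude threshold on the posterior LLRs; the names and the single fixed threshold are illustrative assumptions, not the paper's exact rule.

```python
def early_decision_pass(posterior_llrs, decided, threshold):
    """Freeze bits whose reliability exceeds the threshold.

    `decided` maps bit index -> hard decision; once a bit is frozen,
    its messages no longer need to be recomputed or stored, which is
    what reduces switching activity in a hardware implementation.
    """
    for i, llr in enumerate(posterior_llrs):
        if i not in decided and abs(llr) > threshold:
            decided[i] = 0 if llr > 0 else 1
    return decided
```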
A noisy message-passing decoding scheme is considered for low-density parity-check (LDPC) codes over additive white Gaussian noise (AWGN) channels. The internal decoder noise is motivated by the quantization noise in digital implementations or the intrinsic noise of analog LDPC decoders. We model the decoder noise as AWGN on the exchanged messages in the iterative LDPC decoder. This is shown to render the message densities in the noisy LDPC decoder inconsistent. We then invoke a Gaussian approximation and formulate a two-dimensional density evolution analysis for the noisy LDPC decoder. This allows tracking not only the mean, but also the variance of the message densities, and hence quantifying the threshold of the LDPC code. According to the results, decoder noise of unit variance increases the threshold for a regular (3,6) code by 1.672 dB.
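As background for the inconsistency claim, recall the standard symmetry condition from density-evolution analysis (a textbook fact, not the paper's derivation): a message density f is consistent when f(x) = e^x f(-x), and a Gaussian N(m, sigma^2) satisfies this iff sigma^2 = 2m. Adding independent decoder noise of variance sigma_d^2 to a consistent Gaussian message then breaks the condition:

```latex
% Consistent message density and the effect of decoder noise:
\[
  f(x) = e^{x} f(-x), \qquad
  \mathcal{N}(m,\sigma^{2})\ \text{consistent} \iff \sigma^{2} = 2m,
\]
\[
  \mathcal{N}(m,\,2m) \;\xrightarrow{\;+\,\mathcal{N}(0,\,\sigma_d^{2})\;}\;
  \mathcal{N}(m,\,2m+\sigma_d^{2}), \qquad 2m+\sigma_d^{2} \neq 2m,
\]
% so the noisy messages violate consistency, and density evolution
% must track mean and variance separately -- the two-dimensional
% analysis referred to in the abstract.
```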
Signals, Systems and …, 2005
Low-density parity-check codes have recently received extensive attention as a forward error correction scheme in a wide range of applications. The decoding algorithm is inherently parallelizable, allowing communication at high speeds. One of the main disadvantages, however, is the large memory requirement for interim storage of decoding data. In this paper, we investigate the performance of a hybrid decoding algorithm, using an approximating early decision algorithm and a regular probability propagation algorithm, as sketched below. When the early decision algorithm fails, the block is re-decoded using a probability propagation decoder. As almost all errors are detectable, the error correction performance of the hybrid algorithm deteriorates only negligibly. However, simulations still show a 32% decrease in memory accesses.
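The hybrid flow described above reduces to a cheap first pass with a fallback. A minimal sketch, assuming decoder callables that report detected failure (the interfaces are illustrative assumptions):

```python
def hybrid_decode(channel_llrs, early_decision_decode, bp_decode):
    """Try the approximating early-decision decoder first; since
    almost all of its errors are detectable, re-decode the block with
    the full probability-propagation (BP) decoder only on failure."""
    ok, word = early_decision_decode(channel_llrs)
    if ok:
        return word                 # common case: cheap decoder succeeded
    return bp_decode(channel_llrs)  # rare fallback: full BP re-decode
```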
1998
We develop a belief-propagation-based joint decoding and channel state estimation algorithm for low-density parity-check codes on fading channels with memory. The excellent empirical performance of LDPC codes on the AWGN channel and the possibility of arranging the code memory relative to the channel correlation suggest that LDPC codes have the potential to perform well on fading channels with memory. We treat the block fading ...
2011
Owing to advancements in 4G mobile communication and mobile TV, the throughput requirement in digital communication has been increasing rapidly. Thus, the need for efficient error-correcting codes is increasing. Furthermore, since most mobile devices operate with limited battery power, low-power communication techniques have lately been attracting considerable attention. In this article, we propose a novel low-power low-density parity-check (LDPC) decoder. The LDPC code is one of the most common error-correcting codes. In mobile TV, SNR estimation is required for the adaptive coding and modulation technique. We apply the SNR estimation result to the proposed LDPC decoding to minimize power consumption due to unnecessary operations. The SNR estimate is used to predict the iteration count until completion of successful LDPC decoding. When the SNR value is low, we omit computing the parity check and the tentative decision. We implemented the proposed decoder, which is capable of adaptively skipping unnecessary operations based on the SNR estimate. The power consumption was measured to show the efficiency of our approach. We verified that, by using our proposed method, power consumption is reduced by 10% for the SNR range of 1.5-2.5 dB.
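A sketch of the skipping policy described above, assuming a lookup `predicted_iters` from estimated SNR to the expected iteration count; all names and interfaces here are illustrative assumptions, not the authors' design.

```python
def decode_with_snr_skip(llrs, snr_db, predicted_iters, max_iters,
                         iterate, hard_decision, syndrome_ok):
    """Skip the parity check and tentative decision until the
    SNR-predicted iteration is reached, avoiding operations that
    would almost surely fail at low SNR anyway."""
    msgs = None
    for it in range(max_iters):
        msgs = iterate(llrs, msgs)             # one message-passing iteration
        if it + 1 < predicted_iters(snr_db):
            continue                           # omit check at low SNR
        word = hard_decision(llrs, msgs)
        if syndrome_ok(word):
            return word                        # early termination
    return hard_decision(llrs, msgs)
```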
2007
Low-density parity-check codes deliver very good performance when decoded with belief propagation, also known as the sum-product algorithm. Most of the complexity in the belief propagation algorithm rests in the check node processor. This paper presents an overview of the approaches used for reducing the number of active nodes that participate in the decoding iteration loop. A novel approach for deciding the initial threshold for optimal elimination of high-confidence nodes is also proposed. Merits of the proposed algorithm, termed the variable offset constant slope algorithm, are discussed.
Proc. Swedish System-on-Chip …, 2005
Low-density parity-check codes have recently received extensive attention as a forward error correction scheme in a wide range of applications. The decoding algorithm is inherently parallelizable, allowing communication at high speeds. One of the main disadvantages, however, is the large memory requirement for interim storage of decoding data. In this paper, we investigate a modification to the decoding algorithm, using early decisions for bits with high reliabilities. Currently, there are two proposed early decision schemes. We compare their theoretical performance and their suitability for hardware implementation. We also propose a new decision method, which we call weak decisions, that offers an increase in performance by a factor of two.
International Journal of Computing and Digital Systems
This paper presents Verilog implementations of Low-Density Parity-Check (LDPC) decoders using the Sum-Product and Min-Sum algorithms, which take more area than other decoding algorithms. An area-efficient LDPC decoder based on a reduced-complexity Min-Sum algorithm is then presented. It reduces computational complexity by limiting the extrinsic information bit length to 4 bits and by modifying the check node and variable node processing operations. Evaluation at the algorithmic level shows that the proposed decoder attains error performance close to that of a Sum-Product Algorithm based decoder, and consequently addresses the problem of severe error performance degradation of an LDPC decoder. A Min-Sum based LDPC decoder with a (1000, 500) parity-check matrix has been implemented in MATLAB at a BER of 10^-1, and the design is implemented in Verilog HDL. The complete top-level module was designed in a structural modeling style and simulated with the SPARTAN FPGA family. The saving in area is about 33% of slices, and the design provides a throughput of 1.46 Gbps.
Low-Density Parity-Check (LDPC) codes are a block coding technique that can approach the Shannon limit to within a fraction of a decibel for large block lengths. In many digital communication systems, these codes are strong competitors to turbo codes for error control. LDPC code performance depends on an excellent design of the parity-check matrix, and many diverse research methods have been used by different study groups to evaluate this performance. Unlike many other classes of codes, LDPC codes are already equipped with fast, probabilistic decoding algorithms. This makes LDPC codes not only attractive from a theoretical point of view, but also very suitable for practical applications. This paper discusses hard decision and soft decision decoding algorithms for LDPC codes. The hard decision family contains the bit-flipping algorithm, and the soft decision family contains the sum-product algorithm, which is used to correct burst errors. The sum-product algorithm is also called belief propagation. Bit ...
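As a concrete instance of the hard decision family mentioned above, here is a minimal Gallager-style bit-flipping decoder; this is a standard textbook form, not tied to any one paper in this list.

```python
import numpy as np

def bit_flip_decode(H: np.ndarray, y: np.ndarray, max_iters: int = 50) -> np.ndarray:
    """Hard-decision bit flipping: while the syndrome is nonzero,
    flip the bit(s) participating in the most unsatisfied checks."""
    x = y.copy() % 2
    for _ in range(max_iters):
        syndrome = H.dot(x) % 2
        if not syndrome.any():
            return x                    # all parity checks satisfied
        fails = syndrome.dot(H)         # per-bit count of failed checks
        x[fails == fails.max()] ^= 1    # flip the most suspect bit(s)
    return x
```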
International Journal of Electrical and Computer Engineering (IJECE), 2023
It is established that hard decision algorithms are more appropriate than soft decision ones for low-density parity-check (LDPC) decoding, since they are less complex at the decoding level. On the other hand, it is notable that the soft decision algorithm outperforms the hard decision one in terms of bit error rate (BER). In order to minimize the BER and the gap between these two families of LDPC decoders, a new LDPC decoding algorithm is suggested in this paper, based on both the normalized min-sum (NMS) and modified weighted bit-flipping (MWBF) algorithms. The proposed algorithm is named normalized min-sum modified weighted bit-flipping (NMSMWBF). The MWBF is executed after the NMS algorithm. The simulations show that our algorithm outperforms the NMS by 0.25 dB at a BER of 10^-8 over the additive white Gaussian noise (AWGN) channel. Furthermore, the proposed NMSMWBF and the NMS are both at the same level of decoding difficulty.
2015
"Low Density" comes from the characteristic of their parity-check matrix, which contains a small number of 1s in comparison to the number of 0s. This sparseness of the parity-check matrix guarantees two features: first, a decoding complexity that increases only linearly with the code length, and second, a minimum distance that also increases linearly with the code length. These codes are a practical implementation of Shannon's noisy coding theorem [1]. LDPC codes are similar to other linear block codes. In fact, every existing code can be successfully implemented with the LDPC iterative decoding ...
EURASIP Journal on Wireless Communications and Networking, 2014
A word error rate (WER) reducing approach for a hybrid iterative error and erasure decoding algorithm for low-density parity-check codes is described. A lower WER is achieved when the maximum number of iterations of the min-sum belief propagation decoder stage is set to certain specific values which are code dependent. By proper choice of decoder parameters, this approach reduces WER by about 2 orders of magnitude for an equivalent decoding complexity. Computer simulation results are given for the efficient use of this hybrid decoding technique in the presence of additive white Gaussian noise.
In this paper, a reduced-complexity, scalable implementation of an LDPC decoder is presented. The decoder architecture is an improved version of an earlier design. The new architecture makes the implementation of a multiple-code-rate, multiple-block-size, multiple-standard LDPC decoder very straightforward. As an example, we implemented a parameterized decoder that supports the LDPC code in the IEEE 802.16e standard, which requires code rates of 1/2, 2/3 and 3/4, with block sizes varying from 576 to 2304. The decoder is synthesized with Texas Instruments' 90 nm ASIC process technology. With a target operating frequency of 100 MHz and 15 decoding iterations, the maximum data rate is up to 256 Mbps.
IJSCA, 2011
Low-Density Parity-Check (LDPC) codes approach Shannon-limit performance for the binary field and long code lengths.
IEEE Transactions on Very Large Scale Integration (VLSI) Systems
This paper introduces a new approach to cost-effective, high-throughput hardware designs for Low-Density Parity-Check (LDPC) decoders. The proposed approach, called Non-Surjective Finite Alphabet Iterative Decoders (NS-FAIDs), exploits the robustness of message-passing LDPC decoders to inaccuracies in the calculation of exchanged messages, and it is shown to provide a unified framework for several designs previously proposed in the literature. NS-FAIDs are optimized by density evolution for regular and irregular LDPC codes, and are shown to provide different trade-offs between hardware complexity and decoding performance. Two hardware architectures targeting high-throughput applications are also proposed, integrating both Min-Sum (MS) and NS-FAID decoding kernels. ASIC post-synthesis implementation results on 65 nm CMOS technology show that NS-FAIDs yield significant improvements in the throughput-to-area ratio, by up to 58.75% with respect to the MS decoder, with even better or only slightly degraded error correction performance.
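The central NS-FAID device is a non-surjective remapping of quantized messages, so fewer distinct levels ever need to be stored. A minimal sketch of that idea on integer messages; the table below is an illustrative assumption, not one of the density-evolution-optimized NS-FAIDs from the paper.

```python
import numpy as np

def frame_messages(msgs: np.ndarray, lut: np.ndarray, q_max: int) -> np.ndarray:
    """Clip integer messages to the alphabet [-q_max, q_max], then
    remap magnitudes through a non-surjective lookup table."""
    clipped = np.clip(msgs, -q_max, q_max)
    return np.sign(clipped) * lut[np.abs(clipped)]

# Example: magnitudes {0,1,2,3} framed onto {0,1,3}; level 2 is never
# produced, so the map is non-surjective and stored messages need
# fewer distinct levels (hence fewer memory bits).
lut = np.array([0, 1, 1, 3])
msgs = np.array([-3, -2, 0, 2, 5])
print(frame_messages(msgs, lut, q_max=3))   # -> [-3 -1  0  1  3]
```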
Various log-likelihood-ratio-based belief-propagation (LLR-BP) decoding algorithms and their reduced-complexity derivatives for low-density parity-check (LDPC) codes are presented. Numerically accurate representations of the check-node update computation used in LLR-BP decoding are described. Furthermore, approximate representations of the decoding computations are shown to achieve a reduction in complexity by simplifying the check-node update, or symbol-node update, or both. In particular, two main approaches for simplified check-node updates are presented that are based on the so-called min-sum approximation coupled with either a normalization term or an additive offset term. Density evolution is used to analyze the performance of these decoding algorithms, to determine the optimum values of the key parameters, and to evaluate finite quantization effects. Simulation results show that these reduced-complexity decoding algorithms for LDPC codes achieve a performance very close to that of the BP algorithm. The unified treatment of decoding techniques for LDPC codes presented here provides flexibility in selecting the appropriate scheme from performance, latency, computational-complexity, and memory-requirement perspectives.
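A minimal sketch of the two simplified check-node updates the abstract names: the min-sum approximation with either a normalization factor or an additive offset. Parameter values would come from the density-evolution optimization; the function shape and names are illustrative.

```python
import numpy as np

def min_sum_check_update(in_msgs, alpha=None, beta=None):
    """Reduced-complexity check-node update: min-sum magnitude with an
    optional normalization term `alpha` (normalized min-sum) or an
    additive offset `beta` (offset min-sum), combined with the
    extrinsic sign product."""
    m = np.asarray(in_msgs, dtype=float)
    signs = np.where(m >= 0, 1.0, -1.0)
    total_sign = np.prod(signs)
    mags = np.abs(m)
    out = np.empty_like(m)
    for i in range(len(m)):
        mag = np.min(np.delete(mags, i))   # smallest magnitude excluding edge i
        if alpha is not None:
            mag *= alpha                   # normalization term
        if beta is not None:
            mag = max(mag - beta, 0.0)     # additive offset term
        out[i] = total_sign * signs[i] * mag  # extrinsic sign product
    return out
```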