1963, IEEE Transactions on Information Theory
A low-density parity-check code is a code specified by a parity-check matrix with the following properties: each column contains a small fixed number j ≥ 3 of 1's and each row contains a small fixed number k > j of 1's. The typical minimum distance of these codes increases linearly with block length for a fixed rate and fixed j. When used with maximum likelihood decoding on a sufficiently quiet binary-input symmetric channel, the typical probability of decoding error decreases exponentially with block length for a fixed rate and fixed j.
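The row/column constraint is easy to make concrete. Below is a minimal sketch, in Python with NumPy, of a Gallager-style construction of such a matrix: a base strip with k ones per row, stacked with j-1 random column permutations of it. The function name and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def gallager_ldpc_matrix(n, j, k, seed=0):
    """Build a (j, k)-regular parity-check matrix in Gallager's style:
    a base strip with k ones per row, stacked with j-1 random column
    permutations of that strip. Requires n divisible by k."""
    assert n % k == 0
    rng = np.random.default_rng(seed)
    m0 = n // k                        # rows per strip
    strip = np.zeros((m0, n), dtype=int)
    for i in range(m0):                # row i covers columns ik .. ik+k-1
        strip[i, i * k:(i + 1) * k] = 1
    blocks = [strip] + [strip[:, rng.permutation(n)] for _ in range(j - 1)]
    return np.vstack(blocks)           # shape (n*j/k, n)

H = gallager_ldpc_matrix(n=20, j=3, k=4)
print(H.sum(axis=0))   # every column weight is j = 3
print(H.sum(axis=1))   # every row weight is k = 4
```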
2007 10th Canadian Workshop on Information Theory (CWIT), 2007
A method for estimating the performance of low-density parity-check (LDPC) codes decoded by hard-decision iterative decoding algorithms on binary symmetric channels (BSC) is proposed. Based on the enumeration of the smallest-weight error patterns that cannot all be corrected by the decoder, this method estimates both the frame error rate (FER) and the bit error rate (BER) of a given LDPC code with very good precision for all crossover probabilities of practical interest. Through a number of examples, we show that the proposed method can be effectively applied to both regular and irregular LDPC codes and to a variety of hard-decision iterative decoding algorithms. Compared with conventional Monte Carlo simulation, the proposed method has a much smaller computational complexity, particularly at lower error rates. Index Terms: Low-density parity-check (LDPC) codes, finite-length LDPC codes, iterative decoding, hard-decision decoding algorithms, binary symmetric channels (BSC), error floor.
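The core of such an estimate can be illustrated with a one-term dominant approximation. The sketch below is a simplified reading of the idea, not the paper's actual estimator; it assumes the weight J of the smallest uncorrectable patterns and their count |E_J| are already known, and the example numbers are hypothetical.

```python
def fer_estimate(p, n, J, num_patterns):
    """Dominant-term FER estimate on a BSC with crossover probability p:
    the probability that one of the |E_J| smallest-weight uncorrectable
    error patterns (weight J) occurs. Heavier patterns are neglected,
    which is accurate at the small p values where the error floor
    dominates."""
    return num_patterns * p**J * (1 - p) ** (n - J)

# hypothetical code: n = 1000 bits, J = 4, |E_J| = 12 uncorrectable patterns
print(fer_estimate(1e-3, 1000, 4, 12))   # roughly 4.4e-12
```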
Journal of Physics A: Mathematical and General, 2003
The typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. A theoretical framework for dealing with general symmetric channels is provided, based on which Gallager and MacKay-Neal codes are studied as examples of LDPC codes. It is shown that the basic properties of these codes known for particular channels, including their potential to saturate Shannon's limit, hold for general symmetric channels. The binary-input additive-white-Gaussian-noise channel and the binary-input Laplace channel are considered as specific channel noise models.
IEEE Transactions on Communications, 2000
The performance of low-density parity-check (LDPC) codes decoded by hard-decision iterative decoding algorithms can be accurately estimated if the weight J and the number |E_J| of the smallest error patterns that cannot be corrected by the decoder are known. To obtain J and |E_J|, one would need to perform a direct enumeration of error patterns with weight i ≤ J. The complexity of enumeration increases exponentially with J, essentially as n^J, where n is the code block length. This limits the application of direct enumeration to codes with small n and J. In this letter, we approximate J and |E_J| by enumerating and testing the error patterns that are subsets of short cycles in the code's Tanner graph. This reduces the computational complexity by several orders of magnitude compared to direct enumeration, making it possible to estimate the error rates of almost any practical LDPC code. To obtain the error rate estimates, we propose an algorithm that progressively improves the estimates as larger cycles are enumerated. Through a number of examples, we demonstrate that the proposed method can accurately estimate both the bit error rate (BER) and the frame error rate (FER) of regular and irregular LDPC codes decoded by a variety of hard-decision iterative decoding algorithms. Index Terms: Binary symmetric channels (BSC), low-density parity-check (LDPC) codes, finite-length LDPC codes, error rate estimation of finite-length LDPC codes, error floor, hard-decision decoding algorithms, iterative decoding, Tanner graph cycles.
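As a concrete illustration of working with short cycles in a Tanner graph, the sketch below counts length-4 cycles of a parity-check matrix H. It is a generic utility under the usual Tanner-graph conventions, not the enumeration algorithm of the letter.

```python
import numpy as np
from itertools import combinations

def count_4_cycles(H):
    """Count length-4 cycles in the Tanner graph of H: every pair of
    check rows sharing s >= 2 variable columns contributes s*(s-1)/2
    four-cycles."""
    H = np.asarray(H, dtype=int)
    total = 0
    for r1, r2 in combinations(range(H.shape[0]), 2):
        shared = int(np.dot(H[r1], H[r2]))  # columns where both rows have a 1
        total += shared * (shared - 1) // 2
    return total
```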
Journal of Physics A: Mathematical and General, 2003
We present a theoretical method for a direct evaluation of the average and reliability error exponents in low-density parity-check error-correcting codes using methods of statistical physics. Results for the binary symmetric channel (BSC) are presented for codes of both finite and infinite connectivity.
Error Detection and Correction [Working Title]
Researchers have long sought codes that can be decoded with near-optimal decoding algorithms, and generalized LDPC (GLDPC) codes were found to compare well with such codes. LDPC codes are well covered by both types of decoding, hard-decision (HDD) and soft-decision (SDD); iterative decoding of GLDPC codes, on both AWGN and BSC channels, has by contrast not been sufficiently investigated in the literature. This chapter first describes the construction of GLDPC codes and then surveys their iterative decoding algorithms on both channels. SISO decoders for the GLDPC component codes show excellent error performance at moderate and high code rates; however, the complexity of such decoding algorithms is very high. When the hard-decision bit-flipping (BF) algorithm, attractive for its simplicity and speed, was applied to LDPC codes, the resulting performance was far from the BSC capacity, so using LDPC codes with such algorithms in optical systems is a poor choice. GLDPC codes can be introduced as a good alternative to LDPC codes, since their performance under the BF algorithm can be improved, making them a competitive choice for optical communications. This chapter discusses the iterative HDD algorithms that improve the decoding error performance of GLDPC codes; SDD algorithms that maintain this performance at the cost of decoding simplicity are also described.
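For reference, the BF algorithm mentioned above can be sketched in a few lines. This is a plain bit-flipping decoder for the BSC, not the improved GLDPC algorithms of the chapter; the function name and the flip rule (flip all bits involved in the maximal number of unsatisfied checks) are illustrative choices.

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=50):
    """Hard-decision bit-flipping on a BSC: each iteration flips the
    bits participating in the largest number of unsatisfied checks,
    stopping as soon as the syndrome is zero."""
    H = np.asarray(H, dtype=int)
    x = np.array(y, dtype=int).copy()
    for _ in range(max_iter):
        syndrome = (H @ x) % 2
        if not syndrome.any():
            return x, True                 # valid codeword found
        unsat = H.T @ syndrome             # unsatisfied checks per bit
        x[unsat == unsat.max()] ^= 1       # flip the worst offenders
    return x, False                        # decoding failure
```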
International Journal of Computer Applications, 2012
From the Shannon limit it is known that, for particular bandwidth and noise characteristics, there exists a maximum rate at which data can be transmitted with an arbitrarily small probability of error. Coding schemes are utilized to improve data transmission efficiency. This paper presents a comparative performance analysis of Low Density Parity Check (LDPC) codes and Bose-Chaudhuri-Hocquenghem (BCH) codes transmitting data over a noisy channel for different parameters. The performance of LDPC block codes is simulated for different decoding schemes and code rates, and is also analyzed for regular versus irregular codes. For a fixed error-correcting capability, the BCH coding scheme is further simulated for increasing code lengths. The simulated output is useful for analyzing the performance of a communication system before its physical implementation.
IEEE Transactions on Information Theory, 2000
A new verification-based message-passing decoder for low-density parity-check (LDPC) codes is introduced and analyzed for the q-ary symmetric channel (q-SC). Rather than passing messages consisting of symbol probabilities, this decoder passes lists of possible symbols and marks some lists as verified. The density evolution (DE) equations for this decoder are derived and used to compute decoding thresholds. If the maximum list size is unbounded, then one finds that any capacity-achieving LDPC code for the binary erasure channel can be used to achieve capacity on the q-SC for large q. The decoding thresholds are also computed via DE for the case where each list is truncated to satisfy a maximum list-size constraint. Simulation results are also presented to confirm the DE results. During the simulations, we observed differences between two verification-based decoding algorithms, introduced by Luby and Mitzenmacher, that were implicitly assumed to be identical. In this paper, the node-based algorithms are evaluated via analysis and simulation.
Low-Density Parity-Check codes are one of the hottest topics in coding theory nowadays: equipped with very fast encoding and decoding algorithms, LDPC codes are very attractive both theoretically and practically. In this article, we present a simulation of work that has been accepted in an international journal. The new algorithm allows us to correct errors quickly and without iterations. We show that the proposed algorithm's simulation can be applied to both regular and irregular LDPC codes. First, we developed the design of the syndrome block; second, we generated and simulated the hardware description language source code using Quartus software tools; finally, we show its low complexity compared to the basic algorithm.
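The syndrome block at the heart of such a design computes s = H·r mod 2; a zero syndrome means the received word is a codeword. A minimal software analogue is sketched below (the HDL itself is not reproduced here, and the function name is an assumption):

```python
import numpy as np

def syndrome(H, r):
    """Syndrome s = H r (mod 2) of received word r; a zero vector
    means r satisfies all parity checks of H."""
    return (np.asarray(H, dtype=int) @ np.asarray(r, dtype=int)) % 2
```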
Entropy, 2021
This paper deals with a specific construction of binary low-density parity-check (LDPC) codes. We derive lower bounds on the error exponents of these codes transmitted over the memoryless binary symmetric channel (BSC), for both the well-known maximum-likelihood (ML) decoding and the proposed low-complexity decoding algorithms. We prove the existence of LDPC codes for which the probability of erroneous decoding decreases exponentially with the growth of the code length while the coding rate is kept below the corresponding channel capacity. We also show that the obtained error exponent lower bound under ML decoding almost coincides with the error exponents of good linear codes.
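The shape of the claimed bound can be stated generically; the notation E(R) below is assumed for illustration rather than taken from the paper.

```latex
% Generic form of the exponential bound the abstract refers to:
% for rates R below the BSC capacity C there exists E(R) > 0 with
P_e \;\le\; e^{-n\,E(R)}, \qquad R < C,
% i.e. the decoding error probability decays exponentially in the
% block length n whenever the rate stays below capacity.
```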
Using a numerical approach, tradeoffs between code rate and decoding complexity are studied for long-block-length irregular low-density parity-check codes decoded using the sum-product algorithm under the usual parallel-update message-passing schedule. The channel is an additive white Gaussian noise channel and the modulation format is binary antipodal signalling, although the methodology can be extended to any other channel for which a density evolution analysis may be carried out. A measure is introduced that incorporates two factors that contribute to the decoding complexity. One factor, which scales linearly with the number of edges in the code's factor graph, measures the number of operations required to carry out a single decoding iteration. The other factor is an estimate of the number of iterations required to reduce the bit-error probability from that given by the channel to a desired target. The decoding complexity measure is obtained from a density-evolution analysis of the code, which is used to relate decoding complexity to the code's degree distribution and code rate. One natural optimization problem that arises in this context is to maximize code rate for a given channel subject to a constraint on decoding complexity. At one extreme (no constraint on decoding complexity) one obtains the "threshold-optimized" LDPC codes that have been the focus of much attention in recent years. Such codes themselves represent one possible means of trading decoding complexity for rate, as they can be applied on channels better than the one for which they were designed, achieving the benefit of reduced decoding complexity. However, it is found that the codes optimized using the methods described in this paper provide a better tradeoff, often achieving the same code rate with approximately 1/3 the decoding complexity of the threshold-optimized codes.
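A toy version of such a two-factor complexity measure, multiplying per-iteration work by the iteration count, might look as follows. The dictionary-based degree-distribution format, function name, and example iteration count are assumptions of this sketch, not the paper's measure.

```python
def decoding_complexity(lmbda, iterations):
    """Toy complexity measure: (edges in the factor graph per code bit)
    times (iterations needed to hit the target error rate). lmbda maps
    variable-node edge-degree -> fraction of edges, so the average
    variable degree is 1 / sum(lmbda[d] / d)."""
    avg_var_degree = 1.0 / sum(f / d for d, f in lmbda.items())
    return avg_var_degree * iterations

# regular (3,6) ensemble, assuming 30 iterations to reach the target BER
print(decoding_complexity({3: 1.0}, iterations=30))   # 90.0
```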
2017
In recent years, advances in wireless technologies have led to a surge in the adoption of wireless networks for industrial control and automation purposes. Existing standards are often based upon IEEE 802.15.4, which specifies automatic repeat request (ARQ) for packet retransmission. Alone, ARQ can lead to large transmission delays in the presence of adverse channel conditions. Forward error correction (FEC) is tempting as an alternative solution; however, if decoding errors persist, no provision is made for packet retransmission. Therefore, a type-II hybrid ARQ (HARQ) mechanism that utilizes pseudo-randomly punctured low-density parity-check (LDPC) codes is considered. The viability of LDPC codes in the proposed type-II HARQ system is fully explored by utilizing a range of short block lengths and varying decoder types. Hard-decision Gallager-A and soft-decision sum-product decoding are considered. Additionally, an improved hard-decision decoder based on a modified peeling algorithm in tandem with the Gallager-A algorithm is proposed. The throughput and complexity of both decoder types are explored, with complexity presented in terms of worst-case clock cycles. The results show that short LDPC codes perform better than RS codes and Turbo codes with respect to bit error rate, and that the improved hard-decision decoder greatly improves the bit error rate performance of hard-decision decoding in the presence of pseudo-random puncturing. The HARQ throughput performance of short code lengths surpasses that of longer codes, on a per-frame basis, regardless of decoder type. Finally, the complexity of soft-decision decoding is shown to fall below that of hard-decision decoding for a range of SNR values.
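The incremental-redundancy logic of a type-II HARQ scheme can be sketched abstractly as below; `transmit` and `decode` are placeholders standing in for the punctured-LDPC channel and decoder studied in the thesis, and the round limit is arbitrary.

```python
def harq_type2(transmit, decode, max_rounds=4):
    """Skeleton of a type-II HARQ loop: send the punctured (high-rate)
    block first, then release more of the punctured parity symbols on
    each NACK until decoding succeeds or the round budget runs out."""
    received = []
    for rnd in range(max_rounds):
        received += transmit(rnd)      # incremental redundancy per round
        word, ok = decode(received)
        if ok:
            return word, rnd + 1       # success after rnd+1 rounds
    return None, max_rounds            # give up; fall back to plain ARQ
```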
Low density parity check (LDPC) codes are among the best error correcting codes in today's coding world and are known to approach the Shannon limit. As with all other channel coding schemes, LDPC codes add redundancy to the uncoded input data to make it more immune to channel impairments. In this paper, the impact of LDPC coding on the performance of a system using Binary Phase Shift Keying (BPSK) over Additive White Gaussian Noise (AWGN) and fading (Rayleigh and Rician) channels is investigated. The obtained results show that LDPC coding can improve the transceiver system for various channel types. At a Bit Error Rate (BER) of 10^-4, such a code with code rate 1/2 reduces the required Signal to Noise Ratio (SNR) by 6.5 to 9 dB on fading channels relative to the uncoded system. A survey of recent research shows that turbo codes can achieve similar gains, but the LDPC decoder is faster than the turbo decoder and can be implemented in parallel.
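The uncoded BPSK/AWGN baseline against which such coding gains are quoted is easy to reproduce. The Monte Carlo sketch below estimates it; the function name, sample size, and seed are arbitrary choices.

```python
import numpy as np

def uncoded_bpsk_ber(ebn0_db, n_bits=200_000, seed=1):
    """Monte Carlo BER of uncoded BPSK on an AWGN channel: map bits to
    +/-1, add Gaussian noise scaled to the requested Eb/N0, and count
    sign errors at the detector."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1 - 2 * bits                        # 0 -> +1, 1 -> -1
    ebn0 = 10 ** (ebn0_db / 10)
    noise = rng.normal(0.0, np.sqrt(1 / (2 * ebn0)), n_bits)
    decisions = (symbols + noise) < 0             # detect bit 1
    return np.mean(decisions != bits)

print(uncoded_bpsk_ber(6.0))   # roughly 2.4e-3 at Eb/N0 = 6 dB
```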
IEEE Transactions on Information Theory, 2000
In this paper we investigate the performance of Raptor codes on arbitrary binary-input memoryless symmetric channels (BIMSCs), generalizing some of the results previously proved for the erasure channel. We generalize the stability condition to the class of Raptor codes. This generalization gives a lower bound on the fraction of output nodes of degree 2 of a Raptor code if the error probability of the belief-propagation decoder converges to zero. Using information-theoretic arguments, we show that if a sequence of output degree distributions is to achieve the capacity of the underlying channel, then the fraction of nodes of degree 2 in these degree distributions has to converge to a certain quantity depending on the channel. For the class of erasure channels this quantity is independent of the erasure probability of the channel, but for many other classes of BIMSCs this fraction depends on the particular channel chosen. This result has implications for the "universality" of Raptor codes beyond the class of erasure channels, in a sense made precise in the paper. We also investigate the performance of specific Raptor codes which are optimized using a more exact version of the Gaussian approximation technique.
2000
Recently Richardson, Shokrollahi, and Urbanke have proposed irregular Low-Density Parity-Check Codes (LDPCC) [1] that outperform, on memoryless channels, the best known turbo codes. These results have been obtained by allowing the degree of each node (variable or check) of an LDPCC to vary according to some distribution. In this paper we investigate the performance of these new codes over block fading channels (i.e., channels with memory), in terms of bit and codeword error rates, adopting the standard decoding algorithm and a modified version which slightly improves performance. A numerical comparison with conventional convolutional codes is also carried out. For code rate 1/2, it turns out that irregular LDPCC are advantageous only for large codeword sizes (greater than 500 bits), and that the gain with respect to a constraint-length-7 convolutional code decreases considerably with the channel memory.
2009
The optimal complexity-rate tradeoff for error-correcting codes at rates strictly below the Shannon limit is a central question in coding theory. This paper proposes a numerical approach for the joint optimization of rate and decoding complexity for long-block-length irregular low-density parity-check (LDPC) codes. The proposed design methodology is applicable to any binary-input memoryless symmetric channel and any iterative message-passing decoding algorithm with a parallel-update schedule. A key feature of the proposed optimization method is a new complexity measure that incorporates both the number of operations required to carry out a single decoding iteration and the number of iterations required for convergence. This paper shows that the proposed complexity measure can be accurately estimated from a density-evolution and extrinsic-information transfer chart analysis of the code. Under certain mild conditions, the complexity measure is a convex function of the variable edge-degree distribution of the code, allowing an efficient design of complexity-optimized LDPC codes using convex optimization methods. The results presented herein show that when the decoding complexity is constrained, the complexity-optimized codes significantly outperform threshold-optimized codes at long block lengths.
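One ingredient of any such degree-distribution optimization is the design rate implied by the edge-perspective distributions λ and ρ. The helper below computes the standard formula R = 1 - (Σ_j ρ_j/j)/(Σ_i λ_i/i); it is a generic utility, not the paper's optimizer.

```python
def design_rate(lmbda, rho):
    """Design rate of an LDPC ensemble from edge-perspective degree
    distributions (dicts mapping degree -> fraction of edges):
    R = 1 - (sum_j rho_j / j) / (sum_i lambda_i / i)."""
    checks = sum(f / d for d, f in rho.items())
    vars_ = sum(f / d for d, f in lmbda.items())
    return 1.0 - checks / vars_

print(design_rate({3: 1.0}, {6: 1.0}))   # regular (3,6) ensemble: rate 0.5
```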
As Low Density Parity Check (LDPC) codes have Shannon-limit-approaching error correcting performance, they are used in many applications. Iterative belief propagation (BP) algorithms, as well as approximations of BP, are used for decoding LDPC codes, but BP-based decoding suffers from the error floor problem: for finite-length codes, the error-performance curve can flatten out in the low error rate region due to the presence of cycles in the corresponding Tanner graph. This happens because decoding converges to trapping sets and cannot correct all errors even if more decoding iterations are carried out. Performance in the error floor region is important for applications which require very low error rates, such as flash memory and optical communications. To overcome this problem, a new type of decoder, the Finite Alphabet Iterative Decoder (FAID), was developed for LDPC codes. In this decoder the messages are represented by alphabets with a very small number of levels, and the variable-to-check messages are derived from the check-to-variable messages and the channel information through a predefined Boolean map designed to optimize the error correcting capability in the error floor region. FAIDs can outperform floating-point BP decoders in the error floor region over the Binary Symmetric Channel (BSC). In addition, multiple FAIDs with different map functions can be combined to further improve performance at higher complexity.
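To give a flavor of message quantization, the toy variable-node update below clips messages into a 3-level alphabet. Real FAIDs replace this clipped sum with a carefully optimized lookup map, so this is only a loose illustration, and all names here are assumptions.

```python
def quantized_vn_update(channel_value, incoming, lo=-1, hi=1):
    """Toy finite-alphabet variable-node update: the outgoing
    variable-to-check message is the sum of the channel value and the
    incoming check-to-variable messages, clipped into {-1, 0, +1}.
    Actual FAIDs use an optimized map instead of this plain sum."""
    s = channel_value + sum(incoming)
    return max(lo, min(s, hi))

print(quantized_vn_update(1, [-1, 1, 1]))   # clipped to +1
```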
IEEE GLOBECOM 2007-2007 IEEE Global Telecommunications Conference, 2007
We propose a deterministic method to design irregular Low-Density Parity-Check (LDPC) codes for binary erasure channels (BEC). Compared to the existing methods, which are based on the application of asymptotic analysis tools such as density evolution or Extrinsic Information Transfer (EXIT) charts in an optimization process, the proposed method is much simpler and faster. Through a number of examples, we demonstrate that the codes designed by the proposed method perform very closely to the best codes designed by optimization. An important property of the proposed designs is the flexibility to select the number of constituent variable node degrees P. The proposed designs include existing deterministic designs as a special case with P = N-1, where N is the maximum variable node degree. Compared to the existing deterministic designs, for a given rate and a given δ > 0, the designed ensembles can have a threshold in a δ-neighborhood of the capacity upper bound with smaller values of P and N. They can also achieve the capacity of the BEC as N, and correspondingly P and the maximum check node degree, tend to infinity. Index Terms: channel coding, low-density parity-check (LDPC) codes, binary erasure channel (BEC), deterministic design. I. INTRODUCTION. Low-Density Parity-Check (LDPC) codes have received much attention in the past decade due to their attractive performance/complexity tradeoff on a variety of communication channels. In particular, on the Binary Erasure Channel (BEC), they achieve the channel capacity asymptotically [1-4]. In [1],[5],[6] a complete mathematical analysis of the performance of LDPC codes over the BEC, both asymptotically and for finite block lengths, has been developed. For other types of channels such as the Binary Symmetric Channel (BSC) and the Binary Input Additive White Gaussian Noise (BIAWGN) channel, only asymptotic analysis is available [7]. For irregular LDPC codes, the problem of finding ensemble
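For context, the BEC density evolution recursion that such designs either use or sidestep is x ← ε·λ(1 - ρ(1 - x)). The sketch below bisects for the threshold of an ensemble and reproduces the well-known value for the regular (3,6) case; the tolerance and iteration caps are arbitrary choices.

```python
def bec_threshold(lmbda, rho, tol=1e-4):
    """Bisection for the BEC density-evolution threshold: the largest
    erasure probability eps for which x <- eps * lambda(1 - rho(1 - x))
    converges to 0. Degree distributions are edge-perspective dicts."""
    lam = lambda x: sum(f * x ** (d - 1) for d, f in lmbda.items())
    rh = lambda x: sum(f * x ** (d - 1) for d, f in rho.items())

    def converges(eps):
        x = eps
        for _ in range(10_000):
            x = eps * lam(1 - rh(1 - x))
            if x < 1e-12:
                return True
        return False

    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if converges(mid) else (lo, mid)
    return lo

print(bec_threshold({3: 1.0}, {6: 1.0}))   # about 0.429 for the (3,6) ensemble
```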
IEEE Transactions on Information Theory, 2000
Consider communication over a binary-input memoryless output-symmetric channel with low-density parity-check (LDPC) codes and maximum a posteriori (MAP) decoding. The replica method of spin glass theory allows one to conjecture an analytic formula for the average input-output conditional entropy per bit in the infinite block length limit. Montanari proved a lower bound for this entropy, in the case of LDPC ensembles with convex check degree polynomial, which matches the replica formula. Here we extend this lower bound to any irregular LDPC ensemble. The new feature of our work is an analysis of the second derivative of the conditional input-output entropy with respect to the noise. A close relation arises between this second derivative and the correlation or mutual information of code bits. This allows us to extend the realm of the "interpolation method"; in particular, we show how channel symmetry allows us to control the fluctuations of the "overlap parameters".
2011
In this paper we present a novel two-step design technique for Low Density Parity Check (LDPC) codes, which, among other applications, have been exploited for performance enhancement of the second generation of Digital Video Broadcasting-Satellite (DVB-S2). In the first step we develop an efficient algorithm for the construction of quasi-random LDPC codes via minimization of a cost function related to the distribution of cycle lengths in the Tanner graph of the code. The cost function aims at constructing high-girth bipartite graphs with a reduced number of short cycles. In the second optimization step we aim at improving the asymptotic performance of the code via edge perturbation. The design philosophy is to avoid asymptotically weak LDPC codes that have low minimum distance values and could potentially perform badly under iterative soft decoding at moderate to high Signal to Noise Ratio (SNR) values. We then present sample results of our LDPC design strategy, their simulated performance over an AWGN channel, and comparisons to some of the construction methods presented in the literature.