2015
The name "low density" comes from a characteristic of the parity-check matrix: it contains only a small number of 1s in comparison with the number of 0s. This sparseness of the parity-check matrix guarantees two features: first, a decoding complexity that increases only linearly with the code length, and second, a minimum distance that also increases linearly with the code length. These codes are a practical realization of Shannon's noisy-channel coding theorem [1]. LDPC codes are similar to other linear block codes; in fact, every existing code can be successfully implemented with the LDPC iterative decoding algorithm.
Sukhleen Bindra Narang, Kunal Pubby*, Hashneet Kaur, Department of Electronics Technology, Guru Nanak Dev University, Amritsar (India). E-mail: [email protected]
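The sparseness and the role of the parity-check matrix can be made concrete with a minimal sketch. The matrix H and the codeword below are illustrative assumptions, not taken from the paper: each check row has only three 1s, and a vector is a codeword exactly when its syndrome H·cᵀ is zero over GF(2).

```python
# Toy sparse parity-check matrix: each of the 3 check rows has only three 1s
# out of 6 positions. (Illustrative only; real LDPC matrices are much larger.)
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def syndrome(H, c):
    """s = H . c^T over GF(2); an all-zero syndrome means c is a codeword."""
    return [sum(h * x for h, x in zip(row, c)) % 2 for row in H]

c = [1, 0, 1, 1, 1, 0]       # a valid codeword of this toy code
assert syndrome(H, c) == [0, 0, 0]
```

A flipped bit makes some checks fail, and which checks fail is exactly the information the iterative decoders below exploit.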
2007
Low-Density Parity-Check Codes: Construction and Implementation by Gabofetswe A. Malema. Low-density parity-check (LDPC) codes have been shown to have good error-correcting performance, approaching Shannon's limit. Good error-correcting performance enables efficient and reliable communication. However, an LDPC decoding algorithm needs to be executed efficiently to meet the cost, time, power and bandwidth requirements of target applications, and the constructed codes must also meet the error-rate performance requirements of those applications. Since their rediscovery, there has been much research work on LDPC code construction and implementation. LDPC codes can be designed over a wide space with parameters such as girth, rate and length. There is no unique method of constructing LDPC codes, and existing construction methods are limited in some way in producing easily implementable codes with good error-correcting performance for a given rate and length. There is a need for methods of constructing codes over a wide range of rates and lengths with good performance and ease of hardware implementation. LDPC code hardware design and implementation depend on the structure of the target LDPC code and are as varied as LDPC matrix designs and constructions. Several factors must be considered, including decoding algorithm computations, the processing-node interconnection network, the number of processing nodes, the amount of memory, the number of quantization bits and the decoding delay. All of these issues can be handled in several different ways. This thesis is about the construction of LDPC codes and their hardware implementation. The construction and implementation issues mentioned above are too many to be addressed in one thesis; the main contribution of this thesis is the development of construction methods for some classes of structured LDPC codes and of techniques for reducing decoding time. We introduce two main methods for constructing structured codes.
In the first method, column-weight-two LDPC codes are derived from distance graphs. A wide range of girths, rates and lengths is obtained compared to existing methods.
2007
This paper introduces different construction techniques for the parity-check matrix H of irregular Low-Density Parity-Check (LDPC) codes. The first is the proposed Accurate Random Construction Technique (ARCT), an improvement of the Random Construction Technique (RCT) that satisfies an accurate profile. The second technique, the Speed Up Technique (SUT), improves the performance of irregular LDPC codes by growing H from a proposed initial construction rather than from an empty matrix as usual. The third and fourth techniques are further improvements of the SUT that ensure simpler decoding. In the Double Speed Up Technique (DSUT), the decoder size of the SUT matrix is fixed and the size of H is doubled. In the Partitioned Speed Up Technique (PSUT), the size of H is fixed and the decoder size is decreased by using small SUT matrices to grow H. Simulations show that LDPC codes formed using the SUT outperform those formed using the ARCT by 0.342 dB at BER = 10^-5 for block length N = 1000, and that LDPC codes created by the DSUT outperform the SUT by 0.194 dB at BER = 10^-5. Simulations also illustrate that partitioning H into small SUT submatrices not only simplifies the decoding process, it also simplifies the implementation and improves the performance. The improvement in the case of partitioning into halves is 0.139 dB at BER = 10^-5; however, as the partitioning increases, the performance degrades, by about 0.322 dB at BER = 10^-5 in the case of one-fourth.
Error Detection and Correction [Working Title]
Scientists have competed to find codes that can be decoded with optimal decoding algorithms, and generalized LDPC (GLDPC) codes were found to compare well with such codes. LDPC codes are well treated under both types of decoding, HDD and SDD. On the other hand, iterative decoding of GLDPC codes, on both AWGN and BSC channels, has not been sufficiently investigated in the literature. This chapter first describes their construction and then discusses the iterative decoding algorithms proposed so far for both channels. The SISO decoders of GLDPC component codes show excellent error performance at moderate and high code rates; however, the complexities of such decoding algorithms are very high. When the HDD bit-flipping (BF) algorithm was applied to LDPC codes for its simplicity and speed, its performance was far from the BSC capacity; therefore, using LDPC codes with such algorithms in optical systems is a poor choice. GLDPC codes can be introduced as a good alternative to LDPC codes, as their performance under the BF algorithm can be improved, making them a competitive choice for optical communications. This chapter discusses the iterative HDD algorithms that improve the decoding error performance of GLDPC codes. SDD algorithms that maintain the performance at the cost of decoding simplicity are also described.
2012
Low-density parity-check (LDPC) codes have been shown to have good error-correcting performance, approaching the Shannon limit and thereby enabling efficient and reliable communication. However, LDPC construction methods vary over a wide range of parameters such as rate, girth and length, and there is a need for methods of constructing codes over a wide range of rates and lengths with good performance. This research studies the construction of LDPC codes in randomized and structured form. The contribution of this thesis is a method called "randomly permuted copies of the parity-check matrix", which uses a base parity-check matrix designed by a random or structured construction method (such as Gallager or QC-LDPC codes, respectively) to obtain codes with multiple lengths at the same rate as the base matrix. This is done by using a seed matrix with row and column weights of one, distributed randomly, whose entries can be addressed by a number in the base matrix. The method reduces the memory space needed for storing large parity-check matrices and also reduces the probability of failing to construct a parity-check matrix by random approaches. Numerical results show that the proposed construction performs similarly to random codes of the same length and rate, such as Gallager's and MacKay's codes. It also increases the average girth of the Tanner graph and reduces the number of 4-cycles in the resulting matrix if they exist in the base graph.
Zenodo (CERN European Organization for Nuclear Research), 2007
This paper is mainly a study of one of the most widely used error-correcting codes, the low-density parity-check (LDPC) code. The regular LDPC code is discussed; the codes explained in this paper are regular binary LDPC codes, i.e., Gallager codes.
IEIE Transactions on Smart Processing & Computing
Active research has been carried out in the domain of channel coding in order to approach the Shannon limit. This paper presents a performance analysis of binary and non-binary low-density parity-check (LDPC) codes for various Galois fields. A performance comparison is made between non-binary LDPC codes and their binary counterparts constructed with the Progressive Edge Growth (PEG) algorithm. Decoding is based on belief propagation for binary codes and Fast Fourier Transform (FFT)-based belief propagation for non-binary codes. Simulation results indicate that the non-binary LDPC code significantly outperforms its binary counterpart.
This paper deals with the design and decoding of an extremely powerful and flexible family of codes called low-density parity-check (LDPC) codes. LDPC codes can be designed to perform close to the capacity of many different types of channels with a practical decoding complexity. It is conjectured that they can achieve the capacity of many channels and, indeed, they have been shown to achieve the capacity of the binary erasure channel (BEC) under iterative decoding. This paper explains LDPC codes and their decoding techniques, together with an overview of LDPC.
arXiv (Cornell University), 2006
Low-Density Parity-Check (LDPC) codes have received much attention recently due to their capacity-approaching performance. The iterative message-passing algorithm is a widely adopted decoding algorithm for LDPC codes [7]. An important design issue for LDPC codes is designing codes with fast decoding speed while maintaining capacity-approaching performance. In other words, it is desirable that a code can be successfully decoded in a small number of decoding iterations while at the same time achieving a significant portion of the channel capacity. Despite its importance, this design issue has received little attention so far. In this paper, we address it for the case of the binary erasure channel. We prove that density-efficient, capacity-approaching LDPC codes satisfy a so-called "flatness condition". We derive an asymptotic approximation to the number of decoding iterations. Based on these facts, we propose an approximate optimization approach to finding codes with good decoding speed. We further show that codes that are optimal in the sense of decoding speed are "right-concentrated"; that is, the degrees of the check nodes concentrate around the average right degree.
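The iterative decoding referred to here is particularly simple on the binary erasure channel: any check node touching exactly one erased bit determines that bit. The sketch below uses `None` to mark erasures and an assumed toy matrix (not from the paper); the number of passes until nothing changes is the iteration count the paper approximates.

```python
def peel_decode_bec(H, r):
    """Peeling decoder for the BEC: while some check touches exactly one
    erasure (None), recover that bit as the XOR of the check's known bits.
    Decoding stalls exactly on a stopping set. H is a dense 0/1 matrix here
    for readability; real decoders use sparse adjacency lists."""
    c = list(r)
    progress = True
    while progress:
        progress = False
        for row in H:
            erased = [j for j, h in enumerate(row) if h and c[j] is None]
            if len(erased) == 1:
                j = erased[0]
                c[j] = sum(c[k] for k, h in enumerate(row)
                           if h and k != j) % 2
                progress = True
    return c

H = [[1, 1, 0, 1, 0, 0],      # illustrative toy parity-check matrix
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
```

For example, erasing bits 3 and 5 of the codeword [1, 0, 1, 1, 1, 0] is recoverable in one pass, whereas erasing bits 0, 1 and 2 hits a stopping set and the decoder stalls with erasures left.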
IEEE Transactions on Communications, 2005
Gallager introduced LDPC codes in 1962, presenting a construction method that randomly allocates bits in a sparse parity-check matrix subject to constraints on the row and column weights. Since then, improvements have been made to Gallager's construction method, and some analytic constructions for LDPC codes have recently been presented. However, analytically constructed LDPC codes comprise only a very small subset of possible codes, and as a result LDPC codes are still, for the most part, constructed randomly. This paper extends the class of LDPC codes that can be systematically generated by presenting a construction method for regular LDPC codes based on combinatorial designs known as Kirkman triple systems. We construct (3, ρ)-regular codes whose Tanner graph is free of 4-cycles for a wide range of values of ρ.
Available at publik.tuwien.ac.at/files/…, 2010
Various log-likelihood-ratio-based belief-propagation (LLR-BP) decoding algorithms and their reduced-complexity derivatives for low-density parity-check (LDPC) codes are presented. Numerically accurate representations of the check-node update computation used in LLR-BP decoding are described. Furthermore, approximate representations of the decoding computations are shown to achieve a reduction in complexity by simplifying the check-node update, or symbol-node update, or both. In particular, two main approaches for simplified check-node updates are presented that are based on the so-called min-sum approximation coupled with either a normalization term or an additive offset term. Density evolution is used to analyze the performance of these decoding algorithms, to determine the optimum values of the key parameters, and to evaluate finite quantization effects. Simulation results show that these reduced-complexity decoding algorithms for LDPC codes achieve a performance very close to that of the BP algorithm. The unified treatment of decoding techniques for LDPC codes presented here provides flexibility in selecting the appropriate scheme from performance, latency, computational-complexity, and memory-requirement perspectives.
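The two simplified check-node updates described here can be sketched directly: the product of signs times the minimum input magnitude, corrected either by a normalization factor or by an additive offset. The values of `alpha` and `beta` below are illustrative assumptions; the paper obtains the optimum values via density evolution.

```python
import math

def check_update_minsum(incoming, alpha=0.8, beta=None):
    """Min-sum check-node update for one outgoing edge, given the LLRs on
    the check node's other edges. With beta set, uses the additive-offset
    correction; otherwise scales by the normalization factor alpha."""
    sign = 1.0
    for llr in incoming:
        sign = -sign if llr < 0 else sign
    mag = min(abs(llr) for llr in incoming)
    if beta is not None:
        mag = max(mag - beta, 0.0)   # offset min-sum
    else:
        mag *= alpha                 # normalized min-sum
    return sign * mag

def check_update_exact(incoming):
    """Exact sum-product (tanh-rule) check-node update, for comparison."""
    prod = 1.0
    for llr in incoming:
        prod *= math.tanh(llr / 2.0)
    return 2.0 * math.atanh(prod)
```

Plain min-sum (alpha = 1, no offset) always overestimates the magnitude of the exact update, which is why both correction terms shrink it.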
IET Communications, 2012
Although it is well known that extremely long low-density parity-check (LDPC) codes perform exceptionally well in error-correction applications, short-length codes are preferable in practice. However, short-length LDPC codes suffer from performance degradation owing to graph-based impairments such as short cycles, trapping sets and stopping sets in the bipartite graph of the LDPC matrix. In particular, performance degradation at moderate to high Eb/N0 is caused by oscillations in the bit-node a posteriori probabilities induced by short cycles and trapping sets. In this study, a computationally efficient algorithm is proposed to improve the performance of short-length LDPC codes at moderate to high Eb/N0. The algorithm makes use of the information generated by the belief propagation (BP) algorithm in earlier iterations, before a decoding failure occurs. Using this information, a reliability-based estimation is performed at each bit node to supplement the BP algorithm. The proposed algorithm gives an appreciable coding gain over BP decoding for LDPC codes of rate 1/2 or lower. The coding gains are modest to significant for regular LDPC codes optimized for bipartite-graph conditioning, and huge for unoptimized codes. Hence, the algorithm is useful for relaxing some stringent constraints on the graphical structure of the LDPC code and for developing hardware-friendly designs.
IJIREEICE, 2015
In this paper, we discuss decoding techniques for Low-Density Parity-Check (LDPC) codes. Using the results, we can evaluate which decoding procedure is better.
The Journal of Korean Institute of Communications and Information Sciences, 2005
In this paper, we propose a new sequential message-passing decoding algorithm for low-density parity-check (LDPC) codes that partitions the check nodes. The new decoding algorithm shows better bit-error-rate (BER) performance than the conventional message-passing decoding algorithm, especially for a small number of iterations. Analytical results tell us that as the number of partitioned subsets of check nodes increases, the BER performance improves. We also derive recursive equations for the mean values of the messages at the variable nodes, using density evolution with a Gaussian approximation. Simulation results confirm the analytical results.
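The scheduling idea can be sketched with a generic layered min-sum decoder on an assumed toy matrix: check rows are partitioned into subsets processed sequentially, so bit beliefs updated by one subset are already used by the next within the same iteration. This is a minimal sketch of the partitioning concept, not the paper's exact algorithm or its Gaussian-approximation analysis.

```python
def layered_minsum(H, llr, layers, iters=10):
    """Sequential ("layered") min-sum decoding: the check rows listed in each
    layer are processed in turn, updating the running a-posteriori LLRs so
    later layers see fresher beliefs. Convention: LLR > 0 decides bit 0."""
    post = list(llr)                         # running a-posteriori LLRs
    msg = [[0.0] * len(llr) for _ in H]      # stored check-to-bit messages
    for _ in range(iters):
        for layer in layers:
            for i in layer:
                cols = [j for j, h in enumerate(H[i]) if h]
                # bit-to-check inputs: subtract this check's old message
                tin = {j: post[j] - msg[i][j] for j in cols}
                for j in cols:
                    others = [tin[k] for k in cols if k != j]
                    sign = -1.0 if sum(v < 0 for v in others) % 2 else 1.0
                    new = sign * min(abs(v) for v in others)
                    post[j] = tin[j] + new
                    msg[i][j] = new
    return [0 if v >= 0 else 1 for v in post]

H = [[1, 1, 0, 1, 0, 0],      # illustrative toy parity-check matrix,
     [0, 1, 1, 0, 1, 0],      # decoded with one check row per layer
     [1, 0, 1, 0, 0, 1]]
```

With `layers=[[0], [1], [2]]` each check forms its own subset, the fully sequential extreme; `layers=[[0, 1, 2]]` recovers the conventional flooding schedule.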
2006
In this paper, a list decoding algorithm for low-density parity-check (LDPC) codes is presented. The algorithm uses a modification of the simple Gallager bit-flipping algorithm to generate a sequence of candidate codewords iteratively, one at a time, using a set of test error patterns based on the reliability information of the received symbols. It is particularly efficient for short block LDPC codes, both regular and irregular. Computer simulation results are used to compare the proposed algorithm with other known decoding algorithms for LDPC codes, and show that it offers excellent performance. Performance comparable to that of iterative decoding based on belief propagation can be achieved at much smaller complexity.
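The Gallager bit-flipping algorithm that the list decoder modifies can be sketched as follows. This is a hard-decision sketch on an assumed toy matrix; the paper's contribution, generating candidate codewords from reliability-based test error patterns, is not shown.

```python
def bit_flip_decode(H, r, max_iters=50):
    """Gallager-style hard-decision bit flipping: while some parity check
    fails, flip the bit(s) involved in the largest number of failed checks.
    A sketch only; tiny toy codes cannot correct errors reliably."""
    c = list(r)
    n = len(c)
    for _ in range(max_iters):
        s = [sum(h * x for h, x in zip(row, c)) % 2 for row in H]
        if not any(s):
            return c                                  # all checks satisfied
        counts = [sum(s[i] for i, row in enumerate(H) if row[j])
                  for j in range(n)]
        worst = max(counts)
        for j in range(n):
            if counts[j] == worst:
                c[j] ^= 1                             # flip suspicious bits
    return c                                          # gave up: return as-is

H = [[1, 1, 0, 1, 0, 0],      # illustrative toy parity-check matrix
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
```

A received word that already satisfies all checks is returned unchanged on the first pass; the list-decoding modification perturbs the received word with test error patterns and reruns this loop to collect multiple candidates.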
2013
The decoding of Low-Density Parity-Check (LDPC) codes operates over a redundant structure known as the bipartite graph, meaning that the full set of bit nodes is not absolutely necessary for decoder convergence. In 2008, Soyjaudah and Catherine designed a recovery algorithm for LDPC codes based on this assumption and showed that the error-correcting performance of their codes surpassed that of conventional LDPC codes. In this work, the use of the recovery algorithm is further explored by testing the performance of LDPC codes as the number of iterations is progressively increased. For experiments conducted with small block lengths of up to 800 bits and up to 2000 iterations, the results interestingly demonstrate that, contrary to conventional wisdom, the error-correcting performance keeps improving as the number of iterations increases.
2004
Low-density parity-check (LDPC) codes in their broader-sense definition are linear codes whose parity-check matrices have fewer 1s than 0s. Finding their minimum distance is in general an NP-hard problem; in other words, no known deterministic polynomial-time algorithm computes the minimum distance of a particular, nontrivial LDPC code. We propose a randomized algorithm called the approximately nearest codewords (ANC) searching approach to attack this hard problem for iteratively decodable LDPC codes. The principle of the ANC searching approach is to search for codewords locally around the all-zero codeword perturbed by a minimum level of noise, anticipating that the resulting nearest nonzero codewords will most likely contain the minimum-Hamming-weight codeword, whose Hamming weight equals the minimum distance of the linear code. The effectiveness of the algorithm is demonstrated by numerous examples.
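To see why a randomized search is needed, consider the quantity the ANC approach approximates: the minimum distance of a linear code is the smallest Hamming weight over all nonzero codewords. A brute-force sketch over an assumed toy matrix makes the definition concrete; its cost is exponential in the block length, which is exactly what rules it out for real codes.

```python
from itertools import product

def min_distance_bruteforce(H, n):
    """Smallest Hamming weight of a nonzero length-n codeword of the code
    with parity-check matrix H. Enumerates all 2^n vectors, so it is
    feasible only for toy parameters; ANC replaces this with a randomized
    local search around the perturbed all-zero codeword."""
    best = None
    for c in product([0, 1], repeat=n):
        if any(c) and all(
                sum(h * x for h, x in zip(row, c)) % 2 == 0 for row in H):
            w = sum(c)
            if best is None or w < best:
                best = w
    return best

H = [[1, 1, 0, 1, 0, 0],      # illustrative toy parity-check matrix
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
```

For this toy code the minimum distance is 3, e.g. the codeword [1, 0, 0, 1, 0, 1].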
Science China Information Sciences, 2014
In this paper, we investigate the construction of time-varying convolutional low-density parity-check (LDPC) codes derived from block LDPC codes using an improved progressive edge growth (PEG) method. Unlike the conventional PEG algorithm, the parity-check matrix is initialized by inserting certain patterns. More specifically, the submatrices along the main diagonal are fixed to be the identity matrix, which ensures the fast-encoding feature of LDPC convolutional codes. Second, we insert a nonzero pattern into the secondary-diagonal submatrices, which makes the encoding memory length of the time-varying LDPC convolutional codes as large as possible. With this semi-random structure, we analyze the code performance by evaluating the number of short cycles as well as the bound on the free distance. Simulation results show that the constructed LDPC convolutional codes perform well over additive white Gaussian noise (AWGN) channels.
Computing Research Repository, 2009
We propose a new type of short-to-moderate-block-length linear error-correcting codes called moderate-density parity-check (MDPC) codes. The number of ones in the parity-check matrix of the presented codes is typically higher than in the parity-check matrix of low-density parity-check (LDPC) codes, but still lower than in the parity-check matrix of classical block codes.