1972, IEEE Transactions on Information Theory
This work presents two innovative construction schemes for convolutional codes that incorporate block coding principles. The first scheme enlarges the generators of a self-orthogonal convolutional code (SOCC), while the second extends the parity constraints of SOCCs to establish hybrid convolutional codes. The resulting hybrid codes maintain the error correction capabilities of their constituent block codes, thereby achieving improvements in error correction for random errors and bursts. Notably, when using majority-decodable block codes, the resulting convolutional codes exhibit optimal performance. Detailed comparisons and variations for optimal codes, alongside significant findings on the simultaneous correction of error types, are discussed.
IEEE Transactions on Information Theory, 1970
Signals and Communication Technology, 2017
It is well known that an (n, k, d_min) error-correcting code C, where n and k denote the code length and information length, can correct d_min − 1 erasures [15, 16], where d_min is the minimum Hamming distance of the code. However, it is not so well known that the average number of erasures correctable by most codes is significantly higher than this and almost equal to n − k. In this chapter, an expression is obtained for the probability density function (PDF) of the number of correctable erasures as a function of the weight enumerator function of the linear code. Analysis results are given for several common codes in comparison to maximum likelihood decoding performance for the binary erasure channel. Many codes, including BCH codes, Goppa codes, double-circulant and self-dual codes, have weight distributions that closely match the binomial distribution [13-15, 19]. It is shown for these codes that a lower bound on the number of correctable erasures is n − k − 2. The decoder error rate performance for these codes is also analysed. Results are given for rate 0.9 codes, and it is shown for code lengths of 5000 bits or longer that there is insignificant difference in performance between these codes and theoretically optimum maximum distance separable (MDS) codes. Results for specific codes are given, including BCH codes, extended quadratic residue codes, LDPC codes designed using the progressive edge growth (PEG) technique [12] and turbo codes [1]. The erasure-correcting performance of codes and associated decoders has received renewed interest in the study of network coding as a means of providing efficient computer communication protocols [18]. Furthermore, the erasure performance of LDPC codes in particular has been used as a measure for predicting code performance on the additive white Gaussian noise (AWGN) channel [6, 17]. One of the first analyses of the erasure correction performance of particular linear block codes is provided in a keynote paper by Dumer and Farrell [7], who derive the erasure-correcting performance of long binary BCH codes and their dual codes and show that these codes achieve capacity for the erasure channel.
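The gap between the guaranteed d_min − 1 erasures and the roughly n − k erasures that most codes actually correct can be seen directly from the decoding procedure: erasure correction amounts to solving the parity-check equations for the erased coordinates, which succeeds whenever the corresponding columns of H are linearly independent. The following sketch is my own illustration of that standard formulation, not code from the chapter; it assumes the parity-check matrix and received word are 0/1 NumPy integer arrays.

```python
import numpy as np

def correct_erasures(H, received, erased):
    """Recover erased coordinates of a binary codeword, if possible.

    H        : (n - k, n) parity-check matrix over GF(2), entries 0/1
    received : length-n 0/1 array with erased positions filled arbitrarily
    erased   : list of erased coordinate indices
    Returns the repaired codeword, or None when the erased columns of H
    are linearly dependent (the erasure pattern is then not correctable).
    """
    n = H.shape[1]
    known = [i for i in range(n) if i not in erased]
    # Syndrome contribution of the known coordinates: H_E x_E must equal it.
    s = (H[:, known] @ received[known]) % 2
    # Gauss-Jordan elimination over GF(2) on the augmented matrix [H_E | s].
    A = np.concatenate([H[:, erased], s.reshape(-1, 1)], axis=1)
    rows = A.shape[0]
    r = 0
    for c in range(len(erased)):
        pivot = next((i for i in range(r, rows) if A[i, c]), None)
        if pivot is None:
            return None          # dependent columns: no unique solution
        A[[r, pivot]] = A[[pivot, r]]
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] ^= A[r]
        r += 1
    repaired = received.copy()
    repaired[erased] = A[:len(erased), -1]
    return repaired
```

For erasure patterns of weight up to n − k, the selected columns are linearly independent with high probability when the weight distribution is close to binomial, which is the behaviour the chapter quantifies through the PDF of the number of correctable erasures.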
2010
For low-end devices with limited battery or computational power, low-complexity decoders are beneficial. In this research we have searched for low-complexity decoder alternatives for error and error-erasure channels. We have focused especially on low-complexity error-erasure decoders, a topic that has received relatively little attention.
Journal of Electrical Engineering
In [1] a new family of error detection codes called Weighted Sum Codes was proposed. In [2] it was noted that these codes are equivalent to lengthened Reed–Solomon codes, and to shortened versions of lengthened Reed–Solomon codes, respectively, constructed over GF(2^(h/2)). It was also shown that these codes can be used to correct one error in each codeword over GF(2^(h/2)). In [3] a class of modified Generalized Weighted Sum Codes for single-error and, conditionally, double-error correction was presented. In this paper we present a new family of double-error-correcting codes with minimum distance d = 5. The weight spectrum of the [59,49,5] code over GF(8), an example of the new codes, was obtained by computer using its dual code [4]. The code rate of the new codes is higher than that of ordinary Reed–Solomon codes constructed over the same finite field.
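The dual-code computation mentioned above is the standard route: the [59,49,5] code over GF(8) has 8^49 codewords, but its dual is only a [59,10] code, so one can enumerate the dual's weight distribution and apply the MacWilliams identity. The sketch below is my own illustration of that transform (the use of sympy and the function name are assumptions, not taken from the paper).

```python
from sympy import symbols, expand, Poly

def macwilliams(dual_weights, n, q):
    """Weight distribution of a linear code C over GF(q), computed from the
    weight distribution of its dual via the MacWilliams identity:
        W_C(u, v) = W_dual(u + (q-1)v, u - v) / |C_dual|

    dual_weights : dict {weight: number of dual codewords of that weight}
    Returns a dict {weight: number of codewords of C of that weight}.
    """
    x, y, u, v = symbols('x y u v')
    # Homogeneous weight enumerator of the dual code.
    W_dual = sum(B * x**(n - w) * y**w for w, B in dual_weights.items())
    size_dual = sum(dual_weights.values())
    # Substitute x -> u + (q-1)v, y -> u - v and normalise by |C_dual|.
    W_C = expand(W_dual.subs({x: u + (q - 1) * v, y: u - v})) / size_dual
    return {w: int(c) for (_, w), c in Poly(expand(W_C), u, v).terms()}
```

As a quick sanity check, macwilliams({0: 1, 2: 3}, n=3, q=2) returns {0: 1, 3: 1}: the weight distribution of the binary length-3 repetition code, recovered from its even-weight dual.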
IEEE Transactions on Information Theory, 2005
Asymptotic iterative decoding performance is analyzed for several classes of iteratively decodable codes when the block length of the codes and the number of iterations go to infinity. Three classes of codes are considered: Gallager's regular low-density parity-check (LDPC) codes, Tanner's generalized LDPC (GLDPC) codes, and the turbo codes due to Berrou et al. It is proved that there exist codes in these classes, and iterative decoding algorithms for these codes, for which not only the bit error probability P_b but also the block (frame) error probability P_B goes to zero as the block length and the number of iterations go to infinity.
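As a concrete reference point for the class of decoders being analysed, the simplest iterative decoder for Gallager's regular LDPC codes is bit flipping: repeatedly flip the bits involved in the most unsatisfied parity checks until the syndrome is zero. The sketch below is purely illustrative of that idea; the paper's results concern the asymptotic behaviour of such iterative decoders, not this particular schedule.

```python
import numpy as np

def bit_flip_decode(H, r, max_iter=50):
    """Gallager-style bit-flipping decoding for a binary LDPC code.

    H : (m, n) parity-check matrix over GF(2) (0/1 numpy array)
    r : length-n hard-decision received vector (0/1 numpy array)
    Returns (estimate, True) if all checks end up satisfied, else (estimate, False).
    """
    x = r.copy()
    for _ in range(max_iter):
        syndrome = (H @ x) % 2
        if not syndrome.any():
            return x, True                      # valid codeword reached
        # For each bit, count the unsatisfied checks it participates in.
        unsatisfied = syndrome @ H
        # Flip every bit attaining the maximum count (one simple schedule).
        x = (x + (unsatisfied == unsatisfied.max())) % 2
    return x, False
```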
Information and Control
A new decoding algorithm for some convolutional codes constructed from block codes is given. The algorithm utilizes the decoding algorithm for the corresponding block code. It is shown that the codes obtained from one-step orthogonalizable block codes are majority decodable.
Isita, 2010
We investigate adaptive single-trial error/erasure decoding of binary codes whose decoder is able to correct e errors and t erasures if l·e + t ≤ d − 1, where d is the minimum Hamming distance of the code and 1 < l ≤ 2 is the tradeoff parameter between errors and erasures. The error/erasure decoder allows soft information to be exploited by treating a set of the most unreliable received symbols as erasures. The obvious question is how this erasing should be performed, i.e. how to determine the unreliable symbols that must be erased to obtain the smallest possible residual codeword error probability. In a previous paper, we answered this question for the case of fixed erasing, where only the channel state and not the individual symbol reliabilities is taken into consideration. In this paper, we address the adaptive case, where the optimal erasing strategy is determined for every given received vector.
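A small sketch makes the mechanics concrete: the t least reliable symbols are declared erasures, and the decoder is then guaranteed to handle any e remaining errors with l·e + t ≤ d − 1. The helper names below are hypothetical; the adaptive rule studied in the paper, i.e. how to choose t from the individual reliabilities so as to minimise the residual codeword error probability, is deliberately not reproduced here.

```python
def erase_least_reliable(reliabilities, t):
    """Indices of the t least reliable received symbols (erasure candidates)."""
    order = sorted(range(len(reliabilities)), key=lambda i: reliabilities[i])
    return set(order[:t])

def guaranteed_error_radius(d, t, l):
    """Largest number of errors e with l*e + t <= d - 1, or -1 if t erasures
    alone already exceed the decoder's capability."""
    if t > d - 1:
        return -1
    return int((d - 1 - t) // l)

# Tabulate the tradeoff for a code with d = 9 and tradeoff parameter l = 2:
# each additional erasure costs half as much of the distance budget as an error.
for t in range(9):
    print(f"t = {t} erasures -> e = {guaranteed_error_radius(9, t, 2.0)} errors guaranteed")
```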
IEEE Transactions on Information Theory, 1991
New combinatorial and algebraic techniques are presented for systematically constructing different (d, k) block codes capable of detecting and correcting single bit errors, single peak-shift errors, double adjacent errors and multiple adjacent erasures. Constructions utilizing channel side information, such as the magnetic recording ternary channel output string or erasures, do not impose any restriction on the k-constraint, while some of the other constructions require k = 2d. Due to the small and fixed number of redundant bits, the rates of both classes of constructions can be made to approach the capacity of the d-constrained channel for long codeword lengths. All the codes can be encoded and decoded with simple, structured logic circuits.
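For readers less familiar with run-length-limited recording codes, the (d, k) constraint itself is easy to state and check: every run of 0s between two consecutive 1s must have length at least d and at most k. The checker below illustrates only the constraint (ignoring boundary runs for simplicity); it does not implement the error-correcting constructions of the paper.

```python
def satisfies_dk(bits, d, k):
    """Check the (d, k) run-length constraint: every run of 0s strictly
    between two consecutive 1s has length at least d and at most k.
    Runs before the first 1 and after the last 1 are ignored here."""
    ones = [i for i, b in enumerate(bits) if b == 1]
    for a, b in zip(ones, ones[1:]):
        run = b - a - 1                   # zeros strictly between the two 1s
        if run < d or run > k:
            return False
    return True

# A (1, 3)-constrained sequence, and a violation (adjacent 1s break d = 1).
assert satisfies_dk([1, 0, 1, 0, 0, 0, 1, 0, 1], d=1, k=3)
assert not satisfies_dk([1, 1, 0, 1], d=1, k=3)
```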
Computing Research Repository, 2005
We present error-correcting codes that achieve the information-theoretically best possible trade-off between the rate and error-correction radius. Specifically, for every 0 < R < 1 and ε > 0, we present an explicit construction of error-correcting codes of rate R that can be list decoded in polynomial time up to a fraction (1 − R − ε) of worst-case errors. At least theoretically, this meets one of the central challenges in algorithmic coding theory.