1991, IEEE Transactions on Information Theory
In list-of-L decoding of a block code, the receiver of a noisy sequence lists L possible transmitted messages, and is in error only if the correct message is not on the list. This paper considers (n, e, L) codes, which correct all sets of e or fewer errors in a block of n bits under list-of-L decoding. New geometric relations between the number of errors corrected under list-of-1 decoding and the (larger) number corrected under list-of-L decoding of the same code lead to new lower bounds on the maximum rate of (n, e, L) codes. They show that a jammer who can change a fixed fraction p < 1/2 of the bits in an n-bit linear block code cannot prevent reliable communication at a positive rate using list-of-L decoding for sufficiently large n and an L ≤ n. The new bounds are stronger for small n but weaker for fixed e/n in the limit of large n and L than known random coding bounds. Index Terms: list decoding, error-correcting codes, correcting nearly n/2 errors.
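To make the list-of-L notion concrete, here is a minimal brute-force sketch (our illustration, not a construction from the paper): the decoder returns the L codewords closest in Hamming distance to the received word, and decoding counts as correct whenever the transmitted codeword is on that list. The toy [6,2] code and the two-error pattern are arbitrary choices.

```python
# Illustrative brute-force list-of-L decoder (ours, not from the paper):
# return the L codewords closest in Hamming distance to the received word;
# decoding counts as correct whenever the transmitted codeword is on the list.
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def all_codewords(G):
    k = len(G)
    return [tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))
            for msg in product([0, 1], repeat=k)]

def list_decode(received, codewords, L):
    return sorted(codewords, key=lambda c: hamming(c, received))[:L]

# Toy [6,2] code with minimum distance 4, so unique decoding handles 1 error.
G = [[1, 0, 1, 1, 1, 0],
     [0, 1, 1, 1, 0, 1]]
codewords = all_codewords(G)
sent = codewords[3]
received = list(sent)
received[0] ^= 1          # two errors: beyond the unique-decoding radius
received[2] ^= 1
for L in (1, 2, 4):
    lst = list_decode(tuple(received), codewords, L)
    print(f"L={L}: transmitted word on list: {sent in lst}")
```

With two errors, list-of-1 (unique) decoding fails for this received word, while a list of size 2 already contains the transmitted codeword.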
Computing Research Repository, 2005
We present error-correcting codes that achieve the information-theoretically best possible trade-off between the rate and error-correction radius. Specifically, for every 0 < R < 1 and ε > 0, we present an explicit construction of error-correcting codes of rate R that can be list decoded in polynomial time up to a fraction (1 − R − ε) of worst-case errors. At least theoretically, this meets one of the central challenges in algorithmic coding theory.
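A rough numerical reading of this trade-off (our illustration; the value of ε and the rates below are arbitrary): unique decoding of a rate-R code can correct at most roughly a (1 − R)/2 fraction of errors, by the Singleton bound, while the codes described here are list decodable up to a fraction 1 − R − ε.

```python
# Numeric illustration (ours, not from the paper): compare the best possible
# unique-decoding radius, roughly (1 - R)/2 by the Singleton bound, with the
# list-decoding radius 1 - R - eps achieved by the capacity-achieving codes.
eps = 0.05
for R in (0.1, 0.25, 0.5, 0.75):
    unique_radius = (1 - R) / 2
    list_radius = 1 - R - eps
    print(f"rate R={R:.2f}:  unique ~ {unique_radius:.3f}   list decoding ~ {list_radius:.3f}")
```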
IEEE Transactions on Information Theory, 2005
The monotone structure of correctable and uncorrectable errors given by the complete decoding for a binary linear code is investigated. New bounds on the error-correction capability of linear codes beyond half the minimum distance are presented, both for the best codes and for arbitrary codes under some restrictions on their parameters. It is proved that some known codes of low rate are as good as the best codes in an asymptotic sense.
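A small brute-force sketch (ours, taking complete decoding to mean nearest-codeword decoding with ties broken in favour of the earliest codeword) illustrates the phenomenon studied here: for a toy [5,2] code of minimum distance 3, some error patterns of weight 2, i.e. beyond half the minimum distance, are still corrected.

```python
from itertools import product

# Brute-force sketch (ours, not from the paper): for a small [5,2] binary code
# with minimum distance 3, complete (nearest-codeword) decoding corrects every
# pattern of weight <= 1 and, in addition, some patterns of weight 2, i.e.
# beyond half the minimum distance.
G = [[1, 0, 1, 1, 0],
     [0, 1, 0, 1, 1]]
n, k = 5, 2
codewords = [tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))
             for msg in product([0, 1], repeat=k)]
zero = codewords[0]                      # the all-zero codeword comes first

def nearest(word):
    # ties are broken in favour of the earliest codeword, i.e. the zero word
    return min(codewords, key=lambda c: sum(x != y for x, y in zip(c, word)))

stats = {}
for err in product([0, 1], repeat=n):    # by linearity it suffices to send zero
    w = sum(err)
    ok = nearest(err) == zero
    good, total = stats.get(w, (0, 0))
    stats[w] = (good + ok, total + 1)
for w in sorted(stats):
    good, total = stats[w]
    print(f"weight {w}: {good}/{total} error patterns corrected")
```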
eiris.it
This paper presents a lower and an upper bound on the number of parity-check digits required for a linear code that corrects a single subblock containing errors in the form of 2-repeated bursts of length b or less. An illustration of such codes is provided. Further, codes that correct m-repeated bursts of length b or less are also studied.
IEEE Transactions on Information Theory
The probability of undetected error of linear block codes for use on a binary symmetric channel is investigated. Upper bounds are derived. Several classes of linear block codes are proved to have good error-detecting capability.
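For a concrete instance of the quantity being bounded (our worked example, not from this paper), the undetected-error probability of a linear code used purely for error detection on a binary symmetric channel follows directly from its weight distribution; here for the [7,4] Hamming code.

```python
# Worked example (ours): the undetected-error probability of a linear code used
# purely for error detection on a BSC with crossover probability p is
#   P_ue(p) = sum_{i >= 1} A_i * p**i * (1 - p)**(n - i),
# where A_i is the number of codewords of weight i.  Here: the [7,4] Hamming
# code, whose weight distribution is A_0=1, A_3=7, A_4=7, A_7=1.
def undetected_error_prob(weight_distribution, n, p):
    return sum(A * p**i * (1 - p)**(n - i)
               for i, A in weight_distribution.items() if i > 0)

hamming_7_4 = {0: 1, 3: 7, 4: 7, 7: 1}
for p in (0.01, 0.05, 0.1, 0.5):
    print(f"p={p}:  P_ue = {undetected_error_prob(hamming_7_4, 7, p):.3e}")
```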
Physical Communication, 2019
The design of block codes for short information blocks (e.g., a thousand or fewer information bits) is an open research problem that is gaining relevance thanks to emerging applications in wireless communication networks. In this paper, we review some of the most promising code constructions targeting the short-block regime, and we compare them with both finite-length performance bounds and classical error-correction coding schemes. The work addresses the use of both binary and high-order modulations over the additive white Gaussian noise channel. We illustrate how to effectively approach the theoretical bounds with various performance versus decoding-complexity tradeoffs.
IEEE Transactions on Information Theory, 1987
Given any fixed linear block code, the error rates for the message symbols depend both on the encoding function and on the decoding map. This research shows how to optimize the choice of a generator matrix and decoding map simultaneously to minimize the error rates for all message symbols. The model used assumes that the distribution of messages is flat and that the distribution of error vectors defining the channel is independent of the message transmitted. In addition, it is shown that, with proper choice of coset leaders, standard array decoding is optimal in this circumstance. The results generalize previously known results on unequal error protection and are sufficiently general to apply when a code is used for error detection only.
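A minimal sketch (ours) of the standard-array decoding referred to here, with each coset leader chosen as a minimum-weight vector, the "proper choice" under which the abstract states the decoder is optimal for a BSC. The [5,2] code and parity-check matrix below are illustrative.

```python
from itertools import product

# Minimal sketch (ours, not the paper's construction): syndrome decoding of a
# small [5,2] binary code.  Choosing a minimum-weight coset leader for every
# syndrome is the "proper choice of coset leaders" that minimises the error
# probability on a BSC.
H = [[1, 0, 1, 0, 0],        # parity-check matrix: H * c^T = 0 for codewords c
     [1, 1, 0, 1, 0],
     [0, 1, 0, 0, 1]]
n = 5

def syndrome(word):
    return tuple(sum(h * w for h, w in zip(row, word)) % 2 for row in H)

# Build the coset-leader table: lightest error pattern for each syndrome.
leader = {}
for e in sorted(product([0, 1], repeat=n), key=sum):
    leader.setdefault(syndrome(e), e)

def decode(received):
    e = leader[syndrome(received)]
    return tuple((r + x) % 2 for r, x in zip(received, e))

sent = (1, 0, 1, 1, 0)       # a codeword of this code
received = (1, 0, 0, 1, 0)   # single error in position 2
print(decode(received) == sent)   # the error is corrected
```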
2002
A novel technique for deriving lower bounds on the error probability when communicating one of M signals over a communication channel is proposed. At the basis of the technique stands an improvement of de Caen's recent lower bound on the probability of a union of events. The new bound includes a function that can be optimized to achieve the tightest results. By applying this bound to the problem of lower bounding the error probability, with an optimization function chosen to suit the channel model and the type of code, new bounds on the error probability can be derived. In this talk, we apply the new bound to lower bounding the error probability of binary linear codes over the binary symmetric channel (BSC). The resulting bound improves on the latest bound in the literature, due to Keren and Litsyn.
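For reference, de Caen's lower bound, the starting point of this technique, states that P(A_1 ∪ … ∪ A_M) ≥ Σ_i P(A_i)² / Σ_j P(A_i ∩ A_j). A tiny numeric check (ours; the probability space and events are arbitrary) shows the bound in action; the paper's contribution is the optimizable weighting function inserted into this inequality.

```python
# Small numeric illustration (ours) of de Caen's lower bound on the probability
# of a union of events:
#   P(A_1 u ... u A_M) >= sum_i P(A_i)**2 / sum_j P(A_i n A_j).
from itertools import product

# Uniform space: outcomes are the 16 binary strings of length 4.
omega = list(product([0, 1], repeat=4))
p = 1 / len(omega)
events = [set(x for x in omega if x[i] == 1) for i in range(3)]   # A_i = {bit i is 1}

def prob(A):
    return len(A) * p

exact = prob(set().union(*events))
de_caen = sum(prob(A)**2 / sum(prob(A & B) for B in events) for A in events)
print(f"exact P(union) = {exact:.4f},  de Caen bound = {de_caen:.4f}")
```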
IEEE Signal Processing Magazine, 2004
Channel coding is an error-control technique used for providing robust data transmission through imperfect channels by adding redundancy to the data. There are two important classes of such coding methods: block and convolutional. For this tutorial, we focus on linear block codes because they provide much insight and allow for a simple visualization of the error detection/correction process. Forward error correction (FEC) is the name used when the receiving equipment does most of the work. In the case of block codes, the decoder looks for errors and, once detected, corrects them (according to the capability of the code). The technique has become an important signal-processing tool used in modern communication systems and in a wide variety of other digital applications such as high-density memory and recording media. Such coding provides system performance improvements at significantly lower cost than other methods of increasing the signal-to-noise ratio (SNR), such as increased power or antenna gain. In this article we first develop the ideas behind simple binary codes and then treat cyclic and nonbinary codes.
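As a concrete instance of the detect-then-correct idea (our sketch, using the classic [7,4] Hamming code rather than anything specific to this article), the parity-check columns can be taken to be the binary representations of 1 through 7, so the syndrome of the received word, read as a number, points directly at a single flipped bit.

```python
# Classic illustration (ours) of forward error correction with the [7,4]
# Hamming code: the columns of H are the numbers 1..7 in binary, so the
# syndrome of the received word, read as a number, is the position of a
# single-bit error (0 means "no error detected").
H = [[0, 0, 0, 1, 1, 1, 1],   # most significant syndrome bit
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]   # least significant syndrome bit

def syndrome(word):
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

codeword = [0, 1, 1, 0, 0, 1, 1]      # a valid codeword: its syndrome is zero
received = codeword.copy()
received[4] ^= 1                      # channel flips the bit at index 4 (position 5)

s = syndrome(received)
error_position = s[0] * 4 + s[1] * 2 + s[2]   # read the syndrome as a binary number
if error_position:
    received[error_position - 1] ^= 1         # correct the flipped bit
print(received == codeword)                   # True
```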
2012 Information Theory and Applications Workshop, 2012
It is shown that, for all prime powers q and all k ≥ 3, if n ≥ (k − 1)q^(k−2) · (q^k − q)/(q − 1), then there exists an [n, k; q] code that is proper for error detection.
IEEE Transactions on Information Theory, 1996
In the first part of the correspondence we derive an upper bound on the undetected error probability of binary (n, k) block codes used on channels with memory described by Markov distributions. This bound is a generalization of the bound presented by Kasami et al. for the binary symmetric channel, and is given as an average value of some function of the composition of the state sequence of the channel. It can be extended to particular cases of Markov-type channels. As an example, such an extended bound is given for the Gilbert-Elliott channel and Markov channels with deterministic errors determined by the state. In the second part we develop a recursive technique for the exact calculation of the undetected error probability of an arbitrary linear block code used on a Markov-type channel. This technique is based on the trellis representation of block codes described by Wolf. Results of some computations are presented.
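A Monte Carlo sketch (ours; the correspondence computes this quantity exactly via a trellis recursion, and the channel parameters below are arbitrary) of the undetected-error probability of a short code on a Gilbert-Elliott channel: an error pattern goes undetected exactly when it is a nonzero codeword.

```python
# Monte Carlo sketch (ours; the paper computes this exactly with a recursive
# trellis technique): estimate the undetected-error probability of the [7,4]
# Hamming code used for error detection on a Gilbert-Elliott channel.
import random
from itertools import product

G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
codewords = {tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))
             for msg in product([0, 1], repeat=4)}

# Gilbert-Elliott parameters (illustrative values, not from the paper).
p_good, p_bad = 0.01, 0.3          # crossover probability in each state
g2b, b2g = 0.1, 0.4                # state-transition probabilities

def error_pattern(n, state):
    pattern = []
    for _ in range(n):
        p = p_good if state == "good" else p_bad
        pattern.append(1 if random.random() < p else 0)
        if state == "good" and random.random() < g2b:
            state = "bad"
        elif state == "bad" and random.random() < b2g:
            state = "good"
    return tuple(pattern), state

random.seed(1)
trials, undetected = 200_000, 0
state = "good"
for _ in range(trials):
    e, state = error_pattern(7, state)
    if any(e) and e in codewords:  # nonzero codeword => error goes undetected
        undetected += 1
print(f"estimated P_ue ~ {undetected / trials:.2e}")
```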
IEEE Transactions on Information Theory, 2000
Random coding performance bounds for L-list permutation-invariant binary linear block codes transmitted over output-symmetric channels are presented. Under list decoding, double and single exponential bounds are deduced by considering permutation ensembles of the above codes and exploiting the concavity of the double exponential function over the region of erroneous received vectors. The proposed technique specifies fixed list sizes L for specific codes under which the corresponding list decoding error probability approaches zero in a double exponential manner. The single exponential bound constitutes a generalization of the Shulman-Feder bound and allows the treatment of codes with rates below the cutoff limit. Numerical examples of the new bounds for the specific category of codes are presented.
IEEE Transactions on Information Theory, 2004
New lower bounds on the error probability of block codes with maximum-likelihood decoding are proposed. The bounds are obtained by applying a new lower bound on the probability of a union of events, derived by improving on de Caen's lower bound. The new bound includes an arbitrary function to be optimized in order to achieve the tightest results. Since the optimal choice of this function is known, but leads to a trivial and useless identity, we find several useful approximations for it, each resulting in a new lower bound.
2006
Coding theory has played a central role in the development of computer science. One critical point of interaction is decoding error-correcting codes. First- and second-order Reed-Muller (RM(1) and RM(2), respectively) codes are two fundamental error-correcting codes which arise in communication as well as in probabilistically checkable proofs and learning. In this paper, the first steps are taken toward extending the quick randomized decoding tools of RM(1) into the realm of quadratic binary and, equivalently, Z4 codes. The main algorithmic result is an extension of the RM(1) techniques from the Goldreich-Levin and Kushilevitz-Mansour algorithms to the Hankel code, a code between RM(1) and RM(2). That is, given a signal s of length N, a list is found that is a superset of all Hankel codewords ϕ with |⟨s, ϕ⟩|² ≥ (1/k)‖s‖², in time poly(k, log N). A new and simple formulation of a known Kerdock code is given as a subcode of the Hankel code, which leads to two immediate corollaries. First, the new Hankel list-decoding algorithm covers subcodes, including the new Kerdock construction, so it can list-decode Kerdock, too. Furthermore, because dot products of distinct Kerdock vectors have small magnitude, a quick algorithm is obtained for finding a sparse Kerdock approximation. That is, for k small compared with √N and for ε > 0, in time poly(k log(N)/ε), a k-term Kerdock approximation s̃ to s is found with Euclidean error at most a factor (1 + ε + O(k²/√N)) times that of the best such approximation.
arXiv (Cornell University), 2022
Locally Decodable Codes (LDCs) are error-correcting codes C : Σ n → Σ m , encoding messages in Σ n to codewords in Σ m , with super-fast decoding algorithms. They are important mathematical objects in many areas of theoretical computer science, yet the best constructions so far have codeword length m that is super-polynomial in n, for codes with constant query complexity and constant alphabet size. In a very surprising result, Ben-Sasson, Goldreich, Harsha, Sudan, and Vadhan (SICOMP 2006) show how to construct a relaxed version of LDCs (RLDCs) with constant query complexity and almost linear codeword length over the binary alphabet, and used them to obtain significantly-improved constructions of Probabilistically Checkable Proofs. In this work, we study RLDCs in the standard Hamming-error setting, and introduce their variants in the insertion and deletion (Insdel) error setting. Standard LDCs for Insdel errors were first studied by Ostrovsky and Paskin-Cherniavsky (Information Theoretic Security, 2015), and are further motivated by recent advances in DNA random access biotechnologies (Banal et al., Nature Materials, 2021), in which the goal is to retrieve individual files from a DNA storage database. Our first result is an exponential lower bound on the length of Hamming RLDCs making 2 queries (even adaptively), over the binary alphabet. This answers a question explicitly raised by Gur and Lachish (SICOMP 2021) and is the first exponential lower bound for RLDCs. Combined with the results of Ben-Sasson et al., our result exhibits a "phase-transition"-type behavior on the codeword length for some constant-query complexity. We achieve these lower bounds via a transformation of RLDCs to standard Hamming LDCs, using a careful analysis of restrictions of message bits that fix codeword bits. We further define two variants of RLDCs in the Insdel-error setting, a weak and a strong version. On the one hand, we construct weak Insdel RLDCs with almost linear codeword length and constant query complexity, matching the parameters of the Hamming variants. On the other hand, we prove exponential lower bounds for strong Insdel RLDCs. These results demonstrate that, while these variants are equivalent in the Hamming setting, they are significantly different in the insdel setting. Our results also prove a strict separation between Hamming RLDCs and Insdel RLDCs.
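For readers unfamiliar with LDCs, here is a sketch (ours, background only) of the textbook 2-query local decoder for the Hadamard code, the simplest constant-query LDC: the codeword of x lists the parity ⟨x, a⟩ for every a, and bit x_i is recovered by querying two positions whose indices differ only in coordinate i.

```python
import random
from itertools import product

# Sketch (ours) of the classic 2-query local decoder for the Hadamard code.
# The codeword of x in {0,1}^n is the table of parities <x, a> over all a.
random.seed(0)
n = 4
x = (1, 0, 1, 1)
positions = list(product([0, 1], repeat=n))
codeword = {a: sum(xi * ai for xi, ai in zip(x, a)) % 2 for a in positions}

corrupted = dict(codeword)
for a in random.sample(positions, 2):        # flip a small fraction of positions
    corrupted[a] ^= 1

def local_decode(word, i):
    # Two queries: a random a and a with coordinate i flipped; their XOR equals
    # x_i whenever neither queried position is corrupted.
    a = random.choice(positions)
    b = tuple(v ^ (1 if j == i else 0) for j, v in enumerate(a))
    return word[a] ^ word[b]

for i in range(n):
    votes = [local_decode(corrupted, i) for _ in range(25)]
    recovered = max(set(votes), key=votes.count)   # majority vote boosts success
    print(i, recovered == x[i])
```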
2006 IEEE International Symposium on Information Theory, 2006
There is a known best possible upper bound on the probability of undetected error for linear codes. The $[n,k;q]$ codes with probability of undetected error meeting the bound have support of size $k$ only. In this note, linear codes of full support ($=n$) are studied. A best possible upper bound on the probability of undetected error for such codes is given, and the codes with probability of undetected error meeting this bound are characterized.
arXiv (Cornell University), 2019
Insertion and deletion (Insdel for short) errors are synchronization errors in communication systems caused by the loss of positional information of the message. Since the work by Guruswami and Wang [13] that studied list decoding of binary codes with deletion errors only, there have been further investigations on the list decoding of insertion codes, deletion codes and insdel codes. However, unlike the classical Hamming metric or even the rank metric, there are still many unsolved problems on list decoding of insdel codes. The purpose of the current paper is to move toward complete or partial solutions for some of these problems. Our contributions mainly consist of two parts. Firstly, we analyse the list decodability of random insdel codes. We show that list decoding of random insdel codes surpasses the Singleton bound when there are more insertion errors than deletion errors and the alphabet size is sufficiently large. We also find that our results improve some previous findings in [19] and [13]. Furthermore, our results reveal the existence of an insdel code that can be list decoded against insdel errors beyond its minimum insdel distance while still having polynomial list size. This provides a more complete picture of the list decodability of insdel codes when both insertion and deletion errors happen. Secondly, we construct a family of explicit insdel codes with an efficient list decoding algorithm. As a result, we derive a Zyablov-type bound for insdel errors. Recently, after our results appeared, Guruswami et al. [7] provided a complete solution for another open problem on list decoding of insdel codes. In contrast to the problems we considered, they provided a region containing all possible insertion and deletion errors that are still list decodable by some q-ary insdel codes of non-zero rate. More specifically, for a fixed number of insertion and deletion errors, while our paper focuses on maximizing the rate of a code that is list decodable against that amount of insertion and deletion errors, Guruswami et al. focus on determining the existence of a code with asymptotically non-zero rate which is list decodable against this amount of insertion and deletion errors.
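As background (our sketch, not part of the paper), the insdel distance used throughout, the minimum number of insertions and deletions transforming one string into another, can be computed from the longest common subsequence via d(a, b) = |a| + |b| − 2·LCS(a, b).

```python
# Background sketch (ours): the insdel distance between two strings, the
# minimum number of insertions and deletions turning one into the other,
# equals |a| + |b| - 2*LCS(a, b); LCS via the standard dynamic program.
def insdel_distance(a, b):
    m, n = len(a), len(b)
    lcs = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            lcs[i + 1][j + 1] = (lcs[i][j] + 1 if a[i] == b[j]
                                 else max(lcs[i][j + 1], lcs[i + 1][j]))
    return m + n - 2 * lcs[m][n]

print(insdel_distance("0101", "0011"))   # 2: one deletion plus one insertion
```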
Certain properties of the parity-check matrix H of (n, k) linear codes are used to establish a computerised search procedure for new binary linear codes. Of the new error-correcting codes found by this procedure, two codes were capable of correcting up to two errors, three codes up to three errors, four codes up to four errors and one code up to five errors. Two meet the lower bound given by Helgert and Stinaff, and seven codes exceed it. In addition, one meets the upper bound. Of the even-Hamming-distance versions of these codes, eight meet the upper bound, and the remaining two exceed the lower bound.
IEEE Transactions on Information Theory, 2007
A list decoder generates a list of more than one codeword candidate, and decoding is erroneous if the transmitted codeword is not included in the list. This decoding strategy can be implemented in a system that employs an inner error-correcting code and an outer error-detecting code that is used to choose the correct codeword from the list. Probability-of-codeword-error analysis for a linear block code with list decoding is typically based on the "worst case" lower bound on the effective weights of codewords for list decoding evaluated from the weight enumerating function of the code. In this paper, the concepts of generalized pairwise error event and effective weight enumerating function are proposed for evaluation of the probability of codeword error of linear block codes with list decoding. Geometrical analysis shows that the effective Euclidean distances are not necessarily as low as those predicted by the lower bound. An approach to evaluate the effective weight enumerating function of a particular code with list decoding is proposed. The effective Euclidean distances for decisions in each pairwise error event are evaluated taking into consideration the actual Hamming distance relationships between codewords, which relaxes the pessimistic assumptions upon which the traditional lower bound analysis is based. Using the effective weight enumerating function, a more accurate approximation is achieved for the probability of codeword error of the code with list decoding. The proposed approach is applied to codes of practical interest, including terminated convolutional codes and turbo codes with the parallel concatenation structure.
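A sketch (ours) of the system described at the start of this abstract: an inner list decoder produces several candidate messages, and an outer error-detecting code, here a 4-bit CRC with an illustrative polynomial, selects the candidate that passes the check. All names and parameters below are hypothetical.

```python
# Sketch (ours) of list decoding with an outer error-detecting code: an inner
# list decoder emits several candidates and a CRC with the illustrative
# polynomial x^4 + x + 1 picks the candidate that passes the check.
CRC_POLY = 0b10011      # x^4 + x + 1 (assumed for illustration)

def crc4(bits):
    """Remainder of bits * x^4 modulo CRC_POLY, as a 4-bit list (MSB first)."""
    reg = 0
    for b in bits + [0, 0, 0, 0]:
        reg = (reg << 1) | b
        if reg & 0b10000:
            reg ^= CRC_POLY
    return [(reg >> i) & 1 for i in (3, 2, 1, 0)]

def attach_crc(message):
    return message + crc4(message)

def select_from_list(candidates):
    """Return the first candidate whose CRC checks, or None (detected failure)."""
    for cand in candidates:
        data, check = cand[:-4], cand[-4:]
        if crc4(data) == check:
            return cand
    return None

transmitted = attach_crc([1, 0, 1, 1, 0, 1, 0, 0])
# Hypothetical output of an inner list-of-3 decoder: two corrupted candidates
# plus the transmitted word.
candidates = [transmitted[:3] + [1 - transmitted[3]] + transmitted[4:],
              transmitted[:1] + [1 - transmitted[1]] + transmitted[2:],
              transmitted]
print(select_from_list(candidates) == transmitted)   # True
```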