Designs, Codes and Cryptography
We study the largest possible length B of (B − 1)-dimensional linear codes over F_q which can correct up to t errors taken from a restricted set A ⊆ F_q^*. Such codes can be applied to multilevel flash memories. Moreover, in the case that q = p is a prime and the errors are limited by a constant, we show that often the primitive ℓth roots of unity, where ℓ is a prime divisor of p − 1, define good such codes.
IEEE Transactions on Information Theory, 2000
A relatively new model of error correction is the limited magnitude error model. That is, it is assumed that the absolute difference between the sent and received symbols is bounded above by a certain value l. In this paper, we propose systematic codes for asymmetric limited magnitude channels that are able to correct a single error. We also show how this construction can be slightly modified to design codes that can correct a single symmetric error of limited magnitude. The designed codes achieve higher code rate than single error correcting codes previously given in the literature.
2012 Information Theory and Applications Workshop, 2012
It is shown that, for all prime powers q and all k ≥ 3, if n ≥ (k − 1)q^{k−2}(q^k − q)/(q − 1), then there exists an [n, k; q] code that is proper for error detection.
2007
Several physical effects that limit the reliability and performance of Multilevel Flash memories induce errors that have low magnitude and are dominantly asymmetric. This paper studies block codes for asymmetric limited-magnitude errors over q-ary channels. We propose code constructions for such channels when the number of errors is bounded by t. The construction uses known codes for symmetric errors over small alphabets to protect large-alphabet symbols from asymmetric limited-magnitude errors. The encoding and decoding of these codes are performed over the small alphabet whose size depends only on the maximum error magnitude and is independent of the alphabet size of the outer code. An extension of the construction is proposed to include systematic codes as a benefit to practical implementation.
2010
Several physical effects that limit the reliability and performance of Multilevel Flash Memories induce errors that have low magnitudes and are dominantly asymmetric. This paper studies block codes for asymmetric limited-magnitude errors over q-ary channels. We propose code constructions and bounds for such channels when the number of errors is bounded by t and the error magnitudes are bounded by ℓ. The constructions utilize known codes for symmetric errors, over small alphabets, to protect large-alphabet symbols from asymmetric limited-magnitude errors. The encoding and decoding of these codes are performed over the small alphabet whose size depends only on the maximum error magnitude and is independent of the alphabet size of the outer code. Moreover, the size of the codes is shown to exceed the sizes of known codes (for related error models), and asymptotic rate-optimality results are proved. Extensions of the construction are proposed to accommodate variations on the error model and to include systematic codes as a benefit to practical implementation.
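The small-alphabet reduction described in this abstract can be sketched concretely. Below is a minimal sketch, under illustrative assumptions: magnitude bound ℓ = 2, alphabet size q = 9, and a length-5 repetition code over Z_{ℓ+1} playing the role of the symmetric-error code Σ (any code correcting t symmetric errors over Z_{ℓ+1} would do). A codeword is any x whose componentwise residue x mod (ℓ+1) lies in Σ; decoding works entirely over the small alphabet:

```python
# Sketch of the asymmetric limited-magnitude construction (illustrative
# parameters: l = 2 is the magnitude bound, q = 9 the channel alphabet,
# Sigma = length-5 repetition code over Z_{l+1}, correcting t = 2 errors).
from collections import Counter

q, l, n = 9, 2, 5

def sigma_decode(psi):
    # Majority-vote decoder for the repetition code over Z_{l+1}.
    s = Counter(psi).most_common(1)[0][0]
    return [s] * len(psi)

def decode(y):
    """Correct up to t asymmetric errors of magnitude at most l."""
    psi = [yi % (l + 1) for yi in y]                      # project to Z_{l+1}
    sig = sigma_decode(psi)                                # decode there
    eps = [(p - s) % (l + 1) for p, s in zip(psi, sig)]    # error magnitudes
    return [yi - e for yi, e in zip(y, eps)]               # subtract them

# x is a codeword: x mod (l+1) = [0, 0, 0, 0, 0] is in the repetition code.
x = [3, 6, 0, 3, 6]
y = [3, 8, 0, 4, 6]   # two asymmetric errors, magnitudes 2 and 1
assert decode(y) == x
```

Note how the decoder never needs the value of q: as the abstract says, everything is performed over the small alphabet Z_{ℓ+1}, whose size depends only on the maximum error magnitude.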
IEEE Transactions on Information Theory, 2000
An error model with asymmetric errors of limited magnitude is a good model for some multilevel flash memories. This paper is about constructions of codes correcting such errors. The main results are about codes correcting a single such error and codes of length m correcting all errors in m − 1 or fewer positions.
Index Terms: Asymmetric error, error correction, flash memory, limited magnitude error.
IEEE Transactions on Information Theory, 1986
2011 IEEE International Symposium on Information Theory Proceedings, 2011
In multi-level flash memories, the dominant cell errors are asymmetric with limited magnitude. With such an error model in mind, Cassuto et al. recently developed bounds and constructions for codes correcting t asymmetric errors with magnitude no more than ℓ. However, a more refined model of these memory devices reflects the fact that typically only a small number of errors have large magnitude while the remainder are of smaller magnitude.
2022 IEEE International Symposium on Information Theory (ISIT)
Motivated by applications to DNA storage, flash memory, and magnetic recording, we study perfect burst-correcting codes for the limited-magnitude error channel. These codes are lattices that tile the integer grid with the appropriate error ball. We construct two classes of such perfect codes correcting a single burst of length 2 for (1, 0)-limited-magnitude errors, both for cyclic and non-cyclic bursts. We also present a generic construction that requires a primitive element in a finite field with specific properties. We then show that in various parameter regimes such primitive elements exist, and hence, infinitely many perfect burst-correcting codes exist.
Index Terms: Integer coding, perfect codes, burst-correcting codes, lattices, limited-magnitude errors
I. INTRODUCTION
In many communication or storage systems, errors tend to occur in close proximity to each other, rather than occurring independently of each other. If the errors are confined to an interval of positions of length b, they are referred to as a burst of length b. Note that not all the positions in the interval are necessarily erroneous. A code that can correct any single burst of length b is called a b-burst-correcting code. The design of burst-correcting codes has been researched in the error models of substitutions, deletions and insertions. Concerning substitutions, Abdel-Ghaffar et al. [1], [2] showed the existence of optimum cyclic b-burst-correcting codes for any fixed b, and Etzion [10] gave a construction for perfect binary 2-burst-correcting codes. As for deletions and insertions, it has been shown in [20] that correcting a single burst of deletions is equivalent to correcting a single burst of insertions. Codes correcting a burst of exactly b consecutive deletions, or a burst of up to b consecutive deletions, were presented in [17], [20], with the redundancy being of optimal asymptotic order.
The b-burst-correcting codes pertaining to deletions were treated in [3], called codes correcting localized deletions therein, and a class of such codes of asymptotically optimal redundancy was proposed. Similarly, permutation codes correcting a single burst of b consecutive deletions were studied in [8].
Computing Research Repository, 2006
We consider codes over the alphabet Q = {0, 1, . . . , q − 1} intended for the control of unidirectional errors of level ℓ. That is, the transmission channel is such that the received word cannot contain both a component larger than the transmitted one and a component smaller than the transmitted one. Moreover, the absolute value of the difference between a received component and the corresponding transmitted one is at most ℓ.
Discrete Mathematics, 1990
In conventional channel coding, all the information symbols of a message are regarded as equally significant, and hence codes are devised to provide equal protection for each information symbol against channel errors. However, in some circumstances, some information symbols in a message are more significant than others. As a result, it is desirable to devise codes with multi-level error-correcting capabilities. In this paper, we investigate block codes with multi-level error-correcting capabilities, which are also known as unequal error protection (UEP) codes. Several classes of UEP codes are constructed. One class of codes satisfies the Hamming bound on the number of parity-check symbols for systematic linear UEP codes and hence is optimal.
2007
A construction of codes of length n = q + 1 and minimum Hamming distance 3 over GF(q) \ {0} is given. Substitution of the derived codes into a concatenation construction yields nonlinear binary single-error-correcting codes with better parameters than previously known. In particular, new binary single-error-correcting codes having more codewords than the best previously known in the range n ≤ 512 are obtained for the lengths 64–66, 128–133, 256–262, and 512.
IEEE Transactions on Information Theory, 2005
The monotone structure of correctable and uncorrectable errors given by the complete decoding for a binary linear code is investigated. New bounds on the error-correction capability of linear codes beyond half the minimum distance are presented, both for the best codes and for arbitrary codes under some restrictions on their parameters. It is proved that some known codes of low rate are as good as the best codes in an asymptotic sense.
2010 IEEE International Symposium on Information Theory, 2010
In this paper we construct multidimensional codes of high dimension. The codes can correct high-dimensional errors that either form small clusters or are confined to an area of small radius. We also consider a small number of errors within a small area. The clusters discussed are mainly spheres such as semi-crosses and crosses. Also considered are clusters with a small number of errors, such as 2-bursts, two errors in various clusters, and three errors on a line. Our main focus is the redundancy of the codes when the most dominant parameter is the dimension of the code.
1997
A family of binary array codes of size (p − 1) × n, with p a prime, correcting multiple column erasures is proposed. The codes coincide with a subclass of shortened Reed-Solomon codes and achieve the maximum possible correcting capability. The complexity of encoding and decoding is proportional to rnp, where r is the number of correctable erasures, i.e., it is simpler than the Forney decoding algorithm.
IEEE Transactions on Information Theory, 2013
The construction of asymmetric error-correcting codes is a topic that has been studied extensively; however, the existing approach to code construction assumes that every codeword can sustain t asymmetric errors. Our main observation is that, in contrast to symmetric errors, asymmetric errors are context dependent. For example, in Z-channels the all-1 codeword is prone to more errors than the all-0 codeword. However, in data-storage applications the reliability of each codeword should be content independent: unaware of the importance of the data, any stored content should be successfully retrieved with high probability. This motivates us to develop nonuniform codes whose codewords can tolerate different numbers of asymmetric errors depending on their Hamming weights. The goal of nonuniform codes is to guarantee the reliability of every codeword while maximizing the code size. In this paper, we first study nonuniform codes for Z-channels, namely those suffering only one type of error, say 1 → 0. Specifically, we derive their upper bounds, analyze their asymptotic performance, and introduce two general constructions. Then we extend the concept and results of nonuniform codes to general binary asymmetric channels, where the error probability for each bit from 0 to 1 is smaller than that from 1 to 0.
2011 Information Theory and Applications Workshop, 2011
In this article we describe a class of error control codes called "diff-MDS" codes that are custom designed for highly resilient computer memory storage. The error scenarios of concern range from simple single bit errors, to memory chip failures and catastrophic memory module failures. Our approach to building codes for this setting relies on the concept of expurgating a parity code that is easy to decode for memory module failures so that a few additional small errors can be handled as well, thus preserving most of the decoding complexity advantages of the original code while extending its original intent. The manner in which we expurgate is carefully crafted so that the strength of the resulting code is comparable to that of a Reed-Solomon code when used for this particular setting. An instance of this class of algorithms has been incorporated in IBM's zEnterprise mainframe offering, setting a new industry standard for memory resiliency.
Signals and Communication Technology, 2017
It is well known that an (n, k, d_min) error-correcting code C, where n and k denote the code length and information length, can correct d_min − 1 erasures [15, 16], where d_min is the minimum Hamming distance of the code. However, it is not so well known that the average number of erasures correctable by most codes is significantly higher than this and almost equal to n − k. In this chapter, an expression is obtained for the probability density function (PDF) of the number of correctable erasures as a function of the weight enumerator function of the linear code. Analysis results are given of several common codes in comparison to maximum likelihood decoding performance for the binary erasure channel. Many codes including BCH codes, Goppa codes, double-circulant and self-dual codes have weight distributions that closely match the binomial distribution [13-15, 19]. It is shown for these codes that a lower bound on the number of correctable erasures is n − k − 2. The decoder error rate performance for these codes is also analysed. Results are given for rate 0.9 codes, and it is shown for code lengths of 5000 bits or longer that there is insignificant difference in performance between these codes and the theoretical optimum maximum distance separable (MDS) codes. Results for specific codes are given, including BCH codes, extended quadratic residue codes, LDPC codes designed using the progressive edge growth (PEG) technique [12], and turbo codes [1]. The erasure-correcting performance of codes and associated decoders has received renewed interest in the study of network coding as a means of providing efficient computer communication protocols [18]. Furthermore, the erasure performance of LDPC codes, in particular, has been used as a measure for predicting the code performance on the additive white Gaussian noise (AWGN) channel [6, 17].
One of the first analyses of the erasure correction performance of particular linear block codes is provided in a keynote paper by Dumer and Farrell [7] who derive the erasure correcting performance of long binary BCH codes and their dual codes. Dumer and Farrell show that these codes achieve capacity for the erasure channel.
Designs, Codes and Cryptography, 1997
This paper introduces a class of linear codes which are non-uniform error correcting, i.e. they have the capability of correcting different errors in different codewords. A technique for specifying error characteristics in terms of algebraic inequalities, rather than the traditional spheres of radius e, is used. A construction is given for deriving these codes from known linear block codes. This is accomplished by a new method called parity sectioned reduction. In this method, the parity check matrix of a uniform error correcting linear code is reduced by dropping some rows and columns and the error range inequalities are modified.
2007
We begin with the definition of Reed-Solomon codes.
Definition 1.1 (Reed-Solomon code). Let F_q be a finite field and F_q[x] denote the F_q-space of univariate polynomials with coefficients in F_q. Pick n distinct elements {α_1, α_2, . . . , α_n} of F_q (also called evaluation points) and choose n and k such that k ≤ n ≤ q. We define the encoding function of the Reed-Solomon code, RS : F_q^k → F_q^n, as follows. A message m = (m_0, m_1, . . . , m_{k−1}) with m_i ∈ F_q is mapped to the polynomial f_m(x) = m_0 + m_1 x + · · · + m_{k−1} x^{k−1} of degree at most k − 1, and its codeword is the evaluation vector RS(m) = (f_m(α_1), . . . , f_m(α_n)).
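The evaluation map in this definition can be illustrated with a minimal encoder. This is a sketch under stated assumptions: q = p = 7 is prime, so arithmetic mod p realizes F_q; the evaluation points, n = 5, and k = 3 are arbitrary example choices.

```python
# Minimal Reed-Solomon encoder over a prime field F_p (illustrative
# parameters: p = 7, n = 5 evaluation points, message length k = 3).
p = 7
alphas = [1, 2, 3, 4, 5]   # n distinct evaluation points in F_p
k = 3                      # k <= n <= q as required by the definition

def rs_encode(m):
    """Map message m in F_p^k to (f_m(alpha_1), ..., f_m(alpha_n))."""
    assert len(m) == k
    def f(x):
        # f_m(x) = m_0 + m_1 x + ... + m_{k-1} x^{k-1}, evaluated mod p
        return sum(mi * pow(x, i, p) for i, mi in enumerate(m)) % p
    return [f(a) for a in alphas]

c = rs_encode([2, 0, 1])   # message encodes the polynomial f(x) = 2 + x^2
# f(1)=3, f(2)=6, f(3)=11 mod 7=4, f(4)=18 mod 7=4, f(5)=27 mod 7=6
assert c == [3, 6, 4, 4, 6]
```

Since any two distinct polynomials of degree at most k − 1 agree on fewer than k of the n points, distinct messages yield codewords differing in at least n − k + 1 positions, which is the familiar MDS distance of Reed-Solomon codes.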